Wednesday, 12 January 2011

Lego Lab 17

Putting everything together and integration test

Date: 3-6 January 2011
Duration: 7 hours (3 January), 7.5 hours (5 January), 6 hours (6 January)
Participants: Everyone

Goals for the lab session
  • Fixing mechanical issues (3 January)
    • Gripper modification
    • Object modification
  • Integration of the communication functionality in robots 1 and 2 and the computer (3 January)
  • Test and setup of demonstration (5 January)
    • Fine-tuning parameters and software
    • Plan the final demonstration
  • Adding PC to robot 2 communication (6 January)
Gripper modification
One of the problems we found with the previous gripper structure was that it did not grip the object properly every time it was triggered. Sometimes the object slid out of the sides of the gripper because it was not held firmly by the movable part. We therefore modified the gripper structure to use longer sticks. The final gripper structure can be seen in the following picture.

Figure 1: The collector robot with the new gripper in the frontal part.

The previous construction used a gearbox to drive the gripper. The gearbox was there only because the motor structure had been reused from earlier constructions. Mechanically it was actually a poor design, since the gearbox reduced the torque at the gripper's driving gear. This happened because a gear with a large diameter was driving a gear with a smaller one.
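To see why this matters, the torque loss can be estimated from the gear ratio. The actual tooth counts of our gears were not recorded, so the values below (a 40-tooth driver and an 8-tooth follower) are assumed standard Lego gears used purely to illustrate the arithmetic:

```java
// Illustrative gear-ratio arithmetic; the tooth counts are assumptions,
// not the measured gears from our gearbox.
public class GearRatio {
    // Output torque scales with followerTeeth / driverTeeth: a large gear
    // driving a small one multiplies speed but divides torque.
    public static double outputTorque(double inputTorque,
                                      int driverTeeth, int followerTeeth) {
        return inputTorque * ((double) followerTeeth / driverTeeth);
    }

    public static void main(String[] args) {
        // A 40-tooth gear driving an 8-tooth gear keeps only 1/5 of the torque.
        System.out.println(outputTorque(1.0, 40, 8));
    }
}
```

With such a reduction, removing the gearbox and driving the gripper directly from the motor gear recovers the full motor torque.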
Since we wanted to keep the design simple and the NXT motor torque was high enough to hold the ball, we decided to remove the gearbox and attach the gripper directly to the gear mounted on the motor. The gripper was attached through the gear holes using standard black and blue Lego pins. Different views of the motor and gripper can be seen in the following pictures.

Figure 2: Top lateral view of the driving gear.

Figure 3: Side view of the driving gear.

Figure 4: Motor frontal view.

Object modification
One of the problems we detected was that the ball offered some friction against the surface, making it difficult to move. This problem arose after cutting off the lower part of the ball, as explained in previous lab reports (done to avoid the ball tilting due to uneven weight distribution). The solution to make the movement smoother was to add some transparent tape to the base of the ball. The final result can be seen in the following picture.

Figure 5: Flamingo board with tape in the base.

Integration of the communication functionality in robot 1 and robot 2
The goal was for robot 1 to find the object and communicate its position to robot 2. Since robot 1 is the active agent, we decided to let robot 2 wait at all times and let robot 1 take the initiative to establish the connection and transfer the position when the object is found.

Once robot 2 has reached the object, it needs to know where to deliver it. It then waits again until it receives the position from the remote computer.

This section describes how we have added the common communication classes to the previously developed software for robots 1 and 2. Both robots use the common classes Command, ACKCommand, FetchCommand, DeliverCommand, DataLogger and Utils, which provide common functionality as described in lab session 16. A static class BTSend for robot 1 and BTReceive for robot 2 encapsulates the Bluetooth command protocol developed for this project. Robot 1 and the computer act as masters and robot 2 as the slave in the communication.
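The exact wire format of the command classes is defined in lab session 16 and not repeated here, so the sketch below is a hypothetical minimal framing that shows the idea behind a FetchCommand-style message: a command id byte followed by the pose, color value and distance, serialized with standard Java data streams. The field order and id value are assumptions for illustration only:

```java
import java.io.*;

// Hypothetical framing for a FetchCommand-style message; the real
// Command/FetchCommand/ACKCommand classes are defined in lab session 16.
public class FetchFrame {
    static final byte FETCH = 1;

    // Encode: command id, pose (x, y, heading), color value, distance.
    public static byte[] encode(float x, float y, float heading,
                                int color, int distance) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeByte(FETCH);
        out.writeFloat(x);
        out.writeFloat(y);
        out.writeFloat(heading);
        out.writeInt(color);
        out.writeInt(distance);
        out.flush();
        return buf.toByteArray();
    }

    // Decode on the receiving side; rejects frames with the wrong id byte.
    public static float[] decode(byte[] frame) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(frame));
        if (in.readByte() != FETCH)
            throw new IOException("unexpected command id");
        return new float[] { in.readFloat(), in.readFloat(), in.readFloat(),
                             in.readInt(), in.readInt() };
    }
}
```

On the NXT the same data streams would be opened on a Bluetooth connection instead of a byte array, but the encode/decode logic stays the same.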

Robot 1 communication

The behaviour SenseIdentifyObject is changed, see lab session 13 [1]. When this behaviour is searching for the object in the colored area, the waitMoving method is called. This method calls the searchObject method listed below, which uses the ultrasonic sensor to detect whether an object has been found. If an object is found, the static method BTSend.sendPose is now called, which establishes a Bluetooth connection to robot 2 and sends the pose, color value and estimated distance to the object. If the communication succeeds, a high beep tone is played and robot 1 moves a bit backward, leaving space for robot 2 to carry the object away. Finally, robot 1 stops and waits for the user to press the enter key.

private void searchObject()
{
    int distance = us.getDistance();
    if (distance < foundThreshold)
    {
        // Stop robot until released by user
        stop();
        stopped = true;

        // Save location
        addPose(Car.getPose());

        // Send command to Robot2 to come and get the object
        boolean success = BTSend.sendPose(Car.getPose(),
                                          color_val, dist_to_obj, logger);
        if (success)
        {
            Sound.playTone(800, 2000, 50); // High tone
            // Backup giving space for robot #2 to pick up object
            backward();
            delay(500);
        }
        else
        {
            Sound.playTone(100, 2000, 50); // Low tone
        }

        // Stop and wait for object to be removed
        while (stopped)
        {
            stop();
            delay(1000);
        }
    }
}

Robot 2 communication

For robot 2 the behaviour SeqStrategy is changed, see lab session 15 [2]. Instead of hard-coding a location where the object is found, WaitForObjLocation now calls the static method WaitAndReceiveObjectLocation. This method waits indefinitely for robot 1 to create a Bluetooth connection and send the object location; see the code snippet below. The received pose of robot 1 is converted into a location from which robot 2 should be able to navigate to the infrared light of the object.

public void WaitForObjLocation()
{
    ObjectLocation objLoc = null;

    while (objLoc == null)
    {
        objLoc = BTReceive.WaitAndReceiveObjectLocation(logger);
        if (objLoc == null)
            logLine("There were errors receiving the object location");
    }

    // Convert robot #1 position to location for robot #2
    Pose robot2pose = objLoc.GetRobot2Pose();
    x_loc = Math.round(robot2pose.getX());
    y_loc = Math.round(robot2pose.getY());
    head = Math.round(robot2pose.getHeading());

    // Display robot #2 pose
    String msg = x_loc + "," + y_loc + "," + head;
    LCD.drawString(msg, 0, 7);
}
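The implementation of GetRobot2Pose is not shown in this post. A plausible conversion, consistent with robot 1 sending its pose plus an estimated distance to the object, is to project that distance along robot 1's heading; the sketch below shows this under that assumption (the real method may apply additional offsets):

```java
// Sketch of a GetRobot2Pose-style conversion: project the estimated
// object distance along robot 1's heading to get the point robot 2
// should navigate to. The exact offsets used by the real method are
// an assumption here.
public class PoseConversion {
    // Heading in degrees; distance in the same unit as x/y (cm).
    public static float[] objectLocation(float x, float y,
                                         float headingDeg, float distance) {
        double rad = Math.toRadians(headingDeg);
        float objX = x + (float) (distance * Math.cos(rad));
        float objY = y + (float) (distance * Math.sin(rad));
        return new float[] { objX, objY };
    }
}
```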

The same strategy is followed when waiting for the position where the object has to be delivered. Instead of going to position (0, 0), as was done in lab session 15 [2], the method BringObjectHome has been refactored to wait for a position from the computer. The method is very similar to WaitForObjLocation shown earlier, but it calls WaitAndReceiveHomeLocation instead of WaitAndReceiveObjectLocation.

The below code snippet shows how the BringObjectHome method is changed:

private void BringObjectHome()
{
    ObjectLocation homeLoc = null;

    while (homeLoc == null)
    {
        homeLoc = BTReceive.WaitAndReceiveHomeLocation(logger);
        if (homeLoc == null)
            logLine("There were errors receiving the home location");
    }

    // Get position for destination
    Pose pose = homeLoc.GetRobot1Pose();
    x_loc = Math.round(pose.getX());
    y_loc = Math.round(pose.getY());

    // Display robot #2 pose
    String msg = x_loc + "," + y_loc;
    LCD.drawString(msg, 0, 7);

    goTo(x_loc, y_loc, true);
    WaitMoving();
    liftGripArm(); // Releases object
    delay(2500);   // Object must be manually removed
    rotateTo(0, true);
    WaitMoving();
}

Testing and improving software for robots
On January 5th intensive testing was performed by repeatedly letting robot 1 find objects at different locations with different headings and send the position to robot 2. The test was performed with robot 1 finding the object after one turn and after three turns at the border of the arena.
We found that the turn angle at the border needed adjustment, since the drift error on the y coordinate was too big. By changing the turn angle from 170 to 140 degrees this problem was reduced: we now get more tacho readings for both x and y when moving a certain distance.
A reset function was added, making it possible to restart robot 1 (by pressing enter) after finding the object instead of recalibrating the system after each test. Some of the parameters were also adjusted, like the tooClose value (15 -> 10 cm) used to avoid objects outside the colored area; sometimes this function was activated too early by robot 1.
Robot 2 did in some cases start gripping the object too early. A filtering strategy was implemented in SeqStrategy to ensure that gripping was not triggered by a single random ultrasonic reading. The problem occurs when the robot starts oscillating while trying to get close to the light of the object. The distance parameter was adjusted from 10 to 8 cm.
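The filtering code itself is not shown in this post. One simple implementation consistent with the description is to require several consecutive readings under the threshold before gripping, so a single spurious ultrasonic reading cannot trigger the grip arm; the window size below is illustrative:

```java
// Possible filtering strategy for the ultrasonic sensor: only report the
// object as "close" after N consecutive readings fall under the threshold.
// Threshold and window size here are illustrative values.
public class ConsecutiveFilter {
    private final int threshold;   // distance threshold in cm
    private final int required;    // consecutive readings needed
    private int count = 0;

    public ConsecutiveFilter(int threshold, int required) {
        this.threshold = threshold;
        this.required = required;
    }

    // Feed one distance reading; returns true once enough consecutive
    // readings have been below the threshold. Any reading at or above
    // the threshold resets the count.
    public boolean update(int distance) {
        count = (distance < threshold) ? count + 1 : 0;
        return count >= required;
    }
}
```

In the robot's control loop this filter would be fed each new ultrasonic reading, and the gripper triggered only when update returns true.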
 
Remaining issues found during testing that have not been completely solved:

Robot #1
  • Precision and drift of the x, y coordinates add up when turning at the border (the increased turn angle could be improved further by adjusting the coordinates at the arena border)
  • Sometimes avoids the object instead of identifying it (adjusted the tooClose parameter)
  • Sometimes enters a deadlock situation when the object is found and communication starts
  • Java exception error when communicating with robot #2 while it is turned off


Robot #2
  • Sometimes grips the object too early (added filtering on object detection)
  • Doesn't return to the home position due to an error when turning with the object
  • Sometimes the motors lock when moving to the object location (could be solved with 2 different speeds – adding a faster speed when moving towards the location of the object)
  • The PC to robot 2 connection takes a long time


We have decided not to solve some of the above bugs, since they seem hard to find and will not stop us from making the final demonstration.

Link to the final code for robots 1 and 2 and for the computer program with Bluetooth communication and modifications.

Final demonstration
The following videos show how the robots operate and accomplish the initial goal of finding and collecting the objects. These final tests were made in the actual arena under real conditions.

In the first video it can be seen how the robot locates an object after one turn at the border. Once the object has been found, the collector robot is called so the object can be collected.

In the next video the finder robot can be seen locating the object after three turns. After that the collector robot is called, as shown above.

In the last video below, robot 2 waits for a command from the PC telling it where to drop off the object.
http://www.youtube.com/watch?v=7HUUlSJkv8M

Figure 6: Finder and collector robots deployed in the arena. The deployed targets can be seen as well above the red paper markers.


Conclusion
During this lab session we have improved the mechanics of the collector robot, working on the gearing and the gripping structure. This required some reprogramming of the gripper control logic. The objects have been modified to be more stable when they are deployed in the arena.
It must be remarked that these fixes were made after testing the robots and observing their behaviour in previous labs. Testing under real conditions (like the actual arena) provides us with valuable feedback to improve our robot logic and structure. We have added communication between the robots and the remote PC, and a final integration test has been performed.

Finally, the results have been recorded to illustrate the final state of the project. As can be seen in the videos, we have achieved the target functionality described in our initial goal set, introduced in [3]. The remaining issues that we have decided not to solve are listed in this lab session.

References
[1] Lego Lab 13: http://josemariakim.blogspot.com/2011/01/lego-lab-13.html
[2] Lego Lab 15: http://josemariakim.blogspot.com/2011/01/lego-lab-15.html
[3] Lego Lab 11: http://josemariakim.blogspot.com/2011/01/lego-lab-11.html
