Saturday, January 1, 2011

Lego Lab 13

R2 construction, R1 localization and communication

Date: 10 and 13 December 2010
Duration:
7 hours - 10/12/2010 (all together)
7 hours - 13/12/2010 (all together)
3 hours - 24,25/12/2010 - José Antonio Esparza - Documentation
Participants: Kim Bjerge, Maria Soler, José Antonio Esparza

Goals for the lab session
- Communication between PC and robot (Maria)
- Communication between robot 1 and 2 (Maria)
- Coordinate representation (R1 + R2 + PC) (Maria+Kim)
- Robot #1 localization (Kim)
- Robot #1 arena search algorithm (Kim)
- Robot #2 construction (Jose)

Homework for next time
Maria: Work on communication between robots and PC - Lego Lab 16
José: Finalize blocks 11, 12 and 13
Kim: Robot #2 to pick up object and carry it away - Lego Lab 14+15

Robot 2 object gripper

The magnet based object gripper

One of the approaches considered for the gripper was based on an electromagnet. The electromagnet consists of a metallic core inserted in a coil. When current flows through the coil, a magnetic field is induced; once the current flow stops, the magnetic field disappears. The idea was basically to add small magnets to the objects and build an electromagnet that could be controlled from the NXT. The electromagnet we created is shown in figure 1.
Figure 1: Electromagnet mounted in a Lego base

One of the problems we faced was how to draw enough current from the NXT in a safe way in order to excite the coil. After some research, we figured out that we had to use a small driver to interface this electrical component. One option was to use a circuit similar to the one shown in figure 2. The I/O pin would be connected to one of the NXT's digital outputs (working at TTL level). This output would saturate or cut off the NPN transistor, depending on the logic level set by the NXT, and thereby allow or interrupt the current flow through the coil. In this circuit a diode is used to protect the electronics from reverse-current effects. As can be seen in the figure, the driven element is a relay, but since a relay is essentially built around an electromagnet, the driver is equally applicable in our case.

Figure 2: Simple coil driver for interfacing a relay from a digital-electronics device. Image from [6].

Lego - brick based object gripper

The second approach was to use a purely mechanical gripper, actuated by a motor. The basic idea was to use two hooks mounted on a simple structure. The hooks should be able to be positioned at a high or a low level. While the robot is looking for the ball, the hooks are at the high level. Once the ball has been found, the robot is positioned facing the ball, which is then under the hooks. At that point the hooks are lowered so the ball is held, under robot control. The idea is very simple, but it requires a more advanced Lego construction.

Final decision - The Lego based gripper

The electromagnet gripper was original and very easy to construct. Its main drawback was that it required new hardware, which was neither tested nor supported by Lego. A second drawback was that the robot would consume power continuously while transporting the object from one place to another. The Lego based gripper was simpler to use and command, since it could be controlled from a Lego motor. The only drawback of this approach was that it required a somewhat more complex Lego construction and mechanical tuning. Considering both alternatives, we finally decided to implement the Lego based gripper; the final result of this construction can be seen in figure 3. A rear view of the robot can be seen in figure 5.

Figure 3: Frontal view of the gripper

Fine object detection (applicable to both the electromagnetic and the Lego based gripper)

An important step in the collecting operation is to detect the object properly and position the robot so it can grab it. In order to implement the fine object detection mechanism, we decided to use light sensors mounted in the frontal part of the vehicle. The light sensors should be able to detect objects in front of the robot due to the light reflection, which (in theory) is enhanced by the white colour of the balls.
This detection mechanism has been applied in the final gripper implementation (hooks), but it could have been used with the electromagnetic gripper as well.

Testing the lego based gripper

Initial program

Figure 5: Testing the ball gripping operation.

A simple test program was written to test the Robot #2 construction. The test program is an initial controller to verify the construction of Robot #2. The program uses a sequential approach as described by Fred Martin [1]. We used the leJOS SimpleNavigator [2] to move the robot to a hard-coded location that robot #1 has previously found. Robot #2 moves to the x,y coordinates and rotates to the same heading angle as recorded by robot #1. Robot #2 then stops and displays the readings from the ultrasonic sensor and the two light sensors.

The robot is then moved manually into a position from which it can grip the object. When a button is pressed, the gripper arm is moved down to hold the object while the robot carries it back to the original (0,0) position. We repeated the test a number of times and found that the light sensors and the ultrasonic sensor would be very difficult to use for moving the robot into position to grip the object. The gripper arm works fine when the robot is in the right position.

We tried different positions of the light sensors and the ultrasonic sensor. The problem is that the RCX light sensors need to be very close to the object (2-3 cm) before we get good readings. We decided to add light to the object; the best option would be to try infrared light, since the RCX light sensors are very sensitive to, for example, the flame of a lighter. In order to position the robot properly, we would use a differential homing based approach (see [7]). We reached that conclusion after some basic analysis, where we realized that making the objects active (light emitting) could definitely solve our problem. Therefore, we decided to spend more time on this in the coming iterations.
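A minimal sketch of how such a differential homing controller could look in leJOS is shown below (the sensor ports, motor assignment, base speed and gain are assumptions, not our final implementation; package names are as in leJOS NXJ 0.85):

import lejos.nxt.Button;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.addon.RCXLightSensor;

// Differential homing sketch (Jones [7], ch. 5): steer towards a light
// emitting object by comparing the two front light sensors.
public class DifferentialHoming
{
 public static void main(String[] args) throws Exception
 {
  RCXLightSensor left  = new RCXLightSensor(SensorPort.S2); // assumed ports
  RCXLightSensor right = new RCXLightSensor(SensorPort.S3);
  left.setFloodlight(false);   // passive mode: measure the light emitted by the object
  right.setFloodlight(false);

  final int baseSpeed = 150;   // assumed base speed and gain
  final int gain = 4;

  Motor.B.forward();           // motor B = left wheel, motor C = right wheel (assumed)
  Motor.C.forward();
  while (!Button.ESCAPE.isPressed())
  {
   int diff = left.getLightValue() - right.getLightValue();
   // More light on the left -> slow the left wheel so the robot turns left
   Motor.B.setSpeed(Math.max(0, baseSpeed - gain * diff));
   Motor.C.setSpeed(Math.max(0, baseSpeed + gain * diff));
   Thread.sleep(20);
  }
  Motor.B.stop();
  Motor.C.stop();
 }
}

The gain decides how aggressively the robot turns towards the stronger light and would have to be tuned on the real robot.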

The picture below shows one of the test setups we used. In this case we used a bike torch and the small Lego light bulbs (left and right respectively).
Figure 6: Test setup in order to validate the differential homing based idea


A video in which we discuss and manually perform the differential homing approach can be seen on the following site:

Video in which the hooks are gripping an object:

Robot #2 test program:


// Imports for leJOS NXJ (package names as of the 0.85 release; they may differ in other leJOS versions)
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;
import lejos.nxt.addon.RCXLightSensor;
import lejos.robotics.navigation.SimpleNavigator;
import lejos.robotics.navigation.TachoPilot;

public class Robot2Test
{
 // Wheel diameter 56 mm and track width 112 mm; wheels driven by motors B and C
 static TachoPilot pilot = new TachoPilot(56f, 112f, Motor.B, Motor.C, false);
 static SimpleNavigator robot;
public static void main(String[] args ) throws Exception
{
 boolean exit = false;
 int distance = 0;
 int light = 0;
 // Gripper arm attached to Motor A
 Motor gripper = Motor.A;
 // Ultrasonic sensor attached to input port S1
 UltrasonicSensor us =  new UltrasonicSensor(SensorPort.S1);
 // Light sensors attached on the gripper arm
 RCXLightSensor leftLight;
 RCXLightSensor rightLight;
 
 leftLight = new RCXLightSensor(SensorPort.S2);
 leftLight.setFloodlight(true);
 rightLight = new RCXLightSensor(SensorPort.S3);
 rightLight.setFloodlight(true);
 robot = new SimpleNavigator(pilot);
 
 while (!exit)
 {
  // Lift gripper arm
  gripper.rotateTo(35);
  
  // Wait for user to press ENTER
  LCD.drawString("Robot 2 Test ", 0, 0);
  Button.ENTER.waitForPressAndRelease();
  
  // Set travel speed
  robot.setMoveSpeed(250);
  robot.setTurnSpeed(100);
  robot.setPose(0, 0, 0);
  
  // Move to position with heading, Located by pose of Robot 1
  robot.goTo(278, -194, true);
  if (WaitMoving()) return;
  showCount(3);
  robot.rotateTo(-1, true);
  if (WaitMoving()) return;
  showCount(3);
  robot.stop();
     
  // Return to home position after object detected
  while (Button.ENTER.isPressed());
  LCD.drawString("Press ENTER to  ", 0, 0);
  LCD.drawString("grip object/ret.", 0, 1);
  LCD.drawString("Distance :   ", 0, 2);
  LCD.drawString("Left :   ", 0, 3);
  LCD.drawString("Right :   ", 0, 4);
  while (!Button.ENTER.isPressed())
  {
   // Displays distance to object
   distance = us.getDistance();
   LCD.drawInt(distance,4,12,2);
   light = leftLight.getLightValue();
   LCD.drawInt(light,4,12,3);
   light = rightLight.getLightValue();
   LCD.drawInt(light,4,12,4);
  }
  Button.ENTER.waitForPressAndRelease();
  
  // Lower gripper
  gripper.rotateTo(10);
  // Return to home position
  robot.goTo(0, 0, true);
  if (WaitMoving()) return;
  showCount(3);
  
  LCD.drawString("Press ENTER to  ", 0, 0);
  LCD.drawString("lift gripper ", 0, 1);
  while (!Button.ENTER.isPressed());
  
  // Lift gripper
  gripper.rotateTo(35);
  
  LCD.drawString("Press ENTER to  ", 0, 0);
  LCD.drawString("restart robot 2 ", 0, 1);
  while (!Button.ENTER.isPressed() &&
   !Button.ESCAPE.isPressed());
  
   // Lower gripper arm back to its start position
   gripper.rotateTo(0);
   if (Button.ESCAPE.isPressed())
   {
    exit = true;
   }
  }
 }

 // Helper methods (assumed, not shown in the original listing).
 // WaitMoving() blocks until the pilot has stopped moving and returns true
 // if ESCAPE was pressed so the test can be aborted.
 private static boolean WaitMoving() throws Exception
 {
  while (pilot.isMoving())
  {
   if (Button.ESCAPE.isPressed()) return true;
   Thread.sleep(10);
  }
  return false;
 }

 // showCount() shows a short countdown on the LCD between moves.
 private static void showCount(int seconds) throws Exception
 {
  for (int i = seconds; i > 0; i--)
  {
   LCD.drawInt(i, 2, 0, 7);
   Thread.sleep(1000);
  }
 }
}

Code source:

Robot 1 localization
In the previous lab session we managed to make robot 1 search for and localize the object. But we should also consider how good the pose (x,y,heading) recorded with the leJOS SimpleNavigator is. Remember that this class applies the forward kinematics model [2], and that we only change direction by turning on the spot.

It would have been interesting to implement our own version of a navigator using the theory of forward kinematics. We have, however, decided to use the SimpleNavigator. When we only change direction by turning on the spot, we found in lab session 9 [5] that it was very precise.
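For reference, the pose update that such a dead-reckoning navigator performs can be written in the standard differential-drive form (this is the textbook formulation, not the exact SimpleNavigator code):

\Delta d = \tfrac{1}{2}(\Delta d_R + \Delta d_L), \qquad \Delta\theta = (\Delta d_R - \Delta d_L)/w

x \leftarrow x + \Delta d \cos(\theta + \Delta\theta/2), \qquad y \leftarrow y + \Delta d \sin(\theta + \Delta\theta/2), \qquad \theta \leftarrow \theta + \Delta\theta

where \Delta d_L and \Delta d_R are the wheel travel distances computed from the tacho counts and the wheel diameter, and w is the track width. When turning on the spot, \Delta d_R = -\Delta d_L, so only the heading changes; this is one reason why turning on the spot keeps the x,y error relatively small.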

To find out, we let robot 1 find the object along different paths, recorded the pose at the location, and compared it with the theoretical position in an x,y,heading coordinate system. Three tests were performed, turning one, three and five times at the border, and measuring the recorded pose. See the results in the figure and table below:

Figure 7: Sketch of the robot movements that corresponds to test 2.
                     Test 1 (one turn)   Test 2 (three turns)   Test 3 (five turns)
x, y (Robot)         439, -259           402, -723              325, -1287
x, y (Measured)      382, -260           455, -740              251, -1251
heading (Robot)      -149                -152                   -157
heading (Measured)   not done            -(180 - 23) = -157     not done


The picture below shows the angle meter we used to check the heading in the three-turn case.
Figure 8: Checking heading in the second test case.

As can be seen in the table, there is an initial deviation (measured minus robot) of (-57, -1) units in test 1 (one turn). Since we are using a kinematic model already coded in the API, it is difficult to compensate for the incremental error the robot has to cope with. As a result, the final deviation detected after five turns is (-74, 36). This deviation is considerably bigger than the initial one.
There are several ways to approach this problem:
  • Solve it by implementing a new kinematic model that takes the current mechanics into account.
  • Solve it by creating a better mechanical structure: bearings, good wheels and a perfect surface.
  • Compensate for the drift of the x,y coordinates every time the robot turns at the borders, since we know the x or y position of the borders of the arena. We could correct the x or y coordinate accordingly every time the robot turns (see the sketch after this list).
  • Mitigate it by using readable IR-coded beacons deployed in the arena (see [7], chapter 5). Those beacons would transmit precise coordinates of their location, so the robot would be able to recalculate its position by applying an offset to its current internal coordinates.
  • Mitigate it by treating those coordinates as approximate in the robot 2 controller. Robot 2 would go to the signalled position and start applying a detection algorithm there with its ultrasonic sensor.
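A minimal sketch of the border-compensation idea from the third bullet (which border was hit would in practice be deduced from the current heading; the arena limits below are hypothetical values):

import lejos.robotics.navigation.SimpleNavigator;

// Drift compensation sketch: when robot #1 turns at a border, the coordinate
// perpendicular to that border is known, so the pose can be snapped to it.
public class BorderCorrection
{
 // Hypothetical arena limits in the navigator's units
 private static final float X_MIN = 0f,     X_MAX = 1200f;
 private static final float Y_MIN = -1500f, Y_MAX = 0f;

 // border: 'L', 'R', 'T' or 'B' for left, right, top or bottom
 public static void correctAtBorder(SimpleNavigator nav, char border)
 {
  switch (border)
  {
   case 'L': nav.setPose(X_MIN, nav.getY(), nav.getAngle()); break;
   case 'R': nav.setPose(X_MAX, nav.getY(), nav.getAngle()); break;
   case 'B': nav.setPose(nav.getX(), Y_MIN, nav.getAngle()); break;
   case 'T': nav.setPose(nav.getX(), Y_MAX, nav.getAngle()); break;
  }
 }
}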

Robot 1 arena search algorithm
The subsumption-based algorithm that was developed in the last lab session for robot #1 needs some improvements. One problem is the stability of the color sensor readings and finding the object in the colored marked area. The other issue to be improved is the robot getting stuck in corners. These problems are addressed in this section.

Robot 1 Calibration

It turned out that the detection of the colored marked area under the object was not very stable. Calibrating the color sensor improved the detection of the colored area. Besides this change, we added calibration for the area to detect. The following steps are performed when starting robot #1 (a sketch of the calibration sequence follows the list):

1. Calibrate white color
2. Calibrate black color
3. Calibrate marked colored area for object identification
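A minimal sketch of how this start-up calibration could look (the port and the use of the LightSensor calibrate methods are assumptions; step 3 simply records the color number of the marked area, as used by the search behavior):

import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;

// Hypothetical start-up calibration sequence for robot #1.
public class Robot1Calibration
{
 public static void main(String[] args) throws Exception
 {
  LightSensor borderSensor = new LightSensor(SensorPort.S1); // assumed port

  LCD.drawString("Place on WHITE ", 0, 0);
  Button.ENTER.waitForPressAndRelease();
  borderSensor.calibrateHigh();              // step 1: white reference

  LCD.drawString("Place on BLACK ", 0, 0);
  Button.ENTER.waitForPressAndRelease();
  borderSensor.calibrateLow();               // step 2: black reference

  LCD.drawString("Place on AREA  ", 0, 0);
  Button.ENTER.waitForPressAndRelease();
  // step 3: record the color number of the marked area here with the color
  // sensor (cs.getColorNumber(), as in the search behavior) and store it as
  // the color_val used when searching.
 }
}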

Robot 1 Search object in colored area

When robot #1 detects the colored area, the behavior SenseIdentifyObject takes over; this behavior has been improved. The robot now moves from side to side, searching for an object. The problem was that sometimes the robot passed very close to the object, or even hit it, without noticing, because the ultrasonic sensor only detects objects directly in front of the robot and not to the side. The behavior SenseIdentifyObject has therefore been changed to search for an object by moving the robot from side to side over an angle of +/- 30 degrees at slow speed in the colored area; see the code listing below:



// Method that detects an object, saves the pose (x,y,heading), plays a tone and waits 30 sec.
private void searchObject()
{
    int distance = us.getDistance();
    if (distance < foundThreshold)
    {
        stop();
        // Save location
        addPose(Car.getPose());
        Sound.playTone(800, 2000, 50);
        delay(30000);
    }
}

public void run()
{
    delay(500);
    while (true)
    {
        delay(5);

        if (cs.getColorNumber() == color_val)
        {
            // When color area is detected robot
            // looks for object within distance threshold.
            // If object is detected the location is recorded.
            suppress();
            stop();
            searchObject();

            // Set slow speed and look for object
            // in an angle of +/- 30 degrees
            Car.setSpeed(75);
            rotate(30, true);
            drawString("l");
            waitMoving();
            rotate(-60, true);
            drawString("r");
            waitMoving();
            rotate(30, true);
            drawString("l");
            waitMoving();
            Car.setDefaultSpeed();

            release();

            drawString(" ");
            // Search while moving a bit for a while
            for (int i = 0; i < 100; i++)
            {
                delay(5);
                searchObject();
            }
        }
    }
}

Robot 1 getting stuck in corners of the arena


The TurnAtBoarder behaviour has been improved by counting the time between turns. Normally the turn direction alternates from one border detection to the next, but with only that rule the robot could get stuck in a corner of the arena, so the method has been changed: if the robot discovers a border twice in a row (determined by measuring the time since the last turn), the turn direction is left unchanged and the turn angle is set to 90 degrees.

See the new behaviour of the TurnAtBoarder class below:



public class TurnAtBoarder extends Behavior {

    private final int lightThreshold = 40; // Black ~34
    private final int angleDefault = 170;  // Turn angle in search for object
    private boolean rotateLeft = false;
    private int rotateAngle = angleDefault;
    private LightSensor ls;

    public TurnAtBoarder(String name, int LCDrow,
                         Behavior subsumedBehavior, LightSensor lightSensor)
    {
        super(name, LCDrow, subsumedBehavior);
        this.ls = lightSensor;
    }

    public void run() {
        int countBoarderSeen = 0;  // Counts how many times the border was seen in a row
        int timeSinceLastTurn = 0; // Counts the time between turns

        while (true) {

            while (ls.readValue() > lightThreshold) {

                delay(5);
                if (timeSinceLastTurn++ > 200)
                {
                    countBoarderSeen = 0; // Clear border-seen counter (~0.5 sec.)
                }
            }
            timeSinceLastTurn = 0;

            // When robot detects the border area marker
            // it turns to search for the object in a new direction.
            // Turn either +/- 170 degrees
            suppress();

            if (++countBoarderSeen < 2)
            {
                // Default turn behavior
                rotateLeft = !rotateLeft;
                rotateAngle = angleDefault;
            }
            else
            {
                // If robot got stuck in a corner only turn 90 degrees in same direction
                rotateAngle = 90;
            }

            if (rotateLeft)
                rotate(-rotateAngle);
            else
                rotate(rotateAngle);

            delay(100);
            stop();
            release();
        }
    }
}

The source code for the improved robot 1, with search in the colored area and turning at the border, can be found at the link below:

Coordinate representation
In the paper "Integration of Representation Into Goal-Driven Behavior-Based Robots" by Maja J. Mataric [4], page 308, she describes how a landmark descriptor is composed when building a graph of landmark nodes. We have been inspired to create a similar landmark descriptor for the objects that robot #1 finds during its search. Later we could extend the robot to build a similar graph, making it able to create a map of the objects found in the arena.

Every time robot #1 identifies and detects an object, the pose is stored in memory. The pose consists of a coordinate (x,y) and a heading. The pose, the radius (distance to the object), the object identification and the number of times the object has been detected are transmitted to robot #2. The object identification is the color value of the object. The radius is used to determine the exact position of the found object; it is an estimate based on the physical dimensions of the robot and the object and the distance to the object measured with the ultrasonic sensor (a Java sketch of the descriptor follows the field list below).

Coarse position: x - real, y - real
Heading angle:  h - real
Object identification: color value (0-14) - int
Radius: r - int - radius from center of robot to center of object
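A minimal sketch of how this descriptor could be represented in Java (the class and field names are our own):

// Descriptor for an object found by robot #1, inspired by Mataric's
// landmark descriptors [4].
public class ObjectDescriptor
{
    public float x;          // coarse position
    public float y;
    public float heading;    // heading of robot #1 when the object was detected
    public int   colorValue; // object identification, color value 0-14
    public int   radius;     // distance from robot centre to object centre
    public int   timesSeen;  // number of times the object has been detected

    public ObjectDescriptor(float x, float y, float heading,
                            int colorValue, int radius, int timesSeen)
    {
        this.x = x;
        this.y = y;
        this.heading = heading;
        this.colorValue = colorValue;
        this.radius = radius;
        this.timesSeen = timesSeen;
    }
}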

Communication protocol
The protocol for communication in this system is string based, to keep it human readable. Performance and bandwidth are not an issue in this project, so it is not necessary to go for a more optimized protocol.

The general protocol will be as follows:
CMD ARGS[] <EOL>
The available commands are:
  • FETCH - transmits the position of an object to fetch
  • DELIVER - transmits the position where the found object needs to be delivered (if communication between PC and robot is implemented, otherwise a hardcoded position will be used)
Each command should be acknowledged with an “ACK<EOL>”.

FETCH command

The FETCH command is sent by robot1 to robot2 to give the position of the object that needs to be fetched.

It has the following parameters, which are separated by spaces:
  • ID - Object identification: color value (0-14) - int (decimal)
  • X & Y - Coarse position: x - real, y - real
  • H - Heading :  h - real
  • R - Radius: r - real - distance from the center of the robot to the center of the object
Example:

FETCH ID=3 X=25.46 Y=2.6 H=6.75 R=3.5<EOL>
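A minimal sketch of how the FETCH string could be built on robot #1 and an argument read back on robot #2 (plain string handling; the helper names are our own, and the received command is assumed to have had the end-of-line character stripped):

// Hypothetical helpers for composing and parsing the FETCH command.
public class FetchProtocol
{
    // Build the FETCH command on robot #1
    public static String buildFetch(int id, float x, float y, float h, float r)
    {
        return "FETCH ID=" + id + " X=" + x + " Y=" + y + " H=" + h + " R=" + r;
    }

    // Read one argument on robot #2, e.g. getArg(cmd, "X") -> 25.46f.
    // Returns Float.NaN if the argument is missing.
    public static float getArg(String cmd, String name)
    {
        int pos = cmd.indexOf(name + "=");
        if (pos < 0) return Float.NaN;
        int start = pos + name.length() + 1;
        int end = start;
        while (end < cmd.length() && cmd.charAt(end) != ' ') end++;
        return Float.parseFloat(cmd.substring(start, end));
    }
}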

DELIVER command

The DELIVER command is sent by the computer to robot2 to give the position where the object needs to be delivered.

It has the following parameters, which are separated by spaces:
  • X - Coarse position: x - real
  • Y - Coarse position: y - real

Example:

DELIVER X=0.0 Y=-2000.0<EOL>

Communication between PC and robot
In this lab session we had no luck pairing any of the robots with the PC. We tried the example programs from leJOS, and also the small nxjbrowse program. We spent more than a couple of hours on it without any success, so we decided to move forward.

Since the communication between PC and robot is not the most essential part, we have decided to leave it for later, if there is time for it. The purpose of this communication was to tell Robot2 where it should place the object after fetching it. For now, a hard-coded location will be used.

Communication between robots
The communication between the robots only happens when robot1 finds the object and has to transmit the location to robot2. Robot2 will have Bluetooth in listening mode while it is waiting, but will not be able to receive while it is working (fetching the object). Robot1 will start the communication with Robot2 when it has found the object, and it will not do anything else until the communication is done.

The purpose for this lab session is to test basic communication. We used a simple program from the leJOS examples that tests the communication [8]. That works fine, so we can use it as a base for our specific purpose in the next lab session; a sketch of how it could carry the FETCH command is shown below.
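A minimal sketch, modelled on the BTSend/BTReceive structure from the leJOS tutorial [8], of how robot #1 could push a command string to robot #2 (the device name and the byte-wise framing are our own assumptions):

import java.io.DataOutputStream;
import javax.bluetooth.RemoteDevice;
import lejos.nxt.comm.BTConnection;
import lejos.nxt.comm.Bluetooth;

// Robot #1 side: connect to the paired robot #2 and send one command line.
public class FetchSender
{
 public static void main(String[] args) throws Exception
 {
  RemoteDevice robot2 = Bluetooth.getKnownDevice("ROBOT2"); // assumed device name
  BTConnection conn = Bluetooth.connect(robot2);
  DataOutputStream dos = conn.openDataOutputStream();

  String cmd = "FETCH ID=3 X=25.46 Y=2.6 H=6.75 R=3.5\n";
  for (int i = 0; i < cmd.length(); i++)
   dos.writeByte(cmd.charAt(i));             // send the command byte by byte
  dos.flush();

  dos.close();
  conn.close();
 }
}

// Robot #2 side (separate program): wait for the connection with
// Bluetooth.waitForConnection(), open a DataInputStream, read bytes until
// '\n', and answer with "ACK\n" on the corresponding output stream.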

Conclusion
The software for robot 1 has now been improved to search for the object in the colored area and to avoid getting stuck in the corners of the search arena. We now have a basic construction for robot 2, and we have found that it requires more work. We need to investigate further how robot 2 can navigate close enough to the object to be able to grip it and move it away.

The communication protocol over Bluetooth is defined and now needs to be implemented. If we have time, we should investigate the Bluetooth communication with the PC further.
The basic communication test between the robots works; now it needs refinement.

References
[4] Maja J. Mataric, "Integration of Representation Into Goal-Driven Behavior-Based Robots", IEEE Transactions on Robotics and Automation, 8(3), June 1992, pp. 304-312.
[5] Lego Lab lesson 9: Navigation.
[6] 5 volt relay circuit for controlling AC current, "The wiring diagram website".
[7] Jones, Joseph L., "Robot Programming: A Practical Guide to Behavior-Based Robotics".
[8] leJOS Bluetooth tutorial: http://lejos.sourceforge.net/nxt/nxj/tutorial/Communications/Communications.htm
