Friday, 21 January 2011

Who we are and why we are doing this

After receiving an email asking for references about the course we are following, we realized that we had never mentioned it, nor introduced ourselves. So here it goes!


We are three students taking a Master of Science in Technical IT at Aarhus University.
We are following the course "Embedded systems - Embodied agents, digital control in a physical world", and we are required to write a blog describing each of our lab sessions, especially during the final project phase. The posts are used as our project report and contain reflections, tests, conclusions, pictures, code, videos...


And another robotics blog (http://orangerobots.com/) already has a post about us.


We hope you enjoy our blog!


Regards,


José, Maria & Kim

Wednesday, 19 January 2011

Lego Lab 18

Project conclusions

Date: 13 January 2011
Duration: 5 hours
Participants: Everyone

Goals for the lab session
  • Discuss the learning outcome of the project.
  • Discuss how we have covered the course contents and fulfilled the learning objectives in the project.
  • Discuss the project presentation.
  • Comment on further work and possible expansions and improvements to the current system.


Further work
In this section we have summarized the thoughts we discussed after finishing the project. Some of these ideas could easily fill an entire new 7-week project, but they are definitely worth highlighting. We also believe that our ideas for further work are tightly connected with the theory covered during the first part of the lectures, which makes them interesting input for future editions of the course.
  • Improved simple navigation applying customized odometry or techniques like dead reckoning. We have seen that errors were introduced while navigating and that they had a relevant impact on the overall robot performance (especially for the finder robot). We believe that it is possible to create a simple and accurate positioning method using odometry and the readings provided by the tacho counters. An alternative approach would be to use inertial navigation (see [5]) like dead reckoning. This technique consists of determining the position from an initial coordinate by computing the distance traveled using speed estimates over elapsed time and course. We found how to implement dead reckoning for differential-drive robots in [1]. Expression 1 shows how the position increment can be computed in terms of the wheel radius (R_w), the heading (theta), the encoder ticks recorded for each motor (T_1 and T_2) and the total number of ticks (T_r).
Expression 1: Dead reckoning expressions for a differential drive based robot. Source [1]
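Since the original expression image is not reproduced here, below is a reconstruction of the standard dead-reckoning update for a differential-drive robot in the spirit of [1]. We assume T_r denotes the encoder ticks per wheel revolution and b the wheelbase (the latter is not listed in the original caption):

\Delta d_i = \frac{2\pi R_w T_i}{T_r}, \quad i = 1, 2

\Delta\theta = \frac{\Delta d_2 - \Delta d_1}{b}, \qquad
\Delta x = \frac{\Delta d_1 + \Delta d_2}{2}\cos\theta, \qquad
\Delta y = \frac{\Delta d_1 + \Delta d_2}{2}\sin\theta

Here \Delta d_1 and \Delta d_2 are the distances traveled by the two wheels, and the new pose is obtained by adding the increments to the previous (x, y, \theta).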
  • Coded IR beacons to identify different objects. A possible approach to identifying the objects is to emit a unique signal through the IR diodes attached to them. This would require a microcontroller responsible for encoding the data and modulating it at a higher frequency (making the transmission more robust against external IR signals). The robots should be equipped with an IR transducer and additional logic to decode the signals. Whether this logic should be implemented in the NXT software or in a separate microcontroller attached to the robot would have to be considered. Figure 1 shows two different IR beacons; a sketch of the decoding logic follows the figure.

Figure 1: Simple coded IR beacons using 38kHz modulated pulses. Image source [2] and [3].
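To illustrate the decoding logic that the NXT (or a helper microcontroller) would need, here is a minimal hypothetical Java sketch. The pulse-duration encoding, the thresholds and the 8-bit ID format are our own assumptions and are not taken from [2] or [3]:

// Hypothetical decoder sketch: the demodulated output of a 38 kHz IR
// receiver is assumed to arrive as a sequence of burst durations in
// microseconds. A short burst encodes a 0-bit, a long burst a 1-bit,
// and eight bits form the beacon ID.
public class BeaconDecoder
{
    private static final int SHORT_MAX_US = 800;  // assumed threshold
    private static final int LONG_MAX_US = 2000;  // assumed threshold

    // Returns the decoded 8-bit beacon ID, or -1 on a framing error.
    public static int decode(int[] burstDurationsUs)
    {
        if (burstDurationsUs.length != 8)
            return -1; // expected exactly one 8-bit frame
        int id = 0;
        for (int i = 0; i < 8; i++)
        {
            int d = burstDurationsUs[i];
            if (d > LONG_MAX_US)
                return -1; // burst too long: treat as noise
            id = id << 1;
            if (d > SHORT_MAX_US)
                id = id | 1; // long burst decodes to a 1-bit
        }
        return id;
    }
}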
  • Improved object signaling by adding several IR diodes to the sphere. At this point we have embedded one IR unit in each object (as shown in figure 2). This is a good solution as long as the robot is facing the diode while approaching. But what happens if the collector robot approaches the object from behind? Since the IR beam is directed towards the other side of the object, it will be impossible to locate the object properly. The solution to this problem is to create a ring of IR diodes and place it around the object's body, as shown in figure 2. Covering the different angles with separate diodes will allow the collector robot to approach the target from any direction.

Figure 2: Sketch showing the current and optimal arrangement of the IR diodes.
  • Improved object gripper for the collector robot. The currently mounted gripper is purely mechanical. It works properly most of the time, failing when the object is gripped at the extreme side positions. Improvements in the mechanics would avoid these problems. Different alternatives, like electromagnet-based grippers (see Lego lab report 15), could be used to solve the problem if small magnets are attached to the object. In that case the collector robot would just need to approach the object and sweep the area with the gripper; eventually the object would be attracted to the arm. This would even remove the need for fine-grained positioning techniques.
  • More reliable, easily deployable and expandable communication interface. The communication technology used in this project is Bluetooth, chosen because the NXT brick already integrates a Bluetooth modem. We think that, even though it is a widespread technology, it is not the best communication interface for our robotic system. We found that it is not easy to get the Bluetooth connection up and running, and we had to spend a lot of time pairing and configuring the devices. There is also a limitation on the number of devices, since a Bluetooth piconet can contain at most 8 devices. An advantage of Bluetooth is that its TDM scheme makes access to the medium predictable, which could be useful in real-time applications, but that is not required in this project. As an alternative we have considered simple 802.15.4 modems that could act as a transparent serial line between the robots. This technology was successfully used in one of the previous projects for the Lego lab course, see [7]. A cheap modem that could have been applied in this project is the Series 1 XBee modem from Digi, shown in figure 3.
Figure 3: XBee modem with a serial interface implementing 802.15.4. Image source [6].
  • Use of calibrated light sensors. One of the problems we detected was that the light sensors presented different and variable sensitivities (see the lab report for session 14). A key improvement would be to substitute those sensors with more stable ones that return the same reading given the same light intensity and conditions (ambient light, orientation, etc.). A software alternative is to calibrate each sensor against a common range, as sketched below.
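A minimal sketch of such a software calibration on the NXT, using the calibrateLow/calibrateHigh support that leJOS provides on LightSensor; the sensor port and the dark/bright reference procedure are our assumptions:

import lejos.nxt.Button;
import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;

// Sketch: calibrate a light sensor against common dark and bright
// references so that two physically different sensors return
// comparable percentage readings.
public class CalibrateLight
{
    public static void main(String[] args)
    {
        LightSensor ls = new LightSensor(SensorPort.S1);

        // Point the sensor at the darkest reference, then press Enter
        Button.ENTER.waitForPressAndRelease();
        ls.calibrateLow();

        // Point the sensor at the brightest reference, then press Enter
        Button.ENTER.waitForPressAndRelease();
        ls.calibrateHigh();

        // readValue() now returns 0-100% relative to the calibrated
        // range, comparable across sensors
        while (!Button.ESCAPE.isDown())
            System.out.println(ls.readValue());
    }
}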
  • Positioning error correction. IR, light or colour beacons with a small spot could be deployed in the arena (see figure 4). These spots would be reference points with a unique ID number and a position known to the robots deployed in the arena. Once a spot is detected, the robot can look up the spot's known coordinates in memory (see figure 5 and the sketch below it) and readjust its position. This technique would help cope with the accumulated navigation error.

Figure 4: Spot positioning deployment.

Figure 5: Coordinates look-up table.
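As a sketch of how the look-up table in figure 5 could be realized on the NXT; the beacon IDs and coordinates below are made-up examples, and the correction simply snaps the estimated position to the spot's known coordinates:

import lejos.robotics.navigation.Pose;

// Sketch: known beacon spots deployed in the arena, indexed by ID.
// The coordinates are made-up examples in centimeters.
public class SpotCorrector
{
    private static final float[][] SPOTS = {
        { 20f, 20f },  // spot ID 0
        { 20f, 100f }, // spot ID 1
        { 100f, 60f }, // spot ID 2
    };

    // When a spot is detected, discard the accumulated drift by
    // snapping the estimated position to the spot's known coordinates,
    // keeping the current heading estimate.
    public static Pose correct(int spotId, Pose estimated)
    {
        if (spotId < 0 || spotId >= SPOTS.length)
            return estimated; // unknown ID: keep the odometry estimate
        return new Pose(SPOTS[spotId][0], SPOTS[spotId][1],
                estimated.getHeading());
    }
}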
  • Consider cooperation at task level. At this point the system implements cooperation at goal level, the goal being to collect the objects. The robots cooperate to achieve this goal by carrying out one task each: one finds and the other collects. A possible expansion of this project would be to involve several robots in the task of finding or collecting.
  • Energy-efficient robots. At this point the robots have been programmed to accomplish the goal without considering any kind of energy optimization. Both the finder and the collector robot could be programmed with energy saving in mind. Some ideas along this line (see the sketch after this list) would be:
    • Deactivate unused sensors: turn off the light sensor and the ultrasonic sensor when the robot is returning with the object.
    • Detect that the battery level is low so the robot can return to a charging or off point instead of dying in the middle of the arena.
    • Slow the motor speed when low battery levels are detected. This dynamic configuration increases the amount of time the robot can operate.
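A minimal sketch of the battery-aware idea, using the leJOS Battery class; the voltage thresholds and the halved speed are assumptions:

import lejos.nxt.Battery;

// Sketch: scale the motor speed down when the battery runs low and
// signal when the robot should head for a charging or off point,
// so it does not die in the middle of the arena.
public class PowerManager
{
    private static final int LOW_MV = 6500;      // assumed threshold (mV)
    private static final int CRITICAL_MV = 6000; // assumed threshold (mV)

    public static int adjustSpeed(int normalSpeed)
    {
        if (Battery.getVoltageMilliVolt() < LOW_MV)
            return normalSpeed / 2; // slow down to extend operating time
        return normalSpeed;
    }

    public static boolean shouldReturnHome()
    {
        return Battery.getVoltageMilliVolt() < CRITICAL_MV;
    }
}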

Conclusions

We have demonstrated how autonomous embodied agents [9] can be designed and constructed as robots that cooperate in solving a specified task. We have succeeded in realizing the selected project idea of a finder (robot 1) and a collector (robot 2) that collaborate to identify objects in a marked arena and carry them to a location specified by a remote computer.

The robot design and implementation covers many subtopics from the course: sensors, actuators, localization, navigation, communication, PID control, subsumption architectures, and sequential and reactive control strategies. We have learned that a reactive strategy composed of a number of behaviours in a subsumption architecture [10] is good when a plan for achieving a certain goal cannot easily be made in advance. This is the case for robot 1 when finding the object inside the arena. The sequential strategy, as described by Fred Martin [8], is good when a plan is given, as for the collector robot 2: it moves to the location of the identified object communicated by robot 1 and carries the object to a location communicated by a remote computer.

We are especially proud of:
  • The way we solved how robot 2 could approach the object and grip it. This problem took us two lab sessions to solve. A solution was found when we made an active object and realized that the robot followed a flame much better than a normal light source. That led us to create the IR beacons and to use light sensors and a PID regulator to get close to the object.
  • The variety of topics from the course curriculum that we have covered in the project, and that we succeeded in completing the goal of our assignment within the time frame of the project.

What could be improved:
  • Fixing the bugs and issues listed for robots 1 and 2 in the last lab session, making the solution more stable and reliable
  • Handling dynamic correction of position drift, especially for robot 1

Since our robots are deployed in a physical environment with the purpose of interacting with it, the way the agents do this has been a key part of our work. This idea was part of the learning objectives of the course and has been thoroughly exercised in this project. We have elaborated on this in previous lab reports; some examples are how the readings coming from the sensors have been filtered and evaluated, how the environmental conditions have been compensated for to achieve good performance, and how the gripper interface has been adjusted depending on the fixed position.

Most of the concepts introduced in the course lectures have been applied in this project. We have paid special attention to the control strategies, thinking carefully about whether sequential or reactive strategies should be used. We have also applied the PID regulator concept in one of the robots in order to approach an object with fine-grained precision. Additional concepts like communication and navigation have been used as well. We are particularly satisfied because this project, even though it might look simple, has served to exercise the topics we have learned in this course.

We have learned that changes to the mechanics require changes to the software that controls them. The changes can range from small repositionings of sensors or gearing modifications to more complex structural changes.

Every software modification, no matter how small, requires a new test in the arena. The idea is similar to the testing principles applied in ordinary software construction. The difference is that in the robotics context a new test implies a physical test setup and more time to evaluate whether the modifications have been successful. An additional difficulty in evaluating changes and tracing errors is that there is no way to debug code deployed on the target while it is running. This is straightforward in ordinary programming circumstances, but here additional software and hardware would be required (plus modifications at the electronics level of the NXT to support debugging interfaces like JTAG). Even though the idea is technically possible, it is not feasible in this course.

We have found several limitations in the NXT platform, for example the limited number of sensors that can be connected without using sensor multiplexers. With a more flexible platform, a more precise robot could have been developed. A similar problem appears with the number of actuators, which is limited to three. All in all, we have been able to create a system composed of two agents that fulfils the initial goal.

We have realized that experiments have been especially relevant when testing the robots and their performance. Carrying out measurements, designing the experiments with the environment and the state of the device in mind, and analyzing the data are therefore valuable procedures. In a more complex project, statistics of the robot performance and the success rate depending on the object's position would have been an interesting analysis.

Reflecting upon the course and putting the project in perspective, we believe that this course can act as a meeting point for different fields and disciplines. We realized that ideas and techniques used in this project come from different courses we have followed so far. For example, embedded software architectures like state machines were studied in the course Embedded Real-Time Systems. Considerations about positioning techniques were taken from the course Pervasive Positioning. Ideas about robot control came from Artificial Intelligence courses, and reflections about the communication interfaces available for the robots were made using the knowledge we got from the Wireless Communication course. As a general conclusion, robotics is a multidisciplinary area in which knowledge from different fields can be combined.
References

[1] Dead reckoning article on Wikipedia. http://en.wikipedia.org/wiki/Dead_reckoning
[2] IR transmitters / virtual wall. http://sites.google.com/site/irobotcreate2/createanirbeacon
[3] Mini IR beacon. http://letsmakerobots.com/node/6737
[4] How to make IR beacons with unique IDs. http://diydrones.com/profiles/blog/show?id=705844%3ABlogPost%3A39610&commentId=705844%3AComment%3A56644
[5] Inertial navigation systems. http://en.wikipedia.org/wiki/Inertial_navigation_system
[6] XBee modems from Digi. http://www.digi.com/products/wireless/point-multipoint/xbee-series1-module.jsp#overview
[7] Video showing ZigBee devices applied in a flock of robots. Project in the Embedded agents and digital control in a physical world course, January 2010. http://il.youtube.com/watch?v=gbet__yAOP8&feature=related
[8] Fred G. Martin. Robotic Explorations: A Hands-On Introduction to Engineering. Prentice Hall, 2001.
[9] Embodied Agents (Wikipedia).
[10] Subsumption architectures (Wikipedia).

Wednesday, 12 January 2011

Lego Lab 17

Putting everything together and integration test

Date: 3 - 6 January 2011
Duration: 7 hours (3 January), 7.5 hours (5 January), 6 hours (6 January)
Participants: Everyone

Goals for the lab session
  • Fixing mechanical issues (3 - Jan.)
    • Gripper modification
    • Object modification
  • Integration of the communication functionality in robot 1 and 2 and computer (3 - Jan.)
  • Test and setup of demonstration (5 - Jan.)
    • Fine tuning parameters and software
    • Plan final demonstration
  • Adding PC to robot 2 communication (6 - Jan.)
Gripper modification
One of the problems we found with the previous gripper structure was that it did not grip the object properly every time it was triggered. Sometimes the object slid out at the sides of the gripper because it was not firmly held by the movable part. We therefore modified the gripper structure to use longer sticks. The final gripper structure can be seen in the following picture.

Figure 1: The collector robot with the new gripper in the frontal part.

The previous construction used a gearbox to drive the gripper. That gearbox was present because the motor structure was reused from previous constructions. It was actually a poor design from a mechanical point of view, since the gearbox decreased the torque at the gripper's driving gear: a large-diameter gear was driving a smaller one, trading torque for speed.
Since we wanted to keep the design simple and the NXT motor torque was high enough to hold the ball, we decided to remove the gearbox and attach the gripper directly to the gear mounted on the motor. The gripper was attached through the gear holes using standard black and blue Lego pins. The following pictures show different views of the motor and gripper.

Figure 2: Top lateral view of the driving gear.

Figure 3: Side view of the driving gear.

Figure 4: Motor frontal view.

Object modification
One of the problems we detected was that the ball offered some resistance and was difficult to move. This problem arose after cutting off the lower part of the ball, as explained in previous lab reports (to avoid the ball tilting due to uneven weight distribution). The solution has been to add some transparent tape to the base of the ball to make the movement smoother. The final result can be seen in the following picture.

Figure 5: Styrofoam ball with transparent tape on the base.

Integration of the communication functionality in robot 1 and robot 2
The goal is for robot 1 to find the object and communicate its position to robot 2. Since robot 1 is the active agent, we have decided to let robot 2 wait for a connection at all times and let robot 1 take the initiative to establish the connection and transfer the position when the object is found.

Once robot 2 has reached the object, it needs to know where to deliver it. It then waits again until it gets the position from the remote computer.

This section describes how we have added the common communication classes to the previously developed software for robots 1 and 2. Both robots use the common classes Command, ACKCommand, FetchCommand, DeliverCommand, DataLogger and Utils, which provide common functionality as described in lab session 16. A static class BTSend for robot 1 and BTReceive for robot 2 encapsulates the Bluetooth command protocol developed for this project. Robot 1 and the computer act as masters and robot 2 as the slave in the communication.
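As an illustration of the master side of this protocol, here is a simplified sketch of what a send method could look like on top of the leJOS Bluetooth API. The friendly name "Robot2" and the plain data layout are assumptions; our actual BTSend additionally wraps the values in a FetchCommand and waits for an ACKCommand:

import java.io.DataOutputStream;
import java.io.IOException;
import lejos.nxt.comm.BTConnection;
import lejos.nxt.comm.Bluetooth;
import lejos.nxt.comm.NXTConnection;
import lejos.robotics.navigation.Pose;

// Sketch of the master side: connect to robot 2, send the pose, the
// colour value and the distance to the object, and close the link.
public class BTSendSketch
{
    public static boolean sendPose(Pose p, int colorVal, int distToObj)
    {
        BTConnection btc = Bluetooth.connect("Robot2", NXTConnection.PACKET);
        if (btc == null)
            return false; // connection could not be established
        try
        {
            DataOutputStream dos = btc.openDataOutputStream();
            dos.writeFloat(p.getX());
            dos.writeFloat(p.getY());
            dos.writeFloat(p.getHeading());
            dos.writeInt(colorVal);
            dos.writeInt(distToObj);
            dos.flush();
            dos.close();
            return true;
        }
        catch (IOException e)
        {
            return false; // transmission failed
        }
        finally
        {
            btc.close();
        }
    }
}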

Robot 1 communication

The behaviour SenseIdentifyObject has been changed, see lab session 13 [1]. When this behaviour is searching for the object in the coloured area, the waitMoving method is called. This method calls the searchObject method listed below, which uses the ultrasonic sensor to detect whether an object is present. If an object is found, the static method BTSend.sendPose is now called, which establishes a Bluetooth connection to robot 2 and sends the pose, colour value and estimated distance to the object. If the communication succeeds, a high beep tone is played and robot 1 moves a bit backward, leaving space for robot 2 to carry the object away. Finally robot 1 stops and waits for the user to press the Enter key.

private void searchObject()
{
    int distance = us.getDistance();
    if (distance < foundThreshold)
    {
        // Stop robot until released by user
        stop();
        stopped = true;

        // Save location
        addPose(Car.getPose());

        // Send command to robot #2 to come and get the object
        boolean success = BTSend.sendPose(Car.getPose(),
                color_val, dist_to_obj, logger);
        if (success)
        {
            Sound.playTone(800, 2000, 50); // High tone
            // Back up, giving space for robot #2 to pick up the object
            backward();
            delay(500);
        }
        else
        {
            Sound.playTone(100, 2000, 50); // Low tone
        }

        // Stop and wait for the object to be removed
        while (stopped)
        {
            stop();
            delay(1000);
        }
    }
}

Robot 2 communication

For robot 2 the behaviour SeqStrategy has been changed, see lab session 15 [2]. Instead of hard-coding the location of the object, WaitForObjLocation now calls the static method BTReceive.WaitAndReceiveObjectLocation. This method waits indefinitely for robot 1 to create a Bluetooth connection and send the object location; see the code snippet below. The received pose of robot 1 is converted to a location from which robot 2 should be able to navigate to the infrared light of the object.

public void WaitForObjLocation()
{
    ObjectLocation objLoc = null;

    while (objLoc == null)
    {
        objLoc = BTReceive.WaitAndReceiveObjectLocation(logger);
        if (objLoc == null)
            logLine("There were errors receiving the object location");
    }

    // Convert robot #1 position to location for robot #2
    Pose robot2pose = objLoc.GetRobot2Pose();
    x_loc = Math.round(robot2pose.getX());
    y_loc = Math.round(robot2pose.getY());
    head = Math.round(robot2pose.getHeading());

    // Display robot #2 pose
    String msg = x_loc + "," + y_loc + "," + head;
    LCD.drawString(msg, 0, 7);
}

The same strategy is followed when waiting for the position where the object has to be delivered. Instead of going to position (0, 0), as was done in lab session 15 [2], the method BringObjectHome has been refactored to wait for a position from the computer. The method is very similar to WaitForObjLocation shown earlier, but it calls WaitAndReceiveHomeLocation instead of WaitAndReceiveObjectLocation.

The code snippet below shows how the BringObjectHome method has been changed:

private void BringObjectHome()
{
    ObjectLocation homeLoc = null;
    while (homeLoc == null)
    {
        homeLoc = BTReceive.WaitAndReceiveHomeLocation(logger);
        if (homeLoc == null)
            logLine("There were errors receiving the home location");
    }

    // Get position for destination
    Pose pose = homeLoc.GetRobot1Pose();
    x_loc = Math.round(pose.getX());
    y_loc = Math.round(pose.getY());

    // Display robot #2 pose
    String msg = x_loc + "," + y_loc;
    LCD.drawString(msg, 0, 7);

    goTo(x_loc, y_loc, true);
    WaitMoving();
    liftGripArm(); // Releases the object
    delay(2500);   // Object must be manually removed
    rotateTo(0, true);
    WaitMoving();
}

Testing and improving software for robots
On January 5th intensive testing was performed by repeatedly letting robot 1 find objects at different locations with different headings and send the position to robot 2. The test was performed letting robot 1 find the object after one turn and after three turns at the border of the arena.
We found that the turn angle at the border needed adjustment, since the drift error on the y coordinate was too big. Changing the turn angle from 170 to 140 degrees reduced this problem; we now get more tacho readings for both x and y when moving a given distance.
A reset function was added, making it possible to restart robot 1 (by pressing Enter) after finding the object instead of recalibrating the system after each test. Some parameters were adjusted, like the tooClose value (15 -> 10 cm) used to avoid objects outside the coloured area; this function was sometimes activated too early by robot 1.
Robot 2 did in some cases start gripping the object too early. A filtering strategy was implemented in SeqStrategy to ensure that the gripping was not triggered by a spurious ultrasonic reading; a sketch of the idea is shown below. The problem occurs when the robot starts oscillating while trying to get close to the light of the object. The distance parameter was adjusted from 10 to 8 cm.
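A minimal sketch of the filtering idea; the window of three consecutive readings is an assumption, and the actual code in SeqStrategy may differ:

// Sketch: only report "object in gripping range" after several
// consecutive ultrasonic readings agree, so that a single spurious
// reading cannot trigger the gripper.
public class GripFilter
{
    private final int thresholdCm; // grip distance (8 cm in our case)
    private final int window;      // consecutive readings required
    private int closeCount = 0;

    public GripFilter(int thresholdCm, int window)
    {
        this.thresholdCm = thresholdCm;
        this.window = window;
    }

    public boolean update(int distanceCm)
    {
        if (distanceCm < thresholdCm)
            closeCount++;
        else
            closeCount = 0; // a far reading resets the evidence
        return closeCount >= window;
    }
}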
 
Remaining issues found during testing that have not been completely solved:

Robot #1
  • Precision and drift of the x, y coordinates add up when turning at the border (the increased turn angle could be complemented by adjusting the coordinates at the arena border)
  • Sometimes avoids the object instead of identifying it (adjusted the tooClose parameter)
  • Sometimes enters a deadlock situation when the object is found and communication starts
  • Java exception error when communicating while robot #2 is turned off


Robot #2
  • Sometimes grips the object too early (added filtering on object detection)
  • Does not return to the home position due to an error when turning with the object
  • Sometimes the motors lock when moving to the object location (could be solved with two different speeds, adding a faster speed when moving towards the location of the object)
  • The PC to robot 2 connection takes a long time


We have decided not to fix some of the above bugs, since they seem hard to track down and will not prevent us from giving the final demonstration.

Link to final code for robot 1 and 2 and for the computer program with bluetooth communication and modifications.

Final demonstration
The next videos show the robots operating and accomplishing the initial goal of finding and collecting the objects. These final tests were made in the actual arena under real conditions.

The first video shows how the robot locates an object after one turn at the border. Once the object has been found, the collector robot is called to collect the object.

The next video shows the finder robot locating the object after three turns. After that the collector robot is called, as shown above.

In the last video below, robot 2 waits for a command from the PC about where to drop off the object.
http://www.youtube.com/watch?v=7HUUlSJkv8M

Figure 6: Finder and collector robots deployed in the arena. The deployed targets can also be seen above the red paper markers.


Conclusion
During this lab session we have improved the mechanics of the collector robot, working on the gearing and the gripping structure. This required some reprogramming of the gripper control logic. The objects have been modified to be more stable when deployed in the arena.
It must be noted that these fixes were made after testing the robots and observing their behaviour in previous labs. Testing under real conditions (like the actual arena) provided us with valuable feedback for improving our robot logic and structure. We have added communication between the robots and the remote PC, and a final integration test has been performed.

Finally, the results have been recorded to illustrate the final state of the project. As can be seen in the videos, we have achieved the target functionality described in our initial goal set, introduced in [3]. The remaining issues that we have decided not to solve are listed in this lab session.

References
[1] Lego Lab 13: http://josemariakim.blogspot.com/2011/01/lego-lab-13.html
[2] Lego Lab 15: http://josemariakim.blogspot.com/2011/01/lego-lab-15.html
[3] Lego Lab 11: http://josemariakim.blogspot.com/2011/01/lego-lab-11.html