Saturday, 1 January 2011

Lego Lab 12

Construction of robot 1 and first prototype setup
Date: 02 - December - 2010
Duration:
2 hours - 02/12/2010 (all together)
2 hours - Robot 1 construction and sensor exploration (Kim)
6 hours - Robot 1 programming and test version 1 (Kim - 09/12/2010)
3 hours - Documentation (José 24/12/2010)
Participants: Kim Bjerge, Maria Soler, José Antonio Esparza

Goals for the lab session
- Sprint #1 planning
- Collect hardware
- Construct robot #1
- Explore sensors for robot #1
- Analyze architecture
- Implement first prototype robot #1

Homework for the next time

Maria: NXT to PC communication

José: Finish lab report 11, lab report block
Kim: Flamingo blocks, black tape, white paper, Software ver. 1 robot 1

Everybody: put notes on the blog for this session.

Sprint #1 planning
Sprint for the next 2 weeks, ending December 22.

Tasks in this sprint:
1.1- Object construction
1.1- Robot1 construction
1.1- Robot1 SW architecture of functional behavior
1.2- Arena design and construction.
1.2- Find and locate an object (R1)
1.2- Coordinate representation (R1 + R2 + PC)
1.3- Identify object (R1)

Collection of hardware
Robot 1: NXT, 2 motors, ultrasonic sensor, light sensor, color sensor
Robot 2: NXT, 2 motors, ultrasonic sensor
Assorted Lego bricks, plastic components and wheels to compose the structures.

Environment design

Arena design - Architecture of the robotic system

Preliminary arena design

The initial conception of the arena was the following sketch:
Figure 1: Initial conception of the terrain.

The terrain in which the robots are going to be deployed has been named the playground. The playground is delimited by a black line. Different objects will be placed in the playground; their size, weight and color will be discussed later. The playground corners deserve special attention. Since the playground is a rectangle, there are four corners. One of them has been labeled the “Initial Point”, since it is the corner from which the robots (both finder and collector) are going to start their movements. The rest can be used by the collector robot to deliver the objects once they have been picked up.

Arena implementation

After having designed the arena and discussed it with the help of different sketches, we started with the implementation.
We considered different options to implement the base. The first one was to use A0 sheets (841mm x 1189mm) . This option could allow us to take the terrain with us, quite useful in the case we want to work outside the lab. It has some drawbacks as well, since the terrain could be broken easily (it is just paper), and since it is completely white it would get dirty. The second option was to use a wooden board.  The advantages of this is that the terrain will be solid an once it has been constructed it will last until the project will be finished with no problems. The drawbacks are that the terrain would be heavy and it will have to remain in the lab. Finally we chose to use the wooden board and the results can be seen in figure 6.  

To delimit the arena and thereby define the playground, we decided to use black duct tape.

Object design

What should the objects be like?

When we were considering the objects that should be processed by the robot, we looked at different parameters. The most relevant in the beginning were size and weight. The robot is propelled by two standard NXT motors with no special transmission, so no high torque values can be expected from them. The gripper mechanics must be considered as well; the mechanism is driven by a Lego NXT motor too, so limitations in mechanical power should be expected. These mechanical factors made us conclude that the final objects should be light, with an approximate size of 5 cm. Colored objects would be an advantage, since they would allow the robot to distinguish between different kinds of objects. An additional and quite relevant requirement is that it must be possible to detect the objects with an ultrasonic sensor. This adds some limitations; for example, the objects should not be covered by cloth, since cloth absorbs the ultrasonic signal sent by the sensor.

Robot 1 construction
The construction of robot 1 is simple and based on the one presented in the Lego NXT instruction manual. There are minor changes so that it is possible to attach three sensors at the front of the robot (light, color and ultrasonic). The final result can be seen in the picture below (figure 2).

Figure 2: Robot 1 (finder) constructed.

Exploring sensors for Robot 1
After robot 1 was constructed, a test program was written to explore the performance of the three different sensors.

import lejos.nxt.*;
import lejos.nxt.addon.ColorSensor; // HiTechnic color sensor class in the leJOS version used

public class SensorTest
{
  public static void main(String[] args) throws Exception
  {
    int dist_val, light_val, color_val;

    UltrasonicSensor us = new UltrasonicSensor(SensorPort.S1);
    LightSensor ls = new LightSensor(SensorPort.S2);
    ColorSensor cs = new ColorSensor(SensorPort.S3);
    ls.setFloodlight(true);

    // Read all three sensors and show the values on the LCD
    // until ESCAPE is pressed.
    while (!Button.ESCAPE.isPressed())
    {
      dist_val = us.getDistance();
      light_val = ls.readValue();
      color_val = cs.getColorNumber();
      LCD.drawInt(dist_val, 3, 13, 1);
      LCD.drawInt(light_val, 3, 13, 2);
      LCD.drawInt(color_val, 3, 13, 3);
      Thread.sleep(100);
    }
  }
}

Ultrasonic Sensor:

It was difficult to detect the round balls. Square objects give better detection of both the object and its distance.

Light Sensor:

Good readings for the selected arena and black marking.

Arena: 45 - 50
Black: 34 - 40
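
These readings suggest placing the light threshold used later by the TurnAtBoarder behavior between the two ranges. A minimal sketch follows; the constant name matches the later snippet, but the exact value is our assumption:

// Measured above: arena 45-50, black border 34-40. Choose the threshold
// midway between the highest border reading (40) and the lowest arena
// reading (45), so ls.readValue() <= lightThreshold means "on the border".
private static final int lightThreshold = 42;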

HiTechnic Color Sensor:

We made a sensor test program to test the color sensor. We were only able to detect the color at a distance of approximately 0.5 - 2 cm.

Measured values on different colors:

Yellow color value: 5
Orange color value: 8
Red color value: 9
Green color value: 0
White color value: 17
Black color value: 0

These values differ from those specified in the Lejos API [1] for the HiTechnic ColorSensor:

Color index.
0 = black
1 = violet
2 = purple
3 = blue
4 = green
5 = lime
6 = yellow
7 = orange
8 = red
9 = crimson
10 = magenta
11 to 16 = pastels
17 = white



We could not find an explanation for this difference. Perhaps the color sensor needs calibration, or the light conditions in the surroundings are not good enough. We need to keep this in mind when we later design the robot 1 software.
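
One way to compensate in the robot 1 software would be to sample the sensor repeatedly and act on the most frequent reading. A minimal sketch under that assumption (cs is the ColorSensor from the test program above):

// Return the most frequent of n color readings, to damp out the
// unstable values observed above. Color indices range over 0..17.
static int stableColor(ColorSensor cs, int n) throws InterruptedException {
  int[] counts = new int[18];
  for (int i = 0; i < n; i++) {
    int c = cs.getColorNumber();
    if (c >= 0 && c < counts.length)
      counts[c]++;
    Thread.sleep(10);
  }
  int best = 0;
  for (int c = 1; c < counts.length; c++)
    if (counts[c] > counts[best])
      best = c;
  return best;
}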

The complete source code for the sensor test is shown above.


Robot 1 SW architecture
In the search for an object in the arena, a reactive strategy is chosen, since the robot does not know in advance how to find the object. Robot 1 will be designed as an autonomous embodied agent [2] that interacts with the environment through a physical body within that environment. In our case robot 1 senses the environment using the light, color and distance sensors to detect the physical surroundings and moves around searching for the object. The subsumption architecture [3] is chosen to design the behavior of robot 1, which is composed of a number of different sub-behaviors. This approach makes it easy to experiment with and extend the robot's autonomous behavior as we optimize the search. The architecture of prioritized behaviors is described below. Together, these individual behaviors make it possible for the robot to find objects in the arena without using a programmed plan.

Move Forward:
Moves forward, then sleeps 0.5 s to allow higher prioritized behaviors to suppress moving forward.
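
A minimal sketch of what this behavior could look like as a thread in the subsumption scheme from [5] is shown below; the Behavior base class (sketched near the end of this section) and the Car.forward() helper are assumptions based on the class diagram, not the exact project code.

// Hypothetical sketch: lowest-priority behavior that simply drives
// forward whenever no higher-priority behavior has suppressed it.
public class MoveForward extends Behavior {
  public MoveForward(int priority) { super(priority); }

  public void run() {
    while (true) {
      if (!isSuppressed())
        Car.forward();  // drive on while nobody else holds the motors
      delay(500);       // give higher priorities time to take over
    }
  }
}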

Turn At Boarder:
Uses the light sensor to detect the black-marked border of the arena. Whenever the border is detected, a turn is made in either the right or the left direction to start searching for the object in a new part of the arena.

The code snippet below shows the active thread of the TurnAtBoarder class. Whenever the border is detected by the light sensor, a turn is made in a new direction. We experimented with different turn strategies; the simple one of alternating between turning left and right turned out to be the easiest to handle.

public void run() {
  String message;

  while (true) {

    while (ls.readValue() > lightThreshold) {
      // Only display the value while we don't detect the border of the search area.
      message = getStatusMessage();
      drawString(message);
      delay(5);
    }

    // When the robot detects the border marker it turns to search
    // for the object in a new direction, alternating between
    // turning +150 and -150 degrees.
    suppress();

    switch (direction) {
      case 0:
        rotate(150);
        break;
      case 1:
        rotate(-150);
        break;
      case 2:        // The +/- 90 degree cases are left over from an earlier
        rotate(90);  // turn strategy and are never reached, since direction
        break;       // only cycles between 0 and 1 below.
      case 4:
        rotate(-90);
        break;
    }
    if (++direction == 2) direction = 0;

    message = getStatusMessage();
    drawString(message);
    delay(100);
    stop();
    release();
  }
}

Avoid Object:
To ensure that we don't hit any objects, this behavior avoids objects detected within a certain critical distance. It backs up and then turns either right or left, using a behavior similar to the one described by Maja J. Mataric in [4].
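
A sketch of how this could look, following the same structure as the other behavior threads, is given below; the criticalDistance value, the Car.backward() helper and the constructor are our assumptions, not the project code.

// Hypothetical sketch: back up and turn away when an object comes
// within a critical distance, alternating the turn direction.
public class AvoidObject extends Behavior {
  private final UltrasonicSensor us;
  private static final int criticalDistance = 15; // cm, placeholder value
  private int sign = 1;

  public AvoidObject(int priority, UltrasonicSensor us) {
    super(priority);
    this.us = us;
  }

  public void run() {
    while (true) {
      if (us.getDistance() < criticalDistance) {
        suppress();            // take the motors from lower priorities
        Car.backward();        // back away from the object
        delay(500);
        Car.rotate(sign * 90); // turn away, alternating left and right
        sign = -sign;
        Car.stop();
        release();
      }
      delay(100);
    }
  }
}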

Sense Identify Object:
This is the highest prioritized behavior; it identifies the object by using the color sensor and the ultrasonic distance sensor. When a colored marker area is detected and an object is found, the pose is recorded and transmitted to robot 2. See the next chapter for experiments in object detection. We decided to use a colored marker area to indicate where to look for the object.

The code snippet below shows the active thread of the SenseIdentifyObject class. When the colored area in the arena is detected, the robot starts looking for an object. If an object is detected, the pose of the robot is stored and a tone is played.

public void run()
{
  int distance;

  while (true)
  {
    while (cs.getColorNumber() != color_val)
    {
      delay(100);
      dispPose(Car.getPose());
    }

    // When the color area is detected the robot
    // looks for an object within the distance threshold.
    // If an object is detected its location is recorded.
    distance = us.getDistance();
    if (distance < foundThreshold)
    {
      suppress();
      stop();
      // Save the location and signal the find with a tone.
      addPose(Car.getPose());
      Sound.playTone(800, 2000, 50);
      delay(10000);
      release();
    }
    delay(100);
    dispPose(Car.getPose());
  }
}


Play Sound:
Whenever an object is detected within a certain distance from the robot, a random song is played. The songs were composed by Ole Caprani [5]. This behavior is independent of all the behaviors above, since it does not move the robot car.
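
To show how the behaviors fit together, here is a hypothetical sketch of how the threads could be started in priority order; the constructor arguments and the Behavior base class (sketched below) are assumptions based on the class diagram, not the project source.

// Hypothetical wiring of the behavior threads; higher numbers mean
// higher priority in this sketch.
public static void main(String[] args) {
  UltrasonicSensor us = new UltrasonicSensor(SensorPort.S1);
  Behavior[] behaviors = {
      new MoveForward(1),           // lowest priority: just drive forward
      new TurnAtBoarder(2),         // turn at the black border
      new AvoidObject(3, us),       // back off from obstacles
      new SenseIdentifyObject(4)    // highest priority: find the object
  };
  for (Behavior b : behaviors)
    b.start();
  new PlaySound().start();          // independent: never touches the motors
}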

Figure 3: Controller software architecture for robot 1

The UML class diagram shows the complete architecture for robot 1. The architecture is composed of 5 concurrent threads implementing prioritized behaviors that together define the complete behavior of robot 1. The class Car uses the SimpleNavigator instead of the MotorPort class, which makes it possible to record the robot's position while it moves around the arena.
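
A minimal sketch of such a Car wrapper is given below, assuming the SimpleNavigator/TachoPilot API of the leJOS NXJ version we use; the motor ports, wheel diameter and track width are placeholder assumptions.

// Hypothetical sketch of the static Car wrapper around SimpleNavigator.
// The wheel diameter (5.6 cm) and track width (11.2 cm) are placeholders
// that must be measured on the actual robot.
import lejos.nxt.Motor;
import lejos.robotics.Pose;
import lejos.robotics.navigation.SimpleNavigator;
import lejos.robotics.navigation.TachoPilot;

public class Car {
  private static final SimpleNavigator nav =
      new SimpleNavigator(new TachoPilot(5.6f, 11.2f, Motor.B, Motor.C));

  public static void forward()  { nav.forward(); }
  public static void backward() { nav.backward(); }
  public static void stop()     { nav.stop(); }
  public static void rotate(float degrees) { nav.rotate(degrees); }

  // Odometry-based pose, computed from the motor tachometers.
  public static Pose getPose()  { return nav.getPose(); }
}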

Why use the subsumption architecture from [5] and not the Lejos API? This architecture is better than the Lejos subsumption architecture (see [6]): it is more flexible and well structured. The subsumption class in the Lejos API is not truly concurrent; it implements a prioritized, sequential way of running the different behaviors. In our case we have the PlaySound behavior, which is totally independent of all other behaviors and therefore needs to run concurrently.

In [5] behaviors are implemented as concurrent threads that are preempted by the Lejos OS. Behaviors inhibit access to the motors using a subsumption mechanism implemented by the methods isSuppressed, suppress and release. The subsumption class offered by the Lejos API works in a prioritized, sequential round-robin fashion instead.
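
The following is a minimal sketch of our reading of that mechanism, assuming a single shared priority level; it illustrates the suppress/release idea, not the exact implementation from [5] or our repository.

// Hypothetical sketch of a base class for the behavior threads.
// A behavior calls suppress() before using the motors and release()
// afterwards; lower-priority behaviors poll isSuppressed() and back off.
public abstract class Behavior extends Thread {
  private static int activePriority = 0; // priority currently holding the motors
  private final int priority;

  protected Behavior(int priority) {
    this.priority = priority;
  }

  protected void suppress() {
    activePriority = priority;        // claim the motors
  }

  protected void release() {
    if (activePriority == priority)
      activePriority = 0;             // hand the motors back
  }

  protected boolean isSuppressed() {
    return activePriority > priority; // a more important behavior is driving
  }

  protected void delay(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException e) {}
  }
}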

Source code for the implementation of the above subsumption architecture, inspired by [5]:
http://code.google.com/p/josemariakim/source/browse/#svn/trunk/FP_Robot1Finder

The robot finding the objects can be seen in the following video:
http://www.youtube.com/watch?v=rwR-NVlFZmw

Experiments in object detection
The initial objects were the plastic balls provided in the NXT educational kit. They were light and colored. Unfortunately they were too small to be detected properly by the ultrasonic sensor mounted on the robot: they were detected correctly only about 30% of the time, which was not reliable enough to ensure that the collector robot would be able to pick up the objects.
A second approach was to use a Lego brick based object. This was a good option since the color and shape were up to us, so it was easy to customize in that sense. We created an object with a cubic shape, bigger in volume than the Lego balls. The object detection was very reliable, but the object was too heavy to be transported properly by a Lego based robot. The same happened when we used NXT bricks, where the situation was even worse. Another trial was to stick two Lego balls together with tape; even though the size was adequate for the object to be recognized, the shape made object handling difficult for the robot.

Figure 4: Robot 1 deployed in a test arena with different kinds of objects.

The final option, which is the one that has been used in the project, was to choose flamingo balls (expanded polystyrene). It is possible to buy this kind of material in hobby shops, but it is only available in one color: white. Several sizes were available; in the end we are using two balls with a diameter of 5 cm and one ball with a diameter of 10 cm.

Figure 5: The chosen object type.

The balls can be seen in the following picture, together with the finder robot deployed in the arena.

Figure 6: Robot deployed in the arena with three “flamingo” white balls.

Since the precision offered by the color sensor was not high enough for proper operation, we needed another way to mark the different objects deployed in the arena. The solution was to place colored paper under the objects. The robot was then able to place the color sensor above the surface and determine whether the object should be selected for collection or not.

Figure 7: The robot has detected one white ball.

Conclusion
The first version of the robot 1 prototype got stuck in the corners of the arena. A better approach would be to select a more random behavior when turning at the borders of the arena. Also, when robot 1 has found the object, how good is the position it has recorded? Can it be used by robot #2 to localize the object and carry it away?
During this lab session we realized that simple things can take a very long time to address. The idea of detecting an object and identifying it is very simple, but it took us a while to construct a proper object and a robot able to handle it (including selection, location and transportation). This made us think that even in our small “proof of concept” project, in which we define the problem to solve, things are not that simple. In a real problem the situation could be more complicated; for example, consider that the object already exists and you have to create a robot to handle it without refactoring the object. We also have to consider the limitations of the hardware we are working with (Lego sensors and actuators), especially while interacting with the physical environment. After this lab session we know that the gripper construction for robot 2 can be complicated (since it involves interaction with the physical world).

References
[1] Lejos API for ColorSensor: http://lejos.sourceforge.net/nxt/nxj/api/index.html
[2] Embodied Agents (Wikipedia).
[3] Subsumption architectures (Wikipedia).
[4] Maja J. Mataric, Integration of Representation Into Goal-Driven Behavior-Based Robots, in IEEE Transactions on Robotics and Automation, 8(3), Jun 1992, 304-312.
[5] Lego lab lesson 8 - behaviours as concurrent threads.
http://www.legolab.daimi.au.dk/DigitalControl.dir/NXT/Lesson8.dir/Lesson.html
[6] Lego lab lesson 10 - Behaviour based architecture
http://josemariakim.blogspot.com/2010/11/lego-lab-lesson-10-behaviour-based.html
