Friday, 26 November 2010

Lego Lab 11

The Final Project

Date: 25 - November - 2010
Duration: 4 hours
Participants: Kim Bjerge, Maria Soler, José Antonio Esparza


Goals of the lab session

By the end of the lab session we should have chosen a project and discussed at least two alternative projects. The lab report from this lab session contains:
  • list of projects considered with a short description of each, e.g. a description could be: "robot that can dry wet spots on the floor during a handball match".
  • for each project describe the hardware/software platform and software architecture of each component.
  • try to point out the most difficult problems to be solved in each project, e.g. for the floorcleaner robot it is difficult to figure out when to stop cleaning.
  • for each project describe what you would expect to be able to present at the end of the project period.


Project Ideas:


1. Use Jakob's paper [3] to let a flock of robots make and keep formations while moving, by identifying their neighbours

The robots should move around in a formation defined on a computer and transferred via Bluetooth to the robots.
The robots should negotiate who is the leader and then move around in the defined formation.
It should be visible who is the master, for example by playing a special tune or blinking some lights.
They could move around either by following a line on the floor or by following a route that would also be transferred to the master robot. The non-master robots are not aware of the route to follow; each is only aware of the neighbour next to it.
As an optional feature, the robots could react to a loud sound by changing formation, changing route, or changing the master of the flock.

Hardware/Software platform:
For this project at least three robots will be used (the minimum needed to speak of a formation), and a PC will be used for creating the formations and routes.

Robots (3 of them):
- Color sensor to check identity by color
- Ultrasonic sensor to calculate the distance to the neighbour
- (Optional) Sound sensor
- Bluetooth communication

Computer:
- Bluetooth communication
- GUI to create formations and routes
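
To illustrate the transfer from the PC described above, the sketch below shows how the master robot could receive a formation definition over Bluetooth using the standard leJOS stream classes (Bluetooth.waitForConnection and a DataInputStream). The message layout - a robot count followed by an x/y offset per robot - and the class name FormationReceiver are assumptions at this stage, not a fixed design.

import java.io.DataInputStream;
import lejos.nxt.comm.BTConnection;
import lejos.nxt.comm.Bluetooth;

// Sketch: the master robot waits for the PC and reads a formation definition.
// The message layout (number of robots, then an x/y offset per robot) is assumed.
public class FormationReceiver
{
    public static void main(String[] args) throws Exception
    {
        BTConnection btc = Bluetooth.waitForConnection();
        DataInputStream dis = btc.openDataInputStream();

        int robots = dis.readInt();       // number of robots in the formation
        int[] dx = new int[robots];
        int[] dy = new int[robots];
        for (int i = 0; i < robots; i++)
        {
            dx[i] = dis.readInt();        // offset relative to the leader (cm)
            dy[i] = dis.readInt();
        }
        dis.close();
        btc.close();
        // ... negotiate the leader and distribute the offsets to the other robots ...
    }
}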

Challenge problem:
One of the challenges is the identification of the neighbours by color. It has to be tested how sensitive the color sensors are when used at a large distance (so far we have used them at distances of at most 5 cm).
The other challenge is to create the mechanical part that will rotate to position the sensors at the right angle, and to create the software that will make it work.

Figure 1: Initial sketch showing the flock of the robots during the process of creating the formation.

Presentation:
It should be possible to present a flock of robots that, after receiving an order from the computer, negotiate who is the master, put themselves in formation and follow a route (either a line or a predefined route).

2. Train a robot to follow a path. The robot should be able to find its way back to the initial point. It should be able to avoid obstacles deployed in the environment after training

A robot will be given a certain path that it has to follow in order to reach the goal point. This is considered the learning stage, and this knowledge could be acquired by remote controlling the robot from a PC while it stores the path, by retrieving a path file from another robot, or by having the path transmitted in real time from another device.
Once the robot has reached the goal area, it should be able to go back to the initial position where it started the movement. This is considered the second phase of the robot operation.
Between the first phase, learning, and the second phase, returning, the environment may change. These changes will introduce a set of obstacles located along the path the robot initially followed.
The aim is that the robot should be able to arrive at the initial location while avoiding the obstacles.
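
A minimal sketch of the learning phase, assuming the position estimates come from odometry: the robot records its estimated position at fixed intervals while being remote controlled, and the recorded list can later be replayed in reverse to drive back to the start. The Waypoint class and the fixed-size buffer below are placeholders of our own, not existing leJOS classes.

// Placeholder sketch of the learning phase: the robot stores its estimated position
// at fixed intervals while being remote controlled.
public class PathRecorder
{
    static class Waypoint
    {
        double x, y; // cm, in the robot's odometry frame
        Waypoint(double x, double y) { this.x = x; this.y = y; }
    }

    private final Waypoint[] path = new Waypoint[1000]; // recorded path
    private int count = 0;

    // Called periodically (e.g. every 500 ms) with the current odometry estimate
    public void record(double x, double y)
    {
        if (count < path.length)
            path[count++] = new Waypoint(x, y);
    }

    // Returns the waypoints in reverse order for the return trip
    public Waypoint[] returnPath()
    {
        Waypoint[] back = new Waypoint[count];
        for (int i = 0; i < count; i++)
            back[i] = path[count - 1 - i];
        return back;
    }
}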

Hardware/Software platform:
The required platform is similar to the one we have used during the course labs in the first part of the course. All the elements listed below can be found in a single LEGO NXT educational pack like the ones available in the lab.
  • NXT brick.
  • Lego bricks in order to create the mechanical structure of the robot.
  • Ultrasonic sensor
  • Two NXT motors


Challenge problem:
Even though the problem statement could lead us to think that this is a rather simple task, there are some issues that deserve extra consideration.
During the course we have learned about the different facilities provided in the lejOS API to perform navigation. In this project it will be necessary to evaluate them and assess their performance, paying special attention to the error introduced while the robot is moving. The use of odometry to construct our own movement reconstruction algorithm should be carefully considered.
As explained in the lab report [1], error detection and correction in path following play a major role, since a small deviation can lead to serious imprecision if it is not corrected in time. This introduces the need to study strategies to tackle the drifting problem (use of way-points, dual control, ...).
The kinematics of the robot are especially relevant in this case, and an in-depth analysis of the forward and inverse kinematics of the two-wheeled robot will be necessary.
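
To make the kinematics discussion concrete, the sketch below shows the usual forward-kinematics (odometry) update for a differential-drive robot: the new pose estimate is computed from the two tacho counters. The wheel diameter and track width are placeholder values that would have to be measured on the actual robot.

import lejos.nxt.Motor;

// Sketch of a differential-drive odometry update based on the NXT tacho counters.
// WHEEL_DIAMETER and TRACK_WIDTH are placeholders and must be measured on the robot.
public class Odometer
{
    static final double WHEEL_DIAMETER = 5.6;  // cm (placeholder)
    static final double TRACK_WIDTH    = 12.0; // cm (placeholder)

    double x, y, heading;    // pose estimate (cm, cm, radians)
    int lastLeft, lastRight; // previous tacho readings (degrees)

    void update()
    {
        int left  = Motor.A.getTachoCount();
        int right = Motor.C.getTachoCount();

        // distance travelled by each wheel since the last update
        double dL = Math.PI * WHEEL_DIAMETER * (left - lastLeft) / 360.0;
        double dR = Math.PI * WHEEL_DIAMETER * (right - lastRight) / 360.0;
        lastLeft  = left;
        lastRight = right;

        double d      = (dL + dR) / 2.0;          // distance of the robot centre
        double dTheta = (dR - dL) / TRACK_WIDTH;  // change in heading

        // simple dead-reckoning update of the pose
        x += d * Math.cos(heading + dTheta / 2.0);
        y += d * Math.sin(heading + dTheta / 2.0);
        heading += dTheta;
    }
}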
As exposed in [1], the construction of the robot platform should be done carefully, since simple elements like the free wheel may introduce errors due to friction or to its position at the beginning of the movement.
As in any robotics project, the control strategy should be studied in order to achieve good performance, behaviour organization and error minimisation. Questions like "Should the agent be purely stimulus-response based?" or "Should the agent have memory to keep track of previous states?" will arise during the design process.
One of the most challenging parts will be the implementation of the behaviour once an obstacle has been discovered. A predefined route may be used, or a new one could be constructed depending on the environmental conditions. While the second strategy is clearly more flexible, it is more complex to implement. It should also be considered how the robot should react if it detects an obstacle while it is already avoiding one.

Presentation:
In order to present the project, a terrain of 4 square meters should be used. The terrain will change between the phases presented above. While in the first phase the terrain should be clear, in the second phase several objects should be deployed along the track that was followed. The size of the objects should be at least comparable to the size of the robot.

Figure 2: Initial sketch showing the robot operation.

3. Let two robot cars with different behaviours collaborate on solving a specific task. The task would be for the first robot car to find a certain object and for a second robot to carry and transport the object to a predefined location


1. Robot - searches for the object and communicates its coordinates to the second robot
2. Robot - picks up the object and transports it away to a certain location

The first robot is equipped with sensors to find and identify different objects. The object could be a colored block. The robot searches for the colored block in a restricted area marked with a black square. The robot car is equipped with a light sensor used to limit the search to the restricted arena where more blocks of different colors are located. An ultrasonic distance sensor and a color sensor are used to find the right colored block.

The second robot must have a mechanical construction to pick up the object. When the block is found, the first robot transmits the coordinates of the found block to the second robot car using a wireless Bluetooth connection. The second car is equipped with a mechanical construction to carry the block away. The block is transported to a location controlled by a central remote PC, from where drop-off coordinates are transmitted. The second robot car navigates to this location, where the block is dropped off, and then waits for the next block to be found.

Hardware/Software platform:
The hardware platform will be 2 different robot cars running different programs on their NXT computers. They communicate using Bluetooth in a peer-to-peer setup. A central remote PC transmits drop-off coordinates to the second robot car in a client-server architecture. A sketch of the coordinate encoding shared by the robots and the PC is shown after the component lists below.

1. Robot car:
NXT Computer, Ultrasonic sensor, Color sensor, light sensor, Motors with tacho for localization. Bluetooth for peer-to-peer communication with the second robot.

2. Robot car:
NXT Computer, Ultrasonic sensor, light sensor, Motors with tacho for localization.
Bluetooth for peer-to-peer communication with the first robot.
Mechanical construction for pickup of object.
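
To make the shared coordinate representation concrete (see the WBS item "Coordinate representation (R1 + R2 + PC)" in the project plan), the sketch below shows one possible encoding: a block position as two integers written to and read from the data streams of the Bluetooth connections described above. The class name and the centimetre unit are our own assumptions at this stage.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of a shared coordinate encoding for R1, R2 and the PC.
// Units and encoding (two ints, assumed to be cm in the arena frame) may change.
public class Coordinate
{
    public int x; // cm
    public int y; // cm

    public Coordinate(int x, int y) { this.x = x; this.y = y; }

    public void send(DataOutputStream dos) throws IOException
    {
        dos.writeInt(x);
        dos.writeInt(y);
        dos.flush();
    }

    public static Coordinate receive(DataInputStream dis) throws IOException
    {
        return new Coordinate(dis.readInt(), dis.readInt());
    }
}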

Challenge problem:
One of the challenges will be to make a stable mechanical construction that can pick up the colored block, due to the limitations of the LEGO parts. Another challenge will be coordinating the pick-up of the colored block: the movement and positioning of the two cars have to be coordinated in relation to each other.

Figure 3: Initial sketch showing how an object was found and picked by the collector robot.

Presentation:
Present the scenario where the first robot car finds a colored block and calls for the second car, which carries the block to a drop-off location.

Selection of the project


Why not project #1
Making the robots move in a flock, and especially making them follow the leader at a certain angle using the ultrasonic sensor, would be hard both mechanically and sensor-wise. The project covers many topics from the course, but we have chosen not to do it due to the limitations of the LEGO sensors. Some of the other project suggestions contain more interesting topics from the course that we would like to work on.

Why not project #2
This project would be harder for several people to work on at the same time. It covers many interesting challenges, but it is very hard to do, especially the localization when turning. Here we would perhaps need to improve the leJOS API by implementing the forward and inverse kinematics [4]. We have chosen not to do this project mainly because of the difficulty of working on it in parallel.

Why project #3

This project covers many topics touched during the lessons, such as localisation, navigation, communication, different architectures, line following and sensors. That gives us a chance to understand better and study all these topics in greater depth.

This project requires working at different abstraction levels: low-level work with sensors and purely reactive control, and high-level work with more complex architectures. This gives us the opportunity to work on most of the concepts introduced in the course.

The work can be divided easily because we have two different robots with two different architectures, plus the communication between them. This makes it possible to work in parallel and also to work iteratively, so that we always have a working system and just add functionality or improve the existing functionality in each iteration.


Project plan
We have chosen to use iterative development, using Scrum [5] as an inspiration, as we have worked with it before. Each iteration should produce a working system that could be presented. That gives us the chance to start with a basic system and improve it step by step, adding a bit of functionality at a time.
We will keep track of the times we meet and the duration of the meetings for later reference.

The prioritized list below contains the work breakdown structure of the tasks that need to be done in the project. The project is then divided into 3 milestones or sprints (taking the term from Scrum), at each of which the project should be in a deliverable state.

WBS:
1.1- Object construction
1.1- Robot1 construction
1.1- Robot1 SW architecture of functional behavior
1.2- Arena construction
1.2- Find and locate an object (R1)
1.2- Coordinate representation (R1 + R2 + PC)
1.3- Identify object (R1)
2- Robot2 construction
2- Pick up object (magnet actuator)  (R2)
2- Robot2 SW architecture of functional behavior
3- Go to a specified location (R2)
4- Drop object (R2)
5- Bluetooth communication between robots
5- Send coordinates to the other robot (R1, R2, PC)
6 - Bluetooth communication between PC and robot
6- Send coordinates from PC to R2 (PC)

Milestone goals:
Sprint #1
1 - Robot 1 is able to find and locate object
2 - Robot 2 is able to pick up an object
(Before Christmas)

Sprint #2
3 - Robot 2 is able to navigate to the location of the object
4 - Robot 2 is able to carry the object away and drop it off
5 - Robot 1 communicates coordinates to Robot 2 where to find object
(First week in the New Year)

Sprint #3
6 - PC instructs robot 2 where to drop off object
-   Prepare presentation
(Last week presenting)

Conclusion

In this lab session we have suggested three projects that each cover different topics from the course. We have decided on the project that we think is feasible and will cover most of the different topics learned in the course. Its focus will be on autonomous agents cooperating to achieve a certain goal. Subtopics of the project include: sensors, actuators, localization, navigation, communication, subsumption architectures, and sequential and reactive behaviors. The task of the project is for the first robot car to find a certain object and for a second robot to carry and transport the object to a predefined location. We have presented a draft project plan with 3 main milestones that will guide us towards the goals of the project.

References

[1] Lego lab session 9: Navigation. http://josemariakim.blogspot.com/2010/11/lego-lab-lesson-9-navigation.html
[2] Scrum. http://en.wikipedia.org/wiki/Scrum_(development)
[3] Jakob Fredslund and Maja J Matarić, "A General, Local Algorithm for Robot Formations", IEEE Transactions on Robotics and Automation, special issue on Advances in Multi-Robot Systems, 18(5), Oct 2002, 837-846.
[4] Thomas Hellstrom, Forward Kinematics for the Khepera Robot.
[5] Scrum definition: http://en.wikipedia.org/wiki/Scrum_(development)

Wednesday, 24 November 2010

Lego Lab lesson 10: Behaviour based architecture


Date: 18 - November - 2010
Duration: 3 hours
Participants: Kim Bjerge, María Soler, José Antonio Esparza

Goals of the lab session

Get experience working with Behaviour-based architecture by implementing simple actions grouped as behaviours.
Investigate the class organization in the lejOS related to the behaviour concept.
Explore how the behaviour structure could be extended.

Bumper car

  1. Press the touch sensor and keep it pressed. What happens? Explain.


While the DetectWall behaviour is acting, the robot does not react as fast as when it is not.

The turn-back action implemented in the behaviour is blocking. The implementation can be seen in the following code snippet:


public void action()
{
    Motor.A.rotate(-180, true); // start Motor.A rotating backward
    Motor.C.rotate(-360);       // rotate C farther to make the turn
}

Even though the exit button is being pressed, the robot will have to finish the full turn before actually executing the exit instruction.

  2. Both DriveForward and DetectWall have a takeControl method that is called in the Arbitrator. Investigate the source code for the Arbitrator and figure out if takeControl of DriveForward is called when the triggering condition of DetectWall is true.


In the source code of the Arbitrator class we have found the piece of code that determines this behaviour. This part of the code is shown below:

for (int i = maxPriority; i >= 0; i--)
{
    if (_behavior[i].takeControl())
    {
        _highestPriority = i;
        break;
    }
}

The maxPriority variable is set to the index of the highest-priority behaviour registered in the system. The loop runs from the highest priority downwards and breaks at the first behaviour whose takeControl() returns true. Since DetectWall has a higher priority than DriveForward, takeControl() of DriveForward is therefore not called while the triggering condition of DetectWall is true; it is only reached when no higher-priority behaviour wants control.
  3. Implement a third behavior, Exit. This behavior should react to the ESCAPE button and call System.exit(0) if ESCAPE is pressed. Exit should be the highest priority behavior. Try to press ESCAPE both when DriveForward is active and when DetectWall is active. Is the Exit behavior activated immediately? What if the parameter to Sound.pause(20) is changed to 2000? Explain.


The exit behaviour implements the Behavior interface as follows:

class ExitBehavior implements Behavior
{
    public boolean takeControl()
    {
        return Button.ESCAPE.isPressed(); // react to the ESCAPE button
    }

    public void suppress()
    {
        // never suppressed - this is the highest priority behaviour
    }

    public void action()
    {
        System.exit(0);
    }
}

As can be seen, the suppress method has not been implemented. The reason is that, since this behaviour has the highest priority, it is never going to be suppressed.


Once the escape button has been pressed, the robot now exits the program much faster than in the previous case. If the parameter passed to Sound.pause is set to 2000, the time before the program exits increases considerably. This is because the Sound.pause method is blocking, and the blocking time has been multiplied by 100 by this modification.

  4. To avoid the pause in the takeControl method of DetectWall, a local thread in DetectWall could be implemented that samples the ultrasonic sensor every 20 msec and stores the result in a variable distance accessible to takeControl. Try that. For some behaviors the triggering condition depends on sensors sampled at a constant interval, e.g. a behavior that remembers sensor readings to form a running average. Therefore, it might be a good idea to have a local thread do the continuous sampling.


The code snippet below shows how the detection in DetectWall is separated into a thread of its own. This change made the response of the exit behaviour much faster and more stable.

class DetectWall implements Behavior
{
    private TouchSensor touch;
    private UltrasonicSensor sonar;
    private boolean _ultrasonicDetected = false;
    private Detect detect;

    public DetectWall()
    {
        touch = new TouchSensor(SensorPort.S4);
        sonar = new UltrasonicSensor(SensorPort.S1);
        detect = new Detect();
        detect.setDaemon(true);
        detect.start();
    }

    private void pingWaitDetect()
    {
        sonar.ping();
        Sound.pause(20);
        LCD.drawInt(sonar.getDistance(), 10, 2);
        _ultrasonicDetected = (sonar.getDistance() < distThreshold); // threshold in cm - the actual value was lost in the blog formatting
    }

    public boolean takeControl()
    {
        return touch.isPressed() || _ultrasonicDetected;
    }
    ....
    private class Detect extends Thread
    {
        public void run()
        {
            while (true)
            {
                pingWaitDetect();
            }
        }
    }
}
  5. Try to implement the behavior DetectWall so the actions taken also involve moving backwards for 1 sec before turning.


See the code snippet below - the result is a system that is blocked while avoiding the wall.

class DetectWall implements Behavior
{
    ....
    public void action()
    {
        Motor.A.backward(); // Move backward 1 sec. before turning
        Motor.C.backward();
        Sound.pause(1000);
        Motor.A.rotate(-180, true); // start Motor.A rotating backward
        Motor.C.rotate(-360);       // rotate C farther to make the turn
    }
}
  6. Try to implement the behavior DetectWall so it can be interrupted and started again, e.g. if the touch sensor is pressed again while turning.


We have chosen to move the motor control from the action of the DetectWall behavior to the separate thread that also samples the ultrasonic sensor. In this way the action will not be blocking and it is possible to extend the separate thread to control the moving backward and turning in a state machine that can be interrupted.

class DetectWall implements Behavior
{
    ....
    public void action()
    {
        synchronized (detect)
        {
            detect.startAction();
        }
        while (!detect.isCompleted())
        {
            Thread.yield(); // don't exit before completing the state machine
        }
    }
}

The DetectWall action starts the action and waits until it is completed. The Detect class implements the new thread that performs the moving backward and turning. This behavior incorporates checking of the touch sensor, which will stop the action and reset the state machine.

private class Detect extends Thread
{
    private int _counter;
    private int _state = 0;

    private void startAction()
    {
        _state = 1;
    }

    private boolean isCompleted()
    {
        return (_state == 0);
    }

    private void pingWaitDetect()
    {
        sonar.ping();
        Sound.pause(20);
        LCD.drawInt(sonar.getDistance(), 10, 2);
        _ultrasonicDetected = (sonar.getDistance() < distThreshold); // threshold in cm - the actual value was lost in the blog formatting
    }

    public void stateMachine()
    {
        synchronized (this)
        {
            if (touch.isPressed())
            {
                Motor.A.stop(); // touched again while turning: stop and reset the state machine
                Motor.C.stop();
                _state = 0;
            }
            switch (_state)
            {
            case 0: // Idle
                break;
            case 1: // Start moving backward
                Motor.A.backward(); // Move backward 1 sec. before turning
                Motor.C.backward();
                _counter = 0;
                _state = 2;
                break;
            case 2: // Moving backward 1 sec
                if (_counter++ == 25) // 1 sec.
                    _state = 3;
                break;
            case 3: // Rotating
                Motor.A.rotate(-180, true); // start Motor.A rotating backward
                Motor.C.rotate(-360);       // rotate C farther to make the turn
                _state = 0;
                break;
            }
        }
    }

    public void run()
    {
        while (true)
        {
            pingWaitDetect();
            // called every 20 ms
            stateMachine();
        }
    }
}

The robot in action can be seen in the following video: http://www.youtube.com/watch?v=8n5_UJtGNxc

Source code for this exercise can be found here:
http://code.google.com/p/josemariakim/source/browse/#svn/trunk/lab10_1/src

Motivation functions


Referring to Thiemo Krink's motivation function in [2], how would it be possible to change the priority of the behaviors dynamically? When we take a closer look at the implementation of the Arbitrator in the leJOS API, we see that a separate thread (Monitor) is started that continuously calls takeControl and suppresses the currently active behavior when a higher-priority one takes over. The priorities are static, but this could be changed so that takeControl instead returns an integer value defined by the individual behaviors. This integer value could then be used to define the priority of the next behavior to be activated: the behavior with the highest value would be activated each time through the loop in the Arbitrator. If takeControl returns zero, nothing should happen.

Below is the code snippet of the Monitor thread that would have to be changed to select the dynamically prioritised behavior; a sketch of such a modification follows the snippet.

private class Monitor extends Thread
{
    .....
    synchronized (this)
    {
        _highestPriority = NONE;
        for (int i = maxPriority; i >= 0; i--)
        {
            if (_behavior[i].takeControl())
            {
                _highestPriority = i;
                break;
            }
        }
        int active = _active; // local copy: avoid out of bounds error in 134
        if (active != NONE && _highestPriority > active)
        {
            _behavior[active].suppress();
        }
    } // end synchronize block - main thread can run now
    Thread.yield();
    .....
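
As a rough sketch of such a modification - not part of the leJOS API - the behaviours could implement an alternative interface whose takeControl() returns an integer motivation value, and the loop above could then select the behaviour with the largest value instead of the first one that returns true. The interface name MotivationBehavior is our own.

// Assumed alternative interface: takeControl() returns a motivation value,
// where 0 means "do not activate".
interface MotivationBehavior
{
    int takeControl();
    void action();
    void suppress();
}

// Replacement for the priority loop inside the Monitor thread (sketch only):
int highestMotivation = 0;
_highestPriority = NONE;
for (int i = maxPriority; i >= 0; i--)
{
    int motivation = _behavior[i].takeControl();
    if (motivation > highestMotivation)
    {
        highestMotivation = motivation;
        _highestPriority = i; // the behaviour with the largest motivation wins
    }
}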

Conclusions

In this lab exercise we have worked with the concept of a behaviour-based architecture. We started by trying to understand how this approach is implemented in the lejOS operating system. We realized that the behaviour interface has changed since the first versions of lejOS, and we were missing some documentation and comments in the source code to understand its behaviour. Finally, we explained how the behaviour interface together with the Arbitrator could be modified to support dynamic behaviour priorities, based on Krink's motivation function [2]. Krink's ideas constitute an interesting link between computer science and biology that is very attractive for experimentation in the robotics field.

References


[1] The leJOS Tutorial, Behavior Programming

[2] Thiemo Krink (in prep.). Motivation Networks - A Biological Model for Autonomous Agent Control.