Friday, 26 November 2010

Lego Lab 11

The Final Project

Date: 25 November 2010
Duration: 4 hours
Participants: Kim Bjerge, Maria Soler, José Antonio Esparza


Goals of the lab session

At the end of the lab session we have chosen a project and have discussed at least two alternative projects. The lab report from this lab session contains:
  • list of projects considered with a short description of each, e.g. a description could be: "robot that can dry wet spots on the floor during a handball match".
  • for each project describe the hardware/software platform and software architecture of each component.
  • try to point out the most difficult problems to be solved in each project, e.g. for the floor cleaner robot it is difficult to figure out when to stop cleaning.
  • for each project describe what you would expect to be able to present at the end of the project period.


Project Ideas:


1. Use Jakob's paper [3] to let robots in a flock form and keep formations while moving, identifying their neighbours

The robots should move around in a formation defined in a computer and transferred via bluetooth to the robots.
The robots should negotiate who is the leader and then move around in the defined formation.
It should be visible which robot is the master, for example by playing a special tune or blinking some lights.
They could either move around by following a line on the floor or by following a route that is also transferred to the master robot. The non-master robots are not aware of the route to follow; they are only aware of the neighbour next to them.
As an optional feature, the robots could react to a loud sound by changing formation, changing route, or changing master of the flock.
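A first idea of how the PC tool could encode a formation for the master robot: each entry tells one robot which neighbour to keep track of (identified by colour), at which bearing and at which distance, roughly in the spirit of [3]. The Formation class and its field layout below are only our own sketch for the PC side, not an existing API; the master robot would read the same fields back from the Bluetooth stream with a DataInputStream.

// Sketch of the formation description built by the PC GUI and sent to the
// master robot. Each slot tells one robot which neighbour to track (by colour
// id), at which bearing relative to its own heading and at which distance.
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Formation {
    public static class Slot {
        final int neighbourColourId; // colour used to identify the neighbour
        final int bearingDeg;        // where the neighbour should be seen
        final int distanceCm;        // how far away it should be kept
        Slot(int neighbourColourId, int bearingDeg, int distanceCm) {
            this.neighbourColourId = neighbourColourId;
            this.bearingDeg = bearingDeg;
            this.distanceCm = distanceCm;
        }
    }

    private final List<Slot> slots = new ArrayList<Slot>();

    public void addSlot(int colourId, int bearingDeg, int distanceCm) {
        slots.add(new Slot(colourId, bearingDeg, distanceCm));
    }

    // Write the formation to an already opened Bluetooth data stream.
    public void writeTo(DataOutputStream out) throws IOException {
        out.writeInt(slots.size());
        for (Slot s : slots) {
            out.writeInt(s.neighbourColourId);
            out.writeInt(s.bearingDeg);
            out.writeInt(s.distanceCm);
        }
        out.flush();
    }
}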

Hardware/Software platform:
For this project at least 3 robots will be used (the minimum number for a meaningful formation), and a PC will be used for creating the formations and routes.

Robots (3 of them):
- Color sensor to check identity by color
- Ultrasonic sensor to calculate the distance to the neighbour
- (Optional) Sound sensor
- Bluetooth communication

Computer:
- Bluetooth communication
- GUI to create formations and routes

Challenge problem:
One of the challenges is the identification of the neighbours by color. It has to be tested how sensitive the color sensors are at larger distances (so far we have only used them at distances of up to about 5 cm).
The other challenge is to build the mechanical part that rotates the sensors to the right angle, and to write the software that makes it work.
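A minimal sketch of how the rotating sensor mount could be driven: a third motor sweeps the ultrasonic sensor across a range of angles, and the bearing with the shortest distance reading is taken as the direction of the neighbour. The motor port, sensor port and sweep range below are assumptions chosen for illustration.

import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

// Sketch: sweep the ultrasonic sensor, mounted on motor B, across a range of
// angles and return the bearing where the nearest object (presumably the
// neighbour) was seen. Ports and sweep range are assumptions.
public class SensorSweep {
    private final UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S1);

    public int findNeighbourBearing() {
        int bestBearing = 0;
        int bestDistance = 255; // 255 means nothing detected
        for (int angle = -90; angle <= 90; angle += 10) {
            Motor.B.rotateTo(angle);     // turn the sensor mount
            int d = sonar.getDistance(); // distance in cm
            if (d < bestDistance) {
                bestDistance = d;
                bestBearing = angle;
            }
        }
        Motor.B.rotateTo(0); // back to the straight-ahead position
        return bestBearing;
    }
}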

Figure 1: Initial sketch showing the flock of robots during the process of creating the formation.

Presentation:
It should be possible to present a flock of robots that, after receiving an order from the computer, negotiate who is the master, put themselves in formation and follow a route (either a line or a predefined route).

2. Train a robot to follow a path. The robot should be able to find its way back to the initial point. It should be able to avoid obstacles deployed in the environment after training

A robot will be given a certain path that it has to follow in order to reach the goal point. This is considered the learning stage, and this knowledge could be acquired by remote-controlling the robot from a PC while it stores the path, by retrieving a path file from another robot, or by receiving the path in real time from another device.
Once the robot has reached the goal area, it should be able to go back to the initial position where it started the movement. This is considered the second phase of the robot operation.
Between the first phase, learning, and the second phase, returning, the environment may change. These changes introduce a set of obstacles located along the path the robot initially followed.
The aim is that the robot should be able to arrive at the initial location while avoiding the obstacles.
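A simple way to keep the learned knowledge is an ordered list of (x, y) waypoints recorded during the learning phase; the returning phase can then follow the same list in reverse order, with obstacle avoidance modifying it along the way. The class below is only our own sketch of this bookkeeping; it deliberately leaves open how the poses are obtained and which leJOS navigation classes execute the moves.

import java.util.ArrayList;
import java.util.List;

// Sketch of the path bookkeeping for the learning and returning phases.
// How the (x, y) poses are obtained (odometry, a remote-control log, a file
// from another robot, ...) is left open; this class only stores and reverses them.
public class LearnedPath {
    public static class Waypoint {
        final float x, y; // position in cm, relative to the start point
        Waypoint(float x, float y) { this.x = x; this.y = y; }
    }

    private final List<Waypoint> waypoints = new ArrayList<Waypoint>();

    // Called periodically during the learning phase.
    public void record(float x, float y) {
        waypoints.add(new Waypoint(x, y));
    }

    // Path for the returning phase: the same waypoints, visited in reverse order.
    public List<Waypoint> returnPath() {
        List<Waypoint> reversed = new ArrayList<Waypoint>();
        for (int i = waypoints.size() - 1; i >= 0; i--) {
            reversed.add(waypoints.get(i));
        }
        return reversed;
    }
}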

Hardware/Software platform:
The required platform is similar to the one we have used in the labs during the first part of the course. All the elements listed below can be found in a single LEGO NXT educational pack like the ones available in the lab.
  • NXT brick.
  • Lego bricks in order to create the mechanical structure of the robot.
  • Ultrasonic sensor
  • Two NXT motors


Challenge problem:
Even though the problem statement may suggest a rather simple task, there are some issues that deserve extra consideration.
During the course we have learned about the different facilities provided by the leJOS API to perform navigation. In this project it will be necessary to evaluate them and assess their performance, paying special attention to the error introduced while the robot is moving. Using odometry to construct our own movement-reconstruction algorithm should be carefully considered.
As explained in the lab report [1], error detection and correction in path following plays a major role, since a small deviation can lead to serious imprecision if it is not corrected in time. This introduces the need for studying strategies to tackle the drifting problem (use of way-points, dual control, ...).
The kinematics of the robot are especially relevant in this case, and an in-depth analysis of the forward and inverse kinematics of the two-wheeled robot will be necessary.
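As a reference for this analysis, the sketch below shows the standard forward-kinematics (odometry) update for a differential drive robot, computed directly from the motor tacho counts. The wheel diameter, track width and motor ports are assumptions that would have to be measured on and matched to our own construction.

import lejos.nxt.Motor;

// Sketch of a dead-reckoning (odometry) update based on the standard forward
// kinematics of a differential drive robot. WHEEL_DIAMETER, TRACK_WIDTH and the
// motor ports are assumptions that must be matched to the real robot.
public class Odometry {
    private static final double WHEEL_DIAMETER = 5.6;  // cm
    private static final double TRACK_WIDTH    = 12.0; // cm between the wheels

    private double x, y, heading; // pose estimate (cm, cm, radians)
    private int lastLeftTacho, lastRightTacho;

    public void update() {
        int leftTacho  = Motor.C.getTachoCount();
        int rightTacho = Motor.B.getTachoCount();

        // Distance travelled by each wheel since the last update.
        double dLeft  = Math.PI * WHEEL_DIAMETER * (leftTacho  - lastLeftTacho)  / 360.0;
        double dRight = Math.PI * WHEEL_DIAMETER * (rightTacho - lastRightTacho) / 360.0;
        lastLeftTacho  = leftTacho;
        lastRightTacho = rightTacho;

        // Forward kinematics: translation of the wheel-base centre and heading change.
        double dCentre  = (dLeft + dRight) / 2.0;
        double dHeading = (dRight - dLeft) / TRACK_WIDTH;

        x += dCentre * Math.cos(heading + dHeading / 2.0);
        y += dCentre * Math.sin(heading + dHeading / 2.0);
        heading += dHeading;
    }

    public double getX() { return x; }
    public double getY() { return y; }
    public double getHeading() { return heading; }
}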
As discussed in [1], the construction of the robot platform should be done carefully, since simple elements like the free wheel may introduce errors due to friction or to different positions at the beginning of the movement.
As in any robotics project, the control strategy should be studied in order to achieve good performance, behaviour organization and error minimisation. Questions like "Should the agent be purely stimulus-response based?" or "Should the agent have memory to keep track of previous states?" will arise during the design process.
One of the most challenging parts will be the implementation of the behaviour once an object has been discovered. A predefined route may be used, or a new one could be constructed depending on the environmental conditions. While the second strategy is clearly more flexible, it is more complex to implement. It should also be considered how the robot should react if it detects an object while it is already avoiding one.
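One possible way to organise this reaction is the behaviour/arbitrator style used earlier in the course: a path-following behaviour that is normally active, and an avoidance behaviour that takes control whenever the ultrasonic sensor reports something close. The sketch below uses the leJOS subsumption classes; the distance threshold, the sensor port and the actual manoeuvres are placeholders.

import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;
import lejos.robotics.subsumption.Arbitrator;
import lejos.robotics.subsumption.Behavior;

// Sketch of a two-level behaviour setup for the returning phase: AvoidObstacle
// suppresses FollowPath whenever the ultrasonic sensor sees something closer
// than THRESHOLD_CM.
public class ReturnTrip {
    private static final int THRESHOLD_CM = 25; // assumption, must be tuned

    public static void main(String[] args) {
        final UltrasonicSensor sonar = new UltrasonicSensor(SensorPort.S1);

        Behavior followPath = new Behavior() {
            private boolean suppressed;
            public boolean takeControl() { return true; } // lowest priority, always willing
            public void action() {
                suppressed = false;
                while (!suppressed) {
                    // drive towards the next stored waypoint (omitted)
                    Thread.yield();
                }
            }
            public void suppress() { suppressed = true; }
        };

        Behavior avoidObstacle = new Behavior() {
            public boolean takeControl() { return sonar.getDistance() < THRESHOLD_CM; }
            public void action() {
                // back off and steer around the obstacle (omitted)
            }
            public void suppress() { }
        };

        // In the leJOS Arbitrator the last behaviour in the array has the highest priority.
        new Arbitrator(new Behavior[] { followPath, avoidObstacle }).start();
    }
}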

Presentation:
In order to present the project, a terrain of 4 square meters should be used. The terrain will change between the phases presented above: while in the first phase the terrain should be clear, in the second phase several objects should be deployed along the followed track. The size of the objects should be at least comparable to the size of the robot.

Figure 2: Initial sketch showing the robot operation.

3. Let two robot cars with different behaviours collaborate on solving a specific task. The task would be for the first robot car to find a certain object and for a second robot to carry and transport the object to a predefined location


1. Robot - searches for the object and communicates its coordinates to the second robot
2. Robot - picks up the object and transports it to a certain location

The first robot is equipped with sensors to find and identify different objects. The object could be a colored block. The robot searches for the colored block in a restricted area marked with a black square. The robot car is equipped with a light sensor used to limit the search to the restricted arena, where several blocks of different colors are located. An ultrasonic distance sensor and a color sensor are used to find the right colored block.

The second robot must have a mechanical construction to pick up the object. When the block is found, the first robot transmits the coordinates of the found block to the second robot car over a wireless Bluetooth connection. The second car is equipped with a mechanical construction to carry the block away. The block is transported to a location controlled by a central remote PC, from which the drop-off coordinates are transmitted. The second robot car navigates to this location, drops off the block and waits for the next block to be found.
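A first sketch of the coordinate hand-over between the two cars could use the leJOS Bluetooth data streams: robot 2 waits for an incoming connection and reads an (x, y) pair, while robot 1 looks up robot 2 as a known (paired) device and writes the coordinates of the block it has found. The device name "ROBOT2", the use of integer centimetre coordinates and the message layout are our own assumptions.

Robot 2 (receiver) side:

import java.io.DataInputStream;
import lejos.nxt.comm.Bluetooth;
import lejos.nxt.comm.NXTConnection;

// Wait for robot 1 to connect and read the coordinates of the found block.
public class ReceiveBlockPosition {
    public static void main(String[] args) throws Exception {
        NXTConnection conn = Bluetooth.waitForConnection();
        DataInputStream in = conn.openDataInputStream();
        int blockX = in.readInt(); // assumed message: x, y in cm
        int blockY = in.readInt();
        in.close();
        conn.close();
        // ... navigate to (blockX, blockY) and pick up the block
    }
}

Robot 1 (sender) side, called when the block has been found:

import java.io.DataOutputStream;
import javax.bluetooth.RemoteDevice;
import lejos.nxt.comm.BTConnection;
import lejos.nxt.comm.Bluetooth;

// Connect to robot 2 by its Bluetooth name and send the block coordinates.
public class SendBlockPosition {
    public static void sendPosition(int x, int y) throws Exception {
        RemoteDevice robot2 = Bluetooth.getKnownDevice("ROBOT2"); // assumed name, must be paired
        if (robot2 == null) return; // robot 2 not in the known-devices list
        BTConnection conn = Bluetooth.connect(robot2);
        DataOutputStream out = conn.openDataOutputStream();
        out.writeInt(x); // assumed message: x, y in cm
        out.writeInt(y);
        out.flush();
        out.close();
        conn.close();
    }
}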

Hardware/Software platform:
The hardware platform will be 2 different robot cars running different programs on their NXT computers. They communicate using Bluetooth in a peer-to-peer setup. A central remote PC transmits drop-off coordinates to the second robot car in a client-server architecture.

1. Robot car:
NXT Computer, Ultrasonic sensor, Color sensor, light sensor, Motors with tacho for localization. Bluetooth for peer-to-peer communication with the second robot.

2. Robot car:
NXT Computer, Ultrasonic sensor, light sensor, Motors with tacho for localization.
Bluetooth for peer-to-peer communication with the first robot.
Mechanical construction for pickup of object.

Challenge problem:
One of the challenges will be to make a stable mechanical construction to pick up the colored block, given the limitations of the LEGO parts. Coordinating the pick-up of the colored block and the hand-over to the second car will also be a challenge, since it requires coordinating the movement and positioning of the two cars in relation to each other.

Figure 3: Initial sketch showing how an object is found and picked up by the collector robot.

Presentation:
Present the scenario where the first robot car finds a colored block and calls for the second car, which carries the block to a drop-off location.

Selection of the project


Why not project #1
Making the robots move in a flock, and especially making them follow the leader at a certain angle using the ultrasonic sensor, would be hard to achieve both mechanically and sensor-wise. The project covers many topics from the course, but we have chosen not to do it due to the limitations of the LEGO sensors. Some of the other project suggestions contain more interesting topics from the course that we would like to work on.

Why not project #2
This project would be harder for several people to work on at the same time. It covers many interesting challenges, but it is very hard to do, especially the localization when turning. Here we would perhaps need to improve the leJOS API by implementing the forward and inverse kinematics [4]. We have chosen not to do this project mainly because of the difficulty of working on it in parallel.

Why project #3

This project covers many topics touched on during the lessons, such as localisation, navigation, communication, different architectures, line following and sensors. That gives us a chance to understand these topics better and study them in more depth.

This project requires working at different abstraction levels: at the low level with sensors and purely reactive control, and at the high level with more complex architectures. This gives us the opportunity to work on most of the concepts introduced in the course.

The work can be divided easily because we have two different robots with two different architectures, plus the communication between them. This makes it possible to work in parallel and also to work iteratively, so we always have a working system and just add or improve functionality in each iteration.


Project plan
We have chosen to use iterative development, using Scrum [5] as an inspiration, as we have worked with it before. Each iteration should produce a working system that can be presented. That gives us the chance to start with a basic system and improve it step by step, adding a bit of functionality at a time.
We will keep track of when we meet and of the duration of the meetings for later reference.

The prioritized list below contains the work breakdown structure of tasks that need to be done in the project. The project is then divided into 3 milestones or sprints (using Scrum terminology), at which the project should be in a deliverable state.

WBS:
1.1- Object construction
1.1- Robot1 construction
1.1- Robot1 SW architecture of functional behavior
1.2- Arena construction
1.2- Find and locate an object (R1)
1.2- Coordinate representation (R1 + R2 + PC)
1.3- Identify object (R1)
2- Robot2 construction
2- Pick up object (magnet actuator)  (R2)
2- Robot2 SW architecture of functional behavior
3- Go to a specified location (R2)
4- Drop object (R2)
5- Bluetooth communication between robots
5- Send coordinates to the other robot (R1, R2, PC)
6 - Bluetooth communication between PC and robot
6- Send coordinates from PC to R2 (PC)

Milestone goals:
Sprint #1
1 - Robot 1 is able to find and locate object
2 - Robot 2 is able to pick up an object
(Before Christmas)

Sprint #2
3 - Robot 2 is able to navigate to the location of the object
4 - Robot 2 is able to carry the object away and drop it off
5 - Robot 1 communicates to Robot 2 the coordinates where the object can be found
(First week in the New Year)

Sprint #3
6 - PC instructs robot 2 where to drop off object
-   Prepare presentation
(Last week presenting)

Conclusion

In this lab session we have suggested 3 projects that each cover different topics from the course. We have decided on the project that we think is feasible and will cover most of the different topics learned in the course. Its focus will be on autonomous agents cooperating to achieve a certain goal. Subtopics of the project include: sensors, actuators, localization, navigation, communication, subsumption architectures, and sequential and reactive behaviours. The task of the project is for the first robot car to find a certain object and for a second robot to carry and transport the object to a predefined location. We have presented a draft project plan with 3 main milestones that will be our guide for how to achieve the goals of the project.

References

[1] Lego lab session 9: Navigation. http://josemariakim.blogspot.com/2010/11/lego-lab-lesson-9-navigation.html
[2] http://en.wikipedia.org/wiki/Scrum_(development)
[3] Jakob Fredslund and Maja J Matarić, "A General, Local Algorithm for Robot Formations", IEEE Transactions on Robotics and Automation, special issue on Advances in Multi-Robot Systems, 18(5), Oct 2002, 837-846.
[4] Thomas Hellstrom, Forward Kinematics for the Khepera Robot
[5] Scrum definition http://en.wikipedia.org/wiki/Scrum_(development)
