Alan Davidson, Mac Mason, Susanna Ricco, and Ben Tribelhorn
Page for the second half of the project
Mobile Robot Competition
We are entering this competition in July in Pittsburgh.
Introduction
We are using the Evolution platform to create a robot ("Twitchy") to
participate in a scavenger hunt. It will need to navigate through an
environment filled with dynamic obstacles. Twitchy will need to find and
acquire certain items around a hotel, and we will use AI techniques to map the
hotel as it goes. Our goal is to accomplish this with a combination of various
sensors and video, and we hope the work will break new ground in the field of AI.
Background
What makes this problem particularly difficult is that Twitchy will need to
map a changing environment. We will need a way to distinguish between
people milling around and walls, and only map the walls.
Approach
We plan on first creating a system which simply maps a static environment,
without any moving people. We will then adapt this system to take into
account people and other moving obstacles.
A timetable for our work is as follows:
- Week 1: Build the Evolution platform, and familiarize
ourselves with its capabilities.
- Week 2: Test odometric precision.
- Week 3: Use video to detect red and write a client.
- Weeks 4 & 5: Use sonar, implement a wandering behavior, and begin mapping and localization.
Progress - Pictures & Movies for Weeks 2-4
- Week 1: We don't have a clue what we did during Week 1. We
might have picked a project. We really don't remember.
- Week 2: We have succeeded at making Twitchy move and turn. We also tested the
odometry and found that there is considerable uncertainty in its positional awareness.
- Week 3: This week we installed a servomotor and took data for an error analysis
of the odometry. We found about 4% error in position (roughly 40 cm of drift over
10 m of travel), which of course compounds as Twitchy moves.
- Week 4: This week, we finally got the servomotor to work (Professor Dodds
forgot to give us a file, and then our batteries were dead). It now rotates to
exactly where we want it to.
We also created the basics of a new client for Twitchy that relays keystrokes
immediately, without waiting for "enter." This Python program implements an
emergency-stop ("oh shit!") button as well as commands for moving by small
increments or continuously.
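For reference, here is a minimal sketch of what such a keypress client can look like in Python. The robot command names (stop, translate, rotate, translate_forever) and the step sizes are placeholders for whatever the Evolution interface actually exposes, not our real code.

    import sys
    import termios
    import tty

    def read_key():
        """Read a single keypress from stdin without waiting for enter."""
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)      # character-at-a-time input
            key = sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
        return key

    def remote_loop(robot):
        """Dispatch keys to (hypothetical) robot commands until 'q' is pressed."""
        while True:
            key = read_key()
            if key == ' ':
                robot.stop()                 # the emergency-stop button
            elif key == 'w':
                robot.translate(0.1)         # small forward step (meters)
            elif key == 'W':
                robot.translate_forever()    # continuous motion until stopped
            elif key == 's':
                robot.translate(-0.1)        # small backward step
            elif key == 'a':
                robot.rotate(15)             # small turn (degrees)
            elif key == 'd':
                robot.rotate(-15)
            elif key == 'q':
                robot.stop()
                break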
Finally, we tried getting the camera to work. At first the program locked up
when we used it (we know that it has trouble undistorting the image; it might
have trouble elsewhere too). We created a method to convert RGB colors to HSV
colors (sketched below). We eventually found a way to make the camera work,
though it requires running two instances of our videoTool; switching to a
different (faster) laptop made that workable. We hand-tuned our definition of
"red" as seen by the Evolution camera, and Twitchy now has a very good idea of
how to identify the characteristic red paneling in the Olin underground.
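The conversion itself is the standard RGB-to-HSV formula. A sketch, written in Python for readability even though our videoTool is C++:

    def rgb_to_hsv(r, g, b):
        """Convert r, g, b in [0, 255] to (h, s, v), with h in degrees [0, 360)."""
        r, g, b = r / 255.0, g / 255.0, b / 255.0
        mx, mn = max(r, g, b), min(r, g, b)
        delta = mx - mn
        if delta == 0:
            h = 0.0                               # a gray pixel has no hue
        elif mx == r:
            h = (60 * ((g - b) / delta)) % 360
        elif mx == g:
            h = 60 * ((b - r) / delta) + 120
        else:
            h = 60 * ((r - g) / delta) + 240
        if mx == 0:
            s = 0.0
        else:
            s = delta / mx                        # saturation
        return h, s, mx                           # value is just the max channel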
Red detection is basically a set of HSV and RGB ranges that we check. We find a
line representing the red paneling by applying least squares to the
y-coordinates of the red pixels we detect (as a function of their
x-coordinates). Because the fit is dominated by the bulk of the data, a small
amount of mis-detected red (e.g. a fire alarm) won't noticeably affect our wall
detection. The algorithm we use for guessing whether Twitchy is facing a wall
uses the slope of this line, as sketched below.
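A rough sketch of the detection and line-fitting steps; the color thresholds and the slope cutoff shown here are illustrative placeholders, not our actual hand-tuned values:

    def is_red(h, s, v):
        """Placeholder color check; our real ranges were tuned by hand."""
        return (h <= 20 or h >= 340) and s > 0.4 and v > 0.2

    def fit_line(points):
        """Least-squares fit of y = m*x + b to a list of (x, y) pixel coordinates."""
        n = float(len(points))
        sx = sum(x for x, y in points)
        sy = sum(y for x, y in points)
        sxx = sum(x * x for x, y in points)
        sxy = sum(x * y for x, y in points)
        denom = n * sxx - sx * sx
        if denom == 0:
            return None, None          # all pixels in one column; no finite slope
        m = (n * sxy - sx * sy) / denom
        b = (sy - m * sx) / n
        return m, b

    def facing_wall(red_pixels, slope_cutoff=0.1):
        """Guess that a roughly horizontal red line means Twitchy faces the wall."""
        m, b = fit_line(red_pixels)
        return m is not None and abs(m) < slope_cutoff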
VideoTool.cpp
- Week 5: We attached the sonar unit provided by Professor Dodds (it is now
mounted on the servomotor). We used masking tape and twine to keep the
breadboard and battery pack from falling off; the servomotor is attached with
velcro. We had to move the camera a bit lower so that the sonar can see past it
on the left. We now have the IR bump sensors operating, with two hand-calibrated
cutoffs: one for the sides and one for the front.
Here is our wandering code (part of our remote
client) and the finite state machine. The basic
structure of Twitchy's autonomous wandering strategy is to drive forward until
the IR sensors detect a bump, then back up, turn, and continue; head-on bumps
cause her to turn at least 90 degrees. We tested the autonomous wandering quite
a bit. Using the mapping code, the odometry usually placed Twitchy within the
walls of the Libra complex. Our code picks a random turn amount, so that given
time Twitchy should be able to get herself unstuck. Based on our tests, Twitchy
covers a wide portion of a hallway environment because she zigzags across the
hall. So far we are not using any calibration for the sonar. We have added the
sonar to our remote client as well. A sketch of the wandering behavior follows.
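In outline, the behavior looks something like the sketch below; the sensor and motion calls (ir_left, ir_right, translate, rotate, stop) and the turn ranges are stand-ins, not our actual client code.

    import random

    def wander(robot):
        """Drive forward until the IR sensors fire, then back up, turn, and repeat."""
        while True:
            left, right = robot.ir_left(), robot.ir_right()
            if not (left or right):
                robot.translate(0.2)       # path is clear: keep creeping forward
                continue
            robot.stop()
            robot.translate(-0.2)          # back away from the obstacle
            if left and right:
                angle = 90 + random.randint(0, 90)   # head-on bump: at least 90 degrees
            else:
                angle = random.randint(30, 90)       # glancing bump: smaller random turn
            if left:
                angle = -angle             # turn away from the side that bumped
            robot.rotate(angle)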
- Week 5++: With working sonar, IR bump sensors, and red detection,
we've moved on to implementing Monte Carlo Localization. As the robot
moves, the sonar does a three-point scanning routine (forwards, left,
forwards, right, forwards, etc.). Should the robot ever detect a wall
(either with the sonar or with the IR sensors), it stops. Before it
continues the wandering behavior from week 4, it does a complete sonar
sweep (taking a total of between one and three values) and updates the MCL
particles accordingly.
Our MCL maintains a list of 500 particles. Every time the robot moves, the
remote control code sends the requisite translate command to the MapTool. From
the resulting absolute pose and the stored previous pose, MapTool calculates
delta_x, delta_y, and delta_theta. From this information, we calculate two
things (a code sketch follows the list):
- The distance travelled (using the Pythagorean theorem). We need this
  because every particle needs to move roughly the same distance as the
  robot did, but in whatever direction that particle is facing. Once we've
  calculated the direction (see item 2), we determine the x and y
  translations that will result in a movement of the correct distance in
  that direction. We then add a random uncertainty of between 0 and 10% to
  x, y, and theta, and move the particle by the resulting amounts.
- The direction the robot went. The robot is assumed to always move
  either forwards or backwards. (Implementing MCL on a holonomic robot
  would be a pain!) However, we don't know which. We calculate this by
  splitting the plane up into "North", "South", "East", and "West"
  (arranged like the compass points, not the dorms) and using the
  robot's theta-value and deltas to determine which direction it is going.
  The reasoning here is that the robot is often told to go backwards, and
  the particles need to know what to do in that case.
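A sketch of this motion update, assuming each particle carries x, y, and theta (radians) fields. It uses atan2 in place of the compass-quadrant bookkeeping described above, but makes the same forwards-versus-backwards decision.

    import math
    import random

    def move_particles(particles, dx, dy, dtheta, robot_theta):
        """Move every particle by the robot's distance, along the particle's own heading."""
        dist = math.hypot(dx, dy)               # the Pythagorean theorem
        # Decide forwards vs. backwards by comparing the direction of travel
        # with the robot's heading.
        travel = math.atan2(dy, dx)
        diff = abs((travel - robot_theta + math.pi) % (2 * math.pi) - math.pi)
        if diff > math.pi / 2:
            dist = -dist                        # the robot was backing up
        for p in particles:
            noise = lambda: 1.0 + random.uniform(0.0, 0.10)   # 0-10% uncertainty
            p.x += dist * math.cos(p.theta) * noise()
            p.y += dist * math.sin(p.theta) * noise()
            p.theta += dtheta * noise()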
After moving the particles, we update the probability of each particle
using data from a sonar reading. We use the famous
"Ricco TeePee" probability model. For each particle, we determine what
that particle thinks the sonar reading should be (based on the map). We
then calculate x = abs(expected value - actual value). If this is
within some threshold, then the probability of that particle becomes
1 - (x / threshold); otherwise, it becomes small (0.001).
We then normalize the values so the sum of the particles' probabilities
adds to 1.
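In code, the weighting step looks roughly like this; the threshold value is illustrative, and expected_reading stands in for the ray-cast from a particle's pose into the map.

    THRESHOLD = 0.5      # meters; an illustrative value, not our tuned one

    def weight_particles(particles, sonar_reading, expected_reading):
        """Assign each particle a 'TeePee' probability, then normalize to sum to 1."""
        for p in particles:
            x = abs(expected_reading(p) - sonar_reading)
            if x < THRESHOLD:
                p.prob = 1.0 - (x / THRESHOLD)
            else:
                p.prob = 0.001               # far-off particles get a small floor
        total = sum(p.prob for p in particles)
        for p in particles:
            p.prob /= total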
After the updates, we re-sample the particles. We do this by sorting the
list of particles by their probabilities (higher probabilities first) and
then picking a random number between 0 and 1. (Recall that the sum of all
the probabilities is 1.) If the random number is between zero and the
probability of the first particle, we add a new particle at the first
particle's pose; if it's between the probability of the first particle and
the sum of the first and second probabilities, we add a particle at the
second particle's pose; and so on. We do this 500 times, and then remove
the old particles from the list. Note that the probabilities of these new
particles don't matter; they will be set based on the next sonar reading.
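A sketch of that resampling step, with the same particle fields assumed as above:

    import copy
    import random

    def resample(particles, n=500):
        """Draw n new particles with probability proportional to the old weights."""
        particles = sorted(particles, key=lambda p: p.prob, reverse=True)
        new_particles = []
        for _ in range(n):
            r = random.random()              # the weights already sum to 1
            cumulative = 0.0
            chosen = particles[-1]           # fallback guards against rounding error
            for p in particles:
                cumulative += p.prob
                if r <= cumulative:
                    chosen = p
                    break
            new_particles.append(copy.copy(chosen))
        return new_particles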
Plans for the near future include adding a timer to the wandering system
so that it never travels too far (say, down a long, straight hallway)
without stopping to take a reading. This will cut down on the particle
spread that arises from such motions and therefore decrease the
uncertainty in our position estimate.
The most useful thing we've done this week is to write a simple script
entitled master.py. All this does is launch all
seventeen-and-a-half programs that need to be running for our robot to
work correctly. Needless to say, this program is awesome. (It even uses a
brand-new feature of Python 2.4, which makes it even more awesome.)
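For the curious, a launcher along the lines of master.py might look roughly like the sketch below, assuming the brand-new Python 2.4 feature in question is the subprocess module; the program list is a placeholder, not the real seventeen and a half.

    import os
    import signal
    import subprocess

    PROGRAMS = [                      # placeholder commands, not our real list
        ["./videoTool"],
        ["./MapTool"],
        ["python", "remoteClient.py"],
    ]

    def launch_all():
        """Start every process, then wait; Ctrl-C tears the whole set down."""
        children = [subprocess.Popen(cmd) for cmd in PROGRAMS]
        try:
            for child in children:
                child.wait()
        except KeyboardInterrupt:
            for child in children:
                os.kill(child.pid, signal.SIGTERM)

    if __name__ == "__main__":
        launch_all()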
The most substantive change for this week was renaming the robot from the
firmly 13-year-old girl model of "Kelsey" to the slightly more descriptive
"Twitchy".
References:
- Achieving Artificial Intelligence Through Building Robots,
Rodney Brooks
- Experiments in automatic flock control, R. Vaughan, N. Sumpter,
A. Frost, & S. Cameron.
- The Polly System, Ian Horswill.
- PolyBot: a Modular Reconfigurable Robot, M. Yim, D.G. Duff, and
K.D. Roufas.
- Robots, After All, Hans Moravec.
- Robotic Evidence Grids, M. Martin, H. Moravec.
- Dervish: An office-navigating robot, I. Nourbakhsh, R. Powers,
S. Birchfield, AI Magazine, vol 16 (1995), pp. 53-60.
- Monte Carlo Localization: Efficient Position Estimation for Mobile
Robots, D. Fox, W. Burgard, F. Dellaert, S. Thrun, Proceedings of
the 16th AAAI (1999), pp. 343-349, Orlando, FL.
- Robotic Mapping: A Survey, S. Thrun.
- If at First You Don't Succeed..., K. Toyama and G. D. Hager.
- Distributed Algorithms for Dispersion in Indoor Environments using a
Swarm of Autonomous Mobile Robots, J. McLurkin and J. Smith.