- Introduction
The purpose of this project is to familiarize ourselves with the
Rug Warrior robot and its capabilities by programming the robot to
perform a variety of tasks utilizing its sensors
(touch, light, IR, sound) and motor controls. Once we have gauged the
capabilities of the Rug Warrior, we wish to program the robot to
simply follow the edge of a wall (through the use of its touch and IR
sensors) for a finite distance.
- Approach
Our initial approach is to familiarize ourselves with the
Rug Warrior's programming environment (Interactive C). From there,
we intend to write a series of simple programs that will
allow us to assess the extent of the robot's ability to interact with
its physical environment. To do so, we will test the robot's:
- motors -- can we easily get it to move in a straight
line?
- encoders -- if we want the Rug Warrior to move
forward 12 inches, will it do so accurately?
- touch sensors -- how sensitive are they, and how well do
they work in combination (there are three, spaced evenly around the
robot's circumference)?
- light sensors, IR sensor, sound sensor --
again, just how sensitive are they? (A rough sensor-polling sketch
appears after this list.)
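The sketch below polls a few sensors and prints their values to the
LCD. The analog() call and printf are standard Interactive C; the port
numbers are placeholder assumptions, not the Rug Warrior's actual
wiring, and would have to be checked against the manual:

    /* Minimal sensor-polling sketch (Interactive C).  The port
       numbers are placeholder assumptions -- the real assignments
       must be confirmed against the Rug Warrior manual. */
    void main()
    {
        while (1) {
            printf("L=%d R=%d IR=%d mic=%d\n",
                   analog(0),   /* left photocell (assumed port)  */
                   analog(1),   /* right photocell (assumed port) */
                   analog(2),   /* IR detector (assumed port)     */
                   analog(3));  /* microphone (assumed port)      */
            sleep(0.5);         /* refresh twice per second */
        }
    }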
Once we have become familiar with the Rug Warrior and its capabilities,
we intend to write a program in Interactive C that will enable the
robot to follow walls in an indoor environment through the use of
the robot's sensors (primarily touch and IR).
- Progress
Week 1
We began by familiarizing ourselves with the Rug Warrior. We came
across a large stumbling block when we were unable to locate the cable
that connects the robot to a computer. We could not find another
identical cable in the lab because the cable we needed was unusual --
one end of it is a standard telephone line connector, and the other
connects to a serial port. We ended up soldering our own cable
according to the schematic in the manual. We did this by first taking
the female end of a serial connector and soldering wires to six of
its pins. We braided three of those together, then joined the
resulting four wires to an exposed end of RJ11 cable, as shown in
Figure 1.
Having to build this cable ourselves impeded our progress in another
way: we could not use the Interactive C software at all until the
program recognized the connection between the PC and our Rug Warrior.

Figure 1 -- Our first attempt at the Rug Warrior cable
At this point, we do not know for sure whether or not our cable is
functional. It checks out fine on a voltmeter, and when we try to send
information from the PC to the robot it appears to be transmitting, as
indicated by a data LED on the robot, yet the information is not
getting processed on either end. Our current plan is to go to
RadioShack, build a cable that we can have more confidence in, and go
from there.
Week 2
As planned, we purchased a DB9 to RJ11 (both female) adaptor from
RadioShack and attempted to create another cable to carry data from the
computer to the Rug Warrior (see Figure 2). After a night's work
getting all the wiring set up, we were still unable to get a proper
connection working. At that point, we were ready to give up on the
project entirely and move on to something else. Thankfully, Professor
Dodds and Daniel Lowd took a look at our cable and attempted to
download the Rug Warrior operating system code (termed "pcode" by the
manufacturer) onto the robot. It turned out our cable had worked after
all; we simply hadn't downloaded the pcode properly. With that, we were
finally free to run code on the Rug Warrior.

Figure 2 -- Our second attempt at the Rug Warrior cable
Because we got the cable working so late in the week, we had a limited
amount of time to familiarize ourselves with the operation of the
Rug Warrior through the use of Interactive C programs. We began by
running through several of the diagnostic and demonstration programs
provided with the Rug Warrior's Interactive C package. After
downloading and running each of the programs, we were able to verify
that each of the robot's sensors functions properly. This also
allowed us to gain a cursory understanding of how to read each of the
robot's sensors through the Interactive C code.
After running through each of the diagnostic programs, we learned
firsthand of the Rug Warrior's limited capabilities. For example,
when the yo-yo.c code is uploaded onto the robot, the Rug Warrior is
programmed to move forward a set distance when its rear touch sensors
are bumped and then return to its original place.
However, the robot seems to have trouble moving in a straight line
because the motors controlling the left and right wheels are not
exactly the same. As a result, the robot tends to move in an arc
rather than a straight line, and it doesn't always return to the same
spot because of these inconsistencies in the motor control.
Our goal for the next week of the lab is first to find a way to make
the robot move in a true straight line. We will then perform further
tests on the robot's touch and IR sensors since these will be utilized
in the actual wall-following behavior. The next logical step will
be to write the wall-following code and test it on the Rug Warrior.
Week 3
Due to the delays in our project caused by the malfunctioning cable, we
scaled back our goal for Week 3. Instead of getting the robot to map
out its environment and return home, our objective was simply to get
our Rug Warrior to follow walls. The first problem we encountered
dealt with the robot's power system. The batteries were low, so we
had to purchase more. When we replaced the batteries, we noticed that
part of
the battery holder had broken off, and while the coil that connected
the robot to the battery was still there, it did not have a "back" to
push against, causing the connection to be interrupted if we were too
rough with the robot (for instance, dropping the robot from more than
one inch above the ground).
The far more persistent problem that we ran into dealt with
the Rug Warrior's motors. As mentioned previously, the two wheels, by
default, do not move with the same speed. We found out that this is
due to the wiring of the robot, which gives each wheel a slightly
different amount of power. One of the files provided to us allows the
user to modify this "drive bias," and on Wednesday in the lab we got
the robot to move in a straight line on the lab's rug floor.
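In effect, the adjustment trims power to one wheel. A minimal sketch
of the idea; we are not reproducing the library's actual interface
here, so the Handy-Board-style motor() call and the wheel indices are
illustrative:

    /* Illustrative drive-bias compensation.  motor() and the wheel
       indices are assumptions, not the Rug Warrior library's actual
       interface. */
    int DRIVE_BIAS = 3;  /* found empirically; surface-dependent */

    void forward(int power)
    {
        motor(0, power);               /* left wheel (assumed index) */
        motor(1, power - DRIVE_BIAS);  /* right wheel, trimmed down  */
    }

As described below, the right value of the bias constant turned out to
depend on the floor surface.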
Inspired by the yo-yo program given to us, we devised a design idea for
following walls. Using a "left wall" by default, we would start the
robot travelling parallel to the wall. If the robot did not bump into
a wall after a specified amount of time (say, five seconds), we would
adjust the robot's path so that it aimed closer to the wall. This
accounts for the potential problem of a one-walled universe where the
robot could travel away from the wall forever. Since the robot will
eventually crash into a wall, we will have it back out, turn a little
bit away from the wall, and then continue forward again.
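In rough Interactive C terms, the plan looks something like the loop
below. seconds() is standard Interactive C; bumped() and the motion
helpers are placeholders for library calls and tuning we had not yet
worked out:

    /* Sketch of the initial wall-following plan.  bumped() and the
       motion helpers are placeholders, and 5.0 seconds is a guess. */
    void follow_left_wall()
    {
        float start;
        while (1) {
            start = seconds();
            start_moving_forward();
            /* drive until we hit something or the timeout expires */
            while (!bumped() && (seconds() - start) < 5.0)
                ;
            if (bumped()) {
                back_up();            /* reverse off the wall          */
                turn_away_a_little(); /* small rotation away from wall */
            } else {
                turn_toward_wall();   /* timed out: aim back at wall   */
            }
        }
    }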
Once again, we found that this was not as easy as it seemed. First,
we noticed that the drive bias required on the tile floor in the
hallway was different from the drive bias on a rug. This had a
simple solution, but then we observed that the motors were causing
the problem, not the bias. We wrote a program that made the Rug
Warrior drive forward, sleep for a short while, and then drive
forward again, which resulted in the robot travelling in mostly
random arcs with a clockwise bias. On examination, we saw that a
circular plastic piece, which sits on the inside of each wheel and
connects it to the axle, had come detached. On the second Rug Warrior,
which we had not yet tested, the piece was still adhered to the wheel.
We theorized that this
piece was somehow getting stuck on one of the wheels, causing the
strange bias.
Since this seemed to be an intractable problem, we tried our code on
the second Rug Warrior. We found that this one performed much more
predictably, and after steadying the battery connection and removing
hair from the wheels, we got the robot to follow walls. The algorithm
we used was essentially the same as the one described above: after
two movement segments, if the robot had not collided with a wall, it
adjusted its direction to point toward the left wall. We did run
into the problem, however, of the robot getting stuck in corners.
- Perspective
Overall, our system performed as desired. In the end, we were able to
program the Rug Warrior to follow the edge of a straight wall through
the use of its touch sensors alone.
Our basic wall-following algorithm uses two main processes. The first
process, the wall-sensing process, waits for input from the
Rug Warrior's touch sensors. When the sensors are activated (i.e., a
wall is hit), the wall-sensing process sends a signal to our second
process, the motion-control process. The motion-control process
incrementally moves the robot forward until a touch
sensor is activated. At this point, the robot reverses direction
and rotates away from the wall it hit. Additionally, if the robot has
not encountered a wall for a specified period of time, it
adjusts its heading toward the last wall it encountered. Our
architecture is loosely based on Rodney Brooks's subsumption model.
In AI through Building Robots, Brooks explains the basic idea of
subsumption:
"Each behavior should achieve some task. I.e., there
should besome observable phenomenon in the total behavior of the
system which can be used by an outside observer to say whether the
particular sub-behavior is operating successfully" (Brooks).
Our algorithm has two clear sub-behaviors: one is avoiding obstacles;
the other is maintaining a path that is roughly parallel to the wall.
Each of these can easily be seen by someone observing the Rug Warrior
in action.
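In skeleton form, the two processes look roughly like this.
start_process() and msleep() are standard Interactive C; bumper() and
the motion helpers are stand-ins for the Rug Warrior library routines
and our own code:

    /* Skeleton of our two-process design.  bumper() and the motion
       helpers are illustrative stand-ins for library calls. */
    int hit = 0;  /* shared flag: set by sensing, cleared by motion */

    void watch_wall()  /* wall-sensing process */
    {
        while (1) {
            if (bumper() != 0)  /* any touch sensor closed? */
                hit = 1;        /* signal the motion process */
            msleep(20L);
        }
    }

    void move_along()  /* motion-control process */
    {
        int quiet = 0;  /* movement segments since last contact */
        while (1) {
            if (hit) {
                back_and_turn_away();  /* reverse, rotate off wall */
                hit = 0;
                quiet = 0;
            } else {
                quiet++;
                if (quiet >= 2) {      /* two quiet segments: steer in */
                    aim_toward_wall();
                    quiet = 0;
                }
            }
            forward_segment();         /* one incremental move */
        }
    }

    void main()
    {
        start_process(watch_wall());
        move_along();
    }

The shared hit flag is the entire interface between the two
sub-behaviors, which keeps each one separately observable in the sense
Brooks describes.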

Figure 3 -- A simple illustration of the Rug Warrior's
wall-following path
The robot had difficulty, however, when it encountered corners, as it
tended to drive itself directly into the corner when both of its front
sensors were activated. The reason
our algorithm fails in this case has to do with oscillations between
the left and right sensors being activated. The robot will continue
to turn back and forth and never make any progress as the left and
right touch sensors are activated in alternation. This is a potential
area of improvement for our project. The robot could keep track of
the past state of its sensors. If the left and right sensors have
been activated too many times in a row, then the robot would execute
a behavior to free itself from the corner. This would involve a
departure from the subsumption architecture described earlier. This
new model would best be implemented with a three-layer architecture.
Gat describes the role of internal state in the three-layer
architecture:
"Three-layer architectures organize algorithms according to whether
they contain no state, contain state reflecting memories about the
past, or contain state reflecting predictions about the future" (Gat).
In order to deal with corners we would have to appeal to Gat's second
case. That is, we would have to retain state recording the number of
recent oscillations between the left and right sensors in order to
detect that we are stuck in a corner.
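A sketch of the state we would add, with the same caveats as before
(the side codes and escape_corner() are hypothetical):

    /* Hypothetical corner detector: count strict left/right
       alternations and trigger an escape after too many in a row. */
    int last_side = 0;  /* 0 = none, 1 = left, 2 = right */
    int swaps = 0;

    void note_bump(int side)  /* call once per collision */
    {
        if (last_side != 0 && side != last_side)
            swaps++;    /* the bumps are alternating sides */
        else
            swaps = 0;  /* same side twice: not a corner   */
        last_side = side;

        if (swaps >= 4) {     /* threshold would need tuning */
            escape_corner();  /* e.g., long reverse + sharp turn */
            swaps = 0;
            last_side = 0;
        }
    }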
Our algorithm also has problems dealing with the opposite scenario.
That is, if the robot is following a wall and this wall turns sharply
away from the robot's current heading, the
robot will have difficulty finding its way back to the wall.
Currently, the algorithm causes the robot to overshoot the turn
because it does not turn sharply enough to find its way quickly back
to the wall. On the other hand, if the robot were to turn too
sharply, the algorithm runs the risk of turning the robot around
in the opposite direction.
One limitation of the Rug Warrior is its circular shape. Might a
different shape be better suited to operating in a maze-like
environment? Perhaps a better-suited shape would be one that could
easily navigate its way out of corners. The best possible shape would
be one that changes dynamically. This idea is presented in
the "Get Back in Shape" article by Yoshida et al. The robot
described in this paper uses small robotic cells to create arbitrary
two-dimensionally shaped robots. Obviously, we are not considering
this for our next robotics project! However, it is an intriguing
possibility. The premise is that, without a priori knowledge of the
environment in which a robot will operate, great flexibility is needed
not only in behavior but also in physical configuration.
We decided not to utilize the Rug Warrior's IR sensors because they
pick up a large
amount of noise from the surrounding environment. However, we do feel
that additional sensing would improve the capabilities of the Rug
Warrior. In particular, we feel a vision system would allow the Rug
Warrior to navigate, generate accurate maps, and possibly allow
coordination with other Rug Warriors. The fact of the matter is that,
with the Rug Warrior's current sensing equipment, generating an
accurate map of the environment would be extremely difficult, as would
coordinating multiple Rug Warriors to accomplish a task.