Robots That Jump

Robot Bodies Needed Before Robot Minds



Team Robomonster – Postings after DARPA Visit


Monday, August 08, 2005

Sensory Panel Types

During the last month we’ve been experimenting with different configurations of sensory panels. As described in earlier posts, each panel will form part of the outer body of the vehicle. The “sensor dense” approach makes each of these panels a sort of low-resolution visual system – providing 3D object and vector data to the vehicle, independent of cameras.

After some shuffling around, we have ended up with the following configurations:

“Panel” 1 – 5 sensors. 1 centrally-placed 40 kHz Devantech sonar working out to 10 feet, with 4 surrounding Sharp IR threshold sensors working to about 4 feet. The total span covered by this panel is about 30 cm x 20 cm. This is the configuration that will cover parts of the vehicle not aiming directly front, back, or to the sides. It provides a basic “skin”.
a) For long-range detection, the panel only reports the distance along the z-axis – the object is assumed to be centered on the panel.

b) But if one of the IR sensors is tripped, the panel uses this to refine the position estimate for the object.

c) The object may cover 1, 2, 3 or all 4 IR sensors. This allows some discrimination of object shape – horizontal, vertical, and diagonal bars may be recorded.

d) IR sensor “hits” increase the reality of the sonar detection. If IR registers a hit and sonar does not, then the object’s reality is lessened. In practice, this has never happened with the panel.

e) Movement is detected by comparing two successive samplings from the sensors. The data is used to generate a 3D motion vector.
The panel outputs several kinds of pseudo-NMEA strings:
a) Simple list of detections by sonar and IR. This data is output as a virtual “retina” with 9 pixels (3×3) and a z-axis divided into roughly 100–150 cells, so the total volume is on the order of 3×3×100, or about 1000 pixels. The data may be used to develop an “evidence grid” for objects.

b) Object detection, sent as apparent location and minimum size, and shape in some cases.

c) Motion, as a difference map of the virtual “retina”.

d) Object detection, sent as the apparent position and motion of the object, with lower limits to size.

e) Configuration, listing sensors by name, panel height/width, number, and sensor positions on panel.
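As a sketch of what one of these pseudo-NMEA strings might look like: the `$PNDET` sentence name and field layout below are invented for illustration (the panel's actual formats aren't spelled out here); only the checksum rule, XOR of every character between `$` and `*`, follows standard NMEA practice.

```python
def nmea_checksum(body):
    """Standard NMEA checksum: XOR of every character between '$' and '*'."""
    csum = 0
    for ch in body:
        csum ^= ord(ch)
    return "%02X" % csum

def detection_sentence(panel_id, sonar_cm, ir_bits):
    """Build a hypothetical '$PNDET' detection sentence for one panel:
    panel id, sonar range in cm, then the four IR threshold bits."""
    body = "PNDET,%d,%d,%s" % (panel_id, sonar_cm, "".join(str(b) for b in ir_bits))
    return "$%s*%s" % (body, nmea_checksum(body))

print(detection_sentence(1, 152, [0, 1, 1, 0]))
```

One nice property of NMEA-style framing is that the checksum lets the upstream computer silently drop sentences garbled on the serial line instead of acting on bad data.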
Even this relatively simple panel takes a lot of computing. Already, it’s become obvious that we need to allow feedback to the panel microprocessor. The microprocessor may integrate the data, but “watchdog” functions detecting faulty sensors are handled at a higher level. So, we are going to add an occasional pause where the microprocessor listens for commands from the upstream computers.

We’re working on two other panels, which bracket “Panel 1” at the extremes of complexity. “Panel 2” is a collection of “slow” sensors – in particular temperature, humidity, and magnetic compass, plus a tilt sensor. The Memsic tilt sensor is not really “slow”, but its information is integrated over a couple of seconds to detect slow changes in the tilt of the vehicle. These panels (about 5) run in a strip along the top of the vehicle. “Panel 3”, currently being worked on, is much more complex. The current test design consists of 4 threshold IR detectors, 4 digital IR detectors (shorter range), one long-range sonar with light sensor, two short-range 235kHz sonars, and temperature sensors. This panel also provides a placement for a low-resolution “webcam”-level camera.

The non-camera panels each use a single microprocessor to handle their data. The webcams get a dedicated computer. The microprocessors’ output from the non-camera data may be configured as a “retina”, so the same image-processing algorithms may be used for visual and IR/ultrasound data.
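A minimal sketch of the shared-“retina” idea: fold one panel's readings into a tiny 3×3 grid, then run the same frame-differencing code a camera pipeline would use. The sensor positions and the 300 cm range scaling are invented here, not our actual panel layout.

```python
def panel_to_retina(sonar_cm, ir_hits):
    """Fold one panel's readings into a 3x3 'retina'.
    sonar_cm: center range reading; ir_hits: dict of (row, col) -> 0/1 bits."""
    retina = [[0.0] * 3 for _ in range(3)]
    retina[1][1] = max(0.0, 1.0 - sonar_cm / 300.0)  # nearer object -> brighter center
    for (r, c), hit in ir_hits.items():
        if hit:
            retina[r][c] = 1.0                       # an IR trip saturates its pixel
    return retina

def frame_diff(prev, curr):
    """The same per-pixel difference map a camera pipeline would use."""
    return [[curr[r][c] - prev[r][c] for c in range(3)] for r in range(3)]

# An object approaches and drifts toward the upper-right IR sensor:
a = panel_to_retina(250, {(0, 0): 0, (0, 2): 0, (2, 0): 0, (2, 2): 0})
b = panel_to_retina(150, {(0, 0): 0, (0, 2): 1, (2, 0): 0, (2, 2): 0})
motion = frame_diff(a, b)
```

The payoff is that `frame_diff` neither knows nor cares whether its input came from a webcam or an IR/ultrasound panel.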

We’ve also written a simple visualizer for the panels in the Windows IDE, currently in Visual Basic. It processes the data reasonably well, but display updates are clearly not real-time. However, it really gives us a feel for whether our panel software is actually working…


Thursday, July 14, 2005

Gumstix Robostix

Gumstix, which has created a nifty and tiny 400 MHz single-board Linux computer, recently released RoboStix, an add-on board. Overall, the Gumstix plus RoboStix are the size of a pack of gum with 3 sticks in it – impressive. The RoboStix supplies PWM connections, an I2C interface, and various other goodies (including an array of colored LEDs useful for monitoring program execution).

We’ve been experimenting with the Gumstix for some time. Our hope has been to link one Gumstix to several microcontrollers, which in turn link to multiple sensors following our “sensor dense” concept. One of the problems is that the Gumstix runs at 3.3V instead of 5V, and can’t simply be plugged into a sensor array like a standard microcontroller. Various groups have developed the hardware to hook up Gumstix, but we decided to wait – we want to concentrate on the programming rather than hardware configuration. The Gumstix also supports Ethernet, USB-Ethernet, and Bluetooth (not all at once), but this doesn’t seem to be the best way to hook up the microcontrollers.

We are actively investigating CAN – but now we’ll have to look at the I2C interface on the RoboStix. It seems possible that the RoboStix will let us easily put the Gumstix into an I2C network of several microprocessors and sensors. Our current microcontroller acts as “master”, but this actually fits our idea of forcing the data up from the “bottom” rather than requesting it from the top.

Unfortunately, the growing popularity of Gumstix has held things up – the first batches of the RoboStix board sold out almost instantly. So for now, we’re going to concentrate on our microprocessor arrays and make them as “smart” as possible. Instead of reporting raw data, the microprocessors report time vectors – changes in the sensor output. If nothing is happening, the microprocessor puts out a slow “heartbeat” (once every 5 seconds). However, if the data is changing, the microprocessor puts out vectors as fast as it can. It does this by comparing a filtered average to the current values. We’re also thinking of allowing two kinds of outputs – one which shows the current and smoothed averages, and another which simply records a +/- for the sensors showing rapid change in data. This might help speed up processing.
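The vector-versus-heartbeat scheme can be sketched as follows. The smoothing factor and change threshold below are assumed values, and the 5-second heartbeat timer is left out of this stripped-down, step-at-a-time version.

```python
class ChangeReporter:
    """Report a change vector when a reading departs from its filtered average.
    alpha and threshold are assumed values; the real loop would also emit a
    slow heartbeat every 5 seconds when nothing is happening."""
    def __init__(self, first_value, alpha=0.1, threshold=5):
        self.avg = float(first_value)
        self.alpha = alpha
        self.threshold = threshold

    def step(self, value):
        delta = value - self.avg
        self.avg += self.alpha * delta        # filtered running average
        if abs(delta) > self.threshold:
            return ("VECTOR", value, delta)   # data changing: report immediately
        return None                           # quiet; heartbeat timer covers idle

r = ChangeReporter(100)
print(r.step(101))   # small drift: suppressed
print(r.step(140))   # big jump: vector emitted
```

Because the average only drifts slowly, a genuinely changing signal keeps producing vectors until the filter catches up, which is exactly the burst behavior we want on the serial line.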

Still looking around for our “junker” car to test our stuff on so we don’t have to use the rock-crawler. Ideally the junker will be street-legal (the rock-crawler is not) so we can drive directly to test sites. More on this later.

posted by Robomonster at 8:27 AM

Saturday, July 09, 2005

Testing the “sensor dense” idea
Despite the long lazy days, we’re pushing ahead with our “sensor dense” concept. In addition, we’re modifying some of the controller software and testing controls.

Our big work for the last several weeks is evaluating particular sensors for use in a “sensor dense” configuration. This involves hooking them up to a microprocessor and evaluating their response under different conditions. In the next step, software filters smooth the data and detect when the data is changing suddenly – when there’s no change in readouts, the sensors don’t output. Finally, the information from each sensor is formatted into an NMEA-like string which is sent out via the serial port. We have been talking to MachineBus about using a CAN network for this, but at present we are still “all serial”.

Here is a list of the sensors we have tested. “Direct connect” means that the sensor has to be wired to input/output pins on a microprocessor, in contrast to I2C or other mini-network protocols. For testing we have been using the Basic Micro Atom Pro – mostly because it has a large RAM (2K) and is faster than other hobby microprocessors:

Ultrasound SRF04, SRF08, SRF10 (Devantech) – the 04 is a direct connection, while the SRF08 and SRF10 use I2C, allowing several devices to be on the same bus. All these ultrasonic devices do good detection, though the spread of the beam is fairly wide. At present, we’ll probably put only one of these per body panel – their range is out to 30 feet. We also experimented with the Senscomp/Polaroid ultrasound but it’s pretty clear that the Devantech devices are easier to set up and use. The SRF08 also has a photocell, which gives a crude light/dark reckoning – it will allow us to determine which parts of the vehicle are in shadow as it moves under, say an overpass.

Ultrasound SRF235 (Devantech) – this I2C device uses a 235 kHz beam, in contrast to the 40 – 50kHz beam used by most ultrasonic devices. Due to the high frequency it can only detect to about 1 meter – but the beam is only about 15 degrees wide! This makes this sensor much like a long invisible “hair” on the robomonster body. It also updates faster – up to 100 Hz. Finally, since it operates at a different frequency, it can fire at the same time as an SRF04 or SRF08/10 without interference. We see it as a secondary confirmation system for our close-range IR sensors (see below).

Senscomp (Polaroid) – This larger ultrasonic device has about the same range as the Devantech sonars, but uses more power. We found it difficult to set up compared to the Devantechs, and it has large transient current draws (2 amps).

SportsImportLTD sonar – This is a commercial sonar system from one of our sponsors. The weather-hardened sonars are designed to point in an array to the front and back of the vehicle. Interestingly, there are only two wires going into the system. Our plan is to put this packaged system as a “canned” secondary detector for objects while the vehicle is trying to back up.

GP2D02 IR sensor (Sharp) – This direct connect device reports a range from about 3″ to 30″. Its low price makes it possible to use several, and connections are straightforward.

GP2Y0D02YK IR sensor (Sharp) – This direct connect device thresholds at about 30″. In the “sensor dense” concept, threshold sensors typically trigger more detailed readouts of other sensors reporting position/range.

GP2Y0A02YK IR Sensor (Sharp) – This is the sensor we had at our 2005 site visit – it reports distance as a voltage in an analog circuit over 3–30″. It has pretty good performance – a set of 4 gave reliable indicators of a person moving in front of the vehicle. However, the analog system draws more power and can’t be hooked into a network like I2C.

GP2Y0D340K IR Sensor (Sharp) – This tiny IR sensor thresholds at 16″. However, its small size is less of an advantage than one might think – it requires additional wiring and some electrical components to function.

Memsic accelerometer (Memsic) – We’re testing the surface-mount version of this tilt sensor from Parallax. We’ve found that it works well, but the raw data needs a lot of massaging to convert to tilt angle. We haven’t run it on a moving vehicle yet, so we don’t know whether tilt or vibration will predominate under actual driving conditions.
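For reference, the “massaging” from a duty-cycle accelerometer reading to a static tilt angle looks roughly like this. The zero-g duty cycle and duty-per-g scale below are nominal illustrative values, not calibrated numbers for our sensor.

```python
import math

def duty_to_g(high_us, period_us=10000, zero_g_duty=0.5, duty_per_g=0.125):
    """Convert a Memsic-style PWM reading to acceleration in g.
    zero_g_duty and duty_per_g are nominal illustrative values."""
    duty = high_us / float(period_us)
    return (duty - zero_g_duty) / duty_per_g

def tilt_degrees(accel_g):
    """Static tilt from one axis: the angle whose sine is the measured g."""
    clamped = max(-1.0, min(1.0, accel_g))  # guard against vibration spikes
    return math.degrees(math.asin(clamped))

# 5625 us high in a 10000 us period -> 0.5625 duty -> 0.5 g -> 30 degrees
print(round(tilt_degrees(duty_to_g(5625)), 1))
```

The arcsine is only valid when the axis sees gravity alone, which is exactly why vibration on a moving vehicle is the open question.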

Magnetic compass (Devantech) – We’ve experimented with the Devantech compass and find it useful – however, the new Hitachi compass surface-mounted by Parallax is more compact. We are considering mounting multiple compasses on the vehicle body, and using the combined input to factor away effects of metal/electric fields.

Magnetic compass (TCM) – This high-end compass was mounted, along with our JRC GPS system, during the 2005 site visit. It gives a reliable signal and can tilt-compensate after calibration.

Magnetic compass HM55B (Hitachi/Parallax) – This tiny compass performs similarly to the Devantech, with the advantage of very small size – we’re using the Parallax surface mount.

Sensirion humidity sensor SHT11 (Sensirion/Parallax) – Why a humidity sensor? Well, first, the sensor includes a thermometer, which gives temperature output. Second, by taking temperature and humidity it is possible to calculate dewpoint. In a real robot car, reaching dewpoint is significant – moisture will begin condensing on the vehicle body, lenses, etc. and affect sensors. This sensor will warn the vehicle to take action (e.g. running heaters on lenses). The sensor took some effort to program – it has a custom read/write protocol which must be decoded on a microprocessor.
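The dewpoint calculation itself is straightforward with the Magnus approximation; the constants below are the commonly used values, valid roughly over 0–60 °C, and the 2 °C warning margin is an assumed figure.

```python
import math

A, B = 17.27, 237.7   # Magnus constants, commonly used values

def dewpoint_c(temp_c, rel_humidity_pct):
    """Approximate dewpoint (Celsius) from temperature and relative humidity."""
    gamma = (A * temp_c) / (B + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (B * gamma) / (A - gamma)

def condensation_warning(temp_c, rel_humidity_pct, margin_c=2.0):
    """True when the air is within margin_c of its dewpoint -
    time to run the lens heaters."""
    return temp_c - dewpoint_c(temp_c, rel_humidity_pct) <= margin_c

print(round(dewpoint_c(25.0, 60.0), 1))   # roughly 16.7 C
```

At 100% relative humidity the formula returns the air temperature itself, which is the sanity check: condensation begins the moment the body cools to dewpoint.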

TAOS color sensor (Taos/Parallax) – This sensor reports the relative Red/Green/Blue values of its field of view. We plan to use it to detect blue sky versus cloudy conditions to calibrate our other cameras.

TAOS Light to Digital (LTD) sensor (Taos) – This sensor can measure brightness ranges of 40,000:1 – about the same as the human eye. We plan to use these sensors to determine absolute scene brightness for camera calibration. Combined with the color sensor telling us if it is a cloudy day, it will also allow us to predict the contrast of shadows. Shadows fooled lots of the 2004 Grand Challenge vehicles, and we feel this will provide a workaround.

CMUCam – We’ve been working with this system, both the standard CMU version, and the alternate system developed by Acroname. The goal is to use multiple CMUCams to detect motion around the vehicle, and quantify “optic flow” of the environment as the vehicle moves. Object recognition comes at a later date, with a higher-resolution system.

Bump sensor – We’re putting a few standard switches on each body panel to detect contact. These are contact switches with a small wheel allowing them to roll against a substrate.

Flexiforce – This strip of material can measure pressure. It may be useful for detecting pressure on the bumper if contact is made, going beyond a simple contact switch.

MSI piezo tab – These small plastic strips send a small electrical pulse if they are snapped or vibrated. They form a perfect close-range “whisker” for selected vehicle body panels. Since they can generate high (50v) voltages relative to microprocessor pins, wiring a large number of them will be tricky.

We’re also planning to test several other sensors. Chief among these is a microphone, a kind of vibration sensor. The plan is to use it to confirm things like engine noise and the siren (when it is sounded). Audio sensing/voice recognition for the moment lies well in the future.

In other work, we’ve set up a working servo control system for our throttle. We had tried to use the Pololu system, but found it simply would not respond to Visual Basic signals sent out the serial port. The Parallax servo controller is more forgiving, and we have no problems with servo control now. However, our “leaf blower” effectors are going to need a more sophisticated system. We’re looking at a very interesting servo controller which combines servo output, a microprocessor, and multiple A/D and digital I/O ports. This should allow us to build the leaf-blower effector with sensors for contact, vibration, etc.

Later this summer: Integration. Now that we’ve tested individual sensors, it is time to put some together on test body panels to see overall performance. Our plan is to try one panel with all-analog sensors, and another with digital/I2C sensors. The resulting output should allow our system to use each body panel like a low-resolution “retina” to examine its environment.


Wednesday, June 01, 2005

The ‘sensor dense’ approach

Our team is pursuing a ‘sensor dense’ approach – not just adding a lot of sensors to the vehicle, but a particular design philosophy inspired by biology.

Here are some of its features:

1. Use lots of simple sensors, instead of a few complex ones.
2. Use a variety of sensor types.
3. Organize the sensors into an electronic ‘skin’.
4. If you aren’t getting enough information, throw more sensors at the system.
5. Don’t throw away simple sensors if you add complex ones.
6. Use overlapping, redundant sensor networks.
7. Connect a lot of sensors to a smaller number of microprocessors. Connect these to a smaller number of computers integrating data from several microprocessors. Connect these to a still smaller number of computers.
8. Use ultra-simple arbitration – no advanced “AI”.
9. The environment is modeled via a “body-centered” coordinate system.

Our motivation for this design is biology, though we are not trying to duplicate the details of biological structure (e.g. no neural nets). Instead, we are trying to duplicate the ratio of sensors versus “thinking” neurons versus body size found in simple animals.

In our opinion, the smartest robots today are comparable to a jellyfish or at best a mollusk in their computational complexity. So we look at how these systems organize senses and brains – and see a “sensor dense” approach in action. Lots of sensors compensate for a small brain, rather than a complex brain enabling lots of sensors!

Case in point – the Bay Scallop. This creature is essentially a clam that decided to swim – in our minds like a car that decides to drive itself. What do we find? An extremely simple “brain” (actually three ganglia or sub-brains) plus LOTS of sensors. Scallops have upwards of 60 eyes with lenses. They don’t form perfect images, but give an idea of general direction and motion of critters around the scallop.

Instead of a scallop eye tracking an object, each eye in turn fires as an object moves past the scallop – an alternate, sensor-dense way of registering motion.
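That sequential-firing idea can be sketched with a row of fixed threshold sensors: no single sensor tracks the object; direction falls out of the order in which the sensors fire. The sensor indices and timings below are invented for illustration.

```python
def motion_direction(fire_events):
    """fire_events: (sensor_index, time) pairs from a row of fixed threshold
    sensors. Returns +1 if the object swept toward higher indices,
    -1 toward lower indices, 0 if the firing order is ambiguous."""
    ordered = [i for i, _ in sorted(fire_events, key=lambda e: e[1])]
    if ordered == sorted(ordered):
        return +1
    if ordered == sorted(ordered, reverse=True):
        return -1
    return 0

# Object sweeping left to right past sensors 0..3:
print(motion_direction([(0, 0.00), (1, 0.05), (2, 0.11), (3, 0.18)]))   # 1
```

Note how little computation this needs – the “intelligence” is in the spatial arrangement of cheap sensors, which is the sensor-dense bet in miniature.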

Case in point – the Box Jelly. These jellyfish are unlike others in that they have eyes (24-40 of them) with lenses which focus images. Box Jellies can swim to shelter, food, and other box jellies. Close examination of their eyes (recently reported in Nature) demonstrates the “sensor dense” strategy at work. The animal has several stalks, with 6 eyes on each stalk. One eye per stalk is large, and forms blurry images. A second eye on each stalk is smaller, but has an iris to adjust for ambient light. Four additional eyes are simple, non-focusing light sensors.

Note the following features comparable to the “sensor dense” approach:
1. Several different kinds of light sensors are used
2. Simple light sensors were not “thrown away” when the more complex eyes evolved – instead they continue to function in the 6-eye system.
3. Simple arbitration – like other jellyfish, box jellies have no brain at all. Instead, they have a network of nerve cells distributed evenly over their body. In effect, the multiple eyes make up for the lack of a central brain.

But this is different from our usual assumptions about robotics. It is usually assumed that you should pair lots of computers with lots of sensors. If you have limited computing power, you should limit sensors. But this clearly isn’t what biology does – in fact it appears to do the opposite. We find small numbers of eyes in the most advanced animals, e.g. ourselves.

Our equivalent to the Box Jelly in Robo Monster is use of a variety of “eyes”:

1. Light sensors scattered over all the body panels of the robot, detecting ambient light and recording absolute lux levels.
2. Low-resolution (160×120) webcams detecting motion and “optic” flow in all directions
3. A few high-resolution stereo cameras.

Monday, May 23, 2005

Roboteqs back in action!

Over the last week we made major progress from the site visit – we got our Roboteq motor controllers back online! At the site visit, a corrupted Flash memory in the Roboteqs prevented us from driving – the steering motor couldn’t be controlled. The reason was an upgrade gone wrong – when we tried to upgrade the Roboteq software the system crashed and became useless.

Fortunately, the creator of the Roboteq, (Cosma), provided a custom-compiled program which recovered the Roboteqs from the dead and allowed them to be upgraded. We tested them last week and had great positioning of the steering brake motor.

Now, we’re going to go back to the site visit area and try to run our GPS waypoint follower program.

Team Robomonster – Postings up to the DARPA Visit in 2005

The following post is a summary of posts on Team Robomonster, a 2005 DARPA Grand Challenge entry. The “Team Robomonster” project managed to make a “rock crawler” vehicle drive-by-wire and respond to the computer. It failed to get GPS input to steer the vehicle accurately, and was eliminated at the second round of the “Grand Challenge”. The 2005 Grand Challenge winner came from Stanford and, like “Robomonster”, did not try to use vision to steer.


Wednesday, May 18, 2005

Our Site Visit…

Well, it’s been one week since the site visit for team Robo Monster, so I thought I would post a summary. Overall, there was a lot of great stuff, particularly for a team which started a mere 12 weeks ago. The DARPA reps were particularly interested in the custom-built vehicle, and talked to offroad racing engineer Brian Kirby for about an hour. They were impressed with his techniques for keeping his custom-built vehicle cool in hot environments, and his unique modular method of integrating robotic components with the base system.

The DARPA reps were also impressed with our “sensor-dense” system incorporating multiple IR and ultrasound detectors, developed by our microprocessor experts Michael Wilson and Kerstin Gilg. Working under deadlines, Michael and Kerstin created a custom board managing and processing information from 4 Sharp IR sensors plus one Devantech SRF04 ultrasonic sensor. Currently, these sensors are on the front bumper. We plan to expand the system to handle 8-15 sensors, and array the sensors in a grid on individual body panels of the vehicle, providing a sort of infra-red “touch” sense. We’re in good company in this “sensor-dense” concept – NASA recently promoted a new system they are using to make robotic arms sensitive to touch in their environment using similar IR sensors. Check out the NASA project at this link.

Despite the successes, some areas proved more challenging. The biggest one was our motor controller, whose Flash memory became corrupted during an upgrade and could not be used to control our steering motor. As a result our site visit was ‘static’ – we demonstrated vehicle systems in place, and did not attempt to send the vehicle to our RDDF waypoints. The Flash memory became corrupted via a series of unlikely accidents. Two days before the visit, we discovered that our motor controller (Roboteq) would need the upgrade to properly position the steering. When the basic upgrade methods did not work, we called the developer in Switzerland and walked through a special upgrade – an upgrade which, unfortunately, corrupted the Flash memory instead.

Fortunately, we’ve received a special program from the developer which will rescue our motor controller – so we plan on doing a demonstration of waypoint navigation in the next couple of weeks after re-installing the software.

Other projects – Taos, Inc. is sending us a special ambient light sensor for evaluation on our vehicle. We plan to use this special sensor, which has a response range comparable to the human eye (40,000:1), in developing our “sun sensor.” This custom sensor will use the Taos chip to determine absolute lighting levels around the vehicle, and also, via a unique “sundial” arrangement, the relative contrast of shadows. Combined with another Taos sensor which measures overall color (we will use it to measure the “blueness” of the sky), we will have shadow contrast and direction information. This information in turn can be used by our vision processing algorithms to predict how deep the shadows should be around an object, as well as their location – improving Robo Monster’s vision.

Tuesday, May 10, 2005

Making it happen

May 5
Good news today – our E-stop is fully functional! We’ve got compressed air that slams the brakes when the power is cut, and manual and electronic relays are installed. Computer integration has been a bit more of a challenge – yet another computer blew up on us on Wednesday (!) But we’re testing our home-brew laser rangefinder on a variety of colors and textures – it will be lots of fun if this $100 instrument does the job. We’ve also hooked up a “fail-safe” ultrasound unit from Sports Imports, Ltd. This system was designed for parking cars, but two of our programmers have now “hotwired” it so we can connect it directly to our computers. Our robotic senses at the site visit will be primitive but functional. – Pete Markiewicz, Team Leader

Tuesday, March 08, 2005

Past the ‘Drive by Wire’ Hump…

Well, it’s three days before the DARPA March 11th deadline for part 3 (technical description of vehicle plus video of vehicle) and we’ve done it. Last week we got drive-by-wire steering and throttle on Robo Monster, with brakes ready. We shot a video showing the ‘Monster going over 4-foot obstacles while being steered by remote control – fun! We also sent in our part 3 document describing what we intend to do with Robo Monster.

The only thing missing was the transmission – the current vehicle has a manual transmission, and an automatic transmission will be purchased during the next week or so. Until then, we can’t install a linear actuator to put the vehicle into full drive-by-wire.

We’ve decided to use standard R/C control for remote-control driving as we develop our autonomous system. The reason is that there is a fairly simple circuit for switching between our Roboteq motor controllers and an R/C control. We could do wi-fi, but looking at the structure of Roboteq-computer communication convinced us that it would be a detour.

Currently testing sensors using microcontroller boards from BasicX and Basic Micro. We plan to put lots of “point sensors” all over the vehicle and read them with standard hobby microcontrollers and software. Our vehicle will be more “tactile” than others, though we will put in a high-res visual system later.

For now, our current effort is to build GPS waypoint following into Robo Monster. We have a good, if tiny, GPS/DGPS unit that we’re using for testing. We’re currently deciding whether we want an intermediate controller (e.g. the Acroname Brainstem) between the PCs and the motor/servo controllers.

Monday, February 14, 2005

Crunch week…

This is the week that the vehicle goes from being a regular car to full drive-by-wire. We’re installing Roboteq hardware for steering and throttle, along with some motors.

More members, this time coming from Cal Poly. We’re working out how to make the large sensor grid resistant to damage.

More sponsors this weekend…

Thursday, February 03, 2005

Team Robo Monster™

Welcome to the Team Robo Monster weblog! We are working on an entry for the 2005 DARPA Grand Challenge that will uniquely combine navigational ability with “attitude” in a kick-butt, unstoppable, off-road vehicle package. We have gotten our vehicle and are currently seeking sponsors and team members.

Team Robo Monster™ Responds to DARPA Grand Challenge
Desert Field Test of Robotic Vehicles Offers $2 Million Prize

Los Angeles, California, January 31, 2005 … Team Robo Monster™ announced it will participate in a Defense Department research and development initiative aimed at advancing robotics technologies for future military use. The initiative, known as the DARPA Grand Challenge, is a field test of fully autonomous ground vehicles to be conducted in the Mojave Desert on October 8, 2005. The Defense Advanced Research Projects Agency (DARPA) is offering a $2 million prize to the vehicle that completes the course the fastest within a 10-hour period.