Team Robomonster – Postings after DARPA Visit

 

Monday, August 08, 2005

Sensory Panel Types

During the last month we’ve been experimenting with different configurations of sensory panels. As described in earlier posts, each panel will form part of the outer body of the vehicle. The “sensor dense” approach makes each of these panels a sort of low-resolution visual system – providing 3D object and vector data to the vehicle, independent of cameras.

After some shuffling around, we have ended up with the following configurations:

“Panel” 1 – 5 sensors. 1 centrally-placed 40 kHz Devantech sonar working out to 10 feet, with 4 surrounding Sharp IR threshold sensors working to about 4 feet. The total span covered by this panel is about 30 cm x 20 cm. This is the configuration that will cover parts of the vehicle not aimed directly to the front, back, or sides. It provides a basic “skin”.
==============================
a) For long-range detection, the panel only reports the distance along the z-axis – the object is assumed to be centered on the panel.

b) But if one of the IR sensors is tripped, the panel uses this to refine the position estimate for the object.

c) The object may cover 1, 2, 3 or all 4 IR sensors. This allows some discrimination of object shape – horizontal, vertical, and diagonal bars may be recorded.

d) IR sensor “hits” increase the reality of the sonar detection. If IR registers a hit and sonar does not, the object’s reality is lessened. In practice, this has never happened with the panel.

e) Movement is detected by comparing two successive samplings from the sensors. The difference is used to generate a 3D motion vector (a rough code sketch of this fusion logic follows below).
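
To make a)–e) concrete, here is a rough sketch of the fusion logic in Python. It is only an illustration – the real panel code runs on the panel’s microcontroller, and the names, thresholds, and “reality” values below are assumptions rather than our production settings.

IR_RANGE_IN = 48             # Sharp IR threshold sensors trip out to roughly 4 feet
SONAR_RANGE_IN = 120         # Devantech sonar works out to roughly 10 feet

def fuse_panel(sonar_in, ir_hits, prev_estimate=None):
    """sonar_in: sonar range in inches (None = no echo).
    ir_hits: 4 booleans for the surrounding IR sensors (TL, TR, BL, BR)."""
    if sonar_in is None and not any(ir_hits):
        return None                                    # nothing detected

    # a) Long range: only a z distance; object assumed centered on the panel.
    z = sonar_in if sonar_in is not None else IR_RANGE_IN
    est = {"x": 0.0, "y": 0.0, "z": float(z), "reality": 0.5, "shape": None}

    if any(ir_hits):
        tl, tr, bl, br = ir_hits
        # b) IR hits refine the lateral position estimate (-1..+1 across the panel).
        est["x"] = 0.5 * ((tr + br) - (tl + bl))
        est["y"] = 0.5 * ((tl + tr) - (bl + br))
        # c) The pattern of covered IR sensors hints at object shape.
        if sum(ir_hits) >= 3:
            est["shape"] = "blob"
        elif (tl and tr) or (bl and br):
            est["shape"] = "horizontal bar"
        elif (tl and bl) or (tr and br):
            est["shape"] = "vertical bar"
        elif sum(ir_hits) == 2:
            est["shape"] = "diagonal bar"
        # d) IR agreement raises the detection's "reality"; IR without sonar lowers it.
        est["reality"] = 0.9 if sonar_in is not None else 0.3

    # e) Motion: the difference of two successive estimates gives a 3D motion vector.
    if prev_estimate is not None:
        est["motion"] = tuple(est[k] - prev_estimate[k] for k in ("x", "y", "z"))
    return est
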
===========================
The panel outputs several kinds of pseudo-NMEA strings (a sketch of the sentence framing follows this list):
a) Simple list of detections by sonar and IR. This data is output as a virtual “retina” with 9 pixels (3×3) and a z-axis divided into roughly 100–150 cells, so the total volume is on the order of 1,000 voxels. The data may be used to develop an “evidence grid” for objects.

b) Object detection, sent as apparent location and minimum size, and shape in some cases.

c) Motion, as a difference map of the virtual “retina”.

d) Object detection, sent as the apparent position and motion of the object, with a lower limit on size.

e) Configuration, listing sensors by name, panel height/width, number, and sensor positions on panel.
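
As an illustration of the framing, here is a minimal sketch of how one of these sentences might be built. Only the $…*checksum framing follows real NMEA; the sentence name PRBOD and its fields are made up for this example.

def nmea_checksum(body):
    cs = 0
    for ch in body:
        cs ^= ord(ch)                 # NMEA checksum: XOR of every character between $ and *
    return "%02X" % cs

def object_sentence(panel_id, x, y, z, size_in, reality):
    body = "PRBOD,%d,%.1f,%.1f,%.1f,%.0f,%.2f" % (panel_id, x, y, z, size_in, reality)
    return "$%s*%s" % (body, nmea_checksum(body))

# Prints something like $PRBOD,3,0.5,-0.5,62.0,8,0.90*<two-digit hex checksum>
print(object_sentence(3, 0.5, -0.5, 62.0, 8, 0.9))
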
===========================
Even this relatively simple panel takes a lot of computing. It has already become obvious that we need to allow feedback to the panel microprocessor. The microprocessor may integrate the data, but “watchdog” functions that detect faulty sensors are handled at a higher level. So we are going to add an occasional pause in which the microprocessor listens for commands from the upstream computers.
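
A very rough sketch of what that pause could look like, assuming the commands arrive as short text lines on the serial link. The DISABLE/ENABLE command names are hypothetical, and this Python stands in for the microcontroller firmware.

import select, sys, time

disabled = set()                      # sensors the upstream watchdog has flagged as faulty

def poll_for_command(timeout_s=0.05):
    # Briefly listen for one command from the upstream computer.
    ready, _, _ = select.select([sys.stdin], [], [], timeout_s)
    if ready:
        cmd = sys.stdin.readline().strip()            # e.g. "DISABLE IR2" or "ENABLE IR2"
        verb, _, name = cmd.partition(" ")
        if verb == "DISABLE":
            disabled.add(name)
        elif verb == "ENABLE":
            disabled.discard(name)

while True:
    # ... read the sensors not in `disabled`, fuse, emit pseudo-NMEA ...
    poll_for_command()                # short listen window before the next sweep
    time.sleep(0.1)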

We’re working on two other panels, which sit at opposite extremes of complexity from “Panel 1”. “Panel 2” is a collection of “slow” sensors – in particular temperature, humidity, and a magnetic compass, plus a tilt sensor. The Memsic tilt sensor is not really “slow”, but its output is integrated over a couple of seconds to detect slow changes in the tilt of the vehicle. These panels (about 5 of them) run in a strip along the top of the vehicle. “Panel 3” is still being worked on and is much more complex. The current test design consists of 4 threshold IR detectors, 4 digital IR detectors (shorter range), one long-range sonar with a light sensor, two short-range 235 kHz sonars, and temperature sensors. This panel is also the placement for a low-resolution “webcam”-level camera.

The non-camera panels all use a single microprocessor to handle their data, while the webcams get a dedicated computer. The output of the microprocessors handling non-camera data may be configured as a “retina”, so the same image-processing algorithms can be used for visual and IR/ultrasound data.
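
A toy sketch of the shared-“retina” idea: pack the panel ranges into a tiny image-like array so the same routine (frame differencing here) can run on webcam frames and on sonar/IR data alike. The sizes and scaling are assumptions for illustration.

def panel_to_retina(ranges_in, max_range=120.0):
    """ranges_in: 3x3 nested list of ranges in inches (None = no return).
    Returns a 3x3 'image' where nearer objects are brighter (0-255)."""
    retina = []
    for row in ranges_in:
        retina.append([0 if r is None else int(255 * (1 - min(r, max_range) / max_range))
                       for r in row])
    return retina

def frame_difference(a, b):
    """The same routine we would run on two webcam frames."""
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]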

We’ve also written a simple visualizer for the panels on Windows, currently in Visual Basic. It processes the data reasonably well, but display updates are clearly not real-time. Still, it gives us a feel for whether our panel software is actually working…

 

Thursday, July 14, 2005

Gumstix Robostix

Gumstix, which makes a nifty and tiny 400 MHz single-board Linux computer (see http://www.gumstix.com), recently released RoboStix, an add-on board. Together, the Gumstix plus RoboStix are the size of a pack of gum with 3 sticks in it – impressive. The RoboStix supplies PWM connections, an I2C interface, and various other goodies (including an array of colored LEDs useful for monitoring program execution).

We’ve been experimenting with the Gumstix for some time. Our hope has been to link one Gumstix to several microcontrollers, which in turn link to multiple sensors following our “sensor dense” concept. One of the problems is that the Gumstix runs at 3.3V instead of 5V, so it can’t simply be plugged into a sensor array like a standard microcontroller. Various groups have developed hardware to hook up the Gumstix, but we decided to wait – we want to concentrate on the programming rather than hardware configuration. The Gumstix also supports Ethernet, USB Ethernet, and Bluetooth (not all at once), but none of these seems to be the best way to hook up the microcontrollers.

We are actively investigating CAN (see http://www.machinebus.com) for the link – but now we’ll have to look at the I2C interface on the RoboStix. It seems possible that the RoboStix will let us easily put the Gumstix into an I2C network of several microprocessors and sensors. Our current microcontroller acts as the I2C “master”, which actually fits our idea of pushing data up from the “bottom” rather than requesting it from the top.

Unfortunately, the growing popularity of Gumstix has held things up – the first batches of the RoboStix board sold out almost instantly. So for now, we’re going to concentrate on our microprocessor arrays and make them as “smart” as possible. Instead of reporting raw data, the microprocessors report time vectors – changes in the sensor output. If nothing is happening, the microprocessor puts out a slow “heartbeat” (once every 5 seconds). However, if the data is changing, the microprocessor puts out vectors as fast as it can. It does this by comparing a filtered average to the current values. We’re also thinking of allowing two kinds of output – one which shows the current and smoothed averages, and another which simply records a +/- for the sensors showing rapid change in data. This might help speed up processing.
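
Here is a minimal sketch of that heartbeat/vector scheme (the real version lives on the microcontroller; the smoothing factor, change threshold, and message format are assumptions):

import time

ALPHA = 0.2                  # smoothing factor for the running average (assumption)
THRESHOLD = 3                # change, in sensor counts, considered "something happening"
HEARTBEAT_S = 5.0            # quiet-time heartbeat interval

def run(read_sensor, send):
    avg = read_sensor()
    last_sent = time.time()
    while True:
        raw = read_sensor()
        avg = (1 - ALPHA) * avg + ALPHA * raw         # filtered average
        delta = raw - avg
        if abs(delta) > THRESHOLD:
            send("VEC,%.1f,%+.1f" % (avg, delta))     # changing: send vectors as fast as we can
            last_sent = time.time()
        elif time.time() - last_sent >= HEARTBEAT_S:
            send("HEARTBEAT,%.1f" % avg)              # nothing happening: slow heartbeat
            last_sent = time.time()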

Still looking around for our “junker” car to test our stuff on so we don’t have to use the rock-crawler. Ideally the junker will be street-legal (the rock-crawler is not) so we can drive directly to test sites. More on this later.

posted by Robomonster at 8:27 AM

Saturday, July 09, 2005

Testing the “sensor dense” idea
Despite the long lazy days, we’re pushing ahead with our “sensor dense” concept. In addition, we’re modifying some of the controller software and testing controls.

Our big work for the last several weeks is evaluating particular sensors for use in a “sensor dense” configuration. This involves hooking them up to a microprocessor and evaluating their response under different conditions. In the next step, software filters smooth the data and detect when it is changing suddenly – when there’s no change in the readouts, the sensors don’t output. Finally, the information from each sensor is formatted into an NMEA-like string which is sent out via the serial port. We have been talking to MachineBus (http://www.machinebus.com) about using a CAN network for this, but at present we are still “all serial”.

Here is a list of the sensors we have tested. “Direct connect” means the sensor has to be wired to input/output pins on a microprocessor, in contrast to I2C or other mini-network protocols. For testing we have been using the Basic Micro Atom Pro – mostly because it has a relatively large memory (2 KB of RAM) and is faster than other hobby microprocessors:

Ultrasound SRF04, SRF08, SRF10 (Devantech) – the 04 is a direct connection, while the SRF08 and SRF10 use I2C, allowing several devices to be on the same bus. All of these ultrasonic devices detect well, though the spread of the beam is fairly wide. At present, we’ll probably put only one of these per body panel – their range is out to 30 feet. We also experimented with the Senscomp/Polaroid ultrasound, but it’s pretty clear that the Devantech devices are easier to set up and use. The SRF08 also has a photocell, which gives a crude light/dark reckoning – it will allow us to determine which parts of the vehicle are in shadow as it moves under, say, an overpass.
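
For reference, here is roughly what reading an SRF08 over I2C looks like from a Linux host using Python’s smbus module. The register map (command register 0, light sensor in register 1, range high/low bytes in registers 2 and 3, command 0x51 = “range in cm”) is from the Devantech documentation; the bus number and address are assumptions about our setup, and our actual panels poll the sensors from a microcontroller instead.

import time
import smbus

bus = smbus.SMBus(0)          # I2C adapter number (assumption)
ADDR = 0x70                   # SRF08 factory default 7-bit address

def read_srf08(addr=ADDR):
    bus.write_byte_data(addr, 0, 0x51)      # start a ranging, result reported in cm
    time.sleep(0.07)                        # ranging takes up to ~65 ms
    light = bus.read_byte_data(addr, 1)     # photocell reading
    high = bus.read_byte_data(addr, 2)
    low = bus.read_byte_data(addr, 3)
    return (high << 8) | low, light

cm, light = read_srf08()
print("range %d cm, light %d" % (cm, light))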

Ultrasound SRF235 (Devantech) – this I2C device uses a 235 kHz beam, in contrast to the 40 – 50kHz beam used by most ultrasonic devices. Due to the high frequency it can only detect to about 1 meter – but the beam is only about 15 degrees wide! This makes this sensor much like a long invisible “hair” on the robomonster body. It also updates faster – up to 100 Hz. Finally, since it operates at a different frequency, it can fire at the same time as an SRF04 or SRF08/10 without interference. We see it as a secondary confirmation system for our close-range IR sensors (see below).

Senscomp (Polaroid) – This larger ultrasonic device has about the same range as the Devantech sonars, but uses more power. We found it difficult to set up compared to the Devantechs, and it also has large transient power draws (around 2 amps).

SportsImportLTD sonar – This is a commercial sonar system from one of our sponsors. The weather-hardened sonars are designed to point in an array to the front and back of the vehicle. Interestingly, there are only two wires going into the system. Our plan is to use this packaged system as a “canned” secondary detector for objects while the vehicle is backing up.

GP2D02 IR sensor (Sharp) – This direct connect device reports a range from about 3″ to 30″. Its low price makes it possible to use several, and connections are straightforward.

GP2Y0D02YK IR sensor (Sharp) – This direct connect device thresholds at about 30″. In the “sensor dense” concept, threshold sensors typically trigger more detailed readouts of other sensors reporting position/range.

GP2Y0A02YK IR Sensor (Sharp) – This is the sensor we had at our 2005 site visit – it reports distance as a voltage in an analog circuit over a 3–30″ range. It has pretty good performance – a set of 4 gave reliable indications of a person moving in front of the vehicle. However, the analog system draws more power and can’t be hooked into a network like I2C.

GP2Y0D340K IR Sensor (Sharp) – This tiny IR sensor thresholds at 16″. However, its small size is less of an advantage than one might think – it requires additional wiring and some electrical components to function.

Memsic accelerometer (Memsic) – We’re testing the surface-mount version of this tilt sensor from Parallax. We’ve found that it works reasonably well, but the raw data needs a lot of massaging to convert to a tilt angle. We haven’t run it on a moving vehicle yet, so we don’t know whether tilt or vibration will predominate under actual driving conditions.

Magnetic compass (Devantech) – We’ve experimented with the Devantech compass and find it useful – however, the new Hitachi compass surface-mounted by Parallax is more compact. We are considering mounting multiple compasses on the vehicle body, and using the combined input to factor away effects of metal/electric fields.

Magnetic compass (TCM) – This high-end compass was mounted, along with our JRC GPS system, during the 2005 site visit. It gives a reliable signal and can tilt-compensate after calibration.

Magnetic compass HM55B (Hitachi/Parallax) – This tiny compass performs similarly to the Devantech, with the advantage of very small size – we’re using the Parallax surface mount.

Sensirion humidity sensor SHT11 (Sensirion/Parallax) – Why a humidity sensor? First, the unit also has a thermometer, which gives temperature output. Second, with temperature and humidity it is possible to calculate the dewpoint. In a real robot car, reaching the dewpoint is significant – moisture will begin condensing on the vehicle body, lenses, etc., and affect sensors. This sensor will warn the vehicle to take action (e.g. running heaters on lenses). The sensor took some effort to program – it uses a custom read/write protocol that has to be decoded on the microprocessor.
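
For the curious, the dewpoint calculation itself is short. Here is a sketch using the standard Magnus approximation – the constants are the commonly published ones, not necessarily what we will run on the microprocessor.

import math

def dewpoint_c(temp_c, rel_humidity_pct):
    a, b = 17.62, 243.12                    # Magnus coefficients (deg C)
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# If the body cools to this temperature, condensation starts and the
# vehicle should switch on its lens heaters.
print("%.1f C" % dewpoint_c(20.0, 85.0))    # about 17.4 C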

TAOS color sensor (Taos/Parallax) – This sensor reports the relative Red/Green/Blue values of its field of view. We plan to use it to detect blue sky versus cloudy conditions to calibrate our other cameras.

TAOS Light to Digital (LTD) sensor (Taos) – This sensor can measure brightness ranges of 40,000:1 – about the same as the human eye. We plan to use these sensors to determine absolute scene brightness for camera calibration. Combined with the color sensor telling us if it is a cloudy day, it will also allow us to predict the contrast of shadows. Shadows fooled lots of the 2004 Grand Challenge vehicles, and we feel this will provide a workaround.

CMUCam – We’ve been working with this system, both the standard CMU version, and the alternate system developed by Acroname. The goal is to use multiple CMUCams to detect motion around the vehicle, and quantify “optic flow” of the environment as the vehicle moves. Object recognition comes at a later date, with a higher-resolution system.

Bump sensor – We’re putting a few standard switches on each body panel to detect contact. These are contact switches with a small wheel allowing them to roll against a substrate.

Flexiforce – This strip of material can measure pressure. It may be useful for detecting pressure on the bumper if contact is made, going beyond a contact switch.

MSI piezo tab – These small plastic strips send out a small electrical pulse if they are snapped or vibrated. They form a perfect close-range “whisker” for selected vehicle body panels. Since they can generate high voltages (up to 50V) relative to what microprocessor pins can take, wiring a large number of them will be tricky.

We’re also planning to test several other sensors. Chief among these is a microphone, essentially a vibration sensor. The plan is to use it to confirm things like engine noise and the siren (when it is sounded). For the moment, audio sensing/voice recognition lies well in the future.

In other work, we’ve set up a working servo control system for our throttle. We had tried to use the Pololu system, but found it simply would not respond to Visual Basic signals sent out the serial port. The Parallax servo controller is more forgiving, and we have no problem with servo control now. However, our “leaf blower” effectors are going to need a more sophisticated system. We’re looking at a very interesting servo controller which combines servo output, a microprocessor, and multiple A/D and digital I/O ports. This should allow us to build the leaf-blower effector with sensors for contact, vibration, etc.

Later this summer: Integration. Now that we’ve tested individual sensors, it is time to put some together on test body panels to see overall performance. Our plan is to try one panel with all-analog sensors, and another with digital/I2C sensors. The resulting output should allow our system to use each body panel like a low-resolution “retina” to examine its environment.

 

Wednesday, June 01, 2005

The ‘sensor dense’ approach

Our team is pursuing a ‘sensor dense’ approach inspired by biology. This is not just a matter of adding a lot of sensors to the vehicle – it is a particular design philosophy.

Here are some of its features:

1. Use lots of simple sensors, instead of a few complex ones.
2. Use a variety of sensor types.
3. Organize the sensors into an electronic ‘skin’.
4. If you aren’t getting enough information, throw more sensors at the system.
5. Don’t throw away simple sensors if you add complex ones.
6. Use overlapping, redundant sensor networks.
7. Connect a lot of sensors to a smaller number of microprocessors. Connect these to a smaller number of computers integrating data from several microprocessors. Connect these to a still smaller number of computers.
8. Use ultra-simple arbitration – no advanced “AI”.
9. Model the environment in a “body-centered” coordinate system.

Our motivation for this design is biology, though we are not trying to duplicate the details of biological structure (e.g. no neural nets). Instead, we are trying to duplicate the ratio of sensors versus “thinking” neurons versus body size found in simple animals.

In our opinion, the smartest robots today are comparable to a jellyfish or at best a mollusk in their computational complexity. So we look at how these systems organize senses and brains – and see a “sensor dense” approach in action. Lots of sensors compensate for a small brain, rather than a complex brain enabling lots of sensors!

Case in point – the Bay Scallop. This creature is essentially a clam that decided to swim – in our minds like a car that decides to drive itself. What do we find? An extremely simple “brain” (actually three ganglia or sub-brains) plus LOTS of sensors. Scallops have upwards of 60 eyes with lenses. They don’t form perfect images, but give an idea of general direction and motion of critters around the scallop.

Instead of a scallop eye tracking an object, each eye in turn fires as an object moves past the scallop – an alternate, sensor-dense way of registering motion.

Case in point – the Box Jelly. These jellyfish are unlike others in that they have eyes (24–40 of them) with lenses which focus images. Box Jellies can swim to shelter, food, and other box jellies. Close examination of their eyes (recently reported in Nature) demonstrates the “sensor dense” strategy at work. The animal has several stalks, with 6 eyes on each stalk. One eye per stalk is large and forms blurry images. A second eye on each stalk is smaller, but has an iris to adjust for ambient light. Four additional eyes are simple, non-focusing light sensors.

Note the following features comparable to the “sensor dense” approach:
1. Several different kinds of light sensors are used.
2. Simple light sensors were not “thrown away” when the more complex eyes evolved – instead they continue to function in the 6-eye system.
3. Simple arbitration – like other jellyfish, box jellies have no brain at all. Instead, they have a network of nerve cells distributed evenly over their body. In the light, the multiple eyes make up for the lack of a central brain.

But this is different from our usual assumptions about robotics. It is usually assumed that you should pair lots of computers with lots of sensors. If you have limited computing power, you should limit sensors. But this clearly isn’t what biology does – in fact it appears to do the opposite. We find small numbers of eyes in the most advanced animals, e.g. ourselves.

Our equivalent to the Box Jelly in Robo Monster is use of a variety of “eyes”:

1. Light sensors scattered over all the body panels of the robot, detecting ambient light and recording absolute lux levels.
2. Low-resolution (160×120) webcams detecting motion and “optic” flow in all directions
3. A few high-resolution stereo cameras.

Monday, May 23, 2005

Roboteqs back in action!

Over the last week we made major progress since the site visit – we got our Roboteq motor controllers back online! At the site visit, corrupted Flash memory in the Roboteqs prevented us from driving – the steering motor couldn’t be controlled. The cause was an upgrade gone wrong: when we tried to upgrade the Roboteq software, the system crashed and the controllers became unusable.

Fortunately, the creator of the Roboteq (Cosma) provided a custom-compiled program which brought the Roboteqs back from the dead and allowed them to be upgraded. We tested them last week and got excellent positioning of the steering brake motor.

Now, we’re going to go back to the site visit area and try to run our GPS waypoint follower program.

Published by pindiespace

See http://www.plyojump.com for more
