Interview with David Wettergreen: Part II
Astrobiology Magazine interviewed David Wettergreen, an associate research professor with Carnegie Mellon University’s Field Robotics Center. In this, the second segment of a four-part interview, Wettergreen talks about the robot Nomad, which began its career as a fossil-hunter in Chile’s Atacama Desert, and later was sent to Antarctica to search for meteorites.
Astrobiology Magazine: You’ve done a lot of work creating virtual environments for operating robots by remote control. One of the first times you did this was with a wheeled robot called Nomad. What were your goals on that project?
David Wettergreen: Nomad was a robot that was originally conceived as a lunar rover for a commercial mission to the moon, to explore the south polar region. One of the questions was, Can you really go a couple hundred kilometers (roughly 125 miles) on the moon with a robot? Another was, Would people on a low-bandwidth communication link be able to see what was going on? Would they have that feeling like they were on the moon driving this robot around? Because you have three seconds of time delay for a signal to get from the moon to Earth, plus your reaction time, plus three seconds to send back a command – so with teleoperation you can get into a lot of trouble.
After Nomad was built at Carnegie Mellon, in 1997 we took it to the Atacama Desert. At that point I was working for NASA, at Ames Research Center, in what is now the Intelligent Robotics Group. We were developing virtual interfaces, huge-screen projections onto curved surfaces, using imagery to give panoramic views. We created something we called the Virtual Dashboard that showed a speedometer, a trip odometer, the robot’s roll and pitch, and a graphical representation that showed what direction it was pointing and where its cameras were looking. The conceptual model was to create the experience of driving your car – there’s the dashboard in front of you, and when you look around you have a panoramic view – but with a three-second delay.
Another goal was to show that a robot could do useful science. We put together a science experiment for Nomad where we simulated lunar operations. We had scientists teleoperate it and try to understand the geology of sites that they weren’t given any prior knowledge of. It was during those experiments that scientists directed Nomad to an outcrop that was, in fact, a Jurassic fossil bed and found a rock that they believed to be a fossil. At the time that was a pretty exciting result, because there was a conversation in the scientific community that said you’d never be able to find something like a fossil with a robot – that you wouldn’t be able to distinguish it well enough.
AM: How big was the fossil?
DW: A little bit bigger than a quarter. It was a stromatolite, fossilized cyanobacteria. The scientists saw something that looked like a fossil and had the structure of a fossil, but Nomad didn’t have the instruments to make a conclusive decision, so it was only a hypothesis. Nomad didn’t have a sampling arm at that point, so we actually had someone in Chile go and pick up the rock. Then in the geology laboratory it was sliced up and analyzed, and confirmed as a fossil.
AM: Where were you operating the robot from?
DW: Pittsburgh and California. And also Santiago. The virtual environment was something that we could set up in multiple places. We had interfaces set up in science museums, and we had over 10,000 people sit down and drive the robot. One of the science centers where we set it up had a theater with a voting system in the seats. So you had 100 people sitting there saying, "Go right, go right." In another experiment we had the imagery from the robot fed to a local cable television station, and people could dial in on the telephone and press 4 to go left and 6 to go right, driving the robot in Chile. That was in 1997.
After that I went off to Australia and was building underwater robots for a few years. But back in the Field Robotics Center, Red Whittaker and Dimi Apostolopoulos and the crew kept working on Nomad. It was winterized and gained an arm, and in 2000 it went to Antarctica, to a place called Elephant Moraine, where the ANSMET (Antarctic Search for Meteorites) project planned to search for meteorites that year. Nomad was sent there to find rocks on the ice and to try to discriminate between terrestrial rocks and meteorites. ANSMET began in 1976, and in 1984 the Mars meteorite, ALH84001, with the hypothesized martian nanobacteria, was found in Antarctica. More than 10,000 meteorites have been found there. It’s the best place to collect meteorites, from Mars, the moon and elsewhere.
AM: How did it do?
DW: It found five meteorites in total.
AM: And it knew they were meteorites?
DW: They were above its threshold for detecting meteorites and rejecting terrestrial rocks. In experiments at Elephant Moraine, it performed patterned searches and classified 42 rocks: 3 had a strong enough “meteorite” signature, 22 were classified as terrestrial rocks, and for the remaining 17 it couldn’t make a determination either way. It identified two additional meteorites serendipitously, during preliminary testing when the rover was driving around but not searching systematically. Nomad used visual imagery to discriminate by color, and the arm carried a spectrometer so it could do a little bit of compositional analysis to help it discriminate the two types of rocks.
AM: Did it look for a match with certain criteria typical of meteorites, and then say, “That’s a meteorite,” or did it look for characteristics of terrestrial rocks, and decide that if something wasn’t a terrestrial rock it must be a meteorite?
DW: A little bit of both. It estimated probabilities of both of those things, and then, depending upon how accurate you wanted it to be, it made a decision. Liam Pedersen developed a Bayesian network, which had a number of different input channels. It could look at texture and color and hue and structure; and if there was a spectrometer, at signal response in different wavelengths. All of those different channels fed into the belief network. Beforehand, Nomad had already looked at a number of training examples – lots and lots of known terrestrial rocks, and lots and lots of known meteorites – and it had trained its network to associate different probabilities with the inputs from different channels.
Nomad computes both the probability that a rock is terrestrial rock and the probability that it’s a meteorite. And then you pick a dividing line, which is a little bit gray, based on how many false positives and how many false negatives you’ll accept. If you don’t want to miss one, then you maybe accept ones that you’re a little bit in doubt about. If you never want to pick a rock that isn’t a meteorite, you might reject a meteorite because it didn’t look quite enough like a meteorite. We want a binary answer: yes or no. But what it really wants to tell you is, “Well, there’s a 62 percent probability it’s this and a 22 percent probability it’s that. That’s all I know.” And with that system we were able to show that this could be done automatically. Science autonomy has advanced quite a way since then.
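The decision rule Wettergreen describes – combine evidence from several channels into a posterior probability, then make a call only when that probability clears a threshold, with a gray zone in between – can be sketched as a toy naive Bayes classifier. All the channel likelihoods and threshold values below are made-up illustrations, not Nomad’s actual model or data:

```python
# Toy sketch of threshold-based rock classification in the style
# Wettergreen describes. Each observation channel (color, texture,
# spectrometer response, etc.) contributes a pair of likelihoods:
# P(observation | meteorite) and P(observation | terrestrial).
# All numbers here are illustrative, not Nomad's actual values.

def posterior_meteorite(channel_likelihoods, prior=0.5):
    """Naively combine per-channel likelihoods into P(meteorite | obs)."""
    p_met, p_ter = prior, 1.0 - prior
    for like_met, like_ter in channel_likelihoods:
        p_met *= like_met
        p_ter *= like_ter
    return p_met / (p_met + p_ter)

def classify(channel_likelihoods, threshold):
    """Call 'meteorite' or 'terrestrial' only when the posterior is
    decisive; otherwise leave the rock unclassified ('unknown')."""
    p = posterior_meteorite(channel_likelihoods)
    if p >= threshold:
        return "meteorite"
    if p <= 1.0 - threshold:
        return "terrestrial"
    return "unknown"  # posterior falls in the gray zone: no call

# Example: both hypothetical channels lean toward "meteorite".
obs = [(0.8, 0.3), (0.7, 0.4)]  # posterior works out to about 0.82
print(classify(obs, threshold=0.75))  # -> meteorite
print(classify(obs, threshold=0.9))   # -> unknown (stricter threshold)
```

Raising the threshold trades false positives for false negatives, exactly the dividing-line choice described above: the same 0.82 posterior is a confident “meteorite” at a 0.75 threshold but an abstention at 0.9.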