Can a Robot Find a Rock?

An interview with David Wettergreen: Part IV

In the final segment of our four-part interview with David Wettergreen, an associate research professor at the Carnegie Mellon University Field Robotics Center, he explains why it’s not so easy for a robot to find a rock.

The semi-autonomous robotic rover Zoë hunts for rocks in Chile’s Atacama Desert.
Credit: Carnegie Mellon University

Astrobiology Magazine: The Field Robotics Center is doing ongoing work to develop robots that can find and study interesting science targets, without human direction. You’re the PI on a project called Science on the Fly, and I read that among the things you’re trying to get a robot to do reliably is to detect rocks. Is that really a difficult thing to do?

David Wettergreen: Yeah, distinguishing a rock from a soil is a surprisingly hard problem for a robot.

AM: What’s hard about it? It seems pretty obvious to me.

DW: Isn’t that extremely frustrating? The simple answer is that soil is made of busted-up rocks, and in the difficult cases you have a rock of the same composition as the soil it’s sitting on, which happens quite often in the Atacama Desert, where we’ve been doing this work. The texture of soil and the texture of a rock are fairly similar, if you think about it. You get soils on top of your rocks, you get rocks partially buried in the soil. It’s not like you have black rocks sitting on a white sandy beach.

AM: But rocks have clearly defined edges, don’t they?

DW: Maybe. I can show you a lot of images where edges are not a good discriminator. What we’ve found – this has taken a couple of years of work to figure out – is that you have to combine all of these features, like the color, the texture, the 3D structure, and the edges; and you have to do that at multiple scales, because you have small rocks and big rocks, and big rocks sometimes look like a bunch of smaller rocks put together. Sometimes a clod of dirt looks like a rock but it’s actually just a clod of dirt. So if we put all these different features together, and set a size threshold – say, all the rocks bigger than a quarter – our method can find about 85 percent of the rocks in an image.
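The idea of fusing several weak cues and applying a size threshold can be sketched in a few lines. This is a hypothetical illustration, not the team’s actual detector: the `Candidate` fields, the equal weighting, and all the numbers are invented for the example.

```python
# Hypothetical sketch of multi-cue rock detection: fuse color, texture,
# 3D-structure and edge scores per candidate region, then apply a size
# threshold (~ the diameter of a quarter). All values here are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    size_cm: float       # approximate diameter of the candidate region
    color: float         # each cue scored in [0, 1]: how "rock-like" it looks
    texture: float
    structure_3d: float
    edges: float

def rock_score(c: Candidate) -> float:
    """Fuse the four cues into one confidence value (equal weights assumed)."""
    return (c.color + c.texture + c.structure_3d + c.edges) / 4.0

def detect_rocks(candidates, min_size_cm=2.4, threshold=0.5):
    """Keep candidates above the size threshold whose fused score is high.
    No single cue decides: a dirt clod with rock-like color and texture is
    rejected because its 3D and edge cues are weak."""
    return [c for c in candidates
            if c.size_cm >= min_size_cm and rock_score(c) > threshold]

scene = [
    Candidate(5.0, 0.8, 0.7, 0.9, 0.6),   # clear rock: all cues agree
    Candidate(4.0, 0.6, 0.5, 0.2, 0.4),   # dirt clod: weak 3D and edge cues
    Candidate(1.0, 0.9, 0.9, 0.9, 0.9),   # rock, but below the size threshold
]
print(len(detect_rocks(scene)))  # 1
```

Combining cues this way is what makes the hard cases tractable: any one feature fails somewhere, but a clod of dirt rarely fools all four at once.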

We’ve also taken spectra of different types of rocks and we look for patterns in the spectra to try to identify minerals so we can classify rock types. In the experiment coming up this year we’re going to try to do basic geologic mapping with the robot using planning software that Dave Thompson is developing. It will begin exploring a region, finding rocks and measuring their size, distribution and composition; and it will traverse along until it starts finding a different distribution of rocks. And then it will begin to map the boundary between those two distributions. It could just go back and forth, and back and forth, and back and forth over a large pre-defined region, but that turns out not to be a very information-efficient way of doing it. If you were a geologist, you’d probably start walking down the boundary. Maybe you’d wiggle a little bit, but you wouldn’t go 100 meters to this side and then back 100 meters to that side.

Zoë under construction at Carnegie Mellon University.
Credit: Carnegie Mellon University

So the system follows an information-optimal approach. It looks at where it knows the least about the geologic boundary and tries to gain more information about it. That gets the greatest efficiency in terms of collecting scientific data for the scientists.
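The core of "look at where it knows the least" is just an argmax over current uncertainty. A toy sketch, with invented location names and uncertainty values:

```python
# Toy sketch of information-optimal target selection: among candidate
# locations, go to the one where the boundary estimate is least certain.
# Location names and uncertainty values are illustrative only.

def next_target(uncertainty_by_location: dict) -> str:
    """Return the location whose geologic-boundary estimate is least certain."""
    return max(uncertainty_by_location, key=uncertainty_by_location.get)

uncertainty = {
    "north_transect": 0.2,   # boundary already well constrained here
    "mid_transect": 0.9,     # almost nothing known: highest expected gain
    "south_transect": 0.4,
}
print(next_target(uncertainty))  # mid_transect
```

After each measurement the chosen location’s uncertainty drops, so the robot naturally works its way along the boundary rather than sweeping the whole region.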

It’s going to operate using a high level of autonomy to map out geologic boundaries. We have this beautiful geologist’s map on the wall in our lab: here are these limestone beds, there’s shale here, and then there’s all this basalt overlaying something else. We want to see if we can do that to some degree of accuracy with a robotic system.

AM: How will you know whether the robot’s created an accurate map?

DW: We’ll map a region three times. The first time, we’ll use that dumb strategy of go everywhere, cover everything. Then we’ll apply a strategy where the robot has finite time and resources, we pick a path for it to follow to a destination and it samples at some regular interval. And then we’ll do it a third time using this information-optimal strategy where we tell the robot, For the time and resources you have, make your best decisions about where you’re going to get the most information. Then we’ll give the three data sets to a geologist and say, Which one is the best? Which one would you choose? What are the problems? Overall, in our early experiments, they liked the data set that had the most information content.
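The three strategies in the experiment can be contrasted with a small sketch. This is an invented one-dimensional toy, not the project’s planning software: the traverse length, budgets, and the greedy widest-gap heuristic standing in for "information-optimal" are all assumptions.

```python
# Toy comparison of the three survey strategies along a 1D traverse of
# N candidate sample sites. The greedy widest-gap rule is a stand-in for
# a real information-optimal planner; all parameters are invented.
N = 100  # e.g. one candidate sample site per metre of traverse

def exhaustive(budget):
    """Dumb strategy: cover everything, regardless of budget."""
    return list(range(N))

def fixed_interval(budget):
    """Finite budget: sample at regular intervals along a fixed path."""
    return list(range(0, N, N // budget))[:budget]

def info_optimal(budget):
    """Greedy sketch: repeatedly sample the midpoint of the widest
    unsampled gap, i.e. where the map is currently least constrained."""
    samples = [0, N - 1]
    while len(samples) < budget:
        samples.sort()
        width, i = max((samples[j + 1] - samples[j], j)
                       for j in range(len(samples) - 1))
        samples.append(samples[i] + width // 2)
    return sorted(samples)

# Exhaustive coverage ignores the budget; the other two respect it.
print(len(exhaustive(10)), len(fixed_interval(10)), len(info_optimal(10)))
# 100 10 10
```

The point of the experiment is exactly this trade-off: for the same ten samples, which plan leaves the geologist with the most informative map?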

AM: Over time, the trend has clearly been toward robots that can operate more and more autonomously, that can “reason” more and more like humans. As plans to return to the moon and possibly to send humans to Mars have developed, there’s been a renewed debate about what can be done with robots and what has to be done by people. What’s your view on this?

This 2006 NASA-sponsored Desert RATS exercise in the Arizona desert studied how humans and robots might work together to explore an unknown world.
Credit: Henry Bortman

DW: I think one of the main roles for robots is to act as scouts. For reasons of risk and cost and efficiency and the value of human life, it makes sense to send robotic systems most places first before you send people. Even in the exploration of the south pole of the moon, many, many of the things that need to be done to prepare for people could be done by robotic systems. The motel could be set up with the lights on when you arrive. So I think robots are very important to act as pathfinders.

I look at robots as tools in some sense. If it’s more efficient to have a robot go get something for you and bring it back, you do that. If it’s more efficient for you to sit in your comfortable environment and operate the robot remotely, you do that. So robots act as assistants and surrogates. I think it’s an open question whether a scientist walking around on the moon has the robot going ahead, making sure everything is safe and providing preliminary information to help decide where to go next, or whether the robot follows behind, carrying the tool belt and handing over the hammer. Actually, I think doing both is a reasonable strategy.

Most people are not going to visit the moon. So for them the experience is partly through robotic systems: the images and the data they can provide. But the experience is also what the people who do go to the moon or to Mars come back and say to us. You are never going to have a robot tell you what it was like. You need both robots and people to explore.

AM: Last question: If somebody said, “Here’s a blank check, go build the robot of your dreams,” what would you have it do?

DW: Hmmm. I guess I don’t spend enough time daydreaming. I think I would head for the polar regions of Mars. I’ve recently been looking at pictures of the contact between the ice sheets, the water and carbon-dioxide ice and the terrain. I’d go look there. I don’t know what’s going to be there, whether it’s life or exposed history or layered deposits, but I think it’s a fascinating place, and a place where robots can travel long distances, take measurements and images, collect data for us to interpret, and produce exciting discoveries.

That’s what I’d do today. Tomorrow I’d probably do something different.