Build Your Own Borg: Sort of
While using a joystick or wheel to drive a rover might at first seem appealing, the delay of up to 20 minutes in transmitting each signal from Earth to Mars would make a true drive more like a very slow crawl. To work around this delay, more autonomy has been built in: an entire day's worth of driving is loaded as a single command sequence.
The core of this capability is sophisticated hazard avoidance and remote decision-making. While avoiding hazards ranks highest in the decision tree, the ability of a rover to direct itself to interesting places becomes important, particularly on longer drives.
A team of scientists has set out to combine human mobility with some of the latest off-the-shelf hardware to study what a remote geologist might do on another planet. The team calls their system the "Cyborg Astrobiologist". This half-machine, half-human seeks out and prioritizes changes in its survey area.
Their recent accounts in the field are abridged here as a case study of what cybernetics might deliver. Patrick McGuire is the lead author describing a mission that included robotics experts, geologists, and a wearable computer equipped with image-analysis software to guide its pointing. What follows is an excerpt of his longer account of field experiences so far with the "Cyborg" project.
We have developed and field-tested a "Cyborg Astrobiologist" system. Its hardware now includes:
an SV-6 Head Mounted Display (from Tekgear in Virginia, via the Spanish supplier Decom in València) with native pixel dimensions of 640 by 480 that works well in bright sunlight,
a SONY 'Handycam' color video camera (model DCR-TRV620E-PAL),
a thumb-operated USB finger trackball from 3G Green Green Globe Co., resupplied by ViA Computer Systems and
a small keyboard attached to the human's arm,
a tripod for the camera, and
a Pan-Tilt Unit (model PTU-46-70W) from Directed Perception in California with a bag of associated power and signal converters.
The programming for this Cyborg Astrobiologist/Geologist project was initiated with the SONY Handycam in April 2002. The wearable computer arrived in June 2003, and the head mounted display arrived in November 2003.
We now have a reliably functioning human-plus-hardware-plus-software Cyborg Geologist system, which is partly robotic thanks to its pan-tilt camera mount. This robotic extension allows the camera to be pointed repeatedly, precisely, and automatically in different directions.
Based upon the performance of the Cyborg Astrobiologist system during the first mission to the outcropping cliffs near Rivas Vaciamadrid in March 2004, we concluded that the system was paying too much attention to the shadows cast by the three-dimensional structure of the cliffs. In the coming months we hope to improve the system so that it detects shadows and pays less attention to them. We also hope to upgrade the system to include image segmentation based upon micro-texture, and adaptive methods for summing the uncommon maps in order to compute the interest map.
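To make the uncommon-map and interest-map idea concrete, here is a rough sketch, not the authors' actual code: each pixel is scored by how rare its intensity bin is in a channel's histogram, the per-channel rarity maps are summed into an interest map, and a crude brightness mask keeps shadowed pixels from dominating. The brightness threshold, bin count, and use of raw RGB channels are all assumptions for illustration.

```python
import numpy as np

def shadow_mask(gray, threshold=0.15):
    # Flag pixels dark enough to be shadow rather than rock (threshold is a guess).
    return gray > threshold  # True where the pixel is NOT shadow

def uncommon_map(channel, valid, bins=16):
    # Rank each valid pixel by how rare its intensity bin is in this channel.
    hist, _ = np.histogram(channel[valid], bins=bins, range=(0.0, 1.0))
    idx = np.clip((channel * bins).astype(int), 0, bins - 1)
    rarity = 1.0 - hist[idx] / max(valid.sum(), 1)
    return np.where(valid, rarity, 0.0)  # shadowed pixels get zero interest

def interest_map(image):
    # Sum per-channel uncommon maps; image is float RGB in [0, 1].
    gray = image.mean(axis=-1)
    valid = shadow_mask(gray)
    return sum(uncommon_map(image[..., c], valid) for c in range(image.shape[-1]))

# Point the pan-tilt camera at the most 'uncommon' pixel of a test image.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
imap = interest_map(img)
row, col = np.unravel_index(np.argmax(imap), imap.shape)
```

The maximum of the interest map would then serve as the pointing target for the pan-tilt unit.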
Based upon the significantly improved performance of the Cyborg Astrobiologist system during the second mission to Rivas in June 2004, we conclude that the system is now sufficiently debugged to produce studies of the utility of particular computer-vision algorithms for geological deployment in the field.
We have outlined some possibilities for improving the system based upon the second field trip, particularly in the systems-level algorithms needed to drive the Cyborg or a robotic system more intelligently in its approach toward a complex geological outcrop. These possible systems-level improvements include: a better interest-map algorithm, with adaptation and more layers; hardware and software for intelligent use of the camera's zoom lens; a memory of the image segmentation performed at greater distance or lower magnification of the zoom lens; and high-level image-interpretation capabilities.
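One way to read "adaptation" in the interest-map algorithm is as a weighted sum over uncommon-map layers (color channels, texture, and so on), with the weights nudged toward layers whose pointing targets the human geologist found informative. This is a hypothetical sketch only; the feedback scores, learning rate, and layer set are invented for illustration and are not the project's documented method.

```python
import numpy as np

def adaptive_interest(uncommon_maps, weights):
    # Weighted sum of per-layer uncommon maps into one interest map.
    return np.tensordot(np.asarray(weights), np.stack(uncommon_maps), axes=1)

def update_weights(weights, layer_scores, lr=0.2):
    # Nudge weights toward layers rated informative (scores in [0, 1]),
    # then renormalize so the weights still sum to one.
    w = np.asarray(weights) + lr * (np.asarray(layer_scores) - np.asarray(weights))
    return w / w.sum()

# Three stand-in uncommon-map layers; the geologist rates layer 0 most useful.
rng = np.random.default_rng(1)
maps = [rng.random((32, 32)) for _ in range(3)]
w = np.full(3, 1 / 3)
w = update_weights(w, np.array([1.0, 0.0, 0.0]))
imap = adaptive_interest(maps, w)
```

Over repeated feedback rounds, such a scheme would let the interest map lean on whichever layers have been most productive at a given site.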
Performing further offline analysis and algorithm development on the imagery obtained at Rivas Vaciamadrid: several parameters of the algorithms need to be tested for optimality, and further enhancements of the algorithms could be made.
Optimizing the image-processing and robotic-control code of the current Cyborg Astrobiologist system for speed and memory utilization, and further testing the existing Cyborg geological exploration system at other geological sites with different types of imagery.
Speeding up algorithm development by changing the project from being partly a hardware project, with cameras, pan-tilt units, and fieldwork, to being entirely a software project, without robotically obtained image mosaics and without robotic interest-map pointing. With such a change in focus, our algorithms could be significantly enhanced by studying many more types of imagery: for example, from human geologists' field studies on the Earth, from robotic geologists' field studies on Mars, and from orbiter or flyby studies of our solar system's moons.
What the Mars MER team has achieved is truly amazing:
Firstly, the rovers can move to points 50-150 meters away in one sol, with autonomous obstacle avoidance enabled for the uncertain or dangerous parts of the journey.
As of July 4, 2004, this was taking 4-5 hours per sol for the mission team to complete, rather than the 17 hours per sol that it took at the beginning of the MER missions.
Such capabilities for semi-autonomous teleoperated robotic 'movement and discovery' are a significant leap beyond the capabilities of the previous Mars lander missions of Viking I and II and of Pathfinder and Sojourner.
Nonetheless, we would like to build upon this great success of the MER rovers by developing enhancing technology that could be deployed in future robotic and/or human exploration missions to the Moon, Mars, and Europa.
One future mission deserves special discussion in terms of technology development: the Mars Science Laboratory, planned for launch in 2009.