Amplified Intelligence

The AI Problem, Interview with Ken Ford

Dr. Ken Ford, Director, IHMC
Image Credit: Pensacola News Journal / IHMC

The Institute for Human & Machine Cognition (IHMC) was established in 1990 as an interdisciplinary research unit of the University of West Florida. Since that time, IHMC has grown into one of the nation’s premier research institutes, with more than 115 researchers and staff investigating a broad range of topics related to understanding cognition in both humans and machines, with a particular emphasis on building computational tools to leverage and amplify human cognitive and perceptual capacities.

The Institute’s Director, Dr. Ken Ford, was recently selected by President Bush to serve a six-year term on the National Science Board. His nomination was confirmed by the United States Senate in March. Ford is the author of over a hundred scientific papers and five books. Ford’s research interests include: artificial intelligence, cognitive science, human-centered computing, and entrepreneurship in government and academia. He received a Ph.D. in Computer Science from Tulane University. In January 1997, Dr. Ford was asked by NASA to develop and direct its new Center of Excellence in Information Technology at the Ames Research Center in the heart of Silicon Valley. He served as Associate Center Director and Director of NASA’s Center of Excellence in Information Technology.

Ford’s work spearheads a vision of how humans interact with machines (and vice versa). That vision centers not so much on the traditional field of Artificial Intelligence as on the concept of Amplified Intelligence: not just how to make machines behave more like humans, but how to help humans work and play in concert with their new and inevitably busy machine environments.

Astrobiology Magazine (AM): The IHMC research agenda broadly seems to cover robotics, cognition and simulations. Are there parts of machine intelligence that your research institute doesn’t cover today, but that you see as growth areas?

Ken Ford (KF): Don’t forget that second letter is ‘H’. Although a lot of our research could be categorized as AI, and five of our researchers are AAAI (American Association for Artificial Intelligence) Fellows, IHMC is not a traditional machine intelligence laboratory. The focus and theme of our research is what has become known as human-centered computing which, in a nutshell, is about fitting technology to people instead of fitting people to technology. The human is part of the system, and it is the performance of the whole system, including the human, that we are interested in. This requires that machines should be designed to fit us physically, cognitively, and perhaps even socially.

Computer rendering of the semi-autonomous mission plans now taking place using the twin Mars rovers, Spirit and Opportunity.
Credit: Maas/NASA/JPL

We think of AI as meaning "Amplified Intelligence." The interesting thing is that many traditional AI technologies in fact are being used in just this way. We like to refer to it as building cognitive prostheses, computational systems that leverage and extend human intellectual capacities, just as eyeglasses are a kind of ocular prosthesis. Building cognitive prostheses is fundamentally different from AI’s traditional Turing Test ambitions — it doesn’t set out to imitate human abilities, but to extend them. And yet (unlike, say, the ambition of developing artificial insects) it keeps human thought at the center of our science.

The "prostheses" metaphor emphasizes the importance of designing systems that fit human beings. I am now typing on a computer that I regard as my cognitive prosthesis, if I lost it I would be lost but unfortunately it doesn’t fit me very well. It knows almost nothing about humans, whereas I have to know quite a lot about it. I also had to adapt myself to use it, for example to type on its keyboard: again, fitting humans to machines, rather than machines to humans.

The design and fit of these computational prostheses requires a broader interdisciplinary range than has traditionally been associated with AI work, including computer scientists, cognitive scientists, physicians, social scientists of various stripes, and even some philosophers.

Current active research areas at IHMC include: adjustable autonomy, advanced interfaces and displays, biologically-inspired robotics, cognitive work analysis, communication and collaboration, computer-mediated learning systems, expertise studies, human strength and endurance amplifying devices, intelligent data understanding, knowledge modeling and sharing, knowledge representation, natural language processing, software agents, work practice simulation.

As you can see, this covers more than traditional AI, which is itself now a huge subject, and no single institution could be expected to encompass all of it.

AM: In your opinion, how well do the machine intelligence problems (like navigation, data-mining, or simulations with agents) map to the basic computer science [CS] problem of efficient ‘search’?

KF: Wow, efficient search is a "basic computer science problem"? Not long ago, search was being suggested as a defining characteristic of AI to distinguish it from ‘mainstream’ CS.

But to return to the question: search is certainly a central technique in AI, but the search spaces arising in AI are often impossibly huge, and a more interesting aspect is not so much how to search them efficiently as how to re-cast problems so that the search space itself is reduced in size. Searching is what you do when you can’t think of anything smarter.
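To make Ford’s point concrete, here is a small illustrative sketch (not from the interview itself): the same toy problem solved first by brute-force search over all candidates, and then by re-casting it so that the search space shrinks from quadratic to linear. The problem and function names are my own, chosen for illustration.

```python
# Toy problem: find two distinct numbers in a list that sum to a target.

def pair_by_search(nums, target):
    """Brute force: examine every pair -- an O(n^2) search space."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return nums[i], nums[j]
    return None

def pair_by_recasting(nums, target):
    """Re-cast the problem: for each x, ask whether (target - x) has
    already been seen. A set-membership test replaces the inner search
    loop, so only O(n) candidates are ever examined."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return target - x, x
        seen.add(x)
    return None
```

Both functions find the same answer; the second never searches pairs at all, because reformulating the question ("have I seen the complement?") eliminates most of the space before any searching happens.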

NASA is developing the Wearable Augmented Reality Prototype (Warp), a personal communication device. The voice-activated wearable computer allows easy, real-time access to voice communication, pictures, video, people and technical reports. "It wasn’t so much the electronics but the packaging that ended up being the big unknown…" –JPL engineer Ann Devereaux. Image Credit: JPL

AM: Are there compelling technologies today that yield ‘autonomous’ robotics without much in the way of solving ‘cognition problems’? In other words, do mobility and a predetermined set of tasks fit better with a future model for robotics than, say, software that tries to figure out what to do given a whole new set of environmental inputs or equivalent sensory functions?

KF: Yes and no. Cognition isn’t all-or-nothing. The moral of recent AI work is that this is a continuum. Robots can be made adjustably autonomous, so they explore by themselves but report back if they need help or find something interesting (using the criteria which we provide, of course). We can give them a high-level set of tasks but leave the details to the robot.
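The adjustable-autonomy pattern Ford describes can be sketched in a few lines. This is a hypothetical illustration, not IHMC code: the robot works through high-level tasks on its own and contacts the human only when the help or interest criteria we supplied actually fire.

```python
# Hypothetical sketch of adjustable autonomy: execute tasks autonomously,
# escalating to a human operator only when caller-supplied criteria match.

def run_mission(tasks, needs_help, is_interesting, ask_human):
    """Return a log of (status, task, human_reply) tuples."""
    log = []
    for task in tasks:
        if needs_help(task):
            # Robot cannot proceed alone; ask the operator and record advice.
            log.append(("escalated", task, ask_human(task)))
        elif is_interesting(task):
            # Report the find, but keep going without waiting for a reply.
            log.append(("reported", task, None))
        else:
            # Handled entirely autonomously; no contact needed.
            log.append(("autonomous", task, None))
    return log
```

The human supplies the criteria (`needs_help`, `is_interesting`) and the escalation channel (`ask_human`); sliding autonomy up or down is just a matter of loosening or tightening those criteria, which matches the continuum Ford describes.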

Basically, what it comes down to is that making a robot that can plan its own activities in more or less detail is just one task among many, but one that AI has been reasonably successful at solving: so it makes sense to let robots do some planning, since they are closer to the problems than we are.

AM: One priority in many planetary science missions involves data-mining from image collections. An example is crater counts from remote-sensing images [to date another planet’s exposed surface, like Mars], or signal detection looking for extrasolar planets in large orbital data sets and telescope observations. In your experience, are these tasks approachable with a general pattern-recognition strategy (the black box), or should they feature customized algorithms? (Most such fields seem to have their own methods today that do not cross disciplines.)

KF: We do not at this time have an opinion as to which data-analysis methods should be used for the variety of problems of these kinds that may arise in manned and robotic exploration; in many cases the character of the problems will depend on technology yet to be developed.

AM: Some have advocated robotic exploration to solve many problems in planetary exploration, ranging from extreme environments and cost, to command transmission delays over great distances. Is there an appealing model for the next generation of machine explorers compared to human explorers?

KF: Space exploration will need both humans and robots. There is a better way to pose your question: Where will the human explorers reside in the exploration process? One can imagine the human involvement being from Earth, from Earth orbit, from a libration point, from an orbit around the planet to be explored, or on the planet’s surface. Arguments can be made for each option or, more compellingly, some combination of these in sequence.

AM: One proposal for landing on Mars involves highly redundant units which spring from a kind of mother pod, then go about some sort of sharing of data as they fan out over the landscape. Is this a model that has been tried for other problems here on Earth before? For instance, in exploring other extreme environments?

KF: I am not aware of any such studies focusing particularly on extreme environments, but the use of a ‘sensor web’ of many relatively simple sensors to collect data in a complex environment has become quite common. They have the advantage of not being dependent on the functioning of any single sensor.

AM: On a lighter note, many in the public are being introduced to household robotics through one interface, the ‘vacuuming’ robot (or carpet drone). This seems to some almost like a bug or rodent model for what machines might introduce culturally. Given the frustrations that many have expressed with their traditional interfaces to other convenience appliances, whether programming a VCR or using a personal computer, do you think there are other cultural models to compete with the vacuum robot?

Convenience appliances like the home vacuum robot, when they work, are "visibly invisible," like spectacles. "…much like eyeglasses, we don’t notice them: we just see the world better through them…" –K. Ford
Image Credit: Time

KF: That common tale of the impossibility of programming VCRs is out of date; check out a TiVo, for example. The interface times are changing, but nobody remarks on it, because the whole point of good interface design is that it becomes invisible. Modern cars have lots of processors in them, but they don’t feel like VCRs. When our artifacts become more like real prostheses, they feel like part of us rather than something we are obliged to fight or wrestle with. They make us feel empowered, rather than imposing themselves on our attention as something for us to deal with. So, much like eyeglasses, we don’t notice them: we just see the world better through them than we saw it previously.

AM: Or, put another way, how will humans interact with robots in the future: as a housemate, a slave, or an entertainer?

KF: All and none of the above. Those are all human/human interaction role models. Human/machine relationships will be different. Machines will integrate themselves into our environment, so that it fits us better in new kinds of ways, rather than being imitation human beings.

An intelligent kitchen will be one where, say, you can simply ask the room to show you a recipe for vanilla pudding, and maybe choose to have it speak to you with Julia Child’s voice (just as you can choose which of various Steinways your electric piano should sound like), or to call you when the roast is ready to be basted.

It’s amazingly smart, this kitchen, but it’s still a kitchen.

AM: Your research group looks at some issues in biologically-inspired locomotion, such as robots that run, swim, glide or crawl. Is there a natural model that is particularly challenging to mimic using machine intelligence? For instance, is flying inherently more difficult than crawling for a machine to accomplish well?

KF: No; if anything, crawling is harder: obstacles, limited line of sight, variable surfaces, and so on. We already have a lot of autonomous, goal-directed flying machines: smart ordnance.

AM: Can you describe some of the problems you hope to tackle in your study of NASA’s future space exploration programs?

KF: Our charter is rather specific. We are to provide NASA an independent assessment of (a) the technologies required to meet the exploration vision and (b) the processes by which to engage the best of the research, development, and engineering communities to provide the necessary developments.