Campus News

Worlds to Explore: Autonomy Challenges for Human Space Flight

Story posted September 24, 2003

What will it take to send astronauts on a mission to Mars?

The answers are time and technology (and of course money, but we'll focus on technology here). David Kortenkamp, a computer scientist in the Automation, Robotics and Simulation Division of the Johnson Space Center, is one of those hard at work to meet the myriad technological challenges presented by the desire to send people farther and farther into space.

Kortenkamp visited Bowdoin recently to talk about "Worlds to Explore: Autonomy Challenges for Human Space Flight."

One of the primary areas of NASA research in his division concerns how to make space flights more autonomous, meaning that astronauts would spend less time worrying about the maintenance of hardware and support systems and more time working on science. Astronauts are relieved of these responsibilities when computers are smart enough to handle them, so much of NASA's research is in artificial intelligence (AI). AI allusions are easily drawn from pop culture, and Kortenkamp referred to one area of research as the "HAL 9000 kind of AI" - the kind that takes care of monitoring and controlling systems.

Space stations don't currently have regenerative life support systems, which means that everything that is needed - food, water, etc. - must be carried into space. Having to carry all the supplies makes longer trips cost prohibitive. For example, carrying all of the water that would be needed to travel to Mars and back would add a lot of weight to the mission, and at NASA, Kortenkamp said, "weight equals cost."

"If we really want to do longer missions on space stations, we need to come up with these regenerative life support systems," he said. A truly regenerative system would be multilayered.: It would allow food to be produced in space and would allow water and oxygen to be produced or recycled.

In a space station of the future, there would be one area in which the crew lived and another (a biomass area) in which food could be grown. The crew's breathing would deplete oxygen and produce carbon dioxide. The carbon dioxide would then be gathered; some would be transformed into methane gas and then into fuel, while some would be sent to the biomass area, where plants would use it in their life processes. The plants would produce oxygen as a byproduct, and that oxygen could then be sent into the crew area to breathe. There would also need to be a production area in which the harvested plants, such as wheat, could be transformed into food. Alongside these systems would be one that transforms "gray water," such as bath water and water from washing dishes, and "dirty water" (urine) into clean water.
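
To make the bookkeeping behind such a loop concrete, here is a toy, single-day mass balance for a closed loop like the one described. Every rate, split, and efficiency in this sketch is an assumed illustrative value, not a NASA figure.

```python
# A toy, single-day mass balance for a closed-loop life support system.
# All rates below are illustrative assumptions, not NASA figures.

CREW_SIZE = 4
O2_PER_PERSON_KG = 0.84       # assumed daily O2 consumption per crew member
CO2_PER_PERSON_KG = 1.0       # assumed daily CO2 production per crew member
WATER_PER_PERSON_KG = 3.5     # assumed daily water use per crew member

CO2_TO_BIOMASS_FRACTION = 0.7     # assumed split; the rest becomes methane/fuel
PLANT_O2_YIELD = 0.7              # assumed kg of O2 produced per kg of CO2 consumed
WATER_RECOVERY_EFFICIENCY = 0.95  # assumed fraction of waste water reclaimed

def daily_balance():
    co2_produced = CREW_SIZE * CO2_PER_PERSON_KG
    co2_to_plants = co2_produced * CO2_TO_BIOMASS_FRACTION
    co2_to_fuel = co2_produced - co2_to_plants

    o2_needed = CREW_SIZE * O2_PER_PERSON_KG
    o2_from_plants = co2_to_plants * PLANT_O2_YIELD

    water_used = CREW_SIZE * WATER_PER_PERSON_KG
    water_reclaimed = water_used * WATER_RECOVERY_EFFICIENCY

    return {
        "O2 shortfall (kg)": o2_needed - o2_from_plants,
        "CO2 diverted to fuel (kg)": co2_to_fuel,
        "make-up water needed (kg)": water_used - water_reclaimed,
    }

if __name__ == "__main__":
    for item, kg in daily_balance().items():
        print(f"{item}: {kg:.2f}")
```

Even this toy version shows why the planning problem is hard: change any one assumed rate and the shortfalls in the other loops shift with it.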

These systems are not only complicated but also interrelated. Detailed planning is needed to know what foods to plant, how much to plant, and when, in order to provide the crew with the right amount of oxygen and food. Additional planning is needed to gauge how much oxygen and water the crew would consume and when fresh oxygen and clean water would be needed.

The advanced life support system NASA has developed has its roots in a robot called Shakey, created in 1969. This robot was able to perceive cues about the world, build models in its "brain" of how the world works, and then take an action based on those models. The problem was that its reactions were delayed by the time it took the computer to construct a model. In 1986, a new kind of robot was developed that could take in input about the world from a sensor and react immediately based on its perceptions.

"What we really wanted was a system that would combine deliberation...with reactivity," Kortenkamp said. So NASA created a system with several layers to try to get the best of both worlds.

One layer of the system knows how to perform very specific skills very quickly, for example, how to turn a doorknob. A second layer knows how to break tasks down into subtasks and how to order them. The third layer is a planning system that can plan more complex actions and allocate resources to accomplish the tasks (which are then carried out by the other layers of the system).
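
As a rough illustration of how such a layered design fits together, the sketch below pairs a fast, reactive bottom layer (the Shakey-era lesson in reverse) with slower, more deliberative layers above it. The class names, the lookup tables, and the doorknob task are all assumptions made for the example; the talk did not describe NASA's actual implementation.

```python
# A minimal sketch of a three-layer control architecture of the kind
# described above. Names and structure are illustrative assumptions.

class SkillLayer:
    """Bottom layer: executes specific skills quickly and reactively."""
    def execute(self, skill: str) -> bool:
        print(f"executing skill: {skill}")
        return True  # a real system would report sensor-verified success

class SequencingLayer:
    """Middle layer: breaks a task into an ordered list of skills."""
    def __init__(self, skills: SkillLayer):
        self.skills = skills

    def run_task(self, task: str) -> bool:
        # A real sequencer chooses subtasks conditionally; this one
        # uses a fixed lookup table for illustration.
        plans = {"open door": ["grasp doorknob", "turn doorknob", "pull door"]}
        return all(self.skills.execute(s) for s in plans.get(task, []))

class PlanningLayer:
    """Top layer: deliberates over goals and allocates tasks."""
    def __init__(self, sequencer: SequencingLayer):
        self.sequencer = sequencer

    def achieve(self, goal: str) -> bool:
        # Deliberation (slow) happens here; reaction (fast) happens below.
        tasks = {"leave room": ["open door"]}
        return all(self.sequencer.run_task(t) for t in tasks.get(goal, []))

planner = PlanningLayer(SequencingLayer(SkillLayer()))
planner.achieve("leave room")
```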

NASA is already testing these systems.

The triple-layer planning and operation system was operated 24 hours a day for 18 months. (An article about the test and the system was published in the Spring 2003 issue of AI Magazine.) Currently operating is a simulation of an entire advanced life support system, accessible online so that others can test different algorithms in the system. Information is available at http://www.traclabs.com/biosim.

In another NASA test, four people were sealed in an airtight chamber for 90 days. The chamber in which they lived was connected to a wheat chamber, but there was only enough wheat to produce oxygen for one person, so they also used an air revitalization system. This system did not produce food but was a test of the air exchange.

NASA also did an 18-month test of a wastewater-to-potable-water system. Using their water recovery system, they were able to produce, from gray and dirty water, water cleaner than most people get out of their taps at home.

An advanced life support system is extremely complicated because plants, people and bacteria are constantly adapting to their changing environment, so the system needs to account for that. Also, because the lives of the crew depend on proper functioning of the system, much research is needed into validating that the system is running correctly.

Kortenkamp predicted that the first mission to Mars will not be equipped to grow plants because of the complexity of the task. Though tests on Earth have worked well, they have primarily focused on wheat, so other crops need to be tested. In addition, wheat would behave differently in a spacecraft environment than it does on Earth.

In 2007, NASA expects to open a facility known as Integrity (the Integrated Human Exploration Mission Simulation Facility) that will simulate a Mars mission and allow NASA to further test these life support systems. It will include a chamber for crops, a lab, a living space for the crew, and a chamber in which the Martian surface is duplicated.

In terms of making life support systems that operate autonomously in space, research is needed in the following areas:

  • Time delay. A mission to Mars would involve a much longer delay in relaying messages between Earth and the spacecraft, so the crew needs to be able to deal with more issues without help from ground control.

  • Relieving the crew of system control

  • Changing the role of the crew from vigilance to supervision

  • Planning and scheduling

While the first type of AI research brings to mind 2001: A Space Odyssey, the second brings to mind everything from The Jetsons to RoboCop. NASA has created Robonaut.

Robonaut is a humanoid robot that NASA is testing. Robonaut would be able to focus on dangerous or mundane tasks so that the astronauts would be protected and able to concentrate on science.

For now, Robonaut is teleoperated, meaning that a controller wears gloves and a headpiece and is able to control Robonaut's actions by his or her own actions; when the operator moves his or her head, Robonaut's head moves, likewise with the hands, arms, fingers, and torso.
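
Conceptually, the core of this kind of teleoperation is a mapping from the operator's measured pose to joint commands for the robot, with limits applied on the robot's side. The sketch below is a hypothetical illustration only; the names, joint limits, and data structures are invented and do not reflect Robonaut's real software.

```python
# A toy illustration of teleoperation: operator joint angles, as read
# from tracking hardware, are mirrored onto the robot. All names and
# limits here are hypothetical, not Robonaut's real interfaces.

from dataclasses import dataclass

@dataclass
class Pose:
    head_pan: float      # radians, as measured by the headpiece
    head_tilt: float
    finger_curl: float   # 0.0 = open hand, 1.0 = closed fist (from the glove)

def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def mirror_pose(operator: Pose) -> Pose:
    """Map the operator's measured pose onto assumed robot joint limits."""
    return Pose(
        head_pan=clamp(operator.head_pan, -1.5, 1.5),
        head_tilt=clamp(operator.head_tilt, -0.8, 0.8),
        finger_curl=clamp(operator.finger_curl, 0.0, 1.0),
    )

# One cycle of the control loop: read the operator, command the robot.
operator_pose = Pose(head_pan=0.4, head_tilt=-0.1, finger_curl=0.9)
robot_command = mirror_pose(operator_pose)
print(robot_command)
```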

Why humanoid? "It's easier to teleoperate something that's like you," Kortenkamp said. ("The head kind of looks like Boba Fett.") Also, space stations are designed for humans, so for a robot to operate effectively, it has to be humanoid.

"We either need to redesign a space station, or design a robot that's like human," he said. The one exception is legs: Robonaut is only humanoid from the waist up.

"You don't need legs in space," Kortenkamp said. "Legs are a detriment."

The bottom half of Robonaut is generally some kind of stand or anchor (to fix it to the outside of a spacecraft). A new version of Robonaut is mounted on a Segway Human Transporter.

The biggest obstacle right now to effective use of Robonaut is that the teleoperator has no sense of what sensations Robonaut is "feeling," so the operator has to rely on visual cues; Robonaut's field of view is also rather narrow. Robonaut is equipped with sensitive sensors on its palms and fingertips, but right now there is no way to transmit those sensations to the operator.

"What we really need to do with Robonaut is develop these perception/action skills," Kortenkamp said.

Robonaut needs to be able to see something, reach out, and grab it, something that is much harder for a robot, teleoperated or not, than for a person.

Standing in the way of eliminating a teleoperator are issues of natural language and computer vision. To make Robonaut the kind of robot most people imagine would take complex voice recognition, gesture recognition, sensors and sight - these issues are so complicated that solving them would mean answering basically all of the questions of AI.

Even very simple human actions are very complex in an AI context.

"Stuff we thought was hard, like chess, is easy. Stuff we thought was easy, that any five-year-old can do, is hard," Kortenkamp said. Simply twirling a pencil in one's hand or seeing something and reaching out and picking it up are extremely difficult.

Because of the difficulties involved in automation, Robonaut will probably be teleoperated for the foreseeable future.

One of the greatest advantages of Robonaut, even teleoperated as it is now, is that it could perform duties outside the space vehicle that must now be done by astronauts. Leaving the space shuttle or space station is one of the most dangerous things an astronaut does, so being able to send Robonaut outside would mean a huge increase in safety. If Robonaut becomes advanced enough to be teleoperated from mission control on Earth, that would also provide a huge cost savings, because it wouldn't be necessary to send someone into space just to operate it.

To improve Robonaut, research is focused on issues including:

  • Sensor interpretation

  • Dynamic interaction with real world objects ("This is something no robot can do yet," Kortenkamp said.)

  • Learning via imitation

  • Abstraction of the continuous world into symbolic reasoning representation

  • Grounding of symbolically represented actions in continuous control (this item and the previous one are illustrated in the sketch after this list)

  • Human control of non-humanoid robots. ("What if the best robot to do the job is a robot with 10 arms?" Kortenkamp asked.)
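
The abstraction and grounding items above are opposite directions of the same bridge between continuous signals and symbols, and a toy example makes the distinction clearer. The gripper, thresholds, and action names below are all assumptions chosen for illustration.

```python
# A toy illustration of abstraction (continuous -> symbolic) and
# grounding (symbolic -> continuous) for a one-joint gripper.
# Thresholds and names are assumptions for illustration only.

GRIPPER_CLOSED_BELOW_M = 0.01  # assumed gap width that counts as "closed"
GRIPPER_OPEN_ABOVE_M = 0.07    # assumed gap width that counts as "open"

def abstract_state(gap_width_m: float) -> str:
    """Abstraction: turn a continuous sensor reading into a symbol."""
    if gap_width_m < GRIPPER_CLOSED_BELOW_M:
        return "closed"
    if gap_width_m > GRIPPER_OPEN_ABOVE_M:
        return "open"
    return "moving"

def ground_action(symbol: str) -> float:
    """Grounding: turn a symbolic action into a continuous setpoint."""
    setpoints = {"close-gripper": 0.0, "open-gripper": 0.08}
    return setpoints[symbol]

print(abstract_state(0.005))          # prints "closed"
print(ground_action("open-gripper"))  # prints 0.08 (metres)
```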

The audience finally coaxed Kortenkamp into making a few predictions:

  • He guessed that it will take at least 50 years to have a Robonaut that can be as effective an assistant as a space-suited astronaut.

  • It will be at least 100 years before a robot can walk around and interact with the world independently as seen in movies and on television.

  • A fully autonomous Mars mission, however, might be possible in the next 10 to 20 years.