
"distributed" systems that became the tradition in the "fail-safe" era of engineering.

However, because NASA has not absorbed these techniques, it still distrusts centralization of computation. We argue elsewhere that this leads to very large and unnecessary costs of many different kinds.

The Development of Sophisticated Manipulators. We feel that NASA has not adequately exploited the possibilities of even simple man-controlled remote manipulators. The Skylab sunshade episode might well have been handled easily by an onboard device of this sort, and we think it likely that such a device would have paid for itself in payload by replacing a variety of other special-purpose actuators.

The need to handle radioactive substances led to the development of rudimentary teleoperators many years ago. At first progress was rapid, with force-reflecting, two-fingered models appearing in the early 1950s. But, strangely, this development all but stopped once progress was sufficient to make the handling of nuclear materials possible, rather than easy, economical, and completely safe. We believe that this happened because the nuclear industry, like NASA, became at this time mission-oriented rather than technology-oriented - so that places like Argonne National Laboratory lost their basic research and long-view funding.

Consequently, today manipulators differ little from their 1950s ancestors. They are still two-fingered and they still leave their operators fatigued after a half-hour or so of use. Even today, there is no generally available and reliable mobile and dexterous manipulator suitable for either emergency or preventive maintenance of nuclear plants - this is still done by people working under extremely hazardous conditions. Concerns within a nuclear plant about storage safety, detection of faults, and adequacy of emergency systems are perhaps best handled using a mobile and dexterous robot.

If such devices had been developed - and space-qualified versions produced - NASA could have exploited them, both for teleoperator (human-controlled) and for fully autonomous (robot) use. Indeed, we feel, NASA's needs in this area are quite as critical as those in the nuclear industry. Nevertheless, NASA has not given enough attention to work in the area. Perhaps a dozen or more clumsy two-fingered systems have been developed, but all of these would be museum pieces had the work gone at proper speed.

It therefore makes sense for NASA to enter into a partnership with ERDA to reverse the neglect of manipulator technology. A good start would be to sponsor the development of a tendon-operated arm with a multifingered hand, both heavily instrumented with imaginative force and touch sensors and proximity vision systems. Besides the obvious value in space of separating the man and his life-support problems from the workspace, there are many obvious spinoffs in general manufacturing, mining, undersea exploitation, medicine (micro-teleoperators), and so forth.

Controlling a Manipulator: Still a Research Problem. Dynamic control of the trajectory of a many-jointed manipulator seems to require large calculations if the motion is to be done at any speed. It takes six joints to place a hand at an arbitrary position in an arbitrary orientation, and the six degrees of freedom have interactions that complicate the dynamics of arm control. The equations are too complex for straightforward real-time control with a low-capacity computer. The problem can be simplified by placing constraints on manipulator design - for example, by designing the axes of rotation of the last three joints to intersect - but even the simplified problem is not yet solved.
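To make the difficulty concrete, here is a sketch of the inverse dynamics of just a two-link planar arm (all masses, lengths, and inertias are hypothetical illustration values, not from any actual manipulator). Even with two joints, the inertia seen at each joint depends on the arm's configuration, and velocity-product terms couple the joints; a six-joint arm multiplies this complexity many times over.

```python
import math

# Hypothetical link parameters (kg, m, kg*m^2) - for illustration only.
M1, M2 = 2.0, 1.5        # link masses
L1 = 0.5                 # length of link 1
R1, R2 = 0.25, 0.2       # distance from each joint to its link's center of mass
I1, I2 = 0.05, 0.03      # link moments of inertia about their centers of mass
G = 9.81

def inertia_matrix(q2):
    """Configuration-dependent inertia matrix M(q) of the two-link arm."""
    a = I1 + I2 + M1 * R1**2 + M2 * (L1**2 + R2**2)
    b = M2 * L1 * R2
    d = I2 + M2 * R2**2
    c2 = math.cos(q2)
    return [[a + 2 * b * c2, d + b * c2],
            [d + b * c2,     d]]

def inverse_dynamics(q, qd, qdd):
    """Joint torques needed to realize accelerations qdd at state (q, qd)."""
    q1, q2 = q
    M = inertia_matrix(q2)
    b = M2 * L1 * R2
    s2 = math.sin(q2)
    # Velocity-product (Coriolis/centrifugal) coupling between the joints.
    h1 = -b * s2 * (2 * qd[0] * qd[1] + qd[1]**2)
    h2 = b * s2 * qd[0]**2
    # Gravity loading, also configuration-dependent.
    g1 = (M1 * R1 + M2 * L1) * G * math.cos(q1) + M2 * R2 * G * math.cos(q1 + q2)
    g2 = M2 * R2 * G * math.cos(q1 + q2)
    tau1 = M[0][0] * qdd[0] + M[0][1] * qdd[1] + h1 + g1
    tau2 = M[1][0] * qdd[0] + M[1][1] * qdd[1] + h2 + g2
    return tau1, tau2
```

The inertia felt at the shoulder is largest with the elbow extended (q2 = 0) and smallest with the arm folded (q2 = pi), which is exactly why no single fixed feedback gain can be right in both configurations.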

In any case, the most obvious approach - putting an independent feedback control loop around each joint - fails, because constant feedback-loop gains cannot manage (at high speeds) the configuration-dependent inertia terms or the velocity interaction terms. On the other hand, it seems clear that such problems can be solved by combining "table look-up" for sample situations with correctional computations. Either way, the control computer will need a central memory that is large by today's space standards.
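A minimal sketch of the "table look-up plus correction" idea (all numbers hypothetical): the effective joint inertia is evaluated off-line from the full dynamic model at sample configurations, and the fast servo loop merely interpolates the table and rescales its gains, so the real-time cost is a few multiply-adds rather than the full equations.

```python
# Hypothetical table: effective shoulder inertia sampled over elbow angle.
# Off-line, the full dynamic model would be evaluated at each sample point.
SAMPLES = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]         # elbow angle (rad)
INERTIA = [1.30, 1.24, 1.08, 0.86, 0.66, 0.52, 0.47]  # effective inertia (kg*m^2)

def lookup_inertia(q2):
    """Linear interpolation into the precomputed table."""
    q2 = max(SAMPLES[0], min(SAMPLES[-1], q2))
    for i in range(len(SAMPLES) - 1):
        if q2 <= SAMPLES[i + 1]:
            t = (q2 - SAMPLES[i]) / (SAMPLES[i + 1] - SAMPLES[i])
            return INERTIA[i] + t * (INERTIA[i + 1] - INERTIA[i])
    return INERTIA[-1]

def joint_torque(q2, pos_err, vel_err, wn=8.0, zeta=1.0):
    """PD servo whose gains are rescaled by the looked-up inertia, so the
    closed-loop natural frequency wn stays roughly constant as the arm moves."""
    m = lookup_inertia(q2)
    kp = m * wn * wn          # stiffness scaled with configuration
    kd = m * 2 * zeta * wn    # damping scaled likewise
    return kp * pos_err + kd * vel_err
```

The table is where the large central memory goes: a six-joint arm needs entries over several joint angles at once, and the correctional computation handles what interpolation misses.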

Rover Mobility, Locomotion, and Guidance Research. Although much knowledge regarding several of the solar system planets has been gained through missions employing remote sensors, and more can be obtained in the future in this manner, many of the critical scientific questions require detailed surface experiments and measurements such as those conducted by the Viking landers on Mars. Despite the historic achievement represented by the soft landing of the Vikings and the effectiveness of the onboard experimental systems, still more important new questions were raised. For these to be answered, an extensive surface exploration should be undertaken. A surface trajectory of hundreds of kilometers, and desirably over 1000 kilometers, would be required to visit enough of the science sites on Mars to gain adequate coverage of the planet.

The round-trip communications delay time, which ranges from a minimum of nine minutes to a maximum of forty minutes for Mars, and the limited "windows" during which information can be transmitted preclude direct control of the rover from Earth. Accordingly, a rover on Mars or another planet must be equipped with sensors and appropriate computing capability and procedures to proceed autonomously along Earth-specified trajectories. The intelligence of this path selection system, together with the basic mobility characteristics of the rover, determines whether scientific sites of specific interest can be reached, given the characteristics of the approach terrain and the distances between sites. It follows that a low-mobility rover equipped with a high-quality path selection system will be unable to reach particular sites or to undertake an extensive mission, and that a high-mobility rover guided by a low-quality path selection system will be limited in a similar fashion. Therefore, systematic research programs aimed at maximizing both rover mobility and the intelligence of path selection systems should be undertaken to provide a sound basis for the planning and execution of surface exploration of solar system bodies.

The term "mobility" includes several characteristics which, taken together, describe the capability of the rover to deal with specific classes of terrain.

1. The stability of the rover in terms of the in-path and cross-path slopes (i.e., pitch and roll) which the rover can handle without the hazard of overturning. This characteristic is important not only in terms of the general slope of the terrain surface, but especially in connection with boulders and trenches on which individual propulsion elements may find temporary purchase (foothold).

2. The maneuverability of the rover, i.e., its turning radius and dynamical characteristics, will determine which terrains in the large sense will be open for exploration. Unless the rover is able to execute tight turning trajectories and maneuver in close quarters, many areas will be closed to it.

3. Clearance of the payload above the propulsion units will have a direct effect on the available paths. A rover whose clearance is adjustable will not only offer prospects for recovery should the rover become hung up, but may also offer additional scientific capabilities. Finally, an adjustable clearance would allow the rover's center of gravity to be lowered temporarily where critical pitch/roll conditions are approached, to increase safety or to permit the rover to traverse normally unsafe terrain.

4. The rover's speed capabilities will have a direct effect on the time required for traverses between specified science sites.

5. Locomotion is a major factor, since it places a primary limit on what terrains can be handled. The three major alternatives available - wheels, tracks, and legs - not only offer varied propulsive and maneuverability capabilities, as well as potential sensor information for guidance, but also pose unique as well as general control problems.

With respect to propulsion and maneuverability, wheels and tracked units can be designed to achieve the footprint pressures and traction required to deal with soft, loose materials such as ultrafine sand, as well as hard, coherent terrain forms such as boulders. Wheels have the advantage of being able to change direction with a minimum of scuffing and to tolerate small obstacles in lateral motion. Tracked units have the advantage of being able to bridge larger trenches, but present potential problems in turning on irregular terrain.

Neither the potential nor the limitations of such concepts have been firmly established, and a systematic research and development program would appear to be in order. Such a program should be aimed at developing maximum carried-load-to-wheel-weight ratios consistent with reliability, footprint pressure, turning capability, and dimensions.

A legged vehicle, making use of six or eight legs of varying joint complexity, would appear to offer decided advantages over wheeled or tracked vehicles in extremely rugged and irregular terrain. Depending on the number of segments and their lengths, and on the degrees of freedom provided by the connecting joints, a rover capable of dealing with extraordinarily irregular terrain and possessing exceptional climbing ability is potentially feasible. The maneuverability and stability of such a rover could exceed those of wheeled or tracked rovers. However, the feet of such a device may raise a serious problem. Rather large feet would be required to provide a sufficiently low footprint pressure on soft or loose terrain; on the other hand, such big, broad feet might seriously limit the rover's ability to gain a firm purchase on very irregular terrain. Research on legged vehicles has been very limited in the United States. At present, McGhee at Ohio State University has an active hardware program. Considerable efforts are apparently underway in the Soviet Union, but virtually nothing is known of the details of this work, other than that it is proceeding vigorously. Successful development of a legged vehicle would apply to environmentally delicate regions such as tundra as well as to space exploration.

The control of wheeled, tracked, or legged rovers presents substantive problems that will require study. The wheeled or tracked vehicle's control system will have to respond to constraints imposed by irregular terrain. In the case of locomotion on a flat plane, it is a straightforward matter to specify a vehicle speed and a steering angle to a computer-driven or hard-wired control system that drives each wheel at the proper speed to achieve the desired motion without scuffing and without excessive stresses on either the propulsion system or the vehicle structure. However, if the vehicle is on irregular terrain, so that the axle velocity vectors are no longer coplanar, then each wheel must be driven at a specific rate to achieve the desired result. Wheel speed and torque, the vehicle strut positions, possibly force or stress sensors, and the pitch/roll of the rover will have to be combined with trajectory parameters to achieve an acceptable system.
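The flat-plane case mentioned above can be sketched in a few lines (rover dimensions hypothetical): from one commanded body speed and steering angle, each wheel's rate follows from the geometry of the common turn center. It is exactly this shared-turn-center assumption that breaks down when the axle velocity vectors cease to be coplanar.

```python
import math

WHEELBASE = 1.2      # m, front-to-rear axle distance (hypothetical rover)
TRACK = 1.0          # m, left-to-right wheel separation
WHEEL_RADIUS = 0.25  # m

def wheel_rates(v, steer):
    """Per-wheel angular rates (rad/s) for body speed v (m/s) and steering
    angle steer (rad, positive = left turn); flat-plane geometry only."""
    if abs(steer) < 1e-9:
        w = v / WHEEL_RADIUS
        return {k: w for k in ('FL', 'FR', 'RL', 'RR')}
    R = WHEELBASE / math.tan(steer)   # signed turn radius of rear-axle center
    omega = v / R                     # body yaw rate about the turn center
    half = TRACK / 2.0
    # Wheel positions (x forward, y left) relative to the rear-axle midpoint;
    # the common turn center sits at (0, R).
    pos = {'RL': (0.0, half), 'RR': (0.0, -half),
           'FL': (WHEELBASE, half), 'FR': (WHEELBASE, -half)}
    # Each wheel's ground speed is (yaw rate) * (distance to turn center).
    return {k: abs(omega) * math.hypot(x, R - y) / WHEEL_RADIUS
            for k, (x, y) in pos.items()}
```

In a left turn the right-side wheels run measurably faster than the left-side wheels; driving all four at the same rate instead is what produces the scuffing and structural stress the text describes.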

The legged-vehicle control problem is of a different character. Certainly all the dynamic control problems discussed above in connection with manipulation reappear, and additional problems arise. The gaits selected (the sequences in which the legs are moved) are a function of the terrain to be traversed and the desired speed. The specific motion of an individual leg may also be a function of the terrain: on irregular ground, a significant lift of the leg to avoid hazards will be required before the foot can be lowered to the desired position. Sensors and control systems governing these motions will have to be developed.
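The gait idea can be sketched for a six-legged vehicle (leg numbering and gait tables hypothetical): a fast alternating-tripod gait swings three legs at once, while a slow wave gait swings one leg at a time for maximum support on rough ground, and step height is raised with terrain roughness.

```python
# Hypothetical leg numbering: 0,2,4 on the left (front to rear), 1,3,5 right.
TRIPOD = [{0, 3, 4}, {1, 2, 5}]            # two alternating swing sets (fast)
WAVE = [{i} for i in (0, 3, 1, 4, 2, 5)]   # one leg swings at a time (stable)

def swing_legs(gait, phase):
    """Set of legs lifted (in swing) at the given phase of the gait cycle."""
    return gait[phase % len(gait)]

def stance_legs(gait, phase):
    """Legs supporting the vehicle at the given phase."""
    return set(range(6)) - swing_legs(gait, phase)

def step_height(roughness, base=0.05, gain=0.5):
    """Lift the foot higher on rougher terrain (parameters hypothetical)."""
    return base + gain * roughness
```

The tripod gait always keeps exactly three feet down (the minimum for static support), while the wave gait keeps five; selecting between them by terrain and speed is the choice the text describes.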

In order for the rover to execute highly sophisticated operations autonomously in an unpredictable environment, it must be capable of real-time interaction with sensory feedback. It must be capable of selecting and modifying its behavior sequences in response to many different types of sensory information over a wide range of response times. For example, the mobility system should respond almost instantaneously to pitch and roll accelerations, but may tolerate longer time delays as it picks its way around small rocks and ruts on a meter-by-meter basis. It should anticipate larger obstacles two to five meters ahead, and impassable barriers should be detected 5 to 100 meters in advance. Minimum-energy pathways along contour lines, through valleys, and between hills should be selected 0.1 to 1 km ahead, and long-range navigational goals should be projected many kilometers ahead.

Similarly, in manipulation, position servo corrections must be applied with very short time delays, whereas feedback from proximity sensors need be sampled only a few times per second to modify approach-path motions that move slowly over a distance of a few centimeters. Processing of feedback to select alternative trajectory segments during the execution of elemental movements is more complex, and can be done at still coarser time intervals. The modification of plans for simple tasks to accommodate irregularities in the environment, the modification of complex task plans, and changes in scenarios for site exploration require increasingly complex sensor-analysis processes, which can be safely carried out over longer time intervals. The most natural way to deal with this hierarchy of ascending complexity and increasing time intervals is to map it onto a computing mechanism with the same hierarchical structure.

The important issue in the mobility control hierarchy is the wide range of time and distance scales over which the sensory data must interact with the mobility system. Some types of feedback must be incorporated into the control system with millisecond and centimeter resolution, while other feedback can be incorporated at intervals of days or kilometers. Only if the control system is hierarchically structured can such a wide range of resolution requirements be easily accommodated.
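The hierarchy can be sketched as nested control loops, each running at its own interval (level names and periods are hypothetical, chosen to echo the rover example above): the fast levels execute thousands of times inside one cycle of the slow ones, which is what a hierarchical computing structure maps onto naturally.

```python
# Hypothetical control intervals, in units of a basic 10 ms servo tick.
LEVELS = [
    ("attitude servo",        1),      # ~10 ms: pitch/roll stabilization
    ("local steering",        100),    # ~1 s:   rocks and ruts, meter by meter
    ("obstacle anticipation", 1000),   # ~10 s:  barriers 5-100 m ahead
    ("route selection",       60000),  # ~10 min: minimum-energy path, 0.1-1 km
]

def run(ticks):
    """Count how often each level of the hierarchy executes over `ticks` ticks;
    each level reads its own sensors and commands the level below it."""
    counts = {name: 0 for name, _ in LEVELS}
    for t in range(ticks):
        for name, period in LEVELS:
            if t % period == 0:
                counts[name] += 1
    return counts
```

Over ten minutes of operation the servo level fires tens of thousands of times while route selection fires once; a single flat control loop would have to run everything at the fastest rate.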

Automatic Assembly and Force Feedback. The most naive concept of automation is to make a robot that will repeat pre-programmed motions over and over. This will not work in many situations; using position control alone, a robot cannot insert a fastener in a tight hole or even turn a crank because the inevitable small errors would cause binding or breakage. Consequently, it is necessary for robot manipulators to use force-sensing feedback or the equivalent.
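A toy one-dimensional illustration of the point (all numbers hypothetical): with a tight clearance, a fixed position command whose error exceeds the clearance simply loads the part until something binds, while a controller that feeds back the measured lateral force can comply, nulling the error before the force builds up.

```python
CLEARANCE = 0.01   # mm: peg-to-hole clearance (hypothetical)
STIFFNESS = 500.0  # N/mm: contact stiffness once the peg rubs the hole wall
F_BREAK = 50.0     # N: lateral force at which the parts bind or break

def contact_force(offset):
    """Lateral reaction force (N) for a given peg-to-hole misalignment (mm)."""
    rub = max(0.0, abs(offset) - CLEARANCE)
    return STIFFNESS * rub * (1 if offset > 0 else -1)

def insert_with_force_feedback(offset, gain=0.0015, steps=200):
    """Each step, move opposite the sensed lateral force (active compliance).
    Returns (success, final offset)."""
    for _ in range(steps):
        f = contact_force(offset)
        if abs(f) > F_BREAK:
            return False, offset      # binding: insertion fails
        offset -= gain * f            # comply: move to relieve the force
    return abs(offset) <= CLEARANCE, offset
```

With a 0.1 mm initial error the compliant loop walks the peg into alignment, whereas pure position control with a slightly larger error (0.12 mm here) already generates a contact force past the breakage limit. The Draper Laboratory passive compliant members mentioned below achieve the same effect mechanically, with no sensing loop at all.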

In the 1960s, experimental systems demonstrated such methods for automatic assembly; centers in Japan, the U.S., and the U.K. succeeded nearly simultaneously. In one such demonstration, Inoue, working at MIT, used an arm equipped with a force-sensing wrist designed by Minsky to assemble a radial bearing. Shortly thereafter, researchers at the Draper Laboratory exhibited a device to do the same sort of work with carefully arranged passive, compliant members replacing active force-sensing feedback loops. Using such techniques, we think that much of the automatic assembly of space structures already lies near the state of this art.

Automatic Assembly and Problem Solving Systems. Problem solving and languages for problem solving have been a central focus in artificial intelligence since the science began. In the earliest stages of AI, it was seen that a computer could be programmed to try a variety of alternatives when it encountered a situation not specifically anticipated by the programmer. Soon these "heuristic search" programs were succeeded by "goal-directed" problem solvers, notably the GPS system of Newell and Simon at Carnegie-RAND. The symbolic integration program by Slagle is perhaps the best-known example from that era.

Since that time, there has been a steady stream of new ideas, both for more general theories and for the design of problem solvers for particular problem domains. This work led to a variety of new computational organizations and languages; LISP, PLANNER, CONNIVER, STRIPS, QA4, and Production Systems are representative of this conceptual evolution. The MYCIN program for bacteriological diagnosis and treatment and the PARSIFAL program for analyzing English syntax are representative of what can be done to attack small, well-defined domains.

In the last few years, some steps have been taken to apply the resulting technology to the problem of assembly automation. It would be impractical and unreliable to program assembly machines at a low level, giving step-by-step instructions about exactly where to move the hand and exactly when to close it around an object. It is better and easier in principle to design languages with embedded problem-solving apparatus, so that the "programmer" can give instructions in much the same way as one communicates with people. First, one states the general goal, names the parts, and suggests an order in which the parts should be put together. Later, one makes suggestions as they come to mind, or if and when the assembly machine gets stuck.
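What such a goal-level language might look like can be suggested with a deliberately toy sketch (every name here is invented, not drawn from any of the systems above): the programmer states which parts mate and in what order, plus optional advice, and a planner expands each goal into primitive steps. In a real system the expansion step is where the embedded problem solver would apply its knowledge of geometry and grasping.

```python
def plan_assembly(parts, order, suggestions=None):
    """Expand goal-level 'put part into its mate' statements into primitive
    locate/grasp/insert steps. `parts` maps each part to its mate;
    `suggestions` maps a part to extra advice (e.g. a regrasp) that the
    programmer volunteers if the machine would otherwise get stuck."""
    suggestions = suggestions or {}
    steps = []
    for part in order:
        steps.append(("locate", part))
        steps.append(("grasp", part))
        steps.extend(suggestions.get(part, []))      # advice, not orders
        steps.append(("insert", part, parts[part]))  # the actual goal
    return steps

# The "program" reads like the paragraph above: goal, parts, order, advice.
plan = plan_assembly(
    parts={"bearing": "housing", "shaft": "bearing"},
    order=["bearing", "shaft"],
    suggestions={"shaft": [("regrasp", "shaft", "end-on")]},
)
```

As the problem solver gets smarter, the `suggestions` argument shrinks toward empty, which is exactly the trend described in the next paragraph.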

Several research centers are now working on such problems, among them Stanford, SRI, IBM, NBS, and MIT. A full solution is some distance off, but the work has the fortunate character that each step in basic progress yields a corresponding step in application. In early stages, the amount of suggestion and detail supplied by the programmer is large, but the amount decreases as the problem solver gets smarter and knows more.

Automatic Assembly and Vision. We are still far from being able to make a computer "see" - to describe and recognize objects and scenes as well as a person can. In spite of much brilliant work done in this field, "general-purpose computer vision" is still far away. Still, the special and controllable environments involved in manufacturing have enabled exciting demonstrations with near-term promise. One of these, done by Rosen and his colleagues at SRI, uses binary image-processing techniques to identify parts and their orientation after they have been placed randomly on a light table. In other work, done under the direction of Horn at MIT, inspection programs have successfully examined watches to make sure the hands are moving, castings to make sure the grain structure is correct, and IC lead frames to make sure the pins are straight. We believe that this sort of work has great promise of enabling work in space that might otherwise never be done. Still, we emphasize that of all the problems described here, computer vision is likely to prove the most difficult and the most deserving of attention and funding. The successful examples cited are included only to suggest that there is a technology to be explored for potential uses within NASA, not that there is a technology that can be merely bought. There is, for example, no general system for selecting parts from a bin, even though this is well known to be a serious problem and even though everyone in the field has thought about it from time to time.
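The essence of binary light-table vision is simple enough to sketch (this is a generic illustration in the spirit of such systems, not the SRI code): backlighting turns each part into a blob of 1s, connected-component analysis separates the parts, and the first and second moments of each blob give its position and principal-axis orientation.

```python
import math

def find_blobs(img):
    """Connected components (4-connectivity) of a binary image given as a
    list of rows of 0/1; returns one dict per blob with its area, centroid,
    and principal-axis orientation derived from second moments."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r0 in range(h):
        for c0 in range(w):
            if img[r0][c0] and not seen[r0][c0]:
                stack, pixels = [(r0, c0)], []
                seen[r0][c0] = True
                while stack:                      # flood fill one part
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and img[rr][cc] and not seen[rr][cc]:
                            seen[rr][cc] = True
                            stack.append((rr, cc))
                n = len(pixels)
                cr = sum(p[0] for p in pixels) / n
                cw = sum(p[1] for p in pixels) / n
                # Second central moments give the part's principal axis.
                mrr = sum((p[0] - cr) ** 2 for p in pixels) / n
                mcc = sum((p[1] - cw) ** 2 for p in pixels) / n
                mrc = sum((p[0] - cr) * (p[1] - cw) for p in pixels) / n
                theta = 0.5 * math.atan2(2 * mrc, mcc - mrr)
                blobs.append({"area": n, "centroid": (cr, cw), "angle": theta})
    return blobs
```

Area and orientation suffice to identify and pick up isolated parts on a light table; the bin-picking problem is hard precisely because overlapping parts destroy these clean silhouettes.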

1.3 Recommendations

We must re-emphasize two major obstacles to addressing the needs just outlined. The first is the fail-safe attitude. NASA pioneered in achieving extraordinary reliability through its fail-safe, redundant designs for missions. We have the impression that the use of these techniques is persisting into new problems to the point of dogmatism, overlooking new possibilities enabled by progress in computer technology. In particular, we believe a great increase in flexibility and reliability might be obtained by centralizing many operations within one computer. But we see an opposite tendency: to design multiple, "distributed" computer systems that limit the flexibility of the system. On the surface, this seems sensible; but we believe that it leads to overlooking other, more centralized ways to do things that may be cheaper, more versatile, and at least equally reliable. For example, one might imagine missions that depend utterly on one central computer and one manipulator to replace many special systems. Of course, one of these two components might fail and lose the mission. On the other hand, eventually such a system might be (1) an order of magnitude cheaper and (2) possibly more reliable - because of extensive concentration on the two components and because of their ability to salvage or repair other failing components.

NASA's second major problem is that its current strength in artificial intelligence, and even in general computer science, is low. There are few people within NASA who understand the state of the art. There is no place where those who do artificial intelligence work can reach critical mass, either in the number of high-quality researchers or in computational and other supporting resources. This has had three regrettable consequences. First, present NASA workers are unable to be maximally productive. Second, it is extremely difficult to attract talented people to NASA. And third, those people in NASA who most need advice on artificial intelligence do not find it; instead, they incorrectly suppose that they must be in good hands because NASA spends a great deal of money on computation.

This has led to a great gap. Much of what NASA does with computers is years out of date. Worse, with only a few exceptions, influential people in NASA do not realize how out of date most of their thinking has become. In such areas as computer languages, the situation is nearly scandalous. Part of the problem has to do with mission-oriented horizons, and part with distrust of outside researchers. Because typical "Earth-bound" workers do not share NASA's concern with reliability and simplicity, we conjecture, NASA mission workers feel that the techniques of non-NASA people are inapplicable. Instead of working with AI and robotics projects outside, NASA has tended to try to build its own. But these projects have never reached critical mass and have not attracted enough first-rate workers. The problem is connected, again, with the lack of modern computing power; modern vision and robotic-control concepts require large computer programs and memories. We believe that there is no reason such systems cannot be space-qualified, and that they need not be very heavy or power-hungry. But without them, it is hard to use modern ideas about control and operations.

How to Correct the Central Problem of Insufficient Expertise. One idea is to contract with computer companies to provide advice and needed research. This idea, however, will not work. The large companies NASA is comfortable working with have not yet developed strength in artificial intelligence. NASA can only be led into a false sense of security by relying on them. Alternatively, NASA could increment its small existing budget for artificial intelligence and related topics, increasing the funds available at existing places. This also will not achieve the desired results. Indeed, such a plan could be counterproductive. The nature of the work demands a community of highly-motivated people working together. Efforts below critical mass in human or other resources are not likely to do well and such efforts could therefore lead to pessimism rather than excitement.

Still another possibility is that NASA could fund university research. This is a reasonable alternative as long as it is again understood that small, subcritical efforts are not cost-effective. Only a half-dozen university centers have sufficient existing size and strength to do really well. And finally, NASA could establish its own center. This is a good choice, especially if done in close proximity to and collaboration with an existing university center. It is our opinion that the need for artificial intelligence in space argues for such a center in the strongest terms. We believe that artificial intelligence will eventually prove as important to space exploitation and exploration as any of the other technologies for which there are large, focused, and dedicated NASA centers today.

Future NASA Role. At a certain level of abstraction, NASA's needs are not unique. Certainly such things as automated assembly and mining would be useful on Earth as well as in space. But it would be folly for NASA to expect someone else to produce the needed technology. NASA should plan to be the donor of artificial-intelligence and robotic developments rather than the beneficiary, for several reasons.

First, not enough is happening elsewhere, for reasons ranging from the shape of our antitrust laws to the lack of congressional concern for our declining position in productivity. Second, the extreme cost of placing people in space ensures that robots and/or teleoperators will be the method of choice in space assembly and mining long before robots see much action on Earth; consequently, cost/benefit ratios will be more of a driving force for NASA than for others. And third, doing things in space is sufficiently special that NASA must be in the act in a major way to ensure that the technology progresses with NASA's interests in mind. Otherwise, all NASA will have is a technology that solves someone else's problems while skirting NASA's.

The Virtual Mission Concept: A Special Recommendation. The establishment of research efforts, well endowed with human and financial resources, should be accompanied by a new kind of attitude toward mission planning and development, particularly with respect to space qualification of hardware. As it stands today, work seems to be done in two primary contexts: that of the paper study and that of the approved, assumed-to-fly mission. This automatically ensures two crippling results.

First, since the execution of a mission is very expensive, only a small number of promising ideas will go forward to the point of full and fair evaluation and of generating spinoff technology. Second, since space qualification is an assumed starting point for all thinking, the technology employed in mission development is guaranteed to be years behind the state of the art, and the chances of pushing the state of the art via spinoffs are smaller than they should be. Paper studies, on the other hand, tend to produce mostly paper.

Consequently we see the need for a new kind of research context: that of the virtual mission. Such missions would have the same sort of shape as real missions, with two key exceptions: first, space-hardened and space-qualified hardware would not be used; second, the objective would not be to fly, but rather to win an eligibility contest. As we see it, there would be many virtual missions competing for to-be-flown status. Taken together, they would produce a pool of alternatives, any of which could be selected and flown, with space qualification taking place after, rather than before, selection. Since none would be fettered by the limits of space qualification for its entire life, all would be more imaginative, technically exciting, and productive of spinoff technology. We believe that the costs involved in doing things this way are likely to be lower; conceivably, several virtual missions could be done, using commercial equipment where possible, for the price of one, whereas real missions are now restricted to old-fashioned, one-of-an-obsolescent-kind antiques.
