the effective use of LSI systems may be severely blunted by the time requirements of space qualification. NASA must avoid committing to architectures prematurely. The adoption of a family of space-qualified computers would allow software to be developed while hardware decisions are deferred, leaving room for more cost-effective and powerful technologies. There are many architectural alternatives for space computers: distributed, centralized, and network implementations. A distributed processor system is attractive from a management point of view since it provides separation of functions. In situations where there are special timing requirements for intelligent devices or sensors, the dedication of processors to these devices may be appropriate. However, in order to support robotic devices, much larger centralized computer systems, possibly with peripheral memories, will be required. This is an important area for study, since spacecraft computer technology will in large part determine the sophistication and success of future missions.

The Study Group recommends that NASA plan to test and space-qualify LSI circuits in-house to reduce the apparent factor of 5 to 10 increase in the cost of industry-supplied space-qualified microprocessors and memories. Further, the Study Group believes that NASA should play an active role in encouraging the development of flexible computer architectures for use in spacecraft.

5. Computer Systems Technology

Current trends in the use of computer technology throughout NASA seriously impede NASA's utilization of machine intelligence. The distributed processing techniques being adopted by NASA take advantage of microcomputer technology to develop intelligent sensors and instrument controllers. While microprocessors are well suited for simple sensing and controlling functions, many of the essential functions involving machine intelligence and robotics techniques require much larger processors. A flexible spacecraft computer architecture, within which both microprocessors and larger systems can coexist, communicate, and cooperate, seems a highly desirable goal for NASA.

The standardization of computer hardware, which is intended to reduce costs by avoiding new hardware development and space qualification, may result in the use of obsolete hardware. This will limit the resources available for a machine intelligence system, and possibly preclude any effective implementation. NASA should look at developing techniques for software portability or, equivalently, hardware compatibility in a family of machines. The desire to minimize software complexity may unnecessarily restrict experimental machine intelligence systems. Part of the problem rests with the issues of protection and reliability. NASA should reevaluate its hardware systems in light of recent techniques for providing resource sharing and protection in centralized systems.

The Study Group recommends a "software-first" approach to computer systems development within NASA so that hardware can be supplied as late as possible in order to take advantage of the latest technological advances.

6. Software Technology

The method of software development within NASA is in striking contrast to the program development environments that exist in several laboratories working on machine intelligence. Compared with other users of computer technology, such as military and commercial organizations, NASA appears to be merely a state-of-the-art user. But compared with the software development environments found in universities and research institutes, there is a significant technological lag. The lag represented by this gap is not NASA's responsibility alone; it indicates that an effective technology transfer mechanism does not yet exist within the computer field.

Software development within NASA is often done in a batch environment using punched cards, resulting in a turnaround time of hours or even days. In contrast, the machine intelligence laboratories are characterized by being totally on-line and interactive. While debugging in a batch environment is a purely manual operation, requiring modification of the source program via statements to display internal values and intermediate results, many more programming aids are available in an interactive laboratory environment. Changes to programs are automatically marked on reformatted listings, the author and date of the changes are recorded, and the correspondence between source and object modules is maintained. In addition, extensive debugging and tracing facilities exist, including interactively changing a program's data and restarting it from arbitrary checkpoints. The investment made to substitute computer processing for many manual activities of programmers should ultimately result in improved software quality and programmer productivity.

It should be emphasized that improved software development facilities can be created within NASA through the transfer and utilization of existing computer science technology. Further improvements, however, necessitate advances in the field of automatic programming, an area of machine intelligence in which programming knowledge (i.e., knowledge about how programs are constructed) is embedded within a computer tool that uses this knowledge to automate some of the steps that would otherwise have to be performed manually. This is an area which deserves attention by NASA, perhaps toward developing specialized automatic programming systems tailored to NASA's needs.

The Study Group recommends the immediate creation of an interactive programming environment within NASA and the adoption of a plan to use a modern data-encapsulation language (of the DOD ADA variety) as the basis of this facility. The Study Group also believes that NASA should initiate research toward the creation of automatic tools for software development.

7. Data Management Systems Technology

There are several data management issues where artificial intelligence techniques could be brought to bear. These range from the control of data acquisition and transmission, through data reduction and analysis, to methods for dissemination to users. For example, onboard computers should perform data reduction and selective data transmission. This will minimize the amount of data transmitted and conserve communication channels and bandwidth, and it requires an advanced computer capable of various types of data analysis. Once the data reaches a ground collection site, three types of data management functions are required to make the data accessible and usable to researchers. First, the data must be archived. This is the simplest type of management and does not involve analysis of the data itself; an example request is "Retrieve all data for the fifth orbit of the Viking mission." Second, access must be provided to specific portions or collections of the data matching predetermined criteria, such as "all infrared images centered over Pittsburgh taken between June and September of 1978." Both archival and criteria-selection management systems are well within current technology, and to some extent are available in systems similar to those at the EROS Data Center in Sioux Falls. However, the third type of database management function, the ability to access data by its content, does not yet exist and requires specific artificial intelligence support. It would utilize a knowledge base containing specific facts about the data, general rules concerning the relationships between data elements, and world models within which complex requests can be evaluated. This knowledge base would guide the system in locating data containing the desired attributes, utilizing predefined indexing criteria and the relationship of the desired attributes to the indexing attributes.
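
As a minimal sketch of these three levels of access, assuming a small hypothetical in-memory catalog (all field names, the example records, and the single content rule below are invented for illustration, not taken from the report):

```python
# Hypothetical catalog illustrating archival, criteria-selection, and
# content-based retrieval. Field names and the inference rule are invented.

from datetime import date

catalog = [
    {"mission": "Viking", "orbit": 5, "band": "infrared",
     "center": "Pittsburgh", "acquired": date(1978, 7, 14), "cloud_cover": 0.1},
    {"mission": "Viking", "orbit": 6, "band": "visible",
     "center": "Pittsburgh", "acquired": date(1978, 10, 2), "cloud_cover": 0.7},
]

# 1. Archival retrieval: everything for a mission and orbit, no analysis of content.
def archive(mission, orbit):
    return [r for r in catalog if r["mission"] == mission and r["orbit"] == orbit]

# 2. Criteria selection: predetermined metadata criteria (band, place, date range).
def select(band, center, start, end):
    return [r for r in catalog
            if r["band"] == band and r["center"] == center
            and start <= r["acquired"] <= end]

# 3. Content-based retrieval: a knowledge-base rule relates a request
#    ("usable surface imagery") to the attributes actually indexed.
RULES = {"usable surface imagery":
         lambda r: r["band"] == "infrared" and r["cloud_cover"] < 0.3}

def retrieve_by_content(concept):
    return [r for r in catalog if RULES[concept](r)]

print(archive("Viking", 5))
print(select("infrared", "Pittsburgh", date(1978, 6, 1), date(1978, 9, 30)))
print(retrieve_by_content("usable surface imagery"))
```

The first two functions are plain filters over stored metadata; only the third needs rules that map a content-level request onto the indexing attributes, which is where the artificial intelligence support enters.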

The Study Group recommends reexamination and evaluation of the NASA end-to-end data management system and the establishment of a systems engineering group, consisting of computer scientists and hardware experts, to achieve an effective system design and implementation.

8. Man-Machine Systems Technology

For both ground- and space-based NASA systems we would like to have the best integration of human intelligence and machine intelligence; but we lack an understanding of how best to combine these natural and artificial components. For example, to be more effective in the use of teleoperators, NASA needs to redress a basic lack of knowledge: there now is no satisfactory theory of manipulation on the basis of which to improve design and control of manipulators. The relative assignment of roles to man and computer and the design of the related interfaces require much better understanding than now exists.

In view of potential long-range payoff and the fact that such related research as exists within NASA has been ad hoc and mission-oriented, the Study Group recommends support of significantly more basic research on man-computer cooperation, and, more generally, on man-machine communication and control. NASA organizational entities representing life sciences and the technological disciplines of computers and control should develop better cooperative mechanisms and more coherent programs to avoid man-machine research "falling between the cracks," as has been the case. Future NASA missions can have the advantages of human intelligence in space, without the risks and life support costs for astronauts, by developing teleoperators with machine intelligence, with human operators on Earth monitoring sensed information and controlling the lower-level robotic intelligence in supervisory fashion.

9. Digital Communication Technology

Computer-based communication systems have been used by the artificial intelligence community since the inception of the ARPANET, which is now used under NSF support to link approximately 500 non-computer scientists in about eight different research communities. These systems provide electronic mail (using distribution lists) and are used to give notices and reminders of meetings and reports. Online documentation of programs, with instant availability of updated versions, allows users access to information and programs at a variety of research sites. In addition, document preparation services, including text editing systems, spelling correctors, and formatting programs, are in common use. NASA would do well to adopt a computer-based communication system, since it would offer opportunities for improvements in management, planning, and mission implementation. If the system were a copy of existing systems at research sites on the ARPANET, software could be taken directly from those systems.

The principal activity of the Study Group during its existence was to identify information processing technologies that are highly relevant to NASA and to the success of its future programs. Each workshop focused on one or more of these topics. Appendix A gives a complete list of topics covered at each of the workshops. In this section we provide detailed discussions of those topics which are considered by the Study Group to be of high priority for NASA.

1. Robotics Technology

This section discusses the need for advanced development of intelligent manipulators and sensors. The application areas for these devices range from the assembly of space structures to planetary rovers capable of autonomous execution of highly sophisticated operations. Research in the areas of robotics and artificial intelligence is necessary to ensure that future missions will be both cost-effective and scientifically valuable. In addition, results in robotics and artificial intelligence are directly applicable in the areas of automatic assembly, mining, and exploration and material handling in hazardous environments.

1.1 Need for Robotics Within NASA

Robotics and artificial intelligence have played surprisingly small roles in the space program. This is unfortunate because there are a number of important functions they could serve. These include, very broadly:

1. To enable missions that would otherwise be out of the question because of cost, safety, or feasibility for other reasons. Example: At rather low cost, we could have had a remotely-manned lunar explorer in progress for the past decade.

2. To enable the kinds of popular and valuable features that might rekindle public interest in the exploitation and exploration of space. Example: In the past decade, the hypothetical lunar explorer just mentioned would have been operating for 1,000,000 five-minute intervals. In this period, a vast number of influential public visitors could have operated some of the Explorer's controls, remotely, from NASA visitor centers. Imagine the education and enthusiasm that could come from such direct public participation in space!

3. To achieve general cost reductions from efficient automation. Example: The Skylab Rescue Mission would have been a routine exercise if a space-qualified teleoperator had been developed in the past decade; it could have been launched on a military rocket, as a comparatively routine mission, had the Shuttle project encountered delays.

These things have not been done, in part, because NASA has little strength at present in the necessary technical areas. In our view the future prospects seem poor unless there is a change. We see several obstacles:

In-House Competence. NASA's current strength in artificial intelligence is particularly low. NASA's in-house resources are comparatively weak, as well, in computer science on the whole, especially in areas such as higher-level languages and modern debugging and multiprocessing methods.

Self-Assessment. Even more serious, NASA administrators seem to believe that the agency is outstanding in computation science and engineering. This is far from true. The unawareness of weakness seems due to poor contact of the agency's consultants and advisors with the rest of the computational research world.

Superconservative Tradition. NASA has become committed to the concept of very conservative, fail-safe systems. This was eminently sound in the days of Apollo, when (i) each successful launch was a miracle of advanced technology and (ii) the lives of human passengers were at stake. But today, we feel, that strategy has become self-defeating, leading to unnecessarily expensive and unambitious projects.

Fear of Complexity. On a similar note, we perceive a broad distrust of complicated automatic machinery in mission planning and design. This distrust was based on wise decisions made in the early days of manned space exploration, but it is no longer appropriate in thinking about modern computation. Instead of avoiding sophisticated computation, NASA should become masterful at managing and exploiting it. Large computers are fundamentally just as reliable as small computers.

Fear of Failure. Many NASA people have confided to the Study Group that the agency is afraid that any mission failures at all may jeopardize the whole space program, so that they "cannot take chances" in advanced design. Again, this attitude was sound in the Apollo era, but probably is not sound when we consider the smaller, multiple, and individually inexpensive missions of today.

What Are the Alternatives? We feel that NASA should begin to consider new styles of missions which are, at the same time, more adventurous and less expensive. Left as it is, NASA's thinking will continue to evolve in ways that will become suffocatingly pedestrian. To get out of this situation, it will be necessary to spend money, but the amount needed to learn to do exciting things like using powerful computers and semi-intelligent robots will be small compared to the money needed in the past for developing propulsion systems. "Getting there" is no longer all the fun; it is time to think about how to do sophisticated things after the mission arrives there.

Space Programs and Intelligent Systems. It is extremely expensive to support personnel in space for long periods. Such costs will render impossible many otherwise exciting uses of space technology. Yet our Study Group found relatively little serious consideration of using autonomous or semiautonomous robots to do things in space that might otherwise involve large numbers of people. In many cases, the use of artificial intelligence had not been considered at all, or was not considered in reaching conclusions about what computer resources would be needed, or was prematurely dismissed on the basis of conversations with the wrong people. In other cases, it was recognized that such things were possible in principle, but they were judged out of the question because of NASA's mission-oriented (as opposed to technology-oriented) way of planning for the future.

Two examples come to mind as obvious illustrations of cases where we found the views expressed to be particularly myopic:

(1) Building Large Space Structures. Large-scale construction usually involves two activities. First, basic building blocks must be fabricated from stock material. Second, the building blocks must be assembled. Space fabrication seems necessary because of the difficulty of launching large prefabricated sections. We applaud the work that NASA has done already toward creating machines that continuously convert sheet metal into beams. We are less happy with the lack of comparable attention to automatic inspection and assembly of such beams. There are existing automatic vision and manipulation techniques that could be developed into practical systems for these tasks. The beams could be marked during fabrication, so that descendants of today's visual tracking programs could do rough positioning, and force-sensing manipulators could mate parts together once roughly positioned. Where large structures are concerned, in fact, these are areas in which reliable, accurate, repetitive human performance would be very hard to maintain.
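
A minimal sketch of this two-phase idea follows, assuming a one-dimensional simulation in place of real camera, arm, and force-sensor interfaces; the thresholds and the simulated contact model are invented placeholders, not a design from the report:

```python
# Hypothetical two-phase beam mating along one axis. Phase 1 servos toward a
# fabrication mark (rough visual positioning); phase 2 advances in small
# guarded steps until the force sensor reports the joint is seated.

POSITION_TOL  = 0.01     # m, acceptable rough-positioning error (invented)
SEATING_FORCE = 20.0     # N, contact force taken to mean "joint seated" (invented)
CONTACT_AT    = 0.002    # m short of the mark where contact begins (simulated)
STIFFNESS     = 20000.0  # N/m stiffness of the mating joint (simulated)

def mark_offset(gripper_pos, mark_pos):
    """Simulated visual tracking: signed offset from gripper to mark."""
    return mark_pos - gripper_pos

def contact_force(gripper_pos, mark_pos):
    """Simulated force sensor: force rises once the parts touch."""
    depth = gripper_pos - (mark_pos - CONTACT_AT)
    return max(0.0, STIFFNESS * depth)

def assemble_beam(gripper_pos, mark_pos):
    # Phase 1: visual servo toward the mark until roughly positioned.
    while abs(mark_offset(gripper_pos, mark_pos)) > POSITION_TOL:
        gripper_pos += 0.5 * mark_offset(gripper_pos, mark_pos)
    # Phase 2: guarded 1 mm steps until the seating force is reached.
    while contact_force(gripper_pos, mark_pos) < SEATING_FORCE:
        gripper_pos += 0.001
    return gripper_pos

print(assemble_beam(gripper_pos=0.0, mark_pos=0.5))
```

The division of labor matches the text: vision supplies the coarse alignment, and force sensing supplies the fine, contact-rich part of the task.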

(2) Mining. An ability to build structures is probably a prerequisite to doing useful, economically justified mining on the Moon, the planets, and the asteroids. But the ability to build is only a beginning. The vision and manipulation problems that plague the robot miner differ from those of the robot assembler. Rocks do not carry fiducial marks, and the forces encountered in digging and shoring are less constrained than those involved in screwing two parts together. On the other hand, less precision is required, and even interplanetary distances do not prevent the occasional exchange of questions and suggestions with Earth-based supervisors.

1.2 The State of the Art

At this point, we turn to some specific areas, both to draw attention to NASA's special needs and to tie those needs to the state of the art.

Basic Computer Needs. A first step toward enabling the use of artificial intelligence and other advanced technologies is to use more sophisticated computer systems. We conjecture that the various benefits that would follow from this approach could reduce the cost of spacecraft and ground-based operations enough to make several missions possible for the present cost of one.

We want to emphasize this point strongly, for we note a trend within NASA to do just the opposite! In our Study Group meetings with NASA projects over the year, time and time again we were shown "distributed" systems designed to avoid concentrating the bulk of a mission's complexity within one computer system. However, we feel that this is just the wrong direction for NASA to take today, because computer scientists have learned much about how to design large computer systems whose parts do not interact in uncontrollably unpredictable ways. For example, in a good, modern "time-sharing system" the programs of one user, however badly full of bugs, do not interfere either with the programs of other users or with the operation of the overall "system program." Thus, because we have learned how to prevent the effects of bugs from propagating from one part to another, there is no longer any basic reason to prefer the decentralized, "distributed" systems that became the tradition in the "fail-safe" era of engineering.

However, because NASA has not absorbed these techniques, it still distrusts centralization of computation. We argue elsewhere that this leads to very large and unnecessary costs of many different kinds.

The Development of Sophisticated Manipulators. We feel that NASA has not adequately exploited the possibilities of even simple man-controlled remote manipulators. The Skylab sunshade episode might well have been handled easily by an onboard device of this sort, and we think it likely that such a device would have paid for itself in payload by replacing a variety of other special-purpose actuators.

The need to handle radioactive substances led to the development of rudimentary teleoperators many years ago. At first progress was rapid, with force-reflecting, two-fingered models appearing in the early 1950s. But, strangely, this development all but stopped once progress was sufficient to make the handling of nuclear materials possible, rather than easy, economical, and completely safe. We believe that this happened because the nuclear industry, like NASA, became at this time mission-oriented rather than technology-oriented, so that places like Argonne National Laboratory lost their basic research and long-view funding.

Consequently, today's manipulators differ little from their 1950s ancestors. They are still two-fingered, and they still leave their operators fatigued after a half hour or so of use. Even today, there is no generally available, reliable, mobile, and dexterous manipulator suitable for either emergency or preventive maintenance of nuclear plants; this work is still done by people operating under extremely hazardous conditions. Concerns within a nuclear plant about storage safety, detection of faults, and adequacy of emergency systems are perhaps best handled using a mobile and dexterous robot.

If such devices had been developed, and space-qualified versions produced, NASA could have exploited them both for teleoperator (human-controlled) and for fully autonomous (robot) use. Indeed, we feel, NASA's needs in this area are quite as critical as those of the nuclear industry. Nevertheless, NASA has not given enough attention to work in the area. Perhaps a dozen or more clumsy two-fingered systems have been developed, but all of these would be museum pieces had the work proceeded at proper speed.

It therefore makes sense for NASA to enter into a partnership with ERDA to reverse the neglect of manipulator technology. A good start would be to sponsor the development of a tendon-operated arm with a multifingered hand, both heavily instrumented with imaginative force and touch sensors and proximity vision systems. Besides the obvious value in space of separating the man and his life-support problems from the workspace, there are many obvious spinoffs in general manufacturing, mining, undersea exploitation, medicine (micro-teleoperators), and so forth.

Controlling a Manipulator: Still a Research Problem. Dynamic control of the trajectory of a many-jointed manipulator seems to require large calculations if the motion is to be done at any speed. It takes six joints to put a hand at an arbitrary position and orientation, and the six degrees of freedom have interactions that complicate the dynamics of arm control. The equations are too complex for straightforward real-time control with a low-capacity computer. The problem can be simplified by placing constraints on manipulator design, for example by designing the axes of rotation of the last three joints to intersect, but even the simplified problem is not yet solved.
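
For reference, the joint-space dynamics of a rigid manipulator can be written in the standard textbook form below; the notation is the usual one and is not drawn from this report.

```latex
% tau        : commanded joint torques
% M(q)       : configuration-dependent inertia matrix
% C(q,\dot q): velocity (Coriolis and centrifugal) interaction terms
% g(q)       : gravity torques
\tau = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q)
```

The configuration-dependent inertia M(q) and the velocity interaction terms C(q, q̇)q̇ are precisely what a fixed-gain, joint-by-joint controller cannot track at speed.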


In any case, the most obvious approach, putting an independent feedback control loop around each joint, fails because constant feedback loop gains cannot manage (at high speeds) the configuration-dependent inertia terms or the velocity interaction terms. On the other hand, it seems clear that such problems can be solved by combining "table look-up" for sample situations with correctional computations. In any case, the control computer will need a central memory that is large by today's space standards.
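
A minimal sketch of the table look-up idea follows, assuming a diagonal, per-joint inertia approximation; the inertia model, gains, and correction term are all invented placeholders rather than a design from the report:

```python
# Hypothetical gain scheduling by table look-up: inertia values are precomputed
# at sampled configurations; at run time the controller looks up the nearest
# entry, scales a PD law by it, and adds a crude velocity correction.

import numpy as np

N_JOINTS = 6
SAMPLES = np.linspace(-np.pi, np.pi, 16)          # sampled joint angles (rad)

def inertia_model(q):
    """Invented configuration-dependent inertia (diagonal approximation)."""
    return 1.0 + 0.5 * np.cos(q) ** 2              # kg*m^2 per joint

# Precomputed look-up table of effective inertia at each sampled configuration.
INERTIA_TABLE = inertia_model(SAMPLES)

def control_torque(q, q_dot, q_ref, kp=40.0, kd=8.0):
    """PD torque scaled by looked-up inertia, plus a stand-in velocity term."""
    idx = np.abs(SAMPLES[None, :] - q[:, None]).argmin(axis=1)  # nearest sample per joint
    m_hat = INERTIA_TABLE[idx]                                  # looked-up inertia estimate
    pd = kp * (q_ref - q) - kd * q_dot                          # fixed-gain PD term
    correction = 0.1 * np.sign(q_dot) * q_dot ** 2              # crude velocity-coupling term
    return m_hat * pd - correction                              # commanded joint torques

# One control step for a six-joint arm, all joints commanded to 0.3 rad.
q = np.zeros(N_JOINTS)
q_dot = np.zeros(N_JOINTS)
q_ref = np.full(N_JOINTS, 0.3)
print(control_torque(q, q_dot, q_ref))
```

A flight version would interpolate the table and carry proper coupling terms for all six joints, which is what drives the central-memory requirement noted above.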

Rover Mobility, Locomotion, and Guidance Research. Although much knowledge regarding several of the solar system's planets has been gained through missions employing remote sensors, and more can be obtained in the future in this manner, many of the critical scientific questions require detailed surface experiments and measurements such as those conducted by the Viking landers on Mars. Despite the historic achievement represented by the soft landing of the Vikings and the effectiveness of the onboard experimental systems, many important new questions were raised. For these to be answered, an extensive surface exploration should be undertaken. A surface trajectory of hundreds of kilometers, and desirably over 1000 kilometers, would be required to visit a sufficient number of the science sites on Mars to gain adequate coverage of the planet.

The round-trip communications delay time, which ranges from a minimum of nine minutes to a maximum of forty minutes for Mars, and the limited "windows" during which information can be transmitted, preclude direct control of the rover from Earth. Accordingly, a rover on Mars or another planet must be equipped with sensors and appropriate computing capability and procedures to proceed autonomously
