which ranges from a minimum of nine minutes to a maximum of forty minutes for Mars, and the limited "windows" during which information can be transmitted, preclude direct control from Earth. Thus the human role must be that of a supervisor and periodic program-updater of the computer. A robot Mars explorer must be equipped with sensors and appropriate computing capability which maximizes both efficient mobility and intelligent data gathering.

The Study Group recommends that NASA take an active role in developing the necessary robotics technology, including rovers and manipulators, rather than expecting this technology to be transferred from other sources.

2. Smart Sensor Technology

There are several additional areas within NASA applications and mission programs which would benefit from advances in machine visual perception. These areas include remote sensing and crop survey, cartography and meteorology, teleoperators, and intelligent robot explorers.

Current crop census systems do not seem to meet expectations of lowered cost and increased repeatability from automated classification. It also appears that the 80-meter-per-pixel resolution of LANDSAT imagery is insufficient for current structural pattern recognition and scene analysis techniques. What is needed is an emphasis on sensors whose resolution lies between 2 meters and 2 centimeters per pixel. Coarse sampling (2 meters) would separate field boundaries, while finer resolution (2 centimeters) could be used to perform structural analysis on limited parts of the fields.
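To make the resolution argument concrete, a rough back-of-the-envelope check (with hypothetical field and row dimensions) shows how many pixels span typical crop features at each sampling density:

```python
# Back-of-the-envelope sketch with hypothetical feature sizes: how many
# pixels span a ground feature at a given resolution (meters per pixel)?

def pixels_across(feature_size_m, resolution_m_per_px):
    """Number of pixels spanning a ground feature of the given size."""
    return feature_size_m / resolution_m_per_px

field_width_m = 200.0   # hypothetical field width
row_spacing_m = 1.0     # hypothetical crop-row spacing

for res in (80.0, 2.0, 0.02):   # LANDSAT, coarse, and fine sampling
    print(f"{res} m/px: field = {pixels_across(field_width_m, res):.1f} px, "
          f"row = {pixels_across(row_spacing_m, res):.1f} px")

# At 80 m/px a 200 m field spans only 2.5 px, so boundaries blur together;
# at 2 m/px it spans 100 px, enough to separate field boundaries; at
# 2 cm/px a single 1 m row spacing covers 50 px, enough for structural
# analysis of plants within a limited part of the field.
```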

Much work is yet to be done in computer stereo vision. Such systems will find applications in the automation of cartographic processes. While research into stereo vision has produced systems which work in a research environment, support is needed for newer high-performance systems. Teleoperators and manipulators for fabrication and assembly of materials in space will require a vision system containing smart sensors which provide stereo presentations and the ability to generate multiple views. The quality of the visual components in a teleoperator system will determine its utility as much as its mechanical sophistication. Intelligent robot explorers will rely on smart-sensor visual systems in order to navigate and to recognize interesting features to sample. Laser ranging devices offer minimal navigational aid because of their limited range capability. Stereo vision systems based on motion parallax offer superior capabilities by navigating with respect to distant landmarks. It would thus be possible to avoid difficult terrain and to return to locations of interest.
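The triangulation underlying such systems can be sketched briefly; the focal length, baseline, and disparity values below are illustrative only, and for motion parallax the "baseline" is simply the distance the vehicle travels between two views:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classical stereo triangulation: Z = f * B / d (illustrative values).

    focal_length_px: camera focal length expressed in pixels
    baseline_m:      separation of the two viewpoints; for motion
                     parallax, the distance traveled between frames
    disparity_px:    horizontal shift of a feature between the views
    """
    if disparity_px <= 0:
        raise ValueError("feature must shift between views to be ranged")
    return focal_length_px * baseline_m / disparity_px

# A 1000 px focal length and 2 m of travel between frames: a landmark
# shifting 4 px between views lies roughly 500 m away.
print(depth_from_disparity(1000.0, 2.0, 4.0))   # -> 500.0
```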

The Study Group recommends that NASA expand and diversify its image processing research to include knowledge guided interpretation systems and initiate development of LSI-based smart sensors capable of both signal-based and symbolic interpretation.

3. Mission Operations Technology

It appears that significant cost-effective performance can also be realized by the application of machine intelligence techniques to mission planning and sequencing operations. These operations tend to be time-critical during space missions and involve many repetitive and routine decision-making tasks currently performed by human operators. Mission planning and control facilities dealing with data collection, experiment scheduling, and monitoring should be automated to a much larger degree. Various missions may share many common requirements which could be served by a software facility providing for mission-independent aspects of data collection and allowing embedding of mission-specific, task-oriented software.
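One reading of such a facility is a mission-independent sequencing core into which mission-specific tasks are embedded; the sketch below is a hypothetical illustration of that separation, not a description of any existing NASA system:

```python
import heapq

class SequencingCore:
    """Mission-independent core: a time-ordered queue of activities,
    each dispatched to whatever mission-specific handler was embedded."""

    def __init__(self):
        self._queue = []   # entries: (start_time, seq_no, name, handler)
        self._seq = 0      # tie-breaker so heapq never compares handlers

    def schedule(self, start_time, name, handler):
        heapq.heappush(self._queue, (start_time, self._seq, name, handler))
        self._seq += 1

    def run(self):
        while self._queue:
            t, _, name, handler = heapq.heappop(self._queue)
            print(f"t={t}: dispatching {name}")
            handler(t)

# Mission-specific, task-oriented software plugs in as ordinary handlers.
core = SequencingCore()
core.schedule(25, "downlink_pass", lambda t: print("  transmitting"))
core.schedule(10, "collect_imaging_data", lambda t: print("  camera on"))
core.run()   # dispatches in time order: imaging first, then downlink
```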

The Study Group recommends that NASA begin the development of a reusable, modular intelligent mission control center with the goal of increasing the mechanization and standardization of sequencing, data handling and delivery, and related protocols.

4. Spacecraft Computer Technology

Digital computers have been playing an ever-increasing role in NASA space missions as the need to control and coordinate sophisticated sensors and effectors grows. They are destined to play a dominant role in future space missions. Several issues that NASA should address bear on the ability of current space-qualified computers to support robotic devices requiring large central processors and memory.

Specifically, fault-tolerant designs, large-scale integrated circuits, and computer architectures should receive attention from NASA. Fault tolerance implies that expected computer system behavior should continue after faults have occurred; it is essential to space missions since it is impossible to test each component of the total system adequately. Techniques for building reliable systems should include the ability to isolate the effect of a fault to a single module and to detect the fault so that automatic recovery algorithms can be invoked to "repair" it.
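A classical realization of this detect-isolate-recover pattern is majority voting over redundant modules; the sketch below is a simplified illustration of the idea, not a statement of NASA flight practice:

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant module outputs.

    Returns (voted_value, suspect_modules): the value agreed upon by the
    majority, plus the indices of disagreeing modules -- faults that have
    been detected and isolated so a recovery routine (reset, switch to a
    spare, mask the output) can be invoked on them.
    """
    value, n = Counter(results).most_common(1)[0]
    if n <= len(results) // 2:
        raise RuntimeError("no majority: fault cannot be masked")
    suspects = [i for i, r in enumerate(results) if r != value]
    return value, suspects

# Three redundant modules compute the same quantity; module 2 has failed.
print(vote([42, 42, 17]))   # -> (42, [2])
```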

LSI technology holds the promise of more powerful, sophisticated computers with smaller power and weight requirements. However, since the technology is advancing rapidly, the effective use of LSI systems may be severely blunted by the time requirements of space qualification. NASA must avoid committing to architectures prematurely. The adoption of a family of space-qualified computers would allow software to be developed while hardware decisions are deferred, allowing more cost-effective and powerful technologies to be adopted. There are many architectural alternatives for space computers: distributed, centralized, and network implementations. A distributed processor system is attractive from a management point of view since it provides separation of functions. In situations where intelligent devices or sensors have special timing requirements, dedicating processors to these devices may be appropriate. However, in order to support robotic devices, much larger centralized computer systems, possibly with peripheral memories, will be required. This is an important area for study, since spacecraft computer technology will in large part determine the sophistication and success of future missions.

The Study Group recommends that NASA plan to test and space-qualify LSI circuits in-house to reduce the apparent factor of 5 to 10 increase in the cost of industry-supplied space-qualified microprocessors and memories. Further, the Study Group believes that NASA should play an active role in encouraging the development of flexible computer architectures for use in spacecraft.

5. Computer Systems Technology

Current trends in the use of computer technology throughout NASA seriously impede NASA's utilization of machine intelligence. The distributed processing techniques being adopted by NASA take advantage of microcomputer technology to develop intelligent sensors and controllers of instruments. While microprocessors are well suited to simple sensing and controlling functions, many of the essential functions involving machine intelligence and robotics techniques require much larger processors. A flexible spacecraft computer architecture, within which both microprocessors and larger systems can coexist, communicate, and cooperate with each other, seems a highly desirable goal for NASA.
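The coexistence described above can be sketched as message passing between a small dedicated sensor processor and a larger central processor; in the illustrative sketch below, a thread and a queue stand in for separate flight computers and a data bus:

```python
import queue
import threading
import time

bus = queue.Queue()   # stands in for the spacecraft data bus

def sensor_micro():
    """Dedicated microprocessor: a simple, tightly timed sensing loop."""
    for reading in (3.1, 3.3, 9.9):   # hypothetical sensor values
        bus.put(("thermal", reading))
        time.sleep(0.01)
    bus.put(None)                     # end-of-data marker

def central_processor():
    """Larger central system: interprets readings; the natural home for
    machine intelligence software too big for the microprocessors."""
    while (msg := bus.get()) is not None:
        channel, value = msg
        flag = "ANOMALY" if value > 5.0 else "ok"
        print(f"{channel}: {value} ({flag})")

t = threading.Thread(target=sensor_micro)
t.start()
central_processor()
t.join()
```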

The standardization of computer hardware, which is intended to reduce costs by avoiding new hardware development and space qualification, may result in the use of obsolete hardware. This will limit the resources available for a machine intelligence system, and possibly preclude any effective implementation. NASA should look at developing techniques for software portability or, equivalently, hardware compatibility in a family of machines. The desire to minimize software complexity may unnecessarily restrict experimental machine intelligence systems. Part of the problem rests with the issues of protection and reliability. NASA should reevaluate its hardware systems in light of recent techniques for providing resource sharing and protection in centralized systems.

The Study Group recommends a "software-first" approach to computer systems development within NASA so that hardware can be supplied as late as possible in order to take advantage of the latest technological advances.

6. Software Technology

The method of software development within NASA is in striking contrast to the program development environments that exist in several laboratories working on machine intelligence. Compared with other users of computer technology, such as military and commercial organizations, NASA appears to be merely a state-of-the-art user. But compared with the software development environments found in universities and research institutes, there is a significant technological lag. This lag is not NASA's responsibility alone; it indicates that an effective technology transfer mechanism does not yet exist within the computer field.

Software development within NASA is often done in a batch environment using punched cards, resulting in turnaround times of hours or even days. In contrast, the machine intelligence laboratories are characterized by being totally on-line and interactive. While debugging in a batch environment is a purely manual operation, requiring modification of the source program with statements to display internal values and intermediate results, many more programming aids are available in an interactive laboratory environment. Changes to programs are automatically marked on reformatted listings, the author and date of each change are recorded, and the correspondence between source and object modules is maintained. In addition, extensive debugging and tracing facilities exist, including interactively changing a program's data and restarting it from arbitrary checkpoints. The investment made to substitute computer processing for many manual activities of programmers should ultimately result in improved software quality and programmer productivity.
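Even a plain scripting environment can approximate the checkpoint-and-restart facility described above; the sketch below, using Python's pickle module, illustrates the idea rather than any particular laboratory system:

```python
import pickle

def checkpoint(state, path):
    """Save the program's data so a session can be restarted here."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restart(path):
    """Restore saved data; the user may patch it interactively first."""
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"step": 120, "estimate": 0.73}   # hypothetical intermediate results
checkpoint(state, "run.ckpt")

resumed = restart("run.ckpt")
resumed["estimate"] = 0.75    # interactively change the program's data...
print(resumed)                # ...and continue from the checkpoint
```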

It should be emphasized that improved software development facilities can be created within NASA through the transfer and utilization of existing computer science technology. However, further improvements necessitate advances in the field of automatic programming, an area of machine intelligence in which programming knowledge (i.e., knowledge about how programs are constructed) is embedded within a computer tool that uses this knowledge to automate some of the steps which would otherwise have to be performed manually. This is an area which deserves attention by NASA, perhaps towards developing specialized automatic programming systems tailored to NASA's needs.
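At its simplest, such embedded programming knowledge can be pictured as schemas that the tool instantiates from a declarative specification; the toy generator below is a deliberately small sketch of the idea, far short of actual automatic-programming research:

```python
# Toy sketch: "programming knowledge" about how filtering loops are
# constructed, captured once as a schema and instantiated on demand.

FILTER_SCHEMA = """\
def {name}(items):
    result = []
    for item in items:
        if {condition}:
            result.append(item)
    return result
"""

def synthesize_filter(name, condition):
    """Generate and compile a filter function from the schema."""
    source = FILTER_SCHEMA.format(name=name, condition=condition)
    namespace = {}
    exec(source, namespace)   # acceptable in a toy; a real tool would verify
    return namespace[name]

keep_positive = synthesize_filter("keep_positive", "item > 0")
print(keep_positive([-2, 5, 0, 9]))   # -> [5, 9]
```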

The Study Group recommends immediate creation of an interactive programming environment within NASA and the adoption of a plan to use a modern data-encapsulation language (of the DOD ADA variety) as a basis of this facility. The Study Group also believes that NASA should initiate research towards the creation of automatic tools for software development.

7. Data Management Systems Technology

There are several data management issues where artificial intelligence techniques could be brought to bear, ranging from the control of data acquisition and transmission, through data reduction and analysis, to methods for dissemination to users. For example, onboard computers should perform data reduction and selective data transmission. This will minimize the amount of data transmitted and conserve communication channels and bandwidth, and it requires an advanced computer capable of various types of data analysis. Once the data reaches a ground collection site, three types of data management functions are required to make the data accessible and usable to researchers. First, the data must be archived. This is the simplest type of management and does not involve analysis of the data itself; for example, "Retrieve all data for the fifth orbit of the Viking mission." Second, access must be provided to specific portions or collections of the data matching predetermined criteria, such as "all infrared images centered over Pittsburgh taken between June and September of 1978." Both archival and criteria-selection management systems are well within current technology, and to some extent are available in systems similar to those at the EROS Data Center in Sioux Falls. However, the third type of data management function, the ability to access data by its content, does not yet exist and requires specific artificial intelligence support. It would utilize a knowledge base containing specific facts about the data, general rules concerning the relationships between data elements, and world models against which complex requests can be evaluated. This knowledge base would guide the system in locating data containing the desired attributes, utilizing predefined indexing criteria and the relationship of the desired attributes to the indexing attributes.
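A minimal sketch of the third, content-based kind of access follows; all facts, rules, and names are invented for illustration:

```python
# Hypothetical knowledge base: facts about archived images plus a general
# rule relating a requested property ("vegetation stress") to indexed
# attributes (spectral band and season).

facts = [
    {"id": "img-001", "band": "infrared", "site": "Pittsburgh", "month": 7},
    {"id": "img-002", "band": "visible",  "site": "Pittsburgh", "month": 7},
    {"id": "img-003", "band": "infrared", "site": "Sioux Falls", "month": 1},
]

def shows_vegetation_stress(image):
    """General rule: stress is assessable from summer infrared imagery."""
    return image["band"] == "infrared" and 6 <= image["month"] <= 9

# A content-level request, "data showing vegetation stress," is answered
# by evaluating the rule, not by matching a predefined index field.
print([f["id"] for f in facts if shows_vegetation_stress(f)])  # -> ['img-001']
```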

The Study Group recommends reexamination and evaluation of the NASA end-to-end data management system and the establishment of a systems engineering group, consisting of computer scientists and hardware experts, to achieve an effective system design and implementation.

8. Man-Machine Systems Technology

For both ground- and space-based NASA systems we would like to have the best integration of human intelligence and machine intelligence; but we lack an understanding of how best to combine these natural and artificial components. For example, to be more effective in the use of teleoperators, NASA needs to redress a basic lack of knowledge: there now is no satisfactory theory of manipulation on the basis of which to improve design and control of manipulators. The relative assignment of roles to man and computer and the design of the related interfaces require much better understanding than now exists.

In view of potential long-range payoff and the fact that such related research as exists within NASA has been ad hoc and mission-oriented, the Study Group recommends support of significantly more basic research on man-computer cooperation, and, more generally, on man-machine communication and control. NASA organizational entities representing life sciences and the technological disciplines of computers and control should develop better cooperative mechanisms and more coherent programs to avoid man-machine research "falling between the cracks," as has been the case. Future NASA missions can have the advantages of human intelligence in space, without the risks and life support costs for astronauts, by developing teleoperators with machine intelligence, with human operators on Earth monitoring sensed information and controlling the lower-level robotic intelligence in supervisory fashion.

9. Digital Communication Technology

Computer-based communication systems have been used by the artificial intelligence community since the inception of the ARPANET, which is now used under NSF support to link approximately 500 non-computer scientists in about eight different research communities. These systems provide electronic mail (using distribution lists) and are used to give notices and reminders of meetings and reports. Online documentation of programs, with instant availability of updated versions, allows users access to information and programs at a variety of research sites. In addition, document preparation services, including text editing systems, spelling correctors, and formatting programs, are in common use. NASA would do well to adopt a computer-based communication system, since it would offer opportunities for improvements in management, planning, and mission implementation. If the system were a copy of existing systems at research sites on the ARPANET, software could be taken directly from those systems.

Appendix on Relevant Technologies

The principal activity of the Study Group during its existence was to identify information processing technologies that are highly relevant to NASA and to the success of its future programs. Each workshop had one or more of these topics as the foci of interest. Appendix A gives a complete list of topics covered at each of the workshops. In this section we provide detailed discussions of those topics which are considered by the Study Group to be of high priority for NASA.

1. Robotics Technology

This section discusses the need for advanced development of intelligent manipulators and sensors. The application areas for these devices range from the assembly of space structures to planetary rovers capable of autonomous execution of highly sophisticated operations. Research in the areas of robotics and artificial intelligence is necessary to ensure that future missions will be both cost-effective and scientifically valuable. In addition, results in robotics and artificial intelligence are directly applicable in the areas of automatic assembly, mining, and exploration and material handling in hazardous environments.

1.1 Need for Robotics Within NASA

Robotics and artificial intelligence have played surprisingly small roles in the space program. This is unfortunate because there are a number of important functions they could serve. These include, very broadly:

1. To enable missions that would otherwise be out of the question because of cost, safety, or other feasibility considerations. Example: At rather low cost, we could have had a remotely manned lunar explorer in operation for the past decade.

2. To enable the kinds of popular and valuable features that might rekindle public interest in the exploitation and exploration of space. Example: In the past decade, the hypothetical lunar explorer just mentioned would have been operating for 1,000,000 five-minute intervals. In this period, a vast number of influential public visitors could have operated some of the Explorer's controls, remotely, from NASA visitor centers. Imagine the education and enthusiasm that could come from such direct public participation in space!

3. To achieve general cost reductions from efficient automation. Example: The Skylab Rescue Mission would have been a routine exercise if a space-qualified teleoperator had been developed in the past decade. It would have been a comparatively routine mission to launch it on a military rocket had the Shuttle project encountered delays.

These things have not been done, in part, because NASA has little strength at present in the necessary technical areas. In our view the future prospects seem poor unless there is a change. We see several obstacles:

In-House Competence. NASA's current strength in artificial intelligence is particularly low. NASA's in-house resources are comparatively weak, as well, in computer science on the whole, especially in areas such as higher-level languages and modern debugging and multiprocessing methods.

Self-Assessment. Even more serious, NASA administrators seem to believe that the agency is outstanding in computation science and engineering. This is far from true. The unawareness of weakness seems due to poor contact between the agency's consultants and advisors and the rest of the computational research world.

Superconservative Tradition. NASA has become committed to the concept of very conservative, fail-safe systems. This was eminently sound in the days of Apollo, when (i) each successful launch was a miracle of advanced technology and (ii) the lives of human passengers were at stake. But today, we feel, that strategy has become self-defeating, leading to unnecessarily expensive and unambitious projects.

Fear of Complexity. On a similar note, we perceive a broad distrust of complicated automatic machinery in mission planning and design. This distrust was based on wise decisions made in the early days of manned space exploration, but it is no longer appropriate in thinking about modern computation. Instead of avoiding sophisticated computation, NASA should become masterful at managing and exploiting it. Large computers are fundamentally just as reliable as small computers.

Fear of Failure. Many NASA people have confided to the Study Group that the agency is afraid that any mission failures at all may jeopardize the whole space program, so that they "cannot take chances" in advanced design. Again, this attitude was sound in the Apollo era, but probably is not sound when we consider the smaller, multiple, and individually inexpensive missions of today.

What Are the Alternatives? We feel that NASA should begin to consider new styles of missions which are, at the same time, more adventurous and less expensive. Left as it is, NASA's thinking will continue to evolve in ways that will become suffocatingly pedestrian. To get out of this situation it will be necessary to spend money, but the amount needed to learn to do exciting things, like using powerful computers and semi-intelligent robots, will be small compared to the money needed in the past for developing propulsion systems. "Getting there" is no longer all the fun; it is time to think about how to do sophisticated things after the mission arrives.

Space Programs and Intelligent Systems. It is extremely expensive to support personnel in space for long periods. Such costs will render impossible many otherwise exciting uses of space technology. Yet our Study Group found relatively little serious consideration of using autonomous or semiautonomous robots to do things in space that might otherwise involve large numbers of people. In many cases, the use of artificial intelligence had not been considered at all, or not considered in reaching conclusions about what computer resources will be needed, or was prematurely dismissed on the basis of conversations with the wrong people. In other cases, it was recognized that such things were possible in principle, but out of the question because of NASA's mission-oriented, as opposed to technology-oriented, way of planning for the future.

Two examples come to mind as obvious illustrations of cases where we found the views expressed to be particularly myopic:

(1) Building Large Space Structures. Large-scale construction usually involves two activities. First, basic building blocks must be fabricated from stock material. Second, the building blocks must be assembled. Space fabrication seems necessary because of the difficulty of launching large prefabricated sections. We applaud the work that NASA has done already toward creating machines that continuously convert sheet metal into beams. We are less happy with the lack of justification for automatic inspection and assembly of such beams. There are existing automatic vision and manipulation techniques that could be developed into practical systems for these tasks. The beams could be marked, during fabrication, so that descendants of today's visual tracking programs could do rough positioning, and force-sensing manipulators could mate parts together once roughly positioned. Where large structures are concerned, in fact, these are areas in which reliable, accurate, repetitive human performance would be very hard to maintain.
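The rough-positioning step envisioned here can be sketched as a simple proportional visual-servoing loop that drives the manipulator toward a beam's fiducial mark, after which force sensing would take over for mating; the gain, tolerance, and coordinates below are invented:

```python
def track_to_mark(mark_xy, tool_xy, gain=0.5, tol=0.01):
    """Proportional visual servoing toward a fiducial mark (illustrative).

    mark_xy: mark position reported by the vision system (meters)
    tool_xy: current manipulator tool position (meters)
    Each cycle moves the tool a fraction of the remaining error until it
    is within tolerance; force-sensing control would then finish mating.
    """
    x, y = tool_xy
    mx, my = mark_xy
    while max(abs(mx - x), abs(my - y)) > tol:
        x += gain * (mx - x)
        y += gain * (my - y)
        print(f"tool at ({x:.3f}, {y:.3f})")
    return x, y

track_to_mark(mark_xy=(1.0, 0.5), tool_xy=(0.0, 0.0))
```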

(2) Mining. An ability to build structures is probably a prerequisite to doing useful, economically justified mining on the Moon, the planets, and the asteroids. But the ability to build is only a beginning. The vision and manipulation problems that plague the robot miner differ from those of the assembler. Rocks do not carry fiducial marks, and the forces encountered in digging and shoring are less constrained than those involved in screwing two parts together. On the other hand, less precision is required, and even interplanetary distances do not prevent the exchange of occasional questions and suggestions with Earth-based supervisors.

1.2 The State of the Art

At this point, we turn to some specific areas, both to draw attention to NASA's special needs and to tie those needs to the state of the art.

Basic Computer Needs. A first step toward enabling the use of artificial intelligence and other advanced technologies is to use more sophisticated computer systems. We conjecture that the various benefits that would follow from this approach could reduce the cost of spacecraft and ground-based operations enough to make several missions possible for the present cost of one.

We want to emphasize this point strongly, for we note a trend within NASA to do just the opposite! In our Study Group meetings with NASA projects over the year, time and time again we were shown "distributed" systems designed to avoid concentrating the bulk of a mission's complexity within one computer system. However, we feel that this is just the wrong direction for NASA to take today, because computer scientists have learned much about how to design large computer systems whose parts do not interact in uncontrollably unpredictable ways. For example, in a good, modern "time-sharing system," the programs of one user, however badly full of bugs, do not interfere either with the programs of other users or with the operation of the overall "system program." Thus, because we have learned how to prevent the effects of bugs from propagating from one part to another, there is no longer any basic reason to prefer the decentralized,
