Section I
Introduction

The NASA Study Group on Machine Intelligence and Robotics, composed of many of the leading researchers from almost all of the leading research groups in the fields of artificial intelligence, computer science, and autonomous systems in the United States, met to study the influence of these subjects on the full range of NASA activities and to make recommendations on how these subjects might in the future assist NASA in its mission. The Study Group, chaired by Carl Sagan, was organized by Ewald Heer, JPL, at the request of Stanley Sadin of NASA Headquarters. It included NASA personnel, scientists who have worked on previous NASA missions, and experts on computer science who had little or no prior contact with NASA. The Group devoted about 2500 man-hours to this study, meeting as a full working group or as subcommittees between June 1977 and December 1978.

A number of NASA Centers and facilities were visited during the study. In all cases, vigorous support was offered for accelerated development and use of machine intelligence in NASA systems, with particularly firm backing offered by the Director of the Johnson Space Center, which the Group considered especially significant because of JSC's central role in the development of manned spaceflight.

This report is the complete report of the Study Group. It includes the conclusions and recommendations with supporting documentation. The conclusions represent a group consensus, although occasionally there were dissenting opinions on individual conclusions or recommendations. While the report is critical of past NASA efforts in this field - and most often of the lack of such efforts - the criticisms are intended only as constructive. The problem is government-wide, as the Federal Data Processing Reorganization Project1 has stressed, and NASA has probably been one of the least recalcitrant Federal agencies in accommodating to this new technology.

The Study Group believes that the effective utilization of existing opportunities in computer science, machine intelligence, and robotics, and their applications to NASA-specific problems will enhance significantly the cost-effectiveness and total information return from future NASA activities.

1 U.S. Office of Management and Budget, Federal Data Processing Reorganization Study. Available from National Technical Information Service, Washington, D.C.

Section II

Machine Intelligence: An Introductory Tutorial2

Many human mental activities, such as writing computer programs, doing mathematics, engaging in common sense reasoning, understanding language, and even driving an automobile, are said to demand "intelligence." Over the past few decades, several computer systems have been built that can perform tasks such as these. Specifically, there are computer systems that can diagnose diseases, plan the synthesis of complex organic chemical compounds, solve differential equations in symbolic form, analyze electronic circuits, understand limited amounts of human speech and natural language text, and write small computer programs to meet formal specifications. We might say that such systems possess some degree of artificial intelligence.

2 This section is based on copyrighted material in Nils J. Nilsson's book Principles of Artificial Intelligence, available from Tioga Publishing Company, Palo Alto, California. The Study Group wishes to thank Nilsson for his permission for use of this material.

Most of the work on building these kinds of systems has taken place in the field called Artificial Intelligence (AI)3. This work has had largely an empirical and engineering orientation. Drawing from a loosely structured but growing body of computational techniques, AI systems are developed, undergo experimentation, and are improved. This process has produced and refined several general AI principles of wide applicability.

3 In this report the terms Machine Intelligence and Artificial Intelligence are used synonymously.

AI has also embraced the larger scientific goal of constructing an information-processing theory of intelligence. If such a science of intelligence could be developed, it could guide the design of intelligent machines as well as explicate intelligent behavior as it occurs in humans and other animals. Since the development of such a general theory is still very much a goal rather than an accomplishment of AI, we limit our attention here to those principles that are relevant to the engineering goal of building intelligent machines. Even with this more limited outlook, discussion of AI ideas might well be of interest to cognitive psychologists and others attempting to understand natural intelligence.

In the rest of this section, we will provide a broad overview of several different problem areas in which AI methods and techniques have been applied.

A. Robotics

The problem of controlling the physical actions of a mobile robot might not seem to require much intelligence. Even small children are able to navigate successfully through their environment and to manipulate items, such as light switches, toy blocks, eating utensils, etc. However, these same tasks, performed almost unconsciously by humans, require many of the same abilities used in solving more intellectually demanding problems when performed by a machine.

Research on robots or robotics has helped to develop many AI ideas. It has led to several techniques for modeling world states and for describing the process of change from one world state to another. It has led to a better understanding of how to generate plans for action sequences and how to monitor the execution of these plans. Complex robot control problems have forced us to develop methods for planning first at a high level of abstraction, ignoring details, and then at lower and lower levels, where details become important. Nilsson's book contains several examples of robot problem solving which illustrate important ideas in AI.
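
To make the idea of planning first at a high level of abstraction and then at lower levels concrete, consider the following minimal sketch. The task names and the refinement table are invented for illustration; a real planner would also reason about preconditions and recover from failures.

```python
# A minimal sketch of hierarchical plan refinement: an abstract task is
# expanded, level by level, until only primitive (directly executable)
# actions remain. The task names and refinement table are illustrative.

REFINEMENTS = {
    "fetch-block": ["go-to-block", "grasp-block", "go-to-goal", "release-block"],
    "go-to-block": ["plan-path(block)", "follow-path"],
    "go-to-goal":  ["plan-path(goal)", "follow-path"],
}

def refine(task):
    """Recursively expand a task into a sequence of primitive actions."""
    if task not in REFINEMENTS:        # primitive: no further detail needed
        return [task]
    plan = []
    for subtask in REFINEMENTS[task]:
        plan.extend(refine(subtask))
    return plan

print(refine("fetch-block"))
# ['plan-path(block)', 'follow-path', 'grasp-block',
#  'plan-path(goal)', 'follow-path', 'release-block']
```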

B. Perception Problems

Attempts have been made to fit computer systems with television inputs to enable them to "see" their surroundings or to fit them with microphone inputs to enable them to "hear" speaking voices. From these experiments, it has been learned that useful processing of complex input data requires "understanding" and that understanding requires a large base of knowledge about the things being perceived.

The process of perception studied in artificial intelligence usually involves a set of operations. A visual scene, say, is encoded by sensors and represented as a matrix of intensity values. These are processed by detectors that search for primitive picture components such as line segments, simple curves, corners, etc. These, in turn, are processed to infer information about the three-dimensional character of the scene in terms of its surfaces and shapes. The ultimate goal is to represent the scene by some appropriate model. This model might consist of a high-level description such as "A hill with a tree on top with cattle grazing."
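
The earliest stages of this pipeline can be illustrated with a toy example. Here a scene is simply a small matrix of invented intensity values, and a primitive detector searches for sharp horizontal changes in brightness, which suggest fragments of vertical edges; the data and threshold are invented for illustration.

```python
# A toy version of the early perception pipeline: a scene arrives as a
# matrix of intensity values, and a primitive detector scans it for
# sharp horizontal jumps in brightness (candidate vertical edge
# fragments). The scene and threshold are illustrative only.

scene = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def vertical_edges(image, threshold=5):
    """Return (row, col) positions where intensity jumps sharply."""
    edges = []
    for r, row in enumerate(image):
        for c in range(len(row) - 1):
            if abs(row[c + 1] - row[c]) >= threshold:
                edges.append((r, c))
    return edges

print(vertical_edges(scene))   # [(0, 1), (1, 1), (2, 1)] - an aligned edge
```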

The point of the whole perception process is to produce a condensed representation to substitute for the unmanageably immense, raw input data. Obviously, the nature and quality of the final representation depend on the goals of the perceiving system. If colors are important, they must be noticed; if spatial relationships and measurements are important, they must be judged accurately. Different systems have different goals, but all must reduce the tremendous amount of sensory data at the input to a manageable and meaningful description.

The main difficulty in perceiving a scene is the enormous number of possible candidate descriptions in which the system might be interested. If it were not for this fact, one could conceivably build a number of detectors to decide the category of a scene. The scene's category could then serve as its description. For example, perhaps a detector could be built that could test a scene to see if it belonged to the category "A hill with a tree on top with cattle grazing." But why should this detector be selected instead of the countless others that might have been used?

The strategy of making hypotheses about various levels of description and then testing these hypotheses seems to offer an approach to this problem. Systems have been constructed that process suitable representations of a scene to develop hypotheses about the components of a description. These hypotheses are then tested by detectors that are specialized to the component descriptions. The outcomes of these tests, in turn, are used to develop better hypotheses, etc.

This hypothesize-and-test paradigm is applied at many levels of the perception process. Several aligned segments suggest a straight line; a line detector can be employed to test it. Adjacent rectangles suggest the faces of a solid prismatic object; an object detector can be employed to test it.
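
A minimal sketch of the hypothesize-and-test loop, at the line-finding level just mentioned: aligned point features suggest the hypothesis "straight line," and a simple detector tests it by checking collinearity. The points and tolerance are invented for illustration.

```python
# Hypothesize-and-test at the line-finding level: several aligned
# features suggest a straight line, and a line detector is employed
# to test the hypothesis. Data and tolerance are illustrative.

def suggest_line(points):
    """Hypothesis generator: three or more features suggest a line."""
    return len(points) >= 3

def test_line(points, tol=1e-6):
    """Detector: verify all points are collinear with the first two."""
    (x0, y0), (x1, y1) = points[0], points[1]
    for (x, y) in points[2:]:
        # cross product of (p1-p0) and (p-p0) is ~0 for collinear points
        if abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) > tol:
            return False
    return True

segments = [(0, 0), (1, 1), (2, 2), (3, 3)]
if suggest_line(segments) and test_line(segments):
    print("hypothesis confirmed: the features form a straight line")
```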

The process of hypothesis formation requires a large amount of knowledge about the expected scenes. Some Artificial Intelligence researchers have suggested that this knowledge be organized in a special structure called a frame or schema. For example, when a robot enters a room through a doorway, it activates a room schema, which loads into a working memory a number of expectations about what might be seen next. Suppose the robot perceives a rectangular form. This form, in the context of a room schema, might suggest a window. The window schema might contain the knowledge that windows typically do not touch the floor. A special detector, applied to the scene, confirms this expectation, thus raising confidence in the window hypothesis. Nilsson's book discusses various fundamental ideas underlying frame-structured representations and inference processes.
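
The room/window example can be sketched as follows. The schema contents and confidence values are invented; the point is only that an activated schema supplies expectations that determine which detector is applied next.

```python
# A small sketch of the room/window schema example above. An activated
# room schema supplies expectations; the window schema's stored
# knowledge (windows do not touch the floor) selects a detector whose
# outcome raises or lowers confidence. All values are illustrative.

ROOM_SCHEMA = {"expected": ["window", "door", "table"]}
WINDOW_SCHEMA = {"shape": "rectangle", "touches_floor": False}

def interpret(form, touches_floor):
    """Interpret a perceived form in the context of the room schema."""
    confidence = 0.0
    if form == WINDOW_SCHEMA["shape"] and "window" in ROOM_SCHEMA["expected"]:
        confidence = 0.5                     # plausible hypothesis: a window
        # special detector checks the schema's stored expectation
        if touches_floor == WINDOW_SCHEMA["touches_floor"]:
            confidence = 0.9                 # expectation confirmed
    return confidence

print(interpret("rectangle", touches_floor=False))  # 0.9 - likely a window
```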

C. Combinatorial and Scheduling Problems

An interesting class of problems is concerned with specifying optimal schedules or combinations. Many of these problems can be attacked by the methods of AI. A classical example is the traveling salesman problem. The problem here is to find a minimum distance tour, starting at one of several cities, visiting each city precisely once, and returning to the starting city. The problem generalizes to one of finding a minimum cost path over the edges of a graph containing n nodes such that the path visits each of the n nodes precisely once.

Many puzzles have this same general character. Another example is the eight-queens problem. The problem is to place eight queens on a standard chessboard in such a way that no queen can capture any of the others; that is, there can be no more than one queen in any row, column, or diagonal. In most problems of this type, the domain of possible combinations or sequences from which to choose an answer is very large. Routine attempts at solving these types of problems soon generate a combinatorial explosion of possibilities that exhaust even the capacities of large computers.
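
The eight-queens problem also shows how knowledge about the problem (here, the attack rules) can prune the search. The following standard backtracking solver abandons a partial placement as soon as two queens attack each other, visiting only a small fraction of the 8^8 naive placements (one queen per column).

```python
# Backtracking solver for the eight-queens problem described above.
# Queens are placed column by column; a partial board is abandoned the
# moment two queens share a row or diagonal, pruning most of the space
# that brute-force enumeration would visit.

def solve(n=8):
    solutions = []

    def place(rows):                   # rows[c] = row of the queen in column c
        col = len(rows)
        if col == n:
            solutions.append(tuple(rows))
            return
        for row in range(n):
            if all(row != r and abs(row - r) != col - c
                   for c, r in enumerate(rows)):   # no shared row/diagonal
                place(rows + [row])

    place([])
    return solutions

print(len(solve()))   # 92 solutions on the standard 8 x 8 board
```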

Several of these problems (including the traveling salesman problem) are members of a class that computational theorists call NP-complete. Computational theorists rank the difficulty of various problems according to how the worst-case time (or number of steps) taken to solve them by the theoretically best method grows with some measure of the problem size. (For example, the number of cities would be a measure of the size of a traveling salesman problem.) Thus, problem difficulty may grow linearly, polynomially, or exponentially, for example, with problem size.

The time taken by the best methods currently known for solving NP-complete problems grows exponentially with problem size. It is not yet known whether faster methods (involving only polynomial time, say) exist; but it has been proven that if a faster method exists for one of the NP-complete problems, then this method can be converted to similarly faster methods for all the rest of the NP-complete problems. In the meantime, we must make do with exponential-time methods.

AI researchers have worked on methods for solving several types of combinatorial problems. Their efforts have been directed at making the time-versus-problem-size curve grow as slowly as possible, even when it must grow exponentially. Several methods have been developed for delaying and moderating the inevitable combinatorial explosion. Again, knowledge about the problem domain is the key to more efficient solution methods. Many of the methods developed to deal with combinatorial problems are also useful on other, less combinatorially severe problems.
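
One simple illustration of moderating the combinatorial explosion in the traveling salesman problem: exhaustive search examines on the order of (n - 1)! tours, while the greedy nearest-neighbor heuristic sketched below builds one (not necessarily optimal) tour in roughly n^2 steps. The city coordinates are invented for illustration.

```python
# Trading guaranteed optimality for tractability: a nearest-neighbor
# heuristic builds a single tour quickly instead of enumerating all
# (n-1)! tours. The cities and coordinates are illustrative.

import math

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_tour(start="A"):
    tour, remaining = [start], set(cities) - {start}
    while remaining:
        here = cities[tour[-1]]
        nxt = min(remaining, key=lambda c: dist(here, cities[c]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour + [start]              # return to the starting city

print(nearest_neighbor_tour())         # ['A', 'B', 'C', 'D', 'A']
```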

D. Automatic Programming

The task of writing a computer program is also related to other areas of AI. Much of the basic research in automatic programming, theorem proving, and robot problem solving overlaps. In a sense, existing compilers already do "automatic programming." They take in a complete source code specification of what a program is to accomplish; they write an object code program to do it. What we mean here by automatic programming might be described as a "super-compiler" or a program that could take in a very high-level description of what the program is to accomplish and from it produce a program. The high-level description might be a precise statement in a formal language, such as the predicate calculus, or it might be a loose description, say, in English, that would require further dialogue between the system and the user in order to resolve ambiguities.

The task of automatically writing a program to achieve a stated result is closely related to the task of proving that a given program achieves a stated result. The latter is called program verification. Many automatic programming systems produce a verification of the output program as an added benefit.

One of the important contributions of research in automatic programming has been the notion of debugging as a problem-solving strategy. It has been found that it is often much more efficient to produce an errorful solution to a programming or robot control problem cheaply and then modify it (to make it work correctly) than to insist on a first solution completely free of defects.
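
The debugging strategy can be caricatured in a few lines: candidate programs are tested against a formal specification given as input/output examples, and a failing first attempt is treated as something to repair rather than a dead end. The specification and candidates below are invented, and the "repair" is simply the next candidate in line.

```python
# A toy rendition of "produce a cheap, possibly wrong solution, then
# debug it": candidates are tested against a specification given as
# input/output pairs. The spec and candidates are illustrative only.

SPEC = [(0, 0), (1, 2), (3, 6)]          # specification: f(x) = 2 * x

candidates = [
    lambda x: x + 1,                     # errorful first attempt
    lambda x: x * 2,                     # repaired version
]

def satisfies(f, spec):
    return all(f(x) == y for x, y in spec)

for version, f in enumerate(candidates):
    if satisfies(f, SPEC):
        print(f"version {version} meets the specification")
        break
    print(f"version {version} fails; debugging and retrying")
```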

E. Expert Consulting Systems

AI methods have also been employed in the development of automatic consulting systems. These systems provide human users with expert conclusions about specialized subject areas. Automatic consulting systems have been built that can diagnose diseases, evaluate potential ore deposits, suggest structures for complex organic chemicals, and even provide advice about how to use other computer systems.

A key problem in the development of expert consulting systems is how to represent and use the knowledge that human experts in these subjects obviously possess and use. This problem is made more difficult by the fact that the expert knowledge in many important fields is often imprecise, uncertain, or anecdotal (though human experts use such knowledge to arrive at useful conclusions).

Many expert consulting systems employ the AI technique of rule-based deduction. In such systems, expert knowledge is represented as a large set of simple rules, and these rules are used to guide the dialogue between the system and the user and to deduce conclusions. Rule-based deduction is one of the major topics in Nilsson's book.
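
A minimal forward-chaining sketch of rule-based deduction: expert knowledge is represented as a small set of if-then rules, which are applied repeatedly to the known facts until no new conclusions appear. The rules shown are invented for illustration.

```python
# Forward-chaining rule-based deduction: simple if-then rules are
# fired against a set of known facts until a fixpoint is reached.
# The rules and facts are illustrative, not a real diagnostic system.

RULES = [
    ({"fever", "rash"}, "measles-suspected"),
    ({"measles-suspected", "recent-exposure"}, "recommend-test"),
]

def deduce(facts):
    facts = set(facts)
    changed = True
    while changed:                        # keep firing rules to a fixpoint
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(deduce({"fever", "rash", "recent-exposure"})))
# ['fever', 'measles-suspected', 'rash', 'recent-exposure', 'recommend-test']
```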

F. Natural Language Processing

When humans communicate with each other using language, they employ, almost effortlessly, extremely complex and still little understood processes. It has been very difficult to develop computer systems capable of generating and "understanding" even fragments of a natural language, such as English. One source of the difficulty is that language has evolved as a communication medium between intelligent beings. Its primary purpose is to transmit a bit of "mental structure" from one brain to another under circumstances in which each brain possesses large, highly similar surrounding mental structures that serve as a common context. Furthermore, part of these similar, contextual mental structures allows each brain to know that the other brain also possesses this common structure and that the other brain can and will perform certain processes using it during its communication "acts." The evolution of language use has apparently exploited the opportunity for each brain to use its considerable computational resources and shared knowledge to generate and understand highly condensed and streamlined messages: A word to the wise from the wise is sufficient. Thus, generating and understanding language is an encoding and decoding problem of fantastic complexity.

A computer system capable of understanding a message in natural language would seem, then, to require (no less than would a human) both the contextual knowledge and the processes for making the inferences (from this contextual knowledge and from the message) assumed by the message generator. Some progress has been made toward computer systems of this sort, for understanding spoken and written fragments of language. Fundamental to the development of such systems are certain AI ideas about structures for representing contextual knowledge and certain techniques for making inferences from that knowledge. Although Nilsson's book does not treat the language-processing problem in detail, it does describe some important methods for knowledge representation and processing that do find application in language-processing systems.

G. Intelligent Retrieval From Databases

Database systems are computer systems that store a large body of facts about some subject in such a way that they can be used to answer users' questions about that subject. To take a specific example, suppose the facts are the personnel records of a large corporation. Example items in such a database might be representations for such facts as "Joe Smith works in the Purchasing Department," "Joe Smith was hired on October 8, 1976," "The Purchasing Department has 17 employees," "John Jones is the manager of the Purchasing Department," etc.

The design of database systems is an active subspecialty of computer science, and many techniques have been developed to enable the efficient representation, storage, and retrieval of large numbers of facts. From our point of view, the subject becomes interesting when we want to retrieve answers that require deductive reasoning with the facts in the database.
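
The flavor of deductive retrieval can be shown with the personnel example above. The stored facts never state directly that John Jones works in the Purchasing Department, but adding the common-knowledge rule that a manager works in the department he manages lets the system deduce it. The fact encoding is invented for illustration.

```python
# Deductive retrieval: answers come either from stored facts or by
# applying a common-knowledge rule ("a manager works in the department
# he manages") that the subject database omits. Encoding is illustrative.

facts = {
    ("works-in", "Joe Smith", "Purchasing"),
    ("manager-of", "John Jones", "Purchasing"),
}

def works_in(person, dept):
    """Answer by direct lookup, then by deduction from the manager rule."""
    if ("works-in", person, dept) in facts:
        return True
    return ("manager-of", person, dept) in facts   # managers work there too

print(works_in("Joe Smith", "Purchasing"))   # True - stored explicitly
print(works_in("John Jones", "Purchasing"))  # True - deduced, not stored
```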

There are several problems that confront the designer of such an intelligent information retrieval system. First, there is the immense problem of building a system that can understand queries stated in a natural language like English. Second, even if the language-understanding problem is dodged by specifying some formal, machine-understandable query language, the problem remains of how to deduce answers from stored facts. Third, understanding the query and deducing an answer may require knowledge beyond that explicitly represented in the subject domain database. Common knowledge (typically omitted in the subject domain database) is often required. For example, from the personnel facts mentioned above, an
