modify it (to make it work correctly), than to insist on a first solution completely free of defects.

E. Expert Consulting Systems

AI methods have also been employed in the development of automatic consulting systems. These systems provide human users with expert conclusions about specialized subject areas. Automatic consulting systems have been built that can diagnose diseases, evaluate potential ore deposits, suggest structures for complex organic chemicals, and even provide advice about how to use other computer systems.

A key problem in the development of expert consulting systems is how to represent and use the knowledge that human experts in these subjects obviously possess and use. This problem is made more difficult by the fact that the expert knowledge in many important fields is often imprecise, uncertain, or anecdotal (though human experts use such knowledge to arrive at useful conclusions).

Many expert consulting systems employ the AI technique of rule-based deduction. In such systems, expert knowledge is represented as a large set of simple rules, and these rules are used to guide the dialogue between the system and the user and to deduce conclusions. Rule-based deduction is one of the major topics in Nilsson's book.
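As a concrete illustration of this style, here is a minimal sketch in Python of a forward-chaining rule interpreter. It is not drawn from any actual consulting system, and the facts and rules are invented for the example: the interpreter simply fires any rule whose conditions are all established facts, until no new conclusions appear.

    # Minimal forward-chaining rule interpreter (illustrative sketch).
    # Each rule is a pair: a set of conditions and a conclusion.
    # A rule "fires" when every condition is an established fact.
    facts = {"fever", "cough"}
    rules = [
        ({"fever", "cough"}, "respiratory infection"),
        ({"respiratory infection", "chest pain"}, "suspect pneumonia"),
    ]

    changed = True
    while changed:  # repeat until a full pass adds no new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # assert the rule's conclusion
                changed = True

    print(sorted(facts))  # ['cough', 'fever', 'respiratory infection']

Deployed consulting systems add a great deal more, such as measures of uncertainty and control of the dialogue with the user, but their deductive core has this flavor.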

F. Natural Language Processing

When humans communicate with each other using language, they employ, almost effortlessly, extremely complex and still little understood processes. It has been very difficult to develop computer systems capable of generating and "understanding" even fragments of a natural language, such as English. One source of the difficulty is that language has evolved as a communication medium between intelligent beings. Its primary purpose is to transmit a bit of "mental structure" from one brain to another under circumstances in which each brain possesses large, highly similar surrounding mental structures that serve as a common context. Furthermore, part of these similar, contextual mental structures allows each brain to know that the other brain also possesses this common structure and that the other brain can and will perform certain processes using it during its communication "acts." The evolution of language use has apparently exploited the opportunity for each brain to use its considerable computational resources and shared knowledge to generate and understand highly condensed and streamlined messages: A word to the wise from the wise is sufficient. Thus, generating and understanding language is an encoding and decoding problem of fantastic complexity.

A computer system capable of understanding a message in natural language would seem, then, to require (no less than would a human) both the contextual knowledge and the processes for making the inferences (from this contextual knowledge and from the message) assumed by the message generator. Some progress has been made toward computer systems of this sort, for understanding spoken and written fragments of language. Fundamental to the development of such systems are certain AI ideas about structures for representing contextual knowledge and certain techniques for making inferences from that knowledge. Although Nilsson's book does not treat the language-processing problem in detail, it does describe some important methods for knowledge representation and processing that do find application in language-processing systems.

G. Intelligent Retrieval From Databases

Database systems are computer systems that store a large body of facts about some subject in such a way that they can be used to answer users' questions about that subject. To take a specific example, suppose the facts are the personnel records of a large corporation. Example items in such a database might be representations for such facts as "Joe Smith works in the Purchasing Department," "Joe Smith was hired on October 8, 1976," "The Purchasing Department has 17 employees," "John Jones is the manager of the Purchasing Department," etc.

The design of database systems is an active subspecialty of computer science, and many techniques have been developed to enable the efficient representation, storage, and retrieval of large numbers of facts. From our point of view, the subject becomes interesting when we want to retrieve answers that require deductive reasoning with the facts in the database.

There are several problems that confront the designer of such an intelligent information retrieval system. First, there is the immense problem of building a system that can understand queries stated in a natural language like English. Second, even if the language-understanding problem is dodged by specifying some formal, machine-understandable query language, the problem remains of how to deduce answers from stored facts. Third, understanding the query and deducing an answer may require knowledge beyond that explicitly represented in the subject domain database. Common knowledge (typically omitted in the subject domain database) is often required. For example, from the personnel facts mentioned above, an intelligent system ought to be able to deduce the answer "John Jones" to the query "Who is Joe Smith's boss?" Such a system would have to know somehow that the manager of a department is the boss of the people who work in that department. How common knowledge should be represented and used is one of the system design problems that invites the methods of Artificial Intelligence.
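A sketch of this example in Python may make the point concrete. The names are those given above; the representation itself (two explicit relations plus one procedure embodying the common knowledge) is invented for illustration:

    # Facts explicitly stored in the personnel database.
    works_in = {"Joe Smith": "Purchasing Department"}
    manager_of = {"Purchasing Department": "John Jones"}

    # Common knowledge, typically omitted from the database itself:
    # the manager of a department is the boss of the people who
    # work in that department.
    def boss_of(employee):
        department = works_in.get(employee)
        return manager_of.get(department)

    print(boss_of("Joe Smith"))  # John Jones

The answer "John Jones" appears nowhere in the stored facts as an answer to the boss question; it is deduced by combining two stored facts with one piece of common knowledge.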

H. Theorem Proving

Finding a proof (or disproof) for a conjectured theorem in mathematics can certainly be regarded as an intellectual task. Not only does it require the ability to make deductions from hypotheses, but it also demands intuitive skills such as guessing about which lemmas should be proved first in order to help prove the main theorem. A skilled mathematician uses what he might call judgment (based on a large amount of specialized knowledge) to guess accurately about which previously proven theorems in a subject area will be useful in the present proof and to break his main problem down into subproblems to work on independently. Several automatic theorem-proving programs have been developed that possess some of these same skills to a limited degree.

The study of theorem proving has been of significant value in the development of AI methods. The formalization of the deductive process using the language of predicate logic, for example, helps us to understand more clearly some of the components of reasoning. Many informal tasks, including medical diagnosis and information retrieval, can be formalized as theorem-proving problems. For these reasons, theorem proving is an extremely important topic in the study of AI methods.
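For instance, the database query of the previous subsection can be recast as a theorem to be proved (a sketch; the predicate names are ours, not the report's):

\[
\begin{aligned}
&\text{Facts:} && \mathit{WorksIn}(\mathit{JoeSmith}, \mathit{Purchasing}),\quad \mathit{Manager}(\mathit{JohnJones}, \mathit{Purchasing})\\
&\text{Rule:} && \forall x\,\forall y\,\forall d\;[\mathit{Manager}(y, d) \land \mathit{WorksIn}(x, d) \rightarrow \mathit{Boss}(y, x)]\\
&\text{Query:} && \exists z\;\mathit{Boss}(z, \mathit{JoeSmith})
\end{aligned}
\]

A theorem prover establishes the query by finding the binding \(z = \mathit{JohnJones}\), and that binding is precisely the answer to be retrieved.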

I. Social Impact[4]

The impact of computers, machine intelligence, and robotics must be examined in the broader context of their impact on society as a whole, rather than the narrower focus based on NASA needs and applications. The impact of information processing technology (and machine intelligence and robotics) on society has been considered in detail by Simon. Here we present the conclusions derived by him. The reader is referred to Simon's book for details of the reasoning and evidence that led to the conclusions presented here.

[4] This subsection is based on material presented in The New Science of Management Decision, revised edition, by Herbert A. Simon, Prentice-Hall, Englewood Cliffs, N.J., 1977. The Study Group would like to thank Professor Simon and Prentice-Hall for their kind permission for the use of the material. The reader is referred to Chapters 3 and 5 of the book for detailed discussions that lead to the conclusions presented here.

1. The Dehumanization-Alienation Hypothesis

There has been a great deal of nervousness, and some prophetic gloom, about human work in highly automated organizations. An examination of the empirical evidence, and an analysis of the arguments that have been advanced for a major impact of automation upon the nature of work, have led us to a largely negative result.

There is little evidence for the thesis that job satisfaction has declined in recent years, or that the alienation of workers has increased. Hence, such trends, being nonexistent, cannot be attributed to automation, past or prospective. Trends toward lower trust in government and other social institutions flow from quite different causes.

An examination of the actual changes that have taken place in clerical jobs as the result of introducing computers indicates that these changes have been modest in magnitude and mixed in direction. The surest consequence of factory and office automation is that it is shifting the composition of the labor force away from those occupations in which average job satisfaction has been lowest, toward occupations in which it has been higher.

The argument that organizations are becoming more authoritarian and are stifling human creativity flies in the face of long-term trends in our society toward the weakening of authority relations. Moreover, the psychological premises on which the argument rests are suspect. Far more plausible is the thesis that human beings perform best, most creatively, and with greatest comfort in environments that provide them with some intermediate amount of structure, including the structure that derives from involvement in authority relations. Just where the golden mean lies is hard to say, but there is no evidence that we are drifting farther from it.

Finally, while we certainly live in a world that is subject to continuing change, there is reason to believe that the changes we are undergoing are psychologically no more stressful, and perhaps even less stressful, than those that our parents and grandparents experienced. It appears that the human consequences we may expect from factory and office automation are relatively modest in magnitude, that they will come about gradually, and that they will bring us both disadvantages and advantages, with the latter possibly outweighing the former.

2. The Potential for Increased Unemployment

Simon presents evidence that any level of technology and productivity is compatible with any level of employment, including full employment. He suggests that the problems we face today will not cause us to retreat from high technology (such a retreat would not be consistent with meeting the needs of the world's population), but that they will bring about a substantial qualitative shift in the nature of our continuing technological progress. For future increases in human productivity, we will look more to the information-processing technologies than to the energy technologies. Because of resource limitations and because of shifting patterns of demand with rising real incomes, a larger fraction of the labor force than at present will be engaged in producing services, and a smaller fraction will be engaged in producing goods. But there is no reason to believe that we will experience satiety of either goods or services at full employment levels.

3. The Impact on Resources and Environment

Technology is knowledge, and information-processing technology is knowledge of how to produce and use knowledge more effectively. Modern instruments (those, for example, that allow us to detect trace quantities of contaminants in air, water, and food) inform us about consequences of our actions of which we were previously ignorant. Computers applied to the modeling of our energy and environmental systems trace out for us the indirect effects of actions taken in one part of our society upon other parts. Information-processing technology is causing all of us to take account of the consequences of our actions over spans of time and space that seldom concerned us in the past. It is placing on us, perhaps forcing on us, the responsibilities of protecting future generations as well as our own. In this way, the new technology, the new knowledge, is helping to redefine the requirements of morality in human affairs.

J. Conclusion

In this section we have attempted to provide a broad introductory tutorial to AI. Detailed discussion of the methods and techniques of AI and the wide range of problem domains in which they have been applied is given in various survey articles by Minsky (1963), Newell (1969), Nilsson (1974), and Feigenbaum (1978), all of which appear as Appendixes B to E of this report. Appendix F (Newell, 1970) discusses the relationship between artificial intelligence and cognitive psychology. (The book Introduction to Artificial Intelligence, by Patrick H. Winston, also provides an excellent introduction to the field.)

NASA Needs

NASA is, to a significant degree, an agency devoted to the acquisition, processing, and analysis of information about the Earth, the solar system, the stars, and the universe. The principal goal of NASA's booster and space vehicle commitment is to acquire such scientific information for the benefit of the human species. As the years have passed and NASA has mustered an impressive array of successful missions, the complexity of each mission has increased as the instrumentation and scientific objectives have become more sophisticated, and the amount of data returned has also increased dramatically. The Mariner 4 mission to Mars in 1965 was considered a striking success when it returned a few million bits of information. The Viking mission to Mars, launched a decade later, acquired almost ten thousand times more information. Comparable advances have been made in Earth resources and meteorological satellites, and across the full range of NASA activities. At the present time, the amount of data made available by NASA missions is larger than scientists can comfortably sift through. This is true, for example, of Landsat and other Earth resources technology satellite missions. A typical information acquisition rate in the 1980s is about 10^12 bits per day for all NASA systems. In two years, this is roughly the total nonpictorial information content of the Library of Congress. The problem is clearly getting much worse. We have reached a severe limitation in the traditional way of acquiring and analyzing data.
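As a rough check of that comparison, using only the figures quoted above:

\[
10^{12}\ \text{bits/day} \times 2\ \text{years} \times 365\ \text{days/year} \approx 7.3 \times 10^{14}\ \text{bits},
\]

which is the order of magnitude implied here for the nonpictorial content of the Library of Congress.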

A recent study at JPL estimates that NASA could save 1.5 billion dollars per year by A.D. 2000 through serious implementation of machine intelligence. Given different assumptions, the saving might be several times less or several times more. It is clear, however, that the efficiency of NASA activities in bits of information per dollar and in new data acquisition opportunities would be very high were NASA to utilize the full range of modern computer science in its missions. Because of the enormous current and expected advances in machine intelligence and computer science, it seems possible that NASA could achieve orders-of-magnitude improvement in mission effectiveness at reduced cost by the 1990s.

Modern computer systems, if appropriately adapted, are expected to be fully capable of extracting relevant data either onboard the spacecraft or on the ground, in user-compatible format. Thus, the desired output might be a direct graphic display of snow cover, or crop health, or global albedo, or mineral resources, or storm system development, or hydrologic cycle. With machine intelligence and modern computer graphics, an immense amount of data can be analyzed and reduced to present the scientific or technological results directly in a convenient form. This sort of data-winnowing and content analysis is becoming possible, using the developing techniques of machine intelligence. But it is likely to remain unavailable unless considerably more relevant research and systems development is undertaken by NASA.

The cost of ground operations of spacecraft missions and the number of operations per command uplinked from ground to spacecraft are increasing dramatically (Figures 3-1 and 3-2). Further development of automation can, at the same time, dramatically decrease the operations costs of complex missions and dramatically increase the number and kinds of tasks performed, and therefore, the significance of the data returned. Figures 3-3 and 3-4 illustrate schematically how improved automation can produce a significant decline in the cost of mission operations. The projected reallocation of responsibility during mission operations between ground-based humans and spacecraft computer processing is shown in Figure 3-5. There are many simple or repetitive tasks which existing machine intelligence technology is fully capable of dealing with more reliably and less expensively than if human beings were in the loop. This, in turn, frees human experts for more difficult judgmental tasks. In addition, existing and projected advances in robot technology would largely supplant the need for manned missions, with a substantial reduction in cost.

[Figure 3-1: graph not reproduced in this scan; recoverable labels include "relative cost per ground command," "operations per command," and "autonomous mission control," with tick values spanning roughly 1968 to 1986.]

Figure 3-2. Trend of spacecraft automation. As a relative indicator, the level of automation is measured by the different elementary functions the spacecraft can perform in an unpredictable environment between ground commands. A 100-fold improvement through advanced automation is projected by the year 2000.

Figure 3-3. Trend of cost to generate ground commands. A four-fold improvement through advanced automation is projected by the year 2000 through (1) performing more ground functions on the spacecraft, and (2) automating the remaining functions on the ground.

[Figure 3-4: graph not reproduced in this scan; recoverable axis labels include "total number of decisions per operation," "relative cost per mission operation," and "decisions made by machine."]

Figure 3-5. Trend of function (decision) allocation between humans and spacecraft. For the same typical operation, the machine takes over an increasing number of elementary functions, leaving high-level decisions to human beings.
