ARTIFICIAL INTELLIGENCE

Nils J. NILSSON
Artificial Intelligence Center, Stanford Research Institute

(INVITED PAPER)

This paper is a survey of Artificial Intelligence (AI). It divides the field into four core topics (embodying the base for a science of intelligence) and eight applications topics (in which research has been contributing to core ideas). The paper discusses the history, the major landmarks, and some of the controversies in each of these twelve topics. Each topic is represented by a chart citing the major references. These references are contained in an extensive bibliography. The paper concludes with a discussion of some of the criticisms of AI and with some predictions about the course of future research.

1. INTRODUCTION

Can we ever hope to understand the nature of intelligence? The field of Artificial Intelligence (AI) has as its goal just such an understanding: the base for a science of intelligence. These are the emerging beliefs of a group of computer scientists, and these researchers have produced working models, in the form of computer programs, of various intellectual abilities. Whether the activities of these workers constitute a science or not, their results bear on man's understanding of himself and on the way in which he views himself. In this paper, I survey this work.

Before beginning we must discuss an important characteristic of the field: AI does not seem to retain its successful applications; once a technique becomes well understood and useful, it is claimed by some application area and is no longer regarded as AI. On reflection, this is not surprising. When a field's mechanisms mature, they are absorbed by the areas that use them.

Destined apparently to lack an applied branch, will AI nevertheless continue to grow and contribute needed ideas to applications in other areas? I think the answer is yes. Just what form these central ideas will ultimately take is difficult to discern now. Will AI be something like biology, diverse but still united by the common structure of DNA? What will be the DNA of AI? Or will the science of AI be more like the whole of science itself, united by little more than some vague general principles such as the scientific method? It is probably too early to tell. The present central ideas seem more specific than the scientific method but less concrete than DNA.

2. WHAT IS HAPPENING IN AI?
2.1 The structure of the field

As a tactic in attempting to discover the basic principles of intelligence, AI researchers have set themselves the preliminary goal of building computer programs that can perform various intellectual tasks that humans can perform. There are major projects currently under way whose goals are to understand natural language (both written and spoken), play master chess, prove non-trivial mathematical theorems, write computer programs, and so forth. These projects serve two purposes. First, they provide the appropriate settings in which the basic mechanisms of intelligence can be discovered and clarified. Second, they provide non-trivial opportunities for the application and testing of such mechanisms as are already known. I am calling these projects the first-level applications of AI.

The four core topics (among them techniques for common sense reasoning, deduction, and problem solving, and techniques for heuristic search) are shown at the center of Figure 1.

If an application is particularly successful, it might be noticed by specialists in the application area and developed by them as a useful and economically viable product. Such applications we might call second-level applications to distinguish them from the first-level applications projects undertaken by the AI researchers themselves. Thus, when AI researchers work on a project to develop a prototype system to understand speech, I call it a first-level application. If General Motors were to develop and install in their assembly plants a system to interpret television images of automobile parts on a conveyor belt, I would call it a second-level application. (We should humbly note that perhaps several second-level applications will emerge without benefit of obvious AI parentage. In fact, these may contribute mightily to AI science itself.) Thus, even though I agree that AI is a field that cannot retain its applications, it is the second-level applications that it lacks. These belong to the applications areas themselves.
Until all of the principles of intelligence are uncovered, AI researchers will continue to search for them in various first-level applications areas. Figure 1, then, divides work in AI into twelve major topics. I have attempted to show the major papers, projects, and results in each of these topics in Charts 1 through 12, each containing references to an extensive bibliography at the end of this paper. These charts help organize the literature as well as indicate something about the structure of work in the field. By arrows linking boxes within the charts, we attempt to indicate how work has built on (or has been provoked by) previous work. The items in the bibliography are coded to indicate the subheading to which they belong. I think that the charts (taken as a whole) fairly represent the important work, even though there may be many differences of opinion among workers about some of the entries (and especially about how work has built on previous work).

Techniques for common sense reasoning, deduction, and problem solving. By reasoning, etc., we mean the major processes involved in using knowledge: using it to make inferences and predictions, to make plans, to answer questions, and to obtain additional knowledge. As a core topic, we are concerned mainly with reasoning about everyday, common domains (hence, common sense), because such reasoning is fundamental, and we want also to avoid the possible trap of developing techniques applicable only to some specialized domain. Nevertheless, contributions to our ideas about the use of knowledge have come from all of the applications areas.

There have been three major themes evident in this core topic. We might label these puzzle-solving, question-answering, and common-sense reasoning.

Puzzle-solving. Early work on reasoning concentrated on writing computer programs that could solve simple puzzles (the tower of Hanoi, missionaries and cannibals, logic problems, etc.). The Logic Theorist and GPS (see Chart 1) are typical examples.
From this work certain problem-solving concepts were developed and clarified in an uncluttered atmosphere. Among these were the concepts of heuristic search, problem spaces and states, operators (that transformed one problem state into another), goal and subgoal states, means-ends analysis, and reasoning backwards. The fact

* In particular, some might reasonably claim machine vision (or, more generally, perception) and language understanding to be core topics.
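The vocabulary of this early work (problem states, operators, goal states, and search through a problem space) can be illustrated with a minimal sketch. The sketch below solves the missionaries-and-cannibals puzzle mentioned above by uninformed breadth-first state-space search; the state encoding, operator set, and function names are illustrative assumptions, not reconstructions of the Logic Theorist or GPS.

```python
from collections import deque

# Problem state: (missionaries on left bank, cannibals on left bank, boat on left?)
START, GOAL = (3, 3, True), (0, 0, False)

# Operators: how many missionaries and cannibals one boat trip carries
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]

def safe(m, c):
    # Missionaries are never outnumbered by cannibals on either bank
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    # Apply each operator that yields a legal state
    m, c, boat = state
    sign = -1 if boat else 1          # the boat leaves the bank it is on
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, not boat)

def breadth_first_search(start, goal):
    # Explore the problem space level by level, remembering each state's
    # parent so the solution path can be reconstructed from the goal state.
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

solution = breadth_first_search(START, GOAL)
print(len(solution) - 1)   # number of river crossings in the solution found
```

Heuristic search in the sense discussed above would replace the first-in-first-out frontier with one ordered by an evaluation function, while means-ends analysis would instead choose operators by the differences they reduce between the current state and the goal state.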