It would not be unreasonable to expect that current and future experimentation will lead to the crystallization of additional concepts (such as, perhaps, Minsky's (1974) Frame Systems) that will be incorporated in a new round of AI languages, possibly in the late 1970s.
2.3 First-level applications topics
2.3.1 Game playing (Chart 3)
Programs have been written that can play several games that humans find difficult. As the most famous example, we might mention the chess playing program, MAC-HACK, of Greenblatt et al. (1967). A version of this program achieved a United States Chess Federation rating of 1720 in one tournament. Samuel's programs for checkers have beaten experts in the game. Several other programs are mentioned in the chart.
Levy (1970) described a program written by Atkin, Slate, and Gorlen at Northwestern University and said that he thought it was stronger than Greenblatt's. He estimated its rating at about 1750, which, he claims, would make it the 500th best player in Britain.
Computer chess tournaments are now held routinely. Results of these and other news about computer chess have been rather extensively reported in the SIGART Newsletter since 1972.
Most game playing programs still use rather straightforward tree-searching ideas and are weak in their use of high-level strategic concepts. It is generally agreed that advances in the use of strategy and in end-game play are necessary before chess programs can become substantially better, and they must become substantially better before they can beat human champions. (World Champion Bobby Fischer is rated at about 2810.) Levy (1970) is rather pessimistic about the rate of future progress in chess and has made a £750 bet with Professors McCarthy, Papert, and Michie that a program cannot beat him in a match by August 1978. (Levy's rating in 1970 was 2380.)
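The "straightforward tree-searching ideas" mentioned above can be sketched in a few lines. The following is a minimal illustration of plain minimax over a hand-built game tree; real chess programs of the period added alpha-beta pruning, board evaluation functions, and move generation, all omitted here.

```python
# Minimal minimax sketch: leaves are numeric scores for the maximizing
# player; interior nodes are lists of child positions.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):
        return node                       # leaf: static evaluation
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: the maximizer picks the branch whose worst case is best.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # 3
```

The exponential growth of such trees with search depth is precisely why the strategic shortcuts discussed above are needed.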
2.3.2 Math, science, and engineering aids (Chart 6)
The chart lists just a few examples of AI techniques that have been applied in systems that help human professionals. The early AI work on symbolic integration, together with the work on algebraic simplification, contributed to a number of systems for symbolic mathematical computations. Moses (1971b) presents a good review. Systems presently exist that can solve symbolically an equation like 3x^2 - 3y^2 + 2 = 0 (for x), and that can integrate symbolically an expression like ∫(x + e^x)^2 dx. Such systems are quite usefully employed in physics research, for example, in which expressions arise having hundreds of terms.
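The simplest case of such symbolic computation is worth making concrete. The sketch below manipulates polynomials in one variable as coefficient lists; it is a toy illustration of the idea, not the method of the systems cited above, which handled far richer expression classes.

```python
# A polynomial is a list of coefficients: coeffs[i] multiplies x**i.

def differentiate(coeffs):
    # d/dx sum(c_i * x^i) = sum(i * c_i * x^(i-1))
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def integrate(coeffs):
    # Antiderivative with constant of integration taken as 0.
    return [0] + [c / (i + 1) for i, c in enumerate(coeffs)]

p = [2, 0, 1]                  # x^2 + 2
print(differentiate(p))        # [0, 2]        i.e. 2x
print(integrate(p))            # x^3/3 + 2x (as coefficients)
```

Symbolic systems work the same way in spirit: expressions are data structures, and calculus rules are transformations on those structures.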
research) are now commonly used in many AI programs
2.2.4 AI systems and languages (Chart 4)
The programming languages developed and used by AI
After some years of research using these languages,
Edward Feigenbaum once characterized progress in AI
We do not have space here to trace the development
Currently, a large part of AI research is being con-
Another quite successful application is the DENDRAL program that hypothesizes chemical structures from
combination of mass spectrogram and nuclear magnetic resonance data. The system is presented with this data from a sample of a known chemical compound (that is, its chemical formula is known). It uses several levels of knowledge about chemical structures and how they break up in mass spectroscopy to infer the structure of the compound. It can deal with a large number of organic compounds including complex amines and estrogenic steroids. Its performance on the steroids often exceeds the best human performance.
results announced without proof in the Notices of the American Mathematical Society.
Various strategies were developed to improve the efficiency of the resolution provers. These strategies were mainly based on the form or syntax of the expressions to be proved and not on any special knowledge or semantics of the domain. In automatic theorem proving, just as in other applications areas, semantic knowledge was needed to improve performance beyond the plateau reached by the late 1960s.
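The resolution rule discussed above can be shown in miniature. The following is a sketch of a purely propositional resolution prover by refutation; Robinson's full method also handles first-order unification, and the syntactic strategies mentioned above would prune the brute-force saturation used here.

```python
# Clauses are frozensets of literals; a negated literal is written "-p".

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolve(c1, c2):
    # All resolvents: cancel one complementary pair of literals.
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def refute(clauses):
    # Saturate; deriving the empty clause means unsatisfiability.
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True        # empty clause derived
                    new.add(frozenset(r))
        if new <= clauses:
            return False                   # saturated, no contradiction
        clauses |= new

# Prove q from p and (p implies q): refute {p, -p|q, -q}.
print(refute([{'p'}, {'-p', 'q'}, {'-q'}]))  # True
```

The example also shows why semantic guidance became necessary: unrestricted saturation generates resolvents blindly, regardless of their relevance to the goal.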
The work of Bledsoe and his students is typical of the third and latest theme in automatic theorem proving. Although they emphasize the importance of man-machine systems, their programs themselves have become knowledge-based specialists in certain mathematical domains. The use of semantic knowledge in theorem-proving systems has also renewed interest in heuristics for subgoaling, and so forth. The programs of this group are capable of proving some rather impressive theorems, and it can be expected that the present man-machine systems will produce ever more competent and more completely automatic offspring.
2.3.4 Automatic programming (Chart 8)
Work in automatic programming has two closely interrelated goals. One is to be able to prove that a given program acts in a given way; the other is to synthesize a program that (provably) will act in a given way.
The first might be called program verification and the second program generation. Work on one goal usually contributes to progress toward the other; hence, we combine them in our discussion.
Most of the work on program verification is based on a technique proposed by Floyd (1967). (See also Turing (1949).) This technique involves associating assertions with various points in the flow chart of a program and then proving these assertions. Originally, the assertions had to be provided by a human, but some recent work has been devoted to generating the assertions automatically. Once proposed, the assertions can be proved either by a human or by a machine. The latter course involves a close link between this field and that of automatic theorem proving.
A recent system developed at the Stanford Research Institute [Elspas et al. (1973)] is typical of one in which the assertions are both produced [Elspas (1972)] and proved [Waldinger and Levitt (1973)] automatically. This system has been used to verify several programs including a real-number division algorithm and some sort programs. It has also proved theorems about a pattern matcher and a version of Robinson's (1965) unification algorithm. It is a good example of a modern AI program in that it makes effective use of a large amount of domain-specific knowledge.
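The flavor of Floyd-style assertions can be conveyed with a small sketch. Here an integer division routine (a simpler cousin of the division algorithm mentioned above) is annotated with a precondition, a loop invariant, and a postcondition. Floyd's method proves such assertions once and for all; merely executing them, as below, only illustrates where they attach.

```python
def integer_division(a, b):
    # Precondition
    assert a >= 0 and b > 0
    q, r = 0, a
    while r >= b:
        # Loop invariant: a == q*b + r and r >= 0
        assert a == q * b + r and r >= 0
        q, r = q + 1, r - b
    # Postcondition: quotient and remainder, with r in range
    assert a == q * b + r and 0 <= r < b
    return q, r

print(integer_division(17, 5))  # (3, 2)
```

A verifier's job is exactly to show that each such assertion follows from the previous one along every path through the program, for all inputs satisfying the precondition.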
The DENDRAL project typifies a style of AI system building that has been quite successfully applied to chemistry and some other domains. This design style involves intensive interaction between AI scientists and applications-area scientists. The latter are queried in the minutest detail to extract from them rules and other knowledge that are operationally useful in the domain. These are then coded into the system by the AI scientists, and tests are run to judge their effectiveness. The process is long and involves several iterations. The applications scientists are often confronted with apparent contradictions between how they say they make decisions and how they actually make decisions. Few of them have any really global or completely accurate theory of how they apply their knowledge. Furthermore, this knowledge is often informal and heuristic. As a result, the emerging system is a collection of "mini-theories" and special rules of only local effectiveness. To use this design strategy, the system must be one that can deal with many, and sometimes conflicting, mini-theories. It must also be a system to which new knowledge can gradually be added and old knowledge modified.
After several months or years of this sort of gradual shaping of the system, it comes to simulate the performance of the human experts whose knowledge it has gained. This general strategy is beginning to be employed extensively in AI applications. [For example, see also Shortliffe et al. (1973).]
2.3.3 Automatic theorem proving (Chart 7)
There are three major themes evident in attempts to get computer programs to prove theorems in mathematics and logic. First, early work by AI researchers produced heuristic programs that could prove simple theorems in propositional logic and high-school-level theorems in plane geometry. These programs used (but mainly helped to refine) concepts like reasoning backwards, means-ends analysis, use of subgoals, and the use of a model to eliminate futile search paths. The fact that logicians had already developed powerful procedures that effectively eliminated propositional logic as a domain requiring heuristic problem-solving techniques does not detract from the value of this early work.
Logicians were also developing techniques for proving theorems in the first order predicate calculus. J. A. Robinson (1965) synthesized some of this work into a procedure for using a single rule of inference, resolution, that could easily be mechanized in computer programs. Building resolution-based provers quickly became a second theme in automatic theorem proving, while other approaches languished. Resolution had a great influence on other application areas as well (Charts 1 and 8). Performance of the resolution systems reached impressive, if not superhuman, levels. Programs were written that could prove reasonably complex, sometimes novel, theorems in certain domains of mathematics. The best performance, however, was achieved by man-machine systems in which a skilled human provided strategic guidance, leaving the system to verify lemmas and to fill in short chains of deduction. (See especially Guard et al. (1969) and Allen and Luckham (1970).) The latter system has been used to obtain proofs of new mathematical
lead to a resurgence of interest in general robot systems, perhaps during the late 1970s.
write a simple "let's-hope-that-this-will-do" program, and then debugging it until it does succeed at its task. To employ this strategy, HACKER uses a great deal of knowledge about likely classes of program bugs and how to fix them.
2.3.6 Machine vision (Chart 10)
Again, some of the most successful work has been in connection with man-machine systems. We include in this category certain aids to human programmers such as those found in the INTERLISP system (Teitelman (1972a, b, 1973)). In fact, any techniques that help make the production of programs more efficient might be called part of automatic programming. Balzer (1972) provides a good summary of this broad view of the field.
The ability to interpret visual images of the world is adequate, even in some insects, to guide many complex behavior patterns. Yet the analysis of everyday visual scenes by machine still remains a largely unconquered challenge to AI researchers. Early work concentrated almost exclusively on designing systems that could classify two-dimensional images into a small number of categories--alphanumeric character recognition, for example. In fact, much of the AI work during the 1950s was concerned with pattern recognition. Researchers such as Frank Rosenblatt and Oliver Selfridge were influential in shaping this early period. Pattern classification (or recognition) continues as a separate active research interest, but since about 1965, AI interest in vision has centered on the more difficult problem of interpreting and describing complex three-dimensional scenes. Both aspects, classification and description, are thoroughly and clearly treated in an excellent textbook by Duda and Hart (1973).
2.3.5 Robots (Chart 9)
Every now and then, man gathers up whatever technology happens to be around and attempts to build robots. During the late 1960s, research on robots provided a central focus for integrating much of the AI technology. To build an intelligent robot is to build a model of man. Such a robot should have general reasoning ability, locomotive and manipulative skills, perceptual (especially visual) abilities, and facility with natural language. Thus, robot research is closely linked with several other applications areas. In fact, most of the research on machine vision (Chart 10) was, and is, being performed in connection with robot projects.
Our problem-solving and representational techniques are probably already adequate to allow useful general purpose robot applications; however, such robots would be perceptually impoverished until we develop much more powerful visual abilities. Robotics is a particularly good domain in which to pursue the necessary vision research.
Much of the scene analysis work can be traced to Roberts' (1963) influential thesis. It established a trend of analyzing scenes composed of prismatic solids (the so-called "blocks world"). Working with these (sometimes complex) scenes composed of simple objects helped to establish a wide range of techniques for converting raw video images into symbolic descriptions based on concepts such as lines, regions, and simple shapes. The MIT "COPY" system, for example, can use a visual input device to look at a scene consisting of a structure of blocks. The system can analyze the scene to form a representation of how the blocks are arranged. This representation can then later be used (with the robot arm system) to reproduce this exact block structure from disarranged blocks.
The robot research of the late 1960s produced systems capable of forming and then intelligently executing plans of action based on an internal model of the world. The Edinburgh, Stanford, HITAC, and MIT systems consisted of manipulator arms and TV cameras or other visual input devices. These became capable of building structures out of simple blocks. In one case (Stanford), the system could assemble an automobile water pump. The Stanford Research Institute system consisted of a mobile cart and TV camera (but no arm). It could form and execute plans for navigating through a simple environment of rooms, doorways, and large blocks, and its visual system could recognize and locate doorways, floor-wall boundaries, and the large blocks. The system had sophisticated techniques to allow it to recover from errors and unforeseen circumstances, and it could store (learn) generalized versions of the plans it produced for future use.
Some successful excursions outside the blocks world have been made. (See the entries to the right of the dashed line in Chart 10.) Indeed, many researchers contend that continuing to work with blocks has actually hindered research progress in machine vision because it has allowed workers to avoid facing certain key problems associated with domain semantics, distinguishing features of complex objects, and new representational schemes. In any case, working with more complex scenes is now well established. The spirit of recent work is well described in a note by Tenenbaum (1973). Again, knowledge about the domain is crucial to scene interpretation!
2.3.7 Natural language systems (Chart 11)
Since practical applications of general purpose robot systems seem more remote than they do in other applications areas, the increasingly pragmatic research climate of the early 1970s has seen a lessening of activity in general robotics research. In the meantime, various projects with the practical goal of advancing industrial automation have begun to apply some of the already-developed manipulative and visual skills to factory assembly and inspection problems. It seems reasonable to predict that man's historic fascination with robots, coupled with a new round of advances in vision and reasoning abilities, will
Processing written and spoken forms of natural languages has been the subject of so much research that the task of surveying progress in this field alone should humble anyone. How, then, are we to say anything very helpful in just a few sentences? The interested reader should see the GENERAL volumes cited in Chart 11.
Perhaps a description of the power of some representative systems will suffice. Winograd (1971)
implemented a system that carried on "blocks-world" written dialogs like the following:
Man (typing into a terminal): Which cube is sitting on the table?
System (generating text on a display): The large green one which supports the red pyramid.
M: Is there a large block behind a pyramid?
S: Yes, three of them: a large red one, a large green cube, and the blue one.
M: Put a small one onto the green cube which supports a pyramid.
because the concepts out of which to construct these theories had not yet been formulated. Before we have the concepts (and they are now gradually accumulating) it is as impossible to understand human thought as it was impossible to understand navigation, say, before we had the concept of sonar. Man understands the world by constructing models, and his models are often based on concepts drawn from his technological inventions. We may not understand man immediately after building the first robot, but we certainly won't understand him before. [We note in passing that knowledge about the structure and function of the neuron (or any other basic component of the brain) is irrelevant to the kind of understanding of intelligence that we are seeking. So long as these components can perform some very simple logical operations, it doesn't really matter whether they are neurons, relays, vacuum tubes, transistors, or whatever.]
The system demonstrates its understanding of the last two commands by having a simulated robot arm carry out appropriate actions in a simulated blocks world.
The work of Schank (1972) typifies a rather successful trend in natural language understanding. Many of the recent systems, in one way or another, attempt to match a section of input text or utterance against semantically likely stored structures (more or less complex). These structures are themselves schemas or scenario families having variables that are bound to constants in the input during matching. The instantiated scenarios serve as a sort of deep structure that represents the meaning of the utterance. (See also Minsky (1974).)
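The variable-binding step described above can be sketched very simply. The toy matcher below works over token lists rather than the meaning structures the real systems used, but it shows how matching a schema against input instantiates the schema's variables.

```python
# Variables are symbols starting with "?"; matching binds them to input
# constants, and a variable must bind consistently throughout.

def match(pattern, tokens, bindings=None):
    """Return a dict of variable bindings on success, else None."""
    bindings = dict(bindings or {})
    if len(pattern) != len(tokens):
        return None
    for p, t in zip(pattern, tokens):
        if p.startswith('?'):
            if bindings.setdefault(p, t) != t:
                return None                # conflicting binding
        elif p != t:
            return None                    # literal mismatch
    return bindings

schema = ['?agent', 'gave', '?object', 'to', '?recipient']
print(match(schema, ['john', 'gave', 'a-book', 'to', 'mary']))
# {'?agent': 'john', '?object': 'a-book', '?recipient': 'mary'}
```

The resulting bindings are exactly the "instantiated scenario": a filled-in schema that can stand in for the meaning of the input.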
An excellent short account of the relationship between AI and psychology has been written by Newell (1970). While he, perhaps prudently, adopts a somewhat less extreme position than mine about the dependence of psychology on AI, he nevertheless shows how thoroughly information processing ideas have penetrated psychological theory.
Most of the information-processing-based psychology to date has been devoted to explaining either memory (e.g., EPAM and HAM in Chart 12), perception (e.g., Sternberg (1966)), or problem solving (e.g., Newell and Simon (1972)). Probably the most complete attempt at understanding human problem-solving ability is the last-mentioned work of Newell and Simon. This volume proposes an information processing theory of problem solving based on the results of many years of research in psychology and AI.
The goals of a coordinated scientific effort to produce systems to understand limited utterances of continuous speech are clearly outlined in a plan by Newell et al. (1971). If the goals are met, by 1976 a prototype system should be able (in the context of a limited domain of discourse) to understand (in a few times real time) an American (whose dialect is not extremely regional) speaking (in a "natural" manner) ordinary (although perhaps somewhat simple) English sentences constructed from a 1000-word vocabulary. These projects bring together workers in acoustics and speech research as well as in AI. The projects seem to be more or less on schedule and will probably achieve creditable performance by 1976. (In the spirit of the vagueness of the phrase "a few times real time," the projects ought to achieve the 1976 goals at least some time in the late 1970s.)
Animal behavior, while long the special interest of experimental psychologists, has had little information-processing-based theoretical attention. Some models inspired by ethologists have been proposed by Friedman (1967). I think that the production system model advanced to explain certain human problem-solving behavior by Newell (1967) and colleagues might be a starting point for an extensive theory of animal behavior. Newell himself notes that these production systems can be viewed as generalizations of stimulus-response systems. [Incidentally, the entire repertoire of what was called "intermediate-level actions" of the Stanford Research Institute robot system (Raphael et al. 1971) was independently programmed in almost exactly this production formalism. Production systems have been used in other AI programs as well.] Newell and Simon (1972, p. 803) have also stated that they "have a strong premonition that the actual organization of human problem solving programs closely resembles the production system organization ...." It would seem profitable, then, to attempt to trace the evolutionary development of this hypothesized production system organization down through some of the higher animals at least.
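A production system of the kind discussed above is easy to sketch: an ordered list of condition-action rules repeatedly matched against a working memory until quiescence. The rule contents below (doorway-following, echoing the stimulus-response flavor of the SRI robot's intermediate-level actions) are illustrative inventions, and real production systems use far richer conflict-resolution strategies than "first rule that adds something new."

```python
def run(rules, memory, limit=20):
    """Fire rules against working memory until no rule fires."""
    memory = set(memory)
    for _ in range(limit):
        for condition, additions in rules:
            # A rule fires if its condition holds and its action
            # actually changes working memory.
            if condition <= memory and not additions <= memory:
                memory |= additions
                break
        else:
            break                          # quiescence: no rule fired
    return memory

rules = [
    ({'sees-doorway'}, {'approach-doorway'}),
    ({'approach-doorway'}, {'pass-through-doorway'}),
]
print(sorted(run(rules, {'sees-doorway'})))
```

Read as stimulus-response pairs, each rule maps a perceived situation to an action, which is what makes the generalization to animal behavior plausible.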
In my opinion, the work in natural language understanding is extremely important both for its obvious applications and for its future potential contributions to the core topics of AI. It is the prime example of a field in which reasonable performance could not be achieved by knowledge-impoverished systems. We now know that understanders need large amounts of knowledge; the challenge is to attempt to build some really large systems that have the adequate knowledge and to learn, by our mistakes, the organizational principles needed to keep these large systems from becoming unwieldy.
2.3.8 Information processing psychology (Chart 12)
Computer science in general and AI in particular have had a tremendous impact on psychology. They provide the concepts and the very vocabulary out of which to construct the most useful theories of human behavior. In my opinion the reason that, say, prior to 1955, there were, in fact, no adequate theories of human behavior, perception, and cognition is
In summary, we see that the AI campaign is being waged on several different fronts, and that the victories, as well as the setbacks, contribute to a growing common core of ideas that aspires to be a science of intelligence. Against this background,
it is worth mentioning some of the popular criticisms of AI:
On the other hand, what if we already have most of the ideas that we are going to get, ideas like millions of coordinated mini-theories, procedural embedding of knowledge, associative retrieval, and scenario frames? Suppose that we have now only to devote the large effort required to build really huge intelligent systems based on these ideas. To my knowledge, no one advocates this alternative view, but consider this: whatever the nature of an intelligent system, it will be exceedingly complex. Its performance will derive in large part from its complexity. We will not be sure that AI is ready to build a large, intelligent system until after we have done so. The elegance of the basic ideas and the new and powerful languages alone will not be sufficient indication of our maturity. At some time, we will have to put together exceedingly complex systems. The time at which it is appropriate to try will always be a guess.
My guess is that we still have a good deal of work to do on the problem of how to obtain, represent, coordinate, and use the extensive knowledge we now know is required. But these ideas will not come to those who merely think about the problem. They will come to those who both think and experiment with much larger systems than we have built so far.
(1) AI hasn't really done anything yet. There are a few "toy" programs that play middling chess and solve simple puzzles like "missionaries and cannibals," but the actual accomplishments of AI measured against its promises are disappointing. (See, for example, Dreyfus (1965, 1972).) [My comment about this kind of criticism is that its authors haven't really looked at AI research past about 1960.]
(2) Not only has AI not achieved anything, but its goals are actually impossible. Thus, AI is something like alchemy. It is impossible in principle to program into computers such necessities of intelligence as "fringe consciousness" and "perspicuous grouping." (Again, see Dreyfus (1965, 1972).) [This kind of criticism is actually rather brave in view of the fate of many previous impossibility predictions. This attack simply looks like a poor bet to me.]
(3) The subject matter of AI, namely intelligence, is too broad. It's like claiming science is a field. [This criticism may have some merit.]
(4) Everything happening in AI could just as well happen in other parts of computer science, control engineering, and psychology. There is really no need for this AI "bridge" between already established disciplines. (See Lighthill (1973).) [This kind of criticism caused quite a stir in Great Britain recently. I think I have shown that the so-called bridge has quite a bit of internal structure and is contributing a heavy traffic of ideas into its termini.]
(5) AI is impossible because it is attempting to reduce (to understanding) something fundamentally "irreducible." Furthermore, this very attempt is profane; there are certain awesome mysteries in life that best remain mysterious. (See Roszak (1972).) [My prejudice about this view is that, at best, it is, of course, nonsense. A blind refusal even to attempt to understand is patently dangerous. By all means, let us not foreclose a "rhapsodic understanding" of these mysteries, but let us also really understand them.]
(6) AI is too dangerous, so it probably ought to be abandoned--or at least severely limited. (See Weizenbaum (1972).) [My view is that the potential danger of AI, along with all other dangers that man presents to himself, will survive at least until we have a science that really understands human emotions. Understanding these emotions, no less than understanding intelligence and perception, will be an ultimate consequence of AI research. Not to understand them is to be at their mercy forever, anyway.]
Another problem, of a more practical type, concerns knowledge acquisition. Today, the knowledge in a program must be put in "by hand" by the programmer, although there are beginning attempts at getting programs to acquire knowledge through on-line interaction with skilled humans. To build really large, knowledgeable systems, we will have to "educate" existing programs rather than attempt the almost impossible feat of giving birth to already competent ones. [Some researchers (e.g., Papert, 1972) expect that at least some of the principles we discover for educating programs will have an impact, perhaps revolutionary, on how we educate people.]
In this connection, we have already mentioned that several successful AI systems use a combination of man and machine to achieve high performance levels. I expect this research strategy to continue and to provide the setting in which the human expert(s) can gradually transfer skills to the machine. [Woods and Makhoul (1973) consciously apply a strategy such as this and call it "incremental simulation."]
The one criticism having any weight at all, I think, is that AI may be too broad and diverse to remain a cohesive field. So far, it has stayed together reasonably well. Whether it begins to fractionate into separate exotic applications areas of computer science depends largely, I think, on whether these applications continue to contribute core ideas of great generality.
I have not yet mentioned in this paper the subject of learning. It is because I have come to agree with John McCarthy that we cannot have a program learn a fact before we know how to tell it that fact and before the program knows how to use that fact. We have been busy with telling and using facts. Learning them is still in the future, although some isolated successes have, in fact, occurred. (See especially, Samuel (1959, 1967), Winston (1970), Fikes et al. (1972a), and Sussman (1973).]
What is the status of these core ideas today? There are two extreme views. I have heard John McCarthy say (perhaps only provocatively to students) that really intelligent programs are a long way off and that when we finally achieve them they will be based on ideas that aren't around yet. Their builders will look back at AI in 1974 as being a period of pre-history of the field.
Continuing our discussion of the likely future of AI, we note that the increasingly pragmatic attitude of those who have been sponsoring AI research will have a great effect on the course of this research. There may even be a temporary reduction of effort by AI researchers in the core topics and the first-level applications areas in favor of increased support of engineers and scientists building second-level applications. The results of these second-level efforts may, in fact, be rather spectacular. I have in mind such things as automated factories, automatic robots