
[CHART 4: AI SYSTEMS AND LANGUAGES]

[CHART 6: MATH, SCIENCE AND ENGINEERING AIDS]

[CHART 11: NATURAL LANGUAGE SYSTEMS]

[CHART 12: INFORMATION PROCESSING PSYCHOLOGY]

That these useful ideas seem so familiar in AI research today testifies to the success of this early work. But the very cleanness of puzzles allowed researchers to avoid facing what has turned out to be the key problem, namely dealing with knowledge: huge amounts of knowledge, diverse, cluttered, and interrelated.


Question-answering. As one step toward facing the problem of dealing with knowledge, several researchers concentrated on building inferential question-answering systems. (See, in particular, the references listed under SIR, QA2, and QA3 in Chart 1.) Such systems should be able to store a large number of facts and should be able to respond to reasonable questions whose answers could be deduced from these facts. These systems required mechanisms for logical inference and led AI researchers into a romance with logic in general and with Robinson's resolution principle in particular. (See Chart 7.) This line of research clarified our concepts of applying inference techniques to common-sense knowledge and led to various useful schemes for associative retrieval of stored data. We also learned that, for large question-answering systems, the question of when to use inference methods was more important than the nature of the inference mechanism itself. Thus, we learned that we would need large amounts of secondary knowledge about how and when to use the primary knowledge of the domain.
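
To suggest the flavor of such inferential question answering, here is a minimal sketch (purely illustrative; the facts and the single relation used are invented and are not drawn from SIR, QA2, or QA3). Stored facts are ground assertions, and one transitivity rule lets the program deduce answers that were never stored explicitly:

    facts = {
        ("finger", "part-of", "hand"),
        ("hand", "part-of", "arm"),
        ("arm", "part-of", "body"),
    }

    def derive(facts):
        # Compute the transitive closure of the "part-of" relation.
        closure = set(facts)
        while True:
            new = {(a, "part-of", c)
                   for (a, r1, b) in closure if r1 == "part-of"
                   for (b2, r2, c) in closure if r2 == "part-of" and b2 == b}
            if new <= closure:
                return closure
            closure |= new

    def answer(question):
        # A question is answered "yes" if it is stored or can be deduced.
        return question in derive(facts)

    print(answer(("finger", "part-of", "body")))   # True, although never stored explicitly
    print(answer(("body", "part-of", "finger")))   # False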

There has been a lot of useful internal controversy about how to build reasoning systems and about the best directions for research. For a while, there was hope in some quarters that some universal system (based, for example, like QA3 on Robinson's resolution principle) could be used for all of the tasks we have mentioned so far: puzzle-solving, question-answering, and common-sense reasoning. First attempts to build such universal systems failed to incorporate the necessary domain-specific knowledge and techniques and, as far as I know, there are at present no serious advocates of a simple universal system.

At the opposite extreme of this controversy, however, are the proponents of what I would call ad hocism. To them, following any systematic approach is anathema: each task should simply be programmed on its own, using whatever tricks might be needed. There is no doubt that this kind of opportunism is healthy for a growing field still in search of its general principles. Still, the following point must be made against rampant ad hocism: one part of developing a science is to discover those concepts that are important. We must try to produce intelligent behavior out of systems limited to various combinations of trial concepts. Our failures tell us where our present concepts are weak and give us hints about new ones that might be needed. If our trial concepts are always allowed the crutch of ad hocism, we do not learn enough about where the concepts are weak.

Another controversy concerns how much knowledge we ought to give our reasoning programs. At one extreme are researchers who insist that the program should be given only some basic premises from which it must derive any intermediate knowledge it needs to arrive at an answer. At the other (and impossible) extreme, programs would be provided explicitly with answers to all problems. Some feel that derivation of answers ultimately will play such a large role in intelligent systems that we may as well concentrate now on derivation techniques. To force derivation, they tend to work with knowledge-impoverished systems.

Common-sense reasoning. In 1958, McCarthy proposed an ADVICE-TAKER that would be able to accept knowledge and use it to deduce answers to questions and to figure out simple plans for courses of action. One might ask such a system, for example, how to get to Timbuktu (a favorite example of McCarthy's). If the system knew about airline schedules, airports, how to get to airports, and other common (but immensely diverse) knowledge, it might answer thus: (1) go to your travel agent and find out about flights to Timbuktu, (2) using this information, select a flight and make a reservation, (3) drive to the airport at the appropriate time, (4) park your car, and (5) get on the appropriate airplane. Each of these steps, of course, could be expanded in detail.

Problems of this sort are clearly not as clean as puzzles; they demand the use of large amounts of knowledge. Yet they have in common with puzzles the feature of planning a course of action to accomplish a goal.

The consensus just now emerging from this controversy is that, because of combinatorial problems, an intelligent system probably will be able to make only reasonably direct derivations at any stage. Thus, to deal with a large domain, such a system must begin with a large skeletal network of basic knowledge about the domain and knowledge about how to use its knowledge. Any excursion from the known (explicitly represented) knowledge into the unknown (derived) can thus be well-guided (i.e., practical) even though the "volume" of the unknown part itself can be extremely large. It is senseless to insist that, to answer a single question, an intelligent system must repeat the tedious trial-and-error evolution of a large part of our cultural and scientific knowledge, to say nothing of possibly having to repeat much of biological evolution itself. Even the "let's derive all" school would agree. What members of this school and some others did not realize was just how much knowledge would finally be needed by intelligent systems. Given this realization, the only possible course is to build "knowledge-based" programs.

Robotics research (see Chart 9) has probably contributed the most to our knowledge of how to generate plans based on large amounts of common-sense knowledge. Researchers at MIT, using an arm in a domain of simple blocks (called the BLOCKS world), and at SRI, using a mobile robot in a domain of corridors and rooms, have developed various reasoning systems that can generate plans of action for a robot. Of these, we might mention in particular STRIPS, SHRDLU, and HACKER (see Chart 1).

2.2.2 Modeling and representation of knowledge

(Chart 2)


Our ideas about how to represent knowledge have come from several of the applications areas. (Quite obviously, every AI program uses some representational scheme. We cite in Chart 2 just a few of the important contributions.) Researchers in machine vision and perception and in natural language understanding were perhaps the first to realize how much knowledge would be needed by high-performance programs. Minsky (1974), for instance, guesses that a knowledge-based system reasoning about visual images (a system such as might be possessed by a typical human) "might need a few millions, but not billions, of structural units, interconnections, pointers." These two applications areas have thus probably contributed the most to our repertoire of representational techniques.


The systems mentioned in Chart 2 cover some of the major suggestions. For example:

Green (1969a,b,c): statements in the first-order predicate calculus.
Quillian (1968): concept nodes in a graph structure linked by various relationships.
Schank et al. (1972): canonical concept structures having "slots" for case information.
Hewitt (1969, 1971) and Winograd (1971): pattern-invoked procedures plus assertions.
Rulifson et al. (1971): pattern-invoked procedures plus special list structures such as n-tuples, bags, and sets with property lists, all organized in a discrimination net.
Newell (1967): sets of productions organized as Markov tables.
Minsky (1974): hierarchically organized structures called "frame systems." These have "free variables" (analogous to Schank's slots) that can be matched against constants occurring in the data to be analyzed.
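
As one concrete illustration of the graph-structured style, here is a small sketch in the spirit of Quillian-like concept nodes linked by relationships; the node names, link labels, and the simple inheritance rule are invented for the example and are not taken from any of the cited systems:

    network = {
        "canary": {"isa": "bird", "color": "yellow"},
        "bird":   {"isa": "animal", "can": "fly"},
        "animal": {"can": "breathe"},
    }

    def lookup(concept, attribute):
        # Follow "isa" links upward until the attribute is found (property inheritance).
        while concept is not None:
            properties = network.get(concept, {})
            if attribute in properties:
                return properties[attribute]
            concept = properties.get("isa")
        return None

    print(lookup("canary", "color"))   # 'yellow' (stored at the node itself)
    print(lookup("canary", "can"))     # 'fly'    (inherited from the 'bird' node)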

We are still quite a way, it seems, from having a sound theoretical basis for knowledge representation. It is my view that the necessity of developing large and complex reasoning systems will produce the new concepts out of which the needed theories will be constructed.

2.2.3 Heuristic search (Chart 3)

For a period there was some controversy over whether knowledge should be represented assertionally or procedurally. (As an extreme case, a spiral, say, can be represented assertionally by a list of the points in the plane through which it passes, or it can be represented procedurally by a program that draws it.) Something of a cult was made of the "procedural embedding" of knowledge, but this controversy now seems to be settling into an acceptance of the value of combining assertional and procedural knowledge.
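
The spiral example can be made concrete with a short sketch (illustrative only; the particular spiral and its sampling are arbitrary). The assertional form is an explicit list of points; the procedural form is a program that generates them on demand:

    import math

    def spiral_point(t):
        # A point on the Archimedean spiral r = t.
        return (t * math.cos(t), t * math.sin(t))

    # Assertional representation: a list of points through which the spiral passes.
    spiral_assertions = [spiral_point(i * 0.1) for i in range(100)]

    # Procedural representation: a program that draws (generates) the spiral.
    def draw_spiral(n_points=100, step=0.1):
        for i in range(n_points):
            yield spiral_point(i * step)

    # Both carry the same information about the same points;
    # they differ in how that information is packaged and used.
    assert list(draw_spiral()) == spiral_assertions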

One of the first results of early AI research was the development of a point of view toward problem solving sometimes called "the heuristic search paradigm." There are two closely related versions of this paradigm. In one, a "problem" is transformed into the canonical problem of finding a path through a "space" of problem states from the initial state to a goal (i.e., solution) state. In the other, a problem is "reduced" to various subproblems that are also reduced in turn (and so on) until the ultimately resulting subproblems have trivial or known solutions. Each version is merely a slightly different way of thinking about basically the same problem-solving process. In each, the process involves generating alternative paths toward solutions, setting up certain key milestone states (or subproblems), and managing search resources wisely to find acceptable solutions.
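
A minimal sketch of the state-space version follows (illustrative only; the toy problem of reaching a target integer from 0 by +1 or doubling moves, and the simple distance heuristic, are invented for the example):

    import heapq

    def heuristic(state, goal):
        # Rule-of-thumb estimate of the remaining effort; it only guides the search.
        return abs(goal - state)

    def successors(state):
        return [state + 1, state * 2]

    def heuristic_search(start, goal):
        # Best-first search: expand the state with the lowest cost-so-far plus estimate.
        frontier = [(heuristic(start, goal), 0, start, [start])]
        expanded = set()
        while frontier:
            _, cost, state, path = heapq.heappop(frontier)
            if state == goal:
                return path
            if state in expanded:
                continue
            expanded.add(state)
            for nxt in successors(state):
                if nxt not in expanded and nxt <= 2 * goal:   # prune hopeless states
                    estimate = cost + 1 + heuristic(nxt, goal)
                    heapq.heappush(frontier, (estimate, cost + 1, nxt, path + [nxt]))
        return None

    print(heuristic_search(0, 11))   # one short sequence of moves from 0 to 11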


Another concern, having antecedents in logic, is how to represent certain "modal" concepts involving time, necessity, possibility, and so forth. McCarthy & Hayes (1969) have analyzed some of the difficulties in formalizing these concepts; meanwhile, Hendrix (1973) and Bruce (1972) have developed systems that begin to deal with some of them.

The word "heuristic" is used because these techniques emphasize the use of special knowledge from the problem domain that "aids in discovering a solution" by drastically reducing the amount of search that would otherwise have to be employed. Often this knowledge takes the form of "rules of thumb" that help to limit or direct the search. Sometimes there are constraining relations that can be employed to limit the search needed. (A good example of the use of constraints is the work of Waltz (1972).)

McCarthy and Hayes (1969) also discuss two fundamental problems concerning representation and reasoning. One is called the frame problem, and it concerns certain difficulties of model maintenance. If we have a representation of the world at a certain instant (based on observations and a priori knowledge), how should we represent and use "laws of physics" to update the model so that it represents the world (reasonably accurately) at some future instant? If a robot removes a book from a shelf, can we assume that a door across the room remains open without having to derive this fact or observe it again? There are several ways of dealing with this problem, e.g., Green (1969), Fikes and Nilsson (1971), Sandewall (1972), and Hewitt (1969); these are nicely discussed by Hayes (1973).
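
One influential way of sidestepping the frame problem, roughly in the spirit of Fikes and Nilsson (1971), is to describe each action by an add list and a delete list, with all facts not mentioned in either list assumed to persist. The following sketch is illustrative only; the particular facts and operator are invented for the example:

    # World model: a set of facts believed true at the current instant.
    state = {
        "robot-at shelf",
        "book-on shelf",
        "door-open",          # the door across the room
    }

    # An operator described in the add-list/delete-list style.
    take_book = {
        "preconditions": {"robot-at shelf", "book-on shelf"},
        "delete": {"book-on shelf"},
        "add": {"robot-holds book"},
    }

    def apply_operator(state, op):
        # Check preconditions, remove deleted facts, add new ones;
        # every other fact (e.g., "door-open") carries over unchanged.
        if not op["preconditions"] <= state:
            raise ValueError("operator not applicable in this state")
        return (state - op["delete"]) | op["add"]

    new_state = apply_operator(state, take_book)
    print("door-open" in new_state)   # True: no need to re-derive or re-observe it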

I have already referred to some of the heuristic search paradigm ideas (subgoals, reasoning backwards, and so on) as being basic to common-sense reasoning, deduction, and problem solving (Chart 1). Here (in Chart 3), we want to cite mainly those aspects of heuristic search dealing with the search process itself. Once a problem is represented as a search problem, how can a solution be found efficiently?

The searching occurs in one of two graph structures: ordinary graphs (or trees) and AND-OR graphs (or trees), depending on whether the problem is viewed as one of finding a path to a goal state or one of reducing problems to subproblems, respectively. The search techniques that have been developed (by workers in AI, control theory, and operations

Another problem is the qualification problem. If a system uses its representation to "prove," say, that a certain plan will achieve a desired goal (the goal of being at the airport), how are we to deal with certain difficulties arising when new information is received prior to executing the plan? Suppose, for example, someone tells us that our automobile is out of gasoline, so that now our plan (that called for driving to the airport) will not work. We had proved that it would, and now new information has rendered the proof invalid even though all of the information on which the original proof was based is still present. Hayes (1973) discusses this violation of the "extension property" and shows the close connection between the qualification problem and the frame problem. System builders (e.g., Hewitt (1969) and Rulifson et al. (1972)) have invented certain constructs that apparently get around these difficulties, although in a way that is somewhat unsatisfactory to logicians.
