
(Charts 4-9 arrange the following entries along a 1950-1970 timeline, grouped by topic.)

CHART 4: AI SYSTEMS AND LANGUAGES
GENERAL: Bobrow and Raphael (1973)
IPL: Newell and Shaw (1957); Newell and Tonge (1960); Newell (1961)
LISP: McCarthy (1960); Weissman (1967)
SIMULATION LANGUAGES: see, for example, Gordon (1969)
INTERLISP: Teitelman (1972a,b, 1973, 1974)
ALGOL, ASSOCIATION AND HASH-CODING SCHEMES
LEAP: Feldman and Rovner (1969)
SAIL: Feldman (1972)
PLANNER: Hewitt (1969, 1971, 1972)
QA4: Rulifson et al. (1971, 1972)
QLISP: Reboh and Sacerdoti (1973)
POP-2: Burstall et al. (1971)
POPLER: Davies (1971)
CONNIVER: Sussman and McDermott (1972); McDermott and Sussman (1972)
ACTORS: Hewitt et al. (1973)
REF-ARF: Fikes (1968, 1970)
STATE-SAVING STRUCTURES: Bobrow and Wegbreit (1973a,b)

CHART 5: GAME PLAYING
CHESS IDEAS, ALPHA-BETA PROCEDURE: Shannon (1950); Newell, Shaw and Simon (1958b); Samuel (1959); Edwards and Hart (1961); Slagle and Dixon (1969); Fuller, Gaschnig and Gillogly (1973)
BEGINNING CHESS PROGRAMS: Kotok (1962)
CHECKERS: Samuel (1959); Samuel (1967)
KALAH: Russell (1964); Slagle and Dixon (1969)
GO: Zobrist (1969); Ryder (1971)
POKER: Waterman (1970)

CHART 6: MATH, SCIENCE AND ENGINEERING AIDS
SAINT INTEGRATOR: Slagle (1961, 1963)
INTEGRATOR AND D. E. SOLVER (SIN, SOLDIER): Moses (1967)
SCRATCHPAD: Blair et al. (1970); Griesmer and Jenks (1971)
REDUCE: Hearn (1968, 1971)

CHART 7: AUTOMATIC THEOREM PROVING
HEURISTIC PROGRAMS FOR LOGIC: Newell and Simon (1956); Newell, Shaw and Simon (1960); McCarthy (1961)
GEOMETRY: Gelernter (1960); Gelernter et al. (1963)
PREDICATE CALCULUS COMPUTING PROCEDURES: Davis and Putnam (1960); Prawitz (1960)
RESOLUTION: Robinson (1965)
RESOLUTION-BASED PROVERS AND STRATEGIES: Wos et al. (1964, 1965); Slagle (1967); Andrews (1968); Green (1969b); Luckham (1969); Guard et al. (1969); Kowalski (1970); Loveland (1970); Allen and Luckham (1970); Yates et al. (1970); Luckham and Nilsson (1971); Kowalski and Kuehner (1971); Minker et al. (1972)
THEOREM-PROVING HEURISTICS: Bledsoe (1971); Ernst (1971); Bundy (1973)
HIGHER-ORDER SYSTEMS: Robinson (1969); Darlington (1971); Pietrzykowski and Jensen (1972); Huet (1973a,b)
NON-RESOLUTION: Nevins (1972); Bledsoe et al. (1972)
SPECIALIST PROGRAMS - Groups: Norton (1966); Sets: Bledsoe (1971); Analysis: Bledsoe et al. (1972); Topology: Bledsoe and Bruell (1973); Program correctness: see automatic programming
GENERAL: Nilsson (1971); Chang and Lee (1973)

CHART 8: AUTOMATIC PROGRAMMING (Including Verification)
EARLY PAPERS: Turing (1949); McCarthy (1962)
GENERAL: Balzer (1972); London (1970); London (1972); Manna and Waldinger (1971)
HEURISTIC COMPILER: Simon (1963, 1972)
EARLY PROGRAM GENERATION SYSTEMS: Green (1969b); Waldinger and Lee (1969)
RESOLUTION THEOREM PROVERS: see, for example, Green (1969b)
RECENT PROGRAM GENERATION SYSTEMS: Buchanan and Luckham (1974)
HACKER: Sussman (1973)
PROGRAM VERIFICATION: Floyd (1967); McCarthy and Painter (1967); Good and London (1968); Manna (1969)
PROGRAM VERIFYING SYSTEM: King (1969)
RECENT VERIFYING SYSTEMS: Waldinger and Levitt (1973); Elspas et al. (1973); Igarashi, London and Luckham (1973); Deutsch (1973)
PROVING LISP PROGRAMS: Boyer and Moore (1973)
ASSERTION GENERATION: Elspas (1972); Katz and Manna (1973); Wegbreit (1973)
PROGRAMMING AIDS: Teitelman (1969, 1972a,b, 1973)

CHART 9: ROBOTS
GENERAL: Rosen (1972); Fikes et al. (1972b)
MH-1: Ernst (1961)
STANFORD HAND SYSTEMS: Pingle et al. (1968); McCarthy et al. (1968); Scheinman (1969); Feldman et al. (1969); Feldman et al. (1971); Paul (1971, 1973)
MIT HAND SYSTEMS: many internal memos; for a review, see Winston (1972)
EDINBURGH FREDDY: Barrow and Crawford (1972); Ambler et al. (1973)
SRI SHAKEY SYSTEM: Raphael (1968); Nilsson (1969b); Coles (1970); Raphael et al. (1971); Hart et al. (1972)
HITAC: Ejiri et al. (1971, 1972)
ADVANCED INDUSTRIAL AUTOMATION: Rosen (1973); Nevins et al. (1973); Bolles and Paul (1973)

that these useful ideas seem so familiar in AI research today testifies to the success of this early work. But the very cleanness of puzzles allowed researchers to avoid facing what has turned out to be the key problem, namely dealing with knowledge: huge amounts of knowledge, diverse, cluttered, and interrelated.

Question-answering. As one step toward facing the problem of dealing with knowledge, several researchers concentrated on building inferential question-answering systems. (See, in particular, the references listed under SIR, QA2, and QA3 in Chart 1.) Such systems should be able to store a large number of facts and to respond to reasonable questions whose answers could be deduced from these facts. These systems required mechanisms for logical inference and led AI researchers into a romance with logic in general and with Robinson's resolution principle in particular. (See Chart 7.) This line of research clarified our concepts of applying inference techniques to common-sense knowledge and led to various useful schemes for associative retrieval of stored data. We also learned that for large question-answering systems the question of when to use inference methods was more important than the nature of the inference mechanism itself. Thus, we learned that we would need large amounts of secondary knowledge about how and when to use the primary knowledge of the domain.
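As a rough illustration of what "deducing an answer" means here, the sketch below stores a handful of facts and two if-then rules and answers queries by backward chaining. It is only a toy: the systems cited above (QA3, for example) worked with full first-order clauses and resolution, and every fact, rule, and predicate name in the sketch is invented for the example.

```python
# Toy inferential question answerer (illustrative sketch only; not the actual
# SIR/QA2/QA3 machinery, which used resolution over first-order clauses).
# Facts are flat tuples; variables are strings beginning with "?".
import itertools

FACTS = {
    ("isa", "canary", "bird"),
    ("isa", "bird", "animal"),
    ("has", "bird", "wings"),
}

RULES = [  # (premises, conclusion)
    ([("isa", "?x", "?y"), ("isa", "?y", "?z")], ("isa", "?x", "?z")),  # isa is transitive
    ([("isa", "?x", "?y"), ("has", "?y", "?p")], ("has", "?x", "?p")),  # properties are inherited
]

_fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, b):
    # Follow variable bindings until a constant or an unbound variable is reached.
    while is_var(t) and t in b:
        t = b[t]
    return t

def unify(t1, t2, b):
    """Unify two flat tuples under bindings b; return extended bindings or None."""
    b = dict(b)
    for a, c in zip(t1, t2):
        a, c = walk(a, b), walk(c, b)
        if a == c:
            continue
        if is_var(a):
            b[a] = c
        elif is_var(c):
            b[c] = a
        else:
            return None
    return b

def rename(rule):
    """Give a rule fresh variable names so separate uses do not clash."""
    premises, conclusion = rule
    suffix = "_%d" % next(_fresh)
    ren = lambda term: tuple(t + suffix if is_var(t) else t for t in term)
    return [ren(p) for p in premises], ren(conclusion)

def prove(goal, bindings=None, depth=3):
    """Backward chaining: yield bindings under which `goal` follows from FACTS and RULES."""
    b0 = {} if bindings is None else bindings
    for fact in FACTS:                       # direct retrieval
        b = unify(goal, fact, b0)
        if b is not None:
            yield b
    if depth == 0:
        return
    for rule in RULES:                       # deduction via rules
        premises, conclusion = rename(rule)
        b = unify(goal, conclusion, b0)
        if b is None:
            continue
        solutions = [b]
        for premise in premises:             # prove each premise in turn
            solutions = [b2 for b1 in solutions for b2 in prove(premise, b1, depth - 1)]
        yield from solutions

# "Does a canary have wings?" (deduced, not stored)
print(any(True for _ in prove(("has", "canary", "wings"))))             # -> True
# "What is a canary?" (enumerate answers)
print(sorted({walk("?w", b) for b in prove(("isa", "canary", "?w"))}))  # -> ['animal', 'bird']
```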

Common-sense reasoning. In 1958, McCarthy proposed an ADVICE-TAKER that would be able to accept knowledge and use it to deduce answers to questions and to figure out simple plans for courses of action. One might ask such a system, for example, how to get to Timbuktu (a favorite example of McCarthy's). If the system knew about airline schedules, airports, how to get to airports, and other common (but immensely diverse) knowledge, it might answer thus: (1) go to your travel agent and find out about flights to Timbuktu, (2) using this information, select a flight and make a reservation, (3) drive to the airport at the appropriate time, (4) park your car, and (5) get on the appropriate airplane. Each of these steps, of course, could be expanded in detail.

Problems of this sort are clearly not as clean as puzzles; they demand the use of large amounts of knowledge; yet they have in common with puzzles the feature of planning a course of action to accomplish a goal.

Robotics research (see Chart 9) has probably contributed the most to our knowledge of how to generate plans based on large amounts of common-sense knowledge. Researchers at MIT, using an arm in a domain of simple blocks (called the BLOCKS world), and at SRI, using a mobile robot in a domain of corridors and rooms, have developed various reasoning systems that can generate plans of action for a robot. Of these, we might mention in particular STRIPS, SHRDLU, and HACKER (see Chart 1).
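To make the flavor of such plan generation concrete, here is a minimal STRIPS-style sketch: each operator carries preconditions, a delete list, and an add list, and a blind breadth-first search strings operators together until the goal facts hold. This is not the actual STRIPS algorithm (which combined theorem proving with means-ends analysis), and the two block-stacking operators and fact names are invented for the illustration.

```python
# Minimal STRIPS-flavored planning sketch (illustrative only).
from collections import deque

# Each operator: (name, preconditions, facts deleted, facts added).
OPERATORS = [
    ("pickup(A)",  {"clear(A)", "ontable(A)", "handempty"},
                   {"ontable(A)", "handempty", "clear(A)"}, {"holding(A)"}),
    ("stack(A,B)", {"holding(A)", "clear(B)"},
                   {"holding(A)", "clear(B)"}, {"on(A,B)", "clear(A)", "handempty"}),
]

def plan(initial, goal, operators):
    """Breadth-first search in the space of world states (sets of facts)."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                     # every goal fact holds
            return steps
        for name, pre, dele, add in operators:
            if pre <= state:                  # operator applicable
                # Facts not named in the delete/add lists persist unchanged --
                # the assumption that sidesteps the frame problem discussed
                # later in the text.
                new_state = frozenset((state - dele) | add)
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

initial = {"ontable(A)", "ontable(B)", "clear(A)", "clear(B)", "handempty"}
goal = {"on(A,B)"}
print(plan(initial, goal, OPERATORS))   # -> ['pickup(A)', 'stack(A,B)']
```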

There has been a lot of useful internal controversy about how to build reasoning systems and about the best directions for research. For a while, there was hope in some quarters that some universal system (based, for example, like QA3 on Robinson's resolution principle) could be used for all of the tasks we have mentioned so far: puzzle-solving, question-answering, and common-sense reasoning.

First attempts to build such universal systems failed to incorporate the necessary domain-specific knowledge and techniques, and, as far as I know, there are at present no serious advocates of a simple universal system.

At the opposite extreme of this controversy, however, are the proponents of what I would call ad hocism. To them, following any systematic approach is anathema: each task should simply be programmed on its own, using whatever tricks might be needed. There is no doubt that this kind of opportunism is healthy for a growing field still in search of its general principles. Still, the following point must be made against rampant ad hocism: one part of developing a science is to discover those concepts that are important. We must try to produce intelligent behavior out of systems limited to various combinations of trial concepts. Our failures tell us where our present concepts are weak and give us hints about new ones that might be needed. If our trial concepts are always allowed the crutch of ad hocism, we do not learn enough about where they are weak.

Another controversy concerns how much knowledge we ought to give our reasoning programs. At one extreme are researchers who insist that the program should be given only some basic premises from which it must derive any intermediate knowledge it needs to arrive at an answer. At the other (and impossible) extreme, programs would be provided explicitly with answers to all problems. There are some who feel that derivation of answers will ultimately play such a large role in intelligent systems that we may as well concentrate now on derivation techniques. To force derivation, they tend to work with knowledge-impoverished systems.

The consensus just now emerging from this controversy is that, because of combinatorial problems, an intelligent system probably will be able to make only reasonably direct derivations at any stage. Thus, to deal with a large domain, such a system must begin with a large skeletal network of basic knowledge about the domain and knowledge about how to use its knowledge. Any excursion from the known (explicitly represented) knowledge into the unknown (derived) can thus be well-guided (i.e., practical) even though the "volume" of the unknown part itself can be extremely large. It is senseless to insist that, to answer a single question, an intelligent system must repeat the tedious trial-and-error evolution of a large part of our cultural and scientific knowledge, to say nothing of possibly having to repeat much of biological evolution itself. Even the "let's derive all" school would agree. What members of this school and some others did not realize was just how much knowledge would finally be needed by intelligent systems. Given this realization, the only possible course is to build "knowledge-based" programs. (Minsky (1974) guesses that a knowledge-based system reasoning about visual images, such as might be possessed by a typical human, "might need a few millions, but not billions, of structural units, interconnections, pointers.")

2.2.2 Modeling and representation of knowledge (Chart 2)

Our ideas about how to represent knowledge have come from several of the application areas. (Quite obviously, every AI program uses some representational scheme; we cite in Chart 2 just a few of the important contributions.) Researchers in machine vision and perception and in natural language understanding were perhaps the first to realize how much knowledge would be needed by high-performance programs. These two application areas have thus probably contributed the most to our repertoire of representational techniques.

The systems mentioned in Chart 2 cover some of the major suggestions. For example:

Green (1969a,b,c): Statements in the first-order predicate calculus.

Quillian (1968): Concept nodes in a graph structure linked by various relationships (a rough sketch follows this list).

Schank et al. (1972): Canonical concept structures having "slots" for case information.

Hewitt (1969, 1971) and Winograd (1971): Pattern-invoked procedures plus assertions.

Rulifson et al. (1971): Pattern-invoked procedures plus special list structures such as n-tuples, bags, and sets with property lists, all organized in a discrimination net.

Newell (1967): Sets of productions organized as Markov tables.

Minsky (1974): Hierarchically organized structures called "frame systems." These have "free variables" (analogous to Schank's slots) that can be matched against constants occurring in the data to be analyzed.
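As a rough, modern-terms sketch of two of the ideas above (Quillian's linked concept nodes and Minsky's frames with slots and defaults), and not the authors' actual data structures, consider the following; the concepts, slot names, and example values are all invented for the illustration.

```python
# Quillian-style semantic network: concept nodes linked by labeled relations.
semantic_net = {
    "canary": {"isa": ["bird"], "color": ["yellow"]},
    "bird":   {"isa": ["animal"], "has-part": ["wings"]},
}

# Minsky-style frame: a structured unit with named slots, some holding
# defaults that can be overridden when the frame is matched against data.
room_frame = {
    "frame": "room",
    "slots": {
        "walls":    {"default": 4},
        "ceiling":  {"default": "present"},
        "contents": {"default": None},   # expected to be filled by observation
    },
}

def fill(frame, observations):
    """Instantiate a frame: observed values override slot defaults."""
    return {name: observations.get(name, slot["default"])
            for name, slot in frame["slots"].items()}

print(fill(room_frame, {"contents": ["table", "chair"]}))
# -> {'walls': 4, 'ceiling': 'present', 'contents': ['table', 'chair']}
```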

For a period there was some controversy over whether knowledge should be represented assertionally or procedurally. (As an extreme case, a spiral, say, can be represented assertionally by a list of the points in the plane through which it passes, or it can be represented procedurally by a program that draws it.) Something of a cult was made of the "procedural embedding" of knowledge, but this controversy now seems to be settling down to an acceptance of the value of a combination of assertional and procedural knowledge.
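The spiral example can be made concrete in a few lines. Both representations below answer "where is the spiral at parameter t?", one by lookup and one by computation; the particular Archimedean spiral is an arbitrary choice for the sketch.

```python
import math

# Assertional: the spiral is an explicit list of points it passes through.
spiral_points = [(0.1 * t * math.cos(t), 0.1 * t * math.sin(t))
                 for t in [i * 0.1 for i in range(200)]]

# Procedural: the spiral is a program that generates (or draws) it on demand.
def spiral(t):
    """Return the point reached after parameter t along an Archimedean spiral."""
    r = 0.1 * t
    return (r * math.cos(t), r * math.sin(t))

# Both answer "where is the spiral at t = 5?", but by very different means:
print(spiral_points[50])   # look the answer up
print(spiral(5.0))         # compute it
```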

Another concern, having antecedents in logic, is how to represent certain "modal" concepts involving time, necessity, possibility, and so forth. McCarthy and Hayes (1969) have analyzed some of the difficulties in formalizing these concepts; meanwhile, Hendrix (1973) and Bruce (1972) have developed systems that begin to deal with some of them.

McCarthy and Hayes (1969) also discuss two fundamental problems concerning representation and reasoning. One is called the frame problem, and it concerns certain difficulties of model maintenance. If we have a representation of the world at a certain instant (based on observations and a priori knowledge), how should we represent and use "laws of physics" to update the model so that it represents the world (reasonably accurately) at some future instant? If a robot removes a book from a shelf, can we assume that a door across the room remains open without having to derive this fact or observe it again? There are several ways of dealing with this problem, e.g., Green (1969), Fikes and Nilsson (1971), Sandewall (1972), and Hewitt (1969). These are nicely discussed by Hayes (1973).

The second problem, called the qualification problem, arises once a plan has been derived and proved adequate for a goal: if we have a plan, say, to drive to the airport (to achieve the goal of being at the airport), how are we to deal with certain difficulties arising when new information is received prior to executing the plan? Suppose, for example, someone tells us that our automobile is out of gasoline, so that now our plan (that called for driving to the airport) will not work. We had proved that it would, and now new information has rendered the proof invalid even though all of the information on which the original proof was based is still present. Hayes (1973) discusses this violation of the "extension property" and shows the close connection between the qualification problem and the frame problem. System builders (e.g., Hewitt (1969) and Rulifson et al. (1972)) have invented certain constructs that apparently get around these difficulties, although in a way that is somewhat unsatisfactory to logicians.

We are still quite a way, it seems, from having a sound theoretical basis for knowledge representation. It is my view that the necessity of developing large and complex reasoning systems will produce the new concepts out of which the needed theories will be constructed.

2.2.3 Heuristic search (Chart 3)

One of the first results of early Al research was the development of a point of view toward problem-solving sometimes called "the heuristic search paradigm." There are two closely related versions of this paradigm. In one, a "problem" is transformed into the canonical problem of finding a path through a "space" of problem states from the initial state to a goal (i.e., solution) state. In the other, a problem is "reduced" to various subproblems that are also reduced in turn (and so on) until the ultimately resulting subproblems have trivial or known solutions. Each version is merely a slightly different way of thinking about basically the same problem-solving process. In each, the process involves generating alternative paths toward solutions, setting up certain key milestone states (or subproblems), and managing search resources wisely to find acceptable solutions.

The word "heuristic" is used because these techniques emphasize the use of special knowledge from the problem domain that "aids in discovering a solution" by drastically reducing the amount of search that would otherwise have to be employed. Often this knowledge takes the form of "rules of thumb" that help to limit or direct the search. Sometimes there are constraining relations that can be employed to limit the search needed. A good example of the use of constraints is the work of Waltz (1972).
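A small concrete instance of the paradigm: the sketch below searches the space of 8-puzzle states, ordering the search by moves-so-far plus the rule-of-thumb estimate "number of misplaced tiles" (an A*-style formulation). The puzzle and the heuristic are chosen only for illustration, not drawn from the systems surveyed here.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # 0 marks the blank

def misplaced(state):
    """Heuristic rule of thumb: how many tiles are out of place (ignoring the blank)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def successors(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def heuristic_search(start):
    """Best-first search ordered by f = (moves so far) + (heuristic estimate)."""
    frontier = [(misplaced(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path                       # sequence of states from start to goal
        for nxt in successors(state):
            if nxt not in best_cost or g + 1 < best_cost[nxt]:
                best_cost[nxt] = g + 1
                heapq.heappush(frontier,
                               (g + 1 + misplaced(nxt), g + 1, nxt, path + [nxt]))
    return None

start = (1, 2, 3, 4, 0, 6, 7, 5, 8)           # two moves from the goal
print(len(heuristic_search(start)) - 1)       # -> 2
```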
