
whether the model is sufficient to account for the phenomena
we are investigating. Concerned as they are with the
microstructure of behavior, information-processing
psychologists often prefer to work with extensive sequential
data from individual subjects. (W. Reitman, 1969, p. 246.)

There is yet one more twist -- a radical one, but not totally implausible. One can view artificial intelligence as sufficient within itself for the entire task of understanding the nature of human intelligence. Thus, the behavioral data now being gathered and analyzed in psychological laboratories are taken to be irrelevant. With our long-standing involvement in an empiricist view of science, this may seem like nonsense. But consider that the constraints on intelligent behavior in our world may be such that there exists, in essence, only one type of system that can accomplish it. Then we might be able to discover that system by direct analysis, knowing only the nature of the world (the organism's task environment) and the general kinds of performances of which it is capable. The plausibility of this can be enhanced considerably if two conditions are added. First, the basic system itself must have arisen by evolution. Second, the system must be able to develop from a basic system (capabilities unknown, but fundamentally simple) to one with full intelligence. There are few who subscribe to this viewpoint totally. However, a hint can be found in the following quotation:

Nor is it true that psychologists take the experimental
evidence into account but that others [engineers working
on pattern recognition] do not, for it is not clear that
much really firm evidence has been collected, except for
a few scattered findings, chiefly from neurophysiology.
As horrifying as it may sound to some, the chief sources
of specification of a model for pattern recognition are
intuition and introspection, and in this we all draw upon
our own resources as human beings. Since these are two
functions that have made twentieth century psychology
especially uneasy, there is no reason to think that
psychologists are terribly adept at them. (L. Uhr,
1966, p. 291.)

I have laid out this array of viewpoints to locate myself and the nature of my comments. I wish to focus on the strong end, namely, on artificial intelligence as theoretical psychology. (I do not, however, go to the last stage.) Thus, I am much concerned with the use of artificial intelligence systems as theories for detailed and explicit bodies of data on human cognitive behavior.

The literature that talks about simulation of cognitive processes speaks mostly from views down toward the weak end, as I have tried to indicate with the quotations. While I think that artificial intelligence can be relevant to psychology in all of these ways, I have always felt that quoting them smacked a bit of damning with faint praise. If it is not possible to do the real job -- i.e., to be theory in the full sense -- then one must settle for the advantages that do exist. (To be fair to those who have espoused these various advantages, including myself, clarity about the role of a new development is achieved only slowly.)




The second preliminary is to fix what I mean by artificial intelligence for the purpose of this paper. As shown in Figure 2, there is a very large encompassing domain labeled variously cybernetic systems, information processing systems, control systems, etc. -- this entire familiar interrelated scientific and technological domain that has arisen since World War II. One major subdomain is that of symbolic systems, which is pretty much coterminous with the systems of interest to computer science. Symbolic systems are to be distinguished from discrete systems, as the control theorist uses that term, in having symbols that have referential structure. Programming and linguistic systems would be another set of names for the same area.

Psychology itself has a nice example. One often hears that a good theory is one that leads to good new experiments. While true, this virtue often has to serve in the absence of more substantial advantages, such as predictive and explanatory power.

[Figure 2]

Within symbolic systems there is a subdomain called heuristic programming, e.g., programs for problem solving, theorem proving, game playing, induction, etc. This is part of artificial intelligence, as the term is commonly used. There are also other parts of artificial intelligence, such as pattern recognition. Some pattern recognition systems are symbolic, e.g., the work of Uhr (1961). But other pattern recognition systems are discrete, though not symbolic (e.g., neural nets), and some are not even discrete (e.g., holographic systems).

With Figure 2 as background, then, when I refer to artificial intelligence I will mean heuristic programming -- that is, symbolic systems for performing intellectual functions. I will exclude such areas as pattern recognition -- not because they are any less important, but because they are a different story for a different time.

More important, I wish to broaden my concern from artificial intelligence to the whole of symbolic systems. For the right question to ask is not about the relation of psychology to artificial intelligence systems, but about the relation of psychology to symbolic systems. In fact, this larger view already has a name -- it is called information processing psychology. It is to be distinguished from the flurry within psychology some years ago on the use of information theory, as developed by Shannon (e.g., see Attneave, 1959). Information processing psychology is concerned essentially with whether a successful theory of human behavior can be found within the domain of symbolic systems.

The reason for the expansion is clear if you view the matter from psychology's vantage point, which wants to construct theories to describe and explain human behavior. Symbolic systems provide a possible class of systems within which such theories might be formed. Some of the behaviors of interest are primarily problem solving -- e.g., a man playing a game of chess. But much behavior of interest is not intellectually demanding -- e.g., learning new information, interpreting a command in natural language, retrieving a relevant fact. Yet these tasks are also susceptible to an analysis in terms of symbolic systems and information processing. Thus, artificial intelligence covers only a part of the relevant systems.


I am insisting on the importance of the general type of system used to form specific theories of human behavior -- in our case, symbolic systems. It is, then, worthwhile to note that psychology has searched for its theories mostly in terms of classes of systems other than symbolic systems. Behaviorism is in general coupled with a view of systems of stimulus and response associations. Gestalt psychology is coupled with a view of continuous fields which reorganize themselves. Psychoanalytic theory is framed in terms of energy constructs, with conservation laws a major organizing feature. All of these views are quite distinct from symbolic systems, and the three of them account for a large fraction of psychological theory.


This emphasis on the substantive content of information processing models is in sharp contradistinction to the neutrality of computer simulation per se. This latter has been emphasized by many people. It can be seen in the earlier quote of Uhr in connection with operationality. Here is another:

I should like to conclude with this final comment: My
insistence that a theoretical formulation be rendered in
such a manner that it could be converted into a computer
program does not in itself predispose us toward any
particular type of theory.... The model resides wholly in
the program supplied to the computer and not at all in the
hardware of the computer itself. For this reason any
model can be programmed provided only that it is
sufficiently explicit. (Shepard, 1963, p. 67.)

My own insistence does not conflict with the above statement. Rather, it reflects an additional product of the growth of computer science, namely, that of a theoretical model of symbolic behavior. After the fact, one can see that such a theory might have emerged within psychology (or linguistics) without the advent of the computer. In historical fact, the theory emerged by trying to program the computer to do non-numerical tasks and by trying to construct abstract theories of computation and logic.

With this background, let me now make a series of points.
