a variant is called means-ends analysis. The current element x is compared with the desired element x_d, and the resulting difference is used to select the operator; that is, the operator (the means) is selected with a view toward the end. The flow diagram for this variant is given in the figure below the basic heuristic search. The situation described in Fig. 10.8 is overly simple in several respects. Initially, a set of elements may be given, rather than just one, with the search able to start from any of them. Likewise, finding any one of a set of elements may be desired, rather than just a single one. This final element (or set) can be given by a test instead of by an element, although this has consequences for variants such as means-ends analysis. More important, the operators need not involve only a single input and a single output. In logic, for instance, important rules of inference, such as modus ponens, take two inputs and deliver a single output: from a and a ⊃ b infer b. Thus an operator can have multiple inputs. Similarly, if one is working backwards in logic (that is, from the conclusion to premises that make the conclusion follow), modus ponens shows up as an operator that has a single input but a set of outputs: to get b, prove a and a ⊃ b. Furthermore, all of the outputs must be obtained; thus independent subproblems radiate from each element of the output set, and every one must be solved in order to solve the problem. Figure 10.9 shows one of the early theorem provers, LT (the Logic Theorist), which worked on elementary symbolic logic [15]. It is a heuristic search, using a breadth-first strategy. However, it also uses generate-and-test and match, so that, like the warehouse location program, it has a composite structure. Comparison of LT with the basic heuristic search method will show that there are two unexpected twists in formulating the task of theorem proving for heuristic search.
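Before turning to those twists, the means-ends variant described at the start of this section can be sketched in code. This is a minimal sketch, not the original formulation: the state representation, the difference labels, and all function names (means_ends, difference_of, operator_for) are hypothetical, chosen only to show how the difference between the current and desired elements selects the operator.

```python
# Minimal sketch of means-ends analysis (hypothetical domain and names):
# the difference between the current element x and the desired element
# x_d selects the operator, i.e. the means is chosen toward the end.

def means_ends(x, x_goal, difference_of, operator_for, limit=100):
    """Repeatedly apply the operator selected by the current difference."""
    for _ in range(limit):
        d = difference_of(x, x_goal)
        if d is None:            # no difference: the desired element is reached
            return x
        op = operator_for(d)     # operator (means) selected by the difference
        if op is None:
            return None          # no operator reduces this difference
        x = op(x)
    return None                  # gave up within the step limit

# Toy domain: elements are integers; differences are 'low' / 'high'.
def diff(x, g):
    if x == g:
        return None
    return 'low' if x < g else 'high'

table = {'low': lambda x: x + 3, 'high': lambda x: x - 1}

print(means_ends(0, 7, diff, table.get))   # reaches the desired element 7
```

The difference-to-operator table is the heart of the variant: unlike blind search, each step is chosen with a view toward the remaining difference.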
First, the problem has to be turned around, so that LT works backward from the original goal toward the given theorems. Thus the rules of inference must be expressed in an inverse sense. Second, the assumed theorems enter into the task both in the generation of operations and in the test. This actually reflects a restriction on the generality of LT, since it insists that one of the two expressions in the backward rules of inference be tied immediately to a theorem, rather than being a subproblem which need only make contact eventually. The only elaborations of LT over the basic heuristic search procedure are the insertion of a test for similarity between t and x before trying to apply the operator, and the rejection of duplicate elements, which requires keeping a list of the elements already tried. The test for solution is elaborated in the minimal generate-and-test way to take into account that any of the set of theorems will do. Although we have not shown the internal structure of the match, it does use definitions as well as substitutions to make a theorem and an expression the same.

[Figure 10.9. LT: Logic Theorist. Given: the rules of inference of propositional logic; a set of theorems, {t}, assumed valid; a theorem, t0. Elements: logic expressions, {x}. The rules of inference are expressed working backwards, e.g., ChF (chaining forward): (t: a ⊃ b, x: a ⊃ c) → b ⊃ c; ChB (chaining backward): (t: a ⊃ b, x: d ⊃ b) → d ⊃ a. Match: operations are substitution and the definition a ⊃ b ≡ ~a ∨ b.]

3.5. Induction

Figure 10.10 shows a task that clearly involves inductive reasoning: you are to determine which of the figures 1-5 bears the same relationship to figure C as figure B does to figure A [4]. Similar problems exist in extrapolating series [22]: for example, what is the blank in abcbcdcd_?
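Returning to LT for a moment, its backward-working search can also be sketched in code. This is a minimal sketch under strong simplifying assumptions: expressions are reduced to atoms and implication triples, only a detachment-like backward operator is shown, and the names (lt_backward, thms) are hypothetical; the real LT worked on full propositional expressions with substitution and matching.

```python
from collections import deque

# Minimal sketch of LT-style backward search (hypothetical representation):
# a theorem is either an atom (string) or an implication ('imp', a, b).
# To prove goal g backwards: for any assumed theorem a ⊃ g, it suffices
# to prove a. Search is breadth-first, rejecting duplicate subproblems.

def lt_backward(goal, theorems, limit=1000):
    """Return the chain of subproblems from goal to an assumed theorem."""
    known = set(theorems)
    frontier = deque([(goal, [goal])])
    tried = {goal}                    # list of elements already tried
    while frontier and limit > 0:
        limit -= 1
        x, path = frontier.popleft()  # breadth-first order
        if x in known:                # test: x is itself an assumed theorem
            return path
        for t in theorems:            # generate subproblems backwards
            if isinstance(t, tuple) and t[0] == 'imp' and t[2] == x:
                sub = t[1]
                if sub not in tried:  # reject duplicates
                    tried.add(sub)
                    frontier.append((sub, path + [sub]))
    return None

thms = ['p', ('imp', 'p', 'q'), ('imp', 'q', 'r')]
print(lt_backward('r', thms))   # proves r via q via p
```

Note how the assumed theorems enter twice, just as in the text: once in generating subproblems (the implications) and once in the test for solution (membership in the theorem set).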
Another similar task is to discover a concept, given a sequence of items, some of which exemplify the concept whereas others do not [8]: for example, if xoxox, xoxxo, oxoxo are positive instances and wooxo, xxoox, xoooo are negative instances, what is oxxxo? Computer programs have been constructed for these tasks. They show a certain diversity due to the gross shape of the task; that is, the task of Fig. 10.10 gives one instance of the concept (A:B) and five additional possible ones {C:1, C:2, ..., C:5}, whereas the series provides a long sequence of exemplars if one assumes that each letter can be predicted from its predecessors. However, most of the programs use a single method, adapted to the particular top-level task structure. Figure 10.11 gives the method, although somewhat more sketchily than for the others. The first essential feature of the method is revealed in the problem statement, which requires the problem to be cast as one of finding a function or mapping from the given data to the associated (or predicted) data. The space of functions is never provided by the problem poser; certainly not in the three examples just presented. Often it is not even clear what the range and domain of the function should be. For the series extrapolation task, to view {x:y} as {a:b, ab:c, abc:b, ..., abcbcdcd:-} is already problem solving. Thus the key inductive step is the assumption of some space of functions. Once this is done the problem reduces to finding in this space one function (or perhaps the simplest one) that fits the exemplars. The second essential feature of the method is the use of a form or kernel for the function. This can be matched (in the sense of the match method) against the exemplars. Evidence in the items then operates directly to specify the actual function from the kernel. Implicit in the procedure in Fig. 10.11 is that, inside the match, generation on the kernel (refer back to Fig.
10.5) produces, not the parts of the kernel itself, but the predictions of the y associated with the presented x. However, parts of the kernel expression must show through in these predictions, so that the modification operations of the match can specify or modify them in the light of differences. When the kernel expression actually has variables in it, the prediction from the kernel is sometimes a variable. Its value can be made equal to what it should be from the given x:y, and thus the kernel expression itself specified. Often the modification operations are like linguistic replacement rules, and then the matter is somewhat more complex to describe. Most of the programs fit this paradigm, but not all. Several adapt the paradigm used for pattern recognition programs. In addition, a method called the method of successive differences is applicable to series extrapolation where the terms of the series are expressed as numbers.

[Figure 10.11. The induction method. Problem statement. Given: a range {y}; a generator of associated pairs {x:y}. Find: a function f with domain {x} and range {y} such that f(x) = y for all {x:y}. Additional assumption (almost never given with the problem statement, and therefore constituting the actual inductive step): a set of functions {f} constructable from a set of kernel forms {k}.]

Procedure: It is not often possible to express the entire space of functions as a single form (whence a single match would do the job). Consequently a sequential generation of the kernels feeds the match process. Sometimes clues in the exemplars are used to order the generation; more often, generation proceeds simply from the simplest functions to the more complex. This method is clearly a version of "hypothesis-and-test." However, the latter term is used much more generally than to designate the class of induction problems handled by this method. Furthermore, there is nothing in hypothesis-and-test that implies the use of match; it may be only generate-and-test.
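The procedure just described, a sequential generation of kernels feeding a test against the exemplars, can be sketched in code. This is a minimal sketch: the exemplars, the kernel list, and the names (induce, kernels) are hypothetical, and a full match method would modify a kernel in the light of differences rather than merely accept or reject it as here.

```python
# Minimal sketch of the induction method (hypothetical kernels): kernels
# are generated simplest-first, and each kernel's predictions are checked
# against the exemplars {x:y}; the first kernel fitting all pairs wins.

def induce(pairs, kernels):
    """pairs: list of (x, y) exemplars; kernels: (name, f) candidates."""
    for name, f in kernels:                   # generation, simplest first
        if all(f(x) == y for x, y in pairs):  # predictions vs. evidence
            return name, f
    return None                               # space of functions exhausted

# Toy exemplars drawn from y = 2x + 1; kernels ordered by complexity.
exemplars = [(0, 1), (1, 3), (2, 5), (3, 7)]
kernels = [
    ('constant 1', lambda x: 1),
    ('identity',   lambda x: x),
    ('linear 2x+1', lambda x: 2 * x + 1),
]
name, f = induce(exemplars, kernels)
print(name, f(10))   # the induced function extrapolates to unseen x
```

The key inductive step of the text is visible here: the kernel list itself is assumed, not given by the problem; the search only picks the simplest member that fits.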
Consequently, we choose to call the method simply the induction method, after the type of task it is used for.

3.6. Summary

The set of methods just sketched (generate-and-test, hill climbing, match, heuristic search, and induction) constitutes a substantial fraction of all methods used in heuristic programming. To be sure, this is only a judgment. No detailed demonstration yet exists. Also, one or two im