Problem statement

Given: a comparison of two elements of a set {x} to determine which is greater; a set of operators {q} whose range and domain is {x} [i.e., q(x) = x′, another element of {x}].
Find: the greatest x ∈ {x}.

Procedure

A great deal has been written about hill climbing, and the interested reader will find a thorough discussion, within the context of more elaborate methods for finding optima, in Chapter 9 on structured heuristic programming. Here we note only the familiar fact that the method does not guarantee to find the best element in the space. An additional condition, unimodality, is required; otherwise the procedure may end up on a local peak, which is lower than the highest peak. Actually, unimodality is not easy to define in the most general situation to which hill climbing applies, since the underlying space (which is "seen" only through the operators) need not have neighborhoods in the sense required to define a local peak.

Hill climbing shows up in a subordinate way in many heuristic programs, especially in the adaptive setting of parametric values. For example, in one program that attempted to construct programs satisfying certain criteria [9], the various basic instructions were selected at random and used to extend the program built so far. The entire program was an elementary form of heuristic search, discussed in Section 3.4. But superimposed on it was a hill-climbing program that gradually modified the probabilities of selecting the basic instructions so as to maximize the yield of programs over the long haul. The operators randomly jiggled the selection probabilities (always maintaining their sum equal to one). The comparison was made on a statistical basis after observing the performance with the new probabilities on, say, 100 randomly selected problems.

Management science is much concerned with seeking optima, although, as mentioned above, the methods used are more elaborate.
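Before turning to that illustration, the bare procedure of the problem statement can be sketched in a few lines. This is a minimal sketch, not any particular program from the text; the toy objective and the step operators are invented for the example:

```python
def hill_climb(x, operators, better, max_steps=1000):
    """Generic hill climbing: repeatedly apply operators from {q} and keep
    a new element only when the given comparison says it is greater.
    Terminates on an element no operator improves -- a peak, though
    possibly only a local one unless the space is unimodal."""
    for _ in range(max_steps):
        improved = False
        for q in operators:
            x_new = q(x)
            if better(x_new, x):       # the given comparison on {x}
                x, improved = x_new, True
                break
        if not improved:
            return x                   # no operator yields a greater element
    return x

# Toy instance (invented): maximize f over the integers; operators step by +/-1.
f = lambda x: -(x - 3) ** 2
ops = [lambda x: x + 1, lambda x: x - 1]
peak = hill_climb(0, ops, lambda a, b: f(a) > f(b))
```

Note that the procedure sees the space only through `ops` and `better`, exactly as in the problem statement; nothing in the loop assumes a notion of neighborhood beyond what the operators provide.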
This can be illustrated by a heuristic program developed by Kuehn and Hamburger [11] for locating warehouses so as to balance the costs of distribution (which decrease with more warehouses) against the costs of operation (which increase with more warehouses). The program consists of three separate optimizers: a cascade of a generate-and-test optimizer and a steepest-ascent optimizer, followed by a simple hill climber, followed by a set of simple generate-and-test optimizers. Figure 10.7 gives the problem statement (leaving out details on the cost functions) and an indication of how the problem is mapped into the problem space for optimization. Three operators are defined, corresponding to the three separate stages already mentioned. The procedure is given for the first stage (called the Main Routine), but not for the other two (called the Bump and Shift Routine).

The elements of the problem space consist of all subsets of warehouses, taken from a list of possible sites (which is a subset of the total set of sites with customer demand). The program builds up the set of warehouses by the operation of adding one warehouse at a time. The actual data structure corresponding to an element consists not only of the list of warehouses but also of the assignment of each customer to a warehouse, the partial cost borne by that warehouse, and the total cost of operating (TC). (That incremental cost calculations are easier to make than calculations starting from scratch is an important aspect of the efficiency of programs such as this one.) The main part of the program simply considers adding new warehouses (i.e., taking steps in the problem space) and compares each addition against the current position on total cost. It is a steepest-ascent scheme, since it goes through the whole set and then picks the best one. An additional wrinkle is to eliminate from the set of unused warehouses any whose addition fails to lower the total cost below that of the current position, thus depleting the set to be considered.
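A minimal sketch of that stage-1 steepest-ascent loop follows. The names (`stage1`, `total_cost`) and the toy cost numbers are invented stand-ins, since the actual cost functions are omitted from the problem statement; the elimination rule encodes the reading that non-improving candidates are discarded:

```python
def stage1(candidates, total_cost):
    """Steepest-ascent Main Routine (sketch): start from the empty set of
    warehouses and repeatedly add the single site that lowers total cost
    most, discarding candidates that no longer improve on the current
    position.  `total_cost` stands in for the TC computation."""
    x = frozenset()                      # initial element: the null set
    tc = total_cost(x)
    while candidates:
        # Evaluate every remaining candidate addition (steepest ascent).
        scored = [(total_cost(x | {w}), w) for w in candidates]
        # Deplete the set: keep only sites that improve on the current position.
        candidates = {w for c, w in scored if c < tc}
        if not candidates:
            break                        # stage 1 terminates: the set is empty
        best_tc, best_w = min((c, w) for c, w in scored if w in candidates)
        x, tc = x | {best_w}, best_tc
    return x, tc

# Toy instance (invented): each site saves some distribution cost but adds
# a fixed operating cost of 10.
savings = {1: 30, 2: 15, 3: 5}
cost_fn = lambda x: 100 - sum(savings[w] for w in x) + 10 * len(x)
sites, tc_final = stage1({1, 2, 3}, cost_fn)
```

In a real implementation the scoring step would be done incrementally, as the text notes, rather than by recomputing TC from scratch for every candidate.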
In fact, the first stage terminates when this set becomes empty. The operator generator delivers only a fixed subset of all possible warehouse sites. It does this by a simple hill-climbing scheme whereby the best n sites are chosen from the total set on the basis of local cost (LC), which is the cost savings to be made from handling the local demand at the same site as the warehouse (n is a parameter of the program). This cascading of two optimizers keeps the second one from becoming excessively costly.

(We often use the term problem space to refer to the set of potential solutions as defined by the problem statement of a method. It includes the associated operations for moving around in the space.)

Figure 10.7. Warehouse location heuristic program.
Problem statement. Given: a set of customers, {c}, with locations and sales volume; a set of factories, {f}, with locations; a set of warehouse sites, {w}, with transportation costs to customers and to factories, and operating costs. Find: a set of warehouses that minimizes total costs.
Problem space for hill climbing. Elements: {x | x is a subset of {w}}; initial element: the null set; desired element: the element with lowest TC, where TC(x) is the total cost of x to supply all customers and LC(w) is the local cost of w to supply customers at the location of w. Operators: (1) add w to x; (2) delete w from x; (3) move w ∈ x to the location of a customer of w. Note: all these permit incremental calculation of TC, since only paired comparisons with existing warehouses for each customer affected are required. [The flow diagram of the procedure for stage 1 (operator 1) is not reproduced.]

The next two stages (the Bump and Shift Routine) make minor adjustments in the element (the set of warehouses) that results from the first phase. Warehouses are successively eliminated if they fail to pay for themselves.
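The first optimizer of that cascade amounts to a top-n filter over sites. A small sketch, in which the name `candidate_sites` and the `local_cost` interface are assumptions:

```python
import heapq

def candidate_sites(all_sites, local_cost, n):
    """Cascade stage one (sketch): choose the n best sites by local cost
    savings LC, so that the costlier steepest-ascent stage only ever
    examines this fixed subset.  `local_cost` maps a site to the saving
    from serving its own local demand at that site."""
    return heapq.nlargest(n, all_sites, key=local_cost)

# Toy data (invented): site -> LC savings; higher savings is better.
lc = {"a": 5, "b": 1, "c": 9, "d": 2}
best = candidate_sites(lc.keys(), lc.get, 2)
```

The point of the cascade is visible in the interface: the cheap LC ranking runs once over the whole site list, while the expensive TC evaluations are confined to the n survivors.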
This is hill climbing if the order of elimination affects subsequent costs; otherwise it is simply generating through the parts of the system, making local changes. Note that no new warehouses are added to the solution element after deletion. Finally, as the last stage, each warehouse is moved around locally in its own territory to find the most advantageous location. This stage constitutes a series of independent generate-and-test optimizations, since no interaction between warehouses is involved.

3.4. Heuristic Search

The best-known method in heuristic programming is the one whereby the problem is cast as a search through an exponentially expanding space of possibilities, a search that must be controlled and focused by the application of heuristics. All of the game-playing and theorem-proving programs make use of this method, as do many of the management science applications [13].⁵ Figure 10.8 gives the most elementary variant of the method. It assumes a space of elements, the problem space, which contains one element representing the initial position and another representing the final or desired position. Also available is a fixed set of operators which, when applied to elements in the space, produce new elements. (Operators need not always be applicable.) The problem is to produce the final desired position, starting at the initial one. With only this information available, the method involves a search that expands in a tree-like fashion. The initial element x0 is the initial current position; operators are selected and applied to it; each new element is compared with xd to see whether the problem is solved; if not, it is added to a list of obtained positions (also called the "try list" or the "subproblem list"); and one of these positions is selected from which to continue the search. If about B of the operators are applicable to each obtained position, about B^D elements will have been reached after D steps.
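The elementary variant just described can be sketched as follows. This is a hedged illustration with an invented toy problem, not the figure's exact flow diagram; inapplicable operators are modeled as returning `None`:

```python
from collections import deque

def heuristic_search(x0, xd, operators, max_elements=10_000):
    """Elementary heuristic search (sketch): expand a tree from the
    initial element x0, applying each applicable operator, until the
    desired element xd is produced.  With about B applicable operators
    per position, roughly B**D elements exist after D steps, which is
    why real programs must prune."""
    obtained = deque([(x0, [])])      # the "try list": (element, operator sequence)
    count = 0
    while obtained and count < max_elements:
        x, path = obtained.popleft()  # select the first obtained position
        for q in operators:
            x_new = q(x)
            if x_new is None:         # operator not applicable here
                continue
            if x_new == xd:
                return path + [q]     # solved: return the operator sequence
            obtained.append((x_new, path + [q]))
            count += 1
    return None

# Toy instance (invented): reach 10 from 0 with operators +3 and +4.
ops = [lambda x: x + 3, lambda x: x + 4]
seq = heuristic_search(0, 10, ops)
```

The `max_elements` cap is only a safeguard against the exponential growth; the heuristics discussed next replace it with informed selection and rejection.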
Figure 10.8. Heuristic search method.
Problem statement. Given: a set {x}, the problem space; a set of operators {q} with range and domain in {x}; an initial element, x0; a desired element, xd. Find: a sequence of operators, q1, q2, . . ., qn, such that they transform x0 into xd: qn[qn−1(. . . q1(x0) . . .)] = xd.
Procedure (basic). [Flow diagram not reproduced: select an element from the obtained set; select and apply an operator, with failure if none applies; compare the result with xd, yielding a solution if equal; otherwise insert it into the obtained list and continue.]

The search is guided (i.e., the tree pruned) by appropriate selection and rejection of operators and elements. The flow diagram provides a scheme upon which the various possibilities can be localized. The most elementary ones are unconditional: a rule for operator selection or for element selection. The latter is often accomplished by keeping an ordered list of elements and simply selecting the first one on the list; hence the order is dictated by the insertion process. The simplest rules have been given names. Thus, if the list is last-in-first-out (so that insertion is always at the front), the resulting search is depth first. This scheme was used by almost all early game-playing programs. If the list is first-in-first-out (so that insertion is always at the end), the resulting search is breadth first. This scheme has been used by some theorem provers.

An element may be completely rejected at the time of insertion. One of the most frequent heuristics is to reject if the element is already in the obtained list, that is, to avoid duplication. (This is a heuristic, since, though always beneficial, it requires extra effort and memory; thus it may not pay compared to, say, additional search.)

⁵ A few game players are exceptions. They use a recognition method that learns to associate to each game position (or class of positions) a good move. Although theoretically capable of handling complex games through the development of an appropriate classification, this method has not been used in any but simple games.
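The insertion-order rules and the duplicate-rejection heuristic can be illustrated together in a few lines; `neighbors` is an invented stand-in for generating the results of the applicable operators:

```python
def search(x0, xd, neighbors, depth_first=False):
    """Search whose character is fixed entirely by where insertion occurs:
    insertion at the front (last-in-first-out) gives depth-first search,
    insertion at the end (first-in-first-out) gives breadth-first.
    Elements already obtained are rejected at insertion time."""
    obtained = [x0]
    seen = {x0}                         # the duplicate-rejection heuristic
    while obtained:
        x = obtained.pop(0)             # always select the first element
        if x == xd:
            return True
        for x_new in neighbors(x):
            if x_new in seen:           # reject: already on the obtained list
                continue
            seen.add(x_new)
            if depth_first:
                obtained.insert(0, x_new)   # LIFO: insert at the front
            else:
                obtained.append(x_new)      # FIFO: insert at the end
    return False

# Toy graph (invented): integers up to 6, with edges n -> 2n and n -> 2n+1.
nbrs = lambda n: [m for m in (2 * n, 2 * n + 1) if m <= 6]
found = search(1, 6, nbrs)
```

The `seen` set makes the trade-off in the text concrete: it buys avoided re-expansion at the price of the memory and lookup effort of maintaining the set.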
Information already available in the scheme can be used; for instance, a rather important