Problem statement for M1, the unimodal objective method
   Given: a set of n variables, {x}, where each x ≥ 0;
      a set of m constraints, {g}, defining the feasible region;
      an objective function, z = f(x), where f is unimodal in the feasible region.
   Find: x in the feasible region such that z is maximum.

Problem space PS1, for hill climbing
   Elements: {x}.
   Initial element: x0, a feasible solution (not obtained by M1).
   Operators: {Δ}, where x' = x + Δ. Operators may produce points outside the feasible region, but these can be rejected by testing against the g.

The procedure in the figure does not provide any information on how to generate the operators. If M1 is compared to SM, several differences are apparent. First, and most striking, the problem space for M1 is n-dimensional Euclidean space, whereas for SM it is a finite set of points in (n + m)-dimensional space. Thus the search space has been drastically reduced, independently of what techniques are used to search it. Second, and almost as striking, M1 has all of {Δ} as operators (i.e., all of n-dimensional space again), whereas SM has only nm operators. Third, in SM a unique operator is selected for application on the basis of partial information, and it always both applies and improves the position. In M1 there is no reason to expect an operator either to produce a feasible solution or, if it does, to obtain an improvement; thus extensive testing and comparing must be done. Finally, we observe that the cost per step in M1 (when applied to the same LP problem as SM) is mk, where k is the number of variables in Δ and would normally be rather small. Compared to m(m + n) for SM, this yields the one aspect favoring M1: one can take about (m + n)/k steps in the space of M1 for each step in the space of SM. However, the thrashing around necessary at each point to obtain a positive step will largely nullify this advantage.

This list of differences suggests constructing a sequence of methods that extend from M1 to SM, with decreasing spaces and operators and increasing cost per step (to pay for the additional sophistication). Much of the gain, of course, will come not from changing problem spaces but from acquiring better operator selection. There may be relatively few substantial changes of problem space.
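To make the comparison concrete, the following is a minimal sketch of M1 on a tiny LP instance: maximize z = 3x1 + 2x2 subject to x1 + x2 ≤ 4, x1 + 3x2 ≤ 6, and x1, x2 ≥ 0. The particular f, g's, step size, and random sampling scheme are illustrative assumptions, not part of the original method description; M1 itself says nothing about how the Δ are generated.

    import random

    def f(x):
        # Illustrative objective: z = 3*x1 + 2*x2
        return 3 * x[0] + 2 * x[1]

    def feasible(x):
        # Illustrative constraints g(x) >= 0, plus x >= 0
        g1 = 4 - (x[0] + x[1])
        g2 = 6 - (x[0] + 3 * x[1])
        return g1 >= 0 and g2 >= 0 and all(xi >= 0 for xi in x)

    def m1_hill_climb(x, step=0.1, tries=100, iters=10_000):
        """Blind hill climbing: propose a small random delta, reject it
        if it leaves the feasible region or fails to improve z."""
        z = f(x)
        for _ in range(iters):
            for _ in range(tries):
                dx = [random.uniform(-step, step) for _ in x]
                x_new = [xi + di for xi, di in zip(x, dx)]
                if feasible(x_new) and f(x_new) > z:
                    x, z = x_new, f(x_new)
                    break
            else:
                return x, z   # no improving feasible step found
        return x, z

    print(m1_hill_climb([0.5, 0.5]))   # creeps toward the optimum at (4, 0), z = 12

The inner loop is the "thrashing around" noted above: most proposed operators are rejected as infeasible or non-improving, and this is precisely the cost the later methods remove.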
Figures 10.14 and 10.15 provide two methods that seem to reflect some of the major boundaries to be crossed in getting from M1 to SM.

Problem statement for M2, the monotone objective method
   Given: the conditions of M1, except that f is monotone in the feasible region.
   Find: x in the feasible region such that z is maximum.

Problem space PS2, main hill climbing
   Elements: {x on the boundary of the feasible region (at least one x = 0 or g = 0)}.
   Initial element: x0 in the feasible region (not obtained by M2).
   Operators: {x}; apply by driving x to a boundary (a full step, found by M2*).

Problem space for M2*, hill climbing
   Elements: {x}.
   Initial element: x, given by M2 operator.
   Operators: {Δx, with appropriate sign}, where x' = x + Δx.
   Additional assumption for efficiency: g(x) = 0 can be solved for any x with all other x fixed.

Problem space PS1 for M1
   Used as backup when PS2 terminates without optimum.

Figure 10.14. M2: monotone objective method.

Problem statement for M3, the consistent exchange method
   Given: the conditions of M2, plus: if an x is exchanged for other variables by moving along a maximal boundary, then Δz has a consistent sign.
   Find: x in the feasible region such that z is a maximum.

Problem space PS3, main hill climbing
   Elements: {x on the maximal boundary of the feasible set; i.e., no x can be changed to increase z, holding the other x fixed}.
   Initial element: x0, a feasible solution on the maximal boundary (not obtained by M3).
   Find: x on the maximal boundary, such that exchange for an x to increase z is not feasible.
   Operators: {x}; apply by exchanging x along the maximal boundary, with z increased.

Problem space for M3*
   Elements: {x}.
   Initial element: x, given by M3 operator.
   Operators: {Δx, with appropriate sign} and {Δ, over subsets of {x}}, where x' = x + Δ.
   Additional assumption for efficiency: any system of k equations, {g(x) = 0}, can be solved for any set of k variables with the others fixed.

Figure 10.15. M3: consistent exchange method.

Figure 10.14 shows method M2, which adds the assumption that the objective function, f, is monotone in the feasible region. In the LP problem ∂f/∂x is constant, but this is not required to justify the main conclusion, namely, that if a given change in a variable is good, more change in the same direction is better. The effect of this is to create new operators and, through them, a new space. The basic decision is always to drive a variable to a boundary (in the direction of increasing z, of course). Thus the space for M2 becomes the boundary set of the original space for M1 (those points where at least one of the constraints, including the x ≥ 0, attains zero). The operators in the space of M2 are full steps to a boundary (what are sometimes called macromoves in artificial intelligence). Now, finding the boundary is still a problem, although a more manageable one; thus M2 has a second method, M2*, for this purpose. As described in Fig. 10.14 it can be a simple hill climber.

An additional strong assumption has been made in the procedure of M2, namely, that only changes in a single variable, x, will be considered. This reduces the number of operators, as well as making the operator submethod M2* simpler. It is not justified by the assumptions of the problem statement, however, and consequently M2 will terminate at suboptimal positions where no single variable can be changed to increase z without decreasing some other variables. (This is often called the maximal or the Pareto optimum set.) Rather than relax the operators to a wider class, the original method, M1, is held in reserve to move off the maximal set. (However, if done with small steps, this is extremely inefficient for the LP problem, since the system just jitters its way slowly up a bounding plane.) The description of M2* gives an additional assumption: each equation g(x) = 0 can be solved directly for any x, given that the values of the other x's are determined. This permits direct calculation of the extreme value of x on a boundary that is maximal. Slightly stronger conditions on the g's allow determination of the first constraint to become binding, without multiple evaluations.

Figure 10.15 shows method M3, which adds the assumption that the exchange rate (e) always has a consistent sign as one moves along the feasible region in response to introducing a variable x (what we have called exchanging). Again, in the LP problem e is constant, but this is not required to justify the main conclusion: that in a maximal situation, if adjustments are made in other variables to allow a particular variable to increase, and the gain from the exchange is positive, it will always be positive; hence the new variable should be exchanged for as much as possible, namely, until another boundary is reached. This assumption not only allows a better way of dealing with the maximal cul-de-sac than does M2, with its regression to M1, but also permits the problem space to be changed radically a second time. The elements of the space now become the set of maximal points, thus a small subset of the space of M2.
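The sketch below continues the same illustrative LP as before (an assumption, not the text's own example) and shows M2's macromove: driving a single variable to the first binding constraint, computed directly by a ratio test rather than by small steps. Starting by driving x2 lands the method in exactly the maximal cul-de-sac just described.

    A = [(1, 1), (1, 3)]   # rows of the illustrative linear g's

    def slacks(x):
        return [4 - (x[0] + x[1]), 6 - (x[0] + 3 * x[1])]

    def max_step(x, i):
        """Largest t >= 0 keeping x + t*e_i feasible: a direct ratio test,
        available because each g can be solved for any single variable."""
        t = float("inf")
        for row, s in zip(A, slacks(x)):
            if row[i] > 0:              # this constraint tightens as x_i grows
                t = min(t, s / row[i])
        return t

    def m2_step(x, i):
        """M2's macromove: a full step driving variable i to a boundary
        (in the direction of increasing z; both objective coefficients
        are positive here)."""
        x = list(x)
        x[i] += max_step(x, i)
        return x

    x = m2_step([0.0, 0.0], 1)   # drive x2 to its boundary: [0.0, 2.0], z = 4
    print(x, max_step(x, 0))     # x1 is blocked (max step 0.0): a maximal
                                 # point, although the optimum has z = 12

From that point M3's operator escapes the cul-de-sac by exchanging: x1 is introduced while x2 is adjusted to stay on the binding constraint, the exchange rate e is calculated directly, and, since e has a consistent sign, the exchange is pushed to the next boundary. The code is again a hardcoded two-variable sketch under the same assumptions.

    c = (3, 2)                                # illustrative objective coefficients
    constraints = [((1, 1), 4), ((1, 3), 6)]  # rows a and bounds b: a.x <= b

    def exchange(x, row):
        """Introduce x1 along the binding constraint row.x = const,
        adjusting x2 to stay on it; if the exchange rate is positive,
        push to the first boundary reached."""
        dx2 = -row[0] / row[1]         # adjustment of x2 per unit of x1
        e = c[0] + c[1] * dx2          # exchange rate: dz per unit of x1
        if e <= 0:
            return x                   # no profitable exchange along this row
        steps = [x[1] / -dx2]          # step at which x2 itself reaches 0
        for a, b in constraints:
            if a == row:
                continue               # the constraint being ridden along
            drift = a[0] + a[1] * dx2  # change of a.x per unit step
            if drift > 0:              # that constraint tightens as we move
                steps.append((b - (a[0] * x[0] + a[1] * x[1])) / drift)
        t = min(steps)
        return [x[0] + t, x[1] + t * dx2]

    x = exchange([0.0, 2.0], (1, 3))   # ride x1 + 3*x2 = 6: gives [3.0, 1.0], z = 11
    x = exchange(x, (1, 1))            # ride x1 + x2 = 4: gives [4.0, 0.0], z = 12
    print(x)                           # the optimum, reached in two exchanges

Each application is a full move from one maximal point to another, which is recognizably simplex-like behavior; what the sketch leaves open, as the text goes on to discuss, is how such moves are found and applied in general.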
The operators remain the same: the individual variables. The application of an operator again corresponds to the solving of a subproblem, hence is accomplished by a submethod, M3*. The problem is as follows: given x (with a positive exchange rate), advance it as far as possible. This means solving the constraints simultaneously for the variables, so as to remain on a boundary. As a change in the selected x is made, the current point moves off the maximal boundary by violating either the constraints or maximality, and adjustments must be made in the other x's to restore these conditions. What the new clause in the problem statement provides is not a way of making the adjustments, but a guarantee that if a change is once found that does increase z (after adjustment), it should be pushed to the limit.

We have not described M3* in detail, other than to indicate the available operators. At its most general (i.e., assuming no other information), it requires a two-stage process: one stage to discover a good direction, the other to push it. The latter is again a two-stage process: one to change the selected x, the other to make the adjustments. We have included an additional assumption, similar to the one for M2*, that a direct way exists of solving systems of constraints for some variables in terms of others. This clearly can make an immense difference in the total efficiency of problem solving, but it does not alter the basic structuring of the task.

M3 is already a recognizable facsimile of SM. The space has been cut to all subsets of the variables, although the final contraction to subsets of m variables has not occurred. (It is implicit in the problem statement of M3, with some mild conditions on the g's, but has not been brought out.) The operators of M3 and SM are the same; more precisely, they are isomorphic, for the process of applying an operator is quite different in the two methods. There are still some steps to go. The kinds of methods that are possible for the operator need explication: they are represented in M3* only by the assumption that systems of equations can be solved, whereas the schemes in SM use special properties of linear systems. Similarly, we have not explored direct calculation of the exchange rates, with the subsequent replacement of comparison in the main method by comparison in the operator, to avoid expensive computation.

We have not carried this example through in complete detail, nor have we established very many points on the path from a crude hill climber to SM. The two points determined are clearly appropriate ones and capture some of the important features of the method. They are not unexpected points, of course, since linear programming is well understood. The viewpoint underlying the analysis is essentially combinatorial, and such aspects have been thoroughly explored (e.g., see [23]). If these intermediate problems have any peculiar flavor, it is that they become established where the search spaces change, and these need not always