"Dynamic Backtracking" by Matthew L. Ginsberg

Things to keep in mind:

Think about leading the discussion: when do I bring things in, and when do I ask for input from the class?

Go over why we know it is complete.

Distinction between backjumping and dynamic constraint assertion.

1. What is backtracking?

1.1 Present backtracking in a general context of problem solving.

1.2 Demonstrate the following programs in PROLOG.

Things to note about PROLOG:

· Built in backtracking (simple or chronological).

· Choice points are used to backtrack.

· The implementation of PROLOG determines exactly how choice points are generated in the clauses of a PROLOG database.

1.3 Present the map coloring problem as a simple PROLOG program.

Note: some efficiency can be achieved by adding in cuts.
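For reference, a Python analogue of the Prolog map-coloring demo: a sketch using explicit recursion to mimic Prolog's built-in chronological backtracking (the three-country map and color names are the ones from Section 2; function names are illustrative).

```python
COLORS = ["r", "y", "b"]
# Adjacent country pairs for the three-country example map.
NEIGHBORS = [("albania", "bulgaria"), ("bulgaria", "czech"), ("albania", "czech")]

def consistent(var, value, coloring):
    """True if giving `var` this color clashes with no colored neighbor."""
    for a, b in NEIGHBORS:
        other = b if a == var else a if b == var else None
        if other is not None and coloring.get(other) == value:
            return False
    return True

def color_map(countries, coloring=None):
    """Chronological backtracking: each recursive call is a choice point,
    and returning from a failed call undoes the most recent assignment."""
    coloring = {} if coloring is None else coloring
    if len(coloring) == len(countries):
        return dict(coloring)
    var = countries[len(coloring)]
    for c in COLORS:                 # try values in a fixed order
        if consistent(var, c, coloring):
            coloring[var] = c
            result = color_map(countries, coloring)
            if result is not None:
                return result
            del coloring[var]        # backtrack chronologically
    return None

print(color_map(["albania", "bulgaria", "czech"]))
```

The recursion stack here plays exactly the role of Prolog's choice-point stack; cuts would correspond to pruning branches of the `for` loop.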

2. Preliminaries.

Example Problem:

Definition 2.1:

Variables: I = { 1, 2, 3 } = { albania, bulgaria, czech }

Domains: V = { V1, V2, V3 } = { {r,y,b}, {r,y,b}, {r,y,b} }

Constraints: C = { ( (albania, bulgaria), { (r,y), (r,b), (y,r), (y,b), (b,r), (b,y) } ),

( (bulgaria, czech), { (r,y), (r,b), (y,r), (y,b), (b,r), (b,y) } ),

( (albania, czech), { (r,y), (r,b), (y,r), (y,b), (b,r), (b,y) } ) }
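Definition 2.1 can be encoded directly; a sketch, building each constraint's allowed set from the "neighbors must differ" rule rather than listing the six pairs by hand (names are the ones from the example):

```python
VARIABLES = ["albania", "bulgaria", "czech"]
DOMAINS = {v: {"r", "y", "b"} for v in VARIABLES}

def allowed_pairs(colors):
    """All ordered pairs of distinct colors: the combinations a pair of
    neighboring countries is allowed to take."""
    return {(x, y) for x in colors for y in colors if x != y}

# Constraints map a variable pair to the value pairs it permits.
CONSTRAINTS = {
    ("albania", "bulgaria"): allowed_pairs("ryb"),
    ("bulgaria", "czech"):   allowed_pairs("ryb"),
    ("albania", "czech"):    allowed_pairs("ryb"),
}
```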

Definition 2.2:

Partial Solution (ordered pairs): P = { (albania,r), (bulgaria,y), (czech,b) }

Partial Solution variables: P-bar = { albania, bulgaria, czech }

Definition 2.3:

Elimination Explanation: E (see below)

Elimination Mechanism: E( { (albania, r) }, bulgaria ) = { ( r, { albania } ) }

or E( { (albania, r), (bulgaria, y) }, czech ) = { ( r, { albania } ), ( y, { bulgaria } ) }

Eliminated Values: æ (see below)

æ =  ( { (albania, r), (bulgaria, y) }, czech ) = { r, y }

Elimination Mechanisms will be assumed to have the following properties:

1. It is correct, i.e. the elimination mechanism does not violate the constraints.

2. It is complete, i.e. upon failure to extend the solution, the elimination mechanism will find at least one existing variable assignment that is violated by the attempted extension.

3. It is concise, i.e. the elimination mechanism records at most a single explanation for each value that cannot be assigned to a specified variable.

Lemma 2.4:

Given a complete elimination mechanism, if assigning the value v to the final variable i completes a solution to the CSP, then the elimination mechanism will not produce a set of eliminated values for i that includes v.

Algorithm 2.5 (Depth-first search):

1. Set the partial solution P to empty. Set the elimination set for each variable to empty.

2. If the set of assigned variables (i.e. P-bar) equals the set of variables in the problem (i.e. I), then a solution has been found. Otherwise select the next variable i that has not been assigned and set its elimination set Ei to the values eliminated so far for i (i.e. æ(P, i)).

3. If the remaining domain of values for i (i.e. Vi less Ei) is non-empty, select a value from it, assign it to i, and return to step 2. Otherwise remove the last entry (j, vj) from P, add vj to Ej, and repeat this step for j.
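The steps above can be sketched as follows; `conflicts` stands in for the elimination mechanism (it reports whether a value is ruled out by the current partial solution), and the map-coloring data from the example is assumed.

```python
def depth_first(variables, domains, conflicts):
    """Algorithm 2.5: depth-first search that records eliminated values in
    explicit per-variable elimination sets instead of choice points on the
    execution stack."""
    P = []                                   # partial solution: (var, value)
    elim = {v: set() for v in variables}     # elimination set per variable
    while len(P) < len(variables):
        i = variables[len(P)]                # step 2: next unassigned variable
        elim[i] |= {v for v in domains[i] if conflicts(i, v, dict(P))}
        live = sorted(domains[i] - elim[i])
        if live:
            P.append((i, live[0]))           # step 3: assign and continue
        elif P:
            elim[i] = set()                  # i will be re-seeded on revisit
            j, vj = P.pop()                  # backtrack: eliminate j's value
            elim[j].add(vj)
        else:
            return None                      # first variable exhausted
    return dict(P)

NEIGHBORS = [("albania", "bulgaria"), ("bulgaria", "czech"), ("albania", "czech")]

def conflicts(i, v, assigned):
    """A value is eliminated if any assigned neighbor already has it."""
    return any(assigned.get(b if a == i else a) == v
               for a, b in NEIGHBORS if i in (a, b))

solution = depth_first(["albania", "bulgaria", "czech"],
                       {c: {"r", "y", "b"} for c in
                        ["albania", "bulgaria", "czech"]},
                       conflicts)
print(solution)
```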

Question: What are the differences between chronological backtracking and Algorithm 2.5?

Answer: The use of elimination sets to keep track of the values a variable has already taken on (see Lemma 2.6), versus storing choice points on the execution stack.

Question: Why is this complete?

Answer: Each variable's domain is iterated across; as each value is considered and fails, it is moved to the elimination set, so every value is tried exactly once and none is skipped.

Algorithm 3.1

The elimination set for a given variable that is selected for instantiation is derived from E(P, i).

This algorithm removes the last entry from P (i.e. the assignment to variable i), then creates an elimination set for the new last element of P (i.e. for the assignment to variable j) based on the E for all variables in E(P, i) less j.

Lemma 3.2:

We can identify the variable assignment(s) which caused us to fail.

Algorithm 3.3 (Backjumping):

When considering an elimination set E, take the last entry in P whose variable j appears in E. Go back to this entry of P (deleting all following entries) and add to the elimination set Ej the value assigned to j, together with the intersection of the original eliminating variables from E and the current variables of P (after the deletion noted above).
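A sketch of just the backjump step, assuming an eliminating explanation E in the dict-of-culprits form used earlier (value -> set of variables responsible); names are illustrative.

```python
def backjump(P, E, elim):
    """One step of Algorithm 3.3.  P is the partial solution (a list of
    (variable, value) pairs) and E explains why every value of the failed
    variable is eliminated.  Returns the truncated partial solution and
    the backjump variable; elim is updated with the new explanation."""
    culprits = set().union(*E.values())
    # Last entry of P whose variable appears in the explanation.
    idx = max(k for k, (var, _) in enumerate(P) if var in culprits)
    j, vj = P[idx]
    new_P = P[:idx]                          # delete j and all later entries
    remaining = {var for var, _ in new_P}
    # Charge j's failed value to the culprits that are still assigned.
    elim.setdefault(j, {})[vj] = (culprits - {j}) & remaining
    return new_P, j

# Failing on some fourth variable whose values were blamed on albania
# and bulgaria:
P = [("albania", "r"), ("bulgaria", "y"), ("czech", "b")]
E = {"r": {"albania"}, "y": {"bulgaria"}}
elim = {}
new_P, j = backjump(P, E, elim)
print(new_P, j, elim)
```

Note that czech's assignment is discarded even though it was not implicated in the failure; this is exactly the wasted work that dynamic backtracking later avoids.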

Question: What advantages does this algorithm have over 2.5? What disadvantages?

Answer: Advantage: it backtracks directly over variables not implicated in the failure. Disadvantage: it throws out all of the intermediate assignments made after the backjump target, discarding work already done.

Proposition 3.4:

Question: Why is Algorithm 3.3 (backjumping) complete, and why does it always expand fewer nodes than depth-first search?

Answer: The reduced node expansion is a direct consequence of jumping back to the identified problem variable, while completeness derives from the fact that by jumping back to the problem variable we skip only a portion of the search space that would fail in any case.

Proposition 3.5:

Question: Why are the space requirements for backjumping important? (They are O(i²v).)

Answer: At each stage of a backjump that fails, we expand the number of variables whose neighbors (in the search space) we are considering. That is to say, if we backjump from variable i, having failed to find a value to assign to it, to the variable j, we add the eliminating explanation from i to Ej.

Dynamic Backtracking:

Question: what is the problem with algorithm 3.3?

Answer: It needlessly discards work that has already been done (remember the part about throwing away all variable assignments after the backjump target).

So to avoid this problem, when jumping back over a variable we should retain the value assigned and the elimination set E of this variable, less any member of E that involves the variable we are jumping back to.

Algorithm 4.1 (Dynamic backtracking I)

The elimination set for an instantiated variable is derived from the union of the current Ei with E(P, i).

When a backjump variable j is selected, all dependency tuples involving this variable and the assigned value that caused the backjump are removed from the elimination sets of all variables from the end of P back to the location of j. Set the elimination set of j as above and add (vj, the eliminating variables of E intersected with P-bar) to Ej.
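A sketch of the whole loop under the same map-coloring assumptions; the key difference from backjumping is in the final lines, where only the backjump variable j is unbound and only eliminations that mention j are discarded.

```python
def dynamic_backtracking(variables, domains, neighbors):
    """Dynamic backtracking (a sketch of Algorithm 4.1 for map coloring).
    elim[i] maps each eliminated value of i to its explanation: the set
    of assigned variables responsible."""
    assignment, order = {}, []               # partial solution and its order
    elim = {v: {} for v in variables}

    def culprit(i, v):
        """An assigned neighbor of i already colored v, if any."""
        for a, b in neighbors:
            if i in (a, b):
                other = b if a == i else a
                if assignment.get(other) == v:
                    return other
        return None

    while len(order) < len(variables):
        i = next(v for v in variables if v not in assignment)
        for v in sorted(domains[i]):         # refresh i's eliminations
            if v not in elim[i]:
                j = culprit(i, v)
                if j is not None:
                    elim[i][v] = {j}
        live = sorted(domains[i] - set(elim[i]))
        if live:
            assignment[i] = live[0]
            order.append(i)
            continue
        # Every value eliminated: backjump to the most recent culprit.
        culprits = (set().union(*elim[i].values()) & set(order)
                    if elim[i] else set())
        if not culprits:
            return None                      # nothing left to blame
        j = max(culprits, key=order.index)
        vj = assignment.pop(j)
        order.remove(j)
        elim[j][vj] = culprits - {j}         # record why j = vj failed
        # Keep later assignments; drop only eliminations that blamed j.
        for k in variables:
            elim[k] = {v: why for v, why in elim[k].items() if j not in why}
    return assignment

VARS = ["albania", "bulgaria", "czech"]
NEIGHBORS = [("albania", "bulgaria"), ("bulgaria", "czech"),
             ("albania", "czech")]
solution = dynamic_backtracking(VARS, {v: {"r", "y", "b"} for v in VARS},
                                NEIGHBORS)
print(solution)
```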

Theorem 4.2

Question: Why does dynamic backtracking always terminate? Why is it complete? Why is the random distribution of goal nodes in the search space significant to the expansion of fewer goal nodes (Prop. 3.5)?

Answer: The elimination set over all the variables grows monotonically (I think). At each step the elimination sets remain valid (based on Lemma 3.2). If the goal node is always at the "end" of some section of the search space then dynamic backtracking would be essentially the same as chronological backtracking.

Algorithm 4.3 (Dynamic backtracking II)

This version of the dynamic backtracking algorithm does not instantiate the variable selected to backjump to; instead, the algorithm may select any of the uninstantiated variables at the point it jumps back to.

Question: What advantage does Algorithm 4.3 have over Algorithm 4.1?

Answer: A heuristic can be applied to the selection of the variable to next instantiate that can take advantage of the instantiations made before the backjump.

3. More on the example - revisit using dynamic backtracking.

4. Experimentation, some problems with dynamic backtracking.

Problems that are difficult for dynamic backtracking (i.e. where dynamic backtracking increases the amount of work/backtracks) are those where the interaction of a small number of variables forces backtracking in a larger set of variables; it would be easier to unbind the small set of variables and then solve the larger set, rather than keeping the small set of variable bindings.

5. Summary.

5.1 Dynamic variable reordering (versus dynamic choice among remaining variables). This is advantageous in that variables bound early in a search may be returned to without unbinding variables that required a lot of work to bind and/or have little to no interaction with the earlier variables.

5.2 Dynamically asserted new constraints: discard constraints when they are no longer useful, avoiding the space penalty of retaining every derived constraint.

6. Future work.

· Problems with loops in variable binding caused by forgetting elimination information (i.e. discarded new constraints).

· When backtracking to avoid a difficulty of some sort, to where should one backtrack? (1) Backtracking to chronologically recent choices should have a significant advantage: the fact that they are most recent indicates that these choices should have the least entanglement within the overall problem. (2) Another choice would be the choice point that had the highest number of remaining choices, since choice points with fewer remaining choices are more likely to force a backtrack sooner (as in one with a single value left to assign). (3) When faced with backtracking from a particular variable i, choosing another variable that interacts directly with i has advantages over a variable that does not.

Question: How should we choose among these techniques for selecting backtracking targets? How do we give preference to the intuitive notion of lateral movement through the search space to remain close to a solution? How do we even know that we are doing that?

Answer: ??

Dependency Pruning: This is a subtle problem. Given two sets of constraints u and v, if v is stricter than u and u has been eliminated, then trying to deal with v won't get you anywhere. A potential solution to this is to carry additional information in elimination explanation sets that allows explanations about subsets of problems that fail to be carried along as well.

7. Proofs.

Notes on Termination of dynamic backtracking:

(n, S) is based on deriving the conjunction of assignments from the current elimination set.

N is the conjunction of the (n, S) formulas above. N grows monotonically as the dynamic backtracking algorithm proceeds because the elimination explanations grow each time a backtrack takes place.