Artificial Intelligence: A Modern Approach by Russell & Norvig

Part II Problem-solving

Chapt. 3 Solving Problems by Searching

1. Search as the foundation of problem solving.

2. Limitations on search: time complexity and (more importantly) space complexity.

3. Search quality: completeness and optimality.

4. Uninformed search methods (see Figure 1):

a) Breadth-first

b) Uniform-cost

c) Depth-first

d) Depth-limited

e) Iterative deepening

f) Bi-directional

5. Example problems

a) 8-puzzle (N-puzzle)

b) 8-queens (N-queens)

6. CSP (constraint satisfaction problems)

a) Backtracking

b) Forward checking and arc consistency (constraint propagation)
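The combination of backtracking (item 6a) with forward checking (item 6b) can be sketched as below. The toy map-coloring instance and the function names are illustrative, not from the text:

```python
# Sketch of backtracking search with forward checking for a binary
# "all neighbors differ" CSP (as in map coloring).

def backtrack(assignment, domains, neighbors):
    """Assign variables one at a time; prune neighbor domains as we go."""
    if len(assignment) == len(domains):
        return assignment
    # Pick an unassigned variable (plain order; ordering heuristics come in Chapt. 4).
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        # Forward checking: remove `value` from each unassigned neighbor's domain.
        pruned = {n: [x for x in domains[n] if x != value]
                  for n in neighbors[var] if n not in assignment}
        if all(pruned[n] for n in pruned):        # no domain wiped out
            result = backtrack({**assignment, var: value},
                               {**domains, **pruned}, neighbors)
            if result is not None:
                return result
    return None                                   # dead end: backtrack

# Toy instance: color a 3-node triangle with 3 colors.
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
domains = {v: ["red", "green", "blue"] for v in neighbors}
solution = backtrack({}, domains, neighbors)
```

Forward checking fails a branch as soon as some neighbor's domain becomes empty, rather than waiting for the search to reach that neighbor.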

Criterion                  Time       Space      Optimal?     Complete?
Breadth-first              b^d        b^d        yes          yes
Uniform-cost               b^d        b^d        yes          yes
Depth-first                b^m        bm         no           no
Depth-limited              b^l        bl         no           yes, if l >= d
Iterative deepening        b^d        bd         yes          yes
Bi-directional             b^(d/2)    b^(d/2)    yes          yes

Notes: b is the branching factor; d is the depth of the solution; m is the maximum depth of the search tree; l is the depth limit.

Figure 1: Comparing search strategies (page 81)
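Iterative deepening's favorable row in Figure 1 (linear bd space, yet complete and optimal like breadth-first) comes from running depth-limited search repeatedly with an increasing limit. A minimal sketch, on an illustrative toy state space:

```python
# Sketch of iterative deepening: repeated depth-limited DFS with an
# increasing limit. Space stays O(bd) (one path on the recursion stack),
# while the shallowest solution is still found first.

def depth_limited(state, goal, successors, limit):
    """Return a path of at most `limit` steps to `goal`, else None."""
    if state == goal:
        return [state]
    if limit == 0:
        return None                        # cutoff
    for child in successors(state):
        path = depth_limited(child, goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, goal, successors, max_depth=20):
    for limit in range(max_depth + 1):     # limit = 0, 1, 2, ...
        path = depth_limited(start, goal, successors, limit)
        if path is not None:
            return path
    return None

# Toy state space: reach 10 from 1 using the actions +1 and *2.
path = iterative_deepening(1, 10, lambda n: [n + 1, n * 2] if n < 10 else [])
```

Re-expanding the shallow levels on every iteration looks wasteful, but for branching factor b > 1 the deepest level dominates the total work, so the overhead is a small constant factor.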

Questions:

* Why is the space constraint on search more important than the time constraint?

* Why should we be concerned about completeness and optimality?

Chapt. 4 Informed Search Methods

1. Heuristics

a) Choosing a heuristic

i) derive one from a relaxed problem

2. Informed search methods

3. Best-first search

a) Greedy search: minimize estimated cost to the goal, h(n)

b) Minimize total path cost: A* search

i) f(n) = g(n) + h(n)

ii) monotonicity and admissible heuristics (in context of A* search)

iii) Improvements and modifications of A*

a) IDA* (iterative deepening A*)

b) SMA* (simplified memory-bounded A*)
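The f(n) = g(n) + h(n) evaluation can be sketched on an explicit weighted graph. The graph and heuristic values below are illustrative toy data, with h chosen to be admissible (it never overestimates the true remaining cost):

```python
import heapq

# Sketch of A*: always expand the frontier node with the smallest
# f(n) = g(n) + h(n). With an admissible h, the first goal node
# popped from the frontier is an optimal solution.

def a_star(start, goal, graph, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in graph.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # better route to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy graph (edge lists with step costs) and an admissible heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 6, "A": 4, "B": 2, "G": 0}.get
path, cost = a_star("S", "G", graph, h)
```

Note how the direct edge A-G (cost 12) is pushed but never chosen: its f value stays worse than the route through B, which is exactly the pruning that h buys over uniform-cost search.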

4. Iterative improvement algorithms

a) Hill climbing

b) Simulated annealing
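Simulated annealing escapes the local maxima that trap hill climbing by sometimes accepting downhill moves: improving moves are always taken, worsening moves are taken with probability exp(dE/T), and the temperature T is lowered over time. A minimal sketch on an illustrative one-dimensional objective:

```python
import math
import random

# Sketch of simulated annealing with a geometric cooling schedule.
# Early on (high T) the walk is nearly random; late (low T) it is
# effectively hill climbing.

def simulated_annealing(start, value, neighbor, t0=1.0, cooling=0.995, steps=2000):
    rng = random.Random(0)                    # seeded for reproducibility
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbor(current, rng)
        delta = value(nxt) - value(current)   # > 0 means improvement
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = nxt                     # accept (always if uphill)
        t *= cooling                          # cool down
    return current

# Toy problem: maximize -(x - 3)^2 over the integers; the optimum is x = 3.
best = simulated_annealing(
    start=0,
    value=lambda x: -(x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
)
```

The cooling schedule is the delicate part: cool too fast and the search freezes in a local optimum; too slowly and it wastes time wandering.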

5. CSP (again)

a) Most constrained variable heuristic

b) Most constraining variable heuristic

c) Least constraining value heuristic
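The three CSP ordering heuristics in item 5 can be sketched as selection functions over the current domains and constraint graph; the data structures and the small example are illustrative:

```python
# Sketch of the three CSP ordering heuristics.

def most_constrained_variable(domains, assigned):
    """MRV: pick the unassigned variable with the fewest legal values left."""
    return min((v for v in domains if v not in assigned),
               key=lambda v: len(domains[v]))

def most_constraining_variable(candidates, neighbors, assigned):
    """Degree heuristic (tie-breaker): most constraints on unassigned variables."""
    return max(candidates,
               key=lambda v: sum(1 for n in neighbors[v] if n not in assigned))

def least_constraining_value(var, domains, neighbors, assigned):
    """Order var's values by how few options they rule out for its neighbors."""
    def ruled_out(value):
        return sum(value in domains[n]
                   for n in neighbors[var] if n not in assigned)
    return sorted(domains[var], key=ruled_out)

# Toy CSP state: A is nearly decided, B touches the most variables.
domains = {"A": [1], "B": [1, 2], "C": [1, 2, 3]}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
pick = most_constrained_variable(domains, set())
```

The asymmetry is deliberate: variable ordering is fail-first (tackle the tightest spot early), while value ordering is fail-last (keep the most options open for the rest of the search).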

Questions:

* What are some of the limits or problems with heuristics and, more generally, with informed search?

* What distinguishes iterative improvement from A* type searches?

Chapt. 5 Game Playing

1. Game playing as a subset of search

2. Characteristics of game playing

a) perfect information

b) uncertainty

3. Search algorithms

a) minimax

b) alpha-beta pruning
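Minimax with alpha-beta pruning can be sketched on an explicit game tree, encoded here (illustratively) as nested lists with static evaluation values at the leaves:

```python
# Sketch of minimax with alpha-beta pruning. alpha is the best value
# MAX can guarantee so far; beta is the best MIN can guarantee.
# When alpha >= beta, the remaining children cannot affect the result.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: MIN avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cutoff: MAX avoids this line
                break
        return value

# Small two-ply example: MAX to move at the root, MIN at the next level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
result = alphabeta(tree, True)
```

Pruning never changes the minimax value, only the number of leaves examined; with good move ordering it roughly doubles the searchable depth.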

4. Randomness

a) expectiminimax
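Expectiminimax extends minimax with chance nodes that return the probability-weighted average of their children's values. A minimal sketch, using an illustrative tagged-tuple tree encoding:

```python
# Sketch of expectiminimax over a tree with "max", "min", "chance",
# and "leaf" nodes. Chance nodes average over outcomes instead of
# choosing among them.

def expectiminimax(node):
    kind = node[0]
    if kind == "leaf":
        return node[1]                     # static evaluation value
    if kind == "max":
        return max(expectiminimax(c) for c in node[1])
    if kind == "min":
        return min(expectiminimax(c) for c in node[1])
    if kind == "chance":                   # node[1] is [(probability, child), ...]
        return sum(p * expectiminimax(c) for p, c in node[1])

# MAX chooses between two gambles (a coin flip vs. a skewed lottery).
tree = ("max", [
    ("chance", [(0.5, ("leaf", 2)), (0.5, ("leaf", 4))]),   # expected value 3.0
    ("chance", [(0.25, ("leaf", 0)), (0.75, ("leaf", 8))]), # expected value 6.0
])
value = expectiminimax(tree)
```

One consequence worth noting for the question below: because outcomes are averaged, the *scale* of the evaluation function now matters, not just the ordering it induces, and straightforward alpha-beta-style pruning no longer applies without bounds on leaf values.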

5. Feasibility of game search spaces (parallel with "real problems")

6. Metareasoning

Questions:

* What distinguishes game playing from search in general?

* Why is uncertainty an important aspect of game playing?

* Why has game playing not been emphasized more in the AI community?

* What assumptions are made about search trees during the analysis of game search algorithms?

* What are some of the problems associated with randomness?

Historical observation (to fill the page up): Even before the development of the concept of the computer as we know it (starting with Babbage and the Analytical Engine), the construction of mechanical game-playing devices fascinated us. Babbage envisioned a chess-playing program; Konrad Zuse, who designed one of the first programmable computers, developed ideas (in 1945) for using it to play chess; and Alan Turing wrote a chess-playing program in 1951. This suggests that an interest in mechanical game playing played at least some small role in the development of computational devices from the very beginning. It might be argued that many of the most challenging problems in the evolution of computational technology (both hardware and software) have been driven by game playing. For example, in recent decades bigger and better graphics systems, sound processing, virtual reality, and, to a lesser degree, AI programs for planning strategies have all been strongly influenced by game playing on computers (and, more generally, entertainment).