Content
- Search Overview
    - Formalism of Search
    - Searching Template
    - Critical Properties of Search
- Uninformed Search
    - BFS
    - DFS
    - Depth Limited Search
    - Iterative Deepening Search
    - Uniform-Cost Search
- Informed/Heuristic Search
    - Best First Search
    - A*
Search Overview
Formalism of Search
- State space
- Initial state & goal state
- Actions / successor functions / state-space transitions S(x)
- Costs C(x, a, y)
- Heuristics to guide search
Complex situations
- Actions may lead to multiple possible states
- The current state may not be fully known
- => Probabilistic models
Searching Template
- State annotation
- State + action, parent state, cost, ...
- Frontier: states that have been reached but not yet explored or expanded
- The order in which states are selected from the Frontier (the selection rule) determines the type of search, e.g. BFS, DFS
TreeSearch(Frontier, Successors, Goal):
    If Frontier empty:
        return failure
    Curr = select state from Frontier
    If Goal(Curr):
        return Curr
    Frontier' = (Frontier - {Curr}) ∪ Successors(Curr)
    return TreeSearch(Frontier', Successors, Goal)
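A minimal runnable sketch of this template in Python, under assumed conventions: `successors` maps a state to its neighbor states, `is_goal` tests for the goal, and `select` is the selection rule (all names hypothetical). It iterates instead of recursing but follows the same structure.

```python
# Minimal sketch of the tree-search template (hypothetical helper names:
# `successors`, `is_goal`, and `select` are supplied by the caller).
def tree_search(start, successors, is_goal, select):
    """Generic tree search: `select` picks which frontier entry to expand,
    which is exactly what distinguishes BFS (FIFO) from DFS (LIFO)."""
    frontier = [[start]]                       # frontier holds whole paths
    while frontier:
        path = frontier.pop(select(frontier))  # select state from Frontier
        curr = path[-1]
        if is_goal(curr):
            return path
        for nxt in successors(curr):           # Frontier' = (Frontier - {Curr}) ∪ Successors(Curr)
            frontier.append(path + [nxt])
    return None                                # failure: Frontier exhausted

# Usage on a tiny example graph: index 0 = FIFO = BFS, index -1 = LIFO = DFS.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(tree_search("A", graph.get, lambda s: s == "D", lambda f: 0))   # ['A', 'B', 'D']
print(tree_search("A", graph.get, lambda s: s == "D", lambda f: -1))  # ['A', 'C', 'D']
```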
Critical Properties of Search
- Completeness
- Optimality
- Time complexity
- Space complexity
Uninformed Search
- Fixed rule for selecting next state
- Does not use any domain-specific information about the particular search problem
- e.g. BFS, Uniform-Cost, DFS, Depth-Limited, Iterative-Deepening
Path & Cycle Checking
- Path checking
    - Ensure the expanded state c is not equal to the state reached by any ancestor of c along this path
    - Each path is checked in isolation
- Cycle checking / multiple-path checking
    - Ensure c is not equal to any previously expanded state
    - Space complexity of DFS becomes exponential (all expanded states must be remembered)
    - Non-uniform costs have additional issues
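For concreteness, a small Python sketch (hypothetical names; `successors` maps a state to its neighbor states, as above) showing where each check sits inside a depth-first expansion:

```python
# Sketch of the two checks inside a depth-first expansion (hypothetical names).
def dfs(start, successors, is_goal, cycle_check=False):
    frontier = [(start, (start,))]            # (state, path leading to it)
    expanded = set()                          # only consulted when cycle_check=True
    while frontier:
        state, path = frontier.pop()          # LIFO pop = depth-first
        if is_goal(state):
            return list(path)
        if cycle_check:
            if state in expanded:             # cycle check: expanded anywhere before
                continue
            expanded.add(state)
        for nxt in successors(state):
            if not cycle_check and nxt in path:   # path check: ancestors on this path only
                continue
            frontier.append((nxt, path + (nxt,)))
    return None

g = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": []}
print(dfs("A", g.get, lambda s: s == "D"))                     # path checking
print(dfs("A", g.get, lambda s: s == "D", cycle_check=True))   # cycle checking
```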
BFS
- O(b^(d+1)) space: explored nodes + nodes expanded at the goal level before reaching the goal
- O(b^(d+1)) time: 1 + b + b^2 + ... + b^d + b(b^d - 1)
- Optimal (for uniform step costs)
- Complete
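A BFS sketch under the same assumed successor-function convention, using a FIFO queue plus cycle checking so each state is generated at most once:

```python
# BFS sketch: FIFO frontier plus cycle checking via an `explored` set
# (hypothetical names).
from collections import deque

def bfs(start, successors, is_goal):
    frontier = deque([[start]])        # queue of paths, shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()
        curr = path[-1]
        if is_goal(curr):
            return path                # shallowest goal is found first
        for nxt in successors(curr):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", graph.get, lambda s: s == "D"))   # ['A', 'B', 'D']
```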
DFS
- O(bm) space (one path at a time)
- O(b^m) time, m = length of the longest path in the state space
- Complete for finite-depth, acyclic graphs (or when cycles are pruned)
- Not optimal
Depth Limited Search
DLS(Frontier, Successors, Goal, D):
    CutOffOccurred = FALSE
    While Frontier not empty:
        n = first node from Frontier
        Curr = terminal state of n
        If Goal(Curr):
            return n
        If Depth(n) < D:    // limit the Frontier to depth D
            Frontier = (Frontier - {n}) ∪ Successors(Curr)
        Else:
            Frontier = Frontier - {n}
            CutOffOccurred = TRUE
    If CutOffOccurred:
        return CUTOFF
    return FAIL
Iterative Deepening Search
- Iteratively increase depth limit and perform DLS
- Stop if solution found OR DLS failed without cutting off any nodes
- Not good if many cycles
- O(bd) space
- O(b^d) time: (d+1)b^0 + d*b^1 + (d-1)*b^2 + ... + b^d
- Complete
- Optimal if costs uniform
- For non-uniform costs, iterate on a cost bound instead of a depth bound
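A runnable sketch of DLS plus the iterative-deepening driver (hypothetical names; a CUTOFF sentinel distinguishes "hit the depth limit" from genuine failure, which is what the stopping rule above relies on):

```python
# Depth-limited search + iterative deepening (hypothetical sentinel names).
CUTOFF, FAIL = "cutoff", "fail"

def dls(state, successors, is_goal, limit, path=()):
    path = path + (state,)
    if is_goal(state):
        return list(path)
    if limit == 0:
        return CUTOFF                      # ran out of depth here
    cutoff_occurred = False
    for nxt in successors(state):
        if nxt in path:                    # path checking to prune cycles
            continue
        result = dls(nxt, successors, is_goal, limit - 1, path)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAIL:
            return result                  # found a goal path
    return CUTOFF if cutoff_occurred else FAIL

def ids(start, successors, is_goal, max_depth=50):
    for limit in range(max_depth + 1):
        result = dls(start, successors, is_goal, limit)
        if result != CUTOFF:               # a solution, or FAIL with no cutoff
            return result
    return FAIL

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(ids("A", graph.get, lambda s: s == "D"))   # ['A', 'B', 'D']
```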
Uniform-Cost Search
- Frontier ordered by increasing cost of path
- Nodes rejected or replaced on the Frontier are guaranteed to have more costly paths
- O(b^(C*/ε)) time & space, C* = cost of the optimal solution
- Complete if all transition costs are positive (>= ε > 0)
- Optimal
    - Given: each transition has cost >= ε > 0
    - Lemma 1: If n2 is expanded immediately after n1, then c(n1) <= c(n2)
    - Lemma 2: When node n is expanded, every path in the search space with cost < c(n) has already been expanded
    - Lemma 3: The first time UCS expands a node n terminating at state S, it has found the minimal cost path to S
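A UCS sketch using a priority queue keyed by path cost g, assuming `successors(s)` yields `(cost, next_state)` pairs with cost >= ε > 0 (names hypothetical):

```python
# Uniform-cost search sketch (hypothetical names).
import heapq

def ucs(start, successors, is_goal):
    frontier = [(0, start, [start])]       # (path cost g, state, path)
    best_g = {}                            # cheapest cost expanded per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path                 # first expansion = minimal cost path (Lemma 3)
        if state in best_g and best_g[state] <= g:
            continue                       # a cheaper path to state was already expanded
        best_g[state] = g
        for cost, nxt in successors(state):
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

graph = {"S": [(1, "A"), (4, "G")], "A": [(2, "G")], "G": []}
print(ucs("S", graph.get, lambda s: s == "G"))   # (3, ['S', 'A', 'G'])
```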
Informed/Heuristic Search
- h(n1) < h(n2) if we guess it is cheaper to get to the goal from n1 than from n2
- h(n) = 0 for goal nodes n
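For example, on a grid world where states are (row, col) cells and each move costs 1 (an assumed setup; GOAL coordinates are hypothetical), Manhattan distance is a natural h: it is 0 exactly at the goal and never over-estimates the remaining cost.

```python
# Example heuristic for a unit-cost grid world (hypothetical GOAL).
GOAL = (3, 4)

def h(state):
    r, c = state
    return abs(r - GOAL[0]) + abs(c - GOAL[1])   # Manhattan distance

print(h((0, 0)), h(GOAL))   # 7 0
```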
Best First Search
- Only use h(n) to guide the search
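A greedy best-first sketch under the same assumed conventions: the frontier is a priority queue ordered by h(n) alone, so it can reach a goal quickly but gives no optimality guarantee.

```python
# Greedy best-first search sketch (hypothetical names).
import heapq

def greedy_best_first(start, successors, is_goal, h):
    frontier = [(h(start), start, [start])]   # ordered by h only
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in explored:                 # cycle checking
            continue
        explored.add(state)
        for nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

graph = {"S": ["A", "G"], "A": ["G"], "G": []}
h = {"S": 2, "A": 1, "G": 0}
print(greedy_best_first("S", graph.get, lambda s: s == "G", h.get))  # ['S', 'G']
```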
A*
- f(n) = g(n) + h(n), g(n) = cost of the path to n (a runnable sketch appears at the end of this section)
- Admissible h(n)
    - Let h*(n) be the cost of an optimal path from n to a goal node; an admissible heuristic satisfies h(n) <= h*(n)
    - Never over-estimates the cost to reach the goal, so it won't miss any promising paths
    - Implies optimality
- Monotonic/Consistent h(n)
    - Local admissibility: h(n1) <= c(n1, a, n2) + h(n2)
    - Consistency => Admissibility
        - Assume h(n1) <= c(n1, a, n2) + h(n2); prove h(n) <= h*(n)
        - If there is no path from n to the goal, then h*(n) = inf and the claim holds trivially
        - Otherwise let n -> n1 -> ... -> n* be an optimal path from n to the goal (cost: h*(n); cost of the sub-path from ni: h*(ni))
        - Base case: n = n*, so h(n) = 0 <= h*(n) = 0
        - Induction: assume h(n1) <= h*(n1); then h(n) <= c(n, a1, n1) + h(n1) <= c(n, a1, n1) + h*(n1) = h*(n)
    - f(n) is non-decreasing along any path: f(n1) = g(n1) + h(n1) <= g(n1) + c(n1, a, n2) + h(n2) = f(n2)
    - f(n1) <= f(n2) if n2 is expanded after n1
    - If n has been expanded, every path with a lower f-value has already been expanded
    - The first time A* expands a node, it has found the min cost path to that node
- Admissible & monotonic h(n) => Optimal
    - Same time & space bounds as UCS
    - Number of nodes expanded no larger than with UCS
    - Complete
        - Finite # of paths have f-value < c(SolutionPath), since each action has cost >= ε > 0
    - Optimal
    - Cycle checking: keep only the first path to a state, rejecting all subsequent paths
        - Same time & space bounds as UCS
- Admissibility without monotonicity
    - Optimal
    - But the path to a node is not guaranteed to be optimal at the time the node is put on the queue
    - Cycle checking: keep a revisited node if its new path costs less than before
        - Optimal
- A* expands less of the state space, but is still constrained by speed or memory
    - IDA*: reduce memory requirements
        - Cutoff: a bound curBound on f-value
        - smallestNotExplored: smallest f-value of the nodes discarded in a round; it becomes the bound for the next round
        - Expand all nodes with f-value equal to the f-limit
    - Weighted A*
        - f(n) = (1 - ε) * g(n) + ε * h(n)
    - Anytime A*
        - Find the best path for a given ε
        - Reduce ε and replan
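As referenced above, a minimal A* sketch (assuming, as in the UCS sketch, that `successors(s)` yields `(cost, next_state)` pairs and that h is admissible); cycle checking keeps only the cheapest g found so far for each state.

```python
# A* sketch: f(n) = g(n) + h(n), priority queue ordered by f (hypothetical names).
import heapq

def a_star(start, successors, is_goal, h):
    frontier = [(h(start), 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}                           # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path                        # with monotone h: optimal
        if g > best_g.get(state, float("inf")):
            continue                              # stale entry; cheaper path already known
        for cost, nxt in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Usage on a tiny graph with an admissible, consistent h.
graph = {"S": [(1, "A"), (4, "G")], "A": [(2, "G")], "G": []}
h = {"S": 2, "A": 1, "G": 0}.get
print(a_star("S", graph.get, lambda s: s == "G", h))   # (3, ['S', 'A', 'G'])
```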