Informed Search – A Detailed Beginner’s Guide

Building on our uninformed search knowledge, let’s dive into informed search – where we use problem-specific knowledge to guide our search toward the goal!


PART 1: THE PROBLEM WITH UNINFORMED SEARCH

Recall: Uniform Cost Search (UCS)

Remember how UCS works:

  • Expands nodes in order of increasing path cost from start
  • Creates “cost contours” – like rings spreading out from start
  • Complete and optimal but has a major flaw…

The Problem:

Start ●━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━● Goal
      (UCS expands in ALL directions equally)

      ↑         ↑         ↑         ↑
    Expands  Expands  Expands  Expands
    north    south    east     west
    equally! equally! equally! equally!

UCS is like dropping a stone in a pond – the ripples go out in all directions equally. But if we know where the goal is, why explore away from it?

Real-world analogy:
You’re in New York City, trying to get to Los Angeles. UCS would explore equally toward Boston, Miami, Chicago, AND Los Angeles. That’s wasteful!

What we want:

Start ●━━━━━━━━━━━━━━━━━━━━━━━━━━→● Goal
      (Focus expansion toward goal!)

      Mostly east expansion
      Little west expansion

PART 2: HEURISTIC FUNCTIONS – THE KEY TO INFORMED SEARCH

What is a Heuristic?

A heuristic function h(n) estimates the cost from node n to the goal.

Think of it as a “guess” or “estimate” of how far we are from the destination.

Formal Definition:

h(n) = estimated cost from node n to the nearest goal

Real-World Examples:

  1. Road Navigation:
  • Straight-line distance (as the crow flies) from current city to destination
  • Even though roads curve, it’s a useful estimate
  2. 8-Puzzle:
  • Misplaced tiles count: How many tiles are in wrong position?
  • Manhattan distance: Sum of how many rows + columns each tile is from correct spot
  3. Chess:
  • Material advantage (piece values)
  • Board control
  • King safety
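For concreteness, here is a minimal sketch of the road-navigation heuristic: straight-line distance computed from (x, y) coordinates. The coordinates below are made-up illustration values, not real geography:

```python
import math

# Hypothetical (x, y) positions in kilometres - illustration only
coords = {
    'NYC':     (0.0, 0.0),
    'Chicago': (-1145.0, 230.0),
    'LA':      (-3940.0, -390.0),
}

def straight_line(city, goal='LA'):
    """h(n): straight-line ("as the crow flies") distance to the goal."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(round(straight_line('NYC')))      # crow-flies distance NYC -> LA
print(round(straight_line('Chicago')))  # smaller - Chicago is closer to LA
```

Because real roads are never shorter than the straight line, this estimate never overestimates – a property (admissibility) that becomes important in Part 6.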

Why “Heuristic”?

The word comes from Greek “heuriskein” meaning “to find” or “to discover”. It’s a practical method that may not be perfect but helps us find solutions faster.


PART 3: GREEDY BEST-FIRST SEARCH

The Idea

“Always move to the node that looks closest to the goal!”

Algorithm:

  1. Start at initial node
  2. Look at all successors
  3. Pick the one with smallest h(n) (lowest estimated distance to goal)
  4. Repeat

Visual Example

Start (h=10)
   ↓
Node A (h=8) ← Looks promising!
   ↓
Node B (h=5) ← Even better!
   ↓
Node C (h=3) ← Almost there!
   ↓
Goal (h=0) ← Found it!

Implementation

from queue import PriorityQueue

def greedy_search(initial_state, goal_test, successors, heuristic):
    fringe = PriorityQueue()  # Ordered by h(n) only
    fringe.put((heuristic(initial_state), initial_state))
    explored = set()

    while not fringe.empty():
        h, state = fringe.get()  # Get node with smallest h

        if goal_test(state):
            return state  # A solution - but not necessarily the cheapest!

        if state not in explored:
            explored.add(state)
            for next_state in successors(state):
                if next_state not in explored:
                    # Note: on ties PriorityQueue compares the states themselves,
                    # so states must be orderable (or add a tie-breaking counter)
                    fringe.put((heuristic(next_state), next_state))

    return None  # No solution

What Can Go Wrong? (Critical!)

Problem 1: Getting Trapped

Start (h=12) → A (h=8) → B (h=4) → C (h=2) ← Looks great... dead end!
     └→ E (h=9) → F (h=5) → Goal (h=0) ← The real path was here!

Greedy search chased the shrinking h-values through A, B, and C – but C is a dead end. Only after backtracking all the way to Start does it try the less promising E, which actually leads to the goal.

Problem 2: Loops

Start → A (h=5) → B (h=3) → C (h=1) ← h=1 is promising
                  ↑            ↓
                  └─── D (h=2) ┘      (Goal, h=0, lies elsewhere entirely!)

Without cycle checking, it might go A→B→C→D→B→C→D forever!

Problem 3: Non-Optimal Paths

Path 1: Start → A (step cost 10, h=1) → Goal (total cost 11)
Path 2: Start → B (step cost 1,  h=3) → Goal (total cost 2)

Greedy sees h=1 vs h=3 and chooses Path 1 – even though Path 2 is far cheaper! It never looks at path cost (g) at all.
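To see this failure concretely, here is a self-contained sketch (the toy graph and heuristic values are invented for illustration) in which greedy best-first follows the smaller h into the expensive path:

```python
import heapq

# Toy graph: state -> [(successor, step cost)]
graph = {
    'S': [('A', 10), ('B', 1)],
    'A': [('G', 1)],
    'B': [('G', 1)],
    'G': [],
}
# Invented heuristic: A "looks" closer to the goal than B
h = {'S': 2, 'A': 0.5, 'B': 1, 'G': 0}

def greedy_path(start, goal):
    """Greedy best-first: the frontier is ordered by h alone; g is ignored."""
    frontier = [(h[start], start, [start], 0)]  # (h, state, path, cost so far)
    seen = set()
    while frontier:
        _, state, path, cost = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in seen:
            continue
        seen.add(state)
        for nxt, step in graph[state]:
            if nxt not in seen:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt], cost + step))
    return None, None

print(greedy_path('S', 'G'))  # (['S', 'A', 'G'], 11) - ignores the cost-2 path S→B→G
```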

Greedy Search Properties

Property         | Value
-----------------|----------------------------------------------------------
Complete?        | Yes* (with cycle checking in finite spaces)
Optimal?         | NO! (finds the path that "looks good", not the cheapest)
Time complexity  | O(b^m) – worst case explores everything
Space complexity | O(b^m) – stores all nodes

*Like DFS, can get stuck in infinite paths without cycle checking


PART 4: A* SEARCH – THE BEST OF BOTH WORLDS

The Brilliant Insight

Combine:

  • UCS: Uses actual cost from start (g(n))
  • Greedy: Uses estimated cost to goal (h(n))

A* evaluation function:

f(n) = g(n) + h(n)
where:
- g(n) = actual cost from start to node n
- h(n) = estimated cost from n to goal
- f(n) = estimated total cost of path through n

Why This Works

  • g(n) ensures we don’t ignore cheap paths
  • h(n) guides us toward the goal
  • f(n) gives us the best of both!

Visual Example

Step 1: Start
S (g=0, h=6, f=6)

Step 2: Expand S's neighbors
A (g=1, h=5, f=6) ← f=6
B (g=5, h=2, f=7) ← f=7
C (g=2, h=5, f=7) ← f=7

Priority queue: [A(f=6), B(f=7), C(f=7)]

Step 3: Expand A (lowest f)
A (expanded)
├─ D (g=2, h=4, f=6) ← f=6
└─ E (g=3, h=3, f=6) ← f=6

Now multiple nodes have f=6...

Implementation

from queue import PriorityQueue

def a_star_search(initial_state, goal_test, successors, heuristic):
    fringe = PriorityQueue()  # Ordered by f(n) = g(n) + h(n)
    g_score = {initial_state: 0}
    fringe.put((heuristic(initial_state), initial_state))
    explored = set()

    while not fringe.empty():
        f, state = fringe.get()

        if goal_test(state):
            return g_score[state]  # Optimal cost - we stop on DEQUEUE (Part 5)

        if state not in explored:
            explored.add(state)

            for next_state, step_cost in successors(state):
                tentative_g = g_score[state] + step_cost

                # Only (re-)queue if this is the cheapest path found so far
                if tentative_g < g_score.get(next_state, float('inf')):
                    g_score[next_state] = tentative_g
                    f_next = tentative_g + heuristic(next_state)
                    fringe.put((f_next, next_state))

    return None  # No solution
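Run on the same kind of toy graph that fooled greedy search in Part 3 (graph and h values invented, with h admissible), A* returns the cheap path because g keeps the expensive one honest. A compact heapq-based sketch:

```python
import heapq

graph = {
    'S': [('A', 10), ('B', 1)],
    'A': [('G', 1)],
    'B': [('G', 1)],
    'G': [],
}
h = {'S': 2, 'A': 0.5, 'B': 1, 'G': 0}  # admissible: never above the true cost

def a_star_path(start, goal):
    frontier = [(h[start], 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g  # stop on DEQUEUE (see Part 5)
        if g > best_g.get(state, float('inf')):
            continue  # stale queue entry; a cheaper path was found later
        for nxt, step in graph[state]:
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, None

print(a_star_path('S', 'G'))  # (['S', 'B', 'G'], 2) - the optimal path
```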

PART 5: WHEN SHOULD A* STOP?

Critical Rule: Stop when you DEQUEUE a goal, not when you ENQUEUE one!

Why? Look at this example:

Start → A (g=2, h=3, f=5) → Goal (g=5, h=0, f=5)
   └→ B (g=1, h=4, f=5) → Goal (g=6, h=0, f=6)

A and B both have f=5, so suppose B happens to be expanded first:
- Goal is generated through B with total cost 6
- The cheaper path through A (cost 5) hasn't been found yet

If we stopped the moment Goal entered the queue (via B, f=6),
we'd return the cost-6 path and miss the cheaper one through A!

The correct behavior:

  1. Expand B, generate Goal (f=6) → add to queue
  2. Expand A, generate Goal (f=5) → add to queue
  3. Dequeue: get Goal through A (f=5) → this is optimal because f=5 < f=6
  4. STOP
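A small experiment makes the difference concrete (toy graph and heuristic values invented, heuristic admissible): stopping at enqueue time returns the wrong cost.

```python
import heapq

graph = {'S': [('A', 2), ('B', 1)], 'A': [('G', 3)], 'B': [('G', 5)], 'G': []}
h = {'S': 5, 'A': 3, 'B': 3.9, 'G': 0}  # admissible; B gets expanded first

def a_star_cost(stop_on_enqueue):
    frontier = [(h['S'], 0, 'S')]  # (f, g, state)
    best_g = {'S': 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == 'G':
            return g  # correct: stop when a goal is DEQUEUED
        for nxt, step in graph[state]:
            g2 = g + step
            if nxt == 'G' and stop_on_enqueue:
                return g2  # wrong: stop when a goal is first GENERATED
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt))
    return None

print(a_star_cost(stop_on_enqueue=True))   # 6 - first goal generated (via B)
print(a_star_cost(stop_on_enqueue=False))  # 5 - optimal (via A)
```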

PART 6: ADMISSIBLE HEURISTICS – THE SECRET TO OPTIMALITY

Definition

A heuristic h(n) is admissible if it NEVER OVERESTIMATES the true cost to reach the goal:

h(n) ≤ h*(n) for all n
where h*(n) = true minimum cost from n to goal

Think of it as optimistic – it always thinks we’re closer than we really are (or exactly right).

Examples

Good (Admissible):

  • Straight-line distance on a road map (roads are never shorter than the straight line)
  • Manhattan distance in 8-puzzle (tiles can’t teleport)
  • Number of misplaced tiles (actual moves needed ≥ misplaced count)

Bad (Inadmissible):

  • “I think it’s 5 miles” when it’s really 10 miles
  • “This puzzle can be solved in 2 moves” when it takes 4
  • Any heuristic that overestimates

Why Admissibility Matters

Think of it this way:

  • If your heuristic is optimistic (admissible), A* will be cautious
  • It won’t discard a path just because it looks expensive now
  • It guarantees we’ll find the optimal solution

If heuristic overestimates:

  • A* might think “this path looks too expensive, let’s try another”
  • Could discard the optimal path prematurely!
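The danger is easy to reproduce. Below, the same A* sketch is run twice on an invented toy graph – once with an admissible heuristic and once with one that wildly overestimates from B:

```python
import heapq

graph = {'S': [('A', 10), ('B', 1)], 'A': [('G', 1)], 'B': [('G', 1)], 'G': []}

def a_star_cost(h):
    frontier = [(h['S'], 0, 'S')]  # (f, g, state)
    best_g = {'S': 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == 'G':
            return g
        for nxt, step in graph[state]:
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt))
    return None

admissible   = {'S': 2, 'A': 1, 'B': 1, 'G': 0}
overestimate = {'S': 2, 'A': 1, 'B': 100, 'G': 0}  # h(B)=100, true cost is 1

print(a_star_cost(admissible))    # 2  - optimal path S→B→G
print(a_star_cost(overestimate))  # 11 - B "looks too expensive", optimum lost
```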

PART 7: PROVING A* IS OPTIMAL

The Proof (Step by Step)

Let’s prove that with an admissible heuristic, A* always finds the optimal solution.

Setup:

  • Let A be an optimal goal node (lowest cost)
  • Let B be a suboptimal goal node (higher cost)
  • Let n be some unexpanded node on the optimal path to A

Claim: A will be expanded before B

Proof:

  1. First, compare f(n) to f(A):
   f(n) = g(n) + h(n) ≤ g(n) + h*(n) = true cost through n to goal

Since h is admissible, h(n) ≤ h*(n)

  2. The true cost through n to goal is exactly f(A):
   true cost through n = g(n) + h*(n) = cost of optimal path = f(A)

Because n is on the optimal path to A

  3. Therefore:
   f(n) ≤ f(A)
  4. Compare f(A) to f(B):
   f(A) = g(A) + h(A) = true optimal cost + 0 = true optimal cost
   f(B) = g(B) + h(B) = true suboptimal cost + 0 = true suboptimal cost

   Since B is suboptimal: true suboptimal cost > true optimal cost
   Therefore: f(A) < f(B)
  5. Chain it together:
   f(n) ≤ f(A) < f(B)
   So f(n) < f(B)
  6. Conclusion:
  • Node n (on optimal path) has smaller f than B
  • So n will be popped from priority queue before B
  • This applies to ALL nodes on optimal path
  • Eventually A itself will be popped before B
  • Therefore A* finds optimal solution first!

PART 8: CONSISTENCY – A STRONGER PROPERTY

The Problem

Remember how A* expands in order of increasing f? But what if:

Node n (f=10) → child n' (f=9)

This breaks the “increasing f” property! We need to prevent this.

Definition of Consistency (Monotonicity)

A heuristic is consistent if for every node n and successor n’:

h(n) ≤ cost(n, n') + h(n')

In English: The heuristic estimate from n can’t decrease by more than the actual step cost.
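Since the definition quantifies over every edge, checking it on a small explicit graph is a one-liner. A sketch (graph and h values invented for illustration):

```python
def is_consistent(graph, h):
    """True iff h(n) <= cost(n, n') + h(n') holds for every edge (n, n')."""
    return all(h[n] <= cost + h[n2]
               for n, edges in graph.items()
               for n2, cost in edges)

graph = {'S': [('A', 2), ('B', 1)], 'A': [('G', 3)], 'B': [('G', 5)], 'G': []}

print(is_consistent(graph, {'S': 5, 'A': 3, 'B': 4, 'G': 0}))  # True
print(is_consistent(graph, {'S': 5, 'A': 1, 'B': 4, 'G': 0}))  # False: S→A drops h by 4 > cost 2
```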

Triangle Inequality Analogy

Think of a triangle:

      n
     / \
    /   \
c  /     \ h(n)
  /       \
 /         \
n' - - - - - Goal
    h(n')

The direct estimate h(n) shouldn’t be more than going through n’ (cost + h(n’)).

Why Consistency Matters

  1. Ensures f is non-decreasing along paths
   f(n') = g(n') + h(n')
          = g(n) + cost(n,n') + h(n')
          ≥ g(n) + h(n)        (by consistency)
          = f(n)
  2. With consistency:
  • First time we pop a node, we’ve found optimal path to it
  • Don’t need to revisit states (huge efficiency gain!)
  • Expansion order is truly by increasing f
  3. Relationship to Admissibility:
  • All consistent heuristics are admissible
  • But admissible heuristics may not be consistent
  • Most natural admissible heuristics ARE consistent

PART 9: A* CONTOURS – VISUALIZING THE SEARCH

What Are Contours?

Think of f-values as elevation on a map:

f=10:  ○────○────○
        \   / \   /
f=12:    ○──────○
        /   \  /   \
f=15:  ○────○────○

A* expands in order of increasing f-value, creating “f-contours”.

UCS vs A* Contours

Uniform Cost Search:

    N  N  N
   N  N  N  N
  N  N  S  N  N
   N  N  N  N
    N  N  N

Expands in all directions equally (g-contours)

A* Search:

        N
       N N
      N  S  G
       N N
        N

Expands toward goal, but hedges bets (f-contours)

What This Means

  • A* focuses expansion toward goal
  • But maintains optimality by considering alternative paths
  • The better the heuristic, the more focused the expansion

PART 10: HEURISTIC DESIGN – THE ART OF A*

The 8-Puzzle Example

State: 3×3 grid with tiles 1-8 and empty space

Heuristic 1: Misplaced Tiles

Start:       Goal:
1 2 3        1 2 3
4 5 6        4 5 6
7 8 _        _ 7 8

Count how many tiles are in the wrong position:
- Tiles 1–6 are already in place
- Tiles 7 and 8 are not
- The blank is NOT counted (counting it can make the heuristic overestimate)

h = 2 misplaced tiles

Is it admissible? YES! Each misplaced tile needs at least 1 move to fix.

Heuristic 2: Manhattan Distance

For each tile (the blank is not counted), add up how many rows + columns it is from its correct position:

Tile 7: currently at (3,1), should be at (3,2) → distance 1
Tile 8: currently at (3,2), should be at (3,3) → distance 1

Total Manhattan distance = 2

Is it admissible? YES! Each tile needs at least Manhattan distance moves.
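Both heuristics can be sketched on a flat board encoding – a tuple of 9 values read row by row, with 0 standing for the blank (this encoding is the sketch's own convention, not a standard API):

```python
def misplaced(state, goal):
    """Count tiles out of place; the blank (0) is not counted."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """Sum of row + column distances of each tile (blank excluded)."""
    where = {v: divmod(i, 3) for i, v in enumerate(goal)}  # value -> (row, col)
    total = 0
    for i, v in enumerate(state):
        if v != 0:
            r, c = divmod(i, 3)
            gr, gc = where[v]
            total += abs(r - gr) + abs(c - gc)
    return total

# The example boards from above
start = (1, 2, 3, 4, 5, 6, 7, 8, 0)
goal  = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(misplaced(start, goal), manhattan(start, goal))  # 2 2
```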

Heuristic 3: Actual Cost

What if we use the REAL remaining cost as heuristic?

  • Perfectly accurate
  • Admissible? YES (h = h*)
  • But… we’d have to solve the problem to get it!
  • Useless in practice – too expensive to compute

The Trade-off

Heuristic Quality vs. Computation Cost
        ↑
Quality │                    ● Actual cost
   of   │                 ● (too expensive!)
Estimate│              ● Good heuristic
        │           ● (balance point)
        │        ● Simple heuristic
        │     ● Zero heuristic (UCS)
        └──────────────────────────→
                 Computation Cost

PART 11: DOMINANCE – COMPARING HEURISTICS

Definition

Heuristic h₁ dominates h₂ if for all nodes n:

h₁(n) ≥ h₂(n)

The Semi-Lattice of Heuristics

              Exact Heuristic (h*)
                     ↑
           ┌─────────┴─────────┐
           │                   │
      Manhattan           Max(h1, h2)
      Distance                 ↑
           │              ┌────┴────┐
           │              │         │
      Misplaced        h1         h2
        Tiles
           ↑
           │
      Zero Heuristic (h=0) ← This is just UCS!

Key Points:

  • Higher (but still admissible) heuristics are better
  • They prune more nodes
  • They’re closer to the true cost
  • MAX of admissible heuristics is admissible!
  • Zero heuristic = UCS
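The "MAX of admissible heuristics is admissible" trick is one line of code. A sketch with two made-up admissible lower bounds on moves remaining (the functions are purely illustrative):

```python
def h_max(*heuristics):
    """Pointwise max of admissible heuristics: still admissible,
    and it dominates each of its inputs."""
    return lambda state: max(h(state) for h in heuristics)

# Two hypothetical admissible estimates of "moves remaining" for a state n
h1 = lambda n: n // 2
h2 = lambda n: 5 if n > 5 else 0

h = h_max(h1, h2)
print(h(8))  # 5 - h2 gives the tighter bound here
print(h(4))  # 2 - h1 gives the tighter bound here
```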

PART 12: A* PROPERTIES SUMMARY

With Admissible Heuristic:

Property         | Value
-----------------|--------------------------------------
Complete?        | Yes (in finite graphs)
Optimal?         | Yes (finds the least-cost solution)
Time complexity  | O(b^d) in the worst case
Space complexity | O(b^d) – stores all generated nodes

With Consistent Heuristic (Even Better):

Property             | Value
---------------------|------------------------------------------------
Complete?            | Yes
Optimal?             | Yes
Optimally efficient? | Yes – optimally efficient for the given heuristic
Revisiting states?   | Never needed!
Expansion order      | Non-decreasing f along any path

PART 13: PRACTICAL CONSIDERATIONS

When Admissibility Isn’t Everything

Sometimes in real applications, we might sacrifice optimality for speed:

Example:

# Inadmissible but fast heuristic
def fast_heuristic(state):
    return 2 * manhattan_distance(state)  # Overestimates!

# Search faster, but might not find optimal path
# Often "good enough" in practice

Common Hack: Weighted A*

f(n) = g(n) + w * h(n)

  • w > 1: More greedy – faster, but potentially suboptimal
  • w < 1: More cautious – closer to UCS
  • w = 1: Standard A*
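A sketch of the w knob in action on an invented toy graph (h admissible), showing how a large w trades optimality for greediness:

```python
import heapq

graph = {'S': [('A', 10), ('B', 1)], 'A': [('G', 1)], 'B': [('G', 1)], 'G': []}
h = {'S': 2, 'A': 0.5, 'B': 1, 'G': 0}

def weighted_a_star(w):
    """A* with f(n) = g(n) + w * h(n)."""
    frontier = [(w * h['S'], 0, 'S', ['S'])]  # (f, g, state, path)
    best_g = {'S': 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == 'G':
            return path, g
        for nxt, step in graph[state]:
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + w * h[nxt], g2, nxt, path + [nxt]))
    return None, None

print(weighted_a_star(1))   # (['S', 'B', 'G'], 2)  - standard A*, optimal
print(weighted_a_star(20))  # (['S', 'A', 'G'], 11) - acts like greedy, suboptimal
```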

Real-World Applications

  1. Route Planning (Google Maps)
  • Uses A* with traffic data
  • Heuristic: straight-line distance × traffic factor
  • Often weighted for speed
  2. Video Games (Pathfinding)
  • NPCs need to navigate quickly
  • May use simplified A* with grid-based heuristics
  • Often trade optimality for real-time performance
  3. Robot Motion Planning
  • High-dimensional configuration space
  • Need fast approximations
  • May use inadmissible heuristics for speed

PART 14: COMPLETE COMPARISON TABLE

Algorithm   | Complete? | Optimal? | Heuristic       | Space  | Typical Use
------------|-----------|----------|-----------------|--------|-----------------
UCS         | Yes       | Yes      | None            | O(b^d) | Uniform costs
Greedy      | No*       | No       | h(n)            | O(b^m) | Quick & dirty
A*          | Yes       | Yes      | h(n) admissible | O(b^d) | Optimal planning
Weighted A* | Yes       | No       | w×h(n)          | O(b^d) | Real-time games

*Complete with cycle checking in finite state spaces

PART 15: KEY TAKEAWAYS FOR YOUR MIDTERM

  1. Heuristics are estimates of distance to goal
  2. Greedy search uses only heuristics – fast but not optimal
  3. A* combines actual cost (g) with heuristic (h)
  4. Admissible heuristic never overestimates – guarantees optimality
  5. Consistent heuristic ensures f increases along paths
  6. Stop condition: Dequeue goal, not enqueue
  7. Better heuristics = faster search (if cheap to compute)
  8. Trade-off: Heuristic quality vs. computation cost

Memory Aid:

  • A* = Actual cost + Admissible estimate
  • f(n) = g(ot here) + h(ope to goal)
  • Admissible = Always Downward (never up) estimate
  • Consistent = Cost can’t Cut heuristic too much

PRACTICE PROBLEMS

Problem 1

Is straight-line distance an admissible heuristic for road navigation?

Answer: YES – roads are never shorter than straight-line distance.

Problem 2

You have h₁ (misplaced tiles) and h₂ (Manhattan distance) for 8-puzzle. Which is better?

Answer: Manhattan distance dominates misplaced tiles (always ≥), so it’s better.

Problem 3

In A*, you generate a goal node with f=10. Should you stop?

Answer: NO! Only stop when you DEQUEUE it. There might be another path with f=9 still in queue.

Problem 4

Heuristic h(n) sometimes overestimates. What happens?

Answer: A* might discard optimal path, thinking it’s too expensive. Result may be suboptimal.

Problem 5

When is greedy search better than A*?

Answer: When you need ANY solution fast and don’t care about optimality. Also when heuristic is very accurate.


The key to informed search is understanding why A* works (the proof) and how to design good heuristics. Practice tracing A* on small graphs and calculating f, g, and h values.
