When Algorithms Fail

Understanding the limits

Every Algorithm Has Assumptions

We have explored several pathfinding algorithms, each elegant in its own way. BFS ripples outward like waves. Dijkstra prioritizes the cheapest options. A* combines cost and heuristic to search intelligently. But here is a truth that textbooks sometimes gloss over: every algorithm makes assumptions about the problem it solves.

When those assumptions hold, the algorithm works beautifully. When they don't, it can fail—sometimes silently, giving you a wrong answer rather than an error.

Let's examine the core assumptions:

  • BFS assumes all edges have equal cost. It counts steps, not weights.
  • Dijkstra assumes all edge weights are non-negative. It relies on the fact that once a node is finalized, no cheaper path to it can exist.
  • A* assumes the heuristic is admissible—it never overestimates the true distance. This guarantees optimal paths.
  • Greedy Best-First Search makes no optimality guarantee. It just chases the heuristic, which can lead it astray.

Understanding these assumptions is not academic pedantry. It's the difference between code that works and code that occasionally produces wrong answers in production—bugs that are notoriously hard to track down.

Algorithm Assumptions at a Glance

BFS

Assumes: Unweighted edges (all costs equal)
Finds shortest path in unweighted graphs
On weighted graphs, may find expensive paths

Dijkstra

Assumes: Non-negative edge weights
Finds shortest path in weighted graphs
With negative weights, may miss shorter paths

A*

Assumes: Admissible heuristic (never overestimates)
Finds optimal path efficiently
With overestimating heuristic, may find suboptimal paths

Greedy Best-First

Assumes: None strictly required
Fast exploration toward goal
May explore unnecessary nodes, no optimality guarantee
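To make the BFS row concrete, here is a minimal sketch (the three-node graph is a made-up example): BFS returns the fewest-hop path regardless of weights, so it happily picks an expensive single hop over a cheap two-hop route.

```python
from collections import deque

def bfs_path(graph, start, goal):
    # BFS counts hops, not weights: it returns the fewest-edge path.
    parent = {start: None}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v, _weight in graph.get(u, []):  # the weight is ignored entirely
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

# Direct hop A -> C costs 10; the two-hop route A -> B -> C costs only 2.
graph = {"A": [("C", 10), ("B", 1)], "B": [("C", 1)]}
path = bfs_path(graph, "A", "C")
cost = sum(w for u, v in zip(path, path[1:]) for n, w in graph[u] if n == v)
print(path, cost)  # ['A', 'C'] 10
```

No error is raised; BFS simply answers a different question (fewest edges) than the one you asked (lowest cost).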

Negative Edge Weights

Why would an edge ever have a negative weight? In the real world, negative costs arise more often than you might expect:

  • Financial systems: Rewards, rebates, or discounts that reduce the total cost
  • Game mechanics: Power-ups that give you resources back
  • Energy systems: Regeneration zones that restore energy

Consider a graph where most edges cost positive amounts, but one edge represents a shortcut with a reward—a negative cost. The intuition says: "Take the reward! It makes the path cheaper."

Dijkstra, however, can miss this. Its greedy nature commits to the "best so far" distance for each node and never looks back. Once it marks a node as finalized, it assumes no shorter path exists. With negative edges, that assumption breaks.

Interactive: Dijkstra Fails with Negative Weights

Start at A with distance 0. All other nodes have infinite distance.

The red edge has a negative weight (-3)

Dijkstra's greedy approach commits to the "best so far" distance and never looks back. With negative edges, a longer path can become shorter after traversing the negative edge—but Dijkstra has already moved on.

The visualization shows Dijkstra making a greedy choice early on, then never reconsidering when the negative edge could have led to a cheaper overall path. The algorithm doesn't crash or raise an error—it just returns the wrong answer.
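The same failure is easy to reproduce in code. Below is a standard textbook Dijkstra with a finalized set, run not on the graph from the visualization but on an illustrative three-node stand-in where the detour through a negative edge is the true shortest path:

```python
import heapq

def dijkstra(graph, start):
    # Textbook Dijkstra: each node is finalized exactly once.
    dist = {start: 0}
    finalized = set()
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in finalized:
            continue
        finalized.add(u)  # assumes no cheaper path to u can appear later
        for v, w in graph.get(u, []):
            if v not in finalized and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Direct edge A -> B costs 1, but the detour A -> C -> B costs 2 + (-2) = 0.
graph = {"A": [("B", 1), ("C", 2)], "C": [("B", -2)]}
print(dijkstra(graph, "A")["B"])  # prints 1, though the true shortest distance is 0
```

B is finalized at distance 1 before C is ever expanded, so the improvement through the negative edge is silently discarded: a wrong answer, no error.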

The Fix: Bellman-Ford

If you need to handle negative edge weights, Bellman-Ford is the classic solution. Unlike Dijkstra, Bellman-Ford relaxes all edges repeatedly (up to V − 1 times, where V is the number of vertices). This allows it to propagate improvements even through negative edges.

Bellman-Ford is slower—O(V · E) compared to Dijkstra's O((V + E) log V)—but correctness trumps speed when negative weights are possible. It can also detect negative cycles (cycles where the total weight is negative, making the shortest path undefined since you could loop forever).
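A minimal Bellman-Ford sketch, run on an illustrative graph where the detour through a negative edge is the true shortest path:

```python
def bellman_ford(edges, num_vertices, start):
    # Relax every edge V - 1 times; improvements propagate through negative edges.
    INF = float("inf")
    dist = {start: 0}
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist.get(u, INF) + w < dist.get(v, INF):
                dist[v] = dist[u] + w
    # One extra pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist.get(u, INF) + w < dist.get(v, INF):
            raise ValueError("negative cycle reachable from start")
    return dist

# Direct edge A -> B costs 1; the detour A -> C -> B costs 2 + (-2) = 0.
edges = [("A", "B", 1), ("A", "C", 2), ("C", "B", -2)]
print(bellman_ford(edges, 3, "A")["B"])  # prints 0, the true shortest distance
```

Because nothing is ever finalized, the relaxation in a later pass is free to improve on an earlier commitment, which is exactly what Dijkstra forbids itself.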

Overestimating Heuristics

A* is powerful because it combines the completeness of Dijkstra with the speed of informed search. But its optimality guarantee depends entirely on one property: the heuristic must be admissible.

An admissible heuristic never overestimates the true remaining distance. Manhattan distance for 4-directional grids is admissible because you cannot reach the goal in fewer steps than the Manhattan distance. Euclidean distance is admissible because a straight line is the shortest possible path.

What happens if we use a heuristic that overestimates?

Interactive: A* with Overestimating Heuristic

[Interactive grid: start at column 2, row 5; end at column 9, row 5. A slider sets the heuristic multiplier (1.0 = admissible Manhattan distance); the legend marks start, end, wall, visited, and path cells.]

Increase the heuristic multiplier above 1.0 to make it overestimate. Watch how A* may find longer paths when the heuristic is not admissible. With multiplier = 1.0 (Manhattan distance), A* always finds the optimal path.

When the heuristic multiplier is above 1.0, the heuristic starts overestimating. The algorithm becomes "too greedy"—it dismisses potentially good paths because they seem far from the goal, even when they're actually optimal.

The result? A* might find a path, but not necessarily the shortest one. The guarantee is broken.

Why Would Anyone Overestimate?

Sometimes it's a mistake—using the wrong heuristic for your movement model. Other times it's intentional: an overestimating heuristic makes A* explore fewer nodes and run faster, at the cost of optimality. This trade-off is called weighted A* or epsilon-admissible search. You accept a potentially suboptimal path to get results faster.

For many applications—like video game AI where "good enough" paths are fine—this trade-off makes sense. But you must know you're making it.
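A sketch of the trade-off: inflate an admissible heuristic by a weight w ≥ 1, and the returned path cost is at most w times optimal. The grid below is a toy example of my own; only the multiplier is the point.

```python
import heapq

def weighted_astar(grid, start, goal, w=1.0):
    # A* ordered by f = g + w*h; w = 1 is classic A*, w > 1 is weighted A*.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan, admissible
    g = {start: 0}
    frontier = [(w * h(start), 0, start)]
    expanded = 0
    while frontier:
        _, gu, u = heapq.heappop(frontier)
        if gu > g.get(u, float("inf")):
            continue  # stale heap entry
        expanded += 1
        if u == goal:
            return gu, expanded
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if (0 <= v[0] < rows and 0 <= v[1] < cols
                    and grid[v[0]][v[1]] == 0 and gu + 1 < g.get(v, float("inf"))):
                g[v] = gu + 1
                heapq.heappush(frontier, (g[v] + w * h(v), g[v], v))
    return None, expanded

grid = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
cost1, n1 = weighted_astar(grid, (2, 0), (2, 6), w=1.0)  # optimal: cost 10
cost2, n2 = weighted_astar(grid, (2, 0), (2, 6), w=2.0)  # cost at most 2 * 10
print(cost1, cost2, n1, n2)
```

The multiplier in the visualization corresponds to w here; the bound cost ≤ w × optimal is what "epsilon-admissible" promises, and it holds because the base Manhattan heuristic never overestimates.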

No Path Exists

Sometimes the failure isn't about wrong answers—it's about no answer at all. What happens when the start and goal are simply disconnected?

Interactive: Exploring When No Path Exists

[Interactive grid: start at column 3, row 6; end at column 10, row 6, sealed off by walls. Stepping through the search shows the explored region growing until every reachable cell has been visited.]

When no path exists, the algorithm must explore the entire reachable region before it can definitively say "no path found." This is unavoidable—you cannot know a path doesn't exist without checking everywhere you can reach.

In practice, this means:

  • The algorithm will take longer on "no path" cases than successful cases of similar size
  • You should handle the "no path" case explicitly in your code
  • For large graphs, consider early termination heuristics (like checking if start and goal are in different connected components)

Practical Strategies

When dealing with potentially disconnected graphs:

  1. Pre-compute connected components if the graph is static. Then check if start and goal share a component before searching.
  2. Set iteration limits to prevent infinite-feeling searches on large graphs.
  3. Bidirectional search can detect disconnection faster—when both frontiers stop expanding without meeting, no path exists.
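Strategy 1 can be sketched as follows, assuming a static undirected graph stored as an adjacency list (the graph here is a made-up example). The labeling pass runs once; afterwards each start/goal check is a constant-time comparison:

```python
def connected_components(graph):
    # Label each node with a component id, computed once for a static graph.
    comp = {}
    label = 0
    for node in graph:
        if node in comp:
            continue
        stack = [node]
        comp[node] = label
        while stack:
            u = stack.pop()
            for v in graph.get(u, []):
                if v not in comp:
                    comp[v] = label
                    stack.append(v)
        label += 1
    return comp

graph = {"A": ["B"], "B": ["A"], "C": ["D"], "D": ["C"]}
comp = connected_components(graph)
print(comp["A"] == comp["B"])  # True: a path may exist, searching is worthwhile
print(comp["A"] == comp["C"])  # False: no path can exist, skip the search
```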

Getting Stuck

Greedy Best-First Search is tempting because it's fast. It ignores the cost-so-far, g(n), entirely and just chases the heuristic, h(n). This makes it zoom toward the goal—most of the time.

But greedy search can get "trapped." If the direct path is blocked, it might explore an entire dead end before backtracking to find the correct route.

Interactive: Greedy Search Gets Trapped

[Interactive grid: start at column 2, row 6; end at column 11, row 6. Two panels run side by side: greedy best-first, which considers only h(n), and A*, which considers g(n) + h(n), each reporting the cells it explores.]

Greedy best-first search dives toward the goal based only on the heuristic. It can get "trapped" exploring dead ends before finding the correct path. A* balances exploration cost with heuristic guidance, avoiding such traps.

Compare greedy best-first with A*. Both eventually find the path, but greedy explores a lot more unnecessary cells. It's seduced by the promise of getting close to the goal, only to discover a wall blocking the way.

A* avoids this trap because it considers g(n)—the actual cost to reach each node. When the greedy path turns out to be expensive (due to backtracking), A* naturally pivots to alternatives.
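The trap is reproducible in a few lines. The pocket-shaped wall in the toy grid below opens toward the start, so a frontier ordered by h(n) alone dives straight into the dead end before spilling back out and going around:

```python
import heapq

def greedy_best_first(grid, start, goal):
    # Frontier ordered by h(n) alone; g(n) plays no role.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    parent = {start: None}
    frontier = [(h(start), start)]
    explored = 0
    while frontier:
        _, u = heapq.heappop(frontier)
        explored += 1
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1], explored
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if (0 <= v[0] < rows and 0 <= v[1] < cols
                    and grid[v[0]][v[1]] == 0 and v not in parent):
                parent[v] = u
                heapq.heappush(frontier, (h(v), v))
    return None, explored

# A pocket of walls opens toward the start: h keeps shrinking into the dead end.
grid = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
path, explored = greedy_best_first(grid, (2, 0), (2, 6))
print(len(path) - 1, explored)  # greedy still finds a path, after wasted pops
```

The search is still complete on a finite grid (the parent set prevents revisits), but the first cells it pops are exactly the dead-end corridor where h is smallest.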

Lessons Learned

These failure modes teach us important principles for robust pathfinding code:

Validate your assumptions. Before choosing an algorithm, verify that your problem matches its assumptions. Are all weights non-negative? Is your heuristic admissible? Are all edges equal cost?

Test edge cases. Include tests with:

  • Negative edge weights (if your domain could have them)
  • Overestimating heuristics (to verify your heuristic choice)
  • Disconnected graphs (to ensure graceful handling)
  • Dead ends and traps (to check exploration efficiency)
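The heuristic test can be automated: on an unweighted grid, a BFS outward from the goal yields the exact remaining distance to every cell, and admissibility is simply h(n) ≤ that distance everywhere. A sketch (the grid and the deliberately inflated heuristic are illustrative):

```python
from collections import deque

def true_distances(grid, goal):
    # BFS from the goal: exact remaining distance for 4-directional movement.
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        u = queue.popleft()
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if (0 <= v[0] < rows and 0 <= v[1] < cols
                    and grid[v[0]][v[1]] == 0 and v not in dist):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_admissible(h, grid, goal):
    # Admissible means h never overestimates the true remaining distance.
    return all(h(n, goal) <= d for n, d in true_distances(grid, goal).items())

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
inflated = lambda a, b: 3 * manhattan(a, b)  # deliberately overestimates

grid = [[0] * 8 for _ in range(8)]
grid[3][4] = 1  # one wall cell; admissibility must hold around obstacles too
print(is_admissible(manhattan, grid, (7, 7)))  # True
print(is_admissible(inflated, grid, (7, 7)))   # False
```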

Fail gracefully. When pathfinding fails, your code should handle it:

  • Return a clear "no path" indicator
  • Don't hang forever on unsolvable problems
  • Log or report when algorithms take unexpectedly long

Choose the right tool. Sometimes the answer isn't to "fix" an algorithm, but to choose a different one:

  • Need negative weights? Use Bellman-Ford.
  • Need guaranteed optimality? Ensure your heuristic is admissible.
  • Need speed over optimality? Consider weighted A* with a known bound on suboptimality.

Key Takeaways

  • Dijkstra fails with negative weights because its greedy commitment to "finalized" nodes prevents reconsidering cheaper paths through negative edges
  • A* fails with overestimating heuristics because it dismisses optimal paths that seem far from the goal
  • All algorithms must explore the entire reachable region to conclude no path exists—handle this case explicitly
  • Greedy best-first search has no optimality guarantee and can waste time in dead ends, though it's often fast in practice
  • Understanding failure modes prevents subtle bugs—wrong answers are harder to debug than crashes
  • Match your algorithm to your problem's constraints—there's no universal "best" pathfinding algorithm