The Problem

Search is a field of AI concerned with finding paths from one state to another. For my Artificial Intelligence project I decided to explore two search methods, namely D* Lite (Dynamic A* Lite) and A* with the Jump Point Search neighbor-pruning modification. My plan was to run these two search algorithms on traffic data to determine which would be better suited to a navigation system. The A* with Jump Point neighbor pruning was finished, but my D* Lite implementation is still a little buggy. The original problem was therefore revised to comparing the number of expansions the JPS algorithm performs against plain A* for pathfinding. In addition, there are visualizations of the D* Lite and Jump Point Search algorithms at the bottom of this page.

The Algorithms

D* Lite

The D* Lite algorithm was developed by Sven Koenig and Maxim Likhachev as a faster, more lightweight alternative to the D* algorithm (developed by Anthony Stentz in 1995). The version of D* Lite that I implemented works by essentially running A* in reverse, searching from the goal back toward the start. As the solver searches, however, it also records the values assigned to cells on this run of the algorithm so that later searches can reuse them. The solver then reports the current solution and waits for some change in the weights or obstacles it is presented with. Currently my implementation has a few special cases where no path is returned even though a legal path exists. Even so, I was able to run some small traffic simulations, using variable edge weights to approximate medium traffic, obstacles to represent heavy traffic and buildings, and uniform-cost edge weights for freely moving traffic. My original plan was to use time estimates to build a path, but time estimates would mean the graph no longer obeys the triangle inequality. In preliminary testing I noticed that D* Lite is front-loaded in terms of work: the first search performs almost all of the expansions for the entire lifetime of the algorithm, and later searches make up for that initial cost by reusing the saved states.
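The reverse search in D* Lite is ordered by a two-part priority key, compared lexicographically. Below is a minimal sketch of the key computation in Python, following Koenig and Likhachev's formulation; the full solver (queue maintenance, edge-update handling) is not shown, and the dictionary-based `g`/`rhs` representation is my own illustrative choice, not the code from this project.

```python
def calculate_key(g, rhs, h, s, start, km):
    """D* Lite priority key for state s: a pair compared lexicographically.

    g[s]   -- current cost estimate for s (infinity if never computed)
    rhs[s] -- one-step lookahead cost for s
    h      -- heuristic function h(start, s); the search runs from the
              goal, so the heuristic is measured toward the start
    km     -- key modifier, accumulated as the start moves
    """
    k2 = min(g.get(s, float("inf")), rhs.get(s, float("inf")))
    return (k2 + h(start, s) + km, k2)

def manhattan(a, b):
    # Admissible heuristic on a 4-connected grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Example: a locally consistent state (g == rhs) versus an inconsistent
# one (g != rhs); the inconsistent state gets the smaller key and is
# therefore expanded first.
g = {(2, 0): 4.0, (1, 0): float("inf")}
rhs = {(2, 0): 4.0, (1, 0): 3.0}
start = (0, 0)
key_consistent = calculate_key(g, rhs, manhattan, (2, 0), start, km=0.0)
key_inconsistent = calculate_key(g, rhs, manhattan, (1, 0), start, km=0.0)
# key_inconsistent == (4.0, 3.0) sorts before key_consistent == (6.0, 4.0)
```

Because only the keys of states near a changed edge need to be recomputed, most of the grid keeps its values between searches, which is exactly why the first search dominates the expansion count.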

Jump Point Search

The Jump Point Search (JPS) algorithm was developed by Daniel Harabor and Alban Grastien in 2011. I implemented the algorithm on a grid-based graph that allows neighbors to be pruned when searching for new nodes to expand in the A* heuristic search. Instead of expanding ordinary cells, JPS selects special cells called jump points. Jump points are identified by rules outlined in the paper below, which essentially ask: given a node x, is there any node y such that y is a neighbor of x and the optimal path from the parent of x to y must pass through x? Jump points can thus be characterized as cells where the symmetry of a local path breaks. Using this knowledge, the algorithm selects the next node to expand as the one that has such asymmetric properties AND has the smallest A* key (heuristic plus cost from the start). I found that this algorithm generally expands fewer nodes than its pure A* cousin, and in the results section I go over the test cases that were run to determine which algorithm "expands" more nodes.
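The symmetry-breaking rule can be sketched for the straight-move case: scan in one cardinal direction until the scan runs off the map, hits a wall, reaches the goal, or finds a cell with a forced neighbor (an obstacle beside the scan line with an open cell diagonally past it). This is a sketch of that rule only, assuming a boolean grid; the diagonal case from the paper, which recurses into two straight scans, is omitted, and this is not the project's actual code.

```python
def jump(grid, r, c, dr, dc, goal):
    """Scan from (r, c) in cardinal direction (dr, dc) and return the
    first jump point found, or None if the scan dies at a wall or edge.

    grid[r][c] is True for walkable cells, False for walls.
    """
    rows, cols = len(grid), len(grid[0])
    pr, pc = dc, dr  # perpendicular offsets for a cardinal direction
    while True:
        r, c = r + dr, c + dc
        if not (0 <= r < rows and 0 <= c < cols) or not grid[r][c]:
            return None              # ran off the map or into a wall
        if (r, c) == goal:
            return (r, c)            # the goal is always a jump point
        for sr, sc in ((pr, pc), (-pr, -pc)):
            br, bc = r + sr, c + sc          # cell beside the scan line
            ar, ac = br + dr, bc + dc        # cell diagonally ahead of it
            if (0 <= br < rows and 0 <= bc < cols and not grid[br][bc]
                    and 0 <= ar < rows and 0 <= ac < cols and grid[ar][ac]):
                return (r, c)        # forced neighbor => jump point

# 3x5 grid, True = open.  The wall at (0, 2) makes (0, 3) reachable
# optimally only through (1, 2), so scanning right along row 1 stops there.
grid = [
    [True, True, False, True, True],
    [True, True, True,  True, True],
    [True, True, True,  True, True],
]
jp = jump(grid, 1, 0, 0, 1, goal=(1, 4))
# jp == (1, 2)
```

Every cell the scan passes over is a "neighbor looked at" but never enters the open list, which is why JPS trades more neighbor inspections for far fewer expansions.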

Method

To test the difference in the number of expanded nodes between the A* and JPS algorithms, I used the exact same code for placing neighbors on the queue and for removing the minimum element from the queue; the only part that differs between the two algorithms is the code that finds the neighbors to place onto the queue. For each run I initialized a 100x100 grid with randomly generated start and goal cells. Every cell had a small chance of being selected as a wall on each iteration (there was no quota on the number of walls). The grid was given to the A* solver first, and the resulting expansions and neighbors examined were tallied. The grid was then cleared of all helpful information (i.e., the heuristic, the key, and the estimate of the cost from the start were all removed), passed to the JPS solver, and the resulting expansions and neighbors examined were tallied. Finally the grid was wiped completely clean to start a new run. A total of 1000 runs were performed, and the following are the resulting averages of the number of nodes expanded and neighbors examined per run.
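One trial's grid setup can be sketched as follows. I am assuming here that the "wall chance" values in the results table are 1-in-N denominators (each cell becomes a wall with probability 1/N), which matches the observation later that smaller values give more heavily walled grids; the function and parameter names are illustrative, not the project's actual code.

```python
import random

def make_grid(rows=100, cols=100, wall_denominator=50, seed=None):
    """Build one random trial grid: each cell independently becomes a
    wall with probability 1/wall_denominator (no quota), then start and
    goal are drawn from the remaining open cells.

    Returns (grid, start, goal); grid[r][c] is True when walkable.
    """
    rng = random.Random(seed)
    grid = [[rng.randrange(wall_denominator) != 0 for _ in range(cols)]
            for _ in range(rows)]
    open_cells = [(r, c) for r in range(rows) for c in range(cols)
                  if grid[r][c]]
    start, goal = rng.sample(open_cells, 2)
    return grid, start, goal

grid, start, goal = make_grid(rows=20, cols=20, wall_denominator=10, seed=1)
```

Both solvers then receive the same walls, start, and goal, so any difference in the tallies comes only from the neighbor-finding code.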

To test the difference in the number of expanded nodes between the D* Lite and JPS algorithms, I used the same setup as above, except that the grids were not actually the same object, since the two problems have slightly different setups. Additionally, each iteration adds 5 more randomly placed walls, to ensure that the environment is changing. To make sure the counts were fair, I verified that the paths generated by both algorithms were valid and of the same length.
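The per-iteration environment change can be sketched like this, assuming the same boolean-grid representation as above; the original wall-placement code is not shown on this page, so the details here are illustrative.

```python
import random

def add_random_walls(grid, start, goal, n=5, rng=None):
    """Mutate the grid for one dynamic-replanning iteration by turning n
    random open cells (never the start or goal) into walls."""
    rng = rng or random.Random()
    open_cells = [(r, c) for r, row in enumerate(grid)
                  for c, walkable in enumerate(row)
                  if walkable and (r, c) not in (start, goal)]
    for r, c in rng.sample(open_cells, min(n, len(open_cells))):
        grid[r][c] = False

# One iteration on a small all-open grid.
grid = [[True] * 5 for _ in range(5)]
start, goal = (0, 0), (4, 4)
add_random_walls(grid, start, goal, n=5, rng=random.Random(0))
```

After each such change, D* Lite repairs its previous solution while JPS replans from scratch, which is the trade-off the second results table is probing.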

Results


Wall Chance   Avg. Nodes A*   Avg. Neighbors A*   Avg. Nodes JPS   Avg. Neighbors JPS
100           592             4682                73               6919
75            594             4683                90               6842
50            587             4589                121              6445
25            531             4070                158              4633
20            584             4428                195              4739
15            542             4042                208              3926
10            521             3744                235              3165

What is really cool about these results is that the number of expanded nodes is inversely related to the number of neighbors that need to be examined (which is pretty cool, and not mentioned in the paper!). I stopped at 10 because, although my JPS implementation works in most scenarios, it fails to find optimal paths on heavily walled graphs (it generates mostly optimal solutions; only 1 of 1000 runs produced paths of different lengths). This could be because the code was originally written to disallow diagonal paths, and I later decided to lift that restriction.

Wall Chance   JPS Expansions   D* Lite Expansions
100           5094             5474
75            8085             8629
50            746              5254
25            7114             11862
20            4717             7476

This table is a bit outdated; since I was unable to get D* Lite working in the final iteration of the algorithm, I feel that displaying the expansion counts of two non-optimal search algorithms may still be of some use. Given the A* numbers above, we could expect JPS to beat D* Lite up to a wall density of about 50, at which point D* Lite would start to overtake JPS thanks to its fewer expansions when the graph changes.

Further Wishes

I would have liked to look at combining Jump Point Search with a replanning algorithm like D* Lite, because as noted above the number of expansions and neighbors visited by JPS is generally much better than for plain A*. Looking forward, it would also be cool to explore extending JPS to three dimensions, which looks definitely possible, although there does not appear to be any literature on the subject.

Sources

Likhachev M, Ferguson D, Gordon G, Stentz A, Thrun S. 2005. Anytime Dynamic A*: An Anytime, Replanning Algorithm.

Koenig S, Likhachev M. 2005. Fast Replanning for Navigation in Unknown Terrain. IEEE Transactions on Robotics, Vol. 21, No. 3.

Stentz A. 1995. The Focussed D* Algorithm for Real-Time Replanning.

Harabor D, Grastien A. 2011. Online Graph Pruning For Pathfinding on Grid Maps.

Code

Gzipped Files

Applets