
I have a maze like the following:

[image: maze]

The goal is for P to find +, with sub-goals of

  • The path to + is the least cost (1 hop = cost+1)

  • The number of cells searched (nodes expanded) is minimized

I'm trying to understand why my A* heuristic is performing so much worse than an implementation I have for Greedy Best First. Here are the two bits of code for each:

#Greedy Best First -- Manhattan Distance
self.heuristic = abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])

#A* -- Manhattan Distance + Path Cost from 'startNode' to 'currentNode'
return abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0]) + self.costFromStart
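
For clarity, the difference between the two priorities boils down to the following standalone sketch (manhattan, node, and goal_xy are illustrative names, not my actual class code):

def manhattan(a, b):
    #Manhattan (L1) distance between two (x, y) cells
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_priority(node, goal_xy):
    #Greedy Best First orders the frontier by h(n) only
    return manhattan(node.xy, goal_xy)

def a_star_priority(node, goal_xy):
    #A* orders the frontier by f(n) = g(n) + h(n), where g(n) is the
    #cost already paid from the start to reach this node
    return manhattan(node.xy, goal_xy) + node.costFromStart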

In both algorithms, I'm using a heapq, prioritizing based on the heuristic value. The primary search loop is the same for both:

theFrontier = []
heapq.heappush(theFrontier, (stateNode.heuristic, stateNode)) #populate frontier with 'start copy' as only available Node

#while !goal and frontier !empty
while not GOAL_STATE and theFrontier:
    stateNode = heapq.heappop(theFrontier)[1] #heappop returns tuple of (weighted-idx, data)
    CHECKED_NODES.append(stateNode.xy)

    while stateNode.moves and not GOAL_STATE:
        EXPANDED_NODES += 1
        moveDirection = heapq.heappop(stateNode.moves)[1]

        nextNode = Node()
        nextNode.setParent(stateNode)
        #this makes a call to setHeuristic
        nextNode.setLocation((stateNode.xy[0] + moveDirection[0], stateNode.xy[1] + moveDirection[1]))

        if nextNode.xy not in CHECKED_NODES and not isInFrontier(nextNode):
            if nextNode.checkGoal(): break
            nextNode.populateMoves()
            heapq.heappush(theFrontier, (nextNode.heuristic, nextNode))
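
For reference, this is the overall shape I understand a standard A* to have on a plain grid — a minimal self-contained sketch, not my actual code (grid, start, goal, and the helper names are illustrative):

import heapq

def a_star(grid, start, goal):
    #Minimal A* on a 4-connected grid of 0 (free) / 1 (wall) cells.
    #Returns (path_cost, nodes_expanded), or (None, nodes_expanded) if unreachable.
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]   #heap entries are (f, g, cell)
    best_g = {start: 0}                 #cheapest known cost to reach each cell
    expanded = 0

    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:                #goal test when popped, not when generated
            return g, expanded
        if g > best_g.get(cell, float('inf')):
            continue                    #stale heap entry; a cheaper path was found later
        expanded += 1
        x, y = cell
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))

    return None, expanded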

So now we come to the issue: while A* finds the optimal path, it's pretty expensive at doing so. To find the optimal path of cost 68, it expands (navigates and searches through) 452 nodes.

[image: a_star]

Meanwhile, the Greedy Best-First implementation finds a sub-optimal path (cost 74) in only 160 expansions.

[image: greedy_best_first]

I'm really trying to understand where I'm going wrong here. I realize Greedy Best-First can naturally behave like this, but the gap in node expansions is so large that I feel something has to be wrong. Any help would be appreciated; I'm happy to add details if anything above is unclear.

1 Answer


A* produces an optimal solution to the problem, while greedy best-first search just produces some solution, so A* is expected to do more work.

If you want a variant of A* that is no longer optimal but returns a solution much faster, take a look at weighted A*. It simply multiplies the heuristic by a weight (weight > 1). In practice, it gives you an immense performance increase.

For example, you could try this:

    return 2*(abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])) + self.costFromStart
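
More generally, you can keep the weight as a parameter; this is a sketch using the same attribute names as in your heuristic code:

def weighted_priority(node, goal_xy, weight=2.0):
    #Weighted A*: f(n) = g(n) + weight * h(n), with weight > 1.
    #Larger weights behave more like greedy best-first (fewer expansions),
    #but the returned path cost can be up to weight times the optimum.
    h = abs(goal_xy[1] - node.xy[1]) + abs(goal_xy[0] - node.xy[0])
    return node.costFromStart + weight * h

With weight = 1 this reduces to plain A*; with a large weight it behaves much like your greedy best-first search.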

