in AI and Deep Learning by (50.2k points)

I've implemented a minimax algorithm with alpha-beta pruning. To get the best move, I call the alpha-beta search through the rootAlphaBeta function. However, I've spotted some very strange behavior: when I call rootAlphaBeta with a ply of 4, it makes about 20,000 calls, but when I call alphaBeta directly, it makes only about 2,000 calls. I can't find the problem; the number of calls should be the same.

Both searches should eventually find the same move, right? I think so; at least the score is the same. I have no way of knowing which move alphaBeta chooses when I call it directly, without rootAlphaBeta.

def alphaBeta(self, board, rules, alpha, beta, ply, player):
    """Implements a minimax algorithm with alpha-beta pruning."""
    if ply == 0:
        return self.positionEvaluation(board, rules, player)
    move_list = board.generateMoves(rules, player)
    for move in move_list:
        board.makeMove(move, player)
        # Negamax recursion: swapped, negated window for the opponent.
        current_eval = -self.alphaBeta(board, rules, -beta, -alpha, ply - 1,
                                       rules.otherPlayer(player))  # opponent argument restored; method name assumed
        board.unmakeMove(move, player)
        if current_eval >= beta:
            return beta  # fail-hard cutoff
        if current_eval > alpha:
            alpha = current_eval
    return alpha


def rootAlphaBeta(self, board, rules, ply, player):
    """Makes a call to the alphaBeta function. Returns the optimal move for a
    player at given ply."""
    best_move = None
    max_eval = float('-infinity')
    move_list = board.generateMoves(rules, player)
    for move in move_list:
        board.makeMove(move, player)
        # Note: every move is searched with the full (-inf, inf) window.
        current_eval = -self.alphaBeta(board, rules, float('-infinity'),
                                       float('infinity'), ply - 1,
                                       rules.otherPlayer(player))  # opponent argument restored; method name assumed
        board.unmakeMove(move, player)
        if current_eval > max_eval:
            max_eval = current_eval
            best_move = move
    return best_move

1 Answer

by (108k points)

Alpha-beta pruning greatly reduces the number of bottom positions a minimax search has to examine, typically by several orders of magnitude in game-playing programs.
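You can see the effect on a toy game tree. This is a self-contained sketch (nested lists stand in for your board, and node counters stand in for your call counts), comparing plain negamax against the same search with pruning:

```python
def minimax(node, counter):
    """Plain negamax: visits every node in the tree."""
    counter[0] += 1
    if isinstance(node, int):
        return node
    return max(-minimax(child, counter) for child in node)

def alpha_beta(node, alpha, beta, counter):
    """Fail-hard negamax with alpha-beta pruning: skips branches that cannot matter."""
    counter[0] += 1
    if isinstance(node, int):
        return node
    for child in node:
        score = -alpha_beta(child, -beta, -alpha, counter)
        if score >= beta:
            return beta  # cutoff: the opponent would never allow this line
        if score > alpha:
            alpha = score
    return alpha

# A small fixed game tree; leaves are scores for the side to move at that node.
tree = [[3, 5, [2, 9]], [6, [1, 4], 8], [[7, 2], 0, 1]]

full, pruned = [0], [0]
v1 = minimax(tree, full)
v2 = alpha_beta(tree, float('-inf'), float('inf'), pruned)
print(v1, v2, full[0], pruned[0])  # → -1 -1 19 17
```

Both searches return the same root value, but the pruned search visits fewer nodes (here it skips two leaves of the last subtree); on deeper trees with good move ordering the gap grows enormously.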

In your program, the rootAlphaBeta function never updates the alpha value.

It searches every child with the full (-inf, inf) window, when it could narrow the window for every child after the first by using the best score found so far as alpha. The full window prevents cutoffs in branches that cannot affect the final score, which inflates the node count.
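The difference can be demonstrated with the same kind of toy tree (a sketch, not your board/rules API): a root that uses the full window for every move, like your rootAlphaBeta, versus a root that raises alpha after each move. Both pick the same move, but the narrowed root never visits more nodes:

```python
import random

def alpha_beta(node, alpha, beta, counter):
    """Fail-hard negamax alpha-beta over nested lists; leaves are ints."""
    counter[0] += 1
    if isinstance(node, int):
        return node
    for child in node:
        score = -alpha_beta(child, -beta, -alpha, counter)
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
    return alpha

def root_full_window(tree):
    """Like the question's rootAlphaBeta: full (-inf, inf) window for every move."""
    counter, best, max_eval = [0], None, float('-inf')
    for i, child in enumerate(tree):
        score = -alpha_beta(child, float('-inf'), float('inf'), counter)
        if score > max_eval:
            max_eval, best = score, i
    return best, counter[0]

def root_narrowed(tree):
    """Fixed root: alpha rises after each move, shrinking later search windows."""
    counter, best, alpha = [0], None, float('-inf')
    for i, child in enumerate(tree):
        score = -alpha_beta(child, float('-inf'), -alpha, counter)
        if score > alpha:
            alpha, best = score, i
    return best, counter[0]

def make_tree(depth, branching, rng):
    """Random game tree with integer leaf scores."""
    if depth == 0:
        return rng.randint(-10, 10)
    return [make_tree(depth - 1, branching, rng) for _ in range(branching)]

tree = make_tree(3, 4, random.Random(42))
move_full, nodes_full = root_full_window(tree)
move_fast, nodes_fast = root_narrowed(tree)
print(move_full == move_fast, nodes_full, nodes_fast)
```

With fail-hard alpha-beta, a move whose score cannot beat the current alpha simply comes back clamped to alpha and is not selected, so the chosen move is unchanged; only the wasted exploration disappears.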

