in AI and Deep Learning by (50.2k points)

I am trying to play tic-tac-toe using iterative-deepening alpha-beta pruning. I have a one-second limit per move, but for some reason it doesn't work well.

I modified the regular alpha-beta code so that instead of returning alpha or beta, it returns a state (the board with the next move).

Each time I create children I update their depth.

But for some reason I keep losing, and I can see that my alpha-beta search doesn't find the best move.

Here is my code:

 The outer loop:

        // Iterative deepening: search one ply deeper on each pass until the time
        // budget (900 ms) runs out or the maximum depth (number of squares) is reached.
        while (watch.ElapsedMilliseconds < 900 && d <= board.Length * board[0].Length - 1)
        {
            s = maxiMin(beginSt, d, watch);
            if (s.getNextMove().getIsWin() == true)
            {
                break;
            }
            d++;
        }
        return new location(s.getNextMove().getRow(), s.getNextMove().getCol());

The alpha-beta:

    public State maxiMin(State s, int depth, Stopwatch timer)
    {
        if (s.getDepth() == 7)
        {
            // Leftover debugging hook
            Console.WriteLine();
        }
        // Cut off on time, on reaching the current depth limit, or at a terminal board
        if (timer.ElapsedMilliseconds > 850 || s.getDepth() == depth || goalTest(s.getBoard()) != 0)
        {
            s.evaluationFunc(line_length, PlayerShape);
            s.setAlpha(s.getEvaluation());
            s.setBeta(s.getEvaluation());
            return s;
        }
        LinkedList<State> children = createChildren(s, true);
        // No winner and the board is full: evaluate the drawn position
        if (children.Count == 0)
        {
            s.evaluationFunc(line_length, PlayerShape);
            s.setAlpha(s.getEvaluation());
            s.setBeta(s.getEvaluation());
            return s;
        }
        while (children.Count > 0)
        {
            State firstChild = children.First.Value;
            children.RemoveFirst();
            State tmp = miniMax(firstChild, depth, timer);
            int value = tmp.getBeta();
            if (value > s.getAlpha())
            {
                s.setAlpha(value);
                s.setNextMove(tmp);
            }
            if (s.getAlpha() >= s.getBeta())
            {
                return s;   // cutoff
            }
        }
        return s;
    }

    public State miniMax(State s, int depth, Stopwatch timer)
    {
        if (s.getDepth() == 7)
        {
            // Leftover debugging hook
            Console.WriteLine();
        }
        // Cut off on time, on reaching the current depth limit, or at a terminal board
        if (timer.ElapsedMilliseconds > 850 || s.getDepth() == depth || goalTest(s.getBoard()) != 0)
        {
            s.evaluationFunc(line_length, PlayerShape);
            s.setAlpha(s.getEvaluation());
            s.setBeta(s.getEvaluation());
            return s;
        }
        LinkedList<State> children = createChildren(s, false);
        // No winner and the board is full: evaluate the drawn position
        if (children.Count == 0)
        {
            s.evaluationFunc(line_length, PlayerShape);
            s.setAlpha(s.getEvaluation());
            s.setBeta(s.getEvaluation());
            return s;
        }
        while (children.Count > 0)
        {
            State firstChild = children.First.Value;
            children.RemoveFirst();
            State tmp = maxiMin(firstChild, depth, timer);
            int value = tmp.getAlpha();
            if (value < s.getBeta())
            {
                s.setBeta(value);
                s.setNextMove(tmp);
            }
            if (s.getAlpha() >= s.getBeta())
            {
                return s;   // cutoff
            }
        }
        return s;
    }

I would appreciate it if anyone could tell me what is wrong. I suspect it may have something to do with returning "s" instead of the evaluation value that regular alpha-beta returns, but I haven't managed to find the error.

Thanks in advance,

1 Answer

by (107k points)

Tic-tac-toe looks trivial, but it still requires you to look ahead at least one opponent move to make sure you will not lose. That is, you need to consider your opponent's reply to your next move.

For example, suppose the computer plays 'O'. At (D), the computer did not consider the opponent's next move and did not place at the corner (which is preferred over the side). At (E), the opponent was then able to block the computer's winning move and create a fork.

[board diagram illustrating positions (D) and (E) not shown]
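To see that one-ply lookahead in isolation, here is a minimal sketch with hypothetical helpers (HasLine and CanWinNextMove are not from the question's code; it assumes a 3x3 char board with ' ' for an empty square): before committing to a move, check whether the opponent can complete a line on the very next reply.

    // Hypothetical helpers, not from the question's code.
    static bool HasLine(char[,] b, char p)
    {
        for (int i = 0; i < 3; i++)
        {
            if (b[i, 0] == p && b[i, 1] == p && b[i, 2] == p) return true; // row i
            if (b[0, i] == p && b[1, i] == p && b[2, i] == p) return true; // column i
        }
        return (b[0, 0] == p && b[1, 1] == p && b[2, 2] == p)              // main diagonal
            || (b[0, 2] == p && b[1, 1] == p && b[2, 0] == p);             // anti-diagonal
    }

    // Returns true if player p can complete a line on their very next move.
    static bool CanWinNextMove(char[,] b, char p)
    {
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
            {
                if (b[r, c] != ' ') continue; // square already taken
                b[r, c] = p;                  // try the move
                bool wins = HasLine(b, p);
                b[r, c] = ' ';                // undo
                if (wins) return true;
            }
        return false;
    }

A search of depth 2 or more performs this check automatically, which is why the depth handling below matters.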

As for your code, I believe one of your problems is that you don't seem to increment the depth in the recursive calls.
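If createChildren does not already set each child one ply deeper than its parent, the s.getDepth() == depth cutoff will never fire. A minimal sketch of making that explicit inside the search loop, reusing the State API from the question and assuming a setDepth setter exists to match getDepth:

    // Sketch only: make sure every child is exactly one ply deeper than its parent
    // before recursing, so the s.getDepth() == depth cutoff can trigger.
    // Assumes State has a setDepth(...) setter matching getDepth().
    foreach (State child in createChildren(s, true))
    {
        child.setDepth(s.getDepth() + 1);          // one ply deeper than the parent
        State tmp = miniMax(child, depth, timer);  // the overall depth limit is unchanged
        // ... alpha update and cutoff test as in the original loop ...
    }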

There are also several styling issues: you split miniMax and maxiMin into two functions even though they are fundamentally the same, and you iterate over a collection by removing elements from it instead of using foreach or an iterator (or even a plain index); a merged sketch follows below.
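For example, the two routines could be collapsed into one (a sketch only, reusing the helper names from the question; the maximizing flag flips on every recursive call, and foreach replaces the destructive RemoveFirst loop):

    public State minimax(State s, int depth, bool maximizing, Stopwatch timer)
    {
        // Cut off on time, on reaching the depth limit, or at a terminal board
        if (timer.ElapsedMilliseconds > 850 || s.getDepth() == depth || goalTest(s.getBoard()) != 0)
        {
            s.evaluationFunc(line_length, PlayerShape);
            s.setAlpha(s.getEvaluation());
            s.setBeta(s.getEvaluation());
            return s;
        }

        LinkedList<State> children = createChildren(s, maximizing);
        if (children.Count == 0) // no winner and the board is full
        {
            s.evaluationFunc(line_length, PlayerShape);
            s.setAlpha(s.getEvaluation());
            s.setBeta(s.getEvaluation());
            return s;
        }

        foreach (State child in children) // iterate without mutating the list
        {
            State tmp = minimax(child, depth, !maximizing, timer);
            if (maximizing && tmp.getBeta() > s.getAlpha())
            {
                s.setAlpha(tmp.getBeta());
                s.setNextMove(tmp);
            }
            else if (!maximizing && tmp.getAlpha() < s.getBeta())
            {
                s.setBeta(tmp.getAlpha());
                s.setNextMove(tmp);
            }
            if (s.getAlpha() >= s.getBeta()) // alpha-beta cutoff
            {
                break;
            }
        }
        return s;
    }

The top-level call in the outer loop would then be s = minimax(beginSt, d, true, watch) in place of maxiMin.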

Minimax is a decision rule used in artificial intelligence; if you wish to learn more about the minimax decision rule, you can visit this Artificial Intelligence Course.
