in AI and Deep Learning by (50.2k points)

I sometimes write programs to play board game variants. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick an evaluation function, I use a basic chess evaluation function.

However, now I am writing a program to play a completely new board game. How do I choose a good or even decent evaluation function?

The main challenges are that the same pieces are always on the board, so the usual material-based evaluation won't change from position to position, and that the game has been played less than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (P.S. I considered a MoGo-style Monte Carlo approach, but random games are unlikely to terminate.)

Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with or better than current human players.

1 Answer

by (108k points)

An evaluation function, also known as a heuristic or static evaluation function, is a function used by game-playing programs to estimate the value or goodness of a position in minimax and related algorithms. The evaluation function is typically designed to prioritize speed over accuracy; it looks only at the current position and does not explore possible moves (hence "static").
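As a minimal sketch of where such a function plugs into a depth-limited negamax/alpha-beta search: the Position class and its is_terminal/legal_moves/make/unmake methods below are hypothetical placeholders for whatever board representation you use.

def negamax(pos, depth, alpha, beta, evaluate):
    """Value of `pos` from the side to move's point of view (fail-soft negamax)."""
    if depth == 0 or pos.is_terminal():
        # The static evaluation is only ever called here: it scores the
        # current position without exploring any further moves.
        return evaluate(pos)
    best = float("-inf")
    for move in pos.legal_moves():        # hypothetical Position API
        pos.make(move)
        best = max(best, -negamax(pos, depth - 1, -beta, -alpha, evaluate))
        pos.unmake(move)
        alpha = max(alpha, best)
        if alpha >= beta:                 # beta cutoff
            break
    return best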

As for choosing a decent evaluation function: start with something really simple, a single obvious metric. In a connection game that might be the current size of the largest component; in your game, it could be the number of your pieces already sitting on the goal squares.
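A minimal sketch of such a one-metric starting point for your game, where my_piece_squares, their_piece_squares and goal_squares are hypothetical placeholders: since the goal is to get pieces onto special squares, just count how many of your pieces already sit on them, minus the opponent's count.

def simple_evaluate(pos):
    # One obvious metric: pieces already on goal squares, from the side
    # to move's point of view. All Position methods here are hypothetical.
    goals = pos.goal_squares()
    mine = sum(1 for sq in pos.my_piece_squares() if sq in goals)
    theirs = sum(1 for sq in pos.their_piece_squares() if sq in goals)
    return mine - theirs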

Then find a few candidate metrics for your evaluation function, like mobility (number of possible moves) minus the opponent's mobility, and try to find a good weight for each metric. Genetic algorithms tend to work well for optimizing the weights in an evaluation function:

  • Create a population of random weight vectors and have them fight each other with limited search depth and a turn limit.

  • Replace the losers with random combinations of the winners' weights.

  • Shuffle, repeat, and record the population-average weights after every generation.

  • Let it run until you're satisfied with the result, or until the best value for some metric appears to lie outside the range you started with, in which case widen that range and try again. A rough sketch of this loop is given below.
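A rough sketch of that tournament loop, with the assumptions spelled out: the number of features, population size and weight range are placeholders, and play_game(w1, w2) stands for a hypothetical helper that plays one depth- and turn-limited game between two engines using weight vectors w1 and w2 and returns 1, 0 or -1 for a win, draw/turn-limit or loss of w1.

import random

NUM_FEATURES = 4               # e.g. mobility, goal-square occupancy, ... (placeholder)
POP_SIZE = 20                  # placeholder population size
WEIGHT_RANGE = (-10.0, 10.0)   # initial range; widen it if winners pile up at an edge

def random_weights():
    return [random.uniform(*WEIGHT_RANGE) for _ in range(NUM_FEATURES)]

def crossover(parent_a, parent_b):
    # Random per-weight mix of two winners, plus a small mutation.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    child[random.randrange(NUM_FEATURES)] += random.gauss(0.0, 1.0)
    return child

def evolve(generations, play_game):
    population = [random_weights() for _ in range(POP_SIZE)]
    for gen in range(generations):
        # Round-robin tournament: score each weight vector by its results.
        scores = [0] * POP_SIZE
        for i in range(POP_SIZE):
            for j in range(i + 1, POP_SIZE):
                result = play_game(population[i], population[j])
                scores[i] += result
                scores[j] -= result
        ranked = [w for _, w in sorted(zip(scores, population),
                                       key=lambda t: t[0], reverse=True)]
        winners = ranked[:POP_SIZE // 2]
        # Replace the losers with random combinations of the winners.
        children = [crossover(random.choice(winners), random.choice(winners))
                    for _ in range(POP_SIZE - len(winners))]
        population = winners + children
        random.shuffle(population)
        # Note the population-average weights after every generation.
        avg = [sum(w[k] for w in population) / POP_SIZE for k in range(NUM_FEATURES)]
        print(f"generation {gen}: average weights {avg}")
    return population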

Hope this helps!
