
I want to program a chess engine that learns to make good moves and win against other players. I have already coded a representation of the chessboard and a function that outputs all possible moves. So I only need an evaluation function that says how good a given board position is. For that, I would like to use an artificial neural network to evaluate a given position. The output should be a numerical value: the higher the value, the better the position is for the white player.

My approach is to build a network with 385 input neurons: there are six unique chess pieces and 64 squares on the board, so for every square we take 6 neurons (one for every piece type). If there is a white piece of that type, the input value is 1; if there is a black piece, the value is -1; and if there is no piece of that type on that square, the value is 0. In addition to that, there is 1 neuron for the player to move: if it is White's turn, the input value is 1, and if it is Black's turn, the value is -1.

I think this configuration of the neural network is quite good, but the main part is missing: how can I implement this neural network in a programming language (e.g. Delphi)? I think the weights for each neuron should be the same in the beginning and should then be adjusted depending on the result of a match. But how? I think I should let two computer players (both using my engine) play against each other. If White wins, Black gets the feedback that its weights aren't good. So it would be great if you could help me implement the neural network in a programming language (Delphi would be best, otherwise pseudo-code). Thanks in advance!
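The 385-value encoding above can be sketched in a few lines of Python (as pseudo-code stands in for Delphi here); the dict-based board representation and the piece names are assumptions made only for this illustration:

```python
# Minimal sketch of the 385-input encoding described above. The board is
# assumed (for illustration only) to be a dict mapping square index 0..63
# to a (piece_name, colour) tuple.
PIECES = ['pawn', 'knight', 'bishop', 'rook', 'queen', 'king']

def encode_position(board, white_to_move):
    inputs = [0.0] * (64 * 6 + 1)          # 384 piece inputs + 1 side-to-move
    for square, (piece, colour) in board.items():
        idx = square * 6 + PIECES.index(piece)
        inputs[idx] = 1.0 if colour == 'white' else -1.0
    inputs[384] = 1.0 if white_to_move else -1.0
    return inputs
```

For example, `encode_position({4: ('king', 'white')}, True)` returns a 385-element vector with a 1 at the white king's slot for square 4 and a 1 in the last (side-to-move) position.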


You can take the following strategy as a reference. I have used only one chessboard in the algorithm, and I have taken the inputs in image format; you can use integer values to represent your inputs instead. The strategy goes like this:

A camera is placed on top of the gaming board so that it can capture the whole board. If the user is the white player, he starts by moving a piece and then pressing Enter on the keyboard; the key press signifies the end of the player's turn. The Neural Chess Player then takes an image, observes the current position, and predicts its next move. A terminal timer acts as the game clock, and the Neural Chess Player sends a press call to the timer to say that its turn has ended.
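One turn of that loop can be outlined roughly as follows; all three functions here are hypothetical placeholders supplied by the caller, not a real camera or timer API:

```python
# Hypothetical outline of one turn of the Neural Chess Player. The three
# callables stand in for the camera, the model, and the game clock.
def play_one_turn(capture_image, predict_move, press_timer):
    image = capture_image()      # snapshot of the board from the overhead camera
    move = predict_move(image)   # the engine's reply for the observed position
    press_timer()                # tell the terminal timer the turn has ended
    return move
```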

There are a few steps for creating our data:

• Gather the main objects needed. Lay a chessboard flat, placing all the chess pieces beside it, arranged by their point values. Then place the camera and laptop next to the chess pieces.

• Then set up the camera on top of the laid-out chess set so that it focuses on only one square tile.

• Then, after gathering all the chess pieces, we do the data augmentation process. The first step is to import all the required libraries.

import os
from PIL import Image
import numpy as np

Then we need to get all the filenames of the images:

white_images = os.listdir('../../input/images/White')
black_images = os.listdir('../../input/images/Black')

Next, we create our helper function convert_images, inside which we do the augmentation/synthesis. Our dataset is only about 400–500 images, so we rely on data augmentation to increase this number 4x or 5x, depending heavily on how many and which augmentation methods we use (the augmentation methods must be decided up front). This example only applies flips and rotations. Resizing is also done inside the function, because working with a lot of pixels makes the learning process longer and more compute-intensive, and currently we don't have a powerful graphics card lying around.

def convert_images(filename, size, name, key, counter):
    # Load the tile image and resize it to a uniform size
    img = Image.open(filename)
    img = img.resize(size)
    timestamp = counter

    # Save the resized original
    img.save('../../input/images/modified/{1}/resized_{1}_{0}_{2}.png'.format(name, key, timestamp))

    # Augmentation: rotated copies
    rotate45 = img.rotate(45)
    rotate45.save('../../input/images/modified/{1}/rotate45_{1}_{0}_{2}.png'.format(name, key, timestamp))
    rotate90 = img.rotate(90)
    rotate90.save('../../input/images/modified/{1}/rotate90_{1}_{0}_{2}.png'.format(name, key, timestamp))

    # Augmentation: horizontally flipped copy
    flip = img.transpose(Image.FLIP_LEFT_RIGHT)
    flip.save('../../input/images/modified/{1}/flip_{1}_{0}_{2}.png'.format(name, key, timestamp))
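Note that each call to convert_images writes four files (the resized original, two rotations, and a flip), so the 4x increase mentioned earlier follows directly; the 450 below is just the midpoint of the 400–500 range:

```python
# Each source tile yields 4 saved files: resized, rotate45, rotate90, flip.
outputs_per_image = 4
source_images = 450              # midpoint of the 400-500 images mentioned above
augmented_total = outputs_per_image * source_images
print(augmented_total)           # 1800
```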

After defining the function, here is an example that shows how to use it:

size = (48, 48)
complete_file_location = os.path.abspath(os.path.join('../../input/images/'))

for idx, whi in enumerate(white_images):
    print(idx)
    # The filename tokens give us the colour folder and the piece name
    white_piece = whi.split('_')
    # complete_name is a helper (not shown here) that joins the base
    # directory, the colour folder, and the filename into a full path
    complete_white = complete_name(complete_file_location, white_piece[1], whi)
    print(complete_white)
    convert_images(complete_white, size, white_piece[2], white_piece[1], idx)

for idx, bla in enumerate(black_images):
    black_piece = bla.split('_')
    complete_black = complete_name(complete_file_location, black_piece[1], bla)
    print(complete_black)
    convert_images(complete_black, size, black_piece[2], black_piece[1], idx)

You could now do an initial training run on any model and check whether the images are sufficient for the task or whether we still need more. If we need to add more images, we can do so by recording more videos in different environments and running the code again.
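Before any such run, the saved tiles still need to be turned into arrays. A minimal sketch, assuming the filename scheme used above (second underscore-separated token is the colour, e.g. 'White') and that each folder holds tiles of one kind:

```python
import os

import numpy as np
from PIL import Image

def load_dataset(folder):
    """Load every PNG tile in `folder` into one float array plus colour labels."""
    images, labels = [], []
    for fname in sorted(os.listdir(folder)):
        if not fname.endswith('.png'):
            continue
        img = Image.open(os.path.join(folder, fname))
        images.append(np.asarray(img, dtype=np.float32) / 255.0)  # scale to [0, 1]
        labels.append(fname.split('_')[1])   # assumed: second token is the colour
    return np.stack(images), labels
```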