I want to train a neural network to play the 2048 game. I know that NNs aren't a great choice for state games like 2048, but my goal is for the NN to play the game like an experienced human, i.e. moving the tiles in only three directions.
But I can't figure out how to make the NN self-train, since we don't know the correct output. Normally, e.g. in regression, you know the correct output, so you can compute a loss (e.g. mean squared error) and update the weights. But in 2048 the correct output is unknown. (Of course, you could compute the score gained for each direction you can move, and treat the direction with the highest score_after_move - previous_score as the correct output, but I don't think that is the way to make the NN self-learn.) So is it possible to define a loss function for the 2048 game? Ideally a differentiable one.
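Just to make that greedy idea concrete, here is a minimal sketch in plain NumPy of what I mean: turn the per-direction score gains into a one-hot pseudo-label and use a differentiable cross-entropy loss against the network's 4 outputs. The score deltas, the direction ordering and the example numbers are placeholders; in practice they would come from my game class:

```python
import numpy as np

def greedy_target(score_deltas):
    """Turn per-direction score gains into a one-hot pseudo-label.

    score_deltas: length-4 array of score_after_move - previous_score
    for each direction (placeholder; would come from the game class).
    """
    target = np.zeros(4)
    target[np.argmax(score_deltas)] = 1.0
    return target

def cross_entropy_loss(logits, target):
    """Differentiable loss: softmax over the 4 network outputs,
    then cross-entropy against the one-hot pseudo-label."""
    logits = logits - np.max(logits)              # numerical stability
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return -np.sum(target * np.log(probs + 1e-12))

# Example: the "right" move gains the most score this turn.
deltas = np.array([0.0, 4.0, 0.0, 2.0])           # up, right, down, left
target = greedy_target(deltas)                    # -> [0, 1, 0, 0]
logits = np.array([0.3, 1.2, -0.5, 0.1])          # made-up network output
print(cross_entropy_loss(logits, target))
```

But as I said, this only imitates a one-step greedy player, which is why I doubt it is the right way to self-learn.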
The next question is when to update the weights: after each move, or only after a complete game (game over)?
If it's important: my NN topology will be simple for now:

2D matrix of the game board -> 2D matrix of input neurons -> 2D fully-connected hidden layer -> 1D layer of 4 neurons
So each tile will be fed into the corresponding neuron in the first layer (is there a special name for a 2D fully-connected layer?). The expected output of the last layer is a vector of length 4, e.g. [1, 0, 0, 0] meaning "move up".
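To illustrate, a rough forward-pass sketch of that topology in NumPy could look like this. The layer sizes, the tanh activation and the log2 tile encoding are just placeholder choices, not my actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: 4x4 board flattened to 16 inputs, 16 hidden units, 4 outputs.
W1 = rng.normal(0, 0.1, size=(16, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, size=(16, 4))
b2 = np.zeros(4)

def forward(board):
    """Forward pass of the topology sketched above.

    board: 4x4 NumPy array of tile values (here encoded as log2 of the tiles).
    Returns a length-4 probability vector over [up, right, down, left].
    """
    x = board.reshape(-1)                         # 2D board -> 1D input vector
    h = np.tanh(x @ W1 + b1)                      # fully-connected hidden layer
    logits = h @ W2 + b2                          # 4 output neurons
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs

board = np.zeros((4, 4))
board[0, 0], board[0, 1] = 1, 1                   # two "2" tiles stored as log2 values
print(forward(board))
```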
So far I have implemented a headless class for the 2048 game (in Python/NumPy), because using visual input would be slow and more work.
P.S. Maybe I am thinking about NN learning for this game (or games in general) incorrectly. Feel free to show me a better way; I would appreciate it. Thanks :)