A reasonable batch size depends on the size of your training set. Since your training set has about 1,000 samples, a batch size of 32 is a sensible starting point.
The batch size can grow with the training set: for example, if your training set has tens of thousands of observations, a correspondingly larger batch size is often appropriate.
Larger batch sizes make faster progress through each epoch and give smoother gradient estimates, but they don't always converge in fewer updates. Smaller batch sizes take longer per epoch, yet their noisier updates can help the model converge to a better solution.
Models also tend to improve with more epochs of training: when learning progresses slowly (as with small batches), the extra epochs give the optimizer time to converge.
Some data visualization can also give you better insight into your data and make it easier to pick a batch size yourself.
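As a rough sketch of how batch size and dataset size interact (the numbers here just mirror the ~1,000-sample example above), you can count how many gradient updates one epoch produces for a given batch size:

```python
import math

def minibatch_ranges(n_samples, batch_size):
    """Yield (start, end) index ranges that cover n_samples in chunks of batch_size."""
    for start in range(0, n_samples, batch_size):
        yield start, min(start + batch_size, n_samples)

# ~1,000 samples with a batch size of 32, as in the example above
n_samples, batch_size = 1000, 32

steps_per_epoch = math.ceil(n_samples / batch_size)
print(steps_per_epoch)  # 32 gradient updates per epoch

# The final batch is smaller when batch_size doesn't divide n_samples evenly
last_start, last_end = list(minibatch_ranges(n_samples, batch_size))[-1]
print(last_end - last_start)  # 8 samples in the last batch
```

Doubling the batch size roughly halves the updates per epoch, which is why larger batches speed up each epoch but may need more epochs to reach the same quality.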
Hope this answer helps.