LSTM is a type of Recurrent Neural Network (RNN). RNNs and LSTMs are used on sequential or time-series data. LSTMs are known for their ability to capture both long- and short-term effects of past events.
Using LSTMs:
You have to decide what your encoded vector looks like. Suppose you want it to be an array of 20 elements, a 1-dimensional vector, so shape (None,20). Its size is up to you; there is no clear rule for knowing the ideal one.
And your input must be three-dimensional, such as your (1200,10,5). In Keras summaries and error messages it will be shown as (None,10,5), because "None" represents the batch size, which can vary each time you train or predict.
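Purely for illustration, here is a random placeholder with that shape (your real data replaces it):

import numpy as np
data = np.random.random((1200, 10, 5))    #1200 samples, 10 time steps, 5 features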
For example:
from keras.layers import *
from keras.models import Model

inpE = Input((10,5))                                  #10 time steps, 5 features; batch size omitted
outE = LSTM(units=20, return_sequences=False)(inpE)   #only the final state: shape (None,20)
This is enough for a very very simple encoder resulting in an array with 20 elements. Let's create the model:
encoder = Model(inpE, outE)
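You can sanity-check the shapes here:

encoder.summary()    #the last layer should show output shape (None, 20)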
For the decoder, it is complicated. You don't have an actual sequence anymore, but a static, meaningful vector. You may still want to use LSTMs; they will suppose the vector is a sequence.
But here, since the input has a shape (None,20), you must first reshape it to some 3-dimensional array in order to attach an LSTM layer next.
Code:
inpD = Input((20,))
outD = Reshape((10,2))(inpD)    #make the vector 3D again: 10 steps of 2 features
If you don't have 10 steps anymore, you won't be able to just enable "return_sequences" and have the output you want. You'll have to work a little. Actually, it's not necessary to use "return_sequences" or even to use LSTMs, but you may do that.
outD1 = LSTM(5, return_sequences=True)(outD)    #back to shape (None,10,5)
You could work in many other ways, such as simply creating a 50-cell LSTM without returning sequences and then reshaping the result:
alternativeOut = LSTM(50, return_sequences=False)(outD)    #shape (None,50)
alternativeOut = Reshape((10,5))(alternativeOut)           #reshape to (None,10,5)
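And since, as said above, you don't even need LSTMs here, a plain Dense layer is another possible variant (hypothetical, just to illustrate the point):

denseOut = Dense(50)(inpD)              #works directly on the 2D (None,20) vector
denseOut = Reshape((10,5))(denseOut)    #same (None,10,5) output shape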
And our models go:
decoder = Model(inpD, outD1)
alternativeDecoder = Model(inpD, alternativeOut)
After that, you unite the models in your own code and train the autoencoder. All three models share the same weights, so you can get results from the encoder just by calling its predict method.
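A minimal sketch of that joining step, reusing the hypothetical data array from above (epochs and batch size are arbitrary):

#chain the encoder and decoder models into one trainable autoencoder
autoencoderInput = Input((10,5))
encoded = encoder(autoencoderInput)               #(None, 20)
decoded = decoder(encoded)                        #(None, 10, 5)
autoencoder = Model(autoencoderInput, decoded)

autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(data, data, epochs=100, batch_size=32)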
encoderPredictions = encoder.predict(data)
What I often see about LSTMs for generating sequences is something like predicting the next element.
You take just a few elements of the sequence and try to predict the next one; then you shift the window one step forward and repeat. This can be helpful for generating sequences.
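A sketch of that windowing, with a hypothetical series array of shape (steps, features):

import numpy as np

series = np.random.random((1200, 5))    #hypothetical series: 1200 steps, 5 features
windowSize = 10

#overlapping input windows, each paired with the step that follows it
X = np.array([series[i:i+windowSize] for i in range(len(series) - windowSize)])
y = np.array([series[i+windowSize] for i in range(len(series) - windowSize)])
print(X.shape, y.shape)    #(1190, 10, 5) (1190, 5)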
Hope this answer helps.