
I'm trying to build an LSTM autoencoder with the goal of getting a fixed-size vector from a sequence that represents the sequence as well as possible. This autoencoder consists of two parts:

LSTM Encoder: Takes a sequence and returns an output vector (return_sequences = False)

LSTM Decoder: Takes an output vector and returns a sequence (return_sequences = True)

So, in the end, the encoder is a many-to-one LSTM and the decoder is a one-to-many LSTM.

If someone wants to try it out, here is my procedure for generating random sequences of moving ones (including padding):

import random

def getNotSoRandomList(x):
    # Length-8 one-hot vector with a 1 at position x (all zeros when x > 7)
    rlen = 8
    rlist = [0 for _ in range(rlen)]
    if x <= 7:
        rlist[x] = 1
    return rlist

# 5000 sequences of random length 0-10; each timestep is an 8-element vector
sequence = [[getNotSoRandomList(x) for x in range(round(random.uniform(0, 10)))] for y in range(5000)]

### Padding afterward

from keras.preprocessing import sequence as seq

data = seq.pad_sequences(
    sequences=sequence,
    padding='post',
    maxlen=None,
    truncating='post',
    value=0.
)
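If you want to check what this produces without installing Keras, here is a NumPy-only sketch that mimics what pad_sequences with post-padding does for this generator (the padding loop here is my own illustration, not the library's implementation):

```python
import random
import numpy as np

def getNotSoRandomList(x):
    # Length-8 one-hot vector with a 1 at position x (all zeros when x > 7)
    rlen = 8
    rlist = [0 for _ in range(rlen)]
    if x <= 7:
        rlist[x] = 1
    return rlist

sequence = [[getNotSoRandomList(x) for x in range(round(random.uniform(0, 10)))]
            for y in range(5000)]

# Post-pad every sequence to the longest length with zero vectors,
# mirroring pad_sequences(padding='post', value=0.)
maxlen = max(len(s) for s in sequence)
data = np.zeros((len(sequence), maxlen, 8))
for i, s in enumerate(sequence):
    if s:  # length-0 sequences are possible and stay all-zero
        data[i, :len(s)] = s

print(data.shape)  # typically (5000, 10, 8), since lengths go up to 10
```

The resulting 3-dimensional array (samples, timesteps, features) is exactly the input shape the LSTM encoder below expects.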

1 Answer

LSTM is a type of Recurrent Neural Network (RNN). RNNs and LSTMs are used on sequential or time-series data. LSTMs are known for their ability to capture both long- and short-term effects of past events.

Using LSTMs:

You have to decide what your encoded vector looks like. Suppose you want it to be a 1-dimensional array of 20 elements, i.e. shape (None, 20). Its size is up to you; there is no clear rule for finding the ideal one.

And your input must be three-dimensional, such as your (1200, 10, 5). In Keras summaries and error messages it will be shown as (None, 10, 5), where "None" represents the batch size, which can vary each time you train or predict.

For example:

from keras.layers import *
from keras.models import Model

inpE = Input((10, 5))
outE = LSTM(units=20, return_sequences=False)(inpE)

This is enough for a very very simple encoder resulting in an array with 20 elements. Let's create the model:

encoder = Model(inpE, outE) 

  

For the decoder, it is more complicated. You no longer have an actual sequence, but a single static, meaningful vector. You may still want to use LSTMs; they will treat the vector as a sequence.

But here, since the input has shape (None, 20), you must first reshape it into some 3-dimensional array in order to attach an LSTM layer next.

Code:

inpD = Input((20,))
outD = Reshape((10, 2))(inpD)   # 20 elements reshaped into 10 steps of 2 features

    

If you don't have 10 steps anymore, you won't be able to just enable "return_sequences" and get the output you want; you'll have to work a little. Actually, it's not necessary to use "return_sequences", or even to use LSTMs at all, but you may do that.

outD1 = LSTM(5, return_sequences=True)(outD)

You could work in many other ways, such as simply creating a 50-cell LSTM without returning sequences and then reshaping the result:

alternativeOut = LSTM(50, return_sequences=False)(outD)

alternativeOut = Reshape((10, 5))(alternativeOut)

And our model goes:

decoder = Model(inpD, outD1)

alternativeDecoder = Model(inpD, alternativeOut)

After that, you unite the models with your own code and train the autoencoder. All three models share the same weights, so you can get encoded results just by calling the encoder's predict method.

encoderPredictions = encoder.predict(data)
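For completeness, here is a minimal sketch of that uniting step, reusing the layer sizes from above (10 timesteps, 5 features, a 20-element encoding). The variable names and the Adam/MSE training setup are illustrative choices of mine, not part of the original answer:

```python
import numpy as np
from keras.layers import Input, LSTM, Reshape
from keras.models import Model

# Encoder: (None, 10, 5) -> (None, 20)
inpE = Input((10, 5))
outE = LSTM(units=20, return_sequences=False)(inpE)
encoder = Model(inpE, outE)

# Decoder: (None, 20) -> (None, 10, 5); reshape the code, then LSTM back to 5 features
inpD = Input((20,))
outD = Reshape((10, 2))(inpD)
outD1 = LSTM(5, return_sequences=True)(outD)
decoder = Model(inpD, outD1)

# Unite them: the autoencoder reuses the same layers, so all three models share weights
autoencoderInput = Input((10, 5))
autoencoder = Model(autoencoderInput, decoder(encoder(autoencoderInput)))
autoencoder.compile(optimizer='adam', loss='mse')

x = np.random.rand(32, 10, 5)
autoencoder.fit(x, x, epochs=1, verbose=0)

codes = encoder.predict(x, verbose=0)
print(codes.shape)  # (32, 20)
```

Because the autoencoder is built from the encoder and decoder models themselves, training it updates the encoder's weights, and encoder.predict afterwards gives the trained fixed-size codes.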

What I often see about LSTMs for generating sequences is something like predicting the next element: you take just a few elements of the sequence and try to predict the next one, then slide the segment one step forward, and so on. This can be helpful for generating sequences.
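That sliding-window idea can be sketched with plain NumPy (the toy sequence and window size here are made up for illustration):

```python
import numpy as np

seq = np.arange(20)  # a toy 1-D sequence: 0, 1, ..., 19
window = 3

# Each input is `window` consecutive elements; the target is the element right after them
X = np.array([seq[i:i + window] for i in range(len(seq) - window)])
y = seq[window:]

print(X.shape, y.shape)  # (17, 3) (17,)
print(X[0], y[0])        # [0 1 2] 3
```

Each (X, y) pair is one training example for a next-element predictor; for an LSTM you would additionally expand X to shape (samples, window, features).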

Hope this answer helps.
