in Machine Learning by (19k points)

I have a data set from a number of users (nUsers). Each user is sampled randomly in time, so nSamples is not constant across users. Each sample has a number of features (nFeatures). For example:

nUsers = 3 ---> 3 users

nSamples = [32, 52, 21] ---> the first user was sampled 32 times, the second user 52 times, etc.

nFeatures = 10 ---> constant number of features for each sample.

I would like the LSTM to produce its current prediction based on the current features and on its previous predictions for the same user. Can I do that in Keras using the LSTM layer? I have two problems: 1. Each user has a time series of a different length. How do I incorporate this? 2. How do I add the previous predictions to the current time step's feature space in order to make the current prediction?
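For concreteness, here is a minimal sketch of how this data could be laid out (random toy values, just to show the shapes):

import numpy as np

nUsers = 3
nSamples = [32, 52, 21]   # samples per user (not constant)
nFeatures = 10

# one 2D array of shape (nSamples[i], nFeatures) per user
data = [np.random.rand(n, nFeatures) for n in nSamples]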

Thanks for your help!

1 Answer

by (33.1k points)

The first thing to consider is what the "batch size" should be. In your problem, each user is a sequence, so the users can be your examples: at first, nExamples = nUsers.

Next, you should define a maximum "look-back" length. Say you can predict the next element from the 7 previous ones, for instance, rather than from the entire sequence.

For that, you should separate your data like this:

example 1: x[0] = [s0, s1, s2, ..., s6] | y[0] = s7   

example 2: x[1] = [s1, s2, s3, ..., s7] | y[1] = s8

Where sn is a sample with 10 features. Usually, it doesn't matter if you mix users. Create these little segments for all users and put everything together.

This will result in arrays shaped like

x.shape -> (BatchSize, 7, 10) -> (BatchSize, 7 step sequences, 10 features)   

y.shape -> (BatchSize, 10)

If you want to predict a single value instead of a full sample, just replace y with that value. That results in

y.shape -> (BatchSize,)
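A minimal sketch of this segmentation with NumPy (the names data, lookback, x_list and y_list are only for illustration, and the random arrays stand in for your real samples):

import numpy as np

# toy data matching the question: 3 users with 32, 52 and 21 samples of 10 features
data = [np.random.rand(n, 10) for n in (32, 52, 21)]

lookback = 7
x_list, y_list = [], []
for user_seq in data:                              # one 2D array per user
    for i in range(len(user_seq) - lookback):
        x_list.append(user_seq[i:i + lookback])    # the 7 previous samples
        y_list.append(user_seq[i + lookback])      # the next sample is the target

x = np.array(x_list)   # -> (BatchSize, 7, 10)
y = np.array(y_list)   # -> (BatchSize, 10)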

Alternatively, if you want to use the entire sequences rather than fixed windows, take your longest sequence, which in your example is 52. Then:

x.shape -> (Users, 52, 10)

You will then have to "pad" the shorter sequences to fill the blanks.

You can, for instance, fill the beginning of the sequences with zero features, such as:

x[0] = [s0, s1, s2, ......., s51] -> user with the longest sequence    

x[1] = [0 , 0 , s0, s1, ..., s49] -> user with a shorter sequence

Keras has tools for handling such variable-length sequences: pad_sequences does the padding and a Masking layer makes the LSTM ignore the padded steps, so you can still use a fixed-size array.
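A minimal sketch of that padding approach, assuming TensorFlow's bundled Keras (with standalone Keras, drop the tensorflow. prefix); pad_sequences pads at the beginning by default, and Masking tells the LSTM to skip the zero-padded steps:

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Masking, LSTM, Dense

# toy data: 3 users with 32, 52 and 21 samples of 10 features each
data = [np.random.rand(n, 10) for n in (32, 52, 21)]

# pad every user to the longest sequence (52) with zeros at the beginning
x = pad_sequences(data, padding='pre', dtype='float32')   # -> (3, 52, 10)

model = Sequential([
    Masking(mask_value=0.0, input_shape=(x.shape[1], x.shape[2])),  # skip padded steps
    LSTM(32),      # hidden size 32 is an arbitrary choice
    Dense(10)      # predict a 10-feature sample; adjust to your target
])
model.compile(optimizer='adam', loss='mse')

Note that Masking skips a time step only when all of its features equal the mask value, so padding with zeros is safe as long as no real sample is entirely zero.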

Hope this answer helps.
