The first thing to consider in this problem is what the "batch size" should be. In your problem, each user is a sequence, so users can play the role of examples: the number of examples = the number of users (nExamples = nUsers).
Next, define a maximum "look back" length. Say you predict the next element by looking at the 7 previous ones, for instance (instead of looking at the entire sequence).
For that, you should separate your data like this:
example 1: x[0] = [s0, s1, s2, ..., s6] | y[0] = s7
example 2: x[1] = [s1, s2, s3, ..., s7] | y[1] = s8
Where sn is a sample with 10 features. Usually, it doesn't matter if you mix users. Create these little segments for all users and put everything together.
This will result in arrays shaped like
x.shape -> (BatchSize, 7, 10) -> (BatchSize, 7 time steps, 10 features)
y.shape -> (BatchSize, 10)
If you want to predict a single value instead of all 10 features, just replace y with that value, which results in
y.shape -> (BatchSize,)
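Here is a minimal sketch of this windowing, assuming the raw data is a list of per-user NumPy arrays shaped (timesteps, 10); the names make_windows and users are made up for the example:

import numpy as np

LOOK_BACK = 7  # predict the next step from the 7 previous ones

def make_windows(user_sequences, look_back=LOOK_BACK):
    # user_sequences: list of arrays, one per user, each shaped (timesteps, n_features)
    xs, ys = [], []
    for seq in user_sequences:
        for start in range(len(seq) - look_back):
            xs.append(seq[start:start + look_back])  # the 7 previous steps
            ys.append(seq[start + look_back])        # the step to predict
    return np.array(xs), np.array(ys)

# fake data: 3 users with different sequence lengths, 10 features each
users = [np.random.rand(n, 10) for n in (52, 20, 35)]
x, y = make_windows(users)
print(x.shape)  # (BatchSize, 7, 10)
print(y.shape)  # (BatchSize, 10)

Here BatchSize is the total number of segments collected across all users, so it grows with the length of each user's sequence.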
Alternatively, you can use each user's entire sequence as a single example. Assume your longest sequence, as in your example, is 52. Then:
x.shape -> (Users, 52, 10).
Then you will have to "pad" the sequences to fill the blanks.
You can, for instance, fill the beginning of the sequences with zero features, such as:
x[0] = [s0, s1, s2, ......., s51] -> user with the longest sequence
x[1] = [0 , 0 , s0, s1, ..., s49] -> user with a shorter sequence
Keras has utilities for variable-length sequences that do exactly this kind of padding (for instance, the pad_sequences preprocessing function), and you still end up with a fixed-size array.
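A minimal sketch of the padding, plus a Masking layer so the zero-padded steps are ignored, reusing the hypothetical users list from the sketch above (pad_users is a made-up helper name):

import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

def pad_users(user_sequences, max_len=None):
    # left-pad each (timesteps, n_features) array with zeros up to max_len
    if max_len is None:
        max_len = max(len(seq) for seq in user_sequences)
    n_features = user_sequences[0].shape[1]
    padded = np.zeros((len(user_sequences), max_len, n_features))
    for i, seq in enumerate(user_sequences):
        padded[i, max_len - len(seq):] = seq  # zeros remain at the beginning
    return padded

x = pad_users(users)   # users: list of per-user arrays as above
print(x.shape)         # (Users, 52, 10)

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(52, 10)))  # skip all-zero padded steps
model.add(LSTM(32))
model.add(Dense(10))   # predict the 10 features of the next step, for instance

With the masking in place, the LSTM only processes each user's real steps, even though every array has the same fixed length.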
Hope this answer helps.