

I built a simple recommendation system for the MovieLens DB, inspired by https://databricks-training.s3.amazonaws.com/movie-recommendation-with-mllib.html.

I also have problems with explicit training, like in this question: "Apache Spark ALS collaborative filtering results. They don't make sense". Implicit training (on both explicit and implicit data) gives me reasonable results, but explicit training doesn't.

While this is OK for me for now, I'm curious how to update the model. My current solution works like this (see the sketch after this list):

  1. have all user ratings
  2. generate the model
  3. get recommendations for a user
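
For reference, this batch flow might look like the following minimal PySpark MLlib sketch (the file path, rank, and regularization here are illustrative assumptions, not my actual settings):

    from pyspark import SparkContext
    from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating

    sc = SparkContext(appName="MovieLensALS")

    # 1. all user ratings (MovieLens format: user::movie::rating::timestamp)
    def parse(line):
        u, m, r, _ = line.split("::")
        return Rating(int(u), int(m), float(r))

    ratings = sc.textFile("ratings.dat").map(parse)

    # 2. generate the model (explicit feedback)
    model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)

    # optionally persist it for reuse in nightly batches
    model.save(sc, "als_model")
    # model = MatrixFactorizationModel.load(sc, "als_model")

    # 3. recommendations for a user that is already in the model
    print(model.recommendProducts(1, 10))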

I want to have a flow like this:

  1. have a base of ratings
  2. generate the model once (optionally save & load it)
  3. get some ratings from one user on 10 random movies (ratings that are not in the model!)
  4. get recommendations using the model and the new user's ratings

Therefore I need to update my model without completely recomputing it. Is there any way to do so?

While the first way is good for batch processing (like generating recommendations in nightly batches), the second way would be good for near-live generation of recommendations.

1 Answer


To get predictions for new users using the trained model:

For a user in the model, you use their latent representation: a vector u of size f (the number of factors), which is multiplied by the product latent factor matrix (the matrix made up of the latent representations of all products, i.e. a bunch of vectors of size f) to give a score for each product. For a new user, the problem is that you don't have access to their latent representation. What you can do instead is compute an approximate latent representation for the new user by multiplying their ratings vector by the transpose of the product matrix.

That is, if the user latent matrix is u and the product latent matrix is v (with each product's latent vector of size f stored as a column):

  - for user i in the model, you get scores by computing u_i * v;
  - for a new user, you don't have a latent representation, so take their full ratings vector full_u (one entry per product) and compute full_u * v^t * v.

Here full_u * v^t approximates the latent factors for the new user, and multiplying that by v should give reasonable recommendations.
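
As a concrete illustration, here is a minimal NumPy sketch of that computation (the shapes and data are made up; in practice v would be built from the trained model's product factors):

    import numpy as np

    f, num_products = 10, 1000                   # f = number of latent factors
    rng = np.random.default_rng(42)
    v = rng.normal(size=(f, num_products))       # product latent matrix, one column per product

    # full_u: the new user's ratings over all products (zero where unrated)
    full_u = np.zeros(num_products)
    rated = rng.choice(num_products, size=10, replace=False)
    full_u[rated] = rng.integers(1, 6, size=10)  # e.g. 10 ratings on random movies

    # approximate the new user's latent representation: full_u * v^t -> shape (f,)
    u_approx = full_u @ v.T

    # score every product: (full_u * v^t) * v -> shape (num_products,)
    scores = u_approx @ v

    # recommend the best-scoring products the user has not rated yet
    rated_set = set(rated.tolist())
    top10 = [p for p in np.argsort(-scores) if p not in rated_set][:10]
    print(top10)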

This allows you to compute predictions for new users without redoing the heavy model computation, which you now only have to do once in a while. So you can have batch processing at night and still make predictions for new users during the day.
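
To connect the two pieces, something like the following hypothetical glue step (assuming dense product ids starting at 0, and the sc and np objects from the sketches above) could turn the nightly model into the v matrix used for daytime fold-in scoring:

    from pyspark.mllib.recommendation import MatrixFactorizationModel

    # load the model produced by the nightly batch job
    model = MatrixFactorizationModel.load(sc, "als_model")

    # collect the product latent factors: a list of (productId, [f floats])
    features = model.productFeatures().collect()
    f = len(features[0][1])
    num_products = max(pid for pid, _ in features) + 1

    # build the dense product matrix v with one column per product
    v = np.zeros((f, num_products))
    for pid, vec in features:
        v[:, pid] = vec

    # v can now score brand-new users via full_u * v^t * v, without retraining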


Hope this answer helps you! 
