A solution here is to enable checkpointing, which truncates the long RDD lineage that iterative algorithms build up and so prevents the stack overflow. First, create a new directory to store the checkpoints. Then point your SparkContext at that directory. In Python:
sc.setCheckpointDir('checkpoint/')
You may also want to set a checkpoint interval on the ALS itself, though I haven't been able to confirm whether that makes a difference. If you do want to set one (probably not necessary), you can simply do:
ALS.checkpointInterval = 2
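Putting both steps together, here is a minimal end-to-end sketch. It assumes PySpark is installed and uses a tiny synthetic ratings set and a local checkpoint path purely for illustration:

```python
import os
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext("local", "als-checkpoint-demo")

# Point the context at a checkpoint directory; Spark materializes
# intermediate RDDs here, truncating the lineage that otherwise
# grows with each ALS iteration and can overflow the stack.
os.makedirs("checkpoint/", exist_ok=True)
sc.setCheckpointDir("checkpoint/")

# Optionally checkpoint every 2 iterations during training.
ALS.checkpointInterval = 2

# Tiny synthetic ratings: Rating(user, product, rating).
ratings = sc.parallelize([
    Rating(1, 1, 5.0), Rating(1, 2, 1.0),
    Rating(2, 1, 4.0), Rating(2, 3, 2.0),
])

# A high iteration count is exactly the case where checkpointing
# matters; without it, each iteration lengthens the lineage chain.
model = ALS.train(ratings, rank=5, iterations=20)
print(model.predict(1, 3))

sc.stop()
```

The checkpoint directory must be writable by the Spark workers; on a real cluster you would typically use an HDFS path rather than a local one.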