What you are asking for is possible, but it is far from trivial. What you need here is a Java-friendly wrapper, so you don't have to deal with Scala features that cannot be easily expressed in plain Java and as a result don't play well with the Py4J gateway.
Assuming your class is in the package com.example and you have a Python DataFrame called df
df = ... # Python DataFrame
you'll have to:
jvm = sc._jvm                       # Py4J gateway to the JVM
ssqlContext = sqlContext._ssql_ctx  # underlying Scala SQLContext
jdf = df._jdf                       # Java DataFrame backing the Python one
simpleObject = jvm.com.example.SimpleClass(ssqlContext, jdf, "v")
from pyspark.sql import DataFrame
DataFrame(simpleObject.exe(), ssqlContext)  # wrap the returned Java DataFrame back into a Python DataFrame
The result should be a valid PySpark DataFrame. You can, of course, combine all the steps into a single call, as sketched below.
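For example, here is a minimal sketch of such a one-call helper, assuming the SimpleClass constructor and exe method from above (the function name simple_class_exe is just an illustrative choice):

from pyspark.sql import DataFrame

def simple_class_exe(df, column):
    # Hypothetical helper: bundles the Py4J plumbing into one call.
    sc = df._sc                      # SparkContext behind the DataFrame
    ssql_ctx = df.sql_ctx._ssql_ctx  # underlying Scala SQLContext
    jdf = df._jdf                    # Java DataFrame backing the Python one
    jresult = sc._jvm.com.example.SimpleClass(ssql_ctx, jdf, column).exe()
    return DataFrame(jresult, df.sql_ctx)

result = simple_class_exe(df, "v")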
Important note: keep in mind that this approach works only if the Python code is executed solely on the driver. It won't work if it is used inside a Python action or transformation, because the Py4J gateway exists only in the driver process.
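For illustration, a call pattern like the following would fail (a sketch of the failure mode, reusing the names from above):

# FAILS: the lambda is shipped to the executors, where sc and the Py4J
# gateway do not exist; PySpark typically raises a serialization error
# when it tries to pickle the closure.
df.rdd.map(lambda row: jvm.com.example.SimpleClass(ssqlContext, df._jdf, "v").exe()).count()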