It doesn't work because Spark's toDF() method can't build a DataFrame from an RDD of bare scalars like floats. Schema inference only works on row-like elements (tuples, Rows, dicts), so for simple types such as floats or integers you must wrap each value in a tuple and supply a column name (or pass an explicit schema).
Here's how to do it.
Convert the RDD to a DataFrame with a column name:
Map each float into a one-element tuple, then call toDF() with the column name:
from pyspark.sql import SparkSession
# Start SparkSession
spark = SparkSession.builder.appName("Example").getOrCreate()
# Create an RDD of floats
sc = spark.sparkContext
myFloatRdd = sc.parallelize([1.0, 2.0, 3.0])
# Wrap each float in a tuple so toDF() can infer a one-column schema
df = myFloatRdd.map(lambda x: (x,)).toDF(["value"])
df.show()