
I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML module (Pipeline ML) from Spark. I've just installed a fresh Spark 1.5.0 on Ubuntu 14.04 (no spark-env.sh configured).

My my_script.py is:

from pyspark.mllib.util import MLUtils
from pyspark import SparkContext

sc = SparkContext("local", "Teste Original")
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()


and I'm running it with: ./spark-submit my_script.py

And I get the error:

Traceback (most recent call last):
File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/pipeline_teste_original.py", line 34, in <module>
data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
AttributeError: 'PipelinedRDD' object has no attribute 'toDF'

1 Answer


The toDF method is a monkey patch executed inside the SparkSession constructor (the SQLContext constructor in Spark 1.x), so to be able to use it you have to create a SQLContext (or SparkSession) first:

# SQLContext or HiveContext in Spark 1.x
from pyspark.sql import SparkSession
from pyspark import SparkContext

sc = SparkContext()

rdd = sc.parallelize([("a", 1)])
hasattr(rdd, "toDF")
## False

spark = SparkSession(sc)

hasattr(rdd, "toDF")
## True

rdd.toDF().show()
## +---+---+
## | _1| _2|
## +---+---+
## |  a|  1|
## +---+---+

Not to mention that you need a SQLContext to work with DataFrames anyway.
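
Applied to your original script, the fix is just to construct the SQLContext before calling toDF. A minimal sketch for Spark 1.5, assuming the same /home/svm_capture path from your example:

from pyspark import SparkContext
from pyspark.sql import SQLContext  # use SparkSession in Spark 2.x
from pyspark.mllib.util import MLUtils

sc = SparkContext("local", "Teste Original")
sqlContext = SQLContext(sc)  # constructing it patches toDF onto RDDs

# loadLibSVMFile returns an RDD of LabeledPoint; toDF now succeeds
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()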
