0 votes
in Big Data Hadoop & Spark by (11.4k points)

I'm wondering if there is a concise way to run ML (e.g. KMeans) on a DataFrame in pyspark when the features are spread across multiple numeric columns.

I.e. as in the Iris dataset:

(a1=5.1, a2=3.5, a3=1.4, a4=0.2, id=u'id_1', label=u'Iris-setosa', binomial_label=1)

I'd like to use KMeans without manually adding a feature-vector column to the dataset and hardcoding the original column names repeatedly in the code.

The solution I'd like to improve:

from pyspark.mllib.linalg import Vectors
from pyspark.sql import Row
from pyspark.ml.clustering import KMeans, KMeansModel

iris = sqlContext.read.parquet("/opt/data/iris.parquet")
# Row(a1=5.1, a2=3.5, a3=1.4, a4=0.2, id=u'id_1', label=u'Iris-setosa', binomial_label=1)

df = r: Row(
                    id =,
                    a1 = r.a1,
                    a2 = r.a2,
                    a3 = r.a3,
                    a4 = r.a4,
                    label = r.label,
                    features = Vectors.dense(r.a1, r.a2, r.a3, r.a4)))\
        .toDF()

kmeans_estimator = KMeans()\
    .setFeaturesCol("features")\
    .setPredictionCol("prediction")
kmeans_transformer =

predicted_df = kmeans_transformer.transform(df).drop("features")
# Row(a1=5.1, a2=3.5, a3=1.4, a4=0.2, binomial_label=1, id=u'id_1', label=u'Iris-setosa', prediction=1)

I'm looking for a solution along these lines:

feature_cols = ["a1", "a2", "a3", "a4"]
prediction_col_name = "prediction"
<dataframe independent code for KMeans>
<New dataframe is created, extended with the `prediction` column.>

1 Answer

0 votes
by (32.3k points)

In your case, I would suggest using VectorAssembler. Do something like this:

from pyspark.ml.feature import VectorAssembler

ignore = ['id', 'label', 'binomial_label']

assembler = VectorAssembler(
    inputCols=[x for x in df.columns if x not in ignore],
    outputCol='features')

assembler.transform(df)

Now, combine the assembler with k-means using ML Pipeline:

from import Pipeline

pipeline = Pipeline(stages=[assembler, kmeans_estimator])

model =
