
I'm trying to group by date in a Spark dataframe and for each group count the unique values of one column:

test.json
{"name":"Yin", "address":1111111, "date":20151122045510}
{"name":"Yin", "address":1111111, "date":20151122045501}
{"name":"Yln", "address":1111111, "date":20151122045500}
{"name":"Yun", "address":1111112, "date":20151122065832}
{"name":"Yan", "address":1111113, "date":20160101003221}
{"name":"Yin", "address":1111111, "date":20160703045231}
{"name":"Yin", "address":1111114, "date":20150419134543}
{"name":"Yen", "address":1111115, "date":20151123174302}


And the code:

import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime

df_y = sqlContext.read.json("/user/test.json")
udf_dt = func.udf(lambda x: datetime.strptime(x, '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))
df_g = df_y.groupby(func.hour(df_y.date))   
df_g.count().distinct().show()


The result I get with PySpark is:

df_y.groupby(df_y.name).count().distinct().show()
+----+-----+
|name|count|
+----+-----+
| Yan|    1|
| Yun|    1|
| Yin|    4|
| Yen|    1|
| Yln|    1|
+----+-----+


And what I'm expecting is something like this:

df = df_y.toPandas()
df.groupby('name').address.nunique()
Out[51]:
name
Yan    1
Yen    1
Yin    2
Yln    1
Yun    1


How can I get the unique elements of each group by another field, like address?

1 Answer


The easiest way to count the distinct elements of each group is the function countDistinct:

import pyspark.sql.functions as func
from pyspark.sql.types import TimestampType
from datetime import datetime

df_y = sqlContext.read.json("/user/test.json")

# Parse the numeric date field into a proper timestamp
# (str() is needed because the JSON field is read as a number, not a string).
udf_dt = func.udf(lambda x: datetime.strptime(str(x), '%Y%m%d%H%M%S'), TimestampType())
df = df_y.withColumn('datetime', udf_dt(df_y.date))

# For each name, count the distinct addresses in that group
df_y.groupby(df_y.name).agg(func.countDistinct('address')).show()

+----+--------------+
|name|count(address)|
+----+--------------+
| Yan|             1|
| Yun|             1|
| Yin|             2|
| Yen|             1|
| Yln|             1|
+----+--------------+

The docs are available at https://spark.apache.org/docs/1.6.0/api/java/org/apache/spark/sql/functions.html#countDistinct(org.apache.spark.sql.Column, org.apache.spark.sql.Column...)
