in Big Data Hadoop & Spark by (11.4k points)

The Spark Programming Guide mentions slices as a feature of RDDs (both parallel collections and Hadoop datasets): "Spark will run one task for each slice of the cluster." But under the section on RDD persistence, the concept of partitions is used without introduction. Also, the RDD docs only mention partitions with no mention of slices, while the SparkContext docs mention slices for creating RDDs but partitions for running jobs on RDDs. Are these two concepts the same? If not, how do they differ?

1 Answer

by (32.3k points)

They are the same thing: "slice" is simply the older name for "partition". The documentation was made consistent in Spark 1.2. For more details, see the bug report: https://issues.apache.org/jira/browse/SPARK-1701
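You can see this in the API itself. Here is a minimal sketch (assuming a local Spark master and the Scala API) showing that the numSlices argument of SparkContext.parallelize and the partitions of the resulting RDD describe the same division of the data:

import org.apache.spark.{SparkConf, SparkContext}

object SlicesVsPartitions {
  def main(args: Array[String]): Unit = {
    // Local SparkContext for illustration only
    val conf = new SparkConf().setAppName("slices-vs-partitions").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // parallelize's second argument is documented as "numSlices" ...
    val rdd = sc.parallelize(1 to 100, numSlices = 4)

    // ... but the resulting chunks are exposed as "partitions" on the RDD
    println(rdd.partitions.length)   // prints 4
    println(rdd.getNumPartitions)    // prints 4 as well (getNumPartitions is available on RDDs since Spark 1.6)

    sc.stop()
  }
}

Each of those 4 partitions (slices) is processed by one task when an action such as count or collect is run.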
