In Spark, RDD stands for Resilient Distributed Dataset. The RDD is Spark's core logical data abstraction: a collection of elements divided into partitions and distributed across the nodes of a cluster so the data can be stored and processed in parallel. RDDs are immutable, meaning an RDD cannot be altered once created; instead, transformations on an existing RDD produce new RDDs.