Hadoop performs distributed processing of large data sets across clusters of commodity servers, working on many machines in parallel. To process data, the client submits both the data and the program to Hadoop. HDFS (Hadoop Distributed File System) stores the data, MapReduce processes it, and YARN schedules the tasks and allocates cluster resources.
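To make the MapReduce part concrete, here is a minimal sketch of the model in plain Python (not an actual Hadoop job, and the function names are just illustrative): the map phase emits key-value pairs, the framework shuffles them by key, and the reduce phase aggregates each group. This is the classic word-count example.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: each mapper emits (word, 1) pairs from its input split
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: the framework groups all values by key between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "data cluster data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 3, 'cluster': 2}
```

On a real cluster, Hadoop runs the map and reduce steps on many machines at once, reading splits from HDFS, while YARN decides where each task runs.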
If you want to learn Hadoop, I recommend this Hadoop Training program by Intellipaat.