Shubham Agarwal digs into how Spark translates operations on Resilient Distributed Datasets into actions:
When we apply a transformation to an RDD, it gives us a new RDD, but it does not start executing those transformations. Execution begins only when an action is performed on the new RDD, which produces a final result.
So once you perform an action on an RDD, the Spark context hands your program to the driver.
The driver creates the DAG (directed acyclic graph), or execution plan (job), for your program. Once the DAG is created, the driver divides it into a number of stages. These stages are in turn divided into smaller tasks, and all of the tasks are handed to the executors for execution.
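The transformation/action split above can be sketched in plain Python. This is a toy illustration of the lazy-evaluation idea, not the real Spark API: the hypothetical `LazyRDD` class only records transformations as a lineage, and nothing runs until an action such as `collect()` is called.

```python
class LazyRDD:
    """Toy stand-in for an RDD: records transformations, runs them lazily."""

    def __init__(self, data, ops=None):
        self.data = data
        self.ops = ops or []  # recorded transformations (the lineage)

    def map(self, fn):
        # Transformation: returns a NEW LazyRDD; nothing executes yet.
        return LazyRDD(self.data, self.ops + [("map", fn)])

    def filter(self, fn):
        # Also a transformation -- just appended to the recorded plan.
        return LazyRDD(self.data, self.ops + [("filter", fn)])

    def collect(self):
        # Action: only now is the recorded plan actually executed.
        result = self.data
        for kind, fn in self.ops:
            if kind == "map":
                result = [fn(x) for x in result]
            else:
                result = [x for x in result if fn(x)]
        return result


rdd = LazyRDD([1, 2, 3, 4])
doubled = rdd.map(lambda x: x * 2)       # no work done yet
evens = doubled.filter(lambda x: x > 4)  # still no work
print(evens.collect())                   # [6, 8] -- execution happens here
```

In real Spark the same pattern holds, except that the plan built at action time is the DAG, which the driver then splits into stages and tasks for the executors.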
Click through for more details.