Spark interview questions complete guide
1. Can you tell me what Apache Spark is about?
2. What are the features of Apache Spark?

- High Processing Speed: Apache Spark achieves very high data processing speed by reducing read-write operations to disk. It is almost 100x faster than Hadoop MapReduce for in-memory computation and 10x faster for disk-based computation.
- Dynamic Nature: Spark provides 80 high-level operators which help in the easy development of parallel applications.
- In-Memory Computation: The in-memory computation feature of Spark due to its DAG execution engine increases the speed of data processing. This also supports data caching and reduces the time required to fetch data from the disk.
- Reusability: Spark codes can be reused for batch-processing, data streaming, running ad-hoc queries, etc.
- Fault Tolerance: Spark supports fault tolerance using RDD. Spark RDDs are the abstractions designed to handle failures of worker nodes which ensures zero data loss.
- Stream Processing: Spark supports stream processing in real-time. The problem in the earlier MapReduce framework was that it could process only already existing data.
- Lazy Evaluation: Transformations on Spark RDDs are lazy. They do not produce results right away; instead, they create new RDDs from existing ones, and computation happens only when an action is called. This lazy evaluation increases system efficiency (see the sketch after this list).
- Support Multiple Languages: Spark supports multiple languages like R, Scala, Python, Java which provides dynamicity and helps in overcoming the Hadoop limitation of application development only using Java.
- Hadoop Integration: Spark also supports the Hadoop YARN cluster manager thereby making it flexible.
- Supports Spark GraphX for graph parallel execution, Spark SQL, libraries for Machine learning, etc.
- Cost Efficiency: Apache Spark is considered a more cost-efficient solution than Hadoop, as Hadoop requires large storage and data centers for data processing and replication.
- Active Developer Community: Apache Spark has a large developer base involved in continuous development. It is considered one of the most important projects undertaken by the Apache community.
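A minimal sketch of lazy evaluation in Scala, assuming an existing SparkContext named sc (the data is illustrative):

// Transformations such as map() are lazy: nothing is computed here yet.
val numbers = sc.parallelize(1 to 1000)
val doubled = numbers.map(_ * 2)        // only records the lineage and builds a new RDD
// The DAG is executed only when an action such as reduce() is called.
val total = doubled.reduce(_ + _)
println(total)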
3. What is RDD?
RDD (Resilient Distributed Dataset) is Spark's fundamental data structure: a fault-tolerant, immutable, distributed collection of elements that can be operated on in parallel. There are two types of RDDs:
- Parallelized collections: Meant for running in parallel.
- Hadoop datasets: These perform operations on file records stored on HDFS or other storage systems.
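A minimal sketch of creating both kinds of RDDs in Scala, assuming an existing SparkContext named sc and a placeholder HDFS path:

// Parallelized collection: distributes an in-memory Scala collection across the cluster.
val parallelized = sc.parallelize(Seq(1, 2, 3, 4, 5))
// Hadoop dataset: each record is a line of a file stored on HDFS (the path is a placeholder).
val hadoopBacked = sc.textFile("hdfs:///user/data/input.txt")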
4. What does DAG refer to in Apache Spark?
5. List the types of Deploy Modes in Spark.
- Client Mode: The deploy mode is said to be in client mode when the spark driver component runs on the machine node from where the spark job is submitted.
- The main disadvantage of this mode is if the machine node fails, then the entire job fails.
- This mode supports both interactive shells and job-submission commands.
- The performance of this mode is the worst, and it is not preferred in production environments.
- Cluster Mode: If the spark job driver component does not run on the machine from which the spark job has been submitted, then the deploy mode is said to be in cluster mode.
- The spark job launches the driver component within the cluster as a part of the sub-process of ApplicationMaster.
- This mode supports deployment only using the spark-submit command (interactive shell mode is not supported).
- Here, since the driver program runs inside the ApplicationMaster, it is re-instantiated if it fails.
- In this mode, there is a dedicated cluster manager (such as stand-alone, YARN, Apache Mesos, Kubernetes, etc) for allocating the resources required for the job to run as shown in the below architecture.

Apart from the above two modes, if the application has to be run on our local machines for unit testing and development, the deployment mode is called “Local Mode”. Here, the jobs run on a single JVM on a single machine, which makes it highly inefficient: sooner or later there will be a shortage of resources, resulting in job failures. It is also not possible to scale up resources in this mode because of the restricted memory and space. An illustrative spark-submit command for each of the three modes is shown below.
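For illustration only (the class name, jar, and master are placeholders), the deploy mode is selected with the --deploy-mode flag of spark-submit:

# Client mode: the driver runs on the machine that submits the job.
spark-submit --master yarn --deploy-mode client --class com.example.MyApp my-app.jar
# Cluster mode: the driver runs inside the cluster (as part of the YARN ApplicationMaster).
spark-submit --master yarn --deploy-mode cluster --class com.example.MyApp my-app.jar
# Local mode: everything runs in a single JVM, here with 2 threads.
spark-submit --master local[2] --class com.example.MyApp my-app.jar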
6. What are receivers in Apache Spark Streaming?
- Reliable receivers: Here, the receiver sends an acknowledgement to the data sources after the data has been successfully received and replicated on the Spark storage.
- Unreliable receiver: Here, there is no acknowledgement sent to the data sources.
7. What is the difference between repartition and coalesce?
Repartition | Coalesce |
---|---|
Repartition can increase or decrease the number of data partitions. | Coalesce can only reduce the number of data partitions. |
Repartition creates new data partitions and performs a full shuffle to distribute the data evenly. | Coalesce makes use of already existing partitions, which minimizes the amount of data shuffled but can leave partitions unevenly sized. |
Repartition internally calls coalesce with the shuffle parameter set to true, which makes it slower than coalesce. | Coalesce is faster than repartition. However, with unequal-sized data partitions, processing can be slightly slower. |
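A minimal Scala sketch of both calls on a DataFrame named df (the partition counts are illustrative):

// Full shuffle: can increase or decrease the number of partitions.
val repartitioned = df.repartition(100)
// No full shuffle: merges existing partitions, so it can only reduce the count.
val coalesced = df.coalesce(10)
// Verify the resulting partition count.
println(coalesced.rdd.getNumPartitions)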
8. What are the data formats supported by Spark?
9. What do you understand by Shuffling in Spark?

It is to be noted that Spark has no control over what partition the data gets distributed across.
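For example, wide transformations such as reduceByKey or groupByKey trigger a shuffle because values with the same key may live on different partitions. A minimal sketch, assuming an existing SparkContext named sc:

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
// reduceByKey brings all values of a key onto one partition, causing a shuffle.
val totals = pairs.reduceByKey(_ + _)
totals.collect().foreach(println)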
10. What is YARN in Spark?
- YARN (Yet Another Resource Negotiator) is Hadoop's central resource management platform; Spark can run on YARN to deliver scalable operations across the cluster.
- YARN is a cluster management technology, and Spark is a tool for data processing.
Spark Interview Questions for Experienced
11. How is Apache Spark different from MapReduce?
MapReduce | Apache Spark |
---|---|
MapReduce does only batch-wise processing of data. | Apache Spark can process the data both in real-time and in batches. |
MapReduce does slow processing of large data. | Apache Spark runs approximately 100 times faster than MapReduce for big data processing. |
MapReduce stores data in HDFS (Hadoop Distributed File System) which makes it take a long time to get the data. | Spark stores data in memory (RAM) which makes it easier and faster to retrieve data when needed. |
MapReduce is highly dependent on disk, which makes it a high-latency framework. | Spark supports in-memory data storage and caching, making it a low-latency computation framework. |
MapReduce requires an external scheduler for jobs. | Spark has its own job scheduler due to the in-memory data computation. |
12. Explain the working of Spark with the help of its architecture.

13. What is the working of DAG in Spark?

- The first task is to interpret the code with the help of an interpreter. If you use the Scala code, then the Scala interpreter interprets the code.
- Spark then creates an operator graph when the code is entered in the Spark console.
- When the action is called on Spark RDD, the operator graph is submitted to the DAG Scheduler.
- The operators are divided into stages of tasks by the DAG Scheduler. Each stage consists of detailed step-by-step operations on the input data. The operators are then pipelined together.
- The stages are then passed to the Task Scheduler, which launches the tasks via the cluster manager so they can be executed independently, without dependencies between the stages.
- The worker nodes then execute the task.
Each RDD keeps track of a pointer to one or more parent RDDs, along with its relationship to the parent. For example, for the operation val childB = parentA.map() on an RDD, the RDD childB keeps track of its parent parentA; this is called RDD lineage.
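The lineage, and the stages the DAG Scheduler derives from it, can be inspected with the RDD method toDebugString; a minimal sketch assuming an existing SparkContext named sc:

val parentA = sc.parallelize(1 to 10)
val childB = parentA.map(_ * 2)      // childB records parentA as its parent
// Prints the RDD lineage that the DAG Scheduler uses to build stages.
println(childB.toDebugString)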
14. Under what scenarios do you use Client and Cluster modes for deployment?
- In case the client machines are not close to the cluster, Cluster mode should be used for deployment. This avoids the network latency of driver-executor communication that would occur in Client mode. Also, in Client mode, the entire process is lost if the client machine goes offline.
- If we have the client machine inside the cluster, then the Client mode can be used for deployment. Since the machine is inside the cluster, there won’t be issues of network latency and since the maintenance of the cluster is already handled, there is no cause of worry in cases of failure.
15. What is Spark Streaming and how is it implemented in Spark?
- Data from sources like Kafka, Kinesis, Flume, etc are processed and pushed to various destinations like databases, dashboards, machine learning APIs, or as simple as file systems. The data is divided into various streams (similar to batches) and is processed accordingly.
- Spark streaming supports highly scalable, fault-tolerant continuous stream processing which is mostly used in cases like fraud detection, website monitoring, website click baits, IoT (Internet of Things) sensors, etc.
- Spark Streaming first divides the data from the data stream into batches of X seconds, called DStreams (Discretized Streams). Internally, these are nothing but a sequence of RDDs. The Spark application processes these RDDs using various Spark APIs, and the results of this processing are again returned in batches. The following diagram explains the workflow of the Spark streaming process.
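A minimal DStream word-count sketch, assuming a socket source on a placeholder host and port:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("StreamingSketch").setMaster("local[2]") // local master for testing
val ssc = new StreamingContext(conf, Seconds(5))        // 5-second micro-batches (DStreams)
val lines = ssc.socketTextStream("localhost", 9999)     // placeholder source
val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.print()                                          // results of each batch
ssc.start()
ssc.awaitTermination()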

16. Write a spark program to check if a given keyword exists in a huge text file or not?
def keywordExists(line):
    # Return 1 if the keyword occurs in the line, 0 otherwise.
    if line.find("my_keyword") > -1:
        return 1
    return 0

# Assumes an existing SparkContext bound to the name sparkContext.
lines = sparkContext.textFile("test_file.txt")
isExist = lines.map(keywordExists)
total = isExist.reduce(lambda a, b: a + b)  # reduce needs a two-argument function
print("Found" if total > 0 else "Not Found")
17. What can you say about Spark Datasets?
- Spark datasets are strongly typed structures that represent the structured queries along with their encoders.
- They provide type safety to the data and also give an object-oriented programming interface.
- The datasets are more structured and are evaluated lazily, with execution triggered by actions. Datasets combine the powers of both RDDs and Dataframes. Internally, each dataset symbolizes a logical plan that tells the computational query what data needs to be produced. Once the logical plan is analyzed and resolved, a physical query plan is formed that performs the actual query execution.
Datasets have the following features:
- Optimized Query feature: Spark datasets provide optimized queries using the Tungsten and Catalyst Query Optimizer frameworks. The Catalyst Query Optimizer represents and manipulates a data-flow graph (a graph of expressions and relational operators), while Tungsten improves and optimizes the execution speed of Spark jobs by taking advantage of the hardware architecture of the Spark execution platform.
- Compile-Time Analysis: Datasets have the flexibility of analyzing and checking the syntaxes at the compile-time which is not technically possible in RDDs or Dataframes or the regular SQL queries.
- Interconvertible: Type-safe datasets can be converted to “untyped” Dataframes by making use of the following methods provided by the DatasetHolder (see the sketch after this list):
- toDS(): Dataset[T]
- toDF(): DataFrame
- toDF(colNames: String*): DataFrame
- Faster Computation: Dataset implementations are much faster than RDDs, which helps increase system performance.
- Persistent storage qualified: Since datasets are both queryable and serializable, they can easily be stored in any persistent storage.
- Less Memory Consumed: Spark uses the feature of caching to create a more optimal data layout. Hence, less memory is consumed.
- Single Interface Multiple Languages: Single API is provided for both Java and Scala languages. These are widely used languages for using Apache Spark. This results in a lesser burden of using libraries for different types of inputs.
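A minimal Scala sketch of a typed Dataset and the toDS()/toDF() conversions, assuming an existing SparkSession named spark (the case class and data are illustrative):

import spark.implicits._                     // brings toDS()/toDF() into scope

case class Person(name: String, age: Int)

val ds = Seq(Person("Asha", 30), Person("Ravi", 25)).toDS()  // strongly typed Dataset[Person]
val df = ds.toDF()                                           // "untyped" DataFrame (Dataset[Row])
val typedAgain = df.as[Person]                               // back to a typed Dataset
ds.filter(_.age > 26).show()                                 // lambda checked at compile time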
18. Define Spark DataFrames.
Dataframes can be created from an array of data from different data sources such as external databases, existing RDDs, Hive Tables, etc. Following are the features of Spark Dataframes:
- Spark Dataframes can process data ranging in size from kilobytes to petabytes, on anything from a single node to large clusters.
- They support different data formats like CSV, Avro, Elasticsearch, etc., and various storage systems like HDFS, Cassandra, MySQL, etc.
- By making use of the SparkSQL Catalyst optimizer, state-of-the-art optimization is achieved.
- It is possible to easily integrate Spark Dataframes with major Big Data tools using SparkCore.
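A minimal Scala sketch of creating Dataframes from two of the sources mentioned above, assuming an existing SparkSession named spark and a placeholder file path:

import spark.implicits._

// From a structured file (CSV with a header row).
val csvDf = spark.read.option("header", "true").csv("/path/to/people.csv")
// From a local collection (or an existing RDD) via toDF().
val localDf = Seq(("Asha", 30), ("Ravi", 25)).toDF("name", "age")
csvDf.printSchema()
localDf.show()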
19. Define Executor Memory in Spark
The executor memory of a Spark application is configured through the spark.executor.memory property, which corresponds to the --executor-memory flag of spark-submit. Every Spark application has one executor allocated on each worker node it runs on, and the executor memory is a measure of how much of the worker node's memory the application utilizes. A configuration sketch is shown below.
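A minimal Scala sketch of setting it programmatically (the application name and memory value are illustrative; the property must be set before the context starts, and on a real cluster it is often passed via spark-submit instead):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("ExecutorMemoryDemo")
  .set("spark.executor.memory", "4g")   // 4 GB per executor (illustrative value)
val sc = new SparkContext(conf)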
20. What are the functions of SparkCore?
Spark Core performs important functions such as memory management, job monitoring, fault tolerance, storage system interactions, job scheduling, and support for all the basic I/O functionalities. There are various additional libraries built on top of Spark Core that allow diverse workloads for SQL, streaming, and machine learning. They are responsible for:
- Fault recovery
- Memory management and Storage system interactions
- Job monitoring, scheduling, and distribution
- Basic I/O functions
21. What do you understand by worker node?

22. What are some of the demerits of using Spark in applications?
- Spark makes use of more storage space when compared to MapReduce or Hadoop which may lead to certain memory-based problems.
- Care must be taken by the developers while running the applications. The work should be distributed across multiple clusters instead of running everything on a single node.
- Since Spark relies on “in-memory” computation, which requires a lot of costly memory, it can become a bottleneck for cost-efficient big data processing.
- While using files present on the path of the local filesystem, the files must be accessible at the same location on all the worker nodes when working on cluster mode as the task execution shuffles between various worker nodes based on the resource availabilities. The files need to be copied on all worker nodes or a separate network-mounted file-sharing system needs to be in place.
- One of the biggest problems with Spark is working with a large number of small files. When Spark is used with Hadoop, HDFS works best with a limited number of large files rather than a large number of small files. When there is a large number of small gzipped files, Spark needs to uncompress these files and keep them in its memory and on the network. A large amount of time is therefore spent burning core capacity on unzipping the files in sequence and partitioning the resulting RDDs to get the data into a manageable format, which requires extensive shuffling overall. This impacts the performance of Spark, as much time is spent preparing the data instead of processing it.
- Spark doesn’t work well in multi-user environments as it is not capable of handling many users concurrently.
23. How can the data transfers be minimized while working with Spark?
- Usage of Broadcast Variables: Broadcast variables increase the efficiency of joins between large and small RDDs.
- Usage of Accumulators: These help update variable values in parallel during execution.
- Another common way is to avoid the operations that trigger these shuffles.
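A minimal Scala sketch of both, assuming an existing SparkContext named sc (the lookup data is illustrative):

// Broadcast a small lookup table once to every executor instead of shipping it with each task.
val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))
val codes = sc.parallelize(Seq("IN", "US", "IN"))
val resolved = codes.map(code => countryNames.value.getOrElse(code, "Unknown"))

// Accumulator: updated in parallel on the executors, read on the driver.
val badRecords = sc.longAccumulator("badRecords")
codes.foreach(code => if (code.isEmpty) badRecords.add(1))
println(badRecords.value)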
24. What is SchemaRDD in Spark RDD?

25. What module is used for implementing SQL in Apache Spark?
Spark SQL is the module used for implementing SQL in Apache Spark. The four major libraries of SparkSQL are:
- Data Source API
- DataFrame API
- Interpreter & Catalyst Optimizer
- SQL Services
Spark SQL supports the usage of structured and semi-structured data in the following ways:
- Spark supports DataFrame abstraction in various languages like Python, Scala, and Java along with providing good optimization techniques.
- SparkSQL supports data read and write operations in various structured formats like JSON, Hive, Parquet, etc.
- SparkSQL allows data querying inside the Spark program and via external tools that do the JDBC/ODBC connections.
- It is recommended to use SparkSQL inside the Spark applications as it empowers the developers to load the data, query the data from databases and write the results to the destination.
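A minimal Scala sketch of the SQL module, assuming an existing SparkSession named spark and placeholder paths:

val people = spark.read.json("/path/to/people.json")   // read a structured format
people.createOrReplaceTempView("people")               // expose it to SQL
val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
adults.write.parquet("/path/to/output")                // write the result in another format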

26. What are the different persistence levels in Apache Spark?
Persistence levels are specified by calling the persist() method on the RDD. There are different persistence levels for storing RDDs in memory, on disk, or both, with different levels of replication. The persistence levels available in Spark are:
- MEMORY_ONLY: This is the default persistence level; the RDD is stored as deserialized Java objects in the JVM. If the RDD is too large to fit in memory, the partitions that do not fit are not cached and are recomputed as and when needed.
- MEMORY_AND_DISK: The RDD is again stored as deserialized Java objects in the JVM. If memory is insufficient, the partitions that do not fit are stored on disk and read from there as and when needed.
- MEMORY_ONLY_SER: The RDD is stored as serialized Java objects, with one byte array per partition.
- MEMORY_AND_DISK_SER: This level is similar to MEMORY_ONLY_SER, but the partitions that do not fit in memory are saved to disk to avoid recomputing them on the fly.
- DISK_ONLY: The RDD partitions are stored only on disk.
- OFF_HEAP: This level is the same as MEMORY_ONLY_SER, but the data is stored in off-heap memory.
The syntax for using persistence levels in the persist() method is:
df.persist(StorageLevel.<level_value>)
The following table summarizes the details of persistence levels:
Persistence Level | Space Consumed | CPU time | In-memory? | On-disk? |
---|---|---|---|---|
MEMORY_ONLY | High | Low | Yes | No |
MEMORY_ONLY_SER | Low | High | Yes | No |
MEMORY_AND_DISK | High | Medium | Some | Some |
MEMORY_AND_DISK_SER | Low | High | Some | Some |
DISK_ONLY | Low | High | No | Yes |
OFF_HEAP | Low | High | Yes (but off-heap) | No |
27. What are the steps to calculate the executor memory?
Number of nodes = 10
Number of cores in each node = 15 cores
RAM of each node = 61 GB
To identify the number of cores, we follow the approach:
Number of cores = the number of concurrent tasks an executor can run in parallel. As a general rule of thumb, the optimal value is 5.
Hence to calculate the number of executors, we follow the below approach:
Number of executors per node = Number of cores / Concurrent tasks per executor = 15 / 5 = 3
Total number of executors = Number of nodes * Number of executors per node = 10 * 3 = 30 executors per Spark job
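To arrive at the executor memory itself, a hedged continuation of the arithmetic with the same numbers, using common rules of thumb (reserve about 1 GB per node for the OS and Hadoop daemons, and about 7% of executor memory for YARN overhead) rather than fixed requirements:

Memory per executor = (61 GB - 1 GB) / 3 executors per node = 20 GB
Executor memory after overhead = 20 GB - (7% of 20 GB) ≈ 18-19 GB per executor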
28. Why do we need broadcast variables in Spark?

29. Differentiate between Spark Datasets, Dataframes and RDDs.
Criteria | Spark Datasets | Spark Dataframes | Spark RDDs |
---|---|---|---|
Representation of Data | Spark Datasets are a combination of Dataframes and RDDs, with features like static type safety and an object-oriented interface. | A Spark Dataframe is a distributed collection of data organized into named columns. | Spark RDDs are a distributed collection of data without a schema. |
Optimization | Datasets make use of the Catalyst optimizer for optimization. | Dataframes also make use of the Catalyst optimizer. | There is no built-in optimization engine. |
Schema Projection | Datasets find out schema automatically using SQL Engine. | Dataframes also find the schema automatically. | Schema needs to be defined manually in RDDs. |
Aggregation Speed | Dataset aggregation is faster than RDD but slower than Dataframes. | Aggregations are faster in Dataframes due to the provision of easy and powerful APIs. | RDDs are slower than both the Dataframes and the Datasets while performing even simple operations like data grouping. |
30. Can Apache Spark be used along with Hadoop? If yes, then how?

Hadoop can be integrated with Spark in the following ways:
- HDFS: Spark can be configured to run atop HDFS to leverage the feature of distributed replicated storage.
- MapReduce: Spark can also be configured to run alongside the MapReduce in the same or different processing framework or Hadoop cluster. Spark and MapReduce can be used together to perform real-time and batch processing respectively.
- YARN: Spark applications can be configured to run on YARN which acts as the cluster management framework.
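A minimal Scala sketch of the HDFS integration, assuming an existing SparkContext named sc and placeholder namenode/paths:

// Read a text file from HDFS (distributed, replicated storage).
val logs = sc.textFile("hdfs://namenode:8020/data/input/logs.txt")
val errors = logs.filter(_.contains("ERROR"))
// Write the result back to HDFS.
errors.saveAsTextFile("hdfs://namenode:8020/data/output/errors")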
31. What are Sparse Vectors? How are they different from dense vectors?
val sparseVec: Vector = Vectors.sparse(5, Array(0, 4), Array(1.0, 2.0))
- In the above example, we have the vector of size 5, but the non-zero values are there only at indices 0 and 4.
- Sparse vectors are particularly useful when there are very few non-zero values. If there are cases that have only a few zero values, then it is recommended to use dense vectors as usage of sparse vectors would introduce the overhead of indices which could impact the performance.
- Dense vectors can be defined as follows:
val denseVec = Vectors.dense(4405d,260100d,400d,5.0,4.0,198.0,9070d,1.0,1.0,2.0,0.0)
- Usage of sparse or dense vectors does not impact the results of calculations but when used inappropriately, they impact the memory consumed and the speed of calculation.
32. How are automatic clean-ups triggered in Spark for handling the accumulated metadata?
Automatic clean-ups are triggered by setting the spark.cleaner.ttl parameter, or by dividing long-running jobs into batches and writing the intermediary results to disk.
33. How is Caching relevant in Spark Streaming?
- Caching using cache method:
val cacheDf = dframe.cache()
- Caching using persist method:
val persistDf = dframe.persist(StorageLevel.MEMORY_ONLY)
The main advantages of caching are:
- Cost efficiency: Since Spark computations are expensive, caching enables data reuse, and this reuse of computation saves the cost of operations.
- Time efficiency: Reusing computations saves a lot of time.
- More Jobs Achieved: By saving time of computation execution, the worker nodes can perform/execute more jobs.

34. Define Piping in Spark.
Apache Spark provides the pipe() method on RDDs, which, following the UNIX Standard Streams model, makes it possible to compose different parts of a job using whatever language is needed. Using the pipe() method, an RDD transformation can be written that passes each element of the RDD to an external process as a String; the process manipulates it as required, and the results are returned as Strings.
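A minimal Scala sketch, assuming a UNIX-like environment where the tr command is available and an existing SparkContext named sc:

val words = sc.parallelize(Seq("spark", "pipe", "example"))
// Each element is written to the external command's stdin; each output line becomes an element.
val upper = words.pipe("tr a-z A-Z")
upper.collect().foreach(println)   // SPARK, PIPE, EXAMPLE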