Azure Data Factory Interview Questions and Answers


Azure Data Factory is a cloud-based Microsoft tool that collects raw business data and transforms it into usable information. It is a data integration ETL (extract, transform, and load) service that automates the transformation of the given raw data. This Azure Data Factory Interview Questions blog covers the questions most likely to be asked during Azure job interviews. Following are the questions that you must prepare for:

Q1. Why do we need Azure Data Factory?
Q2. What is Azure Data Factory?
Q3. What is the integration runtime?
Q4. What is the limit on the number of integration runtimes?
Q5. What is the difference between Azure Data Lake and Azure Data Warehouse?
Q6. What is blob storage in Azure?
Q7. What is the difference between Azure Data Lake Store and Blob storage?
Q8. What are the steps for creating an ETL process in Azure Data Factory?
Q9. What is the difference between HDInsight and Azure Data Lake Analytics?
Q10. What are the top-level concepts of Azure Data Factory?

These Azure Data Factory interview questions are classified into the following parts:
1. Basic

2. Intermediate

3. Advanced


Basic Interview Questions

1. Why do we need Azure Data Factory?

  • The amount of data generated these days is huge, and it comes from many different sources. When we move this data to the cloud, a few things need to be taken care of.
  • Since the data comes from different sources, it can be transferred or channelized in different ways and can be in different formats. When we bring it to the cloud or to a particular storage, we need to make sure it is well managed, i.e., we need to transform the data and delete the unnecessary parts. As far as moving the data is concerned, we need to make sure it is picked up from the different sources, brought to one common place, and stored, and, if required, transformed into something more meaningful.
  • A traditional data warehouse can do this as well, but it has certain disadvantages. Sometimes we are forced to build custom applications that deal with all these processes individually, which is time-consuming, and integrating all these sources is a huge pain. We need to figure out a way to automate this process or create proper workflows.
  • Data Factory helps orchestrate this complete process in a more manageable and organized manner.

2. What is Azure Data Factory?

Azure Data Factory is a cloud-based integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation.

  • Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores.
  • It can process and transform the data by using compute services such as HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.

3. What is the integration runtime?

  • The integration runtime is the compute infrastructure that Azure Data Factory uses to provide data integration capabilities across different network environments.
  • There are 3 types of integration runtimes:
    • Azure Integration Runtime: The Azure Integration Runtime can copy data between cloud data stores, and it can dispatch activities to a variety of compute services, such as Azure HDInsight or SQL Server, where the transformation takes place.
    • Self-Hosted Integration Runtime: The Self-Hosted Integration Runtime is software with essentially the same code as the Azure Integration Runtime, but you install it on an on-premises machine or on a virtual machine in a virtual network. A self-hosted IR can run copy activities between a public cloud data store and a data store in a private network, and it can also dispatch transformation activities against compute resources in a private network. We use a self-hosted IR because Data Factory cannot directly access on-premises data sources, as they sit behind a firewall. It is sometimes possible to establish a direct connection between Azure and an on-premises data source by configuring the firewall in a specific way; if we do that, we don't need to use a self-hosted IR.
    • Azure-SSIS Integration Runtime: With the Azure-SSIS Integration Runtime, you can natively execute SSIS packages in a managed environment. So when we lift and shift SSIS packages to Data Factory, we use the Azure-SSIS Integration Runtime.


4. What is the limit on the number of integration runtimes?

There is no hard limit on the number of integration runtime instances you can have in a data factory. There is, however, a limit on the number of VM cores that the integration runtime can use per subscription for SSIS package execution.


5. What is the difference between Azure Data Lake and Azure Data Warehouse?

A data warehouse is a traditional way of storing data that is still widely used. A data lake is complementary to a data warehouse: data held in a data lake can also be stored in a data warehouse, but certain rules need to be followed.

Azure Data Lake | Azure Data Warehouse
Complementary to the data warehouse | May be sourced from the data lake
Holds detailed or raw data, which can be in any form; you just take the data and dump it into the data lake | Holds data that is filtered, summarised, and refined
Schema on read (not structured; you can define the schema in any number of ways) | Schema on write (data is written in a structured form, following a particular schema)
One language (U-SQL) to process data of any format | Uses SQL

Intermediate Interview Questions

6. What is blob storage in Azure?

Azure Blob Storage is a service for storing large amounts of unstructured object data, such as text or binary data. You can use Blob Storage to expose data publicly to the world or to store application data privately. Common uses of Blob Storage include:

  • Serving images or documents directly to a browser
  • Storing files for distributed access
  • Streaming video and audio
  • Storing data for backup and restore, disaster recovery, and archiving
  • Storing data for analysis by an on-premises or Azure-hosted service



7. What is the difference between Azure Data Lake store and Blob storage?

Aspect | Azure Data Lake Storage Gen1 | Azure Blob Storage
Purpose | Optimized storage for big data analytics workloads | General-purpose object store for a wide variety of storage scenarios, including big data analytics
Structure | Hierarchical file system | Object store with a flat namespace
Key concepts | A Data Lake Storage Gen1 account contains folders, which in turn contain data stored as files | A storage account has containers, which in turn hold data in the form of blobs
Use cases | Batch, interactive, and streaming analytics, and machine learning data such as log files, IoT data, clickstreams, and large datasets | Any type of text or binary data, such as application back ends, backup data, media storage for streaming, and general-purpose data; additionally, full support for analytics workloads (batch, interactive, and streaming analytics, and machine learning data)
Server-side API | WebHDFS-compatible REST API | Azure Blob Storage REST API
Authentication | Based on Azure Active Directory identities | Based on shared secrets: account access keys and shared access signature (SAS) keys

8. What are the steps for creating an ETL process in Azure Data Factory?

Suppose we want to extract some data from an Azure SQL Server database; if anything has to be processed, it will be processed and then stored in the Data Lake Store.

Steps for creating the ETL process:

  • Create a linked service for the source data store, which is the SQL Server database
  • Assume that we have a cars dataset
  • Create a linked service for the destination data store, which is Azure Data Lake Store
  • Create a dataset for saving the data
  • Create the pipeline and add a copy activity
  • Schedule the pipeline by adding a trigger
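The pipeline created in the last two steps can be sketched as a JSON definition. The following is a minimal, illustrative sketch built as a Python dict; the pipeline, activity, and dataset names (CarsSqlDataset, CarsLakeDataset) are placeholders, not real resources.

```python
import json

# Illustrative ADF pipeline definition with a single Copy activity that
# reads from a SQL Server dataset and writes to a Data Lake Store dataset.
# All names are placeholders for this sketch.
pipeline = {
    "name": "CopyCarsPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyCarsToLake",
                "type": "Copy",
                "inputs": [{"referenceName": "CarsSqlDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "CarsLakeDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "SqlSource"},
                    "sink": {"type": "AzureDataLakeStoreSink"}
                }
            }
        ]
    }
}

print(json.dumps(pipeline, indent=2))
```

In practice you would author this in the Data Factory UI or deploy it via ARM templates or the REST API; the dict above simply mirrors the shape of the resulting JSON.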

9. What is the difference between HDInsight and Azure Data Lake Analytics?

HDInsight | Azure Data Lake Analytics
HDInsight is Platform as a Service | Azure Data Lake Analytics is Software as a Service
To process a dataset, we first have to configure a cluster with predefined nodes and then use a language such as Pig or Hive to process the data | It is all about passing the queries written to process data; Azure Data Lake Analytics creates the necessary compute nodes on demand, as per our instructions, and processes the dataset
Since we configure the cluster with HDInsight, we can create it and control it as we want; all Hadoop subprojects, such as Spark and Kafka, can be used without any limitation | Azure Data Lake Analytics does not give much flexibility in provisioning the cluster, but Microsoft Azure takes care of it; we don't need to worry about cluster creation, and nodes are assigned based on the instructions we pass; in addition, we can make use of U-SQL, taking advantage of .NET, for processing data

10. What are the top-level concepts of Azure Data Factory?

  • Pipeline: It acts as a carrier in which various processes take place; an individual process is an activity.
  • Activities: Activities represent the processing steps in a pipeline. A pipeline can have one or more activities; an activity can be any process, such as querying a dataset or moving a dataset from one source to another.
  • Datasets: These are sources of data; in simple words, a dataset is a data structure that holds our data.
  • Linked services: These store the information that is needed to connect to an external source.

For example, for SQL Server, you need a connection string to connect to the external source; you need to mention the source and the destination of your data.


Advanced Interview Questions

11. How can I schedule a pipeline?

  • You can use the schedule trigger or the tumbling window trigger to schedule a pipeline.
  • The schedule trigger uses a wall-clock calendar schedule, which can run pipelines periodically or in calendar-based recurrent patterns (for example, on Mondays at 6:00 PM and Thursdays at 9:00 PM).
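As a sketch, a schedule trigger that runs a pipeline every Monday at 18:00 UTC might look like the following JSON definition, built here as a Python dict. The trigger name, start time, and referenced pipeline name are placeholders.

```python
import json

# Illustrative schedule-trigger definition: run a pipeline weekly, every
# Monday at 18:00 UTC. Names and the start time are placeholders.
trigger = {
    "name": "WeeklyMondayTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "startTime": "2023-06-01T00:00:00Z",
                "timeZone": "UTC",
                "schedule": {"weekDays": ["Monday"], "hours": [18], "minutes": [0]}
            }
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "CopyCarsPipeline", "type": "PipelineReference"}}
        ]
    }
}

print(json.dumps(trigger, indent=2))
```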

12. Can I pass parameters to a pipeline run?

  • Yes, parameters are a first-class, top-level concept in Data Factory.
  • You can define parameters at the pipeline level and pass arguments as you execute the pipeline run on-demand or by using a trigger.


13. Can I define default values for the pipeline parameters?

You can define default values for the parameters in the pipelines.
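The parameter mechanics above can be sketched as follows: a pipeline declares a parameter with a default value, an activity reads it via the @pipeline().parameters expression, and a run supplies arguments that override the default. All names and activity properties in this sketch are illustrative.

```python
import json

# Illustrative parameterized pipeline. 'sourceFolder' has a default value,
# and the Copy activity's source folder path reads it with
# @pipeline().parameters.sourceFolder. All names are placeholders.
pipeline = {
    "name": "ParameterizedPipeline",
    "properties": {
        "parameters": {
            "sourceFolder": {"type": "string", "defaultValue": "input/cars"}
        },
        "activities": [
            {
                "name": "CopyFromFolder",
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "DelimitedTextSource",
                        "storeSettings": {
                            "type": "AzureBlobStorageReadSettings",
                            "wildcardFolderPath": "@pipeline().parameters.sourceFolder"
                        }
                    },
                    "sink": {"type": "AzureDataLakeStoreSink"}
                }
            }
        ]
    }
}

# Arguments passed when triggering a run on demand or from a trigger;
# any parameter omitted here falls back to its defaultValue.
run_arguments = {"sourceFolder": "input/trucks"}

print(json.dumps(pipeline, indent=2))
```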

14. Can an activity in a pipeline consume arguments that are passed to a pipeline run?

Each activity within the pipeline can consume the parameter value that's passed to the pipeline run by using the @pipeline().parameters construct.



15. Can an activity output property be consumed in another activity?

An activity output can be consumed in a subsequent activity with the @activity('activityName').output construct.
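For example, a later activity could capture the row count reported by a previous Copy activity. This is a sketch: the activity name 'CopyCarsToLake', the variable name, and the output property are all illustrative.

```python
# ADF expression referencing a previous activity's output; the activity
# name and output property are placeholders for this sketch.
rows_copied_expr = "@activity('CopyCarsToLake').output.rowsCopied"

# The expression goes into a string-typed property of a downstream
# activity, e.g. a Set Variable activity that depends on the copy.
set_variable_activity = {
    "name": "RecordRowCount",
    "type": "SetVariable",
    "dependsOn": [{"activity": "CopyCarsToLake", "dependencyConditions": ["Succeeded"]}],
    "typeProperties": {"variableName": "rowsCopied", "value": rows_copied_expr}
}
```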

16. How do I gracefully handle null values in an activity output?

You can use the @coalesce construct in the expressions to handle the null values gracefully.
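For example (the parameter name is illustrative), wrapping a possibly-null value in @coalesce supplies a fallback. Note that nested function calls inside an ADF expression drop the leading '@'.

```python
# Illustrative ADF expression: fall back to 'input/default' when the
# sourceFolder parameter is null. Only the outermost call keeps the '@'.
folder_expr = "@coalesce(pipeline().parameters.sourceFolder, 'input/default')"
```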


17. Which Data Factory version do I use to create data flows?

Use the Data Factory V2 version to create data flows.

18. What has changed from private preview to limited public preview in regard to data flows?

  • You will no longer have to bring your own Azure Databricks clusters.
  • Data Factory will manage cluster creation and tear-down.
  • Blob datasets and Azure Data Lake Storage Gen2 datasets are separated into delimited text and Apache Parquet datasets.
  • You can still use Data Lake Storage Gen2 and Blob storage to store those files. Use the appropriate linked service for those storage engines.

19. How do I access data by using the other 80 dataset types in Data Factory?

  • The Mapping Data Flow feature currently supports Azure SQL Database, Azure SQL Data Warehouse, delimited text files from Azure Blob Storage or Azure Data Lake Storage Gen2, and Parquet files from Blob Storage or Data Lake Storage Gen2 natively as source and sink.
  • Use the Copy activity to stage data from any of the other connectors, and then execute a Data Flow activity to transform the data after it has been staged. For example, your pipeline would first copy into Blob Storage, and then a Data Flow activity would use a dataset pointing at the staged data as its source.
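The staging pattern described above can be sketched as a pipeline in which a Data Flow activity depends on a staging Copy activity. This is an illustrative sketch only; the pipeline, activity, dataset, and data flow names are placeholders.

```python
import json

# Illustrative pipeline: stage data into Blob storage with Copy, then
# transform the staged data with a Mapping Data Flow activity that runs
# only after the copy succeeds. All names are placeholders.
pipeline = {
    "name": "StageThenTransform",
    "properties": {
        "activities": [
            {
                "name": "StageToBlob",
                "type": "Copy",
                "inputs": [{"referenceName": "SourceDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "StagedBlobDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "SqlSource"},
                    "sink": {"type": "DelimitedTextSink"}
                }
            },
            {
                "name": "TransformStaged",
                "type": "ExecuteDataFlow",
                # Runs only after the staging copy succeeds
                "dependsOn": [{"activity": "StageToBlob", "dependencyConditions": ["Succeeded"]}],
                "typeProperties": {
                    "dataFlow": {"referenceName": "TransformCars", "type": "DataFlowReference"}
                }
            }
        ]
    }
}

print(json.dumps(pipeline, indent=2))
```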


20. What are the two levels of security in ADLS Gen2?

The two levels of security applicable to ADLS Gen2 were also in effect for ADLS Gen1. Even though this is not new, it is worth calling them out, because security is a fundamental piece of getting started with a data lake, and it is confusing for many people just getting started.

  • Role-Based Access Control (RBAC). RBAC includes built-in Azure roles such as reader, contributor, owner, or custom roles. Typically, RBAC is assigned for two reasons. One is to specify who can manage the service itself (i.e., update settings and properties for the storage account). Another reason is to permit the use of built-in data explorer tools, which require reader permissions.
  • Access Control Lists (ACLs). Access control lists specify exactly which data objects a user may read, write, or execute (execute is required to browse the directory structure). ACLs are POSIX-compliant, thus familiar to those with a Unix or Linux background.

POSIX ACLs do not operate on a security-inheritance model, which means that access ACLs are specified for every object. The concept of default ACLs is critical for new files within a directory to obtain the correct security settings, but they should not be thought of as inheritance. Because of the overhead of assigning ACLs to every object, and because there is a limit of 32 ACL entries per object, it is extremely important to manage data-level security in ADLS Gen1 or Gen2 via Azure Active Directory groups.

