
What is Azure Data Factory (ADF) - The Beginner’s Guide


Today’s information technology landscape is heavily dependent on data. This data can be structured or unstructured and can be stored on-premises or in the cloud. Bringing all these different forms of data together into a uniform, usable data pipeline is a herculean and costly task. Azure Data Factory was introduced as a viable solution to this problem.

On August 6, 2015, the initial version of Azure Data Factory was launched. Azure Data Factory offers companies the ability to fully process their data, which results in enhanced productivity, business profitability, and insight into the data. This includes handling complex data workflows and integrating multiple data sources. When analyzing Azure Data Factory customers by industry, the three biggest segments are information technology and services (29%), computer software (9%), and financial services (5%).


What is Azure Data Factory (ADF)?

Data Factory in Azure is a cloud-based ETL/ELT and data integration service that lets users move data between on-premises and cloud systems and schedule data flows.

Conventionally, SQL Server Integration Services (SSIS) is used for data integration from databases stored in on-premises infrastructure, but it cannot handle data in the cloud. Azure Data Factory, in contrast, works both in the cloud and on-premises and has superior job scheduling features, which makes it a better fit than SSIS.

Microsoft Azure built this platform so that users can build workflows that import data from both on-premises and cloud data stores and then convert and process that data using compute services like Hadoop. The outcomes can subsequently be uploaded to a data repository, either on-premises or in the cloud, for Business Intelligence (BI) applications to utilize.

To know more about Azure Data Science certification, check out our blog on the DP-100 Certification preparation guide.

Why Azure Data Factory?

The most commonly used tool for on-premises data integration is SSIS, but there are challenges to overcome when dealing with data in the cloud. Azure Data Factory tackles the challenges of moving data to or from the cloud in the following ways:

  • Job scheduling and orchestration: Few cloud services can trigger and orchestrate data integration jobs. Services such as Azure Scheduler, Azure Automation, and SQL VM are available for data movement, but the job scheduling capabilities of Azure Data Factory are superior to them.
  • Security: Every piece of data in transit between the cloud and on-premises is always automatically encrypted by Azure Data Factory. 
  • Continuous integration and delivery: The Azure Data Factory integration with GitHub allows you to develop, build, and deploy to Azure effortlessly. 
  • Scalability: Azure Data Factory was designed to be capable of handling large volumes of data. 


How does Azure Data Factory work?

Azure Data Factory can connect to all of the data and processing sources you need, including SaaS services, file shares, and other online services. You can use the Data Factory service to design data pipelines that move data and then schedule them to run at specific intervals, which means a pipeline can run either on a schedule or as a one-time operation.

The Copy activity in a data pipeline can be used to move data from both on-premises and cloud sources to a centralized data store, in the cloud or on-premises, for further analysis and processing. Once the data is in a centralized store, compute services such as HDInsight (Hadoop), Azure Data Lake Analytics, and Azure Machine Learning can be used to transform it.
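If you prefer to work programmatically, pipelines can also be started and monitored with the azure-mgmt-datafactory Python SDK. The sketch below is a minimal example, assuming the subscription ID, resource group, factory, and pipeline names are placeholders you replace with your own; exact method names can vary slightly between SDK versions.

```python
# Minimal sketch: start an existing Data Factory pipeline run and check its status.
# Requires the azure-identity and azure-mgmt-datafactory packages; all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<your-subscription-id>"
resource_group, factory_name = "my-rg", "my-adf"
pipeline_name = "CopyPipeline"

# Authenticate with whatever credential is available (Azure CLI login, managed identity, etc.).
adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off a pipeline run; pipeline parameters, if any, can be passed as a dict.
run = adf_client.pipelines.create_run(resource_group, factory_name, pipeline_name, parameters={})
print("Started run:", run.run_id)

# Poll the run status (Queued, InProgress, Succeeded, Failed, ...).
status = adf_client.pipeline_runs.get(resource_group, factory_name, run.run_id)
print("Current status:", status.status)
```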

What Is the ETL Process?

ETL stands for Extract, Transform, and Load. It is a data integration process that is used in data warehousing and business intelligence. This process involves extracting the data from various sources and then transforming it into a suitable format, as the data we extract can be in different formats. Afterward, it goes into the target database for analysis.

Let us understand Extract, Transform, and Load in detail:

Extract: In this phase, data is extracted from various sources. Sources can include databases, blob storage, spreadsheets, APIs, and more. The process involves connecting to the source system, retrieving the data, and moving it into a staging area, which holds the data temporarily before further processing.

Transform: In this phase, the extracted data is cleaned, structured, and converted into a format suitable for analysis. Transformation tasks may include filtering, sorting, aggregating, and converting data types.

Load: Once the transformation of data is done, the data is loaded into a target database. Depending on the requirements, the loading process may be done in batches or in real-time.
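To make the three phases concrete, here is a small, self-contained Python sketch of the same idea outside of any particular tool: it extracts rows from a CSV file, transforms them, and loads them into a SQLite table. The file name and column names are made up for illustration.

```python
# Toy ETL sketch: extract from a CSV file, transform the rows, load them into SQLite.
# "sales.csv" and its columns (region, amount) are hypothetical example data.
import csv
import sqlite3

# Extract: read raw rows from the source file into a staging list.
with open("sales.csv", newline="") as f:
    staged_rows = list(csv.DictReader(f))

# Transform: drop rows with missing amounts, normalise region names, convert types.
transformed = [
    (row["region"].strip().upper(), float(row["amount"]))
    for row in staged_rows
    if row.get("amount")
]

# Load: write the cleaned rows into the target database table.
conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales (region, amount) VALUES (?, ?)", transformed)
conn.commit()
conn.close()
```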

ETL tools are designed to automate and simplify the data extraction, transformation, and loading processes. Here are some of the popular ETL tools used in the industry: Azure Data Factory, Informatica PowerCenter, Apache Airflow, AWS Glue, Hadoop, and Hevo.

How Does Azure Data Factory Differ from Other ETL Tools?

Azure Data Factory is a cloud-based ETL/ELT tool offered by Microsoft. The main issue with traditional ETL tools is that they have to be upgraded and maintained from time to time. This is not required for Azure Data Factory, as it is a cloud-based serverless service in which everything is managed by the cloud service provider (Microsoft Azure).

Let’s take a look at some of the features of Azure Data Factory that distinguish it from other tools:

  • Azure Data Factory can auto-scale according to the workload. It is a fully managed PaaS service.
  • It can also run SSIS packages.
  • Pipelines can be scheduled to run as frequently as once per minute (see the schedule trigger sketch after this list).
  • It can work with compute services like Azure Batch and HDInsight to execute big data computations during the ETL process.
  • It can help you connect to your on-premises data by creating a secure gateway.
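To illustrate the per-minute scheduling mentioned in the list, the sketch below shows roughly what the JSON definition of an ADF schedule trigger looks like, written out as a Python dictionary. The trigger and pipeline names are hypothetical, and the property names should be checked against the trigger JSON that ADF Studio generates for you.

```python
# Rough sketch of an ADF schedule trigger that fires a pipeline every minute.
# "EveryMinuteTrigger" and "CopyPipeline" are hypothetical names; verify the property
# names against the JSON view in ADF Studio before relying on this exact shape.
import json

schedule_trigger = {
    "name": "EveryMinuteTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Minute",            # unit of recurrence
                "interval": 1,                    # fire every 1 minute
                "startTime": "2024-05-01T00:00:00Z",
                "timeZone": "UTC",
            }
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "CopyPipeline", "type": "PipelineReference"}}
        ],
    },
}

print(json.dumps(schedule_trigger, indent=2))
```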

Go through our set of Azure Data Factory Interview Questions to crack your interview.

Key Azure Data Factory Components

Knowing the key components of Azure Data Factory is important for understanding how it works. They are described below, and a short code sketch after the list shows how they fit together:

  • Datasets: Datasets contain data source configuration parameters, but at a finer level. A dataset can specify a table name or file name as well as a structure. Each dataset is linked to a specific linked service, which determines the set of possible dataset properties.
  • Activities: Data transfer, transformation, and control flow operations are all examples of activities in Azure Data Factory. Activity configurations can contain a database query, a stored procedure name, arguments, a script location, and other options. An activity can take one or more input datasets and produce one or more output datasets.
  • Linked Services: Configuration parameters for specific data sources are stored in linked services in Azure Data Factory. These can include information such as the server/database name, file folder, credentials, and so on. Each data flow may use one or more linked services, depending on the nature of the job.
  • Pipelines: Pipelines are logical groups of activities. Each pipeline in a data factory can contain one or more activities. Pipelines make scheduling and monitoring several logically related operations much easier.
  • Triggers: Triggers are pipeline scheduling configurations that contain settings such as start/end dates, execution frequency, and so on. Triggers in Azure Data Factory aren’t required for an ADF implementation; they’re only needed if you want pipelines to run automatically on a set schedule.
Relationship between different components of ADF
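To show how these components reference one another, here is a hedged Python sketch in the style of the azure-mgmt-datafactory quickstart: a linked service points at a storage account, datasets point at the linked service, a copy activity points at the datasets, and a pipeline groups the activity. All names and the connection string are placeholders, and model class names or required arguments can differ slightly between SDK versions.

```python
# Quickstart-style sketch of how ADF components reference each other via the Python SDK.
# Every name and the connection string below are placeholders; check the model classes
# against your installed azure-mgmt-datafactory version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobDataset, AzureBlobStorageLinkedService, BlobSink, BlobSource, CopyActivity,
    DatasetReference, DatasetResource, LinkedServiceReference, LinkedServiceResource,
    PipelineResource, SecureString,
)

rg, df = "my-rg", "my-adf"  # placeholder resource group and data factory
adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. Linked service: how to connect to the storage account.
conn = SecureString(value="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>")
adf.linked_services.create_or_update(
    rg, df, "BlobLS",
    LinkedServiceResource(properties=AzureBlobStorageLinkedService(connection_string=conn)))

# 2. Datasets: what data is read and written, expressed through the linked service.
ls_ref = LinkedServiceReference(type="LinkedServiceReference", reference_name="BlobLS")
adf.datasets.create_or_update(
    rg, df, "InputDS",
    DatasetResource(properties=AzureBlobDataset(
        linked_service_name=ls_ref, folder_path="source", file_name="data.csv")))
adf.datasets.create_or_update(
    rg, df, "OutputDS",
    DatasetResource(properties=AzureBlobDataset(linked_service_name=ls_ref, folder_path="destination")))

# 3. Activity and pipeline: a copy activity wires the input dataset to the output dataset,
#    and the pipeline groups the activity so it can be scheduled and monitored as one unit.
copy = CopyActivity(
    name="CopyBlob",
    inputs=[DatasetReference(type="DatasetReference", reference_name="InputDS")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="OutputDS")],
    source=BlobSource(), sink=BlobSink())
adf.pipelines.create_or_update(rg, df, "CopyPipeline", PipelineResource(activities=[copy]))
```

A trigger, such as the schedule trigger sketched earlier, would then reference "CopyPipeline" by name to run it automatically.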

If you want to learn concepts such as the flow process, data lakes, analytics, and loading data into Power BI, go through our Azure Data Factory tutorial.


Creating a Data Factory Resource

In this section, we will see how to create a Data Factory Service in Microsoft Azure and how it can help us move data from one location to another.

To create an Azure Data Factory service, you need to log in to the Azure portal using your account credentials. Before building a new Data Factory that will be used to orchestrate the data copying and transformation, make sure that you have an Azure subscription and are signing in with a user account that is a member of the Contributor, Owner, or Administrator role on that subscription.

Open the Microsoft Azure Portal in your web browser, log in with an authorized user account, then search for Data Factory in the portal search panel and select the Data Factories option, as shown below:

Data Factory search on Azure Portal

To create a new data factory, click the + Create option in the Data Factories window, as shown below:

Create Data Factory

From the Basics tab of the Create Data Factory window, select the subscription you prefer for the service, choose an existing resource group or create a new one, pick the Azure region nearest to you to host the ADF, provide a unique name for the Data Factory, and choose whether to create a V1 or V2 data factory, as shown:

Basic Details

The setup will then require you to configure a repository for your Data Factory CI/CD process in the Git Configuration tab. This is what lets you move changes between development and production environments, and you will be asked whether to configure Git during the ADF creation or later.

Git Configuration

From the Networking tab of the Create Data Factory window, decide whether to use a managed virtual network for the ADF and which type of endpoint will be used for the Data Factory connection, as shown below:

Networking Configuration

Click the Review + Create option after specifying the Data Factory network options to review the selected options before creating the Data Factory, as illustrated below:

Review and Create

After you’ve double-checked your choices, click the Create button to begin creating the Data Factory. You can monitor the progress of the Data Factory creation from the Notifications button of the Azure Portal, and a new window will be displayed once the Data Factory is created successfully, as shown below:

Deployment Successful

To open the newly created Data Factory, click the Go to Resources option in that window. Under the Overview pane, you will see that a new Data Factory has been built, and you can review the Data Factory’s key information, the Azure Data Factory documentation, and the pipelines and activity summary.

You can also check the Activity Log for different activities performed on the Data Factory, control ADF permissions under Access Control, diagnose and solve problems under Diagnose and Solve Problems, configure ADF networking, lock the ADF to prevent changes or deletion of the ADF resource, and perform other monitoring, automation, and troubleshooting options.

Configuring Data Factory

Get certified in Microsoft Azure with this course: Microsoft Azure Training Course for Azure Administrator certification.

Data Migration

The most straightforward approach to begin transferring data is to use the Data Copy Wizard. With its help, you can easily create a data pipeline that transfers data from the source to the destination data store.

In addition to using the Data Copy Wizard, you may customize your activities by manually constructing each of the major components. Data Factory entities are defined in JSON format, so you can build these files in your favorite editor and then copy them to the Azure portal. The input and output datasets and the pipelines for a data migration can therefore be created in JSON.
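As a rough illustration of what such a JSON definition looks like, the snippet below writes out a minimal delimited-text input dataset that points at a blob container. The dataset, linked service, container, and file names are examples only, and the exact schema should be compared with the JSON view that ADF Studio shows for a real dataset.

```python
# Hedged sketch of a minimal ADF dataset definition authored as JSON.
# "InputDataset", "Source_connection", "source", and "data.csv" are example names only;
# compare with the JSON that ADF Studio generates before reusing this shape.
import json

input_dataset = {
    "name": "InputDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "Source_connection",  # the linked service this dataset uses
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "source",
                "fileName": "data.csv",
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": True,
        },
    },
}

with open("InputDataset.json", "w") as f:
    json.dump(input_dataset, f, indent=2)
```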

Prepare for the Azure interview and crack it like a pro with these Microsoft Azure Interview Questions.

Data Migration Using Azure Data Factory Tool

In this section, we will explore how to use the Azure Data Factory Copy Data tool to transfer data from one Blob Storage to another.

Before you begin, ensure that you have created two storage accounts: one for the source, where you will store the dataset (let’s call this storage account "source"; in our case, we have named it "intellipaat-data-store-2024"), and the other for the destination. The goal is to copy the data from the source container to another storage account (the destination) using the Azure Data Factory Copy Data tool. You can name the other container "destination".

Note: The file formats supported by Azure Data Factory include delimited text, XML, JSON, Avro, Delta, Parquet, and Excel.


We will first start with container creation inside an Azure Storage Account.

First, go to your storage account and click on the “Containers” option under the “Data Storage” section.


Now, click on the “+ Container” option, enter the name of the container, and click on the “Create” button to create a container.


Click on this newly created container to open it, so that we can upload our data.


Now that we are inside our container, click on the "Upload" button, browse for the file to upload, and click "Upload".
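If you prefer to script this step instead of clicking through the portal, the same upload can be done with the azure-storage-blob Python package. The connection string, container, and file names below are placeholders for the resources created above.

```python
# Sketch: upload the source CSV to the "source" container without using the portal.
# The connection string and names are placeholders for the storage account created above.
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=<source-account>;AccountKey=<key>"
service = BlobServiceClient.from_connection_string(conn_str)

blob = service.get_blob_client(container="source", blob="data.csv")
with open("data.csv", "rb") as f:
    blob.upload_blob(f, overwrite=True)  # overwrite if the blob already exists

print("Uploaded data.csv to the source container")
```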


As you can see, our data is uploaded. Now, we can go to our data factory resource.


Go to the Data Factory resource and click on the "Launch studio" button.


Now, we are going to create a pipeline in the Azure Data Factory portal. To create a pipeline, click on the "New" dropdown and select "Pipeline."


Now, let’s name our pipeline. In the “Properties” section, give the name of your pipeline. In our case, we have named it “intellipaat_copy_pipeline”. From the activity pane, expand the “Move and transform” dropdown and drag and drop the “Copy Data” activity onto the canvas. It helps in copying data from one location to another. 

Next, go to the “Source” section and click on “New Dataset”. A source dataset describes the format and schema of the source data.


Here, select “Azure Blob Storage” and click on “Continue”.  This step involves specifying the source from which we are copying the data. In our case, since our data is stored in the Blob Storage, we have selected “Azure Blob Storage”. 


Once you click "Continue", it will ask you to select the file type. Since our dataset is a ".csv" file, we have selected "Delimited Text" here.


Here, we need to create a linked service. A linked service is essentially a reference to the source and destination locations between which we’ll be moving our data. In this case, we will create two linked services: one for the source (datastore2024) and another for the destination storage account (intellipaat2024). To create a linked service, click on “+New.” 


On the New Linked Service page, enter a name for the linked service. Here we have named it "Source_connection", as this linked service references the source storage account. Under "Storage Account Name," select the source storage account where you have uploaded the dataset. Test your connection; once the connection is successful, click on "Create."


Next, you need to locate the file inside the container; for that, you need to click on the “browse icon” marked in black.


Select the container name > select the file that you want to copy (in our case, we only have a single file). Then, click on "OK."


Go to the "Sink" section and create a new sink dataset. The sink dataset defines the schema and connection information for the destination data store. In this step, you will specify the destination where you intend to paste the data after copying it.


Select the type of destination store. Since we are going to store the copied data in Blob Storage, select "Azure Blob Storage" and click on "Continue".


Select the file format you want for the copied data. In our case, since we don’t want to change the file format, we will keep it as Delimited Text. Click on "Continue."


In the linked service, click on “New.” Now, we need to create another linked service, this time for the destination data store. Follow the same process we used previously for creating the linked service.


Give it a name, select the destination storage account, and click on Create.


Browse to the container where you want to copy the file and click on "OK."


Once everything is done, first click on "Validate" to check for any errors. Afterward, click on the "Debug" button.


Once the pipeline is triggered successfully, you can go to the storage account and check whether the file has been copied or not. You can see that we were able to copy the file successfully.
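The same check can also be scripted: listing the blobs in the destination container with azure-storage-blob should show the copied file. The connection string and container name are placeholders.

```python
# Sketch: verify the copy by listing the blobs in the destination container.
# The connection string and container name are placeholders for your destination account.
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=<destination-account>;AccountKey=<key>"
container = BlobServiceClient.from_connection_string(conn_str).get_container_client("destination")

for blob in container.list_blobs():
    print(blob.name, blob.size)  # the copied file should appear here
```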


You can observe how seamlessly we migrated data from one location to another using this Data Factory tool. With this tool, you can migrate your on-premises data to Azure Cloud or a SQL database, and vice versa. Azure Data Factory offers various use cases; for instance, you can automate the data migration process using Azure Functions and event triggers. Whenever there are changes in the storage account, Azure Functions will trigger the pipeline.
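As a sketch of that automation idea, an Azure Function with a blob trigger (classic v1 Python model, with the trigger binding declared separately in function.json) could start the pipeline whenever a new file lands in the source container. The resource names are placeholders, and the function app would need an identity with permission to run Data Factory pipelines.

```python
# Sketch: a blob-triggered Azure Function (v1 Python programming model) that starts the
# ADF pipeline whenever a new file appears in the monitored container. Resource names
# are placeholders, and the blob trigger binding itself lives in function.json.
import logging

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-rg"
FACTORY_NAME = "my-adf"
PIPELINE_NAME = "intellipaat_copy_pipeline"


def main(myblob: func.InputStream) -> None:
    logging.info("New blob detected: %s (%s bytes)", myblob.name, myblob.length)

    # Uses the function app's managed identity (or a local az login while debugging).
    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Kick off the copy pipeline and log the run ID for later monitoring.
    run = adf_client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME, parameters={})
    logging.info("Started pipeline run %s", run.run_id)
```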

Conclusion

In conclusion, Azure Data Factory is a powerful cloud-based data integration service that allows organizations to create, schedule, and manage data pipelines. It enables data integration scenarios such as data movement, data transformation, and data flow.

Additionally, it offers a wide range of features and integration options that can be tailored to meet the specific needs of any organization. Overall, Azure Data Factory is an essential tool for organizations that want to take advantage of the benefits of cloud computing while effectively managing their data integration process.

If you are looking to start your career or even elevate your skills in the field of Data Engineering, enroll today in our comprehensive Azure Data Engineer Certification Course for the DP-203 Exam or join Intellipaat’s Master’s in Power BI for Azure Data Factory.

If you’re eager to explore Azure Data Factory in-depth, feel free to join Intellipaat’s Azure Community and get answers to your queries.
