First of all, let's look at the topics that we will cover in this blog:

- What is TensorFlow?
- How does TensorFlow work?
- Why is TensorFlow popular?
- Components of TensorFlow
- TensorFlow Architecture for Building Models
- Basics of TensorFlow

**What is TensorFlow?**

TensorFlow is one of the most in-demand tools used by ML/AI Engineers. It is an open-source framework developed by Google, which is used to build various Machine Learning and Deep Learning models.

TensorFlow helps us train and run neural networks for tasks such as image recognition, natural language processing, and digit classification. Moreover, the same models built during development can be used to serve predictions at various scales.

The main objective of TensorFlow is not just the development of deep neural networks; it also focuses on reducing the complexity of running computations on large numerical datasets. Since Deep Learning models require a lot of computation to attain good accuracy, companies started using TensorFlow, which Google eventually made available to everyone as open source.


**How does TensorFlow work?**

One of the best things about TensorFlow is that it provides a feature to create structures for our Machine Learning models. These structures are made of dataflow graphs, which denote the functionality that we want to implement. A dataflow graph consists of a set of nodes in a well-defined order, where we can specify the methods of computation.

The dataflow graphs also show us how the data moves through the graph as it is processed. The above diagram gives more clarity about the mechanism of TensorFlow.

While using TensorFlow for our applications, the data that we feed into the model should be in the form of a multidimensional array. These multidimensional arrays are known as tensors, and they are very helpful when dealing with massive amounts of data.

In a graph, every node will represent a mathematical operation, while each connection or the edge between nodes will be a multidimensional data array (tensor).
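To make this node/edge picture concrete, here is a minimal sketch using the graph-mode API (via the `tf.compat.v1` module of TensorFlow 2.x, which we assume is installed): each named operation becomes a node, and the tensors flowing between them are the edges.

```python
import tensorflow as tf

tf1 = tf.compat.v1

# Build a small dataflow graph: three op nodes connected by tensor edges
graph = tf1.Graph()
with graph.as_default():
    a = tf1.constant([[1.0, 2.0]], name="a")    # node "a", emits a 1x2 tensor
    b = tf1.constant([[3.0], [4.0]], name="b")  # node "b", emits a 2x1 tensor
    c = tf1.matmul(a, b, name="c")              # node "c", consumes both edges

# Every node in the graph is an operation
print([op.name for op in graph.get_operations()])  # ['a', 'b', 'c']

# Running the graph flows the tensors along the edges into the MatMul node
with tf1.Session(graph=graph) as sess:
    print(sess.run(c))  # [[11.]]
```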

**Why is TensorFlow popular?**

Here are a few reasons for the popularity of TensorFlow:

- As it is open source and freely available, TensorFlow has become one of the most widely used libraries for developing AI-based applications.
- The TensorFlow library integrates various APIs to construct Deep Learning architectures such as convolutional neural networks and recurrent neural networks.
- The TensorFlow framework is based on the computation of dataflow graphs. These graphs enable developers to represent the structure of a neural network.
- The framework also enables them to debug the application.
- Since its primary API is in Python, it is easy to learn and implement.
- TensorFlow supports both C++ and Python APIs, which makes development easier than in other frameworks used for the same purpose.
- In the early days of AI/ML development, engineers had to build each mechanism of an application without the help of any library or framework. With the emergence of frameworks such as TensorFlow, developing complex applications has become much easier.
- The library and its packages provide thousands of built-in functions that enable developers to avoid writing complex, time-consuming code.
- Moreover, developers who are not comfortable with C++ or Python can use Java or R as well, as these languages have integrations with TensorFlow.
- Another major advantage of TensorFlow is that it enables developers to work with both GPUs and CPUs.
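As a quick illustration of the last point, current TensorFlow (2.x) can report which devices it sees; on a machine without a GPU, the second list is simply empty.

```python
import tensorflow as tf

# List the devices TensorFlow can place operations on
print(tf.config.list_physical_devices('CPU'))  # at least one CPU device
print(tf.config.list_physical_devices('GPU'))  # empty if no GPU is available
```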

*Enroll in our **Artificial Intelligence Certification** to start a bright career as an AI Engineer.*

**Components of TensorFlow**

There are various components of TensorFlow that help us create and execute programs, and they include tensors and graphs. Now, let's understand them in detail.

**Tensor**

The name 'TensorFlow' is derived from its core structure: the tensor. All computations in TensorFlow are carried out on tensors. Now, what exactly is a tensor? A tensor is an n-dimensional vector or matrix that can hold data of any supported type. All values in a tensor carry the same data type with a known (or partially known) shape, and the dimensionality of the tensor is defined by the shape of the input data.
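A short sketch (TensorFlow 2.x eager style) of tensors of different ranks, showing that each one carries a shape and a single data type:

```python
import tensorflow as tf

scalar = tf.constant(7)                  # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])    # rank 1, shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])   # rank 2, shape (2, 2)

print(scalar.shape, scalar.dtype)  # all elements share one dtype
print(vector.shape, vector.dtype)
print(matrix.shape, matrix.dtype)
```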

A tensor may be derived from the input data or computed as the outcome of an operation. All the functions/methods are carried out in a graph defined using the TensorFlow library. The graph is a sequence of operations that are carried out consecutively. Each operation represented in a graph is known as an op node, and these nodes are related to each other. The graph describes the op nodes and the connections between them, while the edges between the nodes carry the data that flows from one operation to the next.

*Looking to get started with TensorFlow? Check out our **TensorFlow Tutorial** now.*

Further, let's look at graphs in detail.

**Graphs**

A graph is one of the important components that enable the graphical representation of the programmed process. Therefore, we use a graph framework in TensorFlow to represent complex ML/AI processes. Graphs help us collect and describe the sequence of computations that we want our model to perform. Below are some of the advantages of using graphs:

- We can run graphs on CPUs, GPUs, and mobile operating systems.
- The portability of these graphs enables us to save them for performing computations in the future.
- We can easily visualize which operations are being performed and how we can get the output with the help of nodes and edges represented by the graphs.

Now, as we understand why we use graphs, let’s discuss and learn about dataflow graphs.

When we develop a complex Deep Learning model, it contains many intricate processes, with the input data stored in tensors. Using the data in the tensors, we need to define the flow of execution to perform the computations correctly. For this, we use dataflow graphs, which help us visualize the flow of data. A dataflow graph is made of nodes and edges: the nodes show where computation is performed, and the edges represent the data that is transferred between computations.
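In TensorFlow 2.x, such a dataflow graph can be traced from ordinary Python code with `tf.function`; the sketch below (the function name `affine` is our own) lists the op nodes the tracer creates from a simple computation.

```python
import tensorflow as tf

@tf.function
def affine(x):
    return 2.0 * x + 1.0

# Trace the function into a dataflow graph for float32 vectors
concrete = affine.get_concrete_function(tf.TensorSpec([None], tf.float32))

# Each operation is a node; tensors flow along the edges between them
op_types = [op.type for op in concrete.graph.get_operations()]
print(op_types)  # includes 'Placeholder' (the input) and 'Mul'
```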

*What are TensorFlow and Keras? Check out the key differences between them in our comparison blog on **Keras vs TensorFlow**.*

**TensorFlow Architecture for Building Models**

In this section of the blog on 'What is TensorFlow?', we will discuss the architecture of TensorFlow. Its architecture follows the standard Machine Learning workflow, although the components used in TensorFlow are different. The architecture consists of three parts:

- **Data preprocessing**: Here, we prepare the data before feeding it to the model that we need to build. This includes removing duplicate values, feature scaling, standardization, and many other tasks.
- **Model building**: The next step after data preprocessing is model building, where we create our model using various algorithms.
- **Model training and evaluation**: The final step after building our model is training and evaluating it to check whether the model generates accurate output or not.
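The three steps above can be sketched end to end with tf.keras; everything here (the synthetic dataset, the layer sizes, the epoch count) is an illustrative assumption, not a prescribed recipe.

```python
import numpy as np
import tensorflow as tf

# 1. Data preprocessing: synthetic data, standardized feature-wise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")   # toy binary labels
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Model building: a small feed-forward classifier
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 3. Model training and evaluation
model.fit(X, y, epochs=5, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"accuracy: {acc:.2f}")
```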

*Preparing for job interviews? Check out the most asked **TensorFlow Interview Questions and Answers** now!*

**Basics of TensorFlow**

In this section, we will get to know the basics of TensorFlow and the various elements of a program. First, let's look at the two basic concepts on which TensorFlow works. They are listed below:

- **Constructing a computational graph**: The first step is to construct a graph with the help of code.
- **Implementing the computational graph**: Then, to implement the graph, we have to create a session. Without creating a session, graphs cannot be executed. We will learn more about sessions when we discuss the components of a program.

Further, let's discuss the components of a program used to store and manipulate data in TensorFlow.

**Constants**

Similar to constants in other programming languages, constants in TensorFlow are immutable: once a constant is defined, its value cannot be changed during the execution of the program. We can use the below command to create a constant:

**Syntax**:

tf.constant()

**Example**:

# One-dimensional constant
x = tf.constant([1,2,3,4,5,6], dtype=tf.float64)

# We can also give a shape to the tensor
tf.constant([10,20,30,40,50,60,70,80,90], shape=(3,3))

Output:
array([[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]])

**Variables**

Variables enable us to change the values while implementing a program. If we are working with a supervised learning algorithm, it needs several iterations to train the model for generating accurate results. The objective is to reduce the error by trying out different values. Here, we cannot use constants to store the values. Therefore, in this case, the variables help us iteratively change the values to evaluate the models, using different parameters (values). Also, variables are known as mutable tensors.

**Syntax**:

tf.Variable(argument 1, argument 2)

**Example**:

# Creating variables
m = tf.Variable([.3], dtype=tf.float32)
x = tf.Variable([-.3], dtype=tf.float32)

# Creating a constant
b = tf.constant([1.0], dtype=tf.float32)

# Linear regression model using variables and a constant
lin_mod = m*x + b
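To see why mutability matters, here is a small sketch (TensorFlow 2.x eager style; the learning rate 0.05 and the target are our own choices) of a single gradient-descent-style update that nudges a variable toward a better value:

```python
import tensorflow as tf

m = tf.Variable(0.0)  # a mutable tensor we want to fit so that m * 3 == 6

with tf.GradientTape() as tape:
    loss = (m * 3.0 - 6.0) ** 2

grad = tape.gradient(loss, m)  # d(loss)/dm at m = 0 is -36
m.assign_sub(0.05 * grad)      # mutate the variable in place

print(float(m))  # moved from 0.0 to 1.8, i.e. toward the optimum 2.0
```

A constant could not store `m` here, since each training iteration must overwrite its value.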

**Placeholders**

Placeholders are a special type of node in TensorFlow that enables us to feed data into the graph from outside. Typically, placeholders help us load data from the local system in the form of a CSV file, an image, or any other format, and they allow us to assign values later. To pass a value to a placeholder at run time, we use the feed_dict argument of a session's run() method. We use the following command to create a placeholder:

**Syntax**:

tf.placeholder()

**Example**:

x = tf.placeholder(tf.float32)
y = x * 2
sess = tf.Session()
output = sess.run(y, feed_dict={x: 3.0})

**Sessions**

All computations in a TensorFlow program are represented by a graph. However, creating a graph is not sufficient as we are designing a set of programs to execute through the graph. Therefore, we need to execute the graph, and for that, we use sessions.

A session enables us to allot resources for the AI/DL models, and it maintains the record of actual values. It can provide memory to save the current state of a variable. Also, we execute the session to measure the performance of the model by evaluating the logic contained by the nodes.

**Syntax:**

tf.Session()

**Example:**

# Creating constants
m = tf.constant(18.0)
n = tf.constant(4.0)

# Defining the operation
k = m * n

# Executing the session
session = tf.Session()
print(session.run(k))

Here, without creating a session, we cannot execute the program and the logic.

In a nutshell, within a program, constants, variables, and placeholders enable data handling, after which we must run a session to execute the program.
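Putting it together, here is a minimal sketch in the TF 1.x style described above (runnable in TensorFlow 2.x through the `tf.compat.v1` module) that combines a constant, a variable, a placeholder, and a session:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # use graph-and-session execution, TF 1.x style

c = tf1.constant(2.0)            # constant: immutable value
v = tf1.Variable(1.0)            # variable: mutable, must be initialized
p = tf1.placeholder(tf.float32)  # placeholder: value supplied at run time
result = c * v + p

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    print(sess.run(result, feed_dict={p: 3.0}))  # 2.0 * 1.0 + 3.0 = 5.0
```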

With this, we have come to the end of the 'What is TensorFlow?' blog. In this blog, we learned all that we needed to know about TensorFlow and its related concepts.

*If you have any doubts, you can comment below or visit our** AI & Deep Learning Community**.*
