
Deep Learning with TensorFlow - Use Case


Image recognition and image classification are among the most basic yet most popular applications of Deep Learning. But how do we apply Deep Learning algorithms to real-world problems? We need a platform for that, and this is where TensorFlow comes into the picture. With the help of this open-source Deep Learning platform, we can train and test our Deep Learning models.



The primary agenda of this tutorial is to spark an interest in Deep Learning through a real-world example. It highlights a use case implementation of Deep Learning with TensorFlow.
Here we will use the MNIST dataset to train and test our very first Deep Learning model. Important theoretical aspects of the network are covered at the very beginning of this tutorial.
If you have missed the previous parts of this tutorial series, do go back and check them first.

Introduction to Image Recognition:

On a day-to-day basis, we feed our brains an enormous amount of data, and image data is a big part of it. Just the way the human brain learns from this data, we can make a machine imitate the same with the help of Artificial Intelligence and Deep Learning.
Image recognition and classification are powered by AI and Deep Learning, and these applications have been embedded in many industries over the past few years.
Here, we will take up an image recognition example with the help of the MNIST dataset.


Feed forward network:

Typically, a feed-forward neural network consists of three layers of neurons: an input layer, a hidden layer, and an output layer. In a feed-forward network, neurons are connected in a forward-only fashion: input units are connected to the nodes in the hidden layer, which are in turn connected to the units in the output layer.
[Image: a feed-forward network with input, hidden, and output layers]

Backpropagation

Backpropagation is a method used to adjust the weights of neurons in a feed-forward neural network based on the error that occurs during training.
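To make the idea concrete, here is a minimal, hypothetical sketch (in plain NumPy, not part of the original tutorial) of the gradient-descent weight update that backpropagation drives; the numbers are made up for illustration:

import numpy as np

learning_rate = 0.01
weights = np.array([0.5, -0.3])     # current weights of one neuron
gradients = np.array([0.2, -0.1])   # error gradients w.r.t. each weight, as backpropagation would compute them
# Nudge each weight a small step against its gradient to reduce the error
weights = weights - learning_rate * gradients
print(weights)                      # [ 0.498 -0.299]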


What is an Activation Function?

Activation functions are a very useful feature of artificial neural networks: they decide whether a neuron should be activated or not. After we apply this nonlinear transformation to the input signal, we get a transformed output, which is in turn sent to the next layer of neurons as input. If we don't apply an activation function to the input signal, the output simply becomes a linear function. Some of the useful activation functions are the Identity function, Binary Step function, Sigmoid function, ReLU function, and Softmax. As we move ahead in this tutorial, we will see the softmax activation function in use, so let us discuss it beforehand.
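For intuition, here is a small NumPy sketch (an illustration added here, not from the original text) of a few of the activation functions named above:

import numpy as np

def binary_step(x):
    return np.where(x >= 0, 1, 0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

z = np.array([-2.0, 0.0, 3.0])
print(binary_step(z))   # [0 1 1]
print(sigmoid(z))       # approximately [0.119 0.5 0.953]
print(relu(z))          # [0. 0. 3.]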


Softmax:

The softmax function is useful when dealing with classification problems that have more than two classes. It gives the probability of the input belonging to each class, with the help of which we decide the output. Keep in mind that the softmax function is used in the output layer of the classifier.
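Here is a minimal NumPy sketch of the softmax computation, with made-up scores for three classes (for illustration only):

import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

scores = np.array([2.0, 1.0, 0.1])   # hypothetical raw outputs of the last layer
print(softmax(scores))               # approximately [0.659 0.242 0.099]; the probabilities sum to 1

Notice that the largest score gets the largest probability, which is why the predicted class is simply the argmax of the softmax output.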



Use Case Implementation:

Dataset Loading:

We will be using the same (MNIST) dataset that we used in our Deep Learning Introduction tutorial. MNIST is an open-source dataset containing a collection of handwritten digits, and it is one of the most popular Deep Learning datasets available on the internet.

About MNIST:

  • It has 70,000 images in 10 classes (the digits 0 to 9).
  • Out of those 70,000 images, 60,000 form the training set and 10,000 the test set.


Why did we choose the MNIST Dataset?

Data gathering and data preparation is a backbreaking, time-consuming task (we will see that in future examples). For now, to get an idea of model training, we take the MNIST dataset, where the data comes in the simplest, readiest form to start working with.
Let's get started.

Program Structure:

  • Load Dataset
  • Model Making
  • Training Model


Load data:

  • Import the tensorflow library.
  • We will access the dataset through the tensorflow.examples.tutorials.mnist module.
  • The command below downloads the dataset into the mentioned directory, which in this case is "D:\\Course\\data".
  • It then reads the dataset and stores it in mnist.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("D:\\Course\\data", one_hot=True)
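As a quick sanity check (not in the original code), we can verify the split sizes mentioned earlier; note that this loader holds out 5,000 of the 60,000 training images as a validation set by default:

print(mnist.train.images.shape)        # (55000, 784)
print(mnist.validation.images.shape)   # (5000, 784)
print(mnist.test.images.shape)         # (10000, 784)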

Now that we have imported both the library and the dataset, let us take a look at one of the images from the dataset (the one at index 1).

mnist.train.images[1]

Executing this code, we get an array of values ranging from 0 to 1, depending on the grayscale value of each pixel in the image.
To view this array as an image, we will import the following libraries.

  • Import numpy
  • Import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Now,

  • We will store the array of the image at index 1 in the variable image.
  • We reshape the image to 28 x 28 pixels.
  • And using matplotlib, we show the image.
image = mnist.train.images[1]
image = image.reshape(28, 28)
plt.imshow(image, cmap='gray')
plt.show()

And if we run this code, we will get the following image as the output.
[Output image]
Similarly, let us look at the image at index 100:

image = mnist.train.images[100]
image = image.reshape(28, 28)
plt.imshow(image, cmap='gray')
plt.show()

Output:
[Output image]
We can also see the label of this particular image by using the command shown below:

mnist.train.labels[100]

Output:

[0,0,0,0,0,0,0,1,0,0]

This is due to the one_hot=True parameter: our label shows that the position corresponding to the digit 7 holds the value 1.
Similarly, we can see the labels of the images from the 100th to the 200th position with the following command:

mnist.train.labels[100:200]
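If you prefer a plain digit instead of a one-hot vector, np.argmax recovers the position of the 1 (a small convenience sketch, assuming numpy is imported as shown earlier):

print(np.argmax(mnist.train.labels[100]))   # 7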


Model making:

In our model, each input image has 28 * 28 = 784 pixels, which means the number of nodes in the input layer will be 784. Let us say we make three hidden layers, each one of 500 nodes, and we will pass 100 inputs per batch. We know that the output of the model should be a digit from 0 to 9, so the number of nodes in the output layer will be 10.
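As a side note, this architecture already implies a fair number of trainable parameters; here is a quick back-of-the-envelope count (an illustrative calculation, not from the original text):

# parameters per layer = inputs * outputs (weights) + outputs (biases)
params = (784*500 + 500) + (500*500 + 500) + (500*500 + 500) + (500*10 + 10)
print(params)   # 898510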

  • Define nodes
  • Define Placeholders
  • Define hidden layers


Define nodes:

  • No. of nodes in the input layer = 784
  • No. of hidden layers = 3
  • No. of nodes in each hidden layer = 500
  • No. of nodes in the output layer = 10
n_hidden_h1 = 500
n_hidden_h2 = 500
n_hidden_h3 = 500
n_classes = 10
batch_size = 100

Define placeholders:

Now we need to define placeholders to pass data into the graph.

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

Here we have passed the parameter [None, 784]: each image is flattened into a vector of 784 values, and None means we can feed any number of such images at once. Before moving ahead, let me explain the significance of weights and biases in our network.
Bias: It refers to an extra input to a neuron which makes sure that even when all the inputs are 0, there is still going to be an activation in the neuron.
[Image: bias in a neuron]
Weights: A weight holds the strength of the connection between units in a network and decides how much influence the input will have on the output. You can see in the network shown above how every connection has a weight defined on it. During backpropagation, these weights are adjusted.
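To see weights and bias in action, here is a single-neuron sketch in plain NumPy (made-up numbers, for illustration only):

import numpy as np

inputs = np.array([0.0, 0.0])    # all inputs are 0...
weights = np.array([0.4, -0.6])
bias = 0.5
activation = np.dot(inputs, weights) + bias
print(activation)                # 0.5 -- the bias alone still activates the neuron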


Define hidden layers:

Now we will define the hidden layers of the network. We have assigned three hidden layers, each one with 500 nodes; the weights and biases are initialized with random values drawn from a normal distribution.

hidden_layer_1 = {'weights': tf.Variable(tf.random_normal([784, n_hidden_h1])), 'biases': tf.Variable(tf.random_normal([n_hidden_h1]))}
hidden_layer_2 = {'weights': tf.Variable(tf.random_normal([n_hidden_h1, n_hidden_h2])), 'biases': tf.Variable(tf.random_normal([n_hidden_h2]))}
hidden_layer_3 = {'weights': tf.Variable(tf.random_normal([n_hidden_h2, n_hidden_h3])), 'biases': tf.Variable(tf.random_normal([n_hidden_h3]))}
output_layer = {'weights': tf.Variable(tf.random_normal([n_hidden_h3, n_classes])), 'biases': tf.Variable(tf.random_normal([n_classes]))}

Now we will write the layer equation (output = input x weights + biases) in TensorFlow code. For the first hidden layer it looks like this:

l1 = tf.matmul(x, hidden_layer_1['weights']) + hidden_layer_1['biases']

Putting all the layers together into one model function:

def neural_network_model(data):
    # Each layer computes relu(previous_output x weights + biases)
    l1 = tf.add(tf.matmul(data, hidden_layer_1['weights']), hidden_layer_1['biases'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hidden_layer_2['weights']), hidden_layer_2['biases'])
    l2 = tf.nn.relu(l2)
    l3 = tf.add(tf.matmul(l2, hidden_layer_3['weights']), hidden_layer_3['biases'])
    l3 = tf.nn.relu(l3)
    # No activation on the output layer; softmax is applied inside the cost function
    output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['biases'])
    return output
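As a quick sanity check (not part of the original code), the tensor returned by the model should have one score per class for every input image:

logits = neural_network_model(x)
print(logits.shape)   # (?, 10) -- a 10-way score vector per image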


Training and Running Network:

def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    #comment-1
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    #comment-2
    n_epochs = 10
    with tf.Session() as sess:
        #comment-3
        sess.run(tf.global_variables_initializer())
        for epoch in range(n_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                x_temp, y_temp = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: x_temp, y: y_temp})
                epoch_loss = epoch_loss + c
            print('Epoch', epoch, 'completed out of', n_epochs, 'loss:', epoch_loss)
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

train_neural_network(x)

#comment-1: The optimizer minimizes the cost, which is the backpropagation part of training.
#comment-2: Let us take a moment to understand what an epoch means here: the loader gives us 55,000 training images (5,000 of the 60,000 are held out for validation), which we pass in batches of 100, so 550 iterations make up 1 epoch, and here we limit training to 10 epochs.
We can say,

Feed forward + backpropagation over the full training set = 1 epoch

#comment-3: We initialize all the variables in our session with global_variables_initializer.
Output: Now we will run our network to see how the loss varies with each epoch, and we will also get to know the accuracy of the model.

[Output: the loss printed for each of the 10 epochs, followed by the final accuracy]

As you can see in the output shown above, the loss reduces with each epoch; this is due to the optimizer we have used in our network.
With this model we have gained an accuracy of about 95%. In the future, with better models and better networks, we will also learn how to increase that accuracy.
That is all for now. Hope this tutorial triggered some interest in Deep Learning.


