Basic difference between Deep Neural Network, Convolution Neural Network and Recurrent Neural Network

In this tutorial we will look at deep learning with Recurrent Neural Networks: the architecture of an RNN, a comparison between feedforward NNs and RNNs, variants of RNN, and autoencoders (AE), including their architecture and applications.

Deep neural networks | Convolution neural networks | Recurrent neural networks
Provides lift for classification and forecasting | Feature extraction & classification of images | For sequences of events, language models, time series, etc.
More than one hidden layer | More than one hidden layer | More than one hidden layer


Recurrent neural network:

Time series analysis, such as predicting a stock price at times t1, t2, and so on, can be done using a recurrent neural network. Each prediction depends on earlier data: to predict the value at time t2, the network uses the state information carried over from t1. A network that feeds its state back into itself in this way is called a recurrent neural network.
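To make the idea concrete, here is a minimal NumPy sketch of one recurrent step; the weight names and toy prices are hypothetical placeholders, not tied to any particular library. The new hidden state is computed from the current input and the state carried over from the previous time step.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state mixes the current
    input x_t with the previous hidden state h_prev."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy dimensions: 1 input feature (e.g. a price), 4 hidden units
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(1, 4))
W_hh = rng.normal(size=(4, 4))
b_h = np.zeros(4)

h = np.zeros(4)                      # state before any observation
for price in [101.2, 101.9, 102.4]:  # prices at t1, t2, t3
    h = rnn_step(np.array([price]), h, W_xh, W_hh, b_h)
# h now summarizes the whole price history seen so far
```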

Feedforward NN:

Inputs are fed to the network in batches, and information moves in only one direction, from one layer to the next. A feedforward network has no cyclic connections, so it does not retain information between inputs; it is the cyclic connections of an RNN that retain state and pass it forward in time.
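For contrast, a rough NumPy sketch of a feedforward pass with hypothetical weights: each batch is pushed straight through the layers, and no state survives from one batch to the next.

```python
import numpy as np

def feedforward(x, W1, b1, W2, b2):
    """A plain feedforward pass: information only moves input -> hidden -> output,
    and nothing is remembered between one batch and the next."""
    hidden = np.tanh(x @ W1 + b1)
    return hidden @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

batch = rng.normal(size=(8, 3))   # a batch of 8 samples, 3 features each
outputs = feedforward(batch, W1, b1, W2, b2)
```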

Long short term memory:
LSTMs are explicitly designed to address the long-term dependency problem: they contain gates that decide what to remember and what to forget. An RNN built from LSTM cells mitigates the vanishing gradient effect because errors can be passed back recursively through the cell state from one step to the next. The gates control the gradient flow and enable better preservation of "long-range dependencies".

Interested in becoming an AI expert? Click here to learn more in this Artificial Intelligence Course!

Key components of LSTM:

  • Gates – forget, memory, update, read
  • tanh(x) – values from -1 to 1
  • sigmoid(x) – values from 0 to 1
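As a quick numerical check of the two activation ranges listed above (the input values are purely illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(x))   # values between 0 and 1 -> used as "how much to keep" gates
print(np.tanh(x))   # values between -1 and 1 -> used for candidate cell content
```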
RNN | LSTM
Has loops | A special type of RNN
Maintains memory from the previous state | Maintains memory from previous and even other states
The length of the memory is very limited | The length of the memory is quite large



Architecture of LSTM:
As in Convolution Neural Networks, data from one layer is passed on to the next as it flows through the network. Inside an LSTM cell, the update happens in four steps:
Step 1: Forget gate – the earlier state, which holds the data to be remembered, is concatenated with the new input; the forget gate then decides which parts of the old information to keep and which to drop.
Step 2: Memory gate – determines how much of the new information should be stored in the memory and what percentage should be forgotten. Operations like element-wise products and additions are performed here.
Step 3: Update gate – part of the early state is forgotten, the gated operations are applied, and the cell state is updated.
Step 4: Write the final output.
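Assuming the standard LSTM formulation, the four steps above can be sketched as a single cell update in NumPy; the weight names, dictionary layout, and toy prices below are hypothetical placeholders, not a reference implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM cell update following the steps above.
    W, U, b hold the parameters of the forget (f), input (i),
    candidate (g) and output (o) transforms."""
    # Step 1: forget gate - decide what to drop from the old cell state,
    # based on the new input combined with the previous hidden state
    f = sigmoid(x_t @ W["f"] + h_prev @ U["f"] + b["f"])
    # Step 2: memory gate - decide how much new information to store
    i = sigmoid(x_t @ W["i"] + h_prev @ U["i"] + b["i"])
    g = np.tanh(x_t @ W["g"] + h_prev @ U["g"] + b["g"])
    # Step 3: update - forget part of the old state, add the gated new candidate
    c_t = f * c_prev + i * g
    # Step 4: write the final output (the new hidden state)
    o = sigmoid(x_t @ W["o"] + h_prev @ U["o"] + b["o"])
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Toy parameters: 1 input feature, 4 hidden units
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(1, 4)) for k in "figo"}
U = {k: rng.normal(size=(4, 4)) for k in "figo"}
b = {k: np.zeros(4) for k in "figo"}

h, c = np.zeros(4), np.zeros(4)
for price in [101.2, 101.9, 102.4]:
    h, c = lstm_step(np.array([price]), h, c, W, U, b)
```

Note how the element-wise products in steps 1-3 act as the gates: a gate value near 0 erases information, a value near 1 lets it pass through unchanged.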
If you are preparing for an Artificial Intelligence job, please go through these Top Artificial Intelligence Interview Questions And Answers.



Variants of RNN:

  • LSTM
  • GRU: Gated Recurrent Unit (see the sketch after this list)
  • End-to-end network
  • Memory network
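Assuming TensorFlow/Keras is available, the LSTM and GRU variants can be swapped into an otherwise identical sequence model; the layer sizes and input shape below are arbitrary choices for illustration.

```python
import numpy as np
import tensorflow as tf

# Two otherwise identical sequence models: one LSTM-based, one GRU-based.
lstm_model = tf.keras.Sequential([tf.keras.layers.LSTM(32), tf.keras.layers.Dense(1)])
gru_model = tf.keras.Sequential([tf.keras.layers.GRU(32), tf.keras.layers.Dense(1)])

# Both accept the same (batch, time steps, features) input, e.g. 30 daily prices.
x = np.random.rand(8, 30, 1).astype("float32")
print(lstm_model(x).shape, gru_model(x).shape)   # (8, 1) (8, 1)
```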

Applications of RNN:

  • Predicting stock prices
  • Speech recognition
  • Image captions
  • Word predictions
  • Language translation


Autoencoder: The aim of an autoencoder is to learn a compressed representation of the given data; the predicted output should be roughly the same as the input. It has only three layers: an input layer with bias (L1), a compressed/encoded layer (L2), and a prediction layer (L3). If a digit image is passed in, it is held in compressed form at L2, and the decoded version can be read off from layer L3.
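A minimal sketch of such a three-layer autoencoder, assuming TensorFlow/Keras; the 784-dimensional input corresponds to a flattened 28x28 digit image, and the 32-unit code size is an arbitrary choice.

```python
import tensorflow as tf

# A three-layer autoencoder: input (L1) -> compressed code (L2) -> reconstruction (L3).
inputs = tf.keras.Input(shape=(784,))                                 # L1: flattened 28x28 digit
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # L2: compressed form
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)   # L3: prediction ~ input

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Training uses the same array as both input and target, e.g.:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```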
Applications of autoencoder:

  • Data denoising
  • Dimensionality reduction
  • Image reconstruction
  • Image colorization

If you have any technical doubts or queries related to Artificial Intelligence, post them on Intellipaat Community.
