Introduction:
A recurrent neural network (RNN) is a type of artificial neural network designed to operate on time-series data or other data that comes in sequences. Ordinary feed-forward neural networks are intended only for data points that are independent of one another. If we have data in a sequence, where one data point depends on the preceding one, we must modify the neural network to account for these dependencies. RNNs incorporate the idea of memory: they store the states or information of prior inputs and use them to construct the sequence’s next output.
Why RNN?
RNNs were developed in response to several shortcomings of the feed-forward neural network, which:
- Cannot handle sequential data
- Takes into account only the current input
- Cannot remember prior inputs
The RNN addresses these problems. It can deal with sequential data, accepting both the current input and previously received inputs, and because of its internal memory it can remember those past inputs.
Working:
(Diagram source: https://www.simplilearn.com/tutorials/deep-learning-tutorial/rnn)
In a Recurrent Neural Network, information cycles through a loop back to the middle, hidden layer.
The input layer ‘x’ receives the network’s input, processes it, and passes it to the middle layer.
The middle layer ‘h’ can be made up of several hidden layers, each having its own activation functions, weights, and biases. In a plain feed-forward network these parameters are independent across layers, so the network has no memory of previous inputs.
The Recurrent Neural Network instead ties these parameters together: the same activation functions, weights, and biases are used at every step, resulting in identical values for each hidden layer. Rather than constructing numerous hidden layers, it generates one and loops over it as many times as necessary.
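The looping described above can be sketched in plain NumPy. This is a minimal illustration; the variable names, sizes, and random weights are assumptions made for the example, not part of the original:

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h, h0=None):
    """Run a vanilla RNN over a sequence of input vectors.

    xs: array of input vectors, one per time step.
    Returns the hidden state at every time step.
    """
    h = np.zeros(W_hh.shape[0]) if h0 is None else h0
    hs = []
    for x in xs:
        # The same weights are reused at every step: this is the single
        # shared hidden layer being looped over, rather than many layers.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        hs.append(h)
    return np.array(hs)

# Toy example: 5 time steps of 3-dimensional inputs, hidden size 4.
rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))
W_xh = rng.normal(size=(4, 3)) * 0.1
W_hh = rng.normal(size=(4, 4)) * 0.1
b_h = np.zeros(4)
hs = rnn_forward(xs, W_xh, W_hh, b_h)
print(hs.shape)  # (5, 4): one hidden state per time step
```

Because each hidden state depends on the previous one, the loop is exactly the "memory" the text describes: information from earlier inputs flows into every later step through `h`.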
Types of RNN:
There are four types of RNN:
- One to One
- One to Many
- Many to One
- Many to Many
- One to One: This form of neural network is known as a vanilla neural network. It handles general machine learning problems that have a single input and a single output.
- One to Many: This neural network maps a single input to several outputs. Image captioning is an example: one image in, a sequence of words out.
- Many to One: This RNN accepts a sequence of inputs and produces a single output. Sentiment analysis is a good example, in which a given sentence is classified as expressing positive or negative sentiment.
- Many to Many: This RNN takes a sequence of inputs and produces a sequence of outputs. One example is machine translation.
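The many-to-one shape can be sketched as follows, with the final hidden state feeding a single output. All sizes and weights here are illustrative assumptions; a real sentiment classifier would use trained weights:

```python
import numpy as np

def many_to_one(xs, W_xh, W_hh, W_hy, b_h, b_y):
    """Many-to-one RNN: consume a whole input sequence, then emit one
    output (e.g. a sentiment score) from the final hidden state."""
    h = np.zeros(W_hh.shape[0])
    for x in xs:                      # read the entire input sequence
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return W_hy @ h + b_y             # single output at the very end

rng = np.random.default_rng(1)
xs = rng.normal(size=(6, 3))          # a sequence of 6 input vectors
W_xh = rng.normal(size=(4, 3)) * 0.1
W_hh = rng.normal(size=(4, 4)) * 0.1
W_hy = rng.normal(size=(2, 4)) * 0.1  # 2 scores: positive / negative
y = many_to_one(xs, W_xh, W_hh, W_hy, np.zeros(4), np.zeros(2))
print(y.shape)  # (2,)
```

The other shapes differ only in where outputs are read: one-to-many feeds outputs back in as inputs, and many-to-many reads an output at every step.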
Problems solved using RNN:
- Generating Text: Given a sequence of words, we want to predict the probability of each next word given the words that came before it.
- Machine Translation: Machine translation is analogous to language modeling, except that the input is a sequence of words in a source language (e.g. German) and the output is a sequence of words in a target language (e.g. English).
- Speech Recognition: Given an input sequence of acoustic signals from a sound wave, we can predict a sequence of phonetic segments together with their probabilities.
- Generating Image Descriptions: RNNs have been used, together with Convolutional Neural Networks, as part of models that generate descriptions for unlabelled images.
- Chatbots: Chatbots can respond to your questions. A sequence of words is provided as input, and a sequence of words is generated as output.
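A text-generation loop of this kind (one start token in, a sequence out) can be sketched as below. The vocabulary and weights are toy assumptions; with untrained random weights the output is gibberish, and in practice the weights would come from training:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def generate(h0, W_hh, W_xh, W_hy, b_h, b_y, start_id, vocab, n_steps, rng):
    """Sample text from an RNN by feeding each sampled token back in
    as the next input (one-to-many generation)."""
    h, token = h0, start_id
    out = []
    for _ in range(n_steps):
        x = np.eye(len(vocab))[token]        # one-hot encode current token
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        p = softmax(W_hy @ h + b_y)          # distribution over next token
        token = rng.choice(len(vocab), p=p)  # sample the next token
        out.append(vocab[token])
    return "".join(out)

rng = np.random.default_rng(2)
vocab = list("abc ")                         # toy 4-character vocabulary
V, H = len(vocab), 8
W_xh = rng.normal(size=(H, V)) * 0.1
W_hh = rng.normal(size=(H, H)) * 0.1
W_hy = rng.normal(size=(V, H)) * 0.1
text = generate(np.zeros(H), W_hh, W_xh, W_hy,
                np.zeros(H), np.zeros(V), 0, vocab, 10, rng)
print(len(text))  # 10
```

The same feed-the-output-back-in loop underlies text generation, machine translation decoding, and chatbot responses; only the training data and vocabulary differ.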