How RNNs Work
- Input Sequence: The network processes sequential data, such as time-series data or text.
- LSTM Layer: A gated variant of the recurrent layer that retains memory over long sequences by learning what to store, forget, and expose at each step.
- Memory Cell: Maintains information about past inputs to capture temporal dependencies.
- Dense Layers: Perform transformations on the features extracted by the recurrent layers to produce the final output.
- Feedback Loop: RNNs incorporate feedback by feeding the hidden state of one time step back in as an input to the next (see the sketch after this list).
- Output Layer: Produces the final sequence output or prediction.
- Applications: Used in tasks like language modeling, speech recognition, and time-series forecasting.
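To make the feedback loop concrete, here is a minimal NumPy sketch of one recurrent layer unrolled by hand. The sizes, weight names (W_x, W_h, b), and random data are illustrative assumptions, not taken from the text above; the point is that each step combines the current input with the previous hidden state:

import numpy as np

seq_len, input_dim, hidden_dim = 5, 3, 4  # illustrative sizes

rng = np.random.default_rng(0)
W_x = rng.normal(size=(input_dim, hidden_dim))   # input-to-hidden weights
W_h = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden (feedback) weights
b = np.zeros(hidden_dim)

x = rng.normal(size=(seq_len, input_dim))  # one input sequence
h = np.zeros(hidden_dim)                   # initial hidden state: empty memory

for t in range(seq_len):
    # h_t = tanh(x_t @ W_x + h_{t-1} @ W_h + b): the previous state feeds back in
    h = np.tanh(x[t] @ W_x + h @ W_h + b)

print(h)  # final hidden state summarizing the whole sequence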
RNN Code Example
Here's how we can define the layers of a simple RNN in Keras:
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(100, 64)),  # input: sequences of length 100 with 64 features per step
    layers.SimpleRNN(128, activation="tanh", return_sequences=True),  # RNN layer 1: emits the hidden state at every step
    layers.SimpleRNN(64, activation="tanh"),  # RNN layer 2: emits only the final hidden state
    layers.Dense(10, activation="softmax"),  # output layer: probabilities over 10 classes
])
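As a quick sanity check, the model can be compiled and run on random data. The optimizer, loss, batch size, and label range below are illustrative assumptions chosen to match the 10-way softmax output:

import numpy as np

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.random((32, 100, 64)).astype("float32")  # batch of 32 random sequences
y = np.random.randint(0, 10, size=(32,))               # integer class labels in [0, 10)
model.fit(x, y, epochs=1)                              # one pass, just to confirm shapes line up

Swapping layers.SimpleRNN for layers.LSTM (with the same arguments) gives the gated variant described in the list above with no other changes.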