Transformer with PyTorch (NLP)

How to build a Transformer model using PyTorch. In this tutorial, we walk step by step through implementing the Transformer architecture, one of the most influential models in modern machine learning, using PyTorch.

12/4/2023

Why Transformers, and why PyTorch?

Transformers are game-changers, especially in Natural Language Processing (NLP) tasks like language translation and text summarization. They've outperformed older LSTM networks because they can capture long-range dependencies and process sequences in parallel.

And why PyTorch? It's one of the most popular frameworks among machine learning practitioners because it's easy to use, flexible, and fast, which makes it a go-to toolkit for anyone working in machine learning and AI.

What are Transformers?

Transformers, introduced in the paper "Attention is All You Need" by Vaswani et al., have become a game-changer in many NLP tasks thanks to their unique design and effectiveness. At the core of Transformers lies the attention mechanism, particularly the concept of 'self-attention,' which lets the model weigh and prioritize different parts of the input data. This feature is crucial for handling long-range dependencies in data. Essentially, it's a way for the model to focus on different parts of the input when generating an output.

This mechanism allows the model to consider various words or features in the input sequence, assigning each a 'weight' indicating its importance for producing the desired output. For example, in a sentence translation task, while translating a specific word, the model might give higher attention weights to words that are grammatically or semantically related to the target word. This capability enables the Transformer to capture dependencies between words or features, regardless of their position in the sequence.

The impact of Transformers in NLP is massive. They've consistently outperformed traditional models in numerous tasks, showcasing their superior ability to comprehend and generate human language with more depth and nuance.

For those interested in diving deeper into NLP, I recommend checking out DataCamp's "Introduction to Natural Language Processing in Python" course. It's a valuable resource for gaining a better understanding of the field.

Building the Transformer Model

To build the Transformer model, the following steps are necessary:

  1. Importing the libraries and modules

  2. Defining the basic building blocks - Multi-head Attention, Position-Wise Feed-Forward Networks, Positional Encoding

  3. Building the Encoder block

  4. Building the Decoder block

  5. Combining the Encoder and Decoder layers to create the complete Transformer network

Importing the libraries and modules

To kick things off, we'll import the PyTorch library for its core functionality, the neural network module to create our neural networks, the optimization module to train these networks, and the data utility functions to handle our data seamlessly.

In addition, we'll import the standard Python math module for mathematical operations, and the copy module to create copies of complex objects. These tools lay the groundwork for defining our model's architecture, handling data, and setting up the training process.
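
The import block itself isn't reproduced in these notes; based on the description above, it would look something like this:

```python
import torch                     # core tensor library
import torch.nn as nn            # neural network building blocks
import torch.optim as optim      # optimizers for training
import torch.utils.data as data  # data utilities (datasets, loaders)
import math                      # math helpers (e.g. sqrt, log)
import copy                      # deep copies of complex objects
```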

Defining the basic building blocks: Multi-Head Attention, Position-wise Feed-Forward Networks, Positional Encoding

Multi-head Attention

The Multi-Head Attention mechanism calculates attention between every pair of positions in a sequence. It's made up of several "attention heads," each capturing different aspects of the input sequence.

For a deeper dive into Multi-Head Attention, take a look at the "Attention mechanisms" section of the Large Language Models (LLMs) Concepts course. You'll find more detailed insights there.

def scaled_dot_product_attention(self, Q, K, V, mask=None):

  • Calculating Attention Scores: attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k). Here, the attention scores are calculated by taking the dot product of queries (Q) and keys (K), and then scaling by the square root of the key dimension (d_k).

  • Applying Mask: If a mask is provided, it is applied to the attention scores to mask out specific values.

  • Calculating Attention Weights: The attention scores are passed through a softmax function to convert them into probabilities that sum to 1.

  • Calculating Output: The final output of the attention is calculated by multiplying the attention weights by the values (V).

def split_heads(self, x):

  • This method reshapes the input x into the shape (batch_size, num_heads, seq_length, d_k). It enables the model to process multiple attention heads concurrently, allowing for parallel computation.

def combine_heads(self, x):

  • After applying attention to each head separately, this method combines the results back into a single tensor of shape (batch_size, seq_length, d_model). This prepares the result for further processing.

def forward(self, Q, K, V, mask=None):

  • The forward method is where the actual computation happens:

    1. Apply Linear Transformations: The queries (Q), keys (K), and values (V) are first passed through linear transformations using the weights defined in the initialization.

    2. Split Heads: The transformed Q, K, V are split into multiple heads using the split_heads method.

    3. Apply Scaled Dot-Product Attention: The scaled_dot_product_attention method is called on the split heads.

    4. Combine Heads: The results from each head are combined back into a single tensor using the combine_heads method.

    5. Apply Output Transformation: Finally, the combined tensor is passed through an output linear transformation.

In summary, the MultiHeadAttention class encapsulates the multi-head attention mechanism commonly used in transformer models. It takes care of splitting the input into multiple attention heads, applying attention to each head, and then combining the results. By doing so, the model can capture various relationships in the input data at different scales, improving the expressive ability of the model.
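
Since the class code itself isn't reproduced in these notes, here is a minimal sketch consistent with the walkthrough above. The projection layer names (W_q, W_k, W_v, W_o) and the -1e9 mask fill value are assumptions based on the standard implementation, not necessarily the exact original code:

```python
class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.d_model = d_model
        self.num_heads = num_heads
        self.d_k = d_model // num_heads
        # Linear projections for queries, keys, values, and the final output
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)

    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        # Dot product of queries and keys, scaled by sqrt(d_k)
        attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)
        if mask is not None:
            # Mask out positions (e.g. padding or future tokens) with a large negative value
            attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
        attn_probs = torch.softmax(attn_scores, dim=-1)
        return torch.matmul(attn_probs, V)

    def split_heads(self, x):
        # (batch_size, seq_length, d_model) -> (batch_size, num_heads, seq_length, d_k)
        batch_size, seq_length, _ = x.size()
        return x.view(batch_size, seq_length, self.num_heads, self.d_k).transpose(1, 2)

    def combine_heads(self, x):
        # (batch_size, num_heads, seq_length, d_k) -> (batch_size, seq_length, d_model)
        batch_size, _, seq_length, _ = x.size()
        return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)

    def forward(self, Q, K, V, mask=None):
        # Linear projections, split into heads, attend, recombine, project out
        Q = self.split_heads(self.W_q(Q))
        K = self.split_heads(self.W_k(K))
        V = self.split_heads(self.W_v(V))
        attn_output = self.scaled_dot_product_attention(Q, K, V, mask)
        return self.W_o(self.combine_heads(attn_output))
```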

Position-wise Feed-Forward Networks

def __init__(self, d_model, d_ff):

  • d_model: Dimensionality of the model's input and output.

  • d_ff: Dimensionality of the inner layer in the feed-forward network.

  • self.fc1 and self.fc2: Two fully connected (linear) layers with input and output dimensions as defined by d_model and d_ff.

  • self.relu: ReLU (Rectified Linear Unit) activation function, which introduces non-linearity between the two linear layers.

def forward(self, x):

  • x: The input to the feed-forward network.

  • self.fc1(x): The input is first passed through the first linear layer (fc1).

  • self.relu(...): The output of fc1 is then passed through a ReLU activation function. ReLU replaces all negative values with zeros, introducing non-linearity into the model.

  • self.fc2(...): The activated output is then passed through the second linear layer (fc2), producing the final output.

In summary, the PositionWiseFeedForward class defines a position-wise feed-forward neural network that consists of two linear layers with a ReLU activation function in between. In the context of transformer models, this feed-forward network is applied to each position separately and identically. It helps in transforming the features learned by the attention mechanisms within the transformer, acting as an additional processing step for the attention outputs.
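
A minimal sketch of the class as described, reusing the imports from the earlier snippet:

```python
class PositionWiseFeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)  # expand to the inner dimension
        self.fc2 = nn.Linear(d_ff, d_model)  # project back to the model dimension
        self.relu = nn.ReLU()

    def forward(self, x):
        # Applied identically at every position of the sequence
        return self.fc2(self.relu(self.fc1(x)))
```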

Positional Encoding

Positional Encoding is used to inject the position information of each token in the input sequence. It uses sine and cosine functions of different frequencies to generate the positional encoding.

def __init__(self, d_model, max_seq_length):

  • d_model: The dimension of the model's input.

  • max_seq_length: The maximum length of the sequence for which positional encodings are pre-computed.

  • pe: A tensor filled with zeros, which will be populated with positional encodings.

  • position: A tensor containing the position indices for each position in the sequence.

  • div_term: A term used to scale the position indices in a specific way.

  • The sine function is applied to the even indices and the cosine function to the odd indices of pe.

  • Finally, pe is registered as a buffer, which means it will be part of the module's state but will not be considered a trainable parameter.

def forward(self, x):

  • The forward method simply adds the positional encodings to the input x.

  • It uses the first x.size(1) elements of pe to ensure that the positional encodings match the actual sequence length of x.

The PositionalEncoding class adds information about the position of tokens within the sequence. Since the transformer model lacks inherent knowledge of the order of tokens (due to its self-attention mechanism), this class helps the model to consider the position of tokens in the sequence. The sinusoidal functions used are chosen to allow the model to easily learn to attend to relative positions, as they produce a unique and smooth encoding for each position in the sequence.
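
A sketch of the PositionalEncoding class matching the walkthrough above (the 10000 base frequency follows the original paper; imports are reused from the earlier snippet):

```python
class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_length):
        super().__init__()
        pe = torch.zeros(max_seq_length, d_model)
        position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
        # Frequencies decrease geometrically across the embedding dimensions
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)  # sine on even indices
        pe[:, 1::2] = torch.cos(position * div_term)  # cosine on odd indices
        # Registered as a buffer: saved with the module, but not trainable
        self.register_buffer('pe', pe.unsqueeze(0))

    def forward(self, x):
        # Add the encodings for the first x.size(1) positions
        return x + self.pe[:, :x.size(1)]
```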

Building the Encoder Blocks

def __init__(self, d_model, num_heads, d_ff, dropout):

Parameters:

  1. d_model: The dimensionality of the input.

  2. num_heads: The number of attention heads in the multi-head attention.

  3. d_ff: The dimensionality of the inner layer in the position-wise feed-forward network.

  4. dropout: The dropout rate used for regularization.

Components:

  1. self.self_attn: Multi-head attention mechanism.

  2. self.feed_forward: Position-wise feed-forward neural network.

  3. self.norm1 and self.norm2: Layer normalization, applied to smooth the layer's input.

  4. self.dropout: Dropout layer, used to prevent overfitting by randomly setting some activations to zero during training.

def forward(self, x, mask):

Input:

  1. x: The input to the encoder layer.

  2. mask: Optional mask to ignore certain parts of the input.

Processing Steps:

  1. Self-Attention: The input x is passed through the multi-head self-attention mechanism.

  2. Add & Normalize (after Attention): The attention output is added to the original input (residual connection), followed by dropout and normalization using norm1.

  3. Feed-Forward Network: The output from the previous step is passed through the position-wise feed-forward network.

  4. Add & Normalize (after Feed-Forward): Similar to step 2, the feed-forward output is added to the input of this stage (residual connection), followed by dropout and normalization using norm2.

  5. Output: The processed tensor is returned as the output of the encoder layer.

The EncoderLayer class defines a single layer of the transformer's encoder. It encapsulates a multi-head self-attention mechanism followed by position-wise feed-forward neural network, with residual connections, layer normalization, and dropout applied as appropriate. These components together allow the encoder to capture complex relationships in the input data and transform them into a useful representation for downstream tasks. Typically, multiple such encoder layers are stacked to form the complete encoder part of a transformer model.
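
Putting these pieces together, an EncoderLayer sketch might look like the following; it reuses the MultiHeadAttention and PositionWiseFeedForward classes sketched earlier, with nn.LayerNorm and nn.Dropout standing in for the normalization and dropout components:

```python
class EncoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super().__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask):
        # Self-attention, then residual connection + layer norm
        attn_output = self.self_attn(x, x, x, mask)
        x = self.norm1(x + self.dropout(attn_output))
        # Feed-forward network, then residual connection + layer norm
        ff_output = self.feed_forward(x)
        x = self.norm2(x + self.dropout(ff_output))
        return x
```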

Building the Decoder Blocks

def __init__(self, d_model, num_heads, d_ff, dropout):

Parameters:

  1. d_model: The dimensionality of the input.

  2. num_heads: The number of attention heads in the multi-head attention.

  3. d_ff: The dimensionality of the inner layer in the feed-forward network.

  4. dropout: The dropout rate for regularization.

Components:

  1. self.self_attn: Multi-head self-attention mechanism for the target sequence.

  2. self.cross_attn: Multi-head attention mechanism that attends to the encoder's output.

  3. self.feed_forward: Position-wise feed-forward neural network.

  4. self.norm1, self.norm2, self.norm3: Layer normalization components.

  5. self.dropout: Dropout layer for regularization.

def forward(self, x, enc_output, src_mask, tgt_mask):

Input:

  1. x: The input to the decoder layer.

  2. enc_output: The output from the corresponding encoder (used in the cross-attention step).

  3. src_mask: Source mask to ignore certain parts of the encoder's output.

  4. tgt_mask: Target mask to ignore certain parts of the decoder's input.

Processing Steps:

  1. Self-Attention on Target Sequence: The input x is processed through a self-attention mechanism.

  2. Add & Normalize (after Self-Attention): The output from self-attention is added to the original x, followed by dropout and normalization using norm1.

  3. Cross-Attention with Encoder Output: The normalized output from the previous step is processed through a cross-attention mechanism that attends to the encoder's output enc_output.

  4. Add & Normalize (after Cross-Attention): The output from cross-attention is added to the input of this stage, followed by dropout and normalization using norm2.

  5. Feed-Forward Network: The output from the previous step is passed through the feed-forward network.

  6. Add & Normalize (after Feed-Forward): The feed-forward output is added to the input of this stage, followed by dropout and normalization using norm3.

  7. Output: The processed tensor is returned as the output of the decoder layer.

The DecoderLayer class defines a single layer of the transformer's decoder. It consists of a multi-head self-attention mechanism, a multi-head cross-attention mechanism (that attends to the encoder's output), a position-wise feed-forward neural network, and the corresponding residual connections, layer normalization, and dropout layers. This combination enables the decoder to generate meaningful outputs based on the encoder's representations, taking into account both the target sequence and the source sequence. As with the encoder, multiple decoder layers are typically stacked to form the complete decoder part of a transformer model.
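
A corresponding DecoderLayer sketch, again built from the classes sketched earlier:

```python
class DecoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super().__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.cross_attn = MultiHeadAttention(d_model, num_heads)
        self.feed_forward = PositionWiseFeedForward(d_model, d_ff)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_output, src_mask, tgt_mask):
        # Masked self-attention over the target sequence
        attn_output = self.self_attn(x, x, x, tgt_mask)
        x = self.norm1(x + self.dropout(attn_output))
        # Cross-attention over the encoder output
        attn_output = self.cross_attn(x, enc_output, enc_output, src_mask)
        x = self.norm2(x + self.dropout(attn_output))
        # Position-wise feed-forward network
        ff_output = self.feed_forward(x)
        x = self.norm3(x + self.dropout(ff_output))
        return x
```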

Next, the Encoder and Decoder blocks are brought together to construct the comprehensive Transformer model.

Combining the Encoder and Decoder layers to create the complete Transformer network

def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, num_layers, d_ff, max_seq_length, dropout):

Parameters:

  1. src_vocab_size: Source vocabulary size.

  2. tgt_vocab_size: Target vocabulary size.

  3. d_model: The dimensionality of the model's embeddings.

  4. num_heads: Number of attention heads in the multi-head attention mechanism.

  5. num_layers: Number of layers for both the encoder and the decoder.

  6. d_ff: Dimensionality of the inner layer in the feed-forward network.

  7. max_seq_length: Maximum sequence length for positional encoding.

  8. dropout: Dropout rate for regularization.

Components:

  1. self.encoder_embedding: Embedding layer for the source sequence.

  2. self.decoder_embedding: Embedding layer for the target sequence.

  3. self.positional_encoding: Positional encoding component.

  4. self.encoder_layers: A list of encoder layers.

  5. self.decoder_layers: A list of decoder layers.

  6. self.fc: Final fully connected (linear) layer mapping to target vocabulary size.

  7. self.dropout: Dropout layer.

def generate_mask(self, src, tgt):

This method is used to create masks for the source and target sequences, ensuring that padding tokens are ignored and that future tokens are not visible during training for the target sequence.

def forward(self, src, tgt):

This method defines the forward pass for the Transformer, taking source and target sequences and producing the output predictions.

  1. Input Embedding and Positional Encoding: The source and target sequences are first embedded using their respective embedding layers and then added to their positional encodings.

  2. Encoder Layers: The source sequence is passed through the encoder layers, with the final encoder output representing the processed source sequence.

  3. Decoder Layers: The target sequence and the encoder's output are passed through the decoder layers, resulting in the decoder's output.

  4. Final Linear Layer: The decoder's output is mapped to the target vocabulary size using a fully connected (linear) layer.

The final output is a tensor representing the model's predictions for the target sequence.

The Transformer class brings together the various components of a Transformer model, including the embeddings, positional encoding, encoder layers, and decoder layers. It provides a convenient interface for training and inference, encapsulating the complexities of multi-head attention, feed-forward networks, and layer normalization.

This implementation follows the standard Transformer architecture, making it suitable for sequence-to-sequence tasks like machine translation, text summarization, etc. The inclusion of masking ensures that the model adheres to the causal dependencies within sequences, ignoring padding tokens and preventing information leakage from future tokens.

These sequential steps empower the Transformer model to efficiently process input sequences and produce corresponding output sequences.
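
Putting it all together, here is a sketch of the Transformer class consistent with the walkthrough above. The exact mask construction (padding index 0 and the upper-triangular "no-peek" mask) is an assumption based on the described behavior:

```python
class Transformer(nn.Module):
    def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads,
                 num_layers, d_ff, max_seq_length, dropout):
        super().__init__()
        self.encoder_embedding = nn.Embedding(src_vocab_size, d_model)
        self.decoder_embedding = nn.Embedding(tgt_vocab_size, d_model)
        self.positional_encoding = PositionalEncoding(d_model, max_seq_length)
        self.encoder_layers = nn.ModuleList(
            [EncoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])
        self.decoder_layers = nn.ModuleList(
            [DecoderLayer(d_model, num_heads, d_ff, dropout) for _ in range(num_layers)])
        self.fc = nn.Linear(d_model, tgt_vocab_size)
        self.dropout = nn.Dropout(dropout)

    def generate_mask(self, src, tgt):
        # Hide padding tokens (assumed to be index 0) in both sequences
        src_mask = (src != 0).unsqueeze(1).unsqueeze(2)
        tgt_mask = (tgt != 0).unsqueeze(1).unsqueeze(3)
        # Causal mask so each target position only attends to earlier positions
        seq_length = tgt.size(1)
        nopeak_mask = (1 - torch.triu(torch.ones(1, seq_length, seq_length), diagonal=1)).bool()
        tgt_mask = tgt_mask & nopeak_mask
        return src_mask, tgt_mask

    def forward(self, src, tgt):
        src_mask, tgt_mask = self.generate_mask(src, tgt)
        # Embed tokens and add positional encodings
        src_embedded = self.dropout(self.positional_encoding(self.encoder_embedding(src)))
        tgt_embedded = self.dropout(self.positional_encoding(self.decoder_embedding(tgt)))

        enc_output = src_embedded
        for enc_layer in self.encoder_layers:
            enc_output = enc_layer(enc_output, src_mask)

        dec_output = tgt_embedded
        for dec_layer in self.decoder_layers:
            dec_output = dec_layer(dec_output, enc_output, src_mask, tgt_mask)

        # Map decoder output to target vocabulary logits
        return self.fc(dec_output)
```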

Training the PyTorch Transformer Model

Hyperparameters:

These values define the architecture and behavior of the transformer model:

  1. src_vocab_size, tgt_vocab_size: Vocabulary sizes for source and target sequences, both set to 5000.

  2. d_model: Dimensionality of the model's embeddings, set to 512.

  3. num_heads: Number of attention heads in the multi-head attention mechanism, set to 8.

  4. num_layers: Number of layers for both the encoder and the decoder, set to 6.

  5. d_ff: Dimensionality of the inner layer in the feed-forward network, set to 2048.

  6. max_seq_length: Maximum sequence length for positional encoding, set to 100.

  7. dropout: Dropout rate for regularization, set to 0.1.

Creating a Transformer Instance:

This line creates an instance of the Transformer class, initializing it with the given hyperparameters. The instance will have the architecture and behavior defined by these hyperparameters.
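
In code, the setup described above might look like this (assuming the Transformer class sketched earlier):

```python
src_vocab_size = 5000
tgt_vocab_size = 5000
d_model = 512
num_heads = 8
num_layers = 6
d_ff = 2048
max_seq_length = 100
dropout = 0.1

transformer = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads,
                          num_layers, d_ff, max_seq_length, dropout)
```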

Generating Random Sample Data:

The following lines generate random source and target sequences:

  1. src_data: Random integers between 1 and src_vocab_size, representing a batch of source sequences with shape (64, max_seq_length).

  2. tgt_data: Random integers between 1 and tgt_vocab_size, representing a batch of target sequences with shape (64, max_seq_length).

  3. These random sequences can be used as inputs to the transformer model, simulating a batch of data with 64 examples and sequences of length 100.
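
A sketch of that data generation (torch.randint draws integers in [1, vocab_size), leaving index 0 free for padding):

```python
# Batch of 64 random sequences of length max_seq_length
src_data = torch.randint(1, src_vocab_size, (64, max_seq_length))
tgt_data = torch.randint(1, tgt_vocab_size, (64, max_seq_length))
```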

The code snippet demonstrates how to initialize a transformer model and generate random source and target sequences that can be fed into the model. The chosen hyperparameters determine the specific structure and properties of the transformer. This setup could be part of a larger script where the model is trained and evaluated on actual sequence-to-sequence tasks, such as machine translation or text summarization.

Training the Model

Next, the model will be trained utilizing the aforementioned sample data. However, in a real-world scenario, a significantly larger dataset would be employed, which would typically be partitioned into distinct sets for training and validation purposes.

Sample data preparation

For illustrative purposes, a dummy dataset will be crafted in this example. However, in a practical scenario, a more substantial dataset would be employed, and the process would involve text preprocessing along with the creation of vocabulary mappings for both the source and target languages.

Loss Function and Optimizer:

  1. criterion = nn.CrossEntropyLoss(ignore_index=0): Defines the loss function as cross-entropy loss. The ignore_index argument is set to 0, meaning the loss will not consider targets with an index of 0 (typically reserved for padding tokens).

  2. optimizer = optim.Adam(...): Defines the optimizer as Adam with a learning rate of 0.0001 and specific beta values.

Model Training Mode:

  1. transformer.train(): Sets the transformer model to training mode, enabling behaviors like dropout that only apply during training.

The code snippet trains the model for 100 epochs using a typical training loop:

  1. for epoch in range(100): Iterates over 100 training epochs.

  2. optimizer.zero_grad(): Clears the gradients from the previous iteration.

  3. output = transformer(src_data, tgt_data[:, :-1]): Passes the source data and the target data (excluding the last token in each sequence) through the transformer. This is common in sequence-to-sequence tasks where the target is shifted by one token.

  4. loss = criterion(...): Computes the loss between the model's predictions and the target data (excluding the first token in each sequence). The loss is calculated by reshaping the data into one-dimensional tensors and using the cross-entropy loss function.

  5. loss.backward(): Computes the gradients of the loss with respect to the model's parameters.

  6. optimizer.step(): Updates the model's parameters using the computed gradients.

  7. print(f"Epoch: {epoch+1}, Loss: {loss.item()}"): Prints the current epoch number and the loss value for that epoch.
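
A sketch of that training setup and loop, consistent with the steps above; the Adam beta and epsilon values shown are the common Transformer settings and should be treated as an assumption here:

```python
criterion = nn.CrossEntropyLoss(ignore_index=0)
# betas/eps follow the usual "Attention is All You Need" settings (an assumption, not confirmed by these notes)
optimizer = optim.Adam(transformer.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9)

transformer.train()
for epoch in range(100):
    optimizer.zero_grad()
    # Teacher forcing: feed the target shifted right (drop the last token)
    output = transformer(src_data, tgt_data[:, :-1])
    # Compare against the target shifted left (drop the first token)
    loss = criterion(output.contiguous().view(-1, tgt_vocab_size),
                     tgt_data[:, 1:].contiguous().view(-1))
    loss.backward()
    optimizer.step()
    print(f"Epoch: {epoch+1}, Loss: {loss.item()}")
```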

This code snippet trains the transformer model on randomly generated source and target sequences for 100 epochs. It uses the Adam optimizer and the cross-entropy loss function. The loss is printed for each epoch, allowing you to monitor the training progress. In a real-world scenario, you would replace the random source and target sequences with actual data from your task, such as machine translation.

Conclusion

This step-by-step guide has hopefully helped you (and me) understand how to build and train a Transformer model for NLP.

PS. This is my personal note, so there is no discussion section here.

PS. The real reason there is no discussion here is that I don't want to maintain a forum.

PS. The figures on this page are from the original paper.