Published: June 5, 2025

Self-attention in LLMs, clearly explained:

Before we start, a quick primer on tokenization! Raw text β†’ Tokenization β†’ Embedding β†’ Model. An embedding is a meaningful representation of each token (roughly a word) as a vector of numbers. These embeddings are what we feed as input to our language models. Check this πŸ‘‡
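The pipeline above can be sketched in a few lines of PyTorch. The tiny vocabulary and embedding size here are made-up illustrations, not anything from the thread:

```python
# Sketch of the raw text -> tokens -> embeddings pipeline.
import torch

vocab = {"the": 0, "cat": 1, "sat": 2}   # toy vocabulary (assumption)
embed_dim = 4                            # tiny embedding size (assumption)

# A trainable lookup table: one vector of numbers per token ID.
embedding = torch.nn.Embedding(num_embeddings=len(vocab), embedding_dim=embed_dim)

text = "the cat sat"
token_ids = torch.tensor([vocab[w] for w in text.split()])  # tokenization
token_embeddings = embedding(token_ids)                     # embedding lookup

print(token_ids.shape)         # torch.Size([3])
print(token_embeddings.shape)  # torch.Size([3, 4])
```

Real models use subword tokenizers and much larger vocabularies, but the shape of the pipeline is the same.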


The core idea of language modelling is to understand the structure and patterns within language. By modeling the relationships between words (tokens) in a sentence, we can capture the context and meaning of the text.


Now, self-attention is a communication mechanism that helps establish these relationships, expressed as probability scores. Each token assigns the highest score to itself and additional scores to other tokens based on their relevance. You can think of it as a directed graph πŸ‘‡
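To make "probability scores" concrete, here is a toy 3-token example (the numbers are invented for illustration). Each row is one token's distribution over the tokens it attends to, and the largest weight sits on the diagonal, i.e. on the token itself:

```python
# Toy attention scores for 3 tokens (numbers are assumptions).
import torch
import torch.nn.functional as F

raw_scores = torch.tensor([[2.0, 0.5, 0.1],
                           [0.3, 1.8, 0.6],
                           [0.2, 0.7, 2.5]])

# Softmax turns each row into a probability distribution.
attn = F.softmax(raw_scores, dim=-1)
print(attn.sum(dim=-1))  # tensor([1., 1., 1.]) -- each row sums to 1
```

Each nonzero entry `attn[i, j]` is an edge in the directed graph: how much token `i` listens to token `j`.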


To understand how these probability/attention scores are obtained, we must understand three key terms:

- Query vector
- Key vector
- Value vector

These vectors are created by multiplying the input embedding by three trainable weight matrices. Check this out πŸ‘‡


Now here's a broader picture of how input embeddings are combined with keys, queries, and values to obtain the actual attention scores. After acquiring the keys, queries, and values, we merge them to create a new set of context-aware embeddings. Check this out πŸ‘‡
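That combination is scaled dot-product attention. A sketch with illustrative dimensions:

```python
# Scaled dot-product attention: merge Q, K, V into context-aware embeddings.
import math
import torch
import torch.nn.functional as F

seq_len, d_k = 3, 8                    # illustrative sizes (assumption)
Q = torch.randn(seq_len, d_k)
K = torch.randn(seq_len, d_k)
V = torch.randn(seq_len, d_k)

scores = Q @ K.T / math.sqrt(d_k)      # how well each query matches each key
attn = F.softmax(scores, dim=-1)       # probability scores per token
context = attn @ V                     # weighted sum of values

print(context.shape)  # torch.Size([3, 8]) -- one context-aware vector per token
```

The `1/sqrt(d_k)` scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into near-one-hot territory.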


Implementing self-attention in PyTorch doesn't get easier! πŸš€ It's very intuitive! πŸ’‘ Check this out πŸ‘‡
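A minimal single-head self-attention module along the lines of the thread's screenshot (the exact code in the image isn't available, so this is a sketch of the standard pattern):

```python
# Single-head self-attention as a small PyTorch module.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        self.query = nn.Linear(embed_dim, embed_dim, bias=False)
        self.key   = nn.Linear(embed_dim, embed_dim, bias=False)
        self.value = nn.Linear(embed_dim, embed_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        Q, K, V = self.query(x), self.key(x), self.value(x)
        scores = Q @ K.transpose(-2, -1) / math.sqrt(x.size(-1))
        attn = F.softmax(scores, dim=-1)   # probability scores per token
        return attn @ V                    # context-aware embeddings

x = torch.randn(2, 5, 16)           # batch of 2 sequences, 5 tokens each
out = SelfAttention(16)(x)
print(out.shape)  # torch.Size([2, 5, 16])
```

Production models add multiple heads, a causal mask, dropout, and an output projection, but this is the core computation.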


I'll leave you with this visual, which intuitively explains self-attention as a communication mechanism between tokens. This communication can be represented by a directed graph πŸ‘‡


If you found this insightful, reshare it with your network. Find me β†’ @akshay_pachaar βœ”οΈ for more insights and tutorials on LLMs, AI Agents, and Machine Learning!
