Self-attention in LLMs, clearly explained:
Before we start, a quick primer on tokenization! Raw text → Tokenization → Embedding → Model. An embedding is a meaningful representation of each token (roughly a word) as a vector of numbers. This embedding is what we provide as input to our language models. Check this 👇
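The pipeline above can be sketched in a few lines. This is a minimal toy example: the word-level vocabulary, the 4-dimensional embedding size, and the random embedding table are all made up for illustration (real tokenizers use subwords and learned embeddings).

```python
# Toy pipeline: raw text -> token ids -> embeddings.
# Vocabulary and embedding size are invented for this example.
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2}              # toy word-level vocabulary
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 4))  # one 4-dim vector per token

text = "the cat sat"
token_ids = [vocab[w] for w in text.split()]        # tokenization step
embeddings = embedding_table[token_ids]             # embedding lookup

print(token_ids)         # [0, 1, 2]
print(embeddings.shape)  # (3, 4) -> one vector per token
```

Each row of `embeddings` is what the model actually sees in place of a word.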
The core idea of language modeling is to understand the structure and patterns within language. By modeling the relationships between words (tokens) in a sentence, we can capture the context and meaning of the text.
Now, self-attention is a communication mechanism that helps establish these relationships, expressed as probability scores. Each token assigns scores to the tokens it attends to (including itself) based on their relevance. You can think of it as a directed graph 👇
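To make "probability scores" concrete, here's a tiny sketch: one row of attention weights is just a softmax over raw relevance scores, and those weights can be read as the edge weights of a directed graph. The three tokens and the logit values are invented for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: turns raw scores into probabilities
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical raw relevance scores from the token "sat" to ["the", "cat", "sat"]
logits = np.array([0.5, 1.0, 2.0])
weights = softmax(logits)  # edge weights of the directed graph; they sum to 1

print(weights)
```

Every token gets such a row, so the full picture is a matrix where each row sums to 1.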
To understand how these probability/attention scores are obtained, we must understand 3 key terms:
- Query vector
- Key vector
- Value vector
These vectors are created by multiplying the input embedding by three trainable weight matrices. Check this out 👇
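A minimal sketch of those three projections, assuming an 8-dim embedding and a 4-dim head size (both chosen arbitrarily here). Each `Linear` layer without a bias is exactly a trainable weight matrix:

```python
import torch

torch.manual_seed(0)
d_model, d_k = 8, 4          # embedding size and head size (example values)
x = torch.randn(3, d_model)  # embeddings for 3 tokens

# The three trainable weight matrices
W_q = torch.nn.Linear(d_model, d_k, bias=False)
W_k = torch.nn.Linear(d_model, d_k, bias=False)
W_v = torch.nn.Linear(d_model, d_k, bias=False)

# One query, key, and value vector per token
q, k, v = W_q(x), W_k(x), W_v(x)
print(q.shape, k.shape, v.shape)  # each is (3, 4)
```

The same input embedding produces all three vectors; only the learned matrices differ.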
Now here's a broader picture of how input embeddings are combined with keys, queries & values to obtain the actual attention scores. After acquiring the keys, queries, and values, we merge them to create a new set of context-aware embeddings. Check this out 👇
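Here's how that combination looks as scaled dot-product attention, sketched with random stand-in vectors (3 tokens, head size 4, both arbitrary): queries are compared against keys, the scores are turned into probabilities, and those probabilities weight the values.

```python
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_k = 4
q = torch.randn(3, d_k)  # queries for 3 tokens (stand-ins for W_q @ x)
k = torch.randn(3, d_k)  # keys
v = torch.randn(3, d_k)  # values

scores = q @ k.T / math.sqrt(d_k)  # relevance of every token to every other
attn = F.softmax(scores, dim=-1)   # each row is a probability distribution
out = attn @ v                     # context-aware embeddings

print(out.shape)  # torch.Size([3, 4])
```

The division by `sqrt(d_k)` keeps the dot products from growing with the head size, which would otherwise push the softmax toward one-hot rows.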
Implementing self-attention using PyTorch doesn't get easier! It's very intuitive! 💡 Check this out 👇
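Putting the pieces together, one possible PyTorch sketch of a single self-attention head looks like this (class name and dimensions are my own choices, not a library API):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """A single self-attention head (illustrative sketch, not optimized)."""

    def __init__(self, d_model: int, d_k: int):
        super().__init__()
        self.d_k = d_k
        self.W_q = nn.Linear(d_model, d_k, bias=False)
        self.W_k = nn.Linear(d_model, d_k, bias=False)
        self.W_v = nn.Linear(d_model, d_k, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        attn = F.softmax(scores, dim=-1)  # (batch, seq_len, seq_len)
        return attn @ v                   # context-aware output, (batch, seq_len, d_k)

torch.manual_seed(0)
x = torch.randn(1, 5, 8)  # batch of 1, 5 tokens, 8-dim embeddings
out = SelfAttention(d_model=8, d_k=4)(x)
print(out.shape)          # torch.Size([1, 5, 4])
```

Real decoder-style models add a causal mask before the softmax so tokens can't attend to future positions, but the core computation is exactly this.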
I'll leave you with this visual, which intuitively explains self-attention as a communication mechanism between tokens. This communication can be represented by a directed graph 👇
If you found this insightful, reshare it with your network. Find me → @akshay_pachaar ✔️ For more insights and tutorials on LLMs, AI Agents, and Machine Learning!