
Friday, 20 March 2026

Rotary Position Embeddings for Long Context Length

 

Rotary Position Embeddings (RoPE) is a technique for encoding token positions in a sequence. It is widely used in many models and works well for standard context lengths. However, it requires adaptation for longer contexts. In this article, you will learn how RoPE is adapted for long context length.

Let’s get started.

Photo by Nastya Dulhiier. Some rights reserved.

Overview

This article is divided into two parts; they are:

  • Simple RoPE
  • RoPE for Long Context Length

Simple RoPE

Compared to the sinusoidal position embeddings in the original Transformer paper, RoPE transforms the input tensor by rotating pairs of its elements with a rotation matrix:

$$
\begin{aligned}
X_{n,i} &= X_{n,i}\cos(n\theta_i) - X_{n,\frac{d}{2}+i}\sin(n\theta_i) \\
X_{n,\frac{d}{2}+i} &= X_{n,i}\sin(n\theta_i) + X_{n,\frac{d}{2}+i}\cos(n\theta_i)
\end{aligned}
$$

where $X_{n,i}$ is the $i$-th element of the vector at the $n$-th position of the sequence in tensor $X$. The length of each vector (also known as the hidden size or the model dimension) is $d$. The quantity $\theta_i$ is the frequency of the $i$-th element of the vector. It is computed as:

$$
\theta_i = \frac{1}{N^{2i/d}}
$$

where $N$ is the RoPE base (10000 in the original RoPE paper).

A simple implementation of RoPE looks like this:
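Below is a minimal sketch in PyTorch; the function name rope and the (seq_len, dim) tensor layout are illustrative assumptions rather than the article's exact listing:

```python
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embeddings to x of shape (seq_len, dim)."""
    seq_len, dim = x.shape
    # inv_freq holds theta_i = 1 / base^(2i/d), one value per element pair
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # angle n * theta_i for every position n and every frequency
    angles = torch.outer(torch.arange(seq_len).float(), inv_freq)
    cos, sin = torch.cos(angles), torch.sin(angles)
    # rotate each pair (x[i], x[d/2 + i]) by n * theta_i
    x1, x2 = x[:, : dim // 2], x[:, dim // 2 :]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

Because each pair of elements is only rotated, the norm of every vector is preserved, and the vector at position 0 is left unchanged (all angles are zero there).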

The code above defines a tensor inv_freq as the inverse frequency of the RoPE, corresponding to the frequency term $\theta_i$ in the formula. It is called the inverse frequency in the RoPE literature because it is inversely proportional to the wavelength (i.e., the maximum distance) that RoPE can capture.

When you take the dot product of two vectors from positions $p$ and $q$, as you would in scaled dot-product attention, you find that the result depends only on the relative position $p - q$, due to the trigonometric identities:

$$
\begin{aligned}
\cos(a-b) &= \cos(a)\cos(b) + \sin(a)\sin(b) \\
\sin(a-b) &= \sin(a)\cos(b) - \cos(a)\sin(b)
\end{aligned}
$$
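You can verify this property numerically. The helper rotate below (an illustrative sketch, not code from the article) applies the RoPE rotation to a single vector as if it sat at position n; two pairs of vectors separated by the same relative distance then produce the same dot product:

```python
import torch

def rotate(v: torch.Tensor, n: int, base: float = 10000.0) -> torch.Tensor:
    """Rotate a single vector v as if it sat at position n."""
    dim = v.numel()
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    ang = n * inv_freq
    c, s = torch.cos(ang), torch.sin(ang)
    v1, v2 = v[: dim // 2], v[dim // 2 :]
    return torch.cat([v1 * c - v2 * s, v1 * s + v2 * c])

torch.manual_seed(0)
q, k = torch.randn(8), torch.randn(8)
# Both pairs of positions are 3 apart, so the dot products agree
a = torch.dot(rotate(q, 5), rotate(k, 2))
b = torch.dot(rotate(q, 105), rotate(k, 102))
print(torch.allclose(a, b, atol=1e-4))
```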

In language models, relative position typically matters more than absolute position. Therefore, RoPE is often preferable to the original sinusoidal positional embeddings.

RoPE for Long Context Length

The functions $\sin(kx)$ and $\cos(kx)$ are periodic with period $2\pi/k$. In RoPE, the term $\theta_i$ is called the frequency term because it determines this periodicity. In a language model, the high-frequency terms are important because they help the model relate nearby words in a sentence. The low-frequency terms, on the other hand, are useful for understanding context that spans multiple sentences.

Therefore, when you design a model with a long context length, you want it to keep performing well on short-range dependencies, since those are the most common, while still handling the long contexts the model is supposed to support. You do not want RoPE to treat every distance equally.

The strategy is to reallocate the frequency budget: apply a scaling factor where it improves long-range stability (the low-frequency sine and cosine components) while leaving the frequencies untouched where precise local position information matters (the high-frequency components).

In Llama versions 1 and 2, RoPE is implemented with a maximum length of 4096, similar to the previous section. In Llama 3.1, the model's context length is expanded to 131K (131,072) tokens, while the RoPE scaling is computed against an original base length of 8192 tokens. The implementation is as follows:
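A sketch of such a constructor in PyTorch is shown below. The hyperparameters follow the published Llama 3.1 configuration (base $N$ = 500000, scale factor 8, low/high frequency factors 1 and 4, original context of 8192 tokens), but the class name RotaryPositionEncoding and the exact structure are assumptions for illustration:

```python
import math
import torch

class RotaryPositionEncoding(torch.nn.Module):
    """RoPE inverse frequencies with Llama 3.1-style long-context scaling."""

    def __init__(self, dim, base=500000.0, scale_factor=8.0,
                 low_freq_factor=1.0, high_freq_factor=4.0,
                 original_max_len=8192):
        super().__init__()
        # plain RoPE frequencies: theta_i = 1 / base^(2i/d)
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        wavelen = 2 * math.pi / inv_freq
        low_wavelen = original_max_len / low_freq_factor    # beyond this: scale down
        high_wavelen = original_max_len / high_freq_factor  # below this: keep as-is
        # long wavelengths (low frequencies) are divided by scale_factor
        scaled = torch.where(wavelen > low_wavelen,
                             inv_freq / scale_factor, inv_freq)
        # smooth interpolation between the unscaled and fully scaled regimes
        smooth = (original_max_len / wavelen - low_freq_factor) / (
            high_freq_factor - low_freq_factor)
        mid = ((1 - smooth) / scale_factor + smooth) * inv_freq
        is_mid = (wavelen <= low_wavelen) & (wavelen >= high_wavelen)
        self.register_buffer("inv_freq", torch.where(is_mid, mid, scaled))
```

Only the inverse frequencies change; the rotation itself is applied exactly as in the simple implementation.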

The constructor of the RotaryPositionEncoding class uses a more sophisticated algorithm to compute the inv_freq tensor. The idea is to compute a wavelength for each frequency component, representing the maximum distance between two tokens that the corresponding RoPE component can capture. If the wavelength is too short (i.e., the frequency is high), the frequency remains unchanged. However, if the wavelength is too long, the frequency is reduced by the scale_factor, effectively increasing the maximum distance that the RoPE component can capture. To avoid an abrupt transition, frequency components between the low- and high-frequency thresholds are smoothly interpolated between the unscaled and fully scaled values.

To illustrate the effect of scaling, you can plot the resulting inverse frequency with Matplotlib:
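One way to produce such a plot (a sketch assuming the Llama 3.1 hyperparameters: dim = 128, base = 500000, scale factor 8, frequency factors 1 and 4, and an original length of 8192) is:

```python
import math
import matplotlib.pyplot as plt
import torch

dim, base, scale_factor = 128, 500000.0, 8.0
low_freq_factor, high_freq_factor, original_max_len = 1.0, 4.0, 8192

# original RoPE inverse frequencies
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
wavelen = 2 * math.pi / inv_freq
# divide the low frequencies (long wavelengths) by the scale factor
scaled = torch.where(wavelen > original_max_len / low_freq_factor,
                     inv_freq / scale_factor, inv_freq)
# smooth interpolation between the two regimes
smooth = (original_max_len / wavelen - low_freq_factor) / (
    high_freq_factor - low_freq_factor)
mid = ((1 - smooth) / scale_factor + smooth) * inv_freq
is_mid = (wavelen <= original_max_len / low_freq_factor) & \
         (wavelen >= original_max_len / high_freq_factor)
new_inv_freq = torch.where(is_mid, mid, scaled)

plt.semilogy(inv_freq, label="original")
plt.semilogy(new_inv_freq, label="scaled")
plt.xlabel("dimension pair index")
plt.ylabel("inverse frequency (log scale)")
plt.legend()
plt.savefig("rope_scaling.png")
```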

The plot is shown below:

Plot of inverse frequency before and after RoPE scaling

You can see that the original RoPE frequency is preserved until the wavelength is approximately 2000 tokens (at an inverse frequency of around 0.003), after which it is gradually scaled. The wavelength is scaled by a factor of 8 when it exceeds 9000 tokens (i.e., the inverse frequency is below 6e-4).

From the x-axis of the plot, you can see that around 60% of the dimensions capture dependencies within 2000 tokens, while the rest capture distances up to 60000 tokens ($2\pi N$ exactly; a larger $N$ enables the model to support longer context lengths).

This effectively provides higher resolution for RoPE at short distances and lower resolution at long distances, consistent with how language models should behave when understanding language.


Summary

In this article, you learned how RoPE is adapted for long context length. Specifically, you learned how Llama 3 supports longer context lengths by scaling the RoPE frequency at the low-frequency end.
