Positional Encoding Visualizer
Exploring how transformers keep track of word order through interactive visualization.
May 08, 2025
Visualizer
For a full-screen version, click here.
Concept
While trying to better understand positional encoding in Transformers, I wanted to visualize it, so I decided to vibe-code a tool. I ended up zero-shotting it with Gemini, and the result is beautiful.
Beyond being visually interesting, I find it artistically meaningful: it’s like Gemini creating a tool to examine and understand a fundamental aspect of its own architecture. There’s a beautiful recursion in an AI system helping visualize the mathematical structures that make its own existence possible.
Prompt Used
write a beautiful, modern web app that visualize positional encoding
allow users to change, with UI elements: max_len, d_model
plot the dimensional embedding in a dynamic, easy to understand, elegant way, with animation.
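For reference, the snippet below is the standard sinusoidal encoding from “Attention Is All You Need”: each dimension pair (2i, 2i+1) traces sin(pos / 10000^(2i/d_model)) and its matching cosine, so the wavelengths grow geometrically as i increases.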
import math

import matplotlib.pyplot as plt
import seaborn as sns
import torch

max_len = 1000
d_model = 64

# Sinusoidal positional encoding: even dimensions get sine, odd get cosine,
# with frequencies decreasing geometrically across dimension pairs.
pe = torch.zeros(max_len, d_model)
for pos in range(max_len):
    for i in range(d_model // 2):
        pe[pos, 2 * i] = math.sin(pos / 10000 ** (2 * i / d_model))
        pe[pos, 2 * i + 1] = math.cos(pos / 10000 ** (2 * i / d_model))

# Plot each embedding dimension as a curve over positions.
for dim in range(d_model):
    sns.lineplot(data=pe[:, dim].numpy())
plt.show()
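The nested loops are fine for a one-off plot, but the whole table can also be built in one shot with broadcasting. Here is a minimal vectorized sketch (the variable names are my own, not from the app):

# Vectorized equivalent of the loop above: build the full
# (max_len, d_model) table at once instead of entry by entry.
import torch

max_len, d_model = 1000, 64

position = torch.arange(max_len).unsqueeze(1)                  # (max_len, 1)
div_term = 10000 ** (torch.arange(0, d_model, 2) / d_model)    # (d_model/2,)

pe = torch.zeros(max_len, d_model)
pe[:, 0::2] = torch.sin(position / div_term)  # even dimensions
pe[:, 1::2] = torch.cos(position / div_term)  # odd dimensions

Many reference implementations instead compute a reciprocal div_term via torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)) and multiply rather than divide; the result is the same, just a little friendlier numerically.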