Positional Encoding

Feb 10, 2026 · 1 min read

  • deep-learning
  • transformers
  • positional-encoding

← Back to Transformers

Positional encoding injects sequence-order information into the model, since self-attention on its own is permutation-invariant. The original Transformer adds fixed sinusoidal encodings to the token embeddings; modern models often use learned positional embeddings, rotary positional embeddings (RoPE), or ALiBi instead.
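
A minimal sketch of the sinusoidal variant, assuming an even d_model and using NumPy; the function name sinusoidal_positional_encoding is illustrative, not from a specific library:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Build the (seq_len, d_model) sinusoidal encoding from the original Transformer.

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model / 2)
    angles = positions / np.power(10000.0, dims / d_model)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

# Usage sketch: the encoding is added to token embeddings before the first layer.
# embeddings: (seq_len, d_model) array of token embeddings
# x = embeddings + sinusoidal_positional_encoding(seq_len, d_model)
```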

Related

  • Self-Attention (position-agnostic without encoding)
  • Encoder-Decoder Architecture (uses positional encoding)



