Published on February 9, 2026 | Views: 1733 | 40 min read
Understanding DeepSeek's Multi-Head Latent Attention (MLA)
Tags: llm, attention, transformers, deepseek, mla, kv-cache, inference
On bottlenecks in attention, KV caching, long-context decoding, attention variants, and how DeepSeek MLA came to be. Part 1 of the FlashMLA blog series.
Published on May 23, 2025 | Views: 1148 | 35 min read
Data Quality Is All You Need?
Tags: llm, pretraining, midtraining, posttraining, data-quality, synthetic-data, dpo
Notes on Microsoft's phi-4 data pipeline for pre-training, "mid-training", supervised fine-tuning, and preference optimization.