Chinese AI company DeepSeek on Sunday night released a research paper introducing a novel “conditional memory” architecture for large language models, and open-sourced a companion memory module named Engram.
The paper, titled “Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models,” was jointly authored by researchers from Peking University and DeepSeek. The author list includes Liang Wenfeng, a co-founder and key researcher at DeepSeek.
The study proposes adding a scalable lookup-based memory structure to language models, enabling what the authors call “conditional memory” — a mechanism that allows models to selectively retrieve and use relevant stored information during inference.
According to the paper, the approach delivers significant gains on knowledge retrieval, reasoning, programming and mathematics tasks at matched parameter counts and compute budgets. The authors frame it as a new axis of sparsity, complementary to established techniques such as mixture-of-experts and parameter pruning.
DeepSeek said the conditional memory mechanism allows models to dynamically access external memory rather than encoding all knowledge directly into parameters, improving both efficiency and generalization.
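To make the general idea concrete, below is a minimal, hypothetical sketch of what a lookup-based conditional memory layer could look like in PyTorch. It is not DeepSeek's Engram implementation, and every name and hyperparameter in it (ConditionalMemory, num_slots, top_k, the gating scheme) is an assumption made purely for illustration: a large table of memory slots is kept outside the dense weights, and each token retrieves only its top-k most relevant slots, which are then gated into the hidden state.

```python
# Illustrative sketch only -- NOT the Engram module. All names and choices
# here are assumptions for the sake of example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalMemory(nn.Module):
    def __init__(self, d_model: int, num_slots: int = 65536, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        # Keys are used only for lookup; values hold the retrievable content.
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.query_proj = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model)
        query = self.query_proj(hidden)                       # (B, T, D)
        scores = query @ self.keys.t()                        # (B, T, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)               # (B, T, k)
        retrieved = self.values[top_idx]                      # (B, T, k, D)
        memory_out = (weights.unsqueeze(-1) * retrieved).sum(dim=-2)
        # A learned gate controls how much retrieved memory is mixed back in.
        g = torch.sigmoid(self.gate(hidden))                  # (B, T, 1)
        return hidden + g * memory_out


if __name__ == "__main__":
    layer = ConditionalMemory(d_model=64, num_slots=1024, top_k=4)
    x = torch.randn(2, 8, 64)
    print(layer(x).shape)  # torch.Size([2, 8, 64])
```

The point of the sketch is the sparsity pattern: only top_k of the num_slots memory entries participate in any given token's computation, so the memory table can be scaled up without a proportional increase in per-token compute.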
Alongside the paper, DeepSeek open-sourced the corresponding memory module, Engram, allowing developers and researchers to experiment with and integrate the system into their own models.
The release comes as AI companies and research institutions increasingly explore memory-augmented architectures as a way to improve model capabilities without proportionally increasing model size and computational cost.
DeepSeek said it hopes the open-sourcing of Engram will accelerate research into scalable memory systems and help establish conditional memory as a practical component of next-generation large language models.

