AsianFin — Tencent’s Hunyuan team has officially open-sourced its first hybrid inference Mixture-of-Experts (MoE) model, Hunyuan-A13B.
The model features 80 billion total parameters, with only 13 billion active during inference. It delivers performance on par with leading open-source models of comparable architecture while offering faster inference and higher cost-efficiency.
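The gap between 80 billion total and 13 billion active parameters reflects how sparse MoE layers work: a learned router selects a small subset of experts per token, so most of the weights sit idle on any given forward pass. The toy layer below is a generic illustration of this mechanism, not Tencent's actual architecture; the dimensions, expert count, and top-k value are all made-up placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: a router picks the top-k experts per
    token, so only a fraction of the total parameters run on each pass."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)           # routing probabilities
        weights, idx = gate.topk(self.top_k, dim=-1)       # keep only top-k experts
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():  # run expert e only on the tokens routed to it
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

layer = MoELayer()
tokens = torch.randn(4, 64)
print(layer(tokens).shape)  # torch.Size([4, 64]); only 2 of 8 experts run per token
```

In this setup every token's compute cost is bounded by the two chosen experts, which is why an MoE model can carry far more total capacity than a dense model with the same per-token inference cost.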
Hunyuan-A13B is the first open-source 13B-class hybrid inference MoE model in the industry. It is now available on open-source platforms such as ModelScope, GitHub, and Hugging Face. Tencent Cloud has also launched API access for the model, enabling developers to quickly integrate and deploy it.
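Since the weights are published on Hugging Face, one quick way to try the model is the standard `transformers` loading path. This is a minimal sketch, not official Tencent documentation: the repo ID `tencent/Hunyuan-A13B-Instruct` is an assumption based on the announcement, so check the model card for the exact name, license terms, and hardware requirements.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-A13B-Instruct"  # assumed repo ID; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",      # shard across available GPUs
    torch_dtype="auto",     # use the dtype stored in the checkpoint
    trust_remote_code=True, # custom MoE architectures often ship their own code
)

inputs = tokenizer("Explain Mixture-of-Experts in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```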