Tencent Hunyuan Open-Sources Its First Hybrid MoE Model with Accelerated Inference

Jun 30, 2025, 4:04 a.m. ET

AsianFin — Tencent’s Hunyuan team has officially open-sourced its first hybrid inference Mixture-of-Experts (MoE) model, Hunyuan-A13B.

The model has 80 billion parameters in total, with only 13 billion active during inference. It delivers performance on par with leading open-source models of comparable architecture while offering faster inference and greater cost-efficiency.

Hunyuan-A13B is the industry's first open-source hybrid inference MoE model in the 13B-active-parameter class. It is now available on platforms including ModelScope, GitHub, and Hugging Face. Tencent Cloud has also launched API access for the model, enabling developers to integrate and deploy it quickly.
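For developers pulling the weights directly from Hugging Face, a minimal sketch of loading the model with the transformers library might look like the following. The repository id "tencent/Hunyuan-A13B-Instruct" and the generation settings are assumptions rather than details from this article; consult the official model card for exact usage.

```python
# A minimal sketch of loading Hunyuan-A13B from Hugging Face and running one
# chat turn with the `transformers` library. The repo id and settings below
# are assumptions; check the official model card before relying on them.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-A13B-Instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # spread the 80B total parameters across available GPUs
    torch_dtype="auto",
    trust_remote_code=True,  # MoE architectures often ship custom modeling code
)

messages = [{"role": "user", "content": "Explain what a Mixture-of-Experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```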
