AsianFin — Huawei has officially open-sourced its 7-billion-parameter dense model and the 72-billion-parameter Pangu Pro MoE (Mixture-of-Experts) model, along with model inference technology based on its Ascend AI infrastructure.
Key updates include:
- The model weights and core inference code for the Pangu Pro MoE 72B model are now available on open-source platforms.
- Huawei has also released large-scale MoE inference code optimized for its Ascend chip architecture.
- The weights and inference code for the Pangu 7B dense model will be made available shortly.
This move signals Huawei’s deepening commitment to open AI ecosystems, particularly in advancing large-scale model performance through domestic computing frameworks.