AsianFin -- Intellifusion, one of China’s earliest AI chip developers, is making a strategic pivot toward AI model inference—betting that the era of training dominance is giving way to inference-led growth in computing demand.
The Shenzhen-based firm, listed on Shanghai’s STAR Market (688343.SH), unveiled its latest suite of inference-focused products on July 25, ahead of the 2025 World Artificial Intelligence Conference. Among them: the DeepQiong X6000 Mesh inference accelerator card, boasting 256 TOPS of compute and optimized for high-throughput workloads such as decoding 256 video streams in real time and supporting large models with hundreds of billions of parameters.
Intellifusion’s new all-in-one servers—Shenmu 6203 (2U), Tianzhou 6408 (4U), and Tianzhou 680G (8U)—extend this performance into data centers and edge environments, delivering up to 4 PFLOPS of inference capacity. CEO Chen Ning says these products mark a turning point for the company, which is now “fully committed” to inference computing chips after 11 years of neural processing unit (NPU) development.
“2025 will be a defining year for AI. Large models are maturing, costs are falling, and inference is about to outpace training in both growth and application,” Chen told TMTPost.
AI development is typically divided into two stages: training, which demands massive datasets and compute, and inference, where trained models are deployed to solve real-world problems. As AI adoption broadens—from chatbots to autonomous vehicles—cloud-based inference is quickly taking center stage.
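To make the distinction concrete, here is a minimal, generic sketch in PyTorch (illustrative only, not Intellifusion’s software stack): training loops over labeled data with costly backward passes, while inference is a single gradient-free forward pass, which is why it rewards the throughput- and latency-oriented silicon described below.

```python
import torch
import torch.nn as nn

# A toy network standing in for a trained model (illustrative only).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: repeated forward + backward passes over labeled data.
for _ in range(3):
    x = torch.randn(32, 128)         # a batch of inputs
    y = torch.randint(0, 10, (32,))  # labels
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()                  # gradient computation dominates cost
    optimizer.step()

# Inference: one forward pass, no gradients, latency- and cost-sensitive.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 128)).argmax(dim=1)
```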
According to IDC, cloud-based inference accounted for 58.5% of AI computing power in 2022 and is projected to hit 62.2% by 2026. AMD CEO Lisa Su forecasts AI inference compute demand will grow over 80% annually—potentially surpassing training as the primary driver for data center expansion.
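For scale, compounding at Su’s forecast rate implies roughly a nineteen-fold increase in inference demand over five years, a back-of-the-envelope illustration rather than a formal projection:

```python
# Back-of-the-envelope compounding of an 80% annual growth rate.
rate = 0.80
for year in range(1, 6):
    print(f"Year {year}: {(1 + rate) ** year:.1f}x today's inference demand")
# Year 5 works out to about 18.9x, consistent with inference
# overtaking training as the main driver of data center expansion.
```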
“The inference chip market remains a blue ocean,” Chen said. “While the training chip sector is worth hundreds of billions, inference is just beginning. We believe it will outpace training within five years.”
At the heart of Intellifusion’s new offerings is the DeepQiong X6000 Mesh accelerator card, powered by the firm’s self-developed fourth-generation NPU optimized for Transformer-based models. The card uses a D2D (die-to-die) Chiplet design and C2C (chip-to-chip) mesh architecture—an innovation in China’s AI chip ecosystem. Intellifusion claims it is the first company to mass-produce such chips using fully domestic fabrication and packaging processes.
Complementing the chip, Intellifusion is rolling out inference servers and integrated machines for data centers and smart city deployments. Customers include municipal computing centers, telecom carriers, research institutes, and major Chinese internet firms.
“The DeepSeek all-in-one machines break the ‘last mile’ in closed-loop AI deployment,” Chen said, adding that the cooling AI hype is not a retreat but a rational shift toward real-world use cases.
Intellifusion’s shift is already showing results. The company reported 2024 revenue of more than 900 million yuan ($124 million), up 81.3% year-on-year. Q1 2025 revenue surged 168.2% to 264 million yuan, a record for the period.
A deal with Deyuan Fanghui to provide 4,000 PFLOPS of inference compute over three years is expected to contribute 1.6 billion yuan in revenue. Payments began in early 2025, with roughly 200 million yuan booked in the first half.
On the consumer side, Intellifusion is seeing strong uptake of its Qiancheng AI technologies in wearables, supplying Huawei, Honor, and OPPO, while its “Dr. Luka” hardware line continues to gain traction. The company expects 50%+ growth in its consumer business in H1 2025.
Looking ahead, Intellifusion is preparing to launch its next-generation inference chip architecture—“Computing Power Building Blocks 2.0”—by late 2026, featuring:
Nova500 NPU: Native FP8/FP4, custom operators for large models, 5× compute efficiency, 3× energy efficiency.
3D Hybrid Bonded Memory: 10× bandwidth and memory efficiency.
NB-Mesh interconnect: Full-mesh topology with native all-reduce and memory-semantic access (see the sketch after this list).
Advanced packaging: Heterogeneous dies with UCIe D2D chiplets.
NB-Link: PCIe interface with CPU-NPU shared memory access.
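What a full-mesh all-reduce buys is direct any-to-any links between chips, so partial results can be summed and shared across all NPUs without routing through a host CPU. Here is a minimal Python sketch of the reduce-then-broadcast pattern, a generic illustration of the concept rather than Intellifusion’s actual protocol:

```python
import numpy as np

def all_reduce_sum(partials):
    """All-reduce: every chip ends up holding the elementwise sum of
    every chip's partial result. On a full mesh with direct any-to-any
    links, each chip can gather its peers' buffers in a single hop
    instead of relaying through intermediaries or a host."""
    total = np.sum(partials, axis=0)          # reduce step
    return [total.copy() for _ in partials]   # broadcast step

# Four simulated NPUs, each holding a shard of a partial activation.
chips = [np.random.rand(8) for _ in range(4)]
synced = all_reduce_sum(chips)
assert all(np.allclose(s, synced[0]) for s in synced)
```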
CTO Li Aijun says the upgrades will support embedded, edge, and cloud inference for mixture-of-experts (MoE) models and edge-scale large models.
Founded in 2014, Intellifusion has invested heavily in edge computing chips and has already shipped five generations of NPUs. In 2023, it launched its DeepEdge10 platform, targeting scenarios from IoT to intelligent computing centers.
Now, the company is placing its biggest bet yet on inference.
“Most inventions in the U.S. stay in labs,” said Chen. “But in China, the value is in large-scale implementation. AI inference chips will become the core infrastructure enabling AI to reshape all hardware—from glasses to robots—over the next five years.”
Chen believes that by linking data, algorithms, and chip development through China’s vast application scenarios, Intellifusion can drive a “data flywheel” of continuous innovation. He sees AI inference chips as China’s opportunity to gain a foothold in the Fourth Industrial Revolution.
“Our biggest asset isn’t chips. It’s our team,” he said. “With the right DNA, we’ll overcome challenges—from supply chains to ecosystems—and continue building a globally competitive inference chip company.”