Zhipu AI Releases and Open-Sources Multimodal Reasoning Model GLM-4.1V-Thinking

Jul 02, 2025, 3:22 a.m. ET

AsianFin — Chinese AI startup Zhipu AI has officially released and open-sourced its latest multimodal vision-language model, GLM-4.1V-Thinking, the company announced Tuesday.

Designed for complex cognitive tasks, the general-purpose reasoning model supports multimodal inputs including images, video, and documents. Alongside the model launch, Zhipu unveiled a new ecosystem platform called "Agent Application Space" and kicked off its "Agents Pioneer Program," pledging hundreds of millions of yuan to support AI agent startups.