Anthropic’s Claude Model Sparks Controversy as AI Firms Race to Monetize Programming Capabilities

By xinyue | Jan 13, 2026, 2:11 a.m. ET

Industry experts say the Claude Code incident highlights the broader competition among AI model providers over commercial monetization.

Anthropic, the U.S.-based AI startup founded by former OpenAI researchers, has found itself at the center of a heated debate in the AI programming community after abruptly restricting external access to Claude Code, its flagship agentic coding tool, just as the company prepares for rapid revenue growth and a major funding round.

According to sources cited by The Wall Street Journal on Jan. 8, Anthropic is planning to raise $10 billion at a pre-money valuation of $350 billion—nearly double its valuation from just four months ago. The company expects its annualized revenue for 2026 to nearly triple year-on-year, reaching $26 billion, fueled by enterprise clients ramping up AI-related procurement and investments.

However, just a day after the fundraising report, on Jan. 9, developers reported that Claude Code, Anthropic's agentic coding tool, had suddenly been blocked from external calls via third-party platforms such as Cursor and OpenCode. Users encountered error messages indicating that certain tool permissions were exclusive to the official Claude Code client, effectively cutting off prior access.

The decision immediately ignited discussions on social media and programming forums, with posts trending on Hacker News. Many developers criticized the sudden cutoff, noting that individual subscriptions to Claude Code cost $200 per month, while enterprise users who relied on API access often paid more than $1,000 for equivalent usage. Some third-party platforms had previously allowed enterprise users to “jailbreak” the system to complete high-volume workloads at lower costs, a practice now blocked.

OpenCode founder Dax Raad responded that the company would investigate the issue and clarify the situation for affected users. Thariq Shihipar, a founding member of the Claude Code team and lead of the Claude Agent SDK at Anthropic, confirmed the blocking measures and said that accounts banned due to misunderstandings had been reinstated. Shihipar added that the company had strengthened security measures to prevent similar problems and promised clearer terms of service for future subscribers.

The restrictions sparked renewed debate over AI service terms, with critics arguing that security and subscription rules were being enforced inconsistently. Shihipar likened the situation to a gym: “Gym members can choose how long they want to work out, but they can’t turn the gym into a temporary residence or abuse the equipment in violation of the rules.” He added that monitoring third-party tools was challenging but necessary to ensure operational stability.

The controversy quickly drew attention from Elon Musk’s AI venture, xAI. Tech journalist Kylie Robison reported on Jan. 10 that xAI employees had been using Claude Code via Cursor before the service was cut off. In an internal email, xAI co-founder Tony Wu described the move as “a new strategy Anthropic has launched against all major competitors,” noting that it would reduce productivity but encourage xAI to develop its own coding models and products.

The debate intensified when Nikita Bier, product lead at X (formerly Twitter), called for Anthropic’s removal from the platform. Observers noted that similar restrictions had occurred before: in 2025, Anthropic cut off coding startup Windsurf’s access to Claude amid rumors of an acquisition by OpenAI, and later revoked OpenAI’s own API access.

Despite the tensions, other AI companies quickly sought to provide alternatives. OpenAI engineer Thibault Sottiaux expressed support for OpenCode, announcing that subscribers could access OpenAI’s Codex model through the platform. GitHub senior vice president Jared Palmer also endorsed the move. OpenCode subsequently released version 1.1.11, expanding access to ChatGPT Plus and Pro subscribers and adding support for open-source models from MiniMax and Zhipu, two recently listed Chinese AI companies.

For xAI, Wu characterized Anthropic’s decision as “both good news and bad news,” noting that the incident coincided with Musk’s announcement that Grok Code would soon receive a major update, along with a new “ambient programming” product, Grok Build. Analysts see Anthropic’s actions as part of a broader strategy to protect core intellectual property and maintain user lock-in amid intensifying competition in AI programming.

The Claude Code incident, industry experts say, underscores intensifying competition among AI model providers over commercial monetization. Anthropic’s strategy appears aimed at protecting its programming model advantage and ensuring enterprise clients remain within its ecosystem. Yet rivals such as xAI, OpenAI, and Google’s Gemini team are simultaneously enhancing their coding capabilities, with new model versions expected in 2026.

The Information reported that DeepSeek, a startup expected to shake up the AI coding sector, may launch a competing model around the Chinese New Year, with preliminary tests already surpassing Claude’s performance. Analysts note that in the race to develop better models, any lead is often short-lived. Anthropic must balance protecting its assets with maintaining goodwill among subscribers, while capitalizing on the enterprise AI boom.

Anthropic is also accelerating its enterprise ambitions beyond programming. On Jan. 12, the company announced Claude for Healthcare, a new product designed for healthcare providers, insurers, and consumers. The system allows users with Claude Pro and Max subscriptions to manage personal health records, integrate with mobile devices, and share data through Apple Health and Android Health Connect. Anthropic emphasized that no personal data would be stored in Claude’s memory or used for model training.

Eric Kauderer-Abrams, head of life sciences at Anthropic, said Claude for Healthcare represents “a significant step forward in using AI to help people tackle complex medical issues.” Built on Claude Opus 4.5, the model is reportedly more accurate in simulated medical and scientific tasks and reduces error rates, while connecting directly to industry-standard databases to assist clinicians with report generation and decision support.

The healthcare initiative is widely seen as a direct response to OpenAI, which launched ChatGPT Health on Jan. 7. OpenAI’s product aims to integrate health information with ChatGPT’s AI capabilities, helping users better understand their health and make informed decisions. Greg Brockman, OpenAI president and co-founder, has described AI in healthcare as a personal priority, emphasizing the platform’s potential to “save lives.”

Apple and Ant Group are also reportedly planning AI-powered health offerings, including Apple’s upcoming “AI Health Agent” and Ant Group’s Aifu, which now has over 30 million monthly active users. Analysts expect the AI healthcare market to grow from under $40 billion today to more than $1 trillion by 2034, with hospitals and enterprise users accounting for the largest share.

Anthropic has focused on B2B applications, with Banner Health in the United States already deploying Claude to over 22,000 clinical providers. The company is also promoting applications in life sciences, including pharmaceuticals, reinforcing its strategy to develop enterprise-focused AI solutions.

Since its founding in 2021, Anthropic has emphasized safety, reliability, and stability—qualities that appeal to enterprise clients, according to co-founder and president Daniela Amodei. In a recent interview with CNBC, she stressed that the long-term value of large models lies in productivity tools rather than entertainment features.

Amodei also addressed Anthropic’s highly anticipated IPO, saying the company would use capital responsibly and carefully evaluate the timing and structure of any public offering, without providing a specific timeline.

As Anthropic expands its AI programming and healthcare offerings, the company finds itself balancing multiple priorities: protecting intellectual property, maintaining enterprise relationships, and competing against global rivals in both general-purpose and specialized AI applications.
