Only Months Left for Coders? Anthropic CEO Warns AI Could Take Over All Software Jobs as AGI Nears

By xinyue | Jan 21, 2026, 9:47 p.m. ET

Screenshot from the official World Economic Forum livestream

Artificial intelligence (AI) could soon eliminate the need for human programmers almost entirely, Anthropic Chief Executive Dario Amodei said at the World Economic Forum in Davos on Wednesday, predicting that AI systems may be capable of replacing software engineers end-to-end within as little as six to 12 months. The claim has reignited global debate over how fast artificial general intelligence (AGI) is approaching and how deeply it could disrupt white-collar work.

Amodei’s remarks, delivered during a rare joint appearance with Google DeepMind CEO Demis Hassabis, marked one of the starkest warnings yet from an AI industry leader about the speed at which machines could overtake human cognitive labor.

Speaking before a packed audience of policymakers, executives and investors, the two executives framed the discussion around a provocative theme: what the world will look like on “the first day after AGI arrives.”

Unlike similar discussions held a year earlier in Paris, the Davos conversation carried a noticeably sharper sense of urgency. The question was no longer whether AGI — loosely defined as AI that matches or exceeds human capabilities across most cognitive domains — would arrive, but how soon, and whether societies are remotely prepared.

Amodei, whose company Anthropic develops the Claude family of large language models, argued that the convergence of three forces — scale, multimodality and autonomous agents — has made the trajectory toward AGI increasingly visible.

“With each generation, models are getting better at reasoning, better at acting, and better at improving themselves,” he said. “When you put those together, timelines compress very quickly.”

Hassabis, whose DeepMind unit within Alphabet has long pursued AGI as an explicit goal, largely agreed on the direction of travel, though he offered a more measured forecast. He said there was roughly a 50% chance AGI would be achieved before the end of the decade, reiterating a probability he first cited last year.

At the center of Amodei’s argument was what he described as a rapidly closing feedback loop: AI systems that can write code are increasingly being used to improve the very models that generate that code.

“AI writing code leads to better AI, which leads to faster iteration,” Amodei said. “That flywheel is spinning faster than most people realize.”

He suggested that once this loop becomes sufficiently autonomous — with models not just coding, but designing experiments, running evaluations and refining architectures — research and development could accelerate at an exponential pace.

“If AI can write AI in a near-perfect closed loop,” he said, “you get something that looks like a miracle — an explosion in capability.”

Anthropic’s own growth, he argued, offered a glimpse of that dynamic. The company’s revenue has increased roughly 100-fold over the past three years, he said, reflecting surging demand for advanced AI systems in software development and enterprise workflows.

Amodei’s most arresting claim concerned the future of software engineering — and how quickly it could be upended.

“At Anthropic, our engineers rarely write code by hand anymore,” he said. “The models do it. Humans review, guide, and set direction.”

In his view, this division of labor could soon make human programmers largely redundant. Within six to 12 months, he said, AI systems could handle the entire software lifecycle end-to-end — from requirements gathering and system design to frontend and backend development, testing, deployment and maintenance.

In industry terms, “end-to-end” means far more than generating snippets of code. It encompasses the full scope of what software engineers do today, including architecture decisions, cross-file refactoring, debugging and integration with complex production environments.

Supporters of Amodei’s view point to benchmarks such as SWE-Bench, a widely used test that evaluates how well AI models can solve real-world software engineering problems drawn from GitHub repositories.

Unlike toy coding challenges, SWE-Bench tasks require models to identify bugs, understand large codebases, modify multiple files and produce patches that pass continuous integration tests.

On the benchmark’s verified subset, Anthropic’s latest Claude model reportedly achieves a success rate above 70% in a constrained environment, at a cost of well under $1 per task — a level that many researchers say rivals or exceeds that of junior engineers.

When mapped onto real-world job hierarchies, the results are striking. Easier tasks correspond to entry-level roles with zero to three years of experience, while more complex tasks align with mid-level engineers responsible for multi-file changes and system-level reasoning. Only the most difficult problems — those requiring deep architectural redesign or extensive domain research — remain largely out of reach for today’s models.

But Amodei warned that this gap may not last long.

“Going from junior to staff-level performance may only take a few iterations,” he said. “Moats that once took decades to build are eroding in real time.”

Beyond coding, Amodei made an even broader prediction: that by 2026 or 2027, AI systems could reach “Nobel-level” performance across multiple scientific fields.

By that, he said, he meant the ability to generate insights, hypotheses and solutions comparable to those produced by top human researchers — not merely to summarize existing knowledge.

Such a leap, he argued, would have profound implications for science, medicine and the global economy — but also for employment.

Amodei reiterated a warning he has made before: that up to 50% of junior white-collar jobs could disappear within the next one to five years, as AI systems absorb routine cognitive work in fields such as law, finance, consulting and software.

Hassabis, while acknowledging the speed of recent progress, urged caution against assuming a smooth, purely software-driven path to AGI.

“There are real bottlenecks,” he said, pointing to the physical world as a key constraint.

While AI has made dramatic gains in mathematics, coding and pattern recognition, he said, automating discovery in the natural sciences remains far harder. Fields such as chemistry, biology and materials science require experiments in the real world — a loop that cannot yet be fully closed by software.

“Scientific creativity is not just about reasoning,” Hassabis said. “It’s about interacting with reality.”

He added that if progress in robotics, laboratory automation or energy infrastructure lags behind advances in algorithms, the overall curve of AI development could flatten, buying societies more time to adapt.

The debate over employment dominated much of the audience discussion. While Amodei painted a bleak picture for entry-level workers, Hassabis struck a more balanced tone.

He said there was already evidence that hiring for junior roles and internships was slowing, particularly in technology. But he argued that new categories of work would eventually emerge, as they have in past technological revolutions.

“There will be short-term pain,” Hassabis said. “But in the long run, new and more meaningful jobs will be created.”

He urged young professionals to focus less on traditional career ladders and more on mastering AI tools themselves.

“Even people building these models haven’t fully explored their capability overhang,” he said. “If you can harness that, you can leapfrog in your field faster than any internship would allow.”

Both executives agreed on one point: once AGI truly arrives, historical analogies may break down.

“When you have systems that match human intelligence across domains, everything changes,” Hassabis said. “We are in uncharted territory.”

Amodei was even more blunt.

“By 2026,” he said, “the world is going to look very different for a lot of people — especially those at the beginning of white-collar careers.”

As policymakers grapple with how to regulate AI and economists debate its impact on growth and inequality, the Davos exchange underscored a growing consensus among AI leaders: the pace of change is accelerating, and the window for preparation may be far shorter than most societies expect.

For software engineers — and millions of other knowledge workers — the question raised in Davos was no longer theoretical.

It was whether the countdown has already begun.