AsianFin — The GenAI Week 2025 conference opened Sunday at the Santa Clara Convention Center with a provocative keynote from Jason Wei, a core research scientist at OpenAI, who used humor and hard data to confront one of the most pressing questions of the AI era: what remains uniquely human when intelligence becomes nearly free?
“Maybe one day, AI will write the most elegant love poems or plan the perfect date,” Wei said to laughter near the close of his 40-minute talk.
“But one thing it will never do: console your angry girlfriend.” The lighthearted moment belied a keynote packed with sobering insights on the cost of intelligence, the future of work, and the shifting boundaries of human relevance.
Rather than announce new models or showcase flashy demos, Wei focused on the systemic implications of generative AI’s rapid evolution. His message was clear: the marginal cost of intelligence is collapsing, and with it, the traditional value humans have placed on cognitive labor.
Photo taken by AsianFin staff
Using MMLU benchmark data, Wei demonstrated that the cost to generate a million tokens at GPT-3-level performance has dropped from roughly $100 in 2021 to just 10 cents in 2024 using open-source models—a thousandfold decrease. He credited the leap to advances in adaptive compute, a method of dynamically allocating resources depending on task complexity.
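The arithmetic behind the thousandfold claim is a straightforward ratio of the two figures Wei quoted; a minimal back-of-the-envelope check:

```python
# Figures as quoted in the talk: ~$100 per million tokens in 2021,
# ~$0.10 per million tokens in 2024 (open-source models).
cost_2021 = 100.00  # USD per million tokens, GPT-3-level, 2021
cost_2024 = 0.10    # USD per million tokens, open-source, 2024

decrease_factor = cost_2021 / cost_2024
print(decrease_factor)  # 1000.0, i.e. a thousandfold decrease
```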
For simple queries like identifying a state capital, models require only fractions of a second. For complex problems, such as advanced mathematics, models now “choose” to spend up to 30 seconds on in-depth reasoning. That shift, Wei argued, signals a fundamental redefinition of cognition: no longer a scarce human asset, thinking has become a scalable, on-demand service.
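The routing idea Wei described can be sketched in a few lines. This is a toy illustration, not how any production system works: the complexity heuristic and function names are invented, and real adaptive-compute systems learn the allocation rather than hard-coding it. The budget range mirrors the spread Wei cited, from fractions of a second up to about 30 seconds.

```python
# Toy sketch of adaptive compute: allocate a reasoning budget based on
# an estimate of task complexity. All names and the keyword heuristic
# are hypothetical; real systems learn this routing.

def estimate_complexity(query: str) -> float:
    """Crude proxy: math-flavored keywords push the score toward 1.0."""
    signals = ["prove", "integral", "optimize", "theorem"]
    hits = sum(1 for s in signals if s in query.lower())
    return min(1.0, 0.3 * hits)

def reasoning_budget_seconds(query: str) -> float:
    """Map complexity onto a compute budget: 0.1s for trivial lookups,
    up to ~30s of deliberate reasoning for hard problems."""
    return 0.1 + estimate_complexity(query) * 29.9

print(reasoning_budget_seconds("What is the capital of Vermont?"))  # 0.1
print(reasoning_budget_seconds("Prove the theorem about this integral."))
```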
But it’s not just the cost of intelligence that is shifting. Wei introduced what he called “Verifier’s Law,” the notion that AI doesn’t displace difficult jobs—it displaces verifiable ones. Contrary to the popular belief that complexity offers protection from automation, Wei argued that jobs with clear, standardized evaluation criteria are the most vulnerable.
These include mathematics, legal analysis, financial reporting, and structured writing. In contrast, AI still struggles in domains with high ambiguity and subjective feedback, such as branding, leadership evaluation, and cultural discourse.
Drawing on examples from DeepMind’s AlphaEvolve, which outperformed elite human researchers in combinatorial mathematics by iterating through massive search spaces, Wei suggested that AI’s real power lies not in creativity but in systems that reward multi-path trial, verification, and refinement. This shift, he said, is already redrawing the boundaries of professional labor—not by industry or difficulty, but by whether work can be measured and scored.
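The trial-verify-refine pattern Wei attributed to systems like AlphaEvolve can be shown generically. The sketch below is a deliberately simple hill-climbing loop over a verifiable toy objective (matching a hidden bit string); it stands in for the much larger search spaces those systems explore, and none of it is drawn from AlphaEvolve's actual implementation. The point it illustrates is Wei's: progress comes from cheap verification applied to many candidate solutions, not from a single creative leap.

```python
import random

def verify(candidate: list[int]) -> int:
    """Verifiable objective: positions matching a hidden target."""
    target = [1, 0, 1, 1, 0, 1, 0, 0]
    return sum(a == b for a, b in zip(candidate, target))

def refine(candidate: list[int]) -> list[int]:
    """Propose a variant by flipping one random bit."""
    i = random.randrange(len(candidate))
    return candidate[:i] + [1 - candidate[i]] + candidate[i + 1:]

def search(steps: int = 500) -> list[int]:
    """Trial-verify-refine loop: keep only verified improvements."""
    random.seed(0)  # deterministic for illustration
    best = [0] * 8
    for _ in range(steps):
        trial = refine(best)
        if verify(trial) >= verify(best):
            best = trial
    return best

print(verify(search()))  # climbs to the maximum score of 8
```

The verifier does all the work here; the proposal step is blind. That asymmetry is exactly why, in Wei's framing, verifiable tasks are the ones most exposed to automation.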
Wei also pushed back against the common assumption that AI’s growth will follow a smooth, exponential curve. Instead, he argued that the technology evolves in jagged spurts—delivering breakthroughs in narrow, structured domains while plateauing in others. The implication, he said, is that generative AI will soon dominate fields with replicable processes and clear standards but remain limited in more fluid, culturally embedded, or emotionally charged work.
The larger philosophical challenge, Wei argued, is how humans define value in a world of surplus cognition. While AI may soon outperform humans in information retrieval and structured analysis, Wei stressed that human advantage will lie in designing systems of meaning: judgment, meta-design, conviction, and cultural intuition. These are the qualities, he said, that cannot be programmed or priced.
“In a world where AI can generate 10,000 answers, the question becomes: do you know which one matters?” Wei said.
The keynote resonated throughout the packed convention hall, setting the tone for a weeklong summit that brings together leading figures from OpenAI, DeepMind, Anthropic, and startups across the U.S. and Asia. Wei’s remarks landed as both a roadmap and a warning: the future of work is not defined by technical difficulty but by whether a task can be structured and verified. For technologists, founders, and investors, this presents both opportunity and disruption.
His final message was pointed: while intelligence may no longer be a human monopoly, the ability to frame problems, design systems, and build trust remains an irreplaceable edge. AI isn’t here to replace everyone, he said; it’s here to amplify those who can design the structure it operates within.