The Reinforcement Gap — or why some AI skills improve faster than others
Samira Vishwas October 06, 2025 12:24 AM

AI coding tools are getting better fast. If you don’t work in code, it can be hard to notice how much things are changing, but GPT-5 and Gemini 2.5 made it possible to automate a whole new set of developer tasks, and last week Claude Sonnet 4.5 did it again.

At the same time, other skills are progressing more slowly. If you are using AI to write emails, you’re probably getting the same value out of it you did a year ago. Even when the model gets better, the product doesn’t always benefit — particularly when the product is a chatbot that’s doing a dozen different jobs at the same time. AI is still making progress, but it’s not as evenly distributed as it used to be.

The difference in progress has a simpler explanation than you might think. Coding apps are benefitting from billions of easily measurable tests, which can train them to produce workable code. This is reinforcement learning (RL), arguably the biggest driver of AI progress over the past six months, and it is getting more sophisticated all the time. You can do reinforcement learning with human graders, but it works best if there’s a clear pass-fail metric, so you can repeat it billions of times without having to stop for human input.
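
To make that concrete, here is a minimal sketch of an RL step built around an automatic pass-fail grader. The model interface, the task, and the tests are all hypothetical placeholders for the sake of illustration, not any lab’s actual training code.

```python
def grade(candidate_code: str, tests) -> float:
    """Hypothetical pass/fail grader: reward 1.0 if the candidate passes
    every test, 0.0 otherwise. No human needs to be in the loop."""
    namespace = {}
    try:
        exec(candidate_code, namespace)  # load the candidate's definitions
        return 1.0 if all(t(namespace) for t in tests) else 0.0
    except Exception:
        return 0.0  # broken code simply scores zero


def reinforcement_step(model, task, tests):
    """One iteration: sample a solution, grade it automatically,
    and nudge the model toward higher-reward outputs."""
    candidate = model.generate(task)        # placeholder model API
    reward = grade(candidate, tests)
    model.update(task, candidate, reward)   # placeholder policy update
    return reward

# Because grade() is fully automatic, this step can be repeated millions
# or billions of times without stopping for human judgment.
```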

As the industry relies increasingly on reinforcement learning to improve products, we’re seeing a real difference between capabilities that can be automatically graded and the ones that can’t. RL-friendly skills like bug-fixing and competitive math are getting better fast, while skills like writing make only incremental progress.

In short, there’s a reinforcement gap — and it’s becoming one of the most important factors for what AI systems can and can’t do.

In some ways, software development is the perfect subject for reinforcement learning. Even before AI, there was a whole sub-discipline devoted to testing how software would hold up under pressure — largely because developers needed to make sure their code wouldn’t break before they deployed it. So even the most elegant code still needs to pass through unit testing, integration testing, security testing, and so on. Human developers use these tests routinely to validate their code and, as Google’s senior director for dev tools recently told me, they’re just as useful for validating AI-generated code. Even more than that, they’re useful for reinforcement learning, since they’re already systematized and repeatable at a massive scale.
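
As a concrete illustration of why existing tests double as RL graders, here is a hedged sketch of an ordinary unit test scoring a model-generated function automatically. The generated snippet and the test cases are invented for the example.

```python
# A unit test written for human developers can grade AI output just as well:
# it returns a clean pass/fail signal with no person in the loop.

generated_code = """
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
"""

def passes_tests(source: str) -> bool:
    scope = {}
    try:
        exec(source, scope)
        median = scope["median"]
        assert median([3, 1, 2]) == 2
        assert median([4, 1, 3, 2]) == 2.5
        assert median([7]) == 7
        return True
    except Exception:
        return False

print(passes_tests(generated_code))  # True -> reward 1.0; False -> reward 0.0
```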

There’s no easy way to validate a well-written email or a good chatbot response; these skills are inherently subjective and harder to measure at scale. But not every task falls neatly into “easy to test” or “hard to test” categories. We don’t have an out-of-the-box testing kit for quarterly financial reports or actuarial science, but a well-capitalized accounting startup could probably build one from scratch. Some testing kits will work better than others, of course, and some companies will be smarter about how to approach the problem. But the testability of the underlying process is going to be the deciding factor in whether it can be turned into a functional product instead of just an exciting demo.
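
To make the idea concrete, here is a hypothetical sketch of one small piece of such a testing kit for quarterly reports. The checks and field names are invented for illustration; real accounting validation would be far more involved.

```python
# Hypothetical automated checks for a generated quarterly report.
# The point is only that parts of the domain can be reduced to
# machine-checkable rules, which is what RL needs.

def check_report(report: dict) -> list[str]:
    failures = []
    # Balance sheet identity: assets = liabilities + equity
    if report["assets"] != report["liabilities"] + report["equity"]:
        failures.append("balance sheet does not balance")
    # Net income should equal revenue minus expenses
    if report["net_income"] != report["revenue"] - report["expenses"]:
        failures.append("income statement does not reconcile")
    return failures

report = {
    "assets": 500, "liabilities": 300, "equity": 200,
    "revenue": 120, "expenses": 90, "net_income": 30,
}
print(check_report(report) or "all checks pass")
```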

Some processes turn out to be more testable than you might think. If you’d asked me last week, I would have put AI-generated video in the “hard to test” category, but the immense progress made by OpenAI’s new Sora 2 model shows it may not be as hard as it looks. In Sora 2, objects no longer appear and disappear out of nowhere. Faces hold their shape, looking like a specific person rather than just a collection of features. Sora 2 footage respects the laws of physics in both obvious and subtle ways. I suspect that, if you peeked behind the curtain, you’d find a robust reinforcement learning system for each of these qualities. Put together, they make the difference between photorealism and an entertaining hallucination.
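
If that guess is right, the grading might amount to a weighted combination of per-quality checks. The sketch below is purely speculative; every scorer in it is a stand-in, and it only illustrates how separate automatic graders could roll up into a single reward.

```python
# Speculative sketch: combining per-quality automatic scores into one
# reward for a generated video clip. Each scorer is a hypothetical stand-in.

def composite_reward(clip, scorers, weights):
    """Weighted sum of independent automatic quality checks."""
    return sum(w * score(clip) for score, w in zip(scorers, weights))

# Hypothetical per-quality graders, each returning a value in [0, 1]:
def object_permanence(clip): ...    # do objects persist across frames?
def face_consistency(clip): ...     # does a face stay the same person?
def physics_plausibility(clip): ... # do motion and collisions look physical?

# reward = composite_reward(
#     clip,
#     [object_permanence, face_consistency, physics_plausibility],
#     [0.4, 0.3, 0.3],
# )
```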

To be clear, this isn’t a hard and fast rule of artificial intelligence. It’s a result of the central role reinforcement learning is playing in AI development, which could easily change as models develop. But as long as RL is the primary tool for bringing AI products to market, the reinforcement gap will only grow bigger — with serious implications for both startups and the economy at large. If a process ends up on the right side of the reinforcement gap, startups will probably succeed in automating it — and anyone doing that work now may end up looking for a new career. The question of which healthcare services are RL-trainable, for instance, has enormous implications for the shape of the economy over the next 20 years. And if surprises like Sora 2 are any indication, we may not have to wait long for an answer.
