Self-driving cars are supposed to be the future. AI is supposed to take the wheel, navigate flawlessly, and eliminate human error. Yet here we are, still gripping our steering wheels while AI stumbles through simulations, making mistakes that range from hilariously bad to downright dangerous.
Why? Because AI learns through trial and error—the digital equivalent of throwing darts in the dark until it finally hits the bullseye. That’s fine when the stakes are low, like playing chess or optimizing ads. But when it comes to real-world applications—where a mistake means plowing into a pedestrian—this approach falls apart.
According to a study conducted by Zhenghao Peng, Wenjie Mo, Chenda Duan, and Bolei Zhou from the University of California, Los Angeles (UCLA), along with Quanyi Li from the University of Edinburgh, AI training can be dramatically improved using Proxy Value Propagation (PVP). Their research, titled Learning from Active Human Involvement through Proxy Value Propagation, challenges traditional reinforcement learning by showing that active human involvement leads to faster, safer, and more efficient AI training.
Traditional Reinforcement Learning (RL), the standard way AI learns to make decisions, is painfully slow. It requires millions of attempts before an AI figures out what works. Worse, it assumes AI can understand human intent just by following a reward system—when in reality, reward systems often lead to bizarre, unintended behaviors. Think of an AI trained to win a race that figures out it can just drive in circles at the start line to rack up “distance traveled” points without ever finishing the course.
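That failure mode, often called reward hacking, is easy to reproduce. The sketch below is our own toy illustration, not code from the study: a naive reward that pays for raw distance traveled can be farmed by driving in circles, while a progress-based reward pays nothing for the same loop.

```python
# Toy illustration of reward hacking (our own example, not from the study).
import math

def distance_reward(prev_pos, pos):
    """Naive reward: pays for any movement at all."""
    return math.dist(prev_pos, pos)

def progress_reward(prev_pos, pos, finish):
    """Safer reward: pays only for movement toward the finish line."""
    return math.dist(prev_pos, finish) - math.dist(pos, finish)

# An agent circling the start line racks up distance_reward forever,
# while progress_reward over the same loop telescopes to zero.
finish = (100.0, 0.0)
loop = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]
hacked = sum(distance_reward(a, b) for a, b in zip(loop, loop[1:]))
honest = sum(progress_reward(a, b, finish) for a, b in zip(loop, loop[1:]))
print(f"distance reward from circling: {hacked:.1f}")  # 4.0, exploitable
print(f"progress reward from circling: {honest:.1f}")  # 0.0, no progress
```

The agent isn't being clever or malicious; it is optimizing exactly what it was told to optimize. The bug lives in the reward, not the learner.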
Clearly, AI needs a better teacher. And that teacher? You.
Let humans intervene in real time
Proxy Value Propagation (PVP) is a new method that turns AI training into something far more human. Instead of letting AI blunder through its mistakes for months, PVP lets humans step in, take over, and show the AI what to do in real time. Every takeover becomes a training signal: the human's action is marked as desirable, and the action the AI was about to take is marked as undesirable.
The result is surprising. AI learns much faster, with fewer mistakes, and—most importantly—it actually aligns with human expectations instead of blindly chasing reward points.
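Under the hood, the trick is to turn each takeover into value labels. Here is a minimal PyTorch-style sketch based on the paper's description; the function and variable names, batch layout, and discount factor are our assumptions, not the authors' code. Human actions recorded during interventions are pinned to a high "proxy value," the agent's overridden actions to a low one, and a reward-free temporal-difference update propagates those labels to states the human never visited.

```python
import torch
import torch.nn.functional as F

GAMMA = 0.99  # discount factor for the TD backup (our choice)

def pvp_loss(q_net, target_net, intervened_batch, replay_batch):
    """PVP-style loss sketch: proxy value labels on intervention steps,
    plus a reward-free TD term that spreads those labels around."""
    # Steps where the human took over: keep both the human's action
    # and the agent action it overrode (LongTensors of shape [B, 1]).
    s, a_human, a_agent = intervened_batch
    q_human = q_net(s).gather(1, a_human)
    q_agent = q_net(s).gather(1, a_agent)
    # Proxy value loss: pin the human's action high, the overridden one low.
    proxy = F.mse_loss(q_human, torch.ones_like(q_human)) + \
            F.mse_loss(q_agent, -torch.ones_like(q_agent))
    # TD loss with zero environment reward: its only job is to propagate
    # the proxy values to states the human never touched.
    s0, a0, s1, done = replay_batch
    q_pred = q_net(s0).gather(1, a0)
    with torch.no_grad():
        q_next = target_net(s1).max(dim=1, keepdim=True).values
        td_target = GAMMA * (1.0 - done) * q_next  # note: no reward term
    td = F.mse_loss(q_pred, td_target)
    return proxy + td
```

Because the TD target contains no environment reward, the agent's value estimates are shaped entirely by human feedback, which is exactly what keeps it from chasing points the designer never intended.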
The numbers don’t lie: PVP works
The team behind PVP put it to the test in GTA V, CARLA (an open-source driving simulator), and MiniGrid (a virtual maze navigation task). The results were stunning: in every environment, PVP-trained agents learned faster, made fewer mistakes, and tracked human intent more closely than agents trained with standard reinforcement learning.
In other words, AI actually started driving like a human—not just a robot programmed to maximize abstract rewards.
A win for AI, and for humans
PVP isn’t just better for AI. It also makes life easier for the people training it. Traditional AI training requires constant human oversight, hours of feedback, and a whole lot of patience. PVP changes that:
- AI needed 50% less human effort to train.
- Testers rated PVP-trained AI 4.8 out of 5 for accuracy, compared to just 3.0 for older methods.
- PVP-trained AI caused significantly less stress for its human trainers, because it didn’t constantly require corrections.
For a technology that’s supposed to make our lives easier, that’s a huge step forward.
From GTA to the streets
PVP has already proven itself in virtual driving tests. The real question is: can it work in real-world applications?
The potential is massive. Instead of relying solely on pre-programmed rules, AI could learn directly from human intervention, making it both safer and faster to train. AI-powered robots in warehouses, hospitals, or even homes could be taught in real time instead of through months of trial and error. Human doctors could intervene during AI-assisted surgeries or diagnoses, directly teaching the system what’s right or wrong.
Sometimes, the goal is just to make AI human enough—to act in ways we expect, to align with our values, and to avoid mistakes that put us at risk.
Featured image credit: Kerem Gülen/Midjourney