
Demis Hassabis, CEO of Google DeepMind, stated on CNBC’s podcast “The Tech Download” that Chinese AI models trail U.S. and Western capabilities by merely months, countering perceptions of a larger gap.
Hassabis made the remarks on the podcast, which launched on Friday, saying Chinese AI models are closer to U.S. and Western capabilities than would have been expected a year or two ago. “Maybe they’re only a matter of months behind at this point,” Hassabis told The Tech Download. That puts China nearer to parity in AI development than some observers had previously estimated.
Approximately one year ago, the Chinese AI lab DeepSeek released a model that disrupted markets, delivering strong performance despite being built on less-advanced chips and at a lower cost than its American counterparts. The release highlighted China’s capacity to produce competitive AI systems under resource constraints. DeepSeek has since introduced additional models that, while less startling than the original release, have kept attention on its progress.
Following DeepSeek’s entry, established Chinese tech giants and emerging firms accelerated their efforts. Alibaba, a major player in cloud computing and e-commerce, unveiled highly capable AI models, while startups including Moonshot AI and Zhipu launched strong models of their own. Together, these developments illustrate a rapid expansion of China’s AI ecosystem, with multiple players contributing sophisticated systems.
Hassabis acknowledged China’s proficiency in closing gaps but questioned its record on pioneering advances. “The question is, can they innovate something new beyond the frontier? So I think they’ve shown they can catch up … and be very close to the frontier … But can they actually innovate something new, like a new transformer … that gets beyond the frontier? I don’t think that’s been shown yet,” Hassabis said. This statement underscores a distinction between replication and origination in AI progress.
The transformer architecture, referenced by Hassabis, was introduced in a landmark 2017 research paper by Google researchers. It underpins the large language models developed across AI laboratories in the years since; products such as OpenAI’s ChatGPT and Google’s Gemini rely on transformer-based systems to process and generate language at scale.
Other prominent technology executives have similarly acknowledged China’s advances. Nvidia CEO Jensen Huang said last year that the U.S. holds no substantial lead in the AI race. “China is well ahead of us on energy. We are way ahead on chips. They’re right there on infrastructure. They’re right there on AI models,” Huang said. His remarks detail specific domains where China matches or exceeds U.S. strengths.
Chinese technology firms encounter significant hurdles, particularly in hardware access. A U.S. export ban restricts shipments of Nvidia’s leading-edge semiconductors, which are essential for training advanced AI models. The White House has signaled potential approval for sales of Nvidia’s H200 chip to China, an upgrade over previously accessible versions but not Nvidia’s most advanced product. Domestic alternatives from companies like Huawei aim to fill the gap, though their performance still lags Nvidia’s chips.
Analysts foresee potential long-term consequences from these hardware limitations. Richard Clode, portfolio manager at Janus Henderson, addressed this on CNBC’s “The China Connection” last week. “I do suspect, though, that we will start seeing a divergence as that superior U.S. AI infrastructure starts iterating those models and starts making those models more capable over time in years to come. So I would expect from here we’re probably at peak relative Chinese AI capability versus [U.S.],” Clode told the program. His point highlights the role of compute infrastructure in sustained model improvement.
Even within China, industry leaders concede challenges. Lin Junyang, technical lead of Alibaba’s Qwen team, spoke at an AI conference in Beijing last week. He estimated less than a 20% chance that a Chinese firm would overtake U.S. tech giants in AI over the next three to five years, according to the South China Morning Post. Lin attributed this to U.S. computing infrastructure being one to two orders of magnitude larger than China’s.
Hassabis attributes China’s absence of frontier breakthroughs primarily to mindset rather than technological barriers. He described DeepMind as a “modern day Bell Labs,” fostering “exploratory innovation” instead of solely “scaling out what’s known today.” Bell Labs, established in the early 1900s, produced numerous Nobel Prize-winning discoveries. “And of course, that’s already very difficult, because you need world-class engineering already to be able to do that. And China definitely has that,” Hassabis said. “The scientific innovation part, that’s a lot harder. To invent something is about 100 times harder than it is to copy it. … That’s the next frontier really, and I haven’t seen evidence of that yet, but it’s very difficult,” he added. These comments draw a line between engineering execution and scientific invention.
Hassabis ranks among the foremost figures in AI. He founded DeepMind more than a decade ago, and Google acquired the company in 2014. DeepMind has driven Alphabet-owned Google’s AI efforts, including the Gemini assistant. In November, Google released Gemini 3, its most recent model, which users and the market have received positively, easing concerns that Google was lagging behind competitors such as OpenAI.