Reflection, a startup founded by former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation. The company aims to establish an open-source frontier AI lab in the U.S. to challenge closed and foreign competitors.
The company was established in March 2024 by co-founders Misha Laskin and Ioannis Antonoglou. Laskin previously led reward modeling for the Gemini project at Google DeepMind, while Antonoglou was a co-creator of AlphaGo, the AI system that defeated the world Go champion in 2016. This experience informs their strategy to build frontier models outside established tech giants. Reflection initially concentrated on creating autonomous coding agents before broadening its mission to serve as an open-source alternative to labs such as OpenAI and Anthropic.
The new $2 billion funding round lifts Reflection’s valuation to $8 billion, roughly 15 times the $545 million valuation it recorded seven months earlier. Along with the new capital, Reflection announced the recruitment of top AI talent from DeepMind and OpenAI. The company also said it has developed an advanced AI training stack that it intends to open to all, and that it has “identified a scalable commercial model that aligns with our open intelligence strategy.”
Reflection’s team, led by CEO Misha Laskin, numbers about 60 people, composed primarily of AI researchers and engineers working on infrastructure, data training, and algorithm development. The startup has secured a compute cluster and, according to Laskin, plans to release a frontier language model next year, trained on a dataset of “tens of trillions of tokens.”
In a post on the social media platform X, Reflection detailed its technical achievements. “We built something once thought possible only inside the world’s top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoEs) models at frontier scale,” the company stated. “We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning.”
Today we're sharing the next phase of Reflection.
We're building frontier open intelligence accessible to all.
We've assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion.
Why Open Intelligence Matters
Technological and scientific…
— Reflection AI (@reflection_ai) October 9, 2025
Mixture-of-Experts (MoE) is the model architecture underpinning many frontier-level large language models: each token is routed to only a small subset of specialized “expert” subnetworks, so per-token compute stays manageable even as the total parameter count grows. Until recently, the ability to train these models at scale was largely confined to large, closed-access AI laboratories. Chinese firms, including DeepSeek, made a significant advance by developing methods to train MoE models and release them openly, and other China-based efforts such as Qwen and Kimi followed with similar results.
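The routing idea is easiest to see in code. The sketch below is a minimal, illustrative top-k routed expert layer written in PyTorch; every name, size, and design choice is an assumption made for explanation only, and it is not Reflection’s (unreleased) training stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: a router scores experts per token,
    the top-k experts run on that token, and only their parameters are used."""

    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                                    # x: (tokens, d_model)
        gate_logits = self.router(x)                         # (tokens, n_experts)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize selected gates
        out = torch.zeros_like(x)
        for k in range(self.top_k):                          # dispatch tokens to experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                        # tokens whose k-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

tokens = torch.randn(16, 64)           # 16 tokens with hidden size 64
print(ToyMoELayer()(tokens).shape)     # torch.Size([16, 64])
```

In this toy setup only 2 of the 8 expert networks run for any given token, which is the property that lets frontier labs grow total parameter counts far faster than per-token compute.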
The emergence of these powerful open-source models from China serves as a primary motivator for Reflection’s mission. “DeepSeek and Qwen and all these models are our wake-up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else,” Laskin said. He added, “It won’t be built by America.”