OpenAI launches GPT-5.3-Codex-Spark for ultra-fast real-time coding

Tags: new testing
DATE POSTED: February 13, 2026

On Thursday, OpenAI announced GPT-5.3-Codex-Spark, a lightweight version of its agentic coding tool Codex, which the company launched earlier this month. Powered by Cerebras’ Wafer Scale Engine 3 chip, Spark delivers faster inference and marks the first milestone in OpenAI’s multi-year partnership with Cerebras.

The original GPT-5.3-Codex model handles longer, heavier tasks that require deeper reasoning and execution, while GPT-5.3-Codex-Spark focuses on fast, interactive work. OpenAI describes it as a smaller model designed specifically for low-latency inference. Spark also brings Cerebras hardware directly into OpenAI’s own infrastructure, reflecting a deeper collaboration between the two companies.

OpenAI and Cerebras revealed their partnership last month through a multi-year agreement valued at over $10 billion. At that time, OpenAI stated, “Integrating Cerebras into our mix of compute solutions is all about making our AI respond much faster.” The company now positions Spark as the initial achievement in this alliance, emphasizing its role in accelerating AI responses.

Cerebras’ Wafer Scale Engine 3 powers Spark’s inference. The third-generation wafer-scale megachip contains 4 trillion transistors, providing high-performance compute tailored to AI workloads. OpenAI highlights Spark’s suitability for real-time collaboration and rapid iteration: it is meant to be a daily productivity driver for quick prototyping, rather than the extended computations handled by the base GPT-5.3-Codex model.

Spark is the lowest-latency model available in Codex. OpenAI explains its purpose in an official statement: “Codex-Spark is the first step toward a Codex that works in two complementary modes: real-time collaboration when you want rapid iteration, and long-running tasks when you need deeper reasoning and execution.” Cerebras’ chips support workflows that demand extremely low latency.
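
To make the two-mode idea concrete, the sketch below shows one way a client could route coding requests between a fast, interactive model and a heavier, long-running one. It is a minimal Python illustration, not OpenAI’s actual API: the model identifiers, the CodingTask fields, and the two-minute cutoff are all assumptions made for the example.

    from dataclasses import dataclass

    # Hypothetical model identifiers; illustrative only, not confirmed API names.
    FAST_MODEL = "gpt-5.3-codex-spark"   # low-latency, interactive iteration
    DEEP_MODEL = "gpt-5.3-codex"         # long-running, deeper reasoning

    @dataclass
    class CodingTask:
        prompt: str
        interactive: bool          # a user is waiting at the keyboard
        estimated_minutes: float   # rough guess at how long the task will run

    def pick_model(task: CodingTask, interactive_cutoff_minutes: float = 2.0) -> str:
        """Route short, interactive requests to the fast model and
        long-running, autonomous work to the deeper model."""
        if task.interactive and task.estimated_minutes <= interactive_cutoff_minutes:
            return FAST_MODEL
        return DEEP_MODEL

    # A quick inline edit goes to the fast model; a large refactor does not.
    print(pick_model(CodingTask("rename this variable", True, 0.1)))      # gpt-5.3-codex-spark
    print(pick_model(CodingTask("migrate the test suite", False, 45.0)))  # gpt-5.3-codex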

For now, Spark is available as a research preview exclusively to ChatGPT Pro users within the Codex app, giving subscribers on the Pro plan the first chance to test it. Prior to the announcement, OpenAI CEO Sam Altman hinted at the release on Twitter. He posted, “We have a special thing launching to Codex users on the Pro plan later today.” Altman added, “It sparks joy for me.”

Cerebras, established over a decade ago, has gained prominence in the AI sector. Last week, the company secured $1 billion in fresh capital, achieving a valuation of $23 billion. Cerebras has indicated plans to pursue an initial public offering. Sean Lie, CTO and co-founder of Cerebras, commented on the development: “What excites us most about GPT-5.3-Codex-Spark is partnering with OpenAI and the developer community to discover what fast inference makes possible — new interaction patterns, new use cases, and a fundamentally different model experience.” Lie described the preview as “just the beginning.”
