Airtable is applying its data-first design philosophy to AI agents with the debut of Superagent on Tuesday. It's a standalone research agent that deploys teams of specialized AI agents working in parallel to complete research tasks.
The technical innovation lies in how Superagent's orchestrator maintains context. Earlier agent systems used simple model routing, with an intermediary filtering information between models. Airtable's orchestrator instead maintains full visibility over the entire execution journey: the initial plan, the execution steps and the sub-agent results. This creates what co-founder Howie Liu calls "a coherent journey," in which the orchestrator makes all decisions along the way. "It ultimately comes down to how you leverage the model's self-reflective capability," Liu told VentureBeat. Liu co-founded Airtable more than a dozen years ago with a cloud-based relational database at its core.
Airtable built its business on a singular bet: Software should adapt to how people work, not the other way around. That philosophy powered growth to over 500,000 organizations, including 80% of the Fortune 100, using its platform to build custom applications fitted to their workflows.
The Superagent technology is an evolution of capabilities originally developed by DeepSky (formerly known as Gradient), which Airtable acquired in October 2025.
From structured data to free-form agents

Liu frames Airtable and Superagent as complementary form factors that address different enterprise needs: Airtable provides the structured foundation, and Superagent handles unstructured research tasks.
"We obviously started with a data layer. It's in the name Airtable: It's a table of data," Liu said.
The platform evolved as scaffolding around that core database with workflow capabilities, automations, and interfaces that scale to thousands of users. "I think Superagent is a very complementary form factor, which is very unstructured," Liu said. "These agents are, by nature, very free form."
The decision to build free-form capabilities reflects industry learnings about using increasingly capable models. Liu said that as the models have gotten smarter, the best way to use them is to have fewer restrictions on how they run.
How Superagent's multi-agent system works

When a user submits a query, the orchestrator creates a visible plan that breaks complex research into parallel workstreams. If you're researching a company for investment, for example, it breaks the task into parts: researching the team, the funding history and the competitive landscape. Each workstream is delegated to a specialized agent that executes independently. The agents work in parallel, coordinated by the system, each contributing its piece to the whole.
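Airtable hasn't published Superagent's internals, but the fan-out pattern described above can be sketched with Python's `asyncio`. The plan items and the `run_specialist` stub are hypothetical stand-ins for what would be LLM-backed sub-agents:

```python
import asyncio

async def run_specialist(workstream: str) -> dict:
    """Stand-in for a specialized sub-agent; a real one would call an LLM."""
    await asyncio.sleep(0)  # simulate independent, concurrent work
    return {"workstream": workstream, "findings": f"summary of {workstream}"}

async def orchestrate(query: str) -> list[dict]:
    # Step 1: the orchestrator turns the query into a visible plan.
    plan = [
        f"research the team behind {query}",
        f"research the funding history of {query}",
        f"research the competitive landscape of {query}",
    ]
    # Step 2: delegate each workstream to a sub-agent and run them in parallel.
    results = await asyncio.gather(*(run_specialist(w) for w in plan))
    # Step 3: the orchestrator aggregates the pieces into a whole.
    return list(results)

report = asyncio.run(orchestrate("Acme Corp"))
for item in report:
    print(item["workstream"], "->", item["findings"])
```

The key structural point is that decomposition and aggregation both happen in one place, the orchestrator, while the specialists run concurrently and independently.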
While Airtable describes Superagent as a multi-agent system, it relies on a central orchestrator that plans, dispatches, and monitors subtasks — a more controlled model than fully autonomous agents.
The sub-agent approach keeps the main orchestrator's context clean: each agent returns distilled results rather than raw working data. Superagent also uses multiple frontier models for different sub-tasks, drawing on OpenAI, Anthropic and Google.
This design solves two problems: it keeps context windows from overflowing, and it lets the system adapt mid-execution.
"Maybe it tried doing a research task in a certain way that didn't work out, couldn't find the right information, and then it decided to try something else," Liu said. "It knows that it tried the first thing and it didn't work. So it won't make the same mistake again."
Why data semantics determine agent performance

From a builder's perspective, Liu argues that agent performance depends more on data structure quality than on model selection or prompt engineering. He bases this on Airtable's experience building an internal data-analysis tool to figure out what works.
The internal tool experiment revealed that data preparation consumed more effort than agent configuration.
"We found that the hardest part to get right was not actually the agent harness, but most of the special sauce had more to do with massaging the data semantics," Liu said. "Agents really benefit from good data semantics."
The data preparation work focused on three areas: restructuring data so agents could find the right tables and fields, clarifying what those fields represent, and ensuring agents could use them reliably in queries and analysis.
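A minimal sketch of what those three areas of data-semantics work might look like in practice. The table, field names and descriptions are invented for illustration; the idea is that a machine-readable schema with explicit descriptions and allowed values lets an agent find the right fields and query them reliably:

```python
schema = {
    "deals": {
        "description": "One row per sales opportunity, updated nightly.",
        "fields": {
            "amount_usd": {
                "type": "number",
                "description": "Total contract value in US dollars (not cents).",
            },
            "stage": {
                "type": "single_select",
                "description": "Pipeline stage.",
                "allowed_values": ["prospect", "negotiation", "closed_won", "closed_lost"],
            },
        },
    }
}

def validate_query(table: str, field: str, value) -> bool:
    """Guardrail so an agent only queries fields that exist, with valid values."""
    fields = schema.get(table, {}).get("fields", {})
    if field not in fields:
        return False
    allowed = fields[field].get("allowed_values")
    return allowed is None or value in allowed

print(validate_query("deals", "stage", "negotiation"))  # valid field and value
print(validate_query("deals", "stage", "pending"))      # value not in vocabulary
print(validate_query("deals", "revenue", 100))          # field does not exist
```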
What enterprises need to know

For organizations evaluating multi-agent systems or building custom implementations, Liu's experience points to several technical priorities.
Data architecture precedes agent deployment. The internal experiment demonstrated that enterprises should expect data preparation to consume more resources than agent configuration. Organizations with unstructured data or poor schema documentation will struggle with agent reliability and accuracy regardless of model sophistication.
Context management is critical. Simply stitching different LLMs together to create an agentic workflow isn't enough. There needs to be a proper context orchestrator that can maintain state and information with a view of the whole workflow.
Relational databases matter. Relational database architecture provides cleaner semantics for agent navigation than document stores or unstructured repositories. Organizations standardizing on NoSQL for performance reasons should consider maintaining relational views or schemas for agent consumption.
Orchestration requires planning capabilities. Just like a relational database has a query planner to optimize results, agentic workflows need an orchestration layer that plans and manages outcomes.
"So the punchline and the short version is that a lot of it comes down to having a really good planning and execution orchestration layer for the agent, and being able to fully leverage the models for what they're good at," Liu said.
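The query-planner analogy can be made concrete (this is my loose illustration, not Airtable's implementation): just as a query planner orders joins by their dependencies, an orchestration layer can topologically sort a research plan before dispatching subtasks. Python's standard-library `graphlib` handles the ordering; the plan itself is hypothetical:

```python
from graphlib import TopologicalSorter

# Each subtask maps to the subtasks whose results it depends on.
plan = {
    "gather sources": [],
    "research team": ["gather sources"],
    "research funding": ["gather sources"],
    "write synthesis": ["research team", "research funding"],
}

# Produce a valid execution order: dependencies always come first, and
# independent subtasks (team and funding research) could run in parallel.
execution_order = list(TopologicalSorter(plan).static_order())
print(execution_order)
```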