Open‑source AI is quietly reshaping enterprise tech stacks—and a new McKinsey, Mozilla Foundation, and Patrick J. McGovern Foundation survey of 700 technology leaders shows why cost, control, and community are tipping the scales. When a global bank needed full transparency for a risk‑scoring model, it skipped the closed APIs and fine‑tuned Llama 3 on its own servers. That story is no longer the outlier; it is the emerging pattern.
Open source meets the AI gold rush

The past two years have produced a surge of awareness, experimentation, and funding around generative AI and large language models. Enterprises want results fast, but they also need room to tinker. Open repositories deliver both. A single Git pull can spin up a working model in minutes, letting engineers explore architectures, tweak parameters, and benchmark without procurement delays. That do‑it‑yourself velocity explains why projects such as Meta’s Llama, Google’s Gemma, DeepSeek‑R, and Alibaba’s Qwen families have landed in production pipelines despite their “research only” disclaimers. Performance gaps with proprietary titans like GPT‑4 are shrinking; the freedom to inspect and modify the code remains priceless.
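To make that velocity concrete, here is a minimal sketch of the kind of first experiment engineers run, assuming the Hugging Face transformers and accelerate libraries are installed; the checkpoint name is illustrative (and license‑gated), and access terms vary by model.

```python
# Minimal sketch of the "pull and run" workflow described above.
# Assumes transformers + accelerate are installed; the checkpoint name is
# illustrative and requires accepting the model's license terms.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "List three risk factors to review in a corporate loan application."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From there, the same weights can be fine‑tuned, quantized, or benchmarked against a hosted API without waiting on procurement.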
What the survey reveals

To quantify the shift, McKinsey partnered with the Mozilla Foundation and the Patrick J. McGovern Foundation, canvassing more than 700 senior developers, CIOs, and CTOs across 41 countries. The study, titled “Open Source in the Age of AI”, is the largest snapshot yet of how enterprises blend open and closed solutions as they move from pilot projects to value capture at scale. Respondents span industries from finance to healthcare, manufacturing to public sector, giving the data broad relevance. While the full report arrives in March, the preview numbers already upend a few assumptions about what “enterprise‑grade” AI looks like in 2025.
Across multiple layers of the AI technology stack, more than half of surveyed organizations use at least one open‑source component—often right next to a commercial API key.
These percentages climb even higher inside organizations that rate AI as “critical to competitive advantage.” In that cohort, leaders are 40 % more likely than peers to integrate open models and libraries, underscoring a simple fact: when AI is strategic, control and flexibility matter.
Open tools are not a panacea. When asked about adoption barriers, 56 % of respondents cite security and compliance concerns, while 45 % worry about long‑term support. Proprietary vendors score higher on “time to value” and “ease of use” because they bundle hosting, monitoring, and guardrails. And when executives prefer closed systems, they do so for one dominant reason: 72 % say proprietary solutions offer tighter control over risk and governance. In other words, enterprises weigh openness against operational certainty on a case‑by‑case basis.
Multimodel is the new normal

McKinsey’s data echoes a trend seen in cloud computing a decade ago: hybrid architectures win. Few companies will go all‑in on open or proprietary; most will mix and match. A closed foundation model might power a customer‑facing chatbot, while an open Llama‑variant handles internal document search. The choice often hinges on latency, privacy, or domain specificity. Expect “bring your own model” menus to become as standard as multi‑cloud dashboards.
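As a rough illustration of that hybrid pattern, the sketch below routes sensitive or internal traffic to a self‑hosted open model and everything else to a proprietary API. The handler names and routing criteria are hypothetical, not prescriptions from the survey.

```python
# Hypothetical "bring your own model" router: sensitive internal workloads stay
# on a self-hosted open model; general traffic goes to a managed vendor API.
from typing import Any, Callable, Dict

ModelHandler = Callable[[str], str]

def build_router(open_model: ModelHandler,
                 vendor_model: ModelHandler) -> Callable[[str, Dict[str, Any]], str]:
    def route(prompt: str, context: Dict[str, Any]) -> str:
        # Privacy, latency, and domain specificity drive the choice,
        # mirroring the trade-offs described above.
        if context.get("contains_pii") or context.get("internal_only"):
            return open_model(prompt)
        return vendor_model(prompt)
    return route

# Usage (handlers are placeholders for real clients):
# route = build_router(local_llama_search, hosted_chatbot_api)
# answer = route("Find our 2023 audit summary", {"internal_only": True})
```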
Playbook for CTOs

Based on survey insights and expert interviews, McKinsey outlines a pragmatic decision matrix for technology leaders.
One interviewed CIO summed it up: “We treat models like microservices. Some we build, some we buy, all must interoperate.”
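One way to read that quote in code is to put every model, built or bought, behind the same thin contract so calling code never binds to a specific vendor SDK. The sketch below is illustrative; the class and method names are assumptions, not part of the survey.

```python
# Illustrative "models as microservices" contract: built and bought models
# expose the same interface so they can be swapped without touching callers.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class SelfHostedLlama:
    """Hypothetical wrapper around an internally fine-tuned open-weight deployment."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError("call the in-house inference server here")

class VendorModel:
    """Hypothetical wrapper around a proprietary hosted API."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError("call the vendor's REST endpoint here")

def answer(model: TextModel, prompt: str) -> str:
    # Callers depend only on the shared contract, so models interoperate.
    return model.generate(prompt)
```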
Why it matters now

Enterprises face a strategic crossroads. Betting solely on proprietary platforms risks vendor lock‑in and opaque model behavior. Going fully open demands in‑house expertise and a rigorous security posture. The survey data suggests the winning play is optionality: build a stack that lets teams swap models as fast as the market evolves. Open source is no longer the rebel choice; it is becoming a first‑class citizen in enterprise AI. Ignore it, and you may find your competition iterating faster, hiring better talent, and paying less for inference. Embrace it thoughtfully, and you gain leverage, insight, and a community that never stops shipping improvements.
According to “Open Source in the Age of AI,” 76 % of leaders plan to expand open‑source AI use in the next few years. The age of multimodel pragmatism has arrived—code is open, the playing field is wide, and the smartest organizations will learn to thrive in both worlds.