For years, cybersecurity has been defined by a simple but dangerous gap: the time between when a vulnerability is discovered and when it’s patched.
Fraudsters have traditionally exploited that window, often with catastrophic results.
Now, Google is showing that the arms race may no longer be moving at human speed, potentially signaling an end to the era of overloaded analysts chasing alerts and engineers patching software after the fact.
The tech giant unveiled several updates Tuesday (July 15) to its agentic artificial intelligence (AI)-powered cybersecurity efforts. Google is developing autonomous systems that can detect, decide and respond to threats in real time, often without human intervention.
“Our AI agent Big Sleep helped us detect and foil an imminent exploit,” Sundar Pichai, CEO of Google and its parent company, Alphabet, posted on social platform X. “We believe this is a first for an AI agent — definitely not the last — giving cybersecurity defenders new tools to stop threats before they’re widespread.”
For business leaders, especially chief information security officers (CISOs) and chief financial officers (CFOs), this emerging reality raises new questions. Are enterprise organizations ready for defense at machine speed? What’s the cost of not adopting these tools? Who’s accountable when AI systems take action?
Read also: What B2B Firms Can Learn From Big Tech’s Cybersecurity Initiatives
From Threat Reaction to Autonomous Prevention
Historically, zero-day vulnerabilities — unknown security flaws in software or hardware — have been discovered by adversaries first, exploited quietly and disclosed only after the damage is done. Big Sleep reversed that pattern. No alerts, no tip-offs — just AI running autonomously and flagging a high-risk issue before anyone else even knew it existed.
For CISOs, this means a new category of tools is emerging: AI-first threat prevention platforms that don’t wait for alerts but instead seek out weak points in code, configurations or behavior and take defensive action automatically.
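In practice, such a platform amounts to a continuous loop: scan assets, score the risk, act when the score clears a threshold. The Python sketch below illustrates that loop; the Finding shape, the scanner and the 0.9 auto-remediation threshold are hypothetical stand-ins, not any vendor’s actual API.

```python
# A minimal sketch of an AI-first prevention loop: scan, score, act.
# Everything here is hypothetical; the names and threshold illustrate
# the pattern, not any vendor's product.
from dataclasses import dataclass


@dataclass
class Finding:
    asset: str         # e.g., a service, config file or endpoint
    risk_score: float  # model-assigned likelihood of exploitability, 0 to 1
    summary: str       # human-readable description of the weak point


def scan_assets(assets: list[str]) -> list[Finding]:
    """Stand-in for a model-driven scan of code, configs or behavior."""
    # A real platform would run a trained model here; we hard-code one hit.
    return [Finding(asset=a, risk_score=0.92, summary="unsafe deserialization")
            for a in assets if a.endswith(".cfg")]


def respond(finding: Finding, auto_threshold: float = 0.9) -> str:
    """Act without waiting for an analyst when the modeled risk is high."""
    if finding.risk_score >= auto_threshold:
        return f"AUTO-REMEDIATE {finding.asset}: {finding.summary}"
    return f"QUEUE FOR REVIEW {finding.asset}: {finding.summary}"


if __name__ == "__main__":
    for finding in scan_assets(["auth-service.cfg", "app.py"]):
        print(respond(finding))
```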
For CFOs, it signals a change in cybersecurity economics. Prevention at this scale is potentially cheaper and more scalable than the human-powered models of the past. But that’s only if the AI is accurate and accountable.
“The models are only as good as the data being fed to them,” Boost Payment Solutions Chief Technology Officer Rinku Sharma told PYMNTS in April. “Garbage in, garbage out holds true even with agentic AI.”
The PYMNTS Intelligence report “The AI MonitorEdge Report: COOs Leverage GenAI to Reduce Data Security Losses” found that the share of chief operating officers (COOs) who said their companies had implemented AI-powered automated cybersecurity management systems leapt from 17% in May 2024 to 55% in August 2024.
The report found that these COOs adopted new AI-based systems because they could identify fraudulent activities, detect anomalies and provide real-time threat assessments.
See also: Payments Execs Say AI Agents Give Payments an Autonomous Overhaul
Agentic AI and Risk Accountability at the Edge of the Front Line
With power comes responsibility, and in cybersecurity, that translates to risk ownership.
Agentic AI systems, by definition, act independently. That autonomy introduces new challenges for governance and compliance. Who’s responsible if an AI mistakenly flags a critical system and shuts it down? What happens if the AI fails to detect a breach?
“This isn’t a technical upgrade; it’s a governance revolution,” Kathryn McCall, chief legal and compliance officer at Trustly, told PYMNTS in June.
“You’ve got to treat these AI agents as non-human actors with unique identities in your system,” she added. “You need audit logs, human-readable reasoning and forensic replay.”
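A minimal sketch of what those controls could look like in code follows, assuming a simple append-only log; the field names and the log_agent_action helper are illustrative, not drawn from any specific product.

```python
# A sketch of the controls McCall describes: every agent action is
# recorded with a unique non-human identity, human-readable reasoning
# and enough captured state to replay forensically. All names here
# are illustrative.
import json
import time
import uuid


def log_agent_action(agent_id: str, action: str, reasoning: str,
                     inputs: dict) -> dict:
    """Write an append-only audit record for a non-human actor."""
    record = {
        "event_id": str(uuid.uuid4()),  # unique handle for this event
        "timestamp": time.time(),
        "agent_id": agent_id,           # the agent's own identity
        "action": action,               # what the agent did
        "reasoning": reasoning,         # human-readable justification
        "inputs": inputs,               # state captured for forensic replay
    }
    with open("agent_audit.log", "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record


log_agent_action(
    agent_id="agent:autonomous-scanner-01",
    action="quarantine_host",
    reasoning="Risk score 0.94 on exploit-pattern match in auth service",
    inputs={"host": "10.0.0.12", "risk_score": 0.94},
)
```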
The emergence of agentic AI solutions for cybersecurity also has implications for how enterprises compose their defenses. As workforces remain hybrid and attack surfaces widen, endpoint security is only as good as its weakest device. Bringing autonomous protection to the edge — phones, browsers, apps — may no longer be optional.
Stax Chief Technology Officer Mark Sundt told PYMNTS in June that if agentic AI is the engine, orchestration is the transmission. Without a central conductor, even the most capable agents act in isolation.
“You’ve got agents to agents … but who’s driving the process?” Sundt said. “Who’s doing the orchestration?”
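One way to picture that orchestration layer is as a thin routing function sitting above the agents. The sketch below, with three hypothetical agents and a single hand-off rule, shows the pattern under that assumption; it illustrates the idea, not any company’s implementation.

```python
# A minimal sketch of Sundt's point: the orchestrator, not any single
# agent, owns the routing decisions. The three agents and the routing
# rule are hypothetical.


def detect(event: dict) -> dict:
    """Detection agent: score the event."""
    return {**event, "risk": 0.95 if event.get("anomaly") else 0.1}


def remediate(event: dict) -> dict:
    """Remediation agent: contain the threat."""
    return {**event, "status": "contained"}


def notify(event: dict) -> dict:
    """Notification agent: keep a human in the loop."""
    print(f"Analyst notified: {event}")
    return event


class Orchestrator:
    """The 'conductor': routing policy lives here, not inside the agents."""

    def run(self, event: dict) -> dict:
        event = detect(event)
        if event["risk"] >= 0.9:  # hand high-risk events to remediation
            event = remediate(event)
        return notify(event)      # always close the loop with a person


Orchestrator().run({"source": "endpoint-42", "anomaly": True})
```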
In that light, cybersecurity investments must now answer a new question: How much decision-making power are we ready to give our machines?
The adversaries aren’t waiting, and the AI agents aren’t slowing down.
For WEX Chief Digital Officer Karen Stroup, the best approach to deploying agentic AI involves a disciplined strategy of experimentation.
“If you’re going to experiment with agentic AI or any type of AI solutions, you want to focus on two things,” she told PYMNTS in April. “One is the areas where you’re most likely to have success. And two, is there going to be a good return on that investment?”