
Only 16% of enterprises effectively govern AI agent access to core systems

DATE POSTED: January 26, 2026

A recent survey reveals significant gaps in the governance of artificial intelligence (AI) agents accessing enterprise systems. The 2026 CISO AI Risk Report, published by Cybersecurity Insiders and Saviynt, found that 71% of organizations use AI tools that access core business systems such as Salesforce and SAP, yet only 16% effectively govern this access.

The report, based on a survey of 235 chief information security officers and senior security leaders at large enterprises, indicates that 92% of organizations lack full visibility into their AI identities. Moreover, 95% of respondents doubt their ability to detect or contain misuse of AI agents. Three-quarters of those surveyed reported discovering unauthorized “shadow AI” tools operating with credentials or elevated system access that were not being monitored.

Most security leaders (86%) do not enforce access policies for AI identities. Only 17% govern at least half of their AI identities with the same scrutiny applied to human users, and just 5% of respondents felt prepared to handle a compromised AI agent.

Operational incidents stemming from AI agents are already occurring: 47% of CISOs have observed AI agents exhibiting unintended or unauthorized behavior, and 33% have experienced an AI-related security incident or near-miss in the past year.

The proliferation of shadow AI is partly due to its accessibility and productivity benefits, with adoption often happening in days. A separate report from Netskope noted that 47% of generative AI users still operate through personal accounts, sending company data to unmonitored services.

Traditional identity and access management (IAM) tools are designed primarily for human users and fit autonomous AI systems poorly. The survey found that 60% of organizations still rely on login-based authentication, such as session management or password policies, for AI identities. Autonomous agents instead call for API-first controls, including token lifecycle management and scope-limited authorization. Only 25% of organizations currently use AI-specific monitoring or controls.
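To make the contrast concrete, here is a minimal sketch of what scope-limited, short-lived credentials for an agent could look like, using only the Python standard library. All names here (issue_agent_token, SIGNING_KEY, the crm:read and erp:write scopes) are illustrative assumptions, not drawn from the report or any specific product:

```python
# Hedged sketch: API-first credentials for an AI agent identity,
# standard library only. All names and scope strings are assumptions.
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # assumption: fetched from a vault

def issue_agent_token(agent_id, scopes, ttl_s=900):
    """Mint a short-lived, scope-limited token: expiry is part of the credential."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def authorize(token, required_scope):
    """Reject forged, expired, or out-of-scope tokens before any API call."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        raise PermissionError("token expired")    # token lifecycle enforcement
    if required_scope not in claims["scopes"]:
        raise PermissionError("out of scope")     # scope-limited authorization
    return claims

token = issue_agent_token("agent-invoice-bot", ["crm:read"])
authorize(token, "crm:read")     # allowed
# authorize(token, "erp:write")  # raises PermissionError: out of scope
```

The point of contrast with session- or password-based login is that nothing here assumes an interactive user: the credential itself carries its own expiry and its narrow scope, so it can be rotated and audited like any other workload identity.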

Security experts note that AI agents pose a distinct challenge because they can act with delegated authority, chain actions across systems, and accumulate permissions as integrations expand. They emphasize that governing AI identities requires a paradigm shift, because these agents behave like neither human users nor traditional machine accounts.
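One way to picture the permission-accumulation problem is a delegation rule in which a downstream agent can only receive the intersection of its parent's scopes and what it requests, so a chain of agents can never widen its authority. This is a hedged illustration of the principle, not a mechanism described in the report:

```python
# Hedged illustration: delegation that can only narrow, never widen,
# an agent's scopes, so chained agents cannot accumulate permissions.
def delegate(parent_scopes, requested_scopes):
    """A downstream agent receives the intersection, never a superset."""
    return parent_scopes & requested_scopes

orchestrator = {"crm:read", "erp:read", "erp:write"}
sub_agent = delegate(orchestrator, {"erp:write", "hr:read"})
print(sub_agent)  # {'erp:write'} -- the hr:read request is dropped
```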

In response, CISOs are prioritizing identity as a critical enforcement layer. The survey indicates that 73% would invest in API and workload identity discovery if budget allowed, and 68% would focus on continuous monitoring and posture analytics. Microsoft and CyberArk have both underscored the importance of treating every AI agent as a “first-class identity” that requires rigorous governance, including clear ownership, lifecycle management, zero trust, and least privilege principles.
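As an illustration of what registering an agent as a "first-class identity" might look like in practice, the sketch below records an accountable owner, an explicit scope allow-list, and a hard expiry date. Every field name is an assumption for illustration, not a Microsoft or CyberArk schema:

```python
# Illustrative sketch of an AI agent registered as a first-class identity.
# Field names are assumptions, not any vendor's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str       # clear ownership: an accountable human team
    scopes: tuple    # least privilege: explicit allow-list only
    expires: date    # lifecycle: forced review or retirement date

registry = {
    "agent-invoice-bot": AgentIdentity(
        agent_id="agent-invoice-bot",
        owner="finance-ops@example.com",
        scopes=("crm:read",),
        expires=date(2026, 6, 30),
    ),
}
```

Under this design, zero trust follows by default: any request from an agent that is absent from the registry, past its expiry, or outside its scopes is denied.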

The EU AI Act's high-risk system requirements are scheduled to take full effect in 2026. This, combined with increasing global regulatory scrutiny, is narrowing the timeframe for organizations to address AI governance deficiencies.
