Headline-grabbing incidents like Samsung’s ChatGPT data leak make the risk of AI use outside IT’s authorization and control (a.k.a. shadow AI) clear. Most IT teams recognize that a high-profile incident can have serious repercussions. However, the risk of shadow AI goes well beyond a single incident. In fact, the recent Komprise IT Survey indicates that 79% of organizations have experienced negative outcomes from sending corporate data to AI. 

Unfortunately, breaches and compliance issues aren’t the only shadow AI risks businesses need to account for. The compute, storage, and network performance impact of unauthorized AI tools is an emerging downside. Simply put: shadow AI is quietly straining infrastructure and hurting bottom lines. 

Organizations that want to see their AI initiatives succeed face the challenge of balancing tradeoffs between shadow AI risk mitigation, business agility, and innovation. In this article, we’ll dive into the hidden costs of shadow AI as well as shadow AI risks and solutions to help teams build an AI strategy that pragmatically addresses the shadow AI challenge.

Try Auvik SaaS Management now

Take the first step in reclaiming control over your SaaS environment.

What’s driving the rise of shadow AI?

The surge in AI accessibility and capabilities has created a world where users are rapidly adopting AI tools. 

Unfortunately for IT, the overall increase in adoption has led to a significant rise in shadow AI. The Komprise report indicated that the vast majority (90%) of IT leaders are concerned about privacy and security issues related to shadow AI. 

Specific drivers of the increase in shadow AI include: 

  • Free and freemium AI tools: Users can leverage generative AI tools like ChatGPT and Claude directly from a browser without payment. No payment means little to no friction in accessing AI tools, even if they’re not authorized by IT. 
  • Improved AI performance: Stanford’s 2025 AI Index report indicates that AI tools continue to improve their performance on key benchmarks, and large language models (LLMs) even outperform humans in some programming tasks. As AI quality improves, the incentive to use shadow AI tools increases.  
  • AI everywhere: Another interesting insight from Stanford’s index is that AI is being embedded in many other systems. Even the highly regulated healthcare industry is seeing a rapid rise in AI. In 2023, the FDA approved 223 AI-enabled medical devices, an increase of nearly 1,400% from 2015. When AI is everywhere, shadow AI can easily emerge. 
  • Lack of clear AI use policies: If employees don’t know what is or isn’t allowed by IT, they can inadvertently make decisions that introduce shadow AI. Organizational policies that aren’t explicit enough to address AI use cases, or a lack of user education on those policies, can create ambiguity that, in turn, leads to an increase in shadow AI adoption. 

While there is certainly upside in some shadow AI use cases, the unmanaged use of AI can come with significant business risks and costs. Let’s examine how those costs can arise. 

The hidden costs of shadow AI 

The sharp increase in shadow AI adoption has many IT and security teams scrambling to catch up. To truly manage shadow AI, it’s essential to fully scope the associated costs. With that in mind, let’s review five costs of shadow AI that organizations should consider. 

Infrastructure costs

Running AI models on internal infrastructure can be very expensive. The exact costs will vary greatly depending on the use case, but ProjectPro provides some reasonable back-of-the-napkin estimates on AI infrastructure costs that we’ve summarized in the table below:

Use case | Infrastructure cost (rough estimate)
Small-scale AI projects run on a cloud platform | $500–$2,000 per month
Large-scale AI projects run on a cloud platform, including training AI models | $10,000–$100,000 per month
On-premises hardware for a system with GPUs or TPUs for AI models | $10,000–$100,000

While it might be hard for a five-figure hardware purchase to fly under IT’s radar, cloud costs can often go undetected until it’s too late. As a result, compute-intensive shadow AI projects can meaningfully increase cloud infrastructure costs. Attributing infrastructure costs to shadow AI gets even trickier when AI tools run on existing infrastructure and drive up compute (e.g., CPU, GPU, or TPU) utilization. 

Network bottlenecks 

Shadow AI tools can consume significant amounts of bandwidth. This is particularly true for workflows that involve downloading or transmitting large files, and for tools that multiple users access simultaneously. Like other high-throughput or bursty workloads, shadow AI can cause network congestion, increased latency and jitter, and a poor end-user experience. The financial impact of these side effects shows up as reduced productivity and increased network costs (e.g., network egress charges). 
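To put the egress point in rough numbers, here is a back-of-the-napkin estimate. The per-GB rate, user count, and transfer volume below are all illustrative assumptions, not any provider’s actual pricing or a measured usage pattern:

```python
# Illustrative egress cost estimate; all figures are assumptions.
EGRESS_RATE_PER_GB = 0.09           # assumed $/GB, a public-cloud ballpark
users = 50                          # hypothetical number of shadow AI users
gb_per_user_per_day = 2             # hypothetical large-file AI workflow
working_days = 22                   # roughly one month

monthly_gb = users * gb_per_user_per_day * working_days
monthly_cost = monthly_gb * EGRESS_RATE_PER_GB
print(f"~{monthly_gb} GB/month -> ${monthly_cost:,.2f} in egress charges")
```

Even under these modest assumptions, unmanaged transfers add a recurring line item that no one budgeted for, and the same arithmetic scales linearly with users and file sizes.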

Data security and privacy risk

Fines from compliance breaches and reputational damage from data leaks are the textbook shadow AI risks, and the concern shows: 90% of the IT leaders in the Komprise survey were either “extremely worried” (46%) or “moderately worried” (44%) about privacy and security risks related to shadow AI. These concerns are well founded, as IBM reported the average cost of a data breach at $4.45 million. 

Wasted spending

Many organizations already have authorized AI initiatives in place. Shadow AI can duplicate these efforts, leading to staff wasting time and resources on projects that could be addressed through formal channels. Organizations also pay the opportunity cost of missed collaboration between stakeholders in different AI projects focused on the same business use case. 

Reduced quality of work

Vibe coding and incidents like Replit wiping out a production database are prime examples of the quality risks that come with generative AI tools. But it’s not just code generation that can introduce quality issues. Generative AI tools can hallucinate and provide inaccurate information in any use case. In fact, “false or inaccurate results from queries” was one of the most common negative outcomes of generative AI use that IT leaders in the Komprise survey identified. 

When AI doesn’t produce work that meets quality standards, humans are tasked with the cleanup. That increased manual effort comes with a cost. A more insidious risk is the reputational damage that can occur if your organization becomes associated with “AI slop” and misinformation. 

Why organizations struggle to detect and control shadow AI

Many of the same problems that make SaaS-based shadow IT difficult to detect and address directly apply to shadow AI. Most AI tools are accessed via the browser rather than directly installed on a device, which makes them harder to detect than traditional desktop software. Similarly, freemium models that allow users to sign up with personal email accounts mean that identity-based monitoring focused on business accounts can completely miss AI usage. 

Further, shadow AI use can even be embedded into other products. With so many different devices and services integrating AI as a feature, users can interact with unauthorized AI on a site or service that isn’t primarily an AI platform. For example, grammar checkers, design tools, and CRM platforms have all added AI features over the past few years. Unfortunately, IT doesn’t always have visibility into exactly how data is handled in those tools.

How to mitigate shadow AI risk without slowing innovation 

So, how can IT and security teams address shadow AI risk and control costs without bringing productivity and innovation to a screeching halt? It requires a combination of strategy, culture, and security controls that create the right incentives for end users while maintaining tight control over sensitive data. In the next sections, we’ll explore four techniques that can help teams understand how to identify and mitigate shadow AI risks in organizations of all sizes. 

Develop a data management strategy for AI

The data security risk associated with AI projects is undeniable. The challenge is mitigating that risk while still enabling AI tools to deliver business value. A sound data management strategy for AI needs to address:

  • Data loss prevention (DLP) implementations to reduce the risk of data leakage 
  • Data classification using techniques such as data labeling to categorize data and flag sensitive information for proper handling and access controls 
  • Data storage that is cost-effective, secure, and supportive of AI capabilities (e.g., ensuring the right data is stored where AI tools reside) 
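The classification and DLP points above can be sketched in code. The snippet below is a minimal, illustrative pre-flight check that flags common sensitive-data patterns before text leaves the organization; the pattern list and the `screen_prompt` helper are assumptions for illustration, not the API of any particular DLP product:

```python
import re

# Illustrative patterns only; real DLP engines use far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text):
    """Return labels of sensitive patterns found in an outbound prompt."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

hits = screen_prompt("Contact jane.doe@example.com, SSN 123-45-6789")
# A non-empty result means the text should be blocked or redacted
# before it reaches an external AI tool.
```

The same classification labels can then drive storage and access-control decisions, so sensitive data is handled consistently whether or not an AI tool is in the loop.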

Effective data management is crucial in reducing shadow AI risk, as the right technical controls can minimize the likelihood of a sensitive data leak, even if an internal user inadvertently uses an unauthorized AI tool. 

Train users to use AI securely and effectively 

Security education and training have made significant strides in helping organizations mitigate the risks of social engineering and shadow IT. Teams can apply similar strategies to reduce the risk of shadow AI negatively impacting the bottom line. 

Fundamentally, shadow AI occurs because your users are implementing AI tools without IT oversight. Educating users on how to follow formal channels to address their shadow AI use cases securely reduces the likelihood that they’ll resort to unauthorized tools. 

Here are some tips on how you can train users to avoid shadow AI:

  • Explicitly define what is and isn’t allowed: Clarity is key when it comes to acceptable AI use. With so many types of tools and potential shadow AI use cases, employees can easily get lost. Make sure your organization has a well-defined source of truth, like an AI acceptable use policy, that removes ambiguity. 
  • Integrate AI-related topics into existing security education programs: Many organizations already have security training programs. Incorporate acceptable AI use training into these programs and track completion to ensure the message reaches your users. 
  • Be collaborative: If users think they’ll be punished or penalized for bringing their shadow AI use to IT’s attention, they’ll go to greater lengths to hide it. Build a culture of collaboration that encourages users to engage with IT on AI projects and manage AI risk. As part of your training, let users know they’re encouraged to bring their shadow AI use cases to IT. 

Invest in tools that detect and monitor shadow AI

Managing shadow AI begins with detecting it. That’s why so many IT leaders are exploring advanced tools for detecting shadow AI security risks. While user collaboration and manual techniques, such as surveys, can uncover some unauthorized AI use, automated detection tools are essential for a comprehensive picture of the state of shadow AI in an environment. 

Because AI tools are frequently browser-based, SaaS discovery tools can be particularly useful in uncovering shadow AI. SaaS management platforms also help ensure you have ongoing monitoring, not just point-in-time discovery, of unauthorized AI use. The right platform can additionally uncover risky usage patterns such as shared accounts or personal emails used instead of corporate identities to access AI tools. 
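A simplified version of browser-based discovery is matching web proxy or DNS logs against a watchlist of known AI service domains. The sketch below illustrates the idea; the `user url` log format, the `find_ai_access` helper, and the domain list are assumptions for illustration (real discovery tools maintain far larger, continuously updated catalogs):

```python
from urllib.parse import urlparse

# Hypothetical watchlist; real tools track thousands of AI services.
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def find_ai_access(proxy_log_lines):
    """Yield (user, domain) pairs for requests to known AI services.

    Assumes a simple 'user url' log line format for illustration.
    """
    for line in proxy_log_lines:
        user, _, url = line.partition(" ")
        host = urlparse(url).hostname or ""
        if host in AI_DOMAINS:
            yield user, host

log = ["alice https://chatgpt.com/c/abc", "bob https://intranet.example.com/home"]
print(list(find_ai_access(log)))  # [('alice', 'chatgpt.com')]
```

The hard part in practice isn’t the matching but keeping the catalog current, which is why dedicated SaaS management tooling tends to outperform homegrown scripts here.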

Monitor infrastructure and network performance

Endpoint monitoring and network monitoring can help IT uncover abnormal resource utilization patterns and spikes that could be related to shadow AI. For example, some potential indicators of unauthorized use of AI tools include: 

  • Bandwidth utilization spikes
  • Unexpected network flows
  • Increased resource (CPU, memory, etc.) utilization on endpoints 

By properly monitoring and alerting for infrastructure and network performance issues, IT can detect resource-intensive shadow AI use before it spirals out of control. 
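One common way to turn those indicators into alerts is to compare each new reading against a trailing baseline and flag outliers. The sketch below applies a simple z-score check to bandwidth samples; the window size, threshold, and `spike_alerts` helper are illustrative assumptions rather than how any particular monitoring product works:

```python
from statistics import mean, stdev

def spike_alerts(samples, window=5, threshold=3.0):
    """Return indexes of samples far above the trailing baseline.

    `samples` are utilization readings (e.g., Mbps) at a fixed interval;
    the window and z-score threshold are illustrative defaults.
    """
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (samples[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

traffic = [40, 42, 41, 39, 43, 41, 40, 400, 42, 41]  # steady ~40 Mbps, one burst
print(spike_alerts(traffic))  # [7]
```

The same pattern applies to CPU or memory utilization on endpoints; in production you’d feed it real counters from your monitoring stack and tune the threshold to cut false positives.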

How Auvik helps you take control of shadow AI 

Auvik SaaS Management offers robust SaaS discovery and risk-detection capabilities, enabling IT to identify unauthorized AI use based on real-world activity. With Auvik, you can:

  • Build a SaaS inventory that includes AI apps like ChatGPT
  • Discover exactly which apps users are accessing and detect risky behaviors like password sharing and using personal accounts on business devices
  • Detect shadow AI and shadow IT, and devise a strategy to bring AI tools under management or offboard them

If you’d like to see how we can help you mitigate shadow AI risk and improve your SaaS visibility, sign up for a free 14-day trial today.
