Companies are losing control over how their employees adopt artificial intelligence. The phenomenon known as shadow AI—the use of unauthorized consumer AI tools for work purposes—is now affecting an estimated 71 percent of workers in the United Kingdom.
This unauthorized use has spiraled so far beyond company IT governance that experts now describe it as worse than traditional shadow IT, the older problem of employees using unapproved software. A 2024 Microsoft report revealed that nearly 80 percent of workers who use AI rely on their own personal tools rather than company-approved platforms.
One biotech researcher, Gregg Bayes-Brown, used a personal Google account to access NotebookLM despite fully understanding the company AI policies that he himself had helped develop. His reasoning was purely practical: the tool compressed 150 hours of work into just 30 minutes. “The chance of you being eclipsed by a Chinese peer is a massive risk,” he said, weighing that concern against potential data leaks.
Several well-known companies have already experienced shadow AI disasters. Samsung banned ChatGPT on company devices after engineers uploaded proprietary source code. Amazon grew wary when ChatGPT responses began reproducing internal company data word for word.
Leslie Nielsen, Chief Information Security Officer at Mimecast, called shadow AI “death by a thousand cuts.” A single employee uploading financial documents to an unapproved AI tool could expose sensitive information if someone outside the company later uses the right prompts to make the model regurgitate that confidential data.
A February survey by Protiviti found that half of company leaders do not know the extent of employee AI use. Only 40 percent had formal AI governance policies in place. Yet 90 percent of IT leaders at large companies plan to increase their AI tool budgets this year.
The paradox is clear: 80 percent of IT leaders surveyed by Freshworks believe that unsanctioned AI users are more productive, yet 86 percent witnessed at least one negative incident resulting from unauthorized AI use, including compliance violations, security breaches, and data leaks.
Nicole Jiang, co-founder of Fable Security, stated that shadow AI is “worse” than shadow IT because “companies are actually allowing and pushing for more AI adoption at a rate we’ve never seen before.”