FlexOS | AI in HR Today with Anthony Onesto
Issue #50

The Invisible Co-Worker: How ‘Shadow AI’ Is Changing HR Rules Without Your Permission

Shadow AI is exploding inside companies—driving productivity, risk, and a trust crisis HR can no longer ignore.

It's time to address the big issue hiding in our technology systems.

For years, we’ve discussed 'Shadow IT.'

This happens when employees, frustrated by slow company processes, adopt unapproved apps and spreadsheets to get work done.

It was inconvenient, but at least it was predictable.

Now, we are dealing with something new: “Shadow AI.”

Unlike a spreadsheet that only does what you ask, Shadow AI can think, learn, and, most concerning, make things up.

A recent WalkMe (an SAP company) "AI in the Workplace" survey found that 78% of employees use AI tools at work without their employer's approval.

Think about that: nearly 8 out of 10 people in your company are ignoring risk protocols.

They aren’t doing this to cause harm, but simply to keep up with fast-changing technology.

The "Productivity Paradox"

Why is this happening? The main reason is friction.

We have built complex, secure, and frankly, often clunky HCM systems.

Meanwhile, employees have access to slick, consumer-grade AI tools on their phones that offer instant gratification.

This is known as the Productivity Paradox.

About 80% of employees think AI helps them work faster, but almost 60% say it sometimes takes longer to learn the tool than to do the task by hand.

People want to feel more efficient, even if it doesn’t actually save time.

They avoid official processes because those processes feel outdated.

The Danger and Risk

This is where HR leaders need to focus.

Shadow IT follows deterministic rules, but Shadow AI operates on probabilities and can be unpredictable.

When an employee feeds sensitive data into a Large Language Model (LLM), that model doesn't just process it; depending on the provider's terms, it may retain it for training.

That is a massive potential security leak.

An even bigger risk is not just data leaking out, but also incorrect data coming in.

This is called Generative Risk.

Imagine a manager needs to write a high-stakes performance review.

They are pressed for time, so they feed a few bullet points into an unapproved, general-purpose AI tool.

The AI, trying to be helpful, "hallucinates."

It invents a meeting that never happened or cites a behavioral issue that doesn't exist, because it draws on patterns from every organization rather than data specific to yours.

That false information could end up in the company's official records.

This could lead to legal trouble based on something the AI made up.

The Trust Crisis

One of the most surprising things about Shadow AI is what it reveals about leadership.

We are seeing a Trust Paradox.

In situations where employees anticipate bias from a human manager, say, a complex promotion decision or a conflict resolution, they turn to Shadow AI.

They see the AI as more fair and objective than their manager.

If your employees trust an algorithm (which we know has its own biases) more than they trust your leadership team, you don't just have a technology problem.

You have a culture problem.

The Cost of Ignoring It

IBM’s Cost of a Data Breach Report (2024/2025) shows that the financial implications are real.

Organizations with high levels of Shadow AI usage incur data breach costs that are, on average, $670,000 higher than those without it.

Why? Because you can’t fix a breach in a system you don’t even know exists.

Also, regulations like the EU AI Act classify AI systems used in HR as 'high risk.'

If Shadow AI is not controlled and AI is not enabled through trusted systems, companies could face fines of up to 7% of their global annual revenue.

That could threaten the business.

Stop Banning, Start Enabling

So, what do we do? Do we ban it?

No, banning is not the answer. If you try to ban it, employees will just find new ways to use it secretly.

The best solution is to manage and guide the use of AI, and to look for systems that build in cutting-edge AI thoughtfully.

These are typically trusted systems, with AI and trust built into their platforms.

Give your teams tools that work for them, not tools, existing or new, that rely on “jazz hands AI” as their core go-to-market.

  • Identify trusted tools that use bleeding-edge AI, with trust at the center and with context grounded in your organization's data, not the world's.
  • Set up a clear labeling system for tools: green means approved, yellow means use with caution, and red means banned (see the sketch after this list).
  • Encourage a culture of responsibility instead of just demanding compliance. Help employees understand why these risks matter.
  • Use technology to detect Shadow AI in use and help manage the risk.
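To make the traffic-light idea concrete, here is a minimal sketch of what a tool registry could look like. Everything in it is hypothetical: the tool names, the statuses, and the check_tool helper are illustrations, not any vendor's real API.

```python
# Minimal sketch of a traffic-light AI tool registry (all entries hypothetical).
# Green = approved, yellow = use with caution, red = banned.

from enum import Enum


class Status(Enum):
    GREEN = "approved"
    YELLOW = "use with caution"
    RED = "banned"


# Hypothetical registry; a real one would be maintained by IT/HR governance.
TOOL_REGISTRY = {
    "company-copilot": Status.GREEN,    # enterprise tool with a data agreement
    "public-chatbot": Status.YELLOW,    # fine for non-sensitive drafting only
    "unvetted-resume-ai": Status.RED,   # touches candidate data; banned
}


def check_tool(name: str) -> Status:
    """Look up a tool; anything unknown defaults to yellow pending review."""
    return TOOL_REGISTRY.get(name, Status.YELLOW)


if __name__ == "__main__":
    for tool in ["company-copilot", "some-new-ai-app"]:
        print(f"{tool}: {check_tool(tool).value}")
```

In practice, a registry like this would live inside your IT governance platform and be surfaced to employees wherever they choose tools; the point is that the default for an unknown tool is caution, not silence.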

Shadow AI is not the enemy, but it could cause damage.

It shows that employees are moving faster than company systems.

HR should stop trying to catch up and instead create clear guidelines for the future of work.

HR should also vet existing systems and look for folks who have been around for a while and are building cutting-edge AI tools. They are out there (hint, hint).

What do you think? Does your team trust AI more than their managers?