Published Date:
August 28, 2025

Why 95% of AI Implementation Fails, and What to Do Instead

Discover why 95% of enterprise AI projects fail, key pitfalls derailing success, and strategies leaders can adopt for measurable, scalable AI impact.
By Daan van Rossum, Founder & CEO, FlexOS

Earlier today, when I delivered a workshop on “What’s Best, and What’s Next for AI” to over 50 senior leaders from Nordic companies, the topic of ​AI implementation​ came up.

How is it that company-wide AI transformation is so hard to pull off?

A report from MIT’s Media Lab NANDA initiative, titled “The GenAI Divide: State of AI in Business 2025,” gives us a few valuable insights.

Drawing on data from 300 publicly disclosed AI initiatives, 150 structured interviews with leaders, and a survey of 350 employees, the paper made headlines with its crucial finding that 95% of enterprise generative-AI pilots deliver no measurable ROI.

But this is not a story about flawed technology. It’s one of organizational and operational integration issues: broken workflows, lack of feedback loops, and poor alignment with business needs.

MIT Paper: Why Most Projects Falter

So why do so many AI implementations fail? The paper found 5 key reasons:

1. The “Learning Gap”

Most deployments use generic tools like ChatGPT that don’t adapt or learn from context, so they don’t integrate into workflows or retain business- or role-specific knowledge.

Many of us expected AI to become smarter over time, but quickly learned that memory, if any, was very limited.

This is what the paper calls “the primary factor” that keeps organizations from winning with AI.

And this is in large part because ‘consumer AI’ is so stellar in its experience and quality of output that enterprise-sanctioned tools disappoint right away.

2. Build vs. Buy

Building internal AI solutions shows a much lower success rate (~33%), while purchasing from specialized vendors or partnerships succeeds ~67% of the time.

This is remarkable, and it highlights how crucial user experience is to real AI impact.

While we have seen remarkable success stories of in-house tools, like in the case of ​McKinsey’s Lilli​, the truth is that while data privacy and security matter, not everyone can pull off a successful AI platform.

3. Budget Misalignment

Over 70% of generative AI budgets flow into sales and marketing tools, with low ROI. Higher-impact areas, like back-office automation, logistics, and fraud detection, are underfunded.

I know, it’s tempting to buy into shiny tools, but this is a good reminder that AI excels at rote tasks in unsexy workflows, even though the impact there isn’t always easy to measure.

A VP of Procurement commented on the challenge of where to place AI bets: "If I buy a tool to help my team work faster, how do I quantify that impact? How do I justify it to my CEO when it won't directly move revenue or decrease measurable costs? I could argue it helps our scientists get their tools faster, but that's several degrees removed from bottom-line impact."

4. Shadow AI

Even when official adoption stalls, employees are using consumer AI tools informally, creating a “shadow AI economy” that bypasses governance but may actually be delivering productivity.

In fact, according to the paper, 90% of employees are using AI personally, but only 40% of firms have bought official licenses.

This is a tough one for companies to balance, as I shared with workshop attendees today.

While we can’t go about this transformation without any policy, being overly restrictive leads to people adopting their own tools.

And far worse than employees not using a company-approved AI tool is them emailing data between company and personal laptops, entering the very data the company is trying to protect into a free ChatGPT account.

5. Pilots Stuck in “Purgatory”

So now to that stat.

According to the researchers, most pilots don’t make it to production: only about 5% scale, with the rest remaining experimental and failing to make any real impact.

This is especially true for specialized AI tools built on top of the core LLMs like ChatGPT or Gemini. While companies are being pitched nonstop by vendors for tools that “will change everything,” the reality is that without user-backed adoption, positive impact will be limited.

A CIO quoted in the paper said: “We’ve seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

Notable Exceptions: Why the 5% Succeed

We could continue to focus on the 95%, but to reap the benefits of AI and continue to compete in a marketplace where more AI-centric companies challenge us for market share, a look at the successful 5% could be more helpful.

Success stories in the paper, frequently from nimble startups or focused teams, share these traits:

  • They focus on one well‑defined pain point, executed with precision and purpose. (We do this in our ​Executive AI Boot Camp​ from the very start.)
  • They partner with external specialists rather than build from scratch. Trying to do it all yourself can be counterproductive.
  • They empower line managers, not centralized AI labs, to pick and adapt tools to the workflow. I wrote about this in “​The AI Implementation Sandwich​.”
  • They deploy in structured, data‑rich domains like finance, logistics, and back-office functions where AI can plug into existing metrics.

In other words, success isn’t about building the flashiest AI; it’s about focusing on one pain point, executing relentlessly, and making AI invisible by embedding it where the work actually happens.

The Bottom Line: Driving Successful AI Adoption

Employees are quietly side‑stepping official AI platforms in favor of consumer AI tools that actually serve them.

No number of policies and mandates is going to change this.

If a personal ChatGPT account with all its unrestricted features turns me into a “Superworker,” I’d be hard-pressed to pretend tools like it don’t exist.

That friction between policy and practicality shows “shadow AI” is an unresolved alignment problem that will pester us for years to come, unless we get concrete about how to achieve AI success.

And that could start today.

Until next week,

- Daan


New Lead with AI Boot Camp in October: Lead the GPT-5 Era and Get Certified

The newly launched GPT-5 will change the way leaders work with AI significantly, so upskilling has never been more important. Build the skills now and turn GPT-5 + Agents into measurable results.

Here’s your shortcut to confidently lead with AI and be ready to drive AI adoption in your organization:

  • Build 5+ custom AI assistants for your role
  • 4 live, hands-on coaching sessions (two time options)
  • 15 bite-sized lessons, fully updated for GPT-5 + Agent mode
  • Verified AI Certification to signal the skills you’ve earned

Enroll now and learn alongside global leaders. Only 40 seats per cohort, so you get real one-on-one coaching and personalized solutions.

Enroll Now for the October 3 Cohort


AI Displaces Young Workers (Stanford), How to AI-Proof Your Workforce (Deloitte), Michelin’s AI Implementation Playbook

I read dozens of AI newsletters weekly, so you don’t have to. Here are the top 3 insights worth your attention:

#1 New Evidence of AI Displacing Young Workers (Stanford)

​A new Stanford study​ using ADP payroll data reveals a 13% decline in employment for 22–25-year-olds in AI-exposed roles, such as software development and customer service, since the launch of ChatGPT in late 2022. However, the employment of older workers in the same roles remains stable or even increases.

In an ​interview with Derek Thompson​, the researchers explain that younger workers are more vulnerable because their tasks overlap with what AI can automate, while older employees bring tacit knowledge and contextual expertise that AI struggles to replicate. The data also confirms that AI-augmented work, including roles that demand strategic thinking, complex collaboration, or nuanced judgment, is far less susceptible to automation.

>> Read ​the full research here​ and ​the researchers’ insights here​.

#2 How to AI-Proof Your Workforce (Deloitte)

A new Deloitte global survey of over 11,000 workers reveals how AI and demographic shifts are transforming talent dynamics. With aging workforces and fewer young entrants, organizations risk widening skill gaps and losing institutional knowledge.

Four key actions organizations can take to prepare for this workforce shift include blending human-AI collaboration, keeping humans “on the loop,” using AI as an upskilling engine (especially for early-career talent), and capturing expertise from retiring employees.

Together, they outline a practical roadmap for building a more resilient, adaptive workforce.

​>> Dive into the research report here.​

#3 Michelin’s AI Implementation Playbook

In ​this case study​, Michelin shows how a 136-year-old manufacturer is driving €50M+ in annual ROI with over 200 AI use cases.

They build in-house tools like the patented IRIS system, which automates parts of tire inspection but leaves final calls to trained inspectors. They pilot AI broadly but scale selectively, expanding only after testing for measurable value.

It’s a reminder that real impact comes from tight feedback loops and teams trained to let AI augment their work.

​>> Read the full case study here.


Prompt of the Week

A good prompt makes all the difference, even when you're just using a core LLM.

If you've ever asked ChatGPT to “write an article” and ended up with something dry or generic, you're not alone.

This prompt from Neil Patel makes AI a more effective writing assistant by giving it clearer writing guidelines, focus areas, and constraints.

Write Articles People Actually Want to Read

I want to write an article about [insert topic] that includes stats and cite your sources. And use storytelling in the introductory paragraph.

The article should be tailored to [insert your ideal customer].

The article should focus on [what you want to talk about] instead of [what you don’t want to talk about].

Please mention [insert your company or product name] in the article and how we can help [insert your ideal customer] with [insert the problem your product or service solves]. But please don't mention [insert your company or product name] more than twice.

And wrap up the article with a conclusion and end the last sentence in the article with a question.

👉 Try it, tweak it, and save it for future use. If this prompt is helpful (or if you made it better), share it with us here!

P.S.: In ​our 3-week Boot Camp​, we help leaders master crafting ‘SuperPrompts’ and 10x their AI usage through practical exercises, even with just a core AI platform. We can help you do the same. ​Explore the program here and claim our promotion​.

Exclusive for PRO Members only: Mastering LinkedIn Writing with AI

Lead with AI PRO is excited to welcome Ruby Nguyen, CEO at Curieous (World’s Top 20 Innovative EdTech, 18k+ LinkedIn followers, 5M+ views), to lead our next Masterclass on AI for LinkedIn writing on September 11th.

Your LinkedIn presence can win you investors, clients, and top talent - if you know how to use it with purpose. In this Masterclass, Ruby will show you how to turn AI into your secret weapon for influence. You’ll walk away with:

  • A personal branding strategy that sets you apart
  • How to build your own “big idea” library for consistent branding
  • Content frameworks & AI prompt systems, with a live demo
  • Best practices for engagement and credibility - including exclusive insider insights straight from LinkedIn HQ

To ensure global access, this session will run twice:

🌏 APAC / EU timezone: 4 pm Singapore / 9 am London / 10 am Amsterdam

🌍 EU / AM timezone: 4 pm London / 5 pm Amsterdam / 8 am Pacific Time / 11 am Eastern Time

Want to join this exclusive Masterclass for PRO members?

Then it's time to upgrade - now two weeks for free:

2-WEEK FREE TRIAL


If you made it this far, reply and tell me what you'd love AI to take over in your daily workflow.

Also, please forward this newsletter to a colleague and ask them to subscribe.

If you have any other questions or feedback, just reply here or inbox me.

See you next week,

Daan van Rossum​

Host, Lead with AI
