Here's something that troubles me: Companies are pouring $644 billion into AI investments this year. Meanwhile, you've probably seen that headline floating around: "95% of enterprise AI initiatives fail."

Is that number a bit clickbaity? Maybe. The methodology varies depending on who's counting and what "failure" means. Some pilots never launch. Some launch and get abandoned. Some technically work but nobody uses them.

But here's what's not clickbait: 42% of companies report zero ROI from their AI deployments. That number's from hard financial data, not headlines. And when these projects crash and burn, everyone points fingers at the technology. Wrong API. Insufficient data. Integration issues. Technical debt.

That's like blaming the car when you never taught anyone how to drive.

More importantly, we might be racing toward something much bigger than failed pilots.

The AI bubble nobody wants to talk about

Remember "The Big Short"? Michael Burry - the guy who predicted the 2008 housing crisis when everyone thought he was crazy - is now betting against AI hype.

Let that sink in. The man who spotted the subprime mortgage bubble, who was right when literally every major financial institution was wrong, is looking at AI and seeing the same patterns.

We're seeing all the classic bubble signs. Massive capital inflows with little regard for fundamentals. Valuations disconnected from actual business results. Everyone afraid of missing out, so they pile in without a clear plan. And most tellingly: very few people can explain how they'll actually make money from this.

Walk into most boardrooms and ask "How will this AI investment generate returns?" and you'll get a lot of hand-waving about "transformation" and "staying competitive" and "future-proofing." Very little about actual revenue growth, cost reduction, or competitive moats. That should terrify every CFO.

The AI market has ballooned to unprecedented levels on pure potential. But at some point (probably soon) investors will want to see returns. Real returns. Not "we're experimenting" or "building for the future" or "AI-ready infrastructure." They'll want revenue growth, cost savings, competitive advantages: tangible business outcomes that show up in financial statements.

If organizations can't demonstrate real ROI soon, we're looking at a correction that'll make the dotcom crash look gentle. Billions in write-offs. Mass layoffs in the AI sector. Budget freezes across tech departments. And worst of all, a setback that could delay genuinely valuable AI adoption by years because everyone will be gun-shy.

The clock is ticking. And here's the uncomfortable truth: most organizations can't capture ROI from AI because the people making decisions - the executives, the board members, the budget holders - don't actually understand what they bought. It's purchased on the premise of headcount reduction (labor), implemented like a tool (software), and measured like… well, who knows how.

The real problem is staring at us

I was talking to a senior executive last month who was trying to pinpoint why their AI rollout flopped. They'd invested a lot into the best tools money could buy. Every team had access to ChatGPT Enterprise, Claude, and a suite of specialized AI applications.

I asked the question: "Can you show me how one of your teams uses these tools to solve a problem in your business?"

A long pause.

"Well, I don't personally use them. That's what the team looks at, I can get you in touch."

Therein lies the small crack in the windshield that ends up shattering it.

It's a fair and common response, but it reveals the delegation of duty that happens when organizations trial AI. Leaders are making multimillion-dollar bets on technology nobody fully understands, then delegating the "AI problem" to team members who have even less context about strategic priorities. IT gets handed the training, and BizOps gets handed the rest.

This executive was far from stupid. He was just operating from a mental model that worked for the last 40 years: Buy enterprise software, have IT implement it, train end users, measure adoption. That's how you rolled out CRM systems, ERPs, and every other business tool.

But AI isn't like those tools. It's not a system that runs in the background. It's not a process you can flowchart once and forget. Well, it might be right this moment, but remember the Big Short problem I mentioned above? Those economics rely on enormous profits for Big AI firms, which in turn rely on enormous implementations, and therefore layoffs.

The sad truth is that we're being pushed to replace a LOT of humans with AI right now, on the bet that the technology can actually replace them, a bet for which there is no evidence yet.

So we're in a moment: a fundamentally different way of working, one that leaders need to understand firsthand. Not just conceptually, but practically.

When executives don't use AI themselves, they can't ask the right questions. They can't spot when vendors are selling snake oil. They can't recognize which use cases will actually drive value versus which ones are just shiny demos. And they definitely can't lead their organizations through the cultural changes required to make this work.

The companies avoiding the bubble aren't the ones with the best AI tools. They're the ones where leadership actually understands what they bought, and what it’s going to become.

Why leadership literacy is the unlock

Here's the pattern I see in organizations that are actually getting ROI from AI:

The CEO or C-suite executives use AI daily. Not performatively. Not for the occasional demo. They're in there writing Master Prompts, asking their AI to structure their thoughts, debugging outputs, and figuring out what works and what doesn't. They build intuition about where AI excels and where it falls flat.

Then (and only then) they can start to have informed conversations about AI strategy that are not grounded in delegation or theory. They can evaluate vendor pitches with real BS detectors. They can spot when middle management is overselling capabilities or underselling risks. They can make smart build-vs-buy decisions because they've actually used the technology in question.

Most importantly, they can lead by example. When the CEO shows up to meetings with AI-generated analysis they've personally reviewed and refined, it sends a signal: This is how we work now. It's not optional. It's not just for the tech team. It's core to how we compete.

Compare that to organizations where leadership treats AI as someone else's problem. Strategy gets set in boardrooms by people who've never opened ChatGPT. Budgets get approved based on vendor presentations and analyst reports, not hands-on understanding. Implementation gets delegated down the chain until it lands on people with the least context about business priorities.

Those organizations aren't failing because of bad technology. They're failing because of an abdication of leadership, and a refusal to accept that what got you here will get you no further. That's a sentence worth remembering when you realize just how much of your professional life and ways of working you need to rip up to succeed in the era of AI.

And as the bubble pressure builds, that abdication gets expensive fast. Because when you're burning millions on AI tools that nobody can use effectively, the market eventually notices.

It's not just leaders.

Only 39% of workers globally have received any company training on AI. Three-quarters of knowledge workers are using AI tools right now, but 70% have zero workplace guidance on how to use them properly.

Think about that for a second. We hand people incredibly powerful technology, then act surprised when things go sideways.

Shadow AI: the issue hiding in plain sight

Here's where it gets really interesting (and a bit scary).

Microsoft surveyed 31,000 workers across 31 countries and found that 78% bring their own AI tools to work without asking permission. Most of that ChatGPT usage happening at your company right now? It's through personal accounts (74% of it). Same story with Google Gemini (94% personal accounts) and basically every other AI platform.

A study by Cyberhaven found that 38% of employees are sharing sensitive company information with AI tools. Not because they're malicious. Because nobody taught them what's safe and what isn't.

The average organization now has 65-75 different GenAI apps in use. Between 80-90% of them? Completely unmanaged.

McKinsey's research shows that executives underestimate AI usage in their companies by 300%. Leaders think 4% of employees are using AI extensively. The real number is 13%.

If you're a leader reading this and thinking "that's not us," I'd gently suggest you might want to ask pointier questions.

The math that needs to scare leaders

Let's talk money: because that's what the market will eventually care about.

IDC projects $5.5 trillion in global losses by 2026 because of IT skills shortages. AI skills represent the biggest gap. McKinsey estimates another $4.4 trillion in productivity growth sitting on the table, unrealized, because workforces don't have the skills to capture it.

That's $10 trillion in value that's either being lost or left uncaptured. And here's the uncomfortable part: A lot of that $644 billion being invested this year is going to end up in the "lost" column, not the "captured" column.
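The arithmetic behind that figure is simple enough to check. A minimal sketch, using only the projections cited above (values in trillions of US dollars):

```python
# Back-of-envelope check on the headline figures cited above.
# All values are projections, in trillions of US dollars.
idc_projected_losses = 5.5       # IDC: losses from IT skills shortages by 2026
mckinsey_unrealized_gain = 4.4   # McKinsey: unrealized productivity growth

total_at_stake = idc_projected_losses + mckinsey_unrealized_gain
print(f"Value lost or left uncaptured: ~${total_at_stake:.1f} trillion")
```

That sums to $9.9 trillion, which rounds to the "$10 trillion" headline figure.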

Meanwhile, workers with AI skills are commanding salary premiums of 25-56%—that's $18,000 to $37,000 more per year. Top AI engineers are pulling $160,000-$300,000 annually, with elite researchers getting over $1 million in total compensation.

The hiring market reflects this scarcity too. ManpowerGroup found that 74% of employers globally struggle to find skilled talent, with AI and data skills at the top of that list. AI job growth is running 12.5 times faster than the overall job market.

So a lot of companies are stuck in a brutal cycle, one I heard firsthand two months ago and captured perfectly in one sentence:

“How long will this AI training take? We’re very busy.”

- At least 50% of the discussions I've had about AI training feature this quote in some form.

The irony was not lost on me: the only realistic way for this business to save time is by adopting AI better.

In this case, the leader can't implement AI effectively because they lack the time. But they also can't hire or retain those workers because everyone's competing for the same tiny talent pool. Meanwhile, they're hemorrhaging budget on tools that mostly sit unused or misused.

This is exactly how bubbles pop. When the money dries up and investors demand returns, organizations will suddenly discover they spent millions learning absolutely nothing.

What happens when you get people to the start line

Now for the good news: when organizations do this right, the results are dramatic.

Nielsen Norman Group analyzed three major studies and found average productivity increases of 66% among AI-literate workers. Customer support saw 13.8% improvement. Business writing got 59% faster. Software developers completed 126% more projects.

Federal Reserve Bank research shows AI users save an average of 5.4% of total work hours (about 2.2 hours per week). During the hours when they're actively using AI tools, workers report 33% higher productivity.

GitHub Copilot studies demonstrate 55.8% faster task completion for developers, with an 84% increase in successful builds and 15% higher pull request merge rates.

But here's what I find most fascinating: the productivity gains aren't distributed evenly. Junior knowledge workers show 35% productivity increases versus just a few percentage points for seniors. AI is democratizing capability by helping less experienced workers more dramatically.

New employees reach expert-level productivity in 2 months with AI assistance versus 8 months without. That's a 4x acceleration in learning curves.
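Those two numbers are easy to sanity-check. A quick sketch, assuming a standard 40-hour work week (the week length is my assumption; the studies report percentages):

```python
# Sanity-check the time-savings and ramp-up numbers cited above.
# The 40-hour work week is an assumption, not taken from the studies.
work_week_hours = 40
savings_rate = 0.054             # Fed research: 5.4% of total work hours saved

hours_saved_per_week = work_week_hours * savings_rate
print(f"Hours saved per week: ~{hours_saved_per_week:.1f}")

months_with_ai = 2               # time to expert-level productivity with AI
months_without_ai = 8            # and without AI assistance
print(f"Learning-curve acceleration: {months_without_ai / months_with_ai:.0f}x")
```

On a 40-hour week, 5.4% works out to about 2.2 hours, matching the cited figure; 8 months down to 2 months is the 4x acceleration.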

And it's not just speed. Quality improves too. Customer support success rates go up 1.3%. Business writing quality scores increase 18%. Companies with comprehensive AI training programs report 25% increases in sales performance.

Every major consulting firm sees the same thing

I've read through recent research from McKinsey, BCG, Gartner, Deloitte, PwC, and Accenture. They all converge on the same conclusion: we have a massive workforce readiness crisis.

But here's what they're too polite to say directly: It's a leadership readiness crisis first.

BCG identifies a "silicon ceiling" where 75% of leaders use GenAI several times weekly versus only 51% of frontline workers. Sounds good until you realize that many of those leaders are checking boxes, not building real capability. They're doing the AI equivalent of learning to send an email and calling themselves "digital natives."

McKinsey's survey of 56,000 workers shows 94% report AI familiarity, yet only 13% use AI for 30% or more of their daily work. 47% of workers expect to reach that usage level within a year, while only 20% of leaders believe this is realistic. Business leaders are living in a different reality than their workforce, partly because of the nature of the tasks GenAI helps people do and the age brackets of the people doing them, and their strategic planning reflects that disconnect.

Gartner found that 45% of high-maturity organizations keep AI projects operational for 3+ years versus only 20% of low-maturity organizations. The difference? High-maturity orgs have leaders who understand the technology well enough to set realistic expectations, make smart resourcing decisions, and sustain momentum through inevitable challenges.

But here's the stat that really gets me, from Accenture: 95% of workers see value in working with GenAI and 94% are ready to learn AI skills. Yet only 5% of organizations are reskilling their workforce at scale.

There's no technology problem here. There's a literacy problem: understanding what an AI colleague is now, and what it is going to become.

Leaders who don't understand AI can't prioritize AI literacy. They can't design effective training. They can't identify which skills matter versus which ones are vendor marketing. And they can't hold their organizations accountable for building real capability versus going through the motions.

The jobs question everyone's asking

I'd be remiss if I didn't address the elephant in the room: What happens to jobs?

The World Economic Forum projects 92 million jobs displaced by 2030 (8% of current jobs). But they also project 170 million new positions created, for a net gain of 78 million jobs.

This transformation will happen unevenly. Manufacturing faces particular vulnerability. Peak displacement will occur between 2027 and 2030. Skills requirements are evolving fast; PwC research shows skills changing 66% faster in AI-exposed jobs.

The workers who will thrive are those who learn to work alongside AI effectively. The ones who will struggle are those left behind without training.

In 2027, you will either orchestrate AI, repair it, or report to it.

- ai2027.com

Here's the thing though: this displacement is mostly preventable. Most jobs won't disappear; they'll transform. But workers will need support through that transformation. Organizations that invest proactively in AI literacy won't just avoid layoffs; they'll capture competitive advantages while everyone else is still figuring out the basics.

It’s a now thing.

The time to address this is right now. And as the math goes, it's not just about keeping your business thriving; it's about whether the entire economy works for humans over the next three years, or works very, very badly against them.

Michael Burry didn't get famous by being wrong about bubbles. When someone with that track record starts betting against AI hype, smart leaders should be asking themselves: "Can I actually prove the value of what we're spending on AI?"

Most can't. Because they've never personally experienced what good AI usage looks like. They're making strategic decisions about technology they don't understand, based on advice from people who might understand the tech but not the business.

Here's what happens next if this doesn't change: The market will demand ROI. The bubble will correct. Budgets will freeze. And the companies that can't demonstrate clear, measurable value from their AI investments will get hammered: in their stock price, in their competitive position, and in their ability to attract talent.

Every day organizations wait, the gap widens. The talent market gets more expensive. The competitive disadvantage grows. And most critically, employees develop bad habits and workarounds that become harder to undo.

But this is fixable. And it starts at the top.

If you're a leader reading this, the first step isn't buying more AI tools or hiring more AI talent. It's rolling up your sleeves and learning this stuff yourself.

  • Spend 30 minutes a day for the next two weeks actually using AI tools. Not watching demos. Not reading about AI. Using it. For your actual work. Build a set of master prompts that begin to act as your life glossaries, ready to train whatever tool you use.

  • Write your next strategy memo with AI assistance. Analyze your last board deck. Generate three different approaches to your biggest current challenge.

  • Make AI literacy a board-level priority. Not with consultants and frameworks; with actual usage. Have every executive demonstrate competency with AI tools. Hold each other accountable. Model the behavior you expect from the organization.

  • Hold an internal AI bootcamp, where you task everyone on your leadership team with coming to the table on how they think AI can improve [insert product/process here].

QUICK AD BREAK: Get in touch with me directly if you want a free session to walk you through the above. I'll show you exactly how, and give you a free toolkit I've built for leaders to speed up their onboarding into AI literacy.

The organizations that move decisively on AI literacy today (starting with their leaders) will find themselves with compounding advantages by this time next year.

Leaders: Want to honestly assess where you stand? We've created a 2-minute diagnostic that benchmarks your personal AI capability and your organization's readiness against industry standards and 200+ other leaders. No vendor pitch. Just a clear picture of where the gaps are. Take the assessment →

Ready to talk specifics? Our programs start with leadership teams, because that's where transformation starts. We work with executives to build hands-on AI capability first, then scale that throughout their organizations. Book a call →

Read more and see all the stats and sources in the AI Literacy report for 2026.
