Article 2: Why AI Will Be Harder to Implement Than Most People Think

If the first article was about getting clear on what artificial intelligence actually is—and what it isn’t—then the next logical question is far more practical: why hasn’t this already happened at scale? 

Given the speed of innovation, the volume of capital, and the sheer amount of attention AI receives, many assume broad adoption inside large enterprises is inevitable—and imminent. The underlying belief is simple: the technology exists, so the change will naturally follow. 

That assumption gets one thing wrong. 

Enterprise transformation has never been driven by technical capability alone. It is shaped—and often constrained—by people, incentives, and institutions. AI is no exception. 

In fact, if history is any guide, AI may be harder to implement well than many of the technologies that came before it. 

Technology Doesn’t Fail in Enterprises. Adoption Does. 

One of the most persistent myths surrounding AI is the idea of instant transformation—that an organization can simply “turn it on,” restructure workflows, and unlock productivity gains at scale. 

Anyone who has lived through real enterprise change knows how unrealistic that is. 

ERP systems didn’t roll out overnight. Cloud migrations took years longer than expected. Even basic data‑driven operating models required prolonged cultural and organizational change before they delivered on their promise. 

AI introduces all of those same challenges—plus new ones around trust, accountability, data integrity, and risk. It also introduces a very human concern that rarely gets addressed honestly: 

“If this works the way people say it will, what happens to my role?” 

That question matters—and it brings us to a useful parallel. 

The Self‑Driving Car Analogy

We’ve had the core technology required for self‑driving cars for more than a decade. Sensors, computer vision, real‑time decision engines, and massive compute power—the foundational components have been in place for years. 

And yet, self‑driving cars are still not broadly on the road. 

Why? 

Not because the technology doesn’t work in controlled environments. It does. But because real‑world deployment is constrained by factors that have little to do with technical capability and everything to do with human systems: 

  • Who is liable when something goes wrong? 
  • Who signs off on safety? 
  • How does regulation adapt? 
  • How much risk is society willing to tolerate? 
  • Who trusts the system enough to give up control? 
  • And ultimately, who do we hold accountable when it fails?

Empirical research reinforces this reality. MIT Sloan studies of industrial AI adoption describe a clear "productivity J-curve": organizations frequently experience an initial decline in productivity after introducing AI, followed only later by sustained gains. As Professor Kristina McElheran explains, "AI isn't plug-and-play—it requires systemic change, and that process introduces friction, particularly for established firms." Slower adoption is not failure; it is an expected phase of transformation.

With self‑driving cars, the technology moved faster than the institutions, incentives, and trust structures surrounding it. AI inside the enterprise faces an almost identical challenge.

The models may be powerful. The demos may be impressive. But widespread adoption depends on governance, accountability, data readiness, human trust, and organizational alignment—none of which move at the speed of code. 

Where AI Actually Has to Live 

To understand why this matters, it helps to ground the conversation inside a real organization. In any Fortune 500—or even Fortune 1000—company, there is an IT organization whose primary responsibility is not innovation. It is stability. Keeping the lights on. Technology leaders often frame it the same way: 

“My job isn’t to make it brighter. It’s to make sure it doesn’t go dark.” 

These teams are responsible for core systems, data integrity, security, and compliance. They are rewarded for minimizing risk, not accelerating experimentation.

Now layer on the dominant external narrative around AI. The same teams tasked with implementing this technology are repeatedly told—implicitly and sometimes explicitly—that AI will eventually make their roles obsolete.

When IT teams are told that the technology they’re being asked to implement may one day eliminate their jobs, resistance isn’t irrational. It’s human. And that resistance matters, because these teams control access to systems, data, and production environments. No AI strategy comes to life—much less thrives, reduces cost, or increases profitability—without them. 

Inertia Is a Feature of Large Organizations 

Across Equiteq’s North America practice, we’ve worked with hundreds of technology services firms and specialty consulting businesses—and through them, thousands of enterprise clients. One pattern is consistent across all of them: 

Organizations change more slowly than technology evolves. 

We’ve seen companies continue to run mission‑critical processes on Excel spreadsheets or Access databases long after enterprise‑grade tools were available, implemented, and demonstrably superior. Not because leadership didn’t understand the benefits or because better solutions didn’t exist—but because changing how work actually gets done, how people decide, and how teams collaborate is extraordinarily difficult. 

AI doesn’t eliminate this inertia. It collides with it head‑on. 

Mandates Don’t Implement Technology. People Do. 

A common rebuttal is that none of this matters because boards and executives will mandate AI adoption. And to be fair, boards can—and often should—mandate outcomes. But they don't sit on the shop floor, in the accounting pod, or inside the steering committee meeting making day‑to‑day decisions that determine whether implementation succeeds.

Someone still has to integrate models into legacy systems, prepare and govern data, tune outputs, manage risk, monitor performance, and retrain systems as conditions change. In most enterprises today, data is fragmented, inconsistently defined, and politically owned.

Cleaning it up is neither fast nor simple. 

Just like self‑driving cars, the true constraint isn’t the intelligence of the system. It’s the environment the system has to operate within. 

AI Replaces Tasks, Not Accountability 

Another misconception slowing meaningful adoption is the belief that AI replaces entire roles. In practice, AI replaces tasks—and often only portions of tasks. What it does not replace is accountability. 

There’s a more subtle limitation that often gets overlooked, and it came up recently in a conversation with the CEO of a high‑growth consulting firm. He described it as the loss of what he called the “executive smudge.” 

AI is very good at forcing data into clean categories and binary conclusions. Leadership decisions rarely operate that way. 

Take employee turnover at a manufacturing facility. On paper, someone resigning shows up as a negative—or a “regrettable loss”—that worsens the metric. But in reality, turnover can be healthy. A departure may strengthen the team, remove cultural drag, or create space for stronger talent. An experienced executive understands that nuance and tells the story accurately. The data alone does not. 

The same dynamic exists in finance. Anyone who has spent time in a boardroom knows there is both an art and a science to financial storytelling. The numbers matter—but interpretation matters just as much. With AI, you gain precision and consistency. What you often lose is the ability to shape context deliberately: to explain why something that looks negative on paper may actually be the right outcome, the right path, or the right long-term move, despite the short-term loss. 

That “smudge” isn’t manipulation—it’s judgment. And it’s exactly why accountability cannot be automated. AI can surface patterns and anomalies at scale, but someone still has to interpret them, contextualize them, and own the narrative that informs real decisions. 

Someone remains responsible for decisions, customer outcomes, regulatory exposure, financial results, and reputational risk. That responsibility doesn’t disappear simply because an algorithm is involved. You can’t fire a bot when something goes wrong. 

As long as accountability remains human—and it will—humans remain firmly in the loop.

To be clear, none of this is an argument against AI's transformative potential. Quite the opposite. The fact that AI will be adopted unevenly, iteratively, and with friction is not a weakness. It's the reality of how complex systems evolve. And the organizations that extract disproportionate value from AI will not be the ones chasing headlines or vendor promises. They'll be the ones that understand adoption as a human challenge first and a technical challenge second.

That reality creates a meaningful opportunity for the firms best positioned to help guide the transition. 

Read more of the series

Article 1: Resetting the Conversation on AI - What AI actually is, what it isn't, and why the doomer narrative fails under any historical lens.

Article 3: What AI Means for Technology Services and Specialty Consulting Firms - How AI is reshaping firm value, operating models, and investor expectations - and how to respond.

Executive Summary - review a summary of all three articles.

Contact the authors