Article 1: Artificial Intelligence – Let’s Start by Getting the Basics Right 

Artificial intelligence is everywhere right now. Boardrooms. Earnings calls. Conference agendas. LinkedIn feeds. White papers. Podcasts. Think pieces. Academic articles. And unfortunately, much of the public discourse is dominated by bad takes from people incentivized to drive clicks, not to share truth—or even the most likely reality.

The biggest challenge we have with AI today has very little to do with the technology itself. The real problem is that most people talking about AI don’t actually understand it—or, more importantly, don’t understand how technology gets implemented and used inside real organizations. 

There are plenty of well‑written articles by people who sound authoritative. They reference models, architectures, and macroeconomic theory. They position themselves as experts because they’ve studied technology, written about it, or advised from the sidelines. 

But here’s the reality: many of these perspectives come from individuals who have never spent meaningful time operating inside the enterprises they claim will leverage this technology. They’ve never been responsible for implementing systems inside Fortune 500 companies. They’ve never dealt with the organizational friction, inertia, and human dynamics that actually determine whether technology succeeds or fails. As McKinsey’s North America Chair Eric Kutcher noted earlier this year, the AI shift is "80 percent business transformation and 20 percent technology transformation." In other words, the constraint on AI’s impact isn’t model capability; it’s leadership, operating models, incentives, and adoption—exactly the pattern we’ve seen with every prior general-purpose technology. 

That lack of real‑world experience is how we ended up with the current “AI doomer” narrative. The loudest version of that narrative typically sounds like some variation of the following: 

  • AI is going to eliminate white‑collar work 
  • AI is going to hollow out the middle class 
  • AI is going to detonate the economy like a nuclear bomb 

That argument is not just flawed; it is deeply unconvincing.

The Doomer Narrative Falls Apart Under Any Historical Lens 

Let’s be clear: AI is an incredible technological innovation. Agentic AI—systems that can reason, act, iterate, and operate with increasing autonomy—will be transformational. But transformational is not the same thing as catastrophic.

History matters here. Every major innovation that fundamentally changed how work gets done sparked fear at the time it was introduced: 

  • The internet 
  • Email 
  • ERP systems 
  • Cloud computing 
  • Data visualization tools 
  • Internal collaboration platforms 

Each of these technologies disrupted workflows. Each changed job descriptions. And yes—each eliminated certain roles entirely. But every single one of them also created more jobs than it destroyed.

ERP systems, for example, were supposed to automate finance teams out of existence. Instead, they created entirely new roles—FP&A functions, systems integrators, and data governance teams—that simply didn’t exist before. The work didn’t disappear; it evolved. 

The idea that artificial intelligence would somehow be the first technology in the history of mankind to reverse that pattern—to destroy more economic value than it creates—is a bridge too far. Not from a technical perspective, but from an anthropological and sociological one. 

Put simply: innovation breeds innovation. It always has. Ernst & Young’s research is already showing direct signs of this historical trend continuing. EY’s AI Pulse Survey found that 96% of organizations investing in AI saw productivity gains, with the majority reinvesting those gains into growth, reskilling, and new capabilities—not workforce reduction. 

If you believe AI will be the exception, you’re not really making a technology argument—you’re making a civilizational one. 

AI Is a Toolset, Not a Replacement for Humanity 

One of the biggest mistakes people make when talking about AI is framing it as a substitute for human work rather than an amplifier of it. 

AI doesn’t remove the need for judgment. It doesn’t remove the need for context. It doesn’t remove the need for domain expertise. And it certainly doesn’t remove the need for humans who understand how organizations actually function. 

At the end of the day, someone still owns the decision, the outcome, and the risk—and that accountability doesn’t disappear just because an algorithm is involved. 

What AI does extraordinarily well is: 

  • Increase speed 
  • Increase scale 
  • Reduce friction in information processing 
  • Enable better decision‑making when paired with the right data 

That last point is critical—and often ignored. 

AI does not magically understand your business. It doesn’t intuit how your organization works. It doesn’t know which data matters, which data is wrong, or which decisions carry real‑world consequences. 

AI has to be trained, tuned, and supplied with structured, meaningful data—and, critically, governed by humans who understand the implications of its output. That reality alone should dismantle the idea that AI is on track to simply erase white‑collar work wholesale. 

Why Experience Matters in This Conversation 

Across years of transaction and operational work in Equiteq’s North America practice, we’ve seen every version of a technology services firm and specialty consulting business imaginable. We’ve analyzed their customer bases, client engagements, and detailed work products.

We’ve also implemented nearly every type of enterprise technology you can think of—ERP systems, accounting platforms, internal communication tools, collaboration software, and operational systems. In every case, the technology itself was objectively better than what it replaced. More efficient. More accurate. More scalable. 

And in every case, adoption took far longer than anyone expected—sometimes years, sometimes decades. 

We’ve worked with organizations that continued running core parts of their business on Excel spreadsheets or Access databases long after enterprise‑grade alternatives were available and fully implemented. Not because better tools didn’t exist, but because organizational behavior is far more powerful than technical capability. 

When IT teams are told—explicitly or implicitly—that the technology they’re being asked to implement may eventually eliminate their roles, resistance isn’t irrational. It’s human. 

That’s the lens through which AI needs to be understood. 

Getting the Conversation Back on Solid Ground 

AI isn’t hype—but it’s also not magic. 

It’s an extraordinarily powerful platform that, when implemented thoughtfully, will drive meaningful growth, productivity, and innovation across industries. It will absolutely change how work gets done. What it won’t do is flip a switch and make humans irrelevant. 

Fear‑based narratives lead to fear‑based decisions. And fear is a terrible foundation for strategy—especially when dealing with a technology that requires long‑term thinking, disciplined execution, and human buy‑in to succeed. 

Before we talk about AI implementation, AI strategy, or AI’s impact on specific industries, we first need to get comfortable with what AI actually is—and what it isn’t. 

Everything else builds from here. 

Read more of the series

Article 2: The Reality of Implementing AI Inside Enterprises - Why adoption will be slow, human-driven, and far messier than most pundits claim.

Article 3: What AI Means for Technology Services and Specialty Consulting Firms - How AI is reshaping firm value, operating models, and investor expectations - and how to respond.

Executive Summary - review a summary of all three articles.