Why Are Mid-level Techies Struggling to Show AI Impact in the Enterprise? (Issue #24)
Part 1 of 4: Why are mid-career technical delivery folks – on both the client side and the vendor side – struggling to turn years of tech delivery experience into visible AI impact?
This is Part 1 of a four-part series on the trials and tribulations of mid-level IT professionals in the AI era, whether they sit inside ‘client-side’ enterprises or ‘vendor-side’ organisations.
I call them the “rusting middle”: tech veterans with 10–20+ years of experience, working in enterprises such as banks and manufacturing firms, and in service-provider organisations such as IT services and B2B SaaS companies.
You know who you are: Project and Product Managers, Solution Architects, Engineering Managers, Business Analysts, Delivery Managers, Senior Consultants, and so on.
The very people who keep the enterprise wheels turning, yet now feel the ground shifting under them in the AI era.
They thrived in the ERP–Cloud–Digital wave. But in the AI wave, they’re struggling to graft relevance onto that old experience. The ‘rusting middle’ is STILL the backbone – yet it’s creaking just as AI makes speed and adaptability non-negotiable.
I’m exploring why this cohort is struggling to adapt, how enterprises help or hinder, and what it means for AI at scale.
In later parts, summarised briefly towards the end of this edition, I explore the other “whys” and offer some “moves” that can help ease this crisis of sorts.
The View from the Middle
“I’ve spent 12 years as a programme manager in this bank,” John tells me.
“My value has always been knowing how to deliver complex projects – keep the vendors in line, push through milestones, keep the whole show on the rails. That used to be enough.
“But now I see my peers showing off AI-driven prototypes in weeks. My projects still take quarters. Suddenly the question isn’t about whether I deliver, it’s why I’m not innovating. And honestly, I don’t know what to say anymore.”
John isn’t resisting AI. He pulls out his phone and shows me a private folder of prompts he’s been using at home to draft risk logs and tidy up status reports.
“But I can’t bring this into the office,” he admits.
“Nobody’s told me it’s allowed. And even if I did, would it look like I was cutting corners? I feel like tenure – all the trust I’ve built – has become a liability instead of an asset.”
That’s the human face of what I keep hearing: a value plateau.
The people who’ve spent a decade or more building trust and delivering reliability can’t seem to turn that into visible AI impact. Let's explore why. And what can be done about it.
The View from the Top
On the other side of the same table, the CIO has his own version.
“We’ve invested millions in AI pilots,” he tells me. “Fraud detection, KYC, copilots for customer service … you name it. Every consultant’s been through here with their deck. The board wants outcomes. But when I look at my mid-level managers and analysts, the people meant to execute this vision, I see very little change. The pilots don’t scale. The returns don’t show. And my frozen middle doesn’t seem to move.”
He pauses before adding something that sounds more like a confession than a complaint.
“I don’t know if it’s resistance, or if I’ve set the wrong incentives. I still reward predictability – the on-time delivery, the neat budget. Maybe they’re just doing exactly what I asked of them. But I can’t help asking myself – why aren’t my most experienced people the ones driving the AI wins?”
Two voices, different positions, but same “stuckness”.
The Plateau in Practice
When you look across industries, the pattern repeats.
In banks, analysts sit outside the shiny labs.
In pharma, compliance writers watch AI drafts stall in red tape.
In IT services, engineers use copilots while PMs keep billing old timelines.
Employees sense it in the air: their skills no longer seem to map onto what the enterprise now values.
They watch juniors move fast with copilots, but they know quality still depends on judgment – their judgment. And yet, that judgment doesn’t show up on the AI dashboard.
So both sides end up asking the same thing, in different words.
“I’ve put in the years,” says John, “so why does it feel like I’m falling behind right when the big opportunity arrives?”
“I’ve put in the investment,” says the CIO, “so why does it feel like my most trusted people aren’t delivering the payoff?”
Sitting with the Paradox
What’s striking is that both are right.
The professional is still valuable, but their contribution doesn’t look like “AI impact.”
The decision-maker is right too: the frozen middle isn’t converting experience into visible wins.
There’s no easy resolution here. The plateau is real, it’s measurable, and it’s frustrating for everyone involved.
And maybe the first step isn’t to rush to solve it, but to sit with the paradox:
Experienced pros deliver predictably – exactly what they were trained for.
Leaders want speed and experimentation – exactly what the system doesn’t reward.
Both sides think the other is falling short.
This is the most expensive no-man’s-land in enterprise AI. Boards see spend without outcomes. Professionals see tenure without relevance.
And until something shifts, the plateau remains.
Closing Thought
I don't pretend to have a clean answer.
And maybe the real question isn't "How do we fix it?" just yet.
But there are moves that can be made, even at this first stage:
Program managers can document their AI experiments – the prompts they use for risk logs, the time saved on status reports – and present this as "AI-enabled delivery" rather than hiding it. Small proof points that show tenure plus AI equals better outcomes.
Analysts can identify the judgment calls that only human experience catches – the regulatory nuances, the data quality flags – and position themselves as "AI quality controllers" who make AI output safe and useful.
Engineers can create the reliability patterns that AI needs – the monitoring, the rollback procedures, the production discipline – and become the bridge between AI's unpredictability and enterprise standards (a minimal sketch of one such pattern follows this list).
Leaders can shift one performance metric from "delivered on time" to "delivered faster using AI while maintaining standards." Small signal, big message.
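To make the engineering move concrete, here's a minimal sketch of one such reliability pattern: a validate-and-rollback wrapper around an AI call. It's illustrative only – `call_model`, `validated`, and `ai_with_rollback` are hypothetical names, and the real validation rules and fallback path would come from your own delivery standards, not from this snippet.

```python
# A minimal sketch of one "reliability pattern": wrap an unpredictable
# AI call in the same guardrails we'd demand of any production service.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API your org actually uses."""
    return f"DRAFT RISK LOG for: {prompt}"


def validated(output: str) -> bool:
    """Illustrative acceptance check; real enterprise rules will differ."""
    return bool(output.strip()) and len(output) < 10_000


def ai_with_rollback(prompt: str, fallback: str, retries: int = 2) -> str:
    """Try the AI path; on repeated failure, roll back to the known-good output."""
    for attempt in range(1, retries + 1):
        try:
            out = call_model(prompt)
            if validated(out):
                log.info("AI output accepted on attempt %d", attempt)
                return out
            log.warning("AI output failed validation (attempt %d)", attempt)
        except Exception as exc:  # network errors, rate limits, etc.
            log.warning("AI call failed (attempt %d): %s", attempt, exc)
        time.sleep(1)  # simple backoff; tune for your environment
    log.error("Rolling back to the manual/deterministic path")
    return fallback  # the process the experienced team already trusts


if __name__ == "__main__":
    print(ai_with_rollback("Summarise open project risks", "manual risk log"))
```

The point isn't the code; it's that the rollback path is the process the veteran already owns. That's exactly where tenure becomes an asset again.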
These aren't grand strategies. They're tiny, visible wins that move one pilot into production and build confidence from both sides.
They start breaking the plateau.
The deeper moves, though, sit in the layers we’ll cover next:
Part 2 — The Data Ceiling: how to unlock value when privacy, silos, and governance walls choke AI adoption.
Part 3 — The LLMOps Bottleneck: how to operationalise AI reliably when DevOps veterans suddenly feel like rookies.
Part 4 — The Economics of Substitution: how to redesign roles when AI raises output but mid-level jobs start to shrink.
Each part tackles a different choke point, with its own set of moves. Together, they sketch a path out of the paradox we’ve surfaced today.
See you next week with Part 2! And if you have any ‘survival tips’, send them my way!
Srini