15 Issues In: What 1,000 Enterprise AI Readers Taught Me
Skip the archive. Steal the hits. Avoid my mistakes.
15 editions. Weekly, without fail. Mild exhaustion. Surprising lessons. This one’s for the reader who prefers a curated digest to wandering through my archive. Also: if you're ever tempted to write your own, read this first.
Quick Brief
Fifteen issues published. 1,000+ subscribers across platforms. And to my mild astonishment, people aren't just reading this stuff; they're trying it out, implementing it, and succeeding or failing in the process.
I haven’t written this as yet another "lessons learned" waffle. I wanted to share the method behind what turned out to be a reasonably systematic approach to writing on enterprise AI adoption.
Most of these weeklies came from a 70/30 split: 70% reading and listening - Substack threads, LinkedIn posts and the comment threads under them, analyst reports, Reddit discussions, X arguments, podcasts (so damn useful!) - and the other 30% actually doing the work with prospects and clients through our business, @Stack Digital.
I used ChatGPT and other models, of course, but only after the 12-lever spine, outside inputs from all that reading and listening, AND real client chats had made an idea too itchy to ignore.
I also explain why I've dropped Beehiiv and am sticking with the LinkedIn mothership, now joined by the Substack sidecar.
“The” Framework: Why It Became My North Star
When I started High Stakes, I had a theory that enterprise AI adoption fails not because of rubbish technology, but because of fragmented execution.
Everyone was banging on about the latest model or the sexiest use case.
But what were the core levers that actually determine whether AI projects survive contact with enterprise reality?
Some digging through the various Deep Research models gave me these 12:
Strategy & Vision | Data Foundations | Tooling & Platforms | Governance & Ethics | Security & Privacy | Workforce & Change | Use-Case Prioritization | Procurement & Vendor Management | Integration & Architecture | Value & ROI Measurement | MLOps & Lifecycle | Sustainability & Footprint
Each lever represents a chokepoint where AI initiatives go to die quietly.
Miss one, and your pilot can become a PowerPoint memorial.
I haven't treated all 12 levers equally. Research told me to weight certain themes more heavily:
Workforce & Change (because humans break things).
Security & Privacy (because breaches kill companies).
Sustainability & Footprint (because this will be BIG).
Strategy & Vision (because someone has to know where we're going).
Governance & Ethics (because compliance isn't optional).
So across 15 issues, I spent more time on these five. Not because the others don't matter, but because these are where most enterprises actually get stuck.
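If you're curious how that weighting could be put to work, here's a minimal sketch of a weighted readiness scorecard in Python. To be clear, this is my own illustrative rendering for this recap: the lever names come from the list above, but the specific weights (heavier on the five I prioritised) and the 0-5 rating scale are assumptions, not a formal rubric from any issue.

```python
# Purely illustrative: a toy weighted scorecard over the 12 levers.
# Lever names are from the framework above; the weights and the 0-5
# rating scale are my own assumptions, not a published rubric.

LEVER_WEIGHTS = {
    "Strategy & Vision": 2.0,
    "Data Foundations": 1.0,
    "Tooling & Platforms": 1.0,
    "Governance & Ethics": 2.0,
    "Security & Privacy": 2.0,
    "Workforce & Change": 2.0,
    "Use-Case Prioritization": 1.0,
    "Procurement & Vendor Management": 1.0,
    "Integration & Architecture": 1.0,
    "Value & ROI Measurement": 1.0,
    "MLOps & Lifecycle": 1.0,
    "Sustainability & Footprint": 2.0,
}

def readiness_score(ratings: dict[str, int]) -> float:
    """Return a 0-100 weighted readiness score from per-lever ratings (0-5)."""
    max_total = sum(5 * w for w in LEVER_WEIGHTS.values())
    total = sum(ratings.get(lever, 0) * w for lever, w in LEVER_WEIGHTS.items())
    return round(100 * total / max_total, 1)

# Example: strong on tooling and data, weak on change management and governance.
print(readiness_score({
    "Tooling & Platforms": 5,
    "Data Foundations": 4,
    "Workforce & Change": 1,
    "Governance & Ethics": 1,
}))
```

The point of the toy example isn't precision; it's that a pilot scoring high on tooling but low on the heavily weighted people and governance levers still comes out looking fragile, which matches what I kept seeing in client conversations.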
The Audience That Showed Up (And Why They Stayed)
I wrote for three types of people, each having very different AI conversations:
Sellers - CEOs, CROs, and GTM leaders at IT services firms, B2B SaaS companies, and AI model providers trying to decode how enterprise buyers actually make decisions.
Buyers - CIOs, digital transformation heads, and procurement leaders navigating vendor pitches with budget pressure and board oversight breathing down their necks.
Influencers - AI engineers, industry analysts, and consultants who shape the narrative and need market intelligence they can build thought leadership around.
What I didn't expect was how much overlap there'd be.
The same person might be selling AI services in the morning and buying AI tools for their own team by afternoon.
Turns out, everyone's playing multiple roles in this market.
How Algorithms Humble You
Here's where I learned something mortifying about newsletter distribution.
Nothing came back until Issue #5 went out. Radio silence for a month.
I was starting to wonder if I was shouting into the void.
Then Issue #5 hit - the one about sales becoming technical - and suddenly my inbox woke up.
But I now see that people were replying to Issues #1 through #4 too.
Turns out, LinkedIn and other social algorithms had been sitting on my earlier editions like some sort of digital post office.
So when someone writes "You nailed the credit-trap effect" about the hyperscaler piece, or "We had that exact PyTorch hiring conversation yesterday" about the developer post, they're not necessarily responding in order.
Which makes me look either prescient or completely confused, depending on your perspective.
The feedback taught me that timing in newsletter land is more art than science.
But the substance started landing hard.
The 12-Lever Tour: What Actually Resonated
Let me walk you through how the weighted approach played out across 15 issues.
I will share the feedback that proved (or disproved) the framework. I wanted to tag the readers too, but that would have meant seeking their OK first. A bit of a logistics challenge.
You'll know who you are when you see your comments! My thanks to you again :))
The Heavy Hitters (Issues 1, 4, 5, 6, 8, 10, 11, 13, 14, 15)
These tackled the five core levers I'd weighted heavily:
Workforce & Change (Issues #5, #6, #10):
Technical sales evolution.
Developer hiring shifts.
Copilot adoption incentives.
Reader response:
"..its funny… my best rep can't spell PyTorch, but she closes deals. Now I'm making her shadow our Solutions Architect, your technical sales point hit home."
Security & Privacy (Issues #14, #15):
Zero-trust mesh for AI.
Security certifications as GTM.
Reader response:
"...well, thanks for this article..I forwarded the zero-trust piece to a customer after one of their vendors failed an audit due to lack of call-level traceability. And it seems to be pushing us ahead in the CISO eye"
Sustainability & Footprint (Issue #8):
ESG as competitive moat.
Reader response:
"... pitched 'Token Trim as a Service' and won against three bigger firms. Borrowed your exact wording – it's not that we did not know this, just that a particular style of naming lands better, that’s all…"
Strategy & Vision (Issues #1, #4):
Hyperscaler credit traps.
AI proposals speaking CFO.
Reader response:
"...we just realized 70% of our AI spend is pre-committed to AWS. Your credit-trap warning came six months too late, but we're fixing it now.."
Governance & Ethics (Issues #11, #13):
Policy-as-Code implementation.
CFO-driven payback rules.
Reader response:
"... had Copilot rollout with training videos. Nobody watched. Then we offered £500 bonuses. now it's a competition between teams…:)"
The Supporting Cast (Issues 2, 3, 7, 9, 12)
These covered the remaining levers – important, but not where most enterprises get stuck:
Reader response:
"... your AI vault blueprint saved us six months of architectural debates. We implemented it directly with our data team….can’t complain.."
What the Weighted Framework Revealed
Three patterns emerged that I hadn't expected:
People problems trump technology problems. The workforce and change management pieces consistently sparked the longest reader replies. Getting humans to adopt AI tools is messier than getting AI tools to work.
Security sells harder than features. The zero-trust piece and the most recent certification piece have sparked more conversations than anything else. Risk mitigation beats feature benefits in enterprise sales.
Money conversations happen at the top. Issues tied to CFO concerns, ROI measurement, and budget protection got the most implementation feedback. Enterprise AI adoption is ultimately a financial discipline disguised as a technology challenge.
What I Got Wrong
I tried working with a Beehiiv marketing agency for three months. It didn't work. Three months was a bit too long for me, given their ‘wait and watch’ approach to driving engagement.
I also learned that subscriber count is a vanity metric designed to make you feel important while missing what matters.
Started with 1,200 imported emails, cleaned out the dead weight, and pared down to about 600. I now sit at about 1,000 readers: roughly 25% open the emails weekly, 55% open once every two weeks or, increasingly, engage on LinkedIn in some way.
Quality over quantity isn't just a platitude… it felt like survival.
The recent move to Substack wasn't about features. It was about focus. Beehiiv felt like a performance dashboard - it has some great features, and I might go back later, maybe.
Although it is early days, Substack feels like a writing desk that happens to sit next to some of the smartest tech thinkers on the planet.
The Unintended Consequences
Three things happened that I didn't see coming:
Some IT services folks started using the content in client advisory work.
A few consultants mentioned using Issue #8 (ESG/Clean AI) as reference material for client conversations. Apparently, some were forwarding it along as background reading. Whether that's helpful or just adds to everyone's inbox clutter, I couldn't say.
Some readers adapted the frameworks for internal use.
A few mentioned using the AI scorecard and vault blueprints in their own planning sessions. Whether that's helpful or just adds to the slide deck pile remains to be seen.
Vendor roadmaps shifted.
Three growing SaaS companies say they adjusted product priorities based on market analysis from the newsletter. The zero-trust mesh piece influenced a couple of security vendors' 2026 plans. I have yet to decide whether this is validating or terrifying!
What's Next: The Second Cycle
The weighted 12-lever approach proved that enterprise AI adoption is systematic, not spontaneous.
But your feedback revealed gaps I need to fill:
Vertical breakdowns (AI in financial services ≠ AI in manufacturing).
Regulatory compliance as competitive advantage.
Multi-agent orchestration in enterprise workflows.
Carbon accounting for AI operations.
Issues #17-30 will cycle through the 12 levers again, with the same weighting strategy but deeper cuts into these emerging patterns.
Same framework, sharper focus, more implementation detail.
Thank You (For the Reality Check)
If you've ever replied, disagreed, implemented something, or just lurked with purpose: thank you.
This started as a side project to organize my own thinking.
It's become a systematic approach to understanding how enterprise AI actually gets adopted.
The best part seems to be that you're not just reading; you're implementing.
That's what makes this worth doing.
See you next Thursday for Issue #17. With a brand-new piece on something brand-new.
- Srini
P.S. If High Stakes influenced a decision at your organization, reply and share the story. I'll feature the best ones in Issue #20.
Was this newsletter valuable?
⭐⭐⭐ The weighted framework approach works, keep cycling through.
⭐⭐ Good reflection but need more implementation detail.
⭐ Interesting but too meta.
Which of the 5 weighted levers needs deeper coverage next? Hit reply.