
Something I’ve been thinking about since I got back from the Tech Show London earlier this year: as teams lean more heavily on coding agents, engineering capacity is starting to behave like an external service dependency. I needed a shorthand for this (we seem able to tack ‘aaS’ onto anything nowadays), but I wanted something I could use in conversation without it sounding like I was reading an error code.
I went through a few candidates. AIEaaS (AI Engineering as a Service) was where I started, but try saying that in a sentence; it’s a bit like a tongue twister you’d attempt as a kid. AgEaaS (Agentic Engineering as a Service) kept the familiar “-aaS” lineage for the cloud-native crowd, but it’s no better out loud. AIDE (Agentic Intelligent Development Engineering) was clever, I thought, since it doubles as a word meaning assistant, but it didn’t have the right feel for what I wanted to convey. MAST (Model-Assisted Software Teams) foregrounds the team angle but drops the “as a service” framing, which was arguably the whole point.
I landed on ACES, Agentic Coding & Engineering as a Service (I know, you could also say ACESaaS, which I think sounds pretty cool too; subjective, I know). It’s short, pronounceable, and carries the right connotation: aces are powerful, but you don’t control the deck they come from (metaphorically; I promise I’m better at poker than I am at naming conventions). “We’ve mapped our ACES dependencies” works in a sentence. That’s the bar (I know, it was kind of low).
The upside of ACES is, I think, fairly well understood at this stage: fewer repetitive tasks, faster delivery, smaller teams doing more. The part that gets less attention is what it means for how you run things. You’re not just managing engineers and backlog anymore. You’re managing upstream AI platform reliability, model pricing tiers, token economics, and vendor concentration risk.
Your engineering throughput starts to look like a managed service dependency. And that changes how you need to think about it.
Agentic AI dependency is already a thing
Teams using coding agents are shipping faster. That part isn’t controversial any more, although I realise there are question marks around quality. That said, used well and with the right guardrails, you can get quality outputs from agents in less time than the same work takes manually. You just have to understand how to prompt them, how to review their outputs, and how to design workflows that play to their strengths.
What is still under-discussed is that those speed gains ride on external services you don’t control. If model availability drops, rate limits tighten, or your selected tier changes behaviour, your internal delivery capacity can shift overnight.
This is the same kind of dependency management we already do for cloud infrastructure. We just haven’t caught up to it for AI services yet.
Claims worth questioning
A few claims I keep hearing that deserve more scrutiny:
- “AI gives us infinite engineering capacity.” It gives you leveraged capacity, but that leverage sits on top of model quality, uptime, and budget. When any of those shift, so does your throughput.
- “This is cheaper by default.” Sometimes. But not when premium tiers become mandatory for quality, context length, or throughput. Jensen Huang suggested at GTC that a $500,000 (£395,000) engineer should be consuming at least $250,000 (£200,000) in tokens per year. At Databricks, the CEO publicly celebrated an engineer who burned through $7,000 (£5,500) in tokens over a two-week period. Meta CTO Andrew Bosworth said his best engineer spends his salary equivalent in tokens and is “5x to 10x more productive”; his response: “this is easy money, no limit.” That’s not cost reduction; that’s a new, significant variable cost line.
- “Everyone has equal access now.” Access is increasingly stratified by who can absorb premium model costs at scale. The New York Times reported on “tokenmaxxing” — engineers at Meta and OpenAI competing on internal leaderboards that track token consumption. When the culture celebrates spend as a proxy for productivity, the gap between well-funded organisations and everyone else widens really quickly.
None of this is an argument against using AI. It’s an argument for treating it with the same operational seriousness as any other critical dependency.
Model outages as delivery outages
In a ten-person human team, one person off sick hurts, but work continues.
In an agent-heavy ten-person equivalent setup, one upstream AI outage can freeze substantial portions of analysis, implementation, review, and content production all at once.
The 2025 DORA State of DevOps report found that while AI adoption now positively correlates with delivery throughput, it also correlates with increased instability — teams ship faster but production environments become more fragile. Add an upstream model outage on top of that and you’ve got compounding risk.
Every major AI provider publishes service status histories — incidents happen. The response isn’t to avoid the tooling, it’s to design around the failure modes:
- tiered fallback workflows,
- secondary model providers for critical flows,
- and explicit “degraded mode” operating procedures when agent throughput drops.
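As a minimal sketch, a fallback chain with an explicit degraded mode can be a thin wrapper around whatever client you use. Everything here is hypothetical: `call_model`, the provider names, and the simulated outage are illustrative placeholders, not a real SDK.

```python
# Hypothetical provider call: in practice this would wrap a real SDK client.
def call_model(provider: str, prompt: str) -> str:
    if provider == "primary":
        # Simulate an upstream outage on the primary tier.
        raise TimeoutError("primary provider unavailable")
    return f"[{provider}] response to: {prompt}"

# Ordered fallback chain: primary tier first, then a secondary provider.
PROVIDERS = ["primary", "secondary"]

def generate_with_fallback(prompt: str, retries: int = 2) -> str:
    for provider in PROVIDERS:
        for _ in range(retries):
            try:
                return call_model(provider, prompt)
            except TimeoutError:
                pass  # real code would log the incident and back off here
    # Explicit degraded mode: hand the task back to a human queue.
    return "DEGRADED: queued for manual handling"
```

The point isn’t the code, it’s the shape: the degraded path is a decision made in advance, not improvised mid-outage.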
If your org has incident playbooks for cloud outages but not for model outages, that gap is going to matter.
Cost stratification and who gets left behind
As models improve, the most capable options sit behind higher-price plans or higher token costs. If top-tier capability is what unlocks meaningful productivity, then organisations with tighter budgets end up with second-tier AI leverage by default. That applies to commercial SMEs just as much as it does to nonprofits.
The SME gap is already structural
The numbers on this are stark. Eurostat’s 2025 data shows that 55% of large EU enterprises are using at least one AI technology. For small businesses, that figure drops to 17%. Among EU small businesses not currently using AI, only around 13% had even considered adopting it. I don’t think this feels like hesitation waiting to be overcome; the data suggests that for most of these businesses, AI isn’t on the agenda at all, even if the media portrays a different reality.
In the UK, the picture is similar. The ONS Business Insights and Conditions Survey found that only 23% of UK businesses were using any form of AI by late 2025. Independent analysis puts true operational adoption, meaning AI actually embedded in core business processes, closer to 15%. That leaves roughly 5.5 million UK SMEs operating without advanced digital capabilities. The UK Business Data Survey from 2024 (and granted, a lot has changed since then within the AI landscape) found adoption varies dramatically by sector: professional services and finance lead at around 28%, while construction and traditional manufacturing sit at just 6%. I can say that I’ve worked with clients in the construction sector, and a few of them want to be innovators in their sector, so these statistics may change over the next 12–24 months.
The primary barrier isn’t even cost, it’s expertise. In the UK, 67% of SMEs cite a lack of internal AI expertise as their main blocker. AI talent gets absorbed by financial services, large multinationals, and well-funded startups. A manufacturer in the Midlands or a logistics company in Yorkshire simply cannot compete for that talent on salary. And in the US, SBA data shows that very small businesses with one to four employees have an AI adoption rate of just 5.8%.
The OECD’s 2025 report on AI adoption by SMEs describes the trajectory directly: without targeted intervention, the adoption gap risks creating an economy with hyper-efficient large enterprises at one end, hyper-local micro-businesses at the other, and the mid-market — UK manufacturing, logistics networks, regional distributors — squeezed out in between. OECD data shows AI-enabled productivity in manufacturing and logistics is growing at 2.8–3.2% annually for companies that have adopted it. That compounds: over five years, a company capturing that growth operates with a fundamentally different cost structure from one that isn’t. This isn’t an incremental gap; it’s a structural one.
When Tomasz Tunguz at Theory Ventures estimates that roughly one dollar in five of a top-quartile engineer’s compensation now goes on compute, it becomes clear that the ability to participate in this economy is itself stratified by revenue.
Nonprofits face the same gap, with less margin to absorb it
The data backs this up on the social sector side as well. According to TechSoup’s 2025 State of AI in Nonprofits report, nonprofits with annual revenues over $1 million (~£800,000) are adopting AI at nearly twice the rate of smaller organisations. Social Current’s research found that roughly 41% of nonprofit organisations rely on a single staff member to make all AI decisions, creating bottlenecks that make scaling almost impossible. Some nonprofits report saving 15–20 hours weekly on administrative tasks with AI, but that benefit skews heavily toward those who can afford the tooling in the first place.
For commercial organisations, this is a margin and competitiveness question. For nonprofits trying to stretch every pound across service delivery, fundraising, and marketing, it can become a mission-impact question. Either way, it creates a new kind of digital inequality: not just who has internet access, but who can afford top-tier cognitive infrastructure.
Actions worth considering now
If you lead engineering, delivery, or digital strategy, here are five practical areas to look at now to get ahead of what I’m calling the ACES operational curve:
- Define an ACES dependency map. Identify where critical delivery steps now rely on one AI provider or one model tier.
- Set internal SLOs for agent-enabled workflows. Treat model latency/availability as delivery inputs, not background noise.
- Design a degraded-mode process. Decide in advance what continues manually when agent services are degraded.
- Create a model cost envelope. Forecast baseline vs peak usage and define budget guardrails before surprise spend lands.
- Segment capability by value, not novelty. Reserve the highest-cost model tiers for work where the quality delta is material, and define how your teams make that decision.
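The first and fourth points can start life as plain data rather than a platform. This is a sketch under made-up assumptions: the delivery steps, vendors, tiers, token volumes, and prices below are illustrative, not real quotes.

```python
# Hypothetical ACES dependency map: which delivery steps rely on which
# provider/tier, with a rough baseline monthly token volume (in millions).
DEPENDENCY_MAP = {
    "code-review":   {"provider": "vendor-a", "tier": "premium",  "monthly_tokens_m": 40},
    "test-gen":      {"provider": "vendor-a", "tier": "standard", "monthly_tokens_m": 25},
    "docs-drafting": {"provider": "vendor-b", "tier": "standard", "monthly_tokens_m": 10},
}

# Illustrative blended price per million tokens by tier (not real pricing).
PRICE_PER_M = {"premium": 12.0, "standard": 3.0}

def cost_envelope(peak_multiplier: float = 1.5) -> dict:
    """Baseline vs peak monthly spend across the whole map."""
    baseline = sum(PRICE_PER_M[d["tier"]] * d["monthly_tokens_m"]
                   for d in DEPENDENCY_MAP.values())
    return {"baseline": baseline, "peak": baseline * peak_multiplier}

def single_vendor_exposure(vendor: str) -> list:
    """Delivery steps that stall if this one provider has an outage."""
    return [step for step, d in DEPENDENCY_MAP.items() if d["provider"] == vendor]
```

Even at this fidelity, the map answers two leadership questions directly: what does a bad month cost, and which steps freeze when one vendor goes down.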
A good starting point would be to run a tabletop exercise where your primary model provider is unavailable for eight hours. It tends to surface assumptions you didn’t know you were making.
Where this leaves us
I think ACES is broadly where delivery organisations are heading, and that’s not necessarily bad, or unexpected. But if engineering becomes a service dependency, leadership thinking needs to follow. That means we need to care about reliability planning, failover strategy, procurement discipline, and equitable access.
The tooling is moving fast. The operational maturity around it needs to keep pace.
References
References were pulled together from public statements, industry reports, and news articles. Here are the sources cited in this blog post. I’ve read these (although the status pages are more for reference than narrative), and I recommend checking them out for more context:
- Anthropic, Status - https://status.anthropic.com/
- OpenAI, Status - https://status.openai.com/
- GitHub, Copilot pricing - https://github.com/features/copilot#pricing
- Anthropic, Pricing - https://www.anthropic.com/pricing
- OpenAI, Pricing - https://openai.com/pricing
- OECD, Bridging digital divides for all - https://www.oecd.org/en/topics/sub-issues/bridging-digital-divides-for-all.html
- Forbes, The ‘AI Gods’ Spending As Much As They Can On AI Tokens - https://www.forbes.com/sites/richardnieva/2026/03/31/the-ai-gods-spending-as-much-as-they-can-on-ai-tokens/
- TechCrunch, Are AI tokens the new signing bonus or just a cost of doing business? - https://techcrunch.com/2026/03/21/are-ai-tokens-the-new-signing-bonus-or-just-a-cost-of-doing-business/
- The New York Times, Tokenmaxxing - https://www.nytimes.com/2026/03/20/technology/tokenmaxxing-ai-agents.html
- DORA, 2025 State of DevOps Report - https://dora.dev/research/
- Social Current, The Growing AI Gap Between Social Sector Organizations - https://www.social-current.org/2026/01/the-growing-ai-gap-between-social-sector-organizations/
- TechSoup, 2025 State of AI in Nonprofits Report - https://page.techsoup.org/ai-benchmark-report-2025
- Tomasz Tunguz, Inference as Compensation - https://tomtunguz.com/inference-as-compensation/
- Eurostat, Use of artificial intelligence in enterprises, 2025 - https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Use_of_artificial_intelligence_in_enterprises
- ONS, Business Insights and Conditions Survey - https://www.ons.gov.uk/businessindustryandtrade/business/businessservices/bulletins/businessinsightsandimpactontheukeconomy/2october2025
- OECD, AI adoption by small and medium-sized enterprises - https://www.oecd.org/en/publications/2025/12/ai-adoption-by-small-and-medium-sized-enterprises_9c48eae6.html
- UK Government, AI Adoption Research - https://www.gov.uk/government/publications/ai-adoption-research/ai-adoption-research
- SBA Office of Advocacy, Business Trends and Outlook Survey (BTOS) - https://advocacy.sba.gov/
- Compare the Cloud, The AI Reality Check — State of UK SME Adoption in 2025 - https://www.comparethecloud.net/articles/ai-reality-check-uk-sme-adoption-2025