
The NCSC Said the Quiet Part Out Loud

Published: at 10:00 AM · 6 min read · By Joseph Tomkinson
Reality Checks · Human + AI
Concept artwork for the NCSC’s guidance on AI-assisted software delivery and SaaS security.

I have been thinking about one line from the NCSC all week.

The UK’s National Cyber Security Centre, the government’s technical authority on cyber security, published a post that effectively says this: AI-assisted software delivery is coming for parts of SaaS, the commercial pressure is real, and security teams need to adapt now.

They are not saying “do not do this.” They are saying “this will happen, so we need to reduce the risk before it scales.”

If you lead an engineering team, that is worth paying close attention to.


Market context: SaaSpocalypse

The NCSC post does not exist in isolation.

In early February 2026, around $285 billion (roughly £220 billion) in software market value was wiped out in days. Jefferies traders called it “SaaSpocalypse”. The iShares Expanded Tech-Software ETF fell by more than 20% year to date. Salesforce, Adobe, ServiceNow and Thomson Reuters all took heavy hits.

Part of the trigger was the changing build-versus-buy equation. If an internal team can build a working internal tool in hours, it raises hard questions about a six-figure annual SaaS bill, especially when renewal pricing rises sharply at the next user tier.

The NCSC describes one startup that saw exactly that. A renewal price doubled after crossing a user threshold, so an engineer built a focused replacement in a couple of hours.

That does not mean two hours of prompting equals a mature SaaS product with compliance, security hardening and operational maturity. It does mean “bespoke enough” is now often good enough to change purchasing decisions.
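The changing equation above is, at bottom, simple arithmetic. A minimal sketch of the break-even calculation follows; every figure and parameter name is a hypothetical illustration, not a number from the NCSC post:

```python
# Hypothetical build-vs-buy break-even sketch. All figures are assumptions
# chosen for illustration, not data from the NCSC or any vendor.

def breakeven_months(annual_saas_cost: float,
                     build_hours: float,
                     hourly_rate: float,
                     monthly_maintenance_hours: float) -> float:
    """Months until an internal build is cheaper than the SaaS renewal.

    Returns infinity if ongoing ownership costs exceed the monthly
    subscription, i.e. the build never pays back.
    """
    build_cost = build_hours * hourly_rate
    monthly_saas = annual_saas_cost / 12
    monthly_maintenance = monthly_maintenance_hours * hourly_rate
    saving_per_month = monthly_saas - monthly_maintenance
    if saving_per_month <= 0:
        return float("inf")
    return build_cost / saving_per_month


# A six-figure renewal against a few hours of AI-assisted build time,
# plus a modest ongoing ownership cost, pays back within the first month:
payback = breakeven_months(annual_saas_cost=120_000,
                           build_hours=8,
                           hourly_rate=100,
                           monthly_maintenance_hours=10)
```

The point of the sketch is not the exact numbers. It is that when build cost collapses from months to hours, the payback period collapses with it, and the maintenance term becomes the only serious variable left in the decision.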

I touched on this pressure in my earlier piece on Claude Cowork and SaaS pressure, and the pattern has only become clearer.

Graphic: market movement and SaaS valuation compression during “SaaSpocalypse”.

AI capability levels in practice

If you have not read Dan Shapiro’s “Spicy Autocomplete to Dark Factory” piece, it is worth a read. Shapiro is a technology executive and writer who has been tracking how AI changes software workflows.

His framework maps five levels of AI-assisted delivery, from simple autocomplete to fully autonomous development flows where humans focus on system-level assurance rather than line-by-line review.

Most teams can operate effectively up to the level where engineers still review generated code in detail. Moving beyond that requires a role shift from writing and reviewing outputs to orchestrating agents, constraints and quality controls.

That matches what I have seen in practice. Tools move quickly, but delivery judgement still determines whether something is safe and shippable.

Security implications for delivery teams

This is where the NCSC takes a firm but realistic stance.

The idea that every line of AI-generated code will always be reviewed by a human does not scale well for fast-moving teams. The NCSC points out that organisations under delivery pressure are already testing models where some production code is never reviewed line by line by a person.

That should make security leaders uncomfortable, but pretending it will not happen is not a strategy.

The harder point is this: manually written software is not consistently secure either. The NCSC explicitly acknowledges that reality. So the practical comparison is not AI code versus perfect code. It is AI-assisted code versus the quality of software organisations already ship today.

NCSC recommendations

The NCSC argument can be reduced to four practical recommendations.

  1. Secure-by-default models. Security principles need to be embedded in model behaviour, not bolted on later.
  2. Model assurance and provenance. Teams need confidence in how models are built, updated and governed.
  3. AI for defensive engineering. Use AI for review and hardening as well as code generation, including legacy estate improvements.
  4. Early intervention. Build controls now, because retrofitting security after widespread adoption is slower and more expensive.
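What an early-intervention control might look like in practice is a small, boring gate in CI rather than a grand programme. As one illustrative sketch, assuming a team records AI involvement and human sign-off as commit trailers (the trailer names here are my assumptions, not NCSC guidance), a merge gate could flag AI-generated changes that carry no recorded human review:

```python
# Illustrative CI merge gate: block commits that declare AI generation
# but carry no human review sign-off. The trailer conventions
# ("Generated-by: ai ...", "Reviewed-by: ...") are assumptions for the
# sake of the sketch, not an established standard.

AI_TRAILER = "generated-by: ai"      # hypothetical AI-generation marker
REVIEW_TRAILER = "Reviewed-by:"      # hypothetical human sign-off marker


def needs_human_review(commit_message: str) -> bool:
    """True if the commit is marked AI-generated but has no reviewer."""
    lines = [line.strip() for line in commit_message.splitlines()]
    ai_generated = any(line.lower().startswith(AI_TRAILER) for line in lines)
    reviewed = any(line.startswith(REVIEW_TRAILER) for line in lines)
    return ai_generated and not reviewed


def gate(commit_messages: list[str]) -> list[int]:
    """Return the indices of commits that should block the merge."""
    return [i for i, msg in enumerate(commit_messages)
            if needs_human_review(msg)]
```

A gate like this does not solve assurance on its own. Its value is that it makes the “never reviewed line by line” path a deliberate, visible policy decision rather than something that happens by default under delivery pressure.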
Graphic: policy, assurance and engineering guardrails for AI-assisted delivery.

Which SaaS products are exposed

The NCSC also makes a sensible point about who is most exposed.

Narrow tools with weak differentiation and aggressive per-seat pricing are more vulnerable to internal replacement. Products with strong data moats, deep integrations or difficult regulatory barriers are harder to displace.

Infrastructure and platform providers are less exposed because internally built tools still need hosting, identity, monitoring and operational platforms underneath them.

Practical actions for engineering leaders

If you are a Head of Engineering, CTO or technical lead, the four recommendations above are the practical reading: treat them as a near-term checklist, not a forecast.

Closing thoughts

What stands out to me is not any single NCSC recommendation. It is the framing.

This is no longer a conversation about whether AI-assisted delivery should exist. It is a conversation about how to make an inevitable shift safer, more governable and more useful.

I think that is the right way to approach it: clear-eyed about risk, honest about incentives, and practical about implementation.
