
Table of contents
- What vibe coding means to me
- We’ve been here before
- The spectrum: from first draft to production
- Why delivery experience changes everything
- Starting with AI prototyping
- Directed AI assistance: being deliberate
- Agent orchestration: working in parallel
- Agentic engineering: building a workflow
- The conversation happening right now
- The reality check
- Practical next steps
- Wrapping up
What vibe coding means to me
There’s a phrase that’s become part of the vocabulary over the past year or so: “vibe coding”. It gets used in a lot of different ways, sometimes with a smirk, sometimes with genuine enthusiasm.
Here’s what I think it actually describes: using AI tools to build something quickly, iteratively, driven mostly by prompts and intuition rather than upfront design. You have an idea, you start prompting, the code appears, you tweak it until it works. It’s fast. It’s accessible. And for a lot of use cases, it’s genuinely useful.
What interests me isn’t the label itself; it’s what happens after that first working version.
Getting something to run is one thing. Getting it to run safely, reliably, and maintainably over time is a different conversation entirely. That’s where the real story is: how do you move from “it works on my machine” to “we can ship this with confidence”?
That journey looks very different depending on your background. And when experienced engineers start using these same tools with intention, something genuinely exciting happens.
We’ve been here before
If this feels familiar, it’s because we’ve seen a version of it before.
When low-code platforms took off (Power Apps, Power Platform, and the rest), suddenly a lot more people could build internal tools and workflows without writing traditional code. That was (and still is) a good thing. Lowering the barrier to entry creates momentum: you can prototype quickly, test an idea, get feedback, and prove value without needing a full engineering team.
But low-code also taught us something important: building something that demos well is not the same as building something you can safely run in production.
AI-assisted coding follows the same pattern, just with a much bigger surface area. It’s easier than ever to get to “it works”. The question is what comes next.
The spectrum: from first draft to production
The conversation about AI and coding sometimes gets framed as an either/or: either you’re using AI to do everything, or you’re a “real developer” who doesn’t need it. That framing isn’t helpful.
What I’ve observed is more of a progression: not a judgement, just a natural path that people move along as they get more deliberate about how they use these tools.
AI Prototyping: Fast, exploratory, driven by prompts. You’re trying to get to “something exists” as quickly as possible. Great for learning, exploring, and proving concepts.
Directed AI Assistance: You’re still moving quickly, but you’re specifying constraints, referencing existing patterns, and defining what success looks like. The tool is a lever; you’re in control.
Agent Orchestration: You’re splitting work across multiple agents, running them in parallel, then integrating the results. You’re thinking about the work the way you’d think about coordinating a team.
Agentic Engineering: You’ve built a workflow around agents, with persistent context, guardrails, and quality gates, and you stay accountable for security, operability, and delivery.
Why delivery experience changes everything
Here’s where I want to be thoughtful, because this isn’t about gatekeeping or dismissing anyone.
AI prototyping is accessible to almost anyone, and that’s wonderful. More people can build things, test ideas, and solve problems. That’s a net positive.
But when you’re building something that’s going to run in production (something that handles real data, serves real users, and needs to stay running), there’s a set of concerns that go beyond “does it work right now”:
- Security: authentication, secrets management, input validation, least privilege
- Resilience: timeouts, retries, idempotency, handling failures gracefully
- Observability: logs, metrics, traces, meaningful error messages
- Maintainability: naming conventions, structure, documentation
- Testing: not just “it worked once”, but “it keeps working as things change”
- Delivery: CI/CD, environments, reviews, rollback strategies
These aren’t glamorous topics. But they’re the difference between a demo and a product that you can hand off to a team and maintain over time.
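To make the resilience bullet a little more concrete, here’s a minimal sketch of the kind of thing it implies: a call wrapped with a timeout, bounded retries with backoff, and an idempotency key so retried requests can be deduplicated. The endpoint, header name, and retry policy here are illustrative assumptions, not a prescription.

```typescript
// Minimal resilience sketch: timeout, bounded retries, idempotency key.
// The retry policy and header name are assumptions for illustration.
async function callWithRetry(
  url: string,
  body: unknown,
  idempotencyKey: string,
  maxAttempts = 3,
): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Abort the request if it takes longer than 5 seconds.
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 5_000);
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // Lets the server deduplicate a request that gets retried.
          "Idempotency-Key": idempotencyKey,
        },
        body: JSON.stringify(body),
        signal: controller.signal,
      });
      if (res.ok) return res;
      // Client errors won't improve on retry; only retry server errors.
      if (res.status < 500) return res;
    } catch (err) {
      // Network failure or timeout: log it and fall through to retry.
      console.warn(`attempt ${attempt} failed`, err);
    } finally {
      clearTimeout(timer);
    }
    // Simple exponential backoff between attempts.
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250));
  }
  throw new Error(`request to ${url} failed after ${maxAttempts} attempts`);
}
```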
If you’re new to software development, this is where I’d encourage you to keep learning. Getting something working is a great first step, genuinely. But there’s a whole world of practice around making software reliable, and it’s worth exploring. Not because anyone’s gatekeeping, but because it’ll make the things you build better.
And if you’re an experienced engineer? This is where things get interesting.
When you bring delivery experience to AI tools, when you know what questions to ask, what trade-offs matter, and what “done” actually means, the productivity gains are real. You’re not just generating code faster; you’re building better systems, with less friction, because you can focus your attention on the decisions that matter.
Starting with AI prototyping
There’s nothing wrong with starting in prototyping mode. I do it all the time.
When I’m exploring a new library, proving out a concept, or just trying to get unstuck on something, I’ll ask Copilot for a scaffold, try it, tweak it, repeat. It’s fast and low-stakes, and it helps me learn.
The thing to be mindful of is the gap between “I got it working” and “I can ship it”. Prototypes are meant to be thrown away or refined. If you push prototype-grade outputs into production without thinking through the operational concerns, you’re taking on risk, sometimes more than you realise.
What AI prototyping looks like: “I need a function that does X.” → Generate → Run → “Fix the error” → Generate → Run → “Looks fine” → Move on.
That’s fine for exploration. It’s just not a complete software development lifecycle.
Directed AI assistance: being deliberate
As you get more comfortable with these tools, you start treating them less like magic and more like a very fast collaborator.
You specify constraints. You reference existing patterns. You define what success looks like upfront.
The shift: Instead of “Write a test for this,” you say: “Generate a unit test for this authentication function, covering edge cases for expired tokens and invalid permissions, following our project’s testing conventions.”
You know what matters (tokens, permissions, conventions) because you’ve seen what goes wrong when those things are overlooked.
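As a rough illustration, here’s the shape of test that kind of prompt might produce. The authorize function, its error types, and the token shape are hypothetical; the point is that the constraints in the prompt show up directly in the cases covered.

```typescript
// Hypothetical sketch only: authorize(), its error classes, and the token
// shape are assumptions, not a real project API.
import { describe, expect, it } from "vitest";
import { authorize, TokenExpiredError, PermissionError } from "./auth";

describe("authorize", () => {
  it("rejects expired tokens", () => {
    const expired = { sub: "user-1", exp: Date.now() / 1000 - 60, scopes: ["read"] };
    expect(() => authorize(expired, "read")).toThrow(TokenExpiredError);
  });

  it("rejects tokens without the required permission", () => {
    const valid = { sub: "user-1", exp: Date.now() / 1000 + 3600, scopes: ["read"] };
    expect(() => authorize(valid, "admin")).toThrow(PermissionError);
  });

  it("allows a valid token with the right scope", () => {
    const valid = { sub: "user-1", exp: Date.now() / 1000 + 3600, scopes: ["read"] };
    expect(() => authorize(valid, "read")).not.toThrow();
  });
});
```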
Agent orchestration: working in parallel
This is where the productivity gains become tangible.
Agent orchestration means splitting work across multiple agents, running them in parallel on different parts of a problem, then integrating the results. It’s a bit like managing a small team.
In my recent SaaS work, I’ve started treating agents as team members. One focuses on the data layer, another on UI components, a third on security review. They don’t work in isolation. I coordinate them, manage dependencies, and integrate their outputs.
Why experience helps: You’re essentially doing the same thing you’d do when onboarding junior developers or breaking down work for a team. You need to know how to decompose a problem, what context each piece needs, and how the parts fit back together.
A practical example: I recently built a feature that required database changes, API updates, and UI scaffolding:
- Agent A: Generated migration scripts based on my schema specs.
- Agent B: Drafted API endpoints using our authentication middleware.
- Agent C: Built UI components matching our design system.
- Me: Reviewed, integrated, and refined the connections.
- Agent D: Generated integration tests covering the full stack.
Could I have done it all myself? Yes. But the agents handled the implementation details while I focused on architecture and integration (the parts where my judgement added the most value).
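If it helps to picture the decomposition, here’s a hypothetical sketch of how those briefs might be written down, with explicit context and dependencies. The file paths and field names are assumptions; what matters is that each agent gets a bounded scope and the integration order is explicit.

```typescript
// Illustrative sketch of agent briefs for the feature above.
// Paths, scopes, and dependencies are assumptions, not tooling output.
interface AgentBrief {
  agent: string;
  scope: string;       // what this agent owns
  context: string[];   // files or docs it needs to read first
  dependsOn: string[]; // work that must land before it starts
}

const featureBriefs: AgentBrief[] = [
  { agent: "A", scope: "Migration scripts for the new schema",
    context: ["docs/schema-spec.md"], dependsOn: [] },
  { agent: "B", scope: "API endpoints using the existing auth middleware",
    context: ["src/middleware/auth.ts", "docs/schema-spec.md"], dependsOn: ["A"] },
  { agent: "C", scope: "UI components matching the design system",
    context: ["docs/design-system.md"], dependsOn: ["B"] },
  { agent: "D", scope: "Integration tests covering the full stack",
    context: ["src/", "tests/conventions.md"], dependsOn: ["A", "B", "C"] },
];

// Anything with no unmet dependencies can run in parallel;
// review and integration stay with the engineer.
```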
Agentic engineering: building a workflow
This is the frontier, and I think it’s genuinely exciting.
Agentic engineering means building systems around your collaboration with AI. You maintain instruction directories to give agents persistent context. You develop patterns that make your agents smarter over time. You’re building a hybrid workflow optimised for your specific domain.
The engineer’s role:
- Architecture: high-level system design and decision-making
- Quality: code review, standards enforcement, catching what the agents miss
- Context: maintaining the documentation and context that makes agents effective
- Strategy: deciding what to build and how to approach it
To do this well, you need to be able to do what the agent does, and recognise when it’s gone wrong. That’s where experience comes in. You need to spot the subtle bugs, the security gaps, the maintainability issues. You need the pattern recognition that comes from having built and maintained real systems over time.
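As one example of the quality-gate idea, here’s a minimal sketch of a script that puts agent-generated changes through the same checks as anything else before they’re merged. The specific commands are assumptions about a typical Node project, not a prescription.

```typescript
// Minimal quality-gate sketch: agent output earns its way in like any
// other change. The commands below are assumed project conventions.
import { execSync } from "node:child_process";

const gates = [
  { name: "lint", cmd: "npm run lint" },
  { name: "type-check", cmd: "npx tsc --noEmit" },
  { name: "unit tests", cmd: "npm test" },
  { name: "dependency audit", cmd: "npm audit --audit-level=high" },
];

for (const gate of gates) {
  try {
    console.log(`Running gate: ${gate.name}`);
    execSync(gate.cmd, { stdio: "inherit" });
  } catch {
    // A failed gate stops the pipeline; the change doesn't ship as-is.
    console.error(`Gate failed: ${gate.name}`);
    process.exit(1);
  }
}
console.log("All gates passed.");
```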
The conversation happening right now
There’s a lot of interesting discussion happening in the industry right now, and I think it’s worth grounding this in what people are actually saying.
Simon Willison, back in October 2025, proposed the term “Vibe Engineering” as a way to describe experienced engineers using LLMs to accelerate their work while staying accountable for what they ship. That framing resonates with me; it’s about using these tools deliberately, not abdicating responsibility.
Research from CodeRabbit (December 2025) found that while developers are moving faster with AI assistance, some of those productivity gains are being offset by time spent fixing bugs and addressing security issues downstream. That’s not a reason to avoid AI tools; it’s a reason to think about how you integrate them into a broader quality practice.
And just yesterday, Linus Torvalds made a comment about a Python tool being “basically written by vibe-coding”. Even the people who built this industry are engaging with these shifts, acknowledging both the possibilities and the boundaries.
The throughline across all of these conversations: AI lowers the entry cost to building software, but it doesn’t remove the cost of ownership. Someone still has to be accountable for security, operability, and long-term maintainability.
For experienced engineers, that accountability is the value proposition. These tools make you faster, but they also free you up to focus on the decisions that actually require your judgement.
The reality check
Despite everything I’ve said about agents and orchestration, I still write a lot of code by hand.
When I’m working on complex business logic, I write it. When I’m designing a core architectural component, I’m doing that thinking myself. When there’s a tricky bug in production, I’m the one debugging it.
Agents are tools. They’re force multipliers. But they don’t replace the need for craftsmanship, directed thought, and deep understanding. If anything, they demand more of it, because you spend less time on boilerplate and more time on the work that actually matters.
Practical next steps
If you’re thinking about where you sit on this spectrum, here are some next steps to consider:
If you’re starting out with AI prototyping: Brilliant. Keep exploring. But start asking yourself: what would break? What’s the threat model? How would I roll this back? Those questions will serve you well as you grow.
If you’re using directed AI assistance: Think about adding more structure: acceptance criteria, quality gates, existing patterns to reference. The more context you give, the better the outputs.
If you’re orchestrating agents: Document what’s working. Build up your instruction files. Share your patterns with your team.
If you’re doing agentic engineering: You’re on the frontier. Keep experimenting, keep refining, and share what you learn; we’re all figuring this out together.
Wrapping up
The journey from vibe coding to agentic engineering is, I think, one of the more interesting developments in how we build software.
It’s not about replacing human judgement; it’s about amplifying it. The tools are getting better all the time, and the engineers who learn to use them well are going to be able to build things that would have been impractical a few years ago.
If you’re just starting out, welcome. There’s a lot to learn, and these tools can help you get there faster.
If you’ve been doing this for a while, there’s a genuine opportunity to work at a higher level: to spend more time on architecture, quality, and strategy, and less time on the tedious parts.
Either way: keep building, keep learning, and stay curious. The tools will keep evolving, but the engineering mindset stays the same.
Further Reading:
- Simon Willison’s “Vibe Engineering” post (Oct 2025)
- CodeRabbit’s research on AI productivity and downstream quality (Dec 2025)
- Faros AI’s overview of AI coding agents in 2026