Full disclosure: this article was written with AI assistance. I prompted it, shaped it, and proofread it—though if you know me, you'll know my eyes glaze over during that last part. Make of that what you will. It's kind of the whole point.
Something fundamental is shifting in 2026, and it's happening fast enough to make even experienced engineers uncomfortable. The cost of writing software is dropping—not incrementally, but by an order of magnitude. And when the price of something drops a zero, the value doesn't just shrink. It moves somewhere else entirely.
I think it's moving toward data ownership, process control, and the ability to take responsibility for what your systems are actually doing. The code itself? It's becoming disposable.
The 80/20 Problem We Never Fixed

Here's something that should have always been true about software development: writing the code should only account for about 20% of the effort. The other 80% should go into understanding the problem, designing the architecture, mapping the processes, and thinking carefully about what the software actually needs to do and why.
In practice, the industry has had this backwards for a long time. The overwhelming majority of time and money has gone into the coding itself—and that meant the architecture was often under-designed, the processes not fully thought through, and the software built on shaky foundations. Our industry shipped a lot of badly architected software, not because engineers didn't care, but because the economics of writing code consumed the budget for thinking about it properly.
AI is now collapsing that coding cost toward zero. Which is brilliant—except it raises an uncomfortable question. If people were already skimping on the design and architecture when coding was expensive, what happens now that code is essentially free? Are we investing the saved time into better thinking, better process design, better architecture? Or are we just generating more code, faster, with even less thought behind it?
And here's what's already happening, whether anyone intended it or not: code is being shipped that nobody has fully read. Not because anyone set out to do that—but because the volume and speed of AI-generated output makes thorough review a genuine bottleneck, and the temptation to trust the machine is strong.
When Code Becomes Cheap, What Becomes Expensive?

For decades, the bottleneck in technology was writing software. You needed skilled engineers, significant time, and serious money to build anything meaningful. That constraint shaped entire industries—including recruitment, where agencies have long been locked into expensive platforms because the switching cost of building something better was prohibitive.
AI coding tools have upended that equation. What once took weeks can now take hours. What took a team can now take one person with good judgement and clear intent. We've experienced this firsthand at Hubbado, where we recently built and shipped a tool that helps agencies quickly scope and structure Statements of Work, and another that generates anonymised, tailored candidate profiles, both at a pace that would have been unthinkable two years ago. AI wrote significant chunks of the code. I then took the time to read, refine, and properly architect what it produced, but the raw generation was dramatically faster.
So if writing software is no longer the hard part, what is?
The Real Asset Is Owning Your Data and Understanding Your Processes

As AI tools become more capable and more interconnected, something interesting is happening. We're no longer just using AI to write code—we're chaining AI systems together, feeding outputs from one into another, connecting them to external APIs through protocols such as the Model Context Protocol, and building processes where data flows through multiple stages of transformation.
In this world, the software at each stage matters less than what's flowing between the stages. The context you manage, the artefacts you produce, the documents you keep on your own systems—these become the durable assets. The code that processes them can be rewritten, regenerated, or replaced entirely. But the data, and your understanding of what's happening to it at each step, cannot.
This is a subtle but profound shift. It means the skill isn't in writing the transformation—it's in owning your data and being able to account for what happens to it at every stage. When you're hooking AI into APIs, and other AIs are interrogating and accessing that same data, you need to know what you have, where it's going, and what's being done to it.
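To make that concrete, here is a minimal sketch of the idea in Python. Everything in it is hypothetical (the stage names, the `Pipeline` class, the shape of the records); the point is only that each transformation is recorded in an audit trail alongside the data, so the stages themselves stay disposable while the data and its history remain accountable:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AuditEntry:
    """One record of what a stage did to the data, and when."""
    stage: str
    timestamp: str
    summary: str

@dataclass
class Pipeline:
    """A chain of transformations. The stage functions are replaceable;
    the data and the audit trail are the durable artefacts."""
    stages: list[tuple[str, Callable[[dict], dict]]] = field(default_factory=list)

    def add_stage(self, name: str, fn: Callable[[dict], dict]) -> "Pipeline":
        self.stages.append((name, fn))
        return self

    def run(self, data: dict) -> tuple[dict, list[AuditEntry]]:
        trail: list[AuditEntry] = []
        for name, fn in self.stages:
            data = fn(data)
            trail.append(AuditEntry(
                stage=name,
                timestamp=datetime.now(timezone.utc).isoformat(),
                summary=f"fields now present: {sorted(data)}",
            ))
        return data, trail

# Hypothetical stages — either could be an AI call, and either could be
# regenerated or replaced without touching the data or the trail.
pipeline = (
    Pipeline()
    .add_stage("extract", lambda d: {**d, "skills": ["python", "sql"]})
    .add_stage("anonymise", lambda d: {k: v for k, v in d.items() if k != "name"})
)

result, trail = pipeline.run({"name": "Jane", "cv_text": "..."})
```

After the run, `result` carries the transformed data and `trail` tells you exactly which stage touched it and what it looked like afterwards—which is the part you need to keep, whoever (or whatever) wrote the stage code.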
Data ownership isn't just a compliance checkbox. It's becoming the core competency.
The Engineer's Discomfort

I want to be honest about something: this shift fills me with a certain amount of horror.
As an engineer, I believe in the right to good software. The idea that we might routinely ship code that nobody has read, that nobody has checked for sound architecture and clean logic—that sits badly with me. And I don't think I'm alone in that. The thing is, nobody sets out to do this. Nobody makes a deliberate decision to stop reading the code. It just happens—incrementally, quietly. The AI output looks reasonable, the deadline is tight, the tests pass, and before you know it you've shipped something that no human fully understood. It's not negligence. It's gravity. And I say that as someone who just told an AI to mention that my own eyes glaze over when I have a huge amount to proofread.
But here's the thing I keep coming back to: typing in code was only ever supposed to be about 20% of the job. At most. The other 80% should always have been architecture, understanding the processes being automated, building deep domain knowledge, and asking tough questions—of the business, of the users, of yourself. The real work of software engineering was never the typing. It was the thinking.
And honestly? Our industry wasn't great at that 80% even before AI arrived. Plenty of projects failed not because the code was bad, but because nobody spent enough time understanding what they were building or why. The domain knowledge was shallow. The architecture was an afterthought. The hard questions went unasked because everyone was too busy writing code.
Now AI has made the 20%—the typing—nearly free. In theory, that should be liberating. It should free engineers to spend all their time on the 80% that actually matters. But I worry about what happens in practice. If people were already skimping on the architecture, the domain understanding, the difficult conversations—are they now going to outsource that thinking to AI as well?
That's the question that genuinely concerns me. Not whether AI can write code—it clearly can. But whether people will hand over the parts of the job that require human judgement, accountability, and deep understanding of context. The parts that were always supposed to be the bulk of the work.
There's an analogy people reach for here, which is the move from assembly language to high-level programming languages. When we moved from assembly to C, and then to high-level languages like Ruby and Python, we stopped reading the machine code our compilers and interpreters produced. We trusted the abstraction layer and focused on the logic we could see. But that analogy doesn't quite hold. With high-level languages, you were still reading code. You were still reasoning about the logic, tracing the flow, understanding what the system would do. The abstraction moved downward, but your comprehension stayed at the level where decisions were being made.
With AI-generated code, the question is starker. If an AI writes a thousand lines in seconds, how much of it does anyone actually read? Does reviewing AI output become such a bottleneck that people stop doing it? And if they do stop—will anyone even push back?
I don't have clean answers to these questions. What I can say is that for the tools we've built at Hubbado, I made the deliberate choice to read every line, refine the architecture, and ensure the code met a standard I was comfortable with. That took time. It was worth it. But I'm also aware that not everyone will make that choice, and the economics are pushing hard in the other direction.
So Where Does That Leave Us?

I think 2026 is going to be a genuinely strange year for anyone who builds or buys software. Here's what I'd suggest keeping hold of:
Own your data. Not just in the GDPR sense, but operationally. Know what you have, know where it flows, know what's being done to it at each stage. As software becomes disposable and AI systems become more interconnected, your data and your processes are the things that endure.
Understand your processes. When you chain AI tools together, you're building something that looks less like traditional software and more like a manufacturing process. The value is in the design of the process, not in any single machine on the line.
Don't abandon code quality—but be realistic about where to invest it. Not every piece of generated code needs the same level of scrutiny. But the critical paths, the parts that touch your data, the logic that drives your business decisions—those still deserve an engineer's eye.
Take responsibility for the output. This might be the most important one. When AI is writing code, generating documents, and transforming data, someone still needs to be accountable for what comes out the other end. That responsibility doesn't get cheaper just because the software did.
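One practical way to combine those last two points is to wrap any generated transformation in invariants you wrote and understand yourself. This is only a sketch, and every name in it is invented for illustration, but it shows the shape: the AI can write the transform, while the human-authored checks define what output you are prepared to take responsibility for:

```python
from typing import Callable

def checked(transform: Callable[[dict], dict],
            invariants: dict[str, Callable[[dict], bool]]) -> Callable[[dict], dict]:
    """Run a (possibly AI-generated) transform, but only accept output
    that passes explicit, human-written invariants."""
    def wrapper(record: dict) -> dict:
        out = transform(record)
        for name, check in invariants.items():
            if not check(out):
                raise ValueError(f"invariant failed: {name}")
        return out
    return wrapper

# Hypothetical example: the lambda could be replaced by regenerated code
# tomorrow; the invariant is the part the engineer stands behind.
anonymise = checked(
    lambda r: {k: v for k, v in r.items() if k not in {"name", "email"}},
    {"no_pii_fields": lambda r: not ({"name", "email"} & r.keys())},
)

safe = anonymise({"name": "A", "email": "a@example.com", "skills": ["sql"]})
```

The transform is cheap and replaceable; the invariants are where the engineering judgement, and the accountability, actually live.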
We're building Hubbado on these principles—using AI to move fast, but maintaining the engineering discipline to know what we've built and why. The SoW Scope Builder and Tailored Profile Builder are live examples of that philosophy in practice: AI-assisted, human-refined, and built on processes we fully understand and control.
The price of software is falling. The price of not understanding what your software is doing? That's going up.
Of course, this is all changing so fast that my opinions will be out of date faster than an AI can write me another opinion piece.
Sam is Technical Director at Hubbado, where we build workflow automation for recruitment agencies who want transparency and control over their processes. If you'd like to talk about how your agency manages its data and processes, get in touch.