Over the past few years, we put a lot of process in place for our development teams.
Coding standards, syntax rules, linting, Phabricator, SVN, GitHub PRs. One reviewer, two reviewers. How to QA. Unit tests, behavior tests. Testing in dev, integration, staging, prod. Monitoring logs after deploy. No deploys on Friday.
We went through many iterations of those processes to make sure the code we shipped was maintainable, respected our standards, and didn't cause problems in production.
In the last year, some of that is being challenged.
The questions AI is raising
If you read or listen to what's out there about AI, and more specifically agentic AI, you'll hear a lot of engineers, CEOs, and CTOs talking about going from idea to production with very little friction. You validate the specification at the beginning and the result at the end, and you don't really need to look at the code in between.
That raises questions for me.
What happens to our processes in that model?
How do we ensure the quality of the code still holds?
How do we know tests are testing the right thing?
Who is accountable when something breaks in production if there is no developer really involved in the middle?
It's not all black and white
From my perspective, it's not all black and white.
There are a lot of good things coming out of AI, but the pace of change is challenging. Things that felt unthinkable at the beginning of 2025 are normal at the beginning of 2026. Keeping up with models, tools, and new ways of working is not trivial.
Over the last year, we tried to leverage these tools in different ways. Code generation, review assistants, test writing, codebase exploration, even creating markdown files to help tools understand how our repositories are structured and what standards we follow.
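To make that last point concrete, here is a rough sketch of the kind of guidance file I mean. The file name, directories, and rules below are invented for illustration, not our actual setup:

```markdown
<!-- AGENTS.md: hypothetical example of a repository guide for coding assistants -->
# How this repository is organized

- `services/`: backend code, the oldest part of the system; change with care.
- `web/`: front end, newer and closer to our current standards.
- `tests/`: behavior tests; unit tests sit next to the code they cover.

# Standards the tools should follow

- Respect the existing linting configuration; do not add new suppressions.
- Keep changes small enough for a single reviewer to read in one sitting.
- New code ships with unit tests; user-facing changes get a behavior test.
```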
Some of it worked. Newer and greenfield projects worked better. But we mostly work on old codebases. Repositories and structures that were never designed with AI in mind. Code we never really had time to clean up, with so much legacy that even developers who have been here for a while struggle to understand it.
Expecting an agent to do better in that context is optimistic.
Different reactions, setting boundaries
Reactions vary from developer to developer and from project to project.
Some lean heavily into AI. Others push back. The generated code doesn't follow their standards, doesn't do what they want, and they feel they could have done it faster themselves. Huge PRs get created and people refuse to review them because they're too big. Some code isn't tested because it's assumed the AI handled it. Sometimes the author can't explain what the PR actually does.
That forced us to set boundaries.
For now, developers are responsible for the code they generate. They need to understand it and be able to explain it. That feels necessary today, even if I expect this to evolve as we learn better ways to use these tools and as the models improve.
A different approach to old problems
My own perspective has already changed in the last few months.
Instead of refactoring or adding features to old code, I started working with AI on plans to migrate parts of the system to new technologies. Making sure it could find and read the information it needed. Letting it reason about the target rather than the past.
I was impressed.
It produced a working POC faster than I expected, and close to what I needed. That made me rethink how we approach work.
In the past, I built roadmaps to tackle tech debt with very limited time. We never moved fast enough to really get ahead of it. At the same time, we went from nine development teams to three, then to two, while maintaining the same systems.
Given the current market outlook, I wouldn't be surprised if we have to move forward with even smaller teams. That brings the question back: how do we reduce technical debt and still deliver new ideas and features?
Starting from the spec
One idea I keep coming back to is starting from the spec.
Not from scratch, but from a well-defined plan. Features we actually need. Leaving out business logic that accumulated over time and doesn't need to be carried forward. A proper test plan. Letting AI build a clean version of the system around that instead of endlessly patching the old one.
If something is wrong, wipe it, refine the plan, and have the AI redo it.
That doesn't feel optimal, but the constraint is no longer developer time. It has moved elsewhere. It now sits at the plan level: iterating and refining until it's clear, both for you and for the AI.
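To make that concrete, here is a minimal sketch of what a spec at that level could look like. The feature and the details below are invented purely for illustration:

```markdown
# Spec: customer order export (illustrative example)

## Features we actually need
- Export the last 90 days of orders as CSV from the admin panel.
- Notify the requesting user by email when the file is ready.

## Dropped on purpose
- The old XML export format, which has no remaining consumers.

## Test plan
- Behavior test: request an export and receive a file with the expected columns.
- Unit tests for date filtering and CSV escaping.
- A load check against a realistic volume of orders.
```

This is the artifact you iterate on, not the code.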
The shape of work is changing
It feels counter-intuitive.
We spent years protecting developer time. Perfecting specs and designs so we wouldn't have to redo work later, because changing things after the fact was expensive.
Now, creating a POC from minimal specs can take minutes or hours. You can review it, change it, throw it away, and start again. Iterate until it's right, then harden the spec, secure it, test it properly.
That doesn't remove the need for review, security, or quality. But it does change the shape of the work.
I can't see the future, but it feels clear that we need to rethink how we work and what we choose to spend our time on.
Open questions
That leaves me with a few open questions:
- If developer time is no longer the main constraint, where did it move to?
- If code can be redone quickly and we can still ensure stability in production, do all our existing standards and linting rules still make sense? Or do we need to review and adapt them to this new reality?