And it was… Fine. Despite claims that AI today is improving at a fever pitch, it felt largely the same as before. It’s good at writing boilerplate, especially in JavaScript, and particularly in React. It’s not good at keeping up with the standards and utilities of your codebase. It tends to struggle with languages like Terraform. It still hallucinates libraries, leading to significant security vulnerabilities.
AIs still struggle to absorb the context of a larger codebase, even with a great prompt and CLAUDE.md file. If you use a library that isn’t StackOverflow’s favorite, it will butcher it even after an agentic lookup of the documentation. … What LLMs produce is often broken, hallucinated, or below codebase standards. The frequency of these errors goes up with the size of the codebase. When that happens you have to re-prompt, which could instantly fix the problem or could be a huge waste of time. Or you can go in and fix the code yourself. But then you’re back to measly 1x engineer status, perhaps worse if you’ve gotten so used to vibe coding that you forgot how to code. If you’re “embracing the vibes” and not even looking at the code produced, you’re simply going to hit a productivity wall once the codebase gets large enough. … The problem is that this productivity does not scale. I don’t write more than one ESLint rule per year. This burst of productivity was enabled solely by the fact that I didn’t care about this code and wasn’t going to work to make it readable for the next engineer.
This is one of the best takes I’ve read from a practitioner of software development. So many of the supposed “gains” are touted by people who don’t write code for a living. And generating small prototypes is not a business beyond the seed stage.
Quote Citation: colton.dev, “No, AI is not Making Engineers 10x as Productive”, 2025-08-05, https://colton.dev/blog/curing-your-ai-10x-engineer-imposter-syndrome/
