How AI might not reduce engineering OPEX
One of the stronger motives for introducing AI into engineering organizations is the foreseen reduction in development costs. I’m not aware of a single CFO who doesn’t drool over the anticipated headcount reduction that AI is expected to enable. Claude Code writes code so much faster, so how could it possibly not result in fewer engineers on the payroll?
There are a few ways in which this logic can fail. In one scenario, the use of AI for engineering results in more engineers employed in shepherding it. In another, the cost of the compute resources needed to fuel AI exceeds the salaries of the engineers it was meant to replace. But today I’d like to discuss yet a third way in which the OPEX reduction dream might end up not materializing. It boils down to our treatment of the amount of work required to produce a product as if it were a given constant, when in reality it is the constantly varying outcome of a market equilibrium.
Let us imagine that Apple needs X engineer-days per year to produce a new model of an iPhone. Why is it so? Apple could easily get away with 0.5 X engineer-days per year if it were willing to release a new model once every other year rather than once every single year. Apple chooses to ship a new model every single year because it can, because it increases sales enough to make it worthwhile, and most importantly — because that’s what Samsung does. Apple spends X engineer-days per year and not 0.5 X not because X is what it absolutely takes to make any iPhone, but because this is what it takes to make an iPhone that will stay in the market in light of competition, which is also willing to pay its own X.
Once both Samsung and Apple have AI (assuming for simplicity that it’s the same AI with the same capabilities), they can both decide to cash out on the cost reduction, which is the equivalent of both deciding to release a phone every other year, or they can reinvest the savings into improving their edge against the other.
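The choice above has the shape of a prisoner's dilemma, which can be made concrete with a toy payoff matrix. The numbers below are purely illustrative assumptions (not from any real market data): mutual cashing-out pockets the savings for both, but a firm that reinvests while its rival cashes out captures market share, and mutual reinvestment burns the savings on the same competitive edge as before.

```python
# Hypothetical two-firm game: each chooses to "cash_out" the AI savings
# or "reinvest" them in faster shipping. All payoff numbers are invented
# for illustration; only the ordering of outcomes matters.
payoffs = {  # (firm A's choice, firm B's choice) -> (A's payoff, B's payoff)
    ("cash_out", "cash_out"): (3, 3),   # both pocket the savings
    ("cash_out", "reinvest"): (0, 5),   # rival ships faster, takes share
    ("reinvest", "cash_out"): (5, 0),
    ("reinvest", "reinvest"): (1, 1),   # savings spent, relative edge unchanged
}

def best_response(options, rival_choice, me):
    """Pick the option maximizing this firm's payoff given the rival's choice.

    `me` is 0 for the row firm and 1 for the column firm.
    """
    def my_payoff(choice):
        key = (choice, rival_choice) if me == 0 else (rival_choice, choice)
        return payoffs[key][me]
    return max(options, key=my_payoff)

options = ["cash_out", "reinvest"]
# Whatever the rival does, reinvesting is the better reply for either firm,
# even though both would jointly be better off cashing out:
assert all(best_response(options, r, 0) == "reinvest" for r in options)
assert all(best_response(options, r, 1) == "reinvest" for r in options)
print("dominant strategy for both firms:", "reinvest")
```

Under this (assumed) ordering of payoffs, reinvesting dominates for each firm individually, which is exactly why the cost reduction never shows up on either payroll.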
I am making an assumption that AI is a tide that lifts all boats to similar effect; that it plays no favorites. It is possible, and eventually likely, that one company will make more of AI than the other, but this is not clearly the case at the moment. So far, players in industries that use AI for code generation use more or less the same tools, and even those who build their own tooling rely on a handful of common LLMs under the hood.
I believe it is reasonable to expect that the higher efficiency of engineering work thanks to AI will end up being used for faster engineering and greater competitiveness, rather than put towards reducing costs. Competition may lead vendors to prefer directing the savings into producing yet more features we don’t need over, say, letting people go and thereby handing market dominance to the vendors who don’t.