I learned a lot of things from “Complexity and Strategy” by Terry Crowley:
In Fred Brooks’ terms, this was essential complexity, not accidental complexity. Features interact — intentionally — and that makes the cost of implementing the N+1 feature closer to N than 1.
In other words, the cost of changing a product is directly proportional to the size of N (features, requirements, spec points, etc.) for the system that expresses that product. You may find practices that multiply N by 0.9 so you go a little faster. You may back yourself into a corner that multiplies N by 1.1 so you go a little slower. But, to borrow again from Fred Brooks, there is no silver bullet. Essential domain complexity is immutable unless you reduce the size of the domain, i.e. cut existing features.
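You can make this concrete with a toy model. This is my own sketch, not Crowley’s math: assume each new feature must account for its interactions with every feature already shipped, at some constant per-pair cost. Then the marginal cost of feature N+1 grows with N rather than staying constant.

```python
# Toy model of essential complexity (an illustrative sketch, not taken
# from the article): feature k pays a base cost of 1 plus an interaction
# cost for each of the k-1 features that already exist.

def total_cost(n_features: int, interaction_cost: float = 1.0) -> float:
    """Cumulative cost of shipping n_features under the toy model."""
    return sum(1 + (k - 1) * interaction_cost for k in range(1, n_features + 1))

# The N+1th feature costs closer to N than to 1:
for n in (10, 100, 1000):
    marginal = total_cost(n + 1) - total_cost(n)
    print(f"feature {n + 1} costs {marginal:.0f} units")  # grows with n
```

Under this model, a practice that shaves 10% off the interaction cost helps, but the marginal cost still scales with N. Only deleting features actually shrinks it.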
In the long run, not even fancy new technologies reduce your multiplier:
This perspective does cause one to turn a somewhat jaundiced eye towards claims of amazing breakthroughs with new technologies... What I found is that advocates for these new technologies tended to confuse the productivity benefits of working on a small code base (small N essential complexity due to fewer feature interactions and small N cost for features that scale with size of codebase) with the benefits of the new technology itself — efforts using a new technology inherently start small so the benefits get conflated.
Lastly, this is a gem about getting functionality “for free”:
So “free code” tends to be “free as in puppy” rather than “free as in beer”.
All free functionality eventually poops on your rug and chews up your shoes.