Dave Rupert, Write about the future you want:

There’s a lot that’s not going well; politics, tech bubbles, the economy, and so on. I spend most of my day reading angry tweets and blog posts. There’s a lot to be upset about, so that’s understandable. But in the interest of fostering better discourse, I’d like to offer a challenge that I think the world desperately needs right now: It’s cheap and easy to complain and say “[Thing] is bad”, but it’s also free to share what you think would be better.

Co-signed.


This is not my beautiful agent-driven economy

We’re in a weird agent coding moment here. AI maximalists are putting wild, often incoherent, projects out there. Some are seemingly replacing their own writing with LLM-generated output. (I don’t understand the temptation there at all. The interesting thing about writing, as with any performance art, is risk!) Others are merely annoying the heck out of each other with crustacean emoji.

It feels like we’ve got two groups pursuing entirely different paths to the next level of agent coding. Both camps are not even wrong:

  • The open problems path: we need to figure out correctness, guardrails, what this means for teams, etc. Only then can we be certain we’re not just burning tokens in pursuit of software we’ll have to clean up or write off as brown-field development in the next 3–6 months. This isn’t even the AI safety crowd of 12–18 months ago. I would have called these folks pragmatists just a couple of months ago.
  • The maximal/acceleration path: figure out how to get the agents working faster so that they solve the problems for us. Build the system that solves the problem, mostly by throwing more agents and tokens at it. These aren’t the (non-political) accelerationists who foresee AI and blockchains coming together to let people nope out of civilization and run their own AI-enabled private cities and enclaves. I would have called these folks the hype train conductors a few months ago.

I’m inclined to pursue open problems (validation and verification, collaboration, guardrails) before dialing up the output and delegation (orchestration and parallelism). The silver lining is we’re going to find out (possibly quickly) which camp is relatively right (Has Some Good Points) in addition to not even wrong. 🙃

Below, some solid ideas others have had about this tension between “well, there are open problems…” and “…jump in with both feet anyway!”


Justin Searls asked when we would see this kind of surplus of software production and deficit of taste. It seems like it’s just around the corner, and the following two links are early recognition of it. In other words, he was only off by a few months.

Armin Ronacher, Agent Psychosis: Are We Going Insane?:

The thing is that the dopamine hit from working with these agents is so very real. I’ve been there! You feel productive, you feel like everything is amazing, and if you hang out just with people that are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy.

It takes you a minute of prompting and waiting a few minutes for code to come out of it. But actually honestly reviewing a pull request takes many times longer than that. The asymmetry is completely brutal. Shooting up bad code is rude because you completely disregard the time of the maintainer. But everybody else is also creating AI-generated code, but maybe they passed the bar of it being good. So how can you possibly tell as a maintainer when it all looks the same? And as the person writing the issue or the PR, you felt good about it. Yet what you get back is frustration and rejection.

Nate Berkopec:

This is what I’ve been saying since the summer. We are in the middle of software engineering’s productivity crisis.

LLMs inflate all previous productivity metrics (PRs, commits) without a correlation to value. This will be used to justify layoffs in Q1/Q2 of this year.


2026 gonna be wild, and we’re only one month in.