The hard part isn't doing the work now; it's choosing the work.
A day in the life of a relapsed agentic coder
This blog used to be about things like leadership and system patterns. I guess AI has taken it over, and I’m a bit sorry about that. I use this writing to make myself more present in the moment during the week - to observe patterns and think about things I might write about - and most of what I have been doing for the last little bit is thinking about AI, and then more recently, doing a lot of coding with the tools we’ve built. So, it’s all AI now, at least for a bit.
But I actually don’t think that’s bad, even if you aren’t a programmer and don’t care. The coding world is showing how the AI world will likely unfold for other uses and professions. “Code goes first” because it’s easier for the models to deal with and the programming community (well, some of it, anyway) is already primed to understand and use it.
When I sat down to write this, I had a bit more trouble than usual - there are a lot of thoughts in my head right now! So I am going to do something a bit different. Usually I just have, and write, one thought a week. But I think there are a bunch of things that fit together usefully, so I am going to indulge in a longer post this time. Let’s start with a good old “innovation principle” that I used to write about a lot.
Why Not/What If
Something I learned long ago when doing Writely (which became Google Docs) is that something really strange happens when a truly disruptive idea emerges. These ideas are really obvious in retrospect, so at the time people must have been really happy to see them, right?
Actually, no. What really happens is that people start by feeling an emotion - there is an uncomfortable feeling when your world view is challenged. There are essentially two ways to deal with this emotion - you can choose to think that your world view is right, and the new idea is wrong, or you can choose to update your world view to accommodate the new idea.
These two states manifest in either telling “why not” stories - inventing reasons why the new thing is wrong - or “what if” stories - imagining what happens if it isn’t. Ordinary changes don’t get this behavior; you get a much broader spread of reactions. But big category changes get “I love it” / “I hate it” and not much in between.
We are seeing this in the agentic coding world. There are still lots of people who strongly believe it won’t work, it’s impossible to build code that way, and so on. And then... I spend time in communities of people who are very successfully using these tools, all day long (me included). It’s like in the early days of Writely, when we had about a half million happy users, and people would still tell me that the browser could never be used to build a satisfying app.
If you are skeptical, the only thing I can say is: try to examine that skepticism as honestly as you can, and talk, with an open mind, to people who aren’t skeptical, if you can find them.
Internet Stages
The other thing that I think is going on with AI broadly, and also with coding, has to do with timing. People think the internet happened all at once, but it didn’t - the browser was 1993, the dotcom crash was 2000, Writely was 2005, and the iPhone wasn’t even until 2007! We had a lot of tools, infrastructure, best practices and work to do to make the internet what it is today.
The last few years of AI have been amazing but there have been problems. We’ve had the idea of agents, but they haven’t really worked as well as we want. Models are good but not great, and still too inaccurate for meaningful work.
It’s taken a while to fix enough of those issues, and to understand how to make use of the increased capabilities. But now, we have agentic coding frameworks that are astoundingly good - it feels like AI is coming into focus, the way the internet did after the first few rough years. This is a fairly common pattern with new tech - we get the idea before we get the robust implementation. AI coding is showing that maturation process now, or at least a next step in it.
Attention Saturation
This is another interesting thing that’s starting to emerge. Coding agents are true agents - you can easily give them an instruction, and they will do a fairly large amount of work for you. So, it’s very easy (if you have the budget) to start a whole bunch of new coding tasks - if you have an idea, it’s just a minute or so to start up a new agent.
But you still have to pay attention to the output. So you wind up saturating yourself - you will have as many things going on as you can pay attention to. Which means your co-workers will, too. This makes it hard to collaborate - everyone is busy all the time. I wrote about this last week, and I can say - it’s only getting worse! It’s very much the case that, as I get better at using the tools, and build new tools to build my own leverage, I just do more work, not less!
Maybe that’s just how humans are. We have to do much less work to feed ourselves than we did 200 years ago. We don’t spend that efficiency on relaxing; we spend it on ever richer and more complex lives. Most of us live lives that would have been the envy of our ancestors. I’ve suspected this will be the case with AI - it won’t replace jobs, it will change them, and we might well wind up being more busy, not less. So far, that’s very much my personal experience.
The hard part is picking the work
Finally, this goes along with the attention issue above. Because it’s so easy to start things now with these tools, you have to have good taste in what you start. It’s not hard to do work now, it’s hard to pick what work to do. I suspect this is a deep truth of the AI age - as agents get more powerful and cheaper, the actual mechanics of the work will matter less. What will matter is the taste and judgement of deciding what to do.
We could take calories as an analogy here. As food got cheaper, safer, and more accessible, the key skill stopped being the ability to find food (almost everyone can do that) and became deciding what and how much to eat to stay healthy. As thinking gets cheaper, safer, and more accessible, perhaps the same will be true.
All of this was about coding, but we are using these tools for other things. I built a whole website talking about our tool, using it, in just a few minutes this week: Amplifier Stories. This is the only way I do them now. We build research papers with it. It’s usually the first place I go to do any task, and often, if it can’t do what I want, I can build a tool for that.
Code goes first. Don’t be lulled into thinking “that’s all there is” with AI. It’s still moving. The coding communities are absolutely on fire at the moment.
I haven’t built any automation for these blogs because I like writing them. I was pointed at a substack this week that was obviously AI written with a bad model and I hated it. I write these to help myself think, so I doubt I will ever write them with AI, other than as an occasional experiment.