
One of the things that's novel about the AI moment is that we can now talk to our software and tell it what we want. Not all software, of course! And even when established applications have some kind of chat interface, you can't ask for just anything - you can only ask for things within that application's model of the world - you can't make Photoshop tell you a story (I think - the AI age is weird).
Chatbots are more open. We can tell them almost anything, and they'll do it. Well, not quite yet - you can't really just ask for the UX to change - but with all of the code, artifacts, image generation and more, we're getting very close to "just tell the LLM what you want and it will do it, within reason".
This is a little like the magic search box or the location bar in the browser. Ask for something and the internet delivers it to you. It's not quite as flexible (yet), but it's so much more open than, say, a CD-ROM encyclopedia was, pre-internet.
So, more and more activity is going into chat interfaces, just like it did into the browser. Which means the services those chat interfaces reach are becoming the new websites. Companies don't want this - they want to "own" the user experience and have users come to them to chat directly.
I suspect this isn't going to work out. The main LLM interfaces are building, and will continue to build, context about the user. That context isn't just valuable to them (if it were, this would be a simpler story) - it also saves the user time. If I've spent a bunch of time developing an application in, say, Claude Code, I don't want to have to explain all of that again to the Jira chatbot. I want Claude to interact with the Jira service (sorry, Jira, I love you, it's just an example) on my behalf.
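For what it's worth, the plumbing for "my assistant talks to your service" mostly already exists - the service just has to expose something an assistant can call. Here's a minimal sketch in Python: the tool definition and names like create_jira_issue and the environment variables are hypothetical, made up for illustration; the endpoint shown is Jira Cloud's public REST v3 API, but treat the details as a sketch, not a recipe.

```python
# Sketch: a tool an assistant could call to file a Jira ticket on the user's behalf.
# The schema below is roughly the shape tool definitions take in current assistant APIs;
# everything named here is illustrative, not a specific vendor's contract.
import os
import requests

JIRA_BASE_URL = os.environ["JIRA_BASE_URL"]    # e.g. https://your-domain.atlassian.net
JIRA_EMAIL = os.environ["JIRA_EMAIL"]
JIRA_API_TOKEN = os.environ["JIRA_API_TOKEN"]

# What the assistant sees: just enough to say "file this as a ticket".
create_issue_tool = {
    "name": "create_jira_issue",
    "description": "Create a Jira issue in a given project.",
    "input_schema": {
        "type": "object",
        "properties": {
            "project_key": {"type": "string"},
            "summary": {"type": "string"},
            "description": {"type": "string"},
        },
        "required": ["project_key", "summary"],
    },
}

def create_jira_issue(project_key: str, summary: str, description: str = "") -> str:
    """Handler the assistant invokes when it decides a ticket should exist."""
    resp = requests.post(
        f"{JIRA_BASE_URL}/rest/api/3/issue",
        auth=(JIRA_EMAIL, JIRA_API_TOKEN),
        json={
            "fields": {
                "project": {"key": project_key},
                "summary": summary,
                "issuetype": {"name": "Task"},
                # Jira Cloud's v3 API expects the description in Atlassian Document Format.
                "description": {
                    "type": "doc",
                    "version": 1,
                    "content": [{
                        "type": "paragraph",
                        "content": [{"type": "text", "text": description or summary}],
                    }],
                },
            }
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "PROJ-123"
```

The point isn't this particular API - it's that once a service is callable like this, whichever assistant already holds your context can use it on your behalf, instead of making you re-explain everything in the service's own chatbot.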
What if Jira doesn't want that, and blocks Claude? I think very quickly that's going to feel like the "walled gardens" of the early internet. It worked, a bit, for a time, but … well, let's talk to AOL about it…wait…where's America Online? Oh.
Early in the internet era, a reporter said “I don’t think Google Docs will work. What if I’m not online?” My answer was that, in a few years (this was 2006), our word for any device not connected to the internet would be “broken”, not “offline”. Which, more or less, it is.
The same thing is going to happen around AI. A service that can't be reached and managed effectively by your preferred AI assistant will seem either invisible or broken. Either way, it'll be frustrating. If you're building a service like that, users will go find another solution. And I wouldn't assume you have long - we are now in the era where cloning something and moving data is mostly a matter of dollars and a bit of cleverness, and the amount of each you need is shrinking rapidly.
Chat is the new browser. We want to talk to our software, tell it what we want, smoothly. Anything else is going to be broken or invisible, soon.
Bonus thought of the week. I use ChatGPT to generate the header image and promotional text for LinkedIn. I've been doing it so long now that the entire prompt is just: "Hi there! It's friday. Ready? Here's the blog:" plus the contents for the week.
Memory and context are so important. Most builders are STILL making users do too much work for the computer. Also, if any OpenAI folks read this: ok, cool, you went from “nothing” to “all or nothing” on memory. Can we have proper sharing, containers, ACLs etc now? Happy to show you an example you can crib from (snark).