From Floppies to Friendly Chameleons
how shorter feedback loops—and a dash of AI—might nudge us toward a million little apps
(A disclaimer - I write all of the words of these letters myself. I do have an LLM generate the images and promotional text for social media, so it’s read all of them. This week is a bit of an experiment - can an advanced model (o3) take a reasonably well-formed idea and turn it into a post that feels like me? I’d give this a score of maybe 80% - I don’t like the numbering, and it’s still a bit over the top, but it’s close. More interesting to me, though, is that it added a few insights that I didn’t have in the initial outline, that are very good. See if you can find them.
And I commit to you - since you spend time reading this - I will always call out when something here isn’t written by me. I spend a lot of time thinking about and writing these, because I know folks invest time in reading them. I’m not going to turn it into AI-written slop. But this is kind of a fun sample of where the state of the art is.
Comments from me are in italics. I guess this turned into a bit of a conversation between me and the LLM, which is cool)
1. A quick stroll down memory lane
If you wrote software before the web took off, you’ll remember the long pause between “ship it” and “did that help anyone?” We mailed disks or shrink‑wrapped boxes, crossed our fingers, and waited. Sometimes a year, sometimes longer. Feedback arrived in the oddest ways: a trade‑show hallway chat, a fax that had been photocopied one time too many, maybe a bug report scrawled on a green sticky note. There was a kind of charm to it—but it sure wasn’t fast.
Then the internet showed up with a fresh breeze (and faster modems). All at once, release cycles collapsed from years to months, then to weeks. At Google in the mid‑2000s we pushed several times a day. Add a single line of code, watch varz counters spike, see Borgmon trace a new squiggle—often before lunch went cold. That one‑day loop felt magical: make a change, see a real‑world response, iterate. Build–measure–learn in 24 hours instead of 24 months. (Sam: I like this phrasing)
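(For readers who never lived inside Google’s stack: varz was the internal metrics endpoint, and Borgmon the monitoring system that graphed it. Neither is public, so here’s a rough sketch of that one-line loop using the open-source Prometheus Python client as a stand-in; the counter name and the fake traffic are invented for the sketch.)

```python
# A sketch of the "add one line, get a graph" loop, with prometheus_client
# standing in for Google's internal varz/Borgmon.
import random
import time

from prometheus_client import Counter, start_http_server

# The "single line of code": declare a counter once...
feature_uses = Counter("feature_uses_total", "Times users touched the new feature")

def handle_request():
    # ...and bump it wherever the interesting event happens.
    feature_uses.inc()

if __name__ == "__main__":
    start_http_server(8000)  # counters served at http://localhost:8000/metrics
    while True:  # fake traffic; in production a dashboard scrapes and graphs this
        if random.random() < 0.3:
            handle_request()
        time.sleep(1)
```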
2. The next ratchet
Today we’re staring at another crank of that ratchet: AI. Plenty of teams are using it as a faster compiler or a helpful autocomplete. That’s useful. Faster is good. But the more interesting question, at least to me, is this:
If the cost of understanding user intent keeps heading toward zero, why keep building narrow, one‑size‑fits‑many apps? (Sam: this is a good insight and framing of the initial idea)
What if an application could stretch or shrink depending on who’s talking to it? One codebase, a thousand personal variations—perhaps even a million. Not a bundle of features hammered into a single UI, but a living thing that grows new limbs on request. (Creepy, but yes)
3. Talking to our tools
Think about the way we still interact with most software: menus, forms, and buttons that reflect yesterday’s requirements. When something doesn’t fit, we file a ticket or wait for the next quarterly update. We’re back to fax‑machine feedback, just sped up a little.
Language models suggest a gentler path: speak or type what you need, and the tool reshapes itself—sometimes in seconds. The feedback loop shrinks to a conversation. “I’m budgeting for a road trip; could you show gas costs like a progress bar?” Sure, says the system, and off it goes. No JIRA. No spec doc. Just dialogue leading directly to improvement.
That’s not science fiction; it’s already happening in the small prototype demos we’re all toying with in labs and side projects. They work well enough to hint at something bigger.
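To make that concrete, here’s a toy of what those demos tend to do under the hood. Everything in it is hypothetical: ask_llm is a stub standing in for whatever model API you have, and the spec format is invented for the sketch. The model emits a tiny declarative UI spec; a deliberately dumb renderer obeys it.

```python
# Hypothetical sketch: a user's sentence becomes a tiny declarative UI spec.
import json

def ask_llm(request: str) -> str:
    """Stub for a real model call; returns a JSON UI spec for the road-trip ask."""
    return json.dumps({
        "widget": "progress_bar",
        "label": "Gas budget",
        "value": "trip.gas_spent",
        "max": "trip.gas_budget",
    })

def render(spec: dict, data: dict) -> str:
    """A deliberately dumb renderer that only knows a few vetted widgets."""
    if spec["widget"] == "progress_bar":
        value, maximum = data[spec["value"]], data[spec["max"]]
        filled = round(20 * value / maximum)
        return f"{spec['label']}: [{'#' * filled}{'-' * (20 - filled)}] ${value} of ${maximum}"
    raise ValueError(f"unknown widget: {spec['widget']}")

spec = json.loads(ask_llm("Could you show gas costs like a progress bar?"))
print(render(spec, {"trip.gas_spent": 130, "trip.gas_budget": 200}))
```

The dumb renderer is the point: the model can only ask for widgets the team has vetted, which is what keeps the reshaping safe.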
4. A picture I keep coming back to
I picture a thin lattice—call it an intent layer—that sits between people and code. Developers provide safe building blocks (plot a chart, store a file, run a calculation). Users supply the personality, the niche, the last‑mile constraints. Internally, the model stitches blocks together in a way that feels tailor‑made. (I don’t love this section...)
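A minimal sketch of that lattice, with every name hypothetical: developers register safe verbs, and the model’s whole job is to emit a plan that sequences them. (The plan is hard-coded here; a real system would have the model produce it from the user’s words.)

```python
# Hypothetical "intent layer" sketch: developers ship safe verbs;
# the model's only output is a plan that sequences them.
from typing import Any, Callable

VERBS: dict[str, Callable[..., Any]] = {}

def verb(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a developer-provided building block."""
    VERBS[fn.__name__] = fn
    return fn

@verb
def run_calculation(a: float, b: float, op: str) -> float:
    return {"add": a + b, "sub": a - b, "mul": a * b}[op]

@verb
def plot_chart(title: str, values: list[float]) -> str:
    bars = "\n".join(f"{v:6.1f} " + "#" * int(v // 5) for v in values)
    return f"{title}\n{bars}"

# What the model might emit from "total my gas costs, then chart them by day".
plan = [
    ("run_calculation", {"a": 130.0, "b": 70.0, "op": "add"}),
    ("plot_chart", {"title": "Gas spend by day", "values": [42.0, 55.0, 33.0]}),
]

for name, kwargs in plan:
    print(VERBS[name](**kwargs))  # only registered verbs can run: the guardrail
```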
The result? Maybe “apps” become more like conversations with memory than rigid products. (…but I do love this sentence). Short‑lived micro‑interfaces surface, do the job, and dissolve. Your history with the system becomes the real artifact. (I admit: half thrilling, half unnerving. But mostly thrilling.)
5. What changes for teams?
Ship verbs, not screens. Provide solid, composable actions. Let language wrap them in UI only as needed. (Pixels are free, baby)
Look at conversational outcomes, not click‑through rates. Did the dialogue end with the user feeling “Yep, that’s what I meant”? That’s your north star; a tiny sketch of logging that signal follows this list. (Sam: maybe. I’m a bit more in the camp of “is the user getting what they want done”).
Expect messiness early on. The early web was awash in blinking GIFs. We survived—and standards followed. The same calm patience will help here. (It’s definitely learned from me, lol)
Above all, we’ll need humility. When the user can reshape the product in real time, the product team loses a bit of auteur control (that’s okay!) and gains a new job: crafting sturdy guardrails, clear values, and thoughtful defaults. (This is really the core: the patterns of software development are changing. We know that. The idea that applications might be broad rather than deep, and that a different kind of velocity and feedback loop might be emerging, is what matters here, I think).
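Here’s one hypothetical shape for that “yep, that’s what I meant” signal, as promised above; the session fields and sample dialogues are invented for the sketch.

```python
# Hypothetical sketch: score sessions by conversational outcome, not clicks.
from dataclasses import dataclass, field

@dataclass
class Session:
    turns: list[str] = field(default_factory=list)
    resolved: bool | None = None  # did the user confirm "that's what I meant"?

def outcome_rate(sessions: list[Session]) -> float:
    """Share of finished dialogues that ended with the user confirming success."""
    finished = [s for s in sessions if s.resolved is not None]
    if not finished:
        return 0.0
    return sum(s.resolved for s in finished) / len(finished)

sessions = [
    Session(turns=["show gas as a bar", "perfect"], resolved=True),
    Session(turns=["split by day", "no, per state"], resolved=False),
    Session(turns=["hmm"], resolved=None),  # abandoned; counted neither way
]
print(f"outcome rate: {outcome_rate(sessions):.0%}")  # -> outcome rate: 50%
```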
6. Why bother?
Because each step that reduces friction enlarges the circle of who can create. The web let anyone publish words. Cloud hosting let tiny teams run global services. AI, if we build carefully, might let non‑coders mint bespoke tools on a Tuesday afternoon—and discard them by dinner.
I like that future. It feels generous. More human, even. (I like this conclusion. I doubt I would have gotten to this framing of it)
7. A gentle nudge as you head back to work
Next time you reach for AI in your project, pause for a breath. Instead of asking, “How can this speed up the feature list I already have?” try “What tiny act of generosity could this unlock for each individual user?” Sometimes the answer is a brand‑new concept—one you can’t quite squeeze into a static spec sheet.
We won’t get everything right the first time. We never do. But if history repeats, the teams who dare to widen the canvas usually find the bigger opportunities. And the rest of us, happily, follow close behind—coffee in hand, graphs twitching, eager to learn from real people just a little bit faster than before.
Here’s the prompt, if you’re curious:
you've read a bunch of my essays. Here's an outline of an idea for this week. Let's see if you can write something in my voice that fills in these ideas. Be bold and creative - you are a highly experienced technologist and executive, writing about the broad, large direction the tech industry may go. You have taken smart pills today, and they are working - bring your best self! And write like me. Here's the outline. You can think about it first and ask questions if you like: Before internet, the cycle times to learn from the user were years, maybe months if you were quick. And it was super hard to even gather that data - usually you had to watch in person. At Google we could not only push daily/hourly, but we had really good access to analytics. Add a single line of code, varz gets updated, borgmon shows the graph. Get feedback from observing real interactions in an hour or a day. What does it mean to be AI centric now? Developers are focusing on doing the same thing faster with AI. That's helpful, but let's do something different. What if instead of building one narrow app, we can build one that's super wide - 100 or 1000 or a million different apps, for everyone. What if the feedback loop isn't "developer has to ask a question" but "user tells the app what to be". Why can't we talk to our applications now? Why are teams building the same kind of software but with better compilers, essentially? As the tools get better, our ambitions need to grow too. The internet removed a lot of friction on scale and also on iteration. AI is removing more and maybe giving new affordances to what we can even build.
Love this format. Somewhat confusing that some but not all italicized comments start with “Sam:”.
"What if an application could stretch or shrink depending on who’s talking to it? One codebase, a thousand personal variations—perhaps even a million. Not a bundle of features hammered into a single UI, but a living thing that grows new limbs on request. "
I love this idea - further democratizing software development. 60 years ago, it was only done on mainframes. 50 years ago, I was writing simple programs on a calculator. 45 years ago, I was writing programs on a ZX80, and shortly thereafter on an Apple II. Languages were once costly, but now they are freely available and easy to access. Spreadsheets allowed "what if?" calculations for non-programmers 40+ years ago, starting with VisiCalc and Lotus 1-2-3. There is a wealth of applications available to use.

I can easily see AI used to create new, simple applications, or even to modify OSS to be tailored to requirements. In some cases, AI builds a plug-in extension or custom library for an existing, modifiable application. In some respects, this is not new. In the 1980s, a company built a program called "The Last One" that allowed the user to build a program out of parts, like Lego for software. A decade later, visual programming came and went. Visual Basic was used to tie together different components of Microsoft products to build custom applications.

I have used LLMs to design small software functions in languages I was not familiar with. I can see scope for combining languages for different tasks, with the tricky interfaces all handled by the LLM. What would be useful is building applications that can run on a smartphone, especially to read external inputs.
Arguably, 3D printing was doing the same for manufacturing custom items and parts. It is still not easy, requiring different packages to design, build, and run a print job. Ideally, it should be much easier, with perhaps an AI acting to help with the design.
Marry 3D printing with cheap components to run software to control the printed object...
Could an AI allow you to design a functioning item to do a task that can be sent to a printer [also using off-the-shelf designed sub-components] and returned within a few days, ready to be used? [Amazon, are you listening?] I can see AI used to design biomolecules, e.g. proteins, whose sequence can be sent to existing shops for building the protein [could those machines be made as small as paper printers today?]
Speaking of paper printers, I would like an LLM to design an object to be built as origami, create a file of the pattern, and an associated instruction sheet to fold the pattern, for a regular printer.