Vibe coding is all the rage these days. Just tell the LLM what you want, and code pops out! I think this is real, and it’s an important shift in how software will be built in the future. It completely changes what code means - now it’s a transient artifact, like object code, not something permanent. The first-class object is now the spec itself. This has some interesting implications, but it also has real challenges, chief among them that the code generated is often very sloppy, long, and fragile. I want to write a bit about both and then describe a different approach that my teams call “Lego coding” that produces much better code and is a much more robust and scalable process.
One interesting implication of “spec first” coding is that, if you do it right, you get the same kind of timeline you get in “parametric” apps like Fusion 360. You can go back to an early point in the spec timeline, change it, and have the code-generation system “re-solve” and produce a new program. This is refactoring, but lifted to the level of semantics - much faster and better (again, assuming you can get good code generated).
It’s also changing our relationship to design documents - they now feel hazardous to us, almost like pieces of litter. Why freeze something into a static form when we can go back to the assistant we’re talking with and discuss the spec directly?
But there is a challenge with just writing a spec and saying “make it so”. A good model can generate a fair bit of code this way (we’ve had code artifacts of over 50K lines generated like this), but it tends not to be great code. The model can fix bugs to a degree, but changes accrete, and the code gets messy and bloated.
So how do we solve this? Remember the idea of recipes? If you don’t: the idea is that if you can describe the high-level “metacognition” of a task to a model in code, it often does better at the task. Models don’t have reflexive, self-aware memory the way we do, so they tend to do poorly at metacognitive tasks. Being explicit about how to tackle a long problem helps keep the model “on the rails” and gives a better result. So, instead of saying “I want you to write a book”, we describe the process for writing a book, step by step, and then the recipe system walks the model through the steps one at a time.
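To make the idea concrete, here is a minimal sketch of what a recipe might look like as code. The Recipe and Step types, the run_recipe runner, and the call_model hook are all illustrative names I’m assuming for the example, not our actual system.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    prompt: str  # the instruction for this one step


@dataclass
class Recipe:
    goal: str
    steps: List[Step]


def run_recipe(recipe: Recipe, call_model: Callable[[str], str]) -> List[str]:
    """Walk the model through the steps one at a time, carrying recent output forward."""
    outputs: List[str] = []
    for step in recipe.steps:
        context = "\n\n".join(outputs[-2:])  # keep the prompt small
        prompt = (
            f"Overall goal: {recipe.goal}\n"
            f"Current step: {step.name}\n"
            f"{step.prompt}\n\n"
            f"Output from earlier steps:\n{context}"
        )
        outputs.append(call_model(prompt))
    return outputs


# "Write a book" as an explicit process rather than a single giant request.
book_recipe = Recipe(
    goal="Write a short technical book on recipe-driven coding",
    steps=[
        Step("outline", "Produce a chapter-by-chapter outline."),
        Step("draft", "Draft each chapter from the outline."),
        Step("revise", "Revise the drafts for consistency and tone."),
    ],
)
```

The point is that the process itself is written down and executed one step at a time, so the model never has to hold the whole task in its head.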
In our case, we started by building a “recipe to build recipes”. This is pretty simple, but it gives us a consistent way to turn our own descriptions of processes into something reliably usable: we can talk to the model about a process and have the recipe builder generate a reusable recipe from that conversation - very useful. So what did we do with it?
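Continuing the sketch above (same hypothetical Recipe and Step types), the recipe-builder could itself be just another recipe: its steps take a prose description of a process and turn it into a structured recipe the team can reuse. Step names and prompts here are placeholders for illustration.

```python
# The recipe-builder is itself a recipe whose output is another recipe.
recipe_builder = Recipe(
    goal="Turn a prose description of a process into a reusable recipe",
    steps=[
        Step("extract_steps",
             "List the discrete steps in the process described below, in order."),
        Step("tighten_prompts",
             "Write a precise, single-step prompt for each listed step."),
        Step("emit_recipe",
             "Emit the result as a recipe definition: a goal plus ordered steps."),
    ],
)
```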
Here’s where Lego comes in. Imagine trying to build a 1,000-piece Lego kit if all you had was a huge pile of pieces on the floor and a picture of the finished model. Hard to do! This is what vibe coding looks like from the model’s perspective as tasks get larger - it retains nothing beyond what’s in the current prompt, so asking it to build a large program all at once is like asking it to assemble that kit in one single, continuous pass.
But Lego knows this, so they give you a nice “recipe” book: build this set of sub-assemblies, in this order, then put them together. That’s what we’ve done for coding - the process is captured in a series of recipes. The first step is breaking the problem down into pieces we are confident the model can handle successfully. Then a set of recipes executes the kind of flow chart that would be familiar to any programmer - write tests, build modules, do integration, and so on. Changes to the spec re-run this set of recipes and regenerate what’s needed. The model sees small parts of the problem in a managed way and succeeds at each of them.
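A rough sketch of what that flow could look like, under the same assumptions as the earlier snippets - the decomposition prompts, function names, and the call_model hook are placeholders, not our actual recipes:

```python
from typing import Callable, Dict, List


def decompose_spec(spec: str, call_model: Callable[[str], str]) -> List[str]:
    """Break the spec into module-sized pieces the model can handle in one pass."""
    response = call_model(
        "Break this spec into modules small enough to implement reliably in one "
        "pass. Return one short module description per line.\n\n" + spec
    )
    return [line.strip() for line in response.splitlines() if line.strip()]


def build_module(module_spec: str, call_model: Callable[[str], str]) -> Dict[str, str]:
    """Tests first, then an implementation written against those tests."""
    tests = call_model("Write unit tests for this module spec:\n" + module_spec)
    code = call_model(
        "Implement this module so the following tests pass.\n"
        f"Spec:\n{module_spec}\n\nTests:\n{tests}"
    )
    return {"spec": module_spec, "tests": tests, "code": code}


def build_from_spec(spec: str, call_model: Callable[[str], str]) -> List[Dict[str, str]]:
    """Decompose, then build each module; a real system would diff the spec
    and rebuild only the affected modules on a change."""
    modules = decompose_spec(spec, call_model)
    built = [build_module(m, call_model) for m in modules]
    # Integration would follow here: wire the modules together and run the
    # combined test suite. Omitted to keep the sketch short.
    return built
```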
This took one of our core pieces of code down from 50K+ lines of sloppy code to around 2K lines of well-written, easy-to-modify code.
The nice thing is that this approach also scales with model capabilities. If the model can handle something larger in one shot, great! Then we can build bigger and more complex things with it.
Another really interesting aspect of the recipe idea is that when one engineer on the team finds a good approach and has the recipe builder make it explicit, the rest of the team can make use of it. This means that good cognitive strategies are now explicit, first-class, sharable objects in a team or organization. Everyone can move at the speed of the smartest person in the room.
There are big changes coming for coding. Systems like the one we are experimenting with are making teams much more effective, and “flattening” the traditional roles of product manager and designer down closer to engineer - anyone can write the spec or make use of the recipe library, and we are seeing more and more of this on our team. Having a live spec to work from also means the lead engineers are less of a bottleneck, and the team can go faster. And sharing best practices in an actionable way via recipes is really powerful.
Coding with AI is very real. LLMs are likely going to take on the role of compilers, and source code will start to fade into the background. That doesn’t mean we can just vibe/yolo everything. There is still value in how the systems are used and how the code is created and maintained.
If you want to read more from one of the engineers, he wrote about it here.
Embracing this approach reduced the time I spent correcting my coding models.
https://www.aitidbits.ai/i/162210580/the-building-block-approach-break-tasks-into-atomic-components
Love this - thinking about how we implement it as part of our platform engineering push.