For the past several months, I’ve been working on building code around LLMs. Not “here’s a hand-crafted prompt that does a cool party trick”, but really confronting what it takes to build real, repeatable, robust programs on top of these models, using multiple inference calls across multiple models. These models have interesting properties, limits, and capabilities.
Love this articulation of the process — more a negotiation than a dictation. This seems like progress, even though the outcomes are much more variable.
In a way, we've spent 30 years trying to figure out how to tell the machine what to do, and now we've come to the moment where the machine is asking for more agency to work things out for itself — and this is a logical and necessary evolution to higher forms of computing.