A new coding regime is being created
I started writing code on the desktop, like many in my generation. That was ‘one person, one app, one machine, usually one thread’ - total control. The resources were limited and so our ambition had to be too, but it was fairly easy for an individual to understand a great deal of what was happening in a program.
Then the internet showed up. The definition of application itself changed - now it wasn’t something delivered on a disk, it could be a web site, a search engine, a document editor, an ‘everything store’, a global connector, a messaging system and on and on. In this world, every constraint of the last one was up-ended: many users at the same time (sometimes interacting even!), many machines, many ways of connecting to the app, everything distributed and multi-threaded.
That shift was painful and took a bunch of years to understand. It came with new problems, like data center and machine outages, authentication, network latency and reliability issues, database scale, and more. All of the platforms we developed on, from the browser, to the network stack and backbones, to the databases, to the app servers and debuggers were being invented and revised in real-time as the entire industry tried to figure out how to go from waterfall processes that shipped physical media to…something else? Which wound up being Agile, and then things like CI/CD.
Along the way, all kinds of things we take for granted were invented: dynamic languages like JavaScript, containerization and software-defined data center (SDDC) techniques, NoSQL databases, all kinds of remote debugging and monitoring, and more. Mobile connected devices showed up in the middle of this, just to make things more confusing and fun! There were lots of debates about best practices, lots of competition between tools, techniques and frameworks.
We are here again, with LLM (and more broadly, probably AI) based coding. Everything is up for grabs - what’s an application now? We are building things that are mixtures of code, document, conversation and model that seem like applications. How do we debug and monitor things like “is the agent nice and helpful?” What things do we care about for reliability and regression? How do we mix declarative code with LLM inference in repeatable ways? How do those codebases get organized and maintained over time (I can’t wait for the first “semantic mudball” trainwreck - it’s coming). I’m not even sure we understand all of the questions yet - we are still to a large degree building with reflexes and models from the old paradigm (which happened in the internet transition too).
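To make the “repeatable ways” question concrete, here is a minimal sketch of one pattern teams are experimenting with: wrap nondeterministic inference in a deterministic contract by constraining the output to a schema, validating it, and retrying on garbage. Everything here is hypothetical - `call_model` is a stand-in for whatever inference API you actually use, and the stub deliberately simulates a flaky first response.

```python
import json

def call_model(prompt: str, attempt: int) -> str:
    # Stub standing in for a real LLM call. It simulates a model that
    # returns chatty, malformed output on the first attempt and valid
    # JSON on the second - the kind of flakiness the wrapper absorbs.
    if attempt == 0:
        return "Sure! Here is the data: sentiment=positive"
    return json.dumps({"sentiment": "positive", "confidence": 0.9})

def classify(text: str, retries: int = 3) -> dict:
    """Give the caller a repeatable contract: always a dict with known
    keys, or an exception - never raw, unpredictable model text."""
    prompt = f"Classify the sentiment of: {text}. Reply as JSON."
    for attempt in range(retries):
        raw = call_model(prompt, attempt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry instead of crashing
        if {"sentiment", "confidence"} <= result.keys():
            return result
    raise ValueError("model never produced schema-valid output")

print(classify("I love this"))
```

The interesting part isn’t the code, it’s the boundary: declarative code on the outside, inference on the inside, with validation and retries mediating between them. Where that boundary should live is exactly one of the open questions.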
I don’t have much advice other than: don’t get too attached to how you are doing things today, and learn, listen, and experiment as much as you can. Inevitably, some teams will figure out parts of this, and there will also be false starts. You can’t expect certainty and you can’t expect this to be quick or easy. We are in for a messy few years as we, collectively as an industry, begin to understand how best to build new forms of software from all of this. Personally, I find it super exciting and fun!