There is a lot going on right now! If you read any news about AI at all, you’ll see all kinds of new tools, debates about whether AI is really useful, whether something is AGI or not[1], and so on.
I tend to think of things in terms of analogies. Each of these is, by definition, just an approximation. You might think of them as shadows thrown on the wall when you hold a complex object in different poses (in fact, that’s a good example of dimensional reduction, which is one of the analogies!). That means no one of these really tells you the whole story, but each one can be a helpful tool for understanding the moment we are in. I pull these back out from time to time to think about new developments, and I thought it might be useful to put them all in one place. Well, many of them at least, not all.
Darwin
There’s a feeling of discomfort that I think is floating around communities that spend a lot of time thinking about AI. Not the doomer feeling per se, but maybe something underneath that. I think it echoes Darwin.
The strongest reaction to Darwin publishing his theory wasn’t that it was wrong per se, but that we really didn’t want it to be right. Until the theory of evolution, we could look at ourselves as truly exceptional and apart from the animal kingdom - we did things like wear clothes and build empires that no other animal did, so clearly we were different and special. And then Darwin showed us that, no, actually we are just animals - apes, specifically - and we can see a continuous line to how we came into existence. This challenged all kinds of beliefs about ourselves and our place in the world, how we got here, and how we fit in.
And now AI, or LLMs, are doing that for our minds, too. Or at least for some part of them - LLMs are showing that what we thought was mysterious and special about how we think and generate language…might not be. We don’t have the whole picture yet (just as we didn’t really understand the fine points of evolution when On the Origin of Species was first published), but we now have an N of more than 1, and we are seeing, more and more, that things we thought were special and hard…aren’t. Sometimes surprising things, like image generation, or some kinds of reasoning. Some things are still mysteriously hard to do, but it feels like we will get there too.
Just like in Darwin’s time, the door has been opened, the bell rung. It will take time and turmoil to process, but there’s no going back. AI is stripping away the mystery around how our minds work—or at least the parts of our minds we once thought were uniquely human. The cognitive dissonance is palpable: if machines can think (or “think”—cue debates over dimensional reduction!), what exactly is left for us?
We’ve been here before, of course. Humanity has a remarkable track record of not wanting to be dethroned. And yet, here we are again, trying to reconcile our imagined exceptionalism with reality. History tells us this: we’ll find a new way to define “special.” But right now, we’re in the messy middle—hanging on to the idea that our minds are somehow magical even as the evidence piles up against it.
Motion and Revolutions
Another analogy on my mind lately is this: the first industrial revolution was about producing surplus physical energy; this one is about producing surplus cognitive energy.
I think that’s true (ish) but it’s more complex than that. It took a long time to figure out how to control, scale, manage, and shape that physical energy. The early phases were really simple motions like hitting things (steam hammer) and pumping. I think we are likely in a similar phase with AI - summarization and chatbots are kind of the “hitting and pumping” motion equivalent, and you can have as much of that as you want. What you can’t have (yet) is more complex and diffuse things like autonomous agents, or new industries we haven’t invented yet. But just as they did in v1 of the industrial revolution, these things will arrive in v2, now that the ball has started rolling.
A New Programming Stack
This isn’t exactly an analogy, but it’s a historical comparison at least. Going from Desktop to Cloud involved a rethinking of the entire software world: teams went from waterfall to agile, languages from compiled to interpreted, applications from single executables to massively distributed systems, databases from SQL to NoSQL, even software products from packaged goods to services and ads. The stack had the same pieces of <team, language, tools, data, platform, product>, but all of the individual pieces of the stack changed radically.
This is likely going to happen (is happening) with AI: how teams work, the language they work in, the tools those languages and work patterns need, the products, experiences, and economics those patterns support - all of this will change and evolve. The important thing here is to keep in mind that we are in the middle of a shift - the last one took about 15 years, from the early browser to the smartphone. This one will be faster, but likely not instant, and there will be false starts. But right now you should be re-examining every part of how you build and consume technology, and that will likely be an evergreen task for at least the next decade.
Dimensional Reduction, Productivity and AGI
This is the one that gets me the most. Dimensional reduction is when you take something that is complex and high-dimensional and reduce it down to a smaller number of dimensions (usually one) to describe it. We do this all over the place in our culture, tech, and politics. Is someone good or bad (one dimension), or are they complex (good on one dimension, like charity, but bad on another, like personal interaction or ethics)? Is a hot dog a sandwich? Yes or no is one dimension, but there are more dimensions than that (type of bread! shape! how it’s eaten!) that we are reducing.
The problem with dimensional reduction is that it involves an arbitrary choice in how you weight the dimensions (I could put the math here but I’m lazy - ask an AI to explain it lol). So maybe I think the “can you hold it in one hand” dimension is most important, so of course a hot dog is a sandwich, but maybe you think the “does it have slices of bread on both sides” dimension is more important, and you say it isn’t. There’s no way to resolve this - there are no strict orderings in multidimensional spaces - so the reduction just results in an argument.
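To make that concrete, here’s a toy sketch in Python - the dimensions, scores, and weights are all invented for illustration, not real measurements - showing how the same two items can swap places depending entirely on which weights you pick when you collapse everything down to one number.

```python
# Toy illustration of dimensional reduction via a weighted sum.
# All dimensions and scores below are made up for the example.

hot_dog = {"one_handed": 0.9, "bread_both_sides": 0.1, "fillings_between_bread": 0.8}
blt     = {"one_handed": 0.4, "bread_both_sides": 1.0, "fillings_between_bread": 0.9}

def sandwichness(item, weights):
    """Collapse many dimensions to a single number using one arbitrary set of weights."""
    return sum(weights[dim] * score for dim, score in item.items())

# You care most about "can I hold it in one hand"...
your_weights = {"one_handed": 0.7, "bread_both_sides": 0.1, "fillings_between_bread": 0.2}
# ...I care most about "slices of bread on both sides".
my_weights   = {"one_handed": 0.1, "bread_both_sides": 0.7, "fillings_between_bread": 0.2}

print(sandwichness(hot_dog, your_weights), sandwichness(blt, your_weights))  # 0.80 vs 0.56: hot dog "wins"
print(sandwichness(hot_dog, my_weights),   sandwichness(blt, my_weights))    # 0.32 vs 0.92: BLT "wins"
```

Same scores, opposite conclusion - the only thing that changed was the arbitrary choice of weights.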
“Is it AGI” is the grandmama of this kind of argument. Intelligence is SO complex. There are so many dimensions (evals), and ways to think about it. And we are trying to map this complex object onto another complex object, the totality of human behavior. We are saying “is the dot that is projected onto the ‘is it intelligent’ line by AI to the left or right of the one projected by the average of all humans?” It’s hard! Arbitrary! We can argue forever.
It’s more useful (I think) to understand and examine the dimensions themselves. We might not think something is AGI, but we can talk more intelligently about how long it can perform some class of tasks on its own. That’s useful and, if we pick the tasks well, pragmatic.
In general, look for arguments that are really people doing their own dimensional reduction and then arguing for their arbitrary choice of weights. It’s fine to do if you like, but there are other ways to decompose problems like this that might be more useful.
Language as Weapon and Tool
One last thing - Tyson Dowd (@tysondmsft) made an interesting point last week in the comments, to the effect that language was one of our first technologies (probably after fire, which is what gave us brains enough to have language, it seems). And now that technology is becoming “activated” via LLMs - it’s now a tool (or maybe a weapon) that can reach into the real world. It always was, in some ways - memes, culture, and stories have always been a huge part of how humans get things done, and they are probably what truly sets us apart from everything else. But now that connection to the real world is becoming much more explicit and direct - language is almost an object unto itself now, a player in the real world.
I don’t know what to make of that, but it’s a very interesting observation.
[1] If we have AGI now, why are the naming conventions for all the models so bad? I think the first proof someone has true AGI should be using it to fix the names (snark).