It’s been said that what we are facing with AI right now is akin to a “second” industrial revolution. The first industrial revolution was the first time we had excess physical energy, beyond muscle (thew), wind, and water. This is now the first time we have excess cognitive energy, beyond our own minds.
I think it’s much more complex than that, for a number of reasons I want to explore. Let’s start by thinking about what really happened in the industrial revolution—what it was really about.
At the start, two core technologies emerged: steam power in the 17th and 18th centuries, and electricity in the 19th. Steam power was, essentially, a way to amplify motion—to make things move more, faster, and with greater force than ever before. The first uses were simple and utilitarian: pumping water out of mines and hitting things (steam hammers). Over time, though, the motions we could automate grew more complex: lathes, pistons, wheels. Dedicated machines began to emerge—machines designed not just around the engines but with the engines, to produce specific kinds of motion more effectively. Locomotion, for example, became its own class of innovation.
Meanwhile, as this mechanical revolution unfolded, control systems began to emerge alongside the new sources of energy and motion. These systems had two purposes: to manage the immediate mechanics of the new technologies, and to enable the kind of larger-scale coordination we now take for granted. This all accelerated when electricity showed up. Electricity didn’t just bring light; it brought new ways to control motion. The telegraph sent information across vast distances instantly. Electric components allowed machines to move at precisely controlled speeds. Actuators made motion more sophisticated and adaptive.
And, of course, electricity eventually gave rise to the computer age, where motion became programmable. We gained fine-grained control over machines—think CNC mills and assembly lines—and the ability to store state, model reality, and control vast, complex systems (entire countries, even). This was no longer just about moving things; it was about orchestrating them.
Motion as Thought
Now let’s think about AI. Large language models (LLMs) and other AI systems are attempting to automate thought. The parallels to the automation of motion during the industrial revolution are striking. Today, the automation is still crude: we’re doing the cognitive equivalent of pumping water and hammering—basic tasks like summarization, pattern recognition, and text generation. We haven’t yet figured out how to build robust engines for this new source of energy—we’re not even at the locomotive stage of AI yet.
Steam and electricity were transformative, but their potential was unlocked only after an entire culture of mechanical practice and practical understanding grew up around them. Early on, engineers relied on trial and error, intuition, and trade knowledge rather than formal theory. Metallurgy, for example, only became a true science when the demands of industrial machinery forced us to understand metals in fine, reproducible detail. Until then, results were uneven and unpredictable.
This will happen with AI too. It’s confusing, though, because we already have what look like “thinking machines” in the form of programs, services, and technologies we’ve lived with for decades. So it’s easy to assume we already know how to build software. But what if traditional software engineering isn’t fully relevant here? What if building AI requires fundamentally different practices and control systems?
We’re trying to create new kinds of thinking (our analog to motion): higher-level, metacognitive, adaptive systems that can do more than repeat pre-designed patterns. To use these effectively, we’ll need to invent entirely new ways of working, new disciplines. Just as the challenges of early steam power birthed metallurgy, the challenges of AI will force the emergence of new sciences of cognition, reliability, and scalability—fields that don’t yet fully exist.
The Science of Thought at Scale
Consider “horsepower” for a moment. Before steam and electricity, power was literally delivered in units of the work a horse could do. This had huge economic implications: it was hard to scale up (horses are big, expensive, and finite) but also hard to scale down (you can’t put a tenth of a horse to work). Steam and electricity transformed this by giving us finer control over power—letting us deliver exactly the amount of energy needed, wherever and whenever it was needed.
AI will do the same for cognition. Right now, much of our cognitive work is locked into the scale of a single human mind. With AI, we don’t have to work in units of “one human brain” anymore. We can factor intelligence into smaller, bigger, or entirely different shapes. An AI system doesn’t get bored or impatient; it can sift through every record in a massive database or answer the same question a thousand times without fatigue. This gives us a fundamentally different relationship to cognitive labor.
The Long Journey Ahead
The past few years have been fascinating. LLMs have advanced incredibly—and also disappointed us by failing to deliver everything we dreamed of. This analogy to the industrial revolution is a way to make sense of what’s happening and what might come next. Technological shifts take longer than we think, and they’re messier than we expect.
For now, thought, intelligence, and judgment are still ad hoc commodities—dependent on human minds and uneven in their application. But as AI develops, these will become technologies: more effective, better understood, cheaper, and ubiquitous. The near-term challenge is to build the theories, best practices, and industrial systems to support this transition.
The best thing we can do right now is adopt a long-term perspective. Build what we can. Make each step a little more effective and reliable. Look for repeatable patterns and principles—things we can turn into an engineering culture for cognition. LLMs seemed magical at first, like we could skip all the messy work of figuring out how to use them. But history suggests otherwise. Just as it took hundreds of years to go from steam engines to CNC mills, it will take time to go from today’s AI to the systems of the future.
The promise is worth the effort. For the first time, we’re building new kinds of thought, with properties and behaviors we’ve never seen before. It’s going to be messy. But it’s also going to be extraordinary.
I think it's better to think much longer term. Language is pretty much the first technology invented by humans; it has defined much of the way our species has progressed ever since, and it is now making a return home as a tool we can hold in our hands with our opposable thumbs.
Embedded within that tool are tens of thousands of years of accumulated knowledge and of our ability to communicate with each other in nuanced, precise, specialised ways... But we finally have something that doesn't just keep records of our language, or replay it for us at will, or turn it into digital formats.
By creating it in the form of a tool, we can use it by shaping words into instructions that act, and that act in ways very close to (sometimes even better than) what we would do ourselves. If you imagine what the throwing spear did for humanity's ability to hunt for food, we are now watching the first spears made of human language act on our behalf.
Our words have become active verbs, extending our capability to use language in all the ways we normally use it by creating self-propelled, active bundles of words. Given language's track record of usefulness throughout human history, it seems very likely that this invention makes everything else on the technology tree look like a stepping stone toward a tool that could talk back to us.
Lots of good stuff to think about. One point that might or might not mean anything... not sure. But from the history I've read, it wasn't steam per se that made the industrial revolution, but coal. Plenty of the machinery was already in place, including steam engines, but the existing energy source was finite and tied to specific locations (along waterways). It was coal that opened things up.
An interesting point lies in the term "finite". Water was a limited source because it was clearly finite. Coal seemed to offer an endless supply; today we know that's not true. I wonder if we'll be able to anticipate the equivalent situation with automated "thought"?