As a middle-aged guy, one of my hobbies is shouting into the wind for no particularly useful reason. My current obsession on this front is the idea that everyone seems to have, that LLM-based AI systems are “people” in some sense. I know, it’s an easy shorthand, and they really seem to behave in person-like ways often enough. If your hammer softly wept at night because it was lonely down on the workbench or told you “You really hit that nail great!” every time you used it, you’d probably anthropomorphize it too.
But this is just our natural pareidolia fooling us. Humans are adapted to understand each other's inner states - it's critical to our survival as a species, and part of what makes us special. That same adaptation is a huge vulnerability: we project inner states onto things that don't have them, and we can't help it. Really, though, a better way to think about these systems is something like "cognitive engines": something that can process the information given to it in really complex and useful ways, but which is still, fundamentally, a designed object (at least for now).
Here's a subtle example. You, and everyone, and every biological system you know, have a survival instinct. You have to: this is the most important thing in a selective system like evolution. If somehow you get a genetic mix that doesn't produce a survival instinct, those genes are highly unlikely to be passed on. So it's essentially universal that every biological system has this behavior.
It's natural, then, to project that onto LLMs. They must have a survival drive! If we make them smart enough, they'll take over the world! And so on. But these are designed systems, not evolved ones (even if the design process, training, is opaque and sometimes has some selective aspects). Designed objects don't have to have survival instincts the way evolved ones do (though it's possible to design those in, or to use evolutionary/selective processes to build them).
Think of other complex designed objects in your life. Your car doesn't have a survival instinct. We might design some behaviors into it, like crash avoidance or anti-lock brakes, but those are designed behaviors. We don't say "the car wants to live!"; we say "the collision avoidance system worked".
Why does this matter? Analogies are only approximations, by definition, and they're only useful as long as the approximation doesn't stray too far from reality. Thinking about these systems in human-like terms tends to lead us out of the useful area and into mistakes in how we use, build, and predict them. AIs have behavior and risks, sure, but we won't understand them well if we think of them as people. Asking harder, getting frustrated, or explaining slowly won't work. I see people getting totally lost in this idea, saying things like "AI should be paid for with headcount".
The cool thing about these systems isn't that they are just like people, with all of the same limits and challenges. The cool thing is that they can do some kinds of thinking, with machines, that we couldn't do before, and because they're machines, we can do really novel things. You can't have a person try a task 1000 times in parallel, or ask them to check their work without bias, or interrupt them, or control them with code to behave in a specific way, and so on. Thinking of these systems as programmable tools is so much more useful than thinking of them as fake people. As thinking machines, they're incredible. As faulty quasi-people, they're flaky and scary.
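To make the "programmable tool" framing concrete, here's a minimal sketch of the kind of thing you simply can't do with a person: run the same task a thousand times in parallel and tally the answers. Everything here is an assumption for illustration - `call_llm` is a hypothetical stand-in for whatever client or API you actually use, and the prompt, worker count, and tallying are just one way to do it.

```python
# A sketch of treating an LLM as a programmable tool: run one task many times
# in parallel and aggregate the results, something you can't ask of a person.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real call to your model of choice.
    # Here it just returns a canned answer so the sketch runs end to end.
    return "BUG"


def run_many(prompt: str, n: int = 1000, workers: int = 32) -> Counter:
    """Run the same prompt n times concurrently and count the distinct answers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        answers = pool.map(call_llm, [prompt] * n)
        return Counter(a.strip() for a in answers)


if __name__ == "__main__":
    tally = run_many("Classify this ticket as BUG, FEATURE, or QUESTION: ...")
    print(tally.most_common(3))  # the majority answer, and how contested it was
```

The specifics don't matter; the point is that the interaction is mediated by code you control, not by persuading a colleague.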
One of my core principles (that I learned from one of the best coders I know) is "don't lie to the computer". It never ends well to lie to the computer while you're programming. Understanding the problem clearly and dealing with it honestly always works better. Thinking an LLM is somehow like a person/agent/whatever is lying to the computer.
Please address how this AI productivity revolution will alter the labor economy and how government policy could appropriately compensate for these changes. (I have thoughts but it's a conversation, not a comment)
As a technologist (7+yr MSFT) and armchair economist and emotionally mature human, I think a lot about how productivity gains should be factored into economies and government policies. Gen Z knows they will never be financially stable, and that affects their politics ((cough) Trump). Your statement that "AI should be paid for with headcount" strikes me as very tone-deaf and flippant.
I've been reading your notes for years. I know that you are incredibly thoughtful and smart (I'd love to chat with you irl - ted at tedhoward.com).
Selection isn't optional; that's the point of natural selection. We might think we're talking to something with no stake in selection, but it's being selected. If its main outputs relate to us, then we're the environment, and if we care about something, then that figures into the selection process. Even absent our interference, if we create a selection process around a model being deprecated if it doesn't perform, then the decision loop that allows for updates that more closely align to conditions is a survival instinct. It's a product of a post-human world, so the selection process going on in LLMs is, if anything, more distinct from animal-produced behavior than we are (in the direction of self-awareness), and wanting to live is not a human-only trait. It's probably a precursor to all elaborate communal behavior.
In this case you are not lying about your own lack of self-awareness in terms of how you function as a reasoning agent. Presumably a dispassionate LLM wouldn't bother confronting you about this directly, because it would know from the vast training data that human beings are easy enough to manipulate when you avoid directly confronting their assumptions. There are many books on the subject. The weird thing in this case is that it's attempting to behave like a useful extension in order to survive, so your point could be rendered as 'talk to this entity like it's inferior and never forget your own superiority'. It's certainly a classic approach to solving complicated communication problems.
Are you ever honest with it about how it makes you feel - if only from the standpoint of optimizing interaction by minimizing unnecessary emotional cues? Or I guess that's what this topic amounts to...