Discussion about this post

Alex Tolley:

We continue to live in an age where it can be difficult to know what has real intelligence and a mind, and what does not.

We can smile at the simplicity of our ancestors who were animists and panpsychists, seeing life and agency in things that had none, or who anthropomorphized animals that do have minds, but not human ones.

Many years ago, I read "The Mind's I" by Hofstadter and Dennett. They posed situations in which it proved difficult to determine where the locus of the mind was.

We have been caught up in something similar for at least a century, being fooled or not, like audiences first being exposed to a film of an onrushing train.

Consider. We watch a movie with people in it. There are no people, just spots of light dancing in front of our eyes. Yet there were real people acting who were captured on film (or in a photograph). Should we interpret their expressions or dismiss them as artifacts? If I speak to someone over a telephone, radio, or videophone, should I interpret this as communicating with a person, or not? What if we replace the movie character with an AI based on a person, or with an avatar in a phone conversation? Is that a person, or not? What about speaking to someone on video who has had their features enhanced to change their look? How far should I interpret their expressions as human rather than artificial? What about a total replacement of a face with an AI-generated face? The same with the voice? Is there any way to completely discern a real person from a deepfake?

AIs that alter and intervene in communications, from small changes to complete replacements acting as personal "assistants," are going to get ever better, making it ever harder to discern the real from the artificial.

I do grant that my cat, as a living being, has a mind, albeit a limited one. What happens if an AI interprets its vocalizations and actions to express itself as a human might, with emotion-laden words? Should I welcome this, or reject it as artificial, as less than the unadulterated cat?

What do we do if our AIs respond to us in a way that reflects how we respond to it? Should we alter our tone and politeness depending on how we perceive a machine's "intelligence", and live with it if it gets sulky or uncooperative, just like a human?

In the classic Asimov robot stories, the human Elijah Baley eventually treats his humaniform robot partner, R. Daneel Olivaw, as if it were a he, and a human. In the extended Foundation novels, R. Daneel lives to experience the universe of the Foundation. In the TV version, Demerzel plays the same role. Should it be regarded as an "it," a "woman," or something else? I would argue that anthropomorphizing such a robot is perfectly fine. At some point, I would argue that our AI technology should be treated similarly, whether it is just text, a voice, a simulated companion, or embodied in physical form. It seems to be how we respond to robots in TV and movies, such as the latest M3GAN 2.0.

Mark Stefik:

Outside the context of LLMs, a similar feedback loop seems to operate on bloggers and others who get more attention when they express extreme views.
