2 Comments
Rob Phair:

o3 passes my personal Turing test as a knowledgeable collaborator. I read, then fact-check, or run an experiment or a simulation to verify what I read. I agree with Alex: it never hurts to be polite or to offer positive feedback. Indeed, it's good practice, especially in today's hyper-polarized world. Moreover, those API calls repeatedly yield responses seasoned with totally unnecessary but humanizing adjectives and adverbs, even when the underlying LLM is "reasoning" poorly about immunometabolic cause and effect. Is that humanizing layer built by the developers or by prompts? Ultimately, we each have to ask ourselves whether Becky Chambers' novel A Psalm for the Wild-Built, the first of her Monk and Robot series, is impossibly optimistic.

Alex Tolley:

If you run a local LLM with no internet access, using only local documents as context or a RAG folder of documents, the conversation is private. While it is (for now) much slower than an online connection to a theoretically larger corpus, a local source of curated documents should give a better result.
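For the curious, here is a minimal sketch of what such an offline setup can look like. Everything below is illustrative rather than from the comment: it assumes a folder of .txt files named docs/, uses crude bag-of-words cosine similarity in place of real embeddings, and a hypothetical local_llm() stands in for whatever offline model you run (llama.cpp or similar).

```python
# Minimal offline RAG sketch: everything runs locally, nothing leaves the machine.
# Assumptions (not from the comment): .txt documents live in ./docs, retrieval is
# plain bag-of-words cosine similarity, and local_llm() is a placeholder for a
# locally hosted model.
import math
import re
from collections import Counter
from pathlib import Path

def local_llm(prompt: str) -> str:
    """Placeholder: wire this to your offline model of choice."""
    raise NotImplementedError("connect to a locally hosted model here")

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words counts; a crude stand-in for an embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, folder: str = "docs", k: int = 3) -> list[str]:
    """Rank the local .txt files against the query; return the top-k contents."""
    q = tokenize(query)
    docs = [p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")]
    ranked = sorted(docs, key=lambda d: cosine(q, tokenize(d)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Build a prompt from curated local context and hand it to the local model."""
    context = "\n---\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return local_llm(prompt)
```

The retrieval step is deliberately crude; swapping in a local embedding model would sharpen it without giving up the privacy property the comment is after.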

While I agree with you that these things are not in any way human, not even alive like pets, there is little harm in treating them well as "agents". Humans have anthropomorphized their cars and computers, and animism is an ancient belief. Don't people still talk to their deities as aware beings?

While I don't anthropomorphize my LLMs, I do tend to be polite to my AMZN Alexa devices, thanking them for a good answer or suggestion. The reason is that I am well aware of how we treated servants in the past, and I have trained myself not to treat anyone as a "non-person". If we ever have humanoid household robots, I would do the same, but without internally thinking of them as in any way alive. However, even with a robot, as with a car, I wouldn't mistreat it, if only to ensure I don't break it by over-taxing it.
