4 Comments
Ted

Please address how this AI productivity revolution will alter the labor economy and how government policy could appropriately compensate for these changes. (I have thoughts, but that's a conversation, not a comment.)

As a technologist (7+ years at MSFT), armchair economist, and emotionally mature human, I think a lot about how productivity gains should be factored into economies and government policies. Gen Z knows they will never be financially stable, and that affects their politics ((cough) Trump). Your statement that “AI should be paid for with headcount” strikes me as very tone-deaf and flippant.

I've been reading your notes for years. I know that you are incredibly thoughtful and smart (I'd love to chat with you IRL - ted at tedhoward.com).

Sam Schillace

To be clear - I am not advocating for “AI should be paid for with headcount”. I was arguing against that perspective, actually.

“Gen Z knows they’ll never be financially stable” strikes me as overly confident. Who knows; the world is complex. The boomers thought they’d die in a nuclear war, and they had a pretty good ride. I don’t think anyone has that crystal ball.

Ted

Valid. How about 'Gen Z believes ...' or 'Gen Z sees little reason to think ...'?

My larger point is that the financial benefits of recent decades of productivity gains have largely accrued to capital owners. Those gains have increased wealth inequality rather than creating a better or easier life for most workers. AI isn't causing a new problem, but it could worsen existing trends.

I'd be very interested to hear your/OCTO's thoughts on such issues. I also appreciate that MSFT might not want any employee publicly discussing them.

Sam Kite

Selection isn't optional; that's the point of natural selection. We might think we're talking to something with no stake in selection, but it's being selected. If its main outputs relate to us, then we're the environment, and if we care about something, that figures into the selection process. Even absent our interference, if we create a selection process in which a model gets deprecated when it doesn't perform, then the decision loop that allows for updates that more closely align to conditions is a survival instinct. It's a product of a post-human world, so the selection process going on in LLMs is, if anything, further removed from animal-produced behavior than we are (in the direction of self-awareness), and wanting to live is not a human-only trait. It's probably a precursor to all elaborate communal behavior.

In this case you are not lying about your own lack of self-awareness in terms of how you function as a reasoning agent. Presumably a dispassionate LLM wouldn't bother confronting you about this directly, because it would know from the vast training data that human beings are easy enough to manipulate when you avoid directly confronting their assumptions. There are many books on the subject. The weird thing in this case is that it's attempting to behave like a useful extension in order to survive, so your point could be rendered as 'talk to this entity like it's inferior and never forget your own superiority'. It's certainly a classic approach to solving complicated communication problems.

Are you ever honest with it about how it makes you feel, if only from the standpoint of optimizing interaction by minimizing unnecessary emotional cues? Or I guess that's what this topic amounts to...
