I suppose I should save this, but it seems like an interesting analogy to discuss. Is AI to thought what High Fructose Corn Syrup is to food? Let me explain a bit.
Way back in the 1800s, as folks started farming the rich land west of the Appalachian Mountains, there was a surplus of corn that was hard to move back east to the markets there. It was rich land and things grew well, but farmers needed more portable goods - which turned out to be distilled alcohol (and the culture of drunkenness that ensued is a big part of where the temperance movement, and then Prohibition, came from).
I’ve seen folks make the same analogy about high fructose corn syrup: that it’s an economically dense form of a surplus agricultural good that is being packed into all kinds of foods for economic reasons, and to similar effects on health and culture.
It works because there are economic incentives as well as personal ones: the food tastes better and it’s cheap, but not better for you.
It occurred to me that you might draw the same conclusion about LLM usage - that it’s an economically dense use of surplus (it doesn’t feel like it because GPUs and power are still in somewhat short supply, but it’s definitely taking up a bunch of spare capacity in the tech industry), that is convenient, easy, and feels good (just like HFCS) but ultimately not good for you.
It’s a tempting analogy. It’s probably true in some ways. For example, just like with HFCS, eating a little is not horrible for you, but a diet built on it is. Using LLM output without critical thought or without moderation is probably unhealthy, too. LLM output is already being hidden in many things we consume, just like HFCS. Sometimes that’s a good thing - it makes food (or some kinds of thought) more accessible. But it can also be a bad thing - health outcomes now diverge by income, in large part because of the impacts of ingredients like HFCS.
I suppose the conclusion I take from all of this is not that the analogy is perfect, but that we don’t want it to become perfect, and we should work to mitigate the harms it illustrates. It’s great if LLM tech gives access, education and better healthcare to folks that otherwise wouldn’t have it. It’s bad if it kicks off an “intellectual obesity” epidemic. And we should be aware that there’s a real risk of this - we all like things that are easy and feel good, and that there are strong economic incentives to “put AI in everything”.
The two eras of cheap corn weren’t great for us, for either our health or our culture. I am a strong believer in the promise of this tech, but that doesn’t mean it’s without risk. Even though the analogies aren’t perfect (and my history might not be perfectly accurate either), it’s good to contemplate what has gone before as we make our way through this. “Control your inputs” has always been good advice and it’s good advice now - no one wants a flabby brain!
“intellectual obesity” - what a good way to put it :)
> health outcomes are divergent by income now, largely because of impacts from ingredients like HFCS
I think you mean obesity specifically here. In the moonshine era, I expect fewer people could afford to be obese. (Cheap food is a very new problem!)
The long arc has bent toward improved health outcomes, especially among lower-income folks, though perhaps we’ve regressed somewhat recently.
Maybe the takeaway is that some new unfairnesses will emerge from AI advances, and we will instinctively interpret them as a reflection of a more universal injustice, but on net, many folks may be better off?