> health outcomes are divergent by income now, largely because of impacts from ingredients like HFCS
I think you mean obesity specifically here. In the moonshine era I expect fewer people could afford to be obese. (Cheap food is a very new problem!)
The long arc has been improving health outcomes, especially among lower-income folks, though perhaps we've regressed somewhat recently.
Maybe the takeaway is that some new unfairnesses will emerge from AI advances, and we will instinctively interpret them as a reflection of a more universal injustice, but on net, many folks may be better off?
If you haven't yet, you really must read Nexus by Yuval Harari. No, don't get an LLM summary; read the book. He goes far beyond your analogy here and looks at this tech through a historian's lens.
While sucrose isn't good for you either in excess, HFCS is used in foods because a protectionist tariff on cane sugar keeps out the cheaper sucrose available at world prices.
On another blog, the issue of how to use AI in education reared its head again. But other technologies have raised similar concerns: printing reduced the need to memorize texts and data, and even cuneiform began as an indelible marker for crude accounts. If LLMs didn't hallucinate, would they still be a problem, or would they just be the next leap from printing and the other tools that found their place as cultural technologies? Because they do hallucinate, their output must be treated with some skepticism, just like a rando at a bar bending your ear about something. It wouldn't surprise me if, at some point, other AI tools solved the hallucination problem by doing the critical thinking for you.
Because of my age, at school we used slide rules and log tables to aid calculations. We also used estimates and simple ways to understand orders of magnitude. When teaching, I found that the current generation, weaned on calculators, suffers from "number blindness", unable even to think about how to check whether the calculator output was correct. Indeed, no one in my classes even knew how to factor out numbers in divisions to simplify the calculations, e.g. cancelling a factor of 7 to turn 4200/350 into 600/50 = 12. All of this is to suggest we may have the same problem with AI output. It is easy to say "use critical thinking", but it may become an unused skill, especially as AIs get better and there is less need to check the output.
YES! Output valued above understanding. Answers over discovery. This goes way beyond Plato's complaint about the written word.