3 Comments
Ted:

I'll try to keep quiet from now on, but ...

My main concern is where the productivity accrues and how that damages society by benefiting the rich. GenAI is essentially a cheap employee. If that benefit accrues to each employee who learns to use GenAI, then those employees benefit.

If, however, GenAI more naturally benefits a company, which then RIFs 10% of its employees to replace them with GenAI, the only humans (*not* entities, actual humans) who benefit are those rich enough to own stock. How many quarters has MSFT RIF'ed? How much longer will it continue to do so?

If a company increases its revenue by paying for AI while not paying for humans, should mankind receive no money?

Alan:

RE: smaller number of human engineers can be much faster, because they don’t have to communicate as much to get work to happen

Why would we expect less communication needed to coordinate AIs? Or are you saying that for some problems the mythical man month is NOT mythical and we perhaps didn't fully explore that space because of overheads associated with hiring and managing humans?

BTW, there is a typo in the second paragraph ("Quantity means" instead of "Quality means") that threw me off all the way to the end. I wonder if a simple prompt of "check this text for typos and miscommunications" would have caught it.

Alex Tolley:

Brad DeLong likes to talk about spinning up sub-Turing instantiations of people. I can see that instead of using a single instantiation of an LLM and replicating it, it may be better to instantiate an LLM with a particular expertise and POV. Now each team-member LLM is a different mind with very different skills and perspectives. Ensure that they can work collectively and not be at loggerheads, and then each human can work with a team of LLMs. The only other humans will be those who need to do physical work that remains outside the scope of computational minds.
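The team-of-persona-LLMs idea above could be sketched roughly like this. This is a minimal illustration, not anyone's actual system: `PersonaAgent` and its stubbed `respond` method are hypothetical stand-ins for a real LLM call conditioned on a persona prompt.

```python
from dataclasses import dataclass

@dataclass
class PersonaAgent:
    """One team-member 'mind': an LLM instantiated with its own expertise and POV."""
    name: str
    expertise: str

    def respond(self, task: str) -> str:
        # Stub: a real implementation would call an LLM API with a
        # system prompt encoding this agent's persona and expertise.
        return f"[{self.name}/{self.expertise}] take on: {task}"

def team_review(task: str, agents: list[PersonaAgent]) -> list[str]:
    """Collect each specialist's perspective; a human (or coordinator
    agent) would then reconcile the answers."""
    return [agent.respond(task) for agent in agents]

team = [PersonaAgent("Ada", "security"), PersonaAgent("Lin", "performance")]
for note in team_review("review the caching layer", team):
    print(note)
```

The point of the sketch is only the structure: distinct persona-conditioned instances rather than copies of one generic model, with a human managing the reconciliation step.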

If correct, that will improve humans' ability to collaborate with LLMs, but erode their skills at collaborating with other humans.

I can see this working in large corporations with money to support expensive LLMs, but to be really useful it needs to be democratized: low cost, running on local hardware, so that anyone with the required hardware (or access to inexpensive cloud compute) can harness the power of this model for a range of tasks. I think of it as upgrading a personal library of specialist books to a library of thinking minds that can be quickly harnessed to help with tasks generally beyond the skills of the individual. And unlike books, which must be mediated by a human mind to integrate them, the LLMs can collaborate among themselves too. Then the human becomes more like a manager overseeing a team of specialists.
