One of the challenges programmers and engineers have is predicting which technologies will succeed. Engineers tend to look at things through the lens of, well, engineering. So, we have long, arduous debates about technical merits and minutia. We pick through (and cherry pick!) examples, discuss edge cases. In the case of AI, we run evals, plot graphs and endlessly debate whether AI is useful (it is) or a scam (it’s not). And folks not in the industry stand by and scratch their heads at it all, trying to decide if AI is a big change that will disrupt their lives (it will).
None of this matters as much as sociology (disclaimer: I am not a sociologist, and a friend of mine who is one reads this, so I have to say that!). Way back in the depths of time (over 30 years ago!) a debate like this was raging about why Lisp, which seemed “better” on technical merits, was losing to C, which was “worse”. Richard Gabriel’s essay was called “The Rise of Worse is Better” and it’s still one of the best things ever written about the dynamics of how technology succeeds or doesn’t.
The core premise is that there are three things in any technology that are in tension, and you can’t have all of them: simplicity, completeness, correctness. You can have about one and a half, maybe two if you’re really lucky. And the answer, which I love because it fits so neatly with my thesis that “Users are Lazy”, is that tech that is Simple first but not fully Correct or Complete will beat tech that strains to be Correct and Complete but leaves Simplicity to whatever can be done after that. People are lazy.
The internet is like this. HTML isn’t really a great way to lay out a page, for a bunch of reasons. Word processor people I knew back at the start of the internet would say things like “you need real page layout, SGML (which HTML was derived from) is better”. But SGML is a huge, complex, brittle spec - complete and correct, but much less simple.
AI is Simple. Talk to it! It does stuff! It’s not complete (it doesn’t do all stuff), and it’s not fully correct (it makes mistakes, makes things up, can’t do everything a person does). Hopefully both of those reduce over time, but right now, for many tasks, AI in the form of LLMs, is simpler.
That’s why it will succeed as a technology. It’s possible there could be business model issues that prevent this (I doubt it - we have a long historical record of how optimizations work, and LLM tech is already way ahead of the typical curve, so I think it’s already clear that costs on current models will come down enough for them to be great businesses, even if they don’t get any “better” from here). But it’s already simpler to use for many tasks, and that’s why it’s spreading so quickly. This is also why the chat interfaces have spread faster than things like the API playground that came before it - chat is easier to understand.
Saying “AI is worse” isn’t a judgment on the tech or its value. Of course, we want the best technologies we can have. Worse is better speaks to the social and practical dynamics of how technology is adopted. Technologies with a certain pattern - simple to use over complete and correct - tend to win over time. AI fits that pattern, and we can see that in how it’s being adopted. And this is why many of the criticisms of it miss the mark.
There's a counterbalance here - quality.
How much hallucination risk are you willing to suffer for a GPT query that you could easily answer with a Google search? How do you translate “amigo”, what is the capital of Luxembourg, is my business partner laundering money?
The problem is not that it’s simple, it’s that LLMs actually deliver a lot of quality on simple queries.