Discussion about this post

Alex Tolley:

And don't forget ratites - flightless birds that could use an airplane to get off islands, or travel further.

This reminds me of when I bought my first calculator - a Sinclair Scientific. A friend said it was a waste as he could use log tables much more quickly to multiply and divide numbers, and then tried to prove it in a race. Of course, other functions were ignored!

However, the danger of LLMs and coding is that you don't know if the output is any good. Do novices bother to test the output? Will production code introduce security bugs and malware?

[This isn't to say that hand-written code doesn't have the same problems, as we regularly find out.]

My analogy with calculators is what I call "number blindness". I am old enough to have been taught to estimate expected answers, and of course you needed to do that when using a slide rule. Even with a computer, I always sanity-check output to ensure my code is not making gross errors. However, when teaching students, I discovered that when they used calculators, even relatively simple calculations would sometimes come out grossly wrong because the calculator was used incorrectly. Without knowing the likely answer in advance, students would not realize that they had made a mistake. By then it was too late to instill the habit of estimation.
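The "estimate first, then compute" habit can even be automated. A minimal sketch (the `sanity_check` helper and the tolerance factor are my own hypothetical choices, not anything from the post):

```python
def sanity_check(computed, estimate, factor=5):
    """Flag results more than `factor`x away from a rough mental estimate."""
    if not (estimate / factor <= abs(computed) <= estimate * factor):
        raise ValueError(f"{computed} is far from the estimate {estimate}")
    return computed

# 387 * 52: rough estimate is 400 * 50 = 20,000
sanity_check(387 * 52, 20_000)      # passes: 20,124 is close to the estimate

# A slipped decimal point (387 * 5.2 = 2,012.4) would be caught:
# sanity_check(387 * 5.2, 20_000)   # raises ValueError
```

The point is not the code but the discipline: the estimate has to exist before the answer does, or a grossly wrong result just looks like any other number.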

And so it is with LLM coding. It is like a spreadsheet that works for simple, single functions but starts to fail when more complex formulas are built. That is the danger with LLMs: the code looks correct; it might even pass a few unit tests. But it may not handle edge cases, and worse, the user may not know how to debug it.
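To make that concrete, here is a hypothetical example of my own (not from the post): a moving-average function that sails through the happy-path tests a novice might accept, while its edge-case behaviour goes entirely unexamined.

```python
def moving_average(values, window):
    """Average of each consecutive `window`-sized slice of `values`."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Simple tests that an LLM (or a novice) might generate -- these all pass:
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([10, 10, 10], 3) == [10.0]

# Edge cases the tests never probe:
# moving_average([], 3)      -> returns [] silently; is that the intent?
# moving_average([1, 2], 0)  -> ZeroDivisionError
# moving_average([1], 5)     -> returns [] instead of signalling an error
```

Without knowing what the right answers should be in these corners, the user has no way to tell whether the silent behaviour is a feature or a bug.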

You can put all the caveats you like on this, but there is a danger that crap code will poison the well as it becomes the training data for the updated LLM.

Isaac Asimov once wrote a short story about how automation that was largely opaque to the user/manager was subtly producing incorrect answers and thereby undermining the global economy. (I cannot recall if it was deliberately done by robots or not.)

Bad actors are "why we can't have nice things". LLMs can be helpful to both good and bad actors. Given the flood of mis- and disinformation from bad actors, there is every reason to suspect the same will apply to code as complexity increases and we come to rely on libraries designed as malware. Even birds making airplanes might have to deal with saboteurs using subtler means than blowing up production lines.

Devansh:

This is so beautifully written, Sam! This is clearly the most complex tech-dialogue of our times and you've done great justice to it by saying, "it depends", because it quite evidently does.

I've never put much thought into the "greying of hair" as we grow older, but after reading this, it struck me that it mirrors how, with age, we see grey more than we see black and white. This question is decidedly grey, and I hope people don't give in to the temptation of declaring it black or white.

Loved the parallel you drew with the birds, the plane and the steering.

