Normally these letters are the result of something I’ve observed during the week at work. This week I took a few days off, so less was happening, but I saw something interesting nonetheless.
We are all familiar with the idea of the mythical man-month - the idea that software teams get less efficient as you add people because of communication overhead. This is generally held to be because software is so intensely communication-centered, but I don’t think that’s the complete story. It’s also interesting that we’ve had 40+ years of innovation in programming tools - libraries, languages, frameworks, app engines, different database techniques, better debugging, websites like Stack Overflow, now LLMs, and on and on - and we still seem to have this problem; it hasn’t moved at all.
I was at a vacation house with my family. I do some work around this house when I’m there alone, and I’m very efficient: I often wake up in the morning with 5 or 10 things to get done, and most of them are done by the end of the day. My sister and daughter were there for the weekend, along with my sister’s family - so 4-6 people, depending on the day. We were much slower to organize, and with that many people in the house, we were lucky to get 2 things done in a day.
I don’t know why this is; I think it’s at least partially social - no one wants to be the “boss,” so (to put it in server terms) instead of having a “primary,” we fall back to some kind of consensus mechanism, which is more expensive. I think this tradeoff is durable in any group of humans making a choice: it’s either hierarchical, in which case it can be faster, or it’s distributed, in which case it’s socially “fairer and nicer” but slower. Neither is right; they’re both choices.
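To make the cost difference concrete, here’s a toy sketch - everything in it is hypothetical, a cartoon of message counts rather than any real consensus protocol like Raft or Paxos. A primary decides with one message; an all-to-all group decision costs on the order of n² messages before anyone has even disagreed:

```python
import random

def primary_decides(members):
    """One designated member decides; the cost is a single 'message'."""
    return f"{members[0]} picked the plan", 1

def consensus_decides(members):
    """Everyone proposes, everyone hears everyone: n * (n - 1) messages."""
    proposals = {m: random.choice(["beach", "hike", "errands"]) for m in members}
    messages = len(members) * (len(members) - 1)  # all-to-all exchange
    votes = list(proposals.values())
    winner = max(set(votes), key=votes.count)  # plurality wins
    return f"group settled on {winner}", messages

family = ["me", "sister", "daughter", "brother-in-law", "nephew", "niece"]
print(primary_decides(family))    # ('me picked the plan', 1)
print(consensus_decides(family))  # ('group settled on ...', 30)
```

With 6 people in the house, that’s 1 message versus 30 - and real consensus is worse, because each message is an argument, not a vote.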
AI might break this for a few reasons. In the realm of software, it’s hard for one person to be the iron-fisted dictator of a team beyond a certain scale, because there is just too much to know (I’ve called this the “prima donna deathspiral” before). But an LLM, with good state in a vector database, might not have this problem: it might be able to remember many more details about the project and direct at scale.
That leaves the social side. Again, there’s an interesting effect possible with AI: the AI can exist “outside” of social norms. We already do this without realizing it - social media makes choices in communication for us (we call this ranking). It mediates (literally) between humans - so if your friend posts something boring, you are (mostly) free from the social awkwardness of politely responding, because they don’t really see you seeing it (again, mostly).
It might be the case that an LLM could act as that kind of neutral arbiter in a group of coders - taking on responsibility for decisions without any one person having to feel that the responsibility is directly theirs, making the consensus mechanism above more efficient. It’s probably more complicated than that, but this may be the kind of effect we see as programming teams make more use of AI: not just the copilots for individuals we have now, but AI for whole teams.