(this was meant to go out last week…it went on LinkedIn but not here)
I’ve started to notice an interesting pattern: the more enlightened teams and people are using AI to get a lot of work done for themselves. I have teams that use it as a brainstorming assistant, we’ve written whole books with it, and I’ve used it to do a very complex geothermal design in about 20 minutes that I wouldn’t have been able to do at all. It’s pretty amazing (and as an aside: it’s kind of incredible to me that we can talk to computers now, and they understand us! And yet there is still a lot of skepticism out there. Yes, it’s not done yet, and we have a lot of work to do. But we can talk to computers now! That’s amazing, and we shouldn’t get too used to it).
But it’s not always straightforward how to get value out of AI systems. One way I think about this is leverage, or “return on effort”: if something would have taken an hour but took 5 minutes with AI, that’s a 12x return - really good ROE. We want that number to be high all the time. Incremental value like writing a better email is fine, but that’s not a huge amount of effort or time saved for most people.
Sometimes, though, the leverage can run backwards. If you’re not careful, it’s super easy to generate more BS tasks for other people - this is where the “take this outline, turn it into an email → take this email and turn it back into an outline” joke comes from. It’s not helpful to generate huge walls of text, just because they’re cheap, if the outcome is that other humans have to do more work.
I recently used o1 to write a paper on how agent control systems are badly conceived today and could be better. It took maybe 10 minutes of my time to explain and iterate, and I think maybe 2 minutes of o1 time to write three versions until I was happy. But then I realized - if I send this out, it’s a huge amount of work for everyone to read it. Is that actually what I want? Do I want more targeted interactions? Should I be building a demo system instead? What’s a better way to use this?
I read just today about a company that has an automated system that nags everyone to write status reports, which then get digested and sent to the CEO daily. It seems like a cool idea at first: automating how an organization works. But something about it bugs me. Is it really helpful, or is it just automating a bunch of busywork? There is always a tendency for leaders to act as though the organization is there for them, instead of the other way around. This seems dangerous to me - an easy way for one person (the CEO) to cause a lot of work for a lot of other people. Negative leverage!
The idea of ROE is simplistic, and sometimes the end product is worth the extra effort - if that company really does run more efficiently and the CEO makes better decisions, maybe that system is worth implementing. But the larger point remains: we have to be very careful when we build “power tools” for knowledge work with AI, the same way we have to be careful with power tools in physical work. Power means it’s that much easier to do damage if you’re not thoughtful.