I noticed something the other day - I was in my kitchen and I turned on the lights. A simple, small act that we all do daily. Stop for a second, though, and think about explaining this to someone from the early 1800s, when light was not only expensive but an immense hassle - you had to make wax candles, chop wood, or produce oil, then get it ignited, then tend it. You'd do it if you had to, but it took a huge amount of work.
But over the intervening years, we've gradually smoothed out every part of that process. Energy production is easier now, and something we never personally think about. We've moved past fire to incandescent light (still based on radiant heat) and on to LED lighting, which is far more durable and efficient, and barely produces heat at all!
Once electric light arrived, you had to be a power user (heh) to take advantage of it. Electricity was hard to work with and poorly understood, most houses had no wiring, and so on. The better solution existed, but it was hard to access unless you were sophisticated, so it took time to spread to the larger population. Early uses of electric light were mostly industrial. Fortunately, over time, many engineers and builders worked out a system that became more and more convenient, and more widely available. It's highly unlikely we'd have light in our houses today if everyone had to have their own electrical source, run their own wiring, blow their own glass bulbs, and so on. We have all of that now, but we barely think about it - it's smoothed over and automatic. We just have to think about turning the lights on.
Real value often lives in small things that happen frequently. This is a lesson for builders of applications in general, and especially in this AI moment. Users are (always) lazy. Asking a user to follow a long, convoluted path to get some value, or asking them to remember a small bit of value added to an infrequent task, seldom works. The products that succeed are the ones that make common tasks approachable.
I think this is why LLMs are working so well, but mostly for relatively simple and mundane tasks. They aren't the best solution for every problem. In fact, a little bit of code often goes a very long way on more complex tasks, and some creativity and care with prompting or data can produce much better results too. But LLMs are convenient for the user - we all know how to ask questions and converse. So the early "killer features" are likely to be things that happen often and that fit this lazy pattern. Friction is always deadly; users are always lazy.
The mass of users will always be lazy - they will always do what is easy and obvious to do. It's our job as builders and power users to take the better solutions we know about and smooth the path so that they are easy for most people to access. Prompt hacking and more sophisticated strategies can produce more value for some users, but until they are as easy to access as a simple conversation, they won't have broad impact. We learned this lesson building Google Docs - we worked very hard to remove any friction from even the idea of trying it, and for a long time as we built it out, we were very wary of adding new friction or new things to learn.
Look for problems that happen to everyone, every day, and then make solving them as easy as turning on a light switch.
This convinced me that my AI proof-of-concept project should minimize what it asks of the user.
Takeaways
* Users are always lazy and friction is always deadly
* You should add value to users by making their lives easier (i.e. minimize friction), AND it should be for a frequent task