Engineers like to talk about “tech debt”. Sometimes the metaphor fits well - neglected maintenance really does accrue like debt, compounding over time and slowing the engineering team. But there are other kinds of “tech debt” that are subtler and harder to address. One of them is unnecessary complexity: code that is over-designed can be as hard to maintain as code that is under-designed.
This can come from many places - most often it comes from not clearly understanding the problem, and solving a more general, but harder, case instead. Sometimes it comes from not making hard choices about the user experience, and trying instead to make those choices in the code.
A few examples: in the early days of Writely, we had a fairly complex algorithm for doing merges. It was already hard to maintain, and then we started hitting edge cases that made it much harder - what if two or three users were editing the same line at the same time, and one of them deleted another’s edit? The state space here is really complex, and we never want to lose data - so what do we do? After bogging down on this problem for a while, we finally decided the answer was: “don’t do that”. The problem wasn’t the software - it was the humans. You can’t expect mere software to stop humans from fighting over the same line, so we stopped optimizing for that case, and the code got simpler. Instead, we added presence so people could see who else was editing and work it out themselves.
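To see why same-line concurrency is so awkward, here is a minimal sketch of a naive line-level three-way merge - hypothetical code, not Writely’s actual algorithm. Once both users have changed the same line relative to the common base, there is no answer the code can pick without risking losing someone’s data:

```python
def merge_line(base, yours, theirs):
    """Merge two concurrent edits of one line against a common base.

    Returns the merged line, or None when both sides changed the line
    and no safe automatic answer exists.
    """
    if yours == theirs:
        return yours          # both made the same change
    if yours == base:
        return theirs         # only they changed it
    if theirs == base:
        return yours          # only you changed it
    return None               # both changed it: any choice loses data
```

The `None` case is exactly the state space that blew up: every policy for resolving it (last writer wins, duplicate the line, interactive conflict markers) either discards an edit or complicates the model further - which is why “don’t do that” plus presence was the simpler answer.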
Another favorite example is from the early days of Google Spreadsheets. They had a fairly challenging latency problem: the numerical models are expensive to get on and off disk, and it’s hard to fit that into the time budget of a web response. So keeping the model in memory makes sense, but it’s also hard to update a complex in-memory model in two data centers within that latency budget. The solution they came up with was elegant - since failover is relatively rare, tune the system so that replication is fast (just some log entries written to disk, which can be recombined later), while bringing up a replica in the other datacenter is a bit slower. It’s not quite optimal, but the code is simpler and it covers the common case well.
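The shape of that trade-off can be sketched in a few lines - the names here are hypothetical, not the actual Spreadsheets code. The fast common path appends a cheap log entry per edit; the rare failover path rebuilds the in-memory model by replaying the log:

```python
class Primary:
    """Holds the full spreadsheet model in memory; replicates only log entries."""

    def __init__(self):
        self.cells = {}   # in-memory model: cell name -> value
        self.log = []     # ordered edit log, cheap to ship to the other datacenter

    def set_cell(self, cell, value):
        self.cells[cell] = value
        self.log.append((cell, value))   # fast path: just append one entry


def bring_up_replica(log):
    """Failover path: recombine the log entries into a fresh in-memory model.

    Slower than the per-edit path, but acceptable because failover is rare.
    """
    cells = {}
    for cell, value in log:
        cells[cell] = value   # later entries overwrite earlier ones
    return cells
```

The design choice is the point: instead of keeping two live in-memory models in sync on every edit, the system pays a small constant cost per edit and pushes the expensive reconstruction work onto the rare failover event.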
Being thoughtful about the product requirements that make your code more complex, and removing as many of them as you can, is an important part of minimizing “tech debt”. As engineers, we often want to solve every detail of the problem (which is, pretty much, our job), but the real goal is to have the software work as well as possible for as many people as possible. Sometimes, narrowing the design and making it simpler is the right way to achieve that in the long run.