Great design is ruining software



The arrival of the smartphone has convinced the world of the value of great software design, but it’s not all good news

The smartphone has reached more people and delivered more value faster than any technology ever seen. Much of the world has had to adapt to this arrival, but software design suffered the greatest reckoning. As the smartphone ascended, developers finally adopted reasonable design principles, realizing that they could not pack every feature ever seen into the smartphone experience. This recognition of the value of design - and especially, minimal design - is a good thing. Mostly.

I could not be happier that the industry finally accepts that there are principles of design, and that there is a practice and discipline behind building great software. It’s great that we’re seeing more focused software that does little but does it very well, in contrast to the previous age of the GUI, when software attempted to own large parts of our lives by doing anything and everything. For a long time, Microsoft Word was used by nearly everyone who had a computer, and Microsoft’s strategy was to ensure no one ever had a reason to choose something else by building every feature anyone might ever need; its toolbar was the canonical example of never saying no.

The smartphone changed all that. Those rows of icons would fill the screen on a phone and leave no room for typing, and of course, no one would use them anyway because of how different the usage patterns are. As people realized they could no longer just throw in the kitchen sink, they began hiring (and listening to!) actual designers, and those designers have been steeped in the culture of Dieter Rams and the minimalism of the Bauhaus movement, which is awesome. Mostly.

Unfortunately, the phone caused everyone to focus on the final design principle of Dieter Rams (“Good design is as little design as possible”), without apparently remembering the nine that came before it, or why they were earlier in his list. I get it; the design constraints in a phone are intense, and it might not be a good idea to minimize everything, but it sure is easy.

The consequence of this mobile brutalism is a new movement building simpleton tools: Software that anyone can use, but no one can become an expert in.

Trello is a great example. I adore Trello. I think it’s great software, and it’s clearly a success by any measure. However, for all that I’ve relied on Trello daily for years, I feel no more an expert than I did just after starting to use it. It’s not because I haven’t tried; it’s because there’s no depth. You can pretty much plumb the product’s depths in a couple of days.

That’s fantastic for getting new users up to speed quickly, but deeply frustrating after a couple of weeks. Or months. Or years. Compare that with Vim: I still use it for all of my code editing, yet it’s so complicated that most people don’t even know how to quit it, much less use it. I’m not going to claim its lack of user-friendliness is a feature, but I will defend to the death that its complexity is.

Apple’s Notes is the ultimate expression of this trend in text-editor form. It’s a fine text editor. I know some people have written huge, impressive programs in similarly simplistic editors like Notepad on Windows. But I personally could not imagine giving up keyboard navigation, selection, text munging, and everything else I do. The fact that complicated work can be done in simplistic tools speaks to the value of having them, but it in no way invalidates the need for alternatives. Yet, on current trends, no one will even try to build the software I love, because they can’t imagine two billion people using it on a smartphone.

I think it’s fair to say that that’s an unfair standard, and even a damaging one.

I miss the rogue-esque exploration that tool mastery entails. It’s not that I want tools to be hard; I want them to be deep. I want to never run out of ways to invest in my tools. I don’t want to have to swap software to get upgrades; I want to upgrade my understanding instead.

But I look around my computer, and everything on it was designed for the “average” user. I was not average as a CEO with 40+ hours of meetings a week while receiving more than 200 emails a day, nor am I average now as someone who spends more time writing than in meetings. There’s no such thing as an average user, so attempting to build for one just makes software that works equally poorly for everyone.

It is a rookie mistake to conflate the basic user who will never plumb the depths of their tools with the expert user who will learn every nook and cranny of your software. It is a mistake to treat the person who sometimes has to solve a problem the same as a person who spends 80% of their time working on that problem.

I don’t want to be an expert in all of my tools - for all that I take thousands of photos a year, I don’t think I’m up for switching to Adobe Lightroom - but for those tools that I spend the most time in, that most differentiate me, I want the opportunity for true expertise. And I’d happily pay for it.

Back in the days when computer screens were tiny, there were plenty of stats that showed that paying for an extra screen would often give people a 10% or more boost in productivity. I know it did that for me. As a business owner, it was trivial to justify that expense. Monitors cost a lot less than 10% of a person’s salary, and don’t need to be replaced every year. Heck, the whole point of the automation company I built was to allow people to focus their efforts on the most valuable work they could do.

Yet, when it comes to software being built and purchased today, to the tools we use on a daily basis, somehow our software ecosystem is failing us. There is no calendar I can buy that makes me 10% better, no email client available that I can spend five years getting better at.

It’s great that people are finally making software that everyone can use, but that’s no excuse to stop making software for specialists, for experts, for people who could get the most advantage from that extra 10%.

Please. Go build it. I know I’ll buy it.

Where does your work live?



Most of our software is confused about what job we’ve hired it for

I’ve really enjoyed playing Zelda: Breath of the Wild, but my life has been changed more by one of its reviews than by the game itself. The review had a unique view on what made the game so great. It contrasted Zelda to other games - Destiny, for example - saying that while others tended to distribute gameplay across multiple areas (e.g., in Destiny, the radar is a critical part of the game), Zelda really focuses the game into the main screen where you walk, glide, ride, and fight.

The review (which I unfortunately cannot find, because of the quantity of posts online that all use similar words) called this “where the game lives”. I love what this phrase evokes. I absolutely loved the game Borderlands, but I was deeply frightened of ever finding out how much time I spent at its store screen, because item collection and management was such an important part of the game. A lot of its fun was specifically from the collection, rather than the playing, but that meant a large chunk of the game lived in the store, as opposed to out in the world.

Most of our software could use a similar dissection.

Like Destiny and Borderlands (which are both great, and quite similar), the tools we use show a surprising distance between what they help us do and what we’ve hired them for. If I may be permitted to steal from this review, this distance is a sign that our software is confused about where our work lives.

To pick a counter-example, I’m writing this post in Ulysses. People who choose this software laud its simplicity, which makes it easy to focus. What they really mean is that all you can do with it is write. There’s almost no formatting, very little organization, very little anything but writing. The work lives in the writing. (My first draft was written on an iPad, which further simplifies that focus.)

Contrast that with any task or project management tool. My wife and I are in the middle of planning a bunch of camping, and we’re using Trello to organize many of the options. What is Trello’s opinion about where the work lives?

Last time I looked, my wife had three browser windows open, each with about fifteen tabs. She’s also working in RoadTrippers (Pro, natch). To get this work into Trello is a process of copying, pasting, writing copy about why you pasted it, and then using Trello to file it so you can find and manage it later.

In this operation, where does the work live? It’s scattered across maps, calendars, browsers, and applications like RoadTrippers. Does Trello know that? Does it agree? How does its opinion of where the work lives affect its utility? Brief introspection leads us to conclude Trello has no idea where the work lives, and the humans using it are entirely responsible for connecting the two.

Here’s a simple exercise for anyone using a task tracking app: Envision yourself going into that app and just marking everything done, even though you obviously haven’t done the work. It hurts to even consider, doesn’t it? Your brain has absorbed that these tasks are representations of work, and it’s your job to match the representation to the work, because you know the tool won’t do it for you. When you mark something done, of course nothing goes out and does the work; you’re just lying to your software about the state of the world. And it has no idea! This disconnect is what leads to an allergic response to the idea of marking work done in software that is not yet done in the real world.

I’d like to say that Trello was just a bad example, but I think all task tools share this confusion. Bug trackers and project management tools are specialized examples of this, and they obviously have no idea where the work lives. If I’m writing code, all of the work is done in my text editor, in files on disk, and maybe in my testing tools to ensure the work is done and done right. I then go somewhere entirely different to mark the work done. Why? Shouldn’t GitHub know it already? Why do I have to explain it? The answer is that these trackers think tracking is the work, when of course the work is the work.

It’s no better in personal tools. I just started using Things 3 for my own tasks, nearly all of which end up being expressed in email or calendars, yet Things 3 has no conception of either. It has no idea where my work lives, and expects me to put out all of the effort necessary to connect them.

Speaking of email and calendars, they have their own role in this conversation.

Email is interesting. Everyone hates it because it’s so important that we use it constantly, yet that animosity is a result of its utility and criticality. In other words, people hate it because it works so well. But when you’re doing email, what work are you actually doing?

I’m not sure I know. You’re communicating. But usually, you’re communicating about some other kind of work, like a document, a meeting, or some kind of activity that takes place outside of the inbox. A well-designed application will remove the need for communication via email - Google Docs is a great example of this. Its sharing and commenting features have allowed many discussions to move from email to where the work is, in the document itself; its addition of suggestions has doubled down on focusing on the work, rather than talking about the work. (Note that this is completely different from Slack, which advertises that it gets rid of email, by which it means it moves the conversation, not that it does a better job of bringing the work into the software.)

Of course, how do you have Google Docs tell you someone commented on your document? Email. :)

What about calendars? Why do calendars exist? As a tool, where does their work live?

I am thankful to have had to try to explain my position on this to a friend; otherwise I’d think it was easy to understand. It’s so counter to how people work today that a relatively obvious truth becomes impossibly counter-intuitive: calendars are about how I spend my time.

When using a calendar, the work is what you actually do. You, a person, out in the world. That’s what the calendar is about. Its job is to ensure you do the right things at the right time, with the right people, in the right place. It’s about doing, not documenting, managing, or notifying. You can put something in a calendar and not do it, or do work that’s not in the calendar; any of us would say, obviously, that it’s what you do that matters, not what the calendar says. Merely creating an event has no effect, and thus no value; it only matters if it then affects your behavior. The work lives in what you do. But does your calendar make even the slightest attempt to directly manage how you spend your time? What would that even look like?

To pick a small example, my calendar apps seem to not care what city I’m currently in, or where I’m physically located. Isn’t this weird? The tool whose primary job is to manage where I am physically located makes no attempt to represent or take into account the core fact it is meant to control. It still dumbfounds me.

Yes, they can tell me in real time when I should leave for a meeting based on travel time (as long as travel involves driving, rather than walking down the hall to a conference room), but they can’t say, “Given that on Tuesday you’ll be in Portland, working from home, you should block out travel time to get downtown to lunch and back”. That is, they can alert me in the moment, but they can’t do their core job - reserving time to ensure I’ll be doing the right thing in the future. Because they can’t do this, I have to create those blocks myself, or else I’ll find myself choosing between skipping one appointment and being late to another. The whole point of a calendar is to manage time, but in this simple example calendars fail to ensure I will have space to transition my corporeal existence between physical locations. Shouldn’t that be step one, rather than an exercise left to the human?
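To make that “step one” concrete, here is a minimal sketch of the logic I’m describing: automatically inserting travel blocks between consecutive events that happen in different places. The event structure and the travel_minutes() routine are hypothetical stand-ins of my own invention - no calendar I know of exposes this behavior - so treat this as an illustration of the idea, not any real product’s API.

```python
from datetime import datetime, timedelta

def travel_minutes(origin, destination):
    """Stand-in for a real routing service; returns a flat estimate.

    A real implementation would ask a maps API for door-to-door time.
    """
    return 0 if origin == destination else 25

def travel_blocks(events):
    """Given events sorted by start time, yield travel blocks to insert
    between consecutive events that happen in different places."""
    for prev, nxt in zip(events, events[1:]):
        minutes = travel_minutes(prev["location"], nxt["location"])
        if minutes:
            yield {
                "title": f"Travel to {nxt['location']}",
                "start": nxt["start"] - timedelta(minutes=minutes),
                "end": nxt["start"],
            }

# A hypothetical day: writing at home, then lunch downtown.
day = [
    {"title": "Writing", "start": datetime(2022, 6, 7, 9, 0),
     "end": datetime(2022, 6, 7, 11, 30), "location": "Home"},
    {"title": "Lunch downtown", "start": datetime(2022, 6, 7, 12, 0),
     "end": datetime(2022, 6, 7, 13, 0), "location": "Downtown Portland"},
]

for block in travel_blocks(day):
    print(f"{block['title']}: {block['start']:%H:%M}-{block['end']:%H:%M}")
```

The point isn’t the code; it’s that the calendar already holds every piece of data this sketch needs - it just never uses it.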

I also reserve time for tasks I do alone every day, like working out and writing. I do this primarily to ensure it gets done, rather than because those times are special (although I do get a bit jittery now if I don’t write first thing in the morning). There’s no way to explain to my calendar what I’ve actually blocked that time out for, and thus no way for it to respond to whether I’ve done it or not, even though my computer knows if I’ve done my writing, and my watch knows if I’ve worked out. Wouldn’t it be great to see your calendar dynamically rearranging your day because it noticed you missed your workout?

My calendar is confused about what work I’ve hired it to do, and therefore does not know it needs to look in those places.

We’re so used to the idea that our software represents the work that we seem to have lost hope that it will actually help us do it. Most of the tools we use are entirely disconnected from the work they’re supposed to help us with. Marking something done does not do it, deleting email does not indicate communication has happened, sitting at your computer while your calendar says you’re writing does not produce text. The representations are not the work, yet we forgive our tools for only dealing in representations, not actual work.

I don’t know if that reviewer was right about why Zelda: BoTW is so great. I can’t even imagine what all the software I use would look like if it were built around where my work lived, rather than merely being used to model and manage it.

What I do know is that our software can and should be built to help us do the jobs we’ve hired it for. But because it is confused about why we use it, what we do every day is lower quality, less fun, and just downright confusing.

This also shows just how much opportunity there is to improve the software we use on a daily basis.
