Abandoning Trello

On the value of change and focus

I have been a passionate advocate of Trello almost since the day it came out. Visualizing the work is incredibly powerful, and its use of priority by ordering is so obviously better I could not imagine using anything else. I have brought Trello into companies, and caused hundreds of people to use it. Recently, though, I stopped using it in key functions, moving my personal tasks into Things and other work into the applications that own it. It started as an experiment, but quickly snowballed into a full transition as I realized how much better it would be for my core use cases.

I am a productivity junkie, from GTD to Deep Work, and while I know the tools aren’t the work, they’re one of the fastest and easiest ways to change a system, bringing new perspective and abilities.

I initially adopted Trello for managing my personal tasks. From there, it slowly spread to group and company use cases, and by the time I left Puppet in 2016 it was baked into our core business operations.

I had considered switching away from it for my personal tasks a few times, because it’s been obvious for years that it was not a fit for how I managed and processed my life’s flow of work. I had always taken great pride in my productivity system, but I could no longer muster the discipline it required, which meant it had become extraneous process and I was also failing to do my job. I sank so low that I began sending personal to-do items to my inbox. (Shudder.) However, my life as CEO was busy, yet not really built around task execution, so it was never worth the cost of the tooling change. Or so it seemed.

While I abhor tooling changes for their own sake, sometimes you just have to experiment, and when I had a run of open space I decided it was time. Even if other places were worse, I was better off spending time elsewhere to get out of my funk than I was continuing to stare at this board that just wasn’t working for me.

As I looked to move away from Trello, I wanted to switch to OmniFocus, but the timing was wrong. I recently downloaded the current version, but it reminded me why I stopped using it years ago: It asks too much of me.[1] The biggest disconnects were the need for contexts, and the inability to set priority by dragging items around. It looks like the impending next version will be a great compromise for me, but I can’t use it until it arrives.

As I began moving tasks into Things, I was slowly reminded what it was like to have my task app be, well, an app. Before Trello I used Asana, at a time when its mobile experience was just a badly packaged web page. It’s been years since I had the pleasure of working with someone who bothered to build for the platforms I used. What a difference it makes.

Processing this small but meaningful change took over my brain for the rest of the day. I could not let go of the transition, but I also could not stop thinking about its consequences. “Thinking” is too generous. I had not seriously looked outside of Trello for years; all of my habits and instincts are built within its constraints, its physics model. That meant wide-open opportunity to construct whatever world I wanted, but also the need to do so, and embarrassing mechanisms for getting in and out. Creating a quick task in Trello is most easily accomplished by sending myself an email, which tells you how seamless the core experience really is. Telling Trello of work to be done in another application usually required custom AppleScripts to connect the two (e.g., creating message:// URLs).
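
For the curious, here’s a sketch of that glue in spirit, written in Python rather than the AppleScript I actually used. It files an email as a Trello card via Trello’s public card-creation endpoint, linking back through macOS Mail’s message:// URL scheme; the key, token, and list ID are placeholders:

```python
# A sketch of the email-to-Trello glue described above. The key, token, and
# list ID are placeholders; the endpoint is Trello's public REST API, and
# message:// is the URL scheme macOS Mail uses for deep links to messages.
import urllib.parse

import requests

TRELLO_KEY = "your-api-key"      # placeholder
TRELLO_TOKEN = "your-api-token"  # placeholder
INBOX_LIST_ID = "your-list-id"   # placeholder: the Trello list to file into


def message_url(message_id: str) -> str:
    """Build a macOS Mail deep link from an RFC 822 Message-ID."""
    return "message://%3C" + urllib.parse.quote(message_id, safe="@") + "%3E"


def create_card_for_email(subject: str, message_id: str) -> dict:
    """File an email as a Trello card whose description links back to it."""
    resp = requests.post(
        "https://api.trello.com/1/cards",
        params={
            "key": TRELLO_KEY,
            "token": TRELLO_TOKEN,
            "idList": INBOX_LIST_ID,
            "name": subject,
            "desc": message_url(message_id),
        },
    )
    resp.raise_for_status()
    return resp.json()
```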

The keyboard’s change in utility keeps hitting me. I did not realize how rarely I used Trello’s keyboard shortcuts, largely because of their sheer disagreement with my hands. Partially, they were insane — ‘c’ archives cards, when I kept expecting it to create them, and repeating that mistake kept my hands off the keyboard — but also, they were just different. I keep visualizing myself sitting at Things, fingers poised over the keyboard, in a world that makes sense to me. It’s pleasant.

I have a lot of work in Trello. Basically everything I’ve recorded in the last four years about what I could or should do is there, and it’s been the platform for organizing all of that information. I’ve always been pragmatic, so even when I built it I would not have said it was the “best” answer, just that it worked at that time for my use cases.

Now I am questioning everything I’ve done recently — not to ask if I’ve made a mistake, just wondering what the world would look like if I had done that work differently. Are there projects languishing from sheer friction, that would suddenly become easier if migrated?

The answer in at least one other case is ‘yes’. I had a Trello board for managing all of my writing ideas, with a standard left-to-right workflow from idea to writing to done. I just never really used it. Instead, I did all of the management of writing in the same place I did my actual writing: Ulysses. I built a workflow from folders[2] instead of lists[3], and eventually I realized that the Trello board was hurting, not helping. The ideas I put in there languished, and Ulysses is just as good at capturing a sentence or two about a post as it is for writing the entire thing. The board is now closed, and my writing is in the same place as all of the work needed to manage it.

Tools do not make the difference, but tools can make a difference. A thing can go from undoable to tractable with a small change in tools, just as a math problem can go from impossible to trivial if you flip the equation around a bit.

I know Things won’t work for many of the use cases I have in Trello, partially because it’s so obviously a single-person tool, and much of my work does not have a home I can move the organizational work into. E.g., I have a list of lists in Trello of all of the day trips I could consider taking around Portland, and I can’t think of where I’d move that. I don’t know where those other problems get solved now. Maybe they don’t change.

But I know my standards have been reset. I know I made a mistake letting myself sit in one tool for years, unhappy but unwilling to invest in the change. It’s a wonderful feeling to be looking at the world through new eyes again.

  1. To be fair, I also stopped using it in the days when it took 45 seconds to open the app on an iPhone, because that’s how slow the data synchronization was.
  2. Which they call ‘Groups’.
  3. Really, are they any different?

Great design is ruining software

The arrival of the smartphone has convinced the world of the value of great software design, but it’s not all good news

The smartphone has reached more people and delivered more value faster than any technology ever seen. Much of the world has had to adapt to this arrival, but software design suffered the greatest reckoning. As the smartphone ascended, developers finally adopted reasonable design principles, realizing that they could not pack every feature ever seen into the smartphone experience. This recognition of the value of design — and especially, minimal design — is a good thing. Mostly.

I could not be happier that the industry finally accepts that there are principles of design, and that there is a practice and discipline behind building great software. It’s great that we’re seeing more focused software that does little, but does it very well, rather than software from the previous age of the GUI, which attempted to own large parts of our lives by doing anything and everything. For a long time, Microsoft Word was used by nearly everyone who had a computer, and their strategy was to ensure no one ever had a reason to choose something else by building every feature anyone might ever need; their toolbar was the canonical example of never saying no.

The smartphone changed all that. Those rows of icons would fill the screen on a phone and leave no room for typing, and of course, no one would use them anyway because of how different the usage patterns are. As people realized they could no longer just throw in the kitchen sink, they began hiring (and listening to!) actual designers, and those designers have been steeped in the culture of Dieter Rams and the minimalism of the Bauhaus movement, which is awesome. Mostly.

Unfortunately, the phone caused everyone to focus on the final design principle of Dieter Rams (“Good design is as little design as possible”), without apparently remembering the nine that came before it, or why they were earlier in his list. I get it; the design constraints in a phone are intense, and it might not be a good idea to minimize everything, but it sure is easy.

The consequence of this mobile brutalism is a new movement building simpleton tools: Software that anyone can use, but no one can become an expert in.

Trello is a great example. I adore Trello. I think it’s great software, and it’s clearly a success by any measure. However, for all that I’ve relied on Trello daily for years, I feel no more an expert than I did just after starting to use it. It’s not because I haven’t tried; it’s because there’s no depth. You can pretty much plumb the product in a couple of days.

That’s fantastic for getting new users up to speed quickly, but deeply frustrating after a couple of weeks. Or months. Or years. Compare that with Vim, which I still use for all of my code editing, yet it’s so complicated that most people don’t even know how to quit it, much less use it. I’m not going to claim its lack of user friendliness is a feature, but I will defend to the death that its complexity is.

Apple’s Notes is the ultimate expression of this trend in text editor form. It’s a fine text editor. I know some people have written huge, impressive programs in similarly simplistic editors like Notepad on Windows. But I personally could not imagine giving up keyboard navigation, selection, text munging, and everything else I do. The fact that complicated work can be done in simplistic tools speaks to the value of having them, but in no way invalidates the need for alternatives. Yet, on current trends, no one will even try to build the software I love, because they can’t imagine two billion people using it on a smartphone.

I think it’s fair to say that that’s an unfair standard, and even a damaging one.

I miss the rogue-esque exploration that tool mastery entails. It’s not that I want tools to be hard; I want them to be deep. I want to never run out of ways to invest in my tools. I don’t want to have to swap software to get upgrades, I want to upgrade my understanding instead.

But I look around my computer, and everything on it was designed for the “average” user. I was not average as a CEO with 40+ hours of meetings a week while receiving more than 200 emails a day, nor am I average now as someone who spends more time writing than in meetings. There’s no such thing as an average user, so attempting to build for one just makes software that works equally poorly for everyone.

It is a rookie mistake to conflate the basic user who will never plumb the depths of their tools with the expert user who will learn every nook and cranny of your software. It is a mistake to treat the person who sometimes has to solve a problem the same as a person who spends 80% of their time working on that problem.

I don’t want to be an expert in all of my tools — for all that I take thousands of photos a year, I don’t think I’m up for switching to Adobe Lightroom — but for those tools that I spend the most time in, that most differentiate me, I want the opportunity for true expertise. And I’d happily pay for it.

Back in the days when computer screens were tiny, there were plenty of stats that showed that paying for an extra screen would often give people a 10% or more boost in productivity. I know it did that for me. As a business owner, it was trivial to justify that expense. Monitors cost a lot less than 10% of a person’s salary, and don’t need to be replaced every year. Heck, the whole point of the automation company I built was to allow people to focus their efforts on the most valuable work they could do.

Yet, when it comes to software being built and purchased today, to the tools we use on a daily basis, somehow our software ecosystem is failing us. There is no calendar I can buy that makes me 10% better, no email client available that I can spend five years getting better at.

It’s great that people are finally making software that everyone can use, but that’s no excuse to stop making software for specialists, for experts, for people who could get the most advantage from that extra 10%.

Please. Go build it. I know I’ll buy it.

Putting OKRs Into Practice

The true story of trying to put Google’s planning system into use

When Google was less than a year old, they began using a planning system presented by legendary venture capitalist John Doerr of Kleiner Perkins. When I went to put it into practice at Puppet in the early days of growing the team, things were not as easy as they appeared. Success required creating a complete solution, not just a description of the documents you need to produce.

When I went to try to use the system as described by Doerr, I had multiple questions it didn’t answer. Just to start with: when and how do you make and update these OKRs? It’s great to say you should have this record of your goals, but I could easily come up with multiple conflicting mechanisms for developing it, none of which is obviously better:

  • The CEO could develop them independently and deliver them to the team
  • The executive team could develop them collaboratively
  • They could be sourced from the front-line team

None of these is obviously right or wrong, and of course, none is a sufficient explanation of how to do it. Do you do it in one sitting? Multiple revisions? How long should you spend on it? How often should you update them? Can you change them mid-stream if your situation obviously changes? There’s a lot left to the reader. You can say it doesn’t matter, but of course, it does, and even if you’re right, you still have to pick one. Why go through the effort of describing the output but skip the whole process you used to create and maintain it?

Here’s how we did it.

Startup Days

Start by reading John Doerr’s original presentation, even though it’s relatively thin. In summary, you should have three to five top-level objectives, and each of these should have a couple of key results associated with it. Together these constitute a company’s Objectives and Key Results, or “OKRs”. These should then cascade down to the rest of your team, so that each team and person has OKRs. This is a useful high-level tool for communication and focus, even in small teams. (Note that I’ll use ‘goals’ and ‘objectives’ interchangeably here; far more people use the shorter term in practice, and we treated them equivalently.)
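
To make that structure concrete, here’s a minimal sketch of the data involved; the class and field names are mine, not part of Doerr’s system:

```python
# A minimal sketch of the OKR structure described above; names are mine,
# not part of Doerr's system. A company carries three to five objectives,
# each with a couple of key results, and each team's OKRs cascade from the
# level above.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class KeyResult:
    description: str
    score: float | None = None  # graded 0.0 to 1.0 at the end of the period


@dataclass
class Objective:
    description: str
    key_results: list[KeyResult] = field(default_factory=list)


@dataclass
class OKRSet:
    owner: str                    # "Company", a team, or a person
    period: str                   # e.g. "2015-Q3"
    objectives: list[Objective] = field(default_factory=list)
    parent: OKRSet | None = None  # the level these cascade down from


company = OKRSet(owner="Company", period="2015-Q3", objectives=[
    Objective("Ship the new release", [
        KeyResult("GA by September 1"),
        KeyResult("Zero P1 bugs open at launch"),
    ]),
])

# A team builds its own OKRs from, and links back to, the company's.
engineering = OKRSet(owner="Engineering", period="2015-Q3", parent=company)
```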

At Puppet, we spoke of an operational rhythm, which is essentially the set of repetitious tasks we run and the cadence we execute them on to keep the business working. But the OKR system as presented includes no operational rhythm, no indication that people are involved in creating these goals or that doing so takes any time. So we invented our own rhythm:

  • As early as possible each period, the management team meets to decide the company OKRs. This started out as a 45-minute meeting that just recorded the goals that were in my head, but evolved over years into a two-day offsite where the leadership team first acquired a shared understanding of where the business was and what we needed to do, then built the goals from there. In retrospect we should have put in these longer days earlier; your team should frequently think deeply about what you should be working on, rather than just running all the time.
  • The rest of the company has some time to build its OKRs from the top-level goals. Initially this was a couple of days, but it eventually morphed into a couple of weeks.
  • These cascaded goals are then used to modify the company OKRs if needed. (In other words, we supported a merged top-down and bottom-up planning model.) This is when management would learn if our view of reality was materially different from that of the people at the front line.
  • At the end of every period, the management team records how we did against our goals. Again, this began as just writing down the score, but grew to become a more complete retrospective run by a project manager. This meeting is at most a couple of hours long, and just includes the leadership team.

When we began this process, we wanted short-term goals, so we ran this cadence eight times a year; thus, we called our planning periods “octaves.” As we matured and could think and execute in a more long-term fashion, we reduced this to quarterly.

I think this system is sufficient for most companies of 15 to 250 people. Some companies might grow out of this at relatively few people, whereas others might scale very well with it. I expect most people could scale this system successfully by gradually increasing the amount of time spent on each session, with more time in deep discussion, and also by assigning a project manager to run it. I ran the whole process until we were probably 250 people, which was a mistake that took too much of my time, resulted in too centralized of an organization, and limited our effectiveness because I suck at project management.

Note that these are pointedly not plans; that is, they are not step-by-step instructions for how to achieve a goal. We’re declaring what we want done, but not how we expect to do it. This is both a blessing and a curse. On the one hand, it provides a lot of freedom for people at the front line to figure out the right way of accomplishing something, but it also leaves a gaping hole in your organization. At some point, someone has to actually do the work, but where in your operational rhythm does a team translate goals into a plan for accomplishing them? Do you make that time? We didn’t until far too late, and it mattered.

Scaling

As we scaled the company and this system, we found a few critical gaps.

The biggest one is obvious enough that I cringe now just thinking about it. You would never try to build a product without being clear on who would do the work, and of course, you shouldn’t try to accomplish your company’s goals without assigning each objective and key result to an individual, yet our initial version (and the one presented by Doerr) had nothing to say about people. At some point we added the requirement that every objective have a name assigned to it, which was a huge change for us — and a really positive one.

The lack of accountability for each goal was exacerbated by the fact that we didn’t have any mechanism for in-quarter check-ins on the goals. We’d frequently only find out at the end of a quarter that a goal was going to be missed, when it was far too late to do anything about it. So we built a weekly operations review (“ops review”) where we reviewed progress against the goals. This meeting is a predictive exercise, not a status statement. Goals are green if you expect to accomplish them on time, even if you’re still two months away from the deadline. We mostly focus just on the areas we don’t expect to hit, which allows us to invest early in correcting our execution or changing our expectations.
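
To make the distinction concrete, here’s a minimal sketch, with names of my own invention; the point is that what a check-in records is a prediction about the end of the quarter, not a completion percentage:

```python
# A sketch of the ops-review semantics described above; the names are
# illustrative, not from any tool we actually used. The recorded field is a
# prediction about the end of the quarter, not a status report.
from dataclasses import dataclass
from enum import Enum


class Outlook(Enum):
    GREEN = "expected to land on time"
    YELLOW = "at risk; needs a decision or help"
    RED = "will miss unless something material changes"


@dataclass
class WeeklyCheckin:
    objective: str
    owner: str
    outlook: Outlook
    action_plan: str = ""  # expected whenever the outlook is not GREEN


def needs_discussion(checkin: WeeklyCheckin) -> bool:
    """The meeting spends its time on predicted misses, not on status."""
    return checkin.outlook is not Outlook.GREEN
```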

It’s worth reiterating, because this was so hard to get people to understand: The goal of the ops review was not to describe the status of each goal; it was to build a shared understanding of whether we were likely to achieve our goals and then build an action plan to resolve the predicted misses. The majority of people entered that meeting with a belief that they needed to justify their paycheck, and it took a lot of education to get them to understand the real purpose.

This addition to our rhythm was pretty awesome. In one move, it basically eliminated the firefighting that had driven so much of our execution. We still had fires periodically, but they were actual surprises, not just sudden surfacing of old information, or realizing at the end of the quarter that a goal never had an owner.

The downside of the ops review is that it’s expensive (it necessarily includes a lot of top people at the company) and it takes a lot of work to make this kind of meeting worthwhile every week. I got the idea for this meeting from the excellent American Icon, about how Alan Mulally turned around Ford. A long, weekly operations review with his senior team was one of his key tactics. My team often complained that weekly was too frequent, but if a company as big as Ford was responding weekly to the conditions on the ground, shouldn’t a small startup be at least that responsive?

Around this time, we integrated the budgeting process into the planning process. It’s important to recognize they’re different — you should build the plan you want and then find a way to budget for it, rather than building a budget for your departments and then letting them decide how to spend it. It’s important that you be good at both, though, and it was around this stage that we started to develop the budgeting skill and learn how to integrate it into planning. That was painful, to put it mildly.

As we scaled, the company goals tended to get expressed in terms of departmental targets within sales, marketing, engineering, etc. When we were small, this seemed like a feature because it had natural lines of ownership, but as we grew it became clear it was a critical flaw. It’s important to translate plans to people and teams, but this was dysfunctional. It discouraged people from building goals that relied on other teams, and thus encouraged silos in the company. Talk about a failure mode. When we added names to each objective, we rebuilt the whole process to be structured from the top down around company goals rather than team goals, which allowed us to crack this departmental view and force shared goals and collaborative execution.

We also eventually added a layer of OKRs above our annual goals, giving us a roughly three-year time horizon. These became crucial in deciding and sharing what the priorities were for a given year.

What might come next?

The above roughly describes the system as it stood when I stepped down from Puppet in 2016. It was obvious at the time that we were in need of another step-change in capability in our planning system, but the new CEO took responsibility for driving that. By the time I left, we could see many opportunities to improve what we were doing.

The big one is that we needed to push all the local knowledge about this process into code. We were using multiple different formats and tools, because different meetings require different interactions, and it was too difficult for most people to track what was happening, where, and why. For instance, our source of truth for the OKRs themselves tended to reside in Trello, but it’s a poor fit for storing updates and presenting the predictions of whether a goal would land. I couldn’t imagine trying to run a report on quantitative goals based on Trello data. Thus, we ended up storing the weekly updates in spreadsheets, which are exactly as powerful and readable as shell scripts. It meant we couldn’t trust most people to update the data, because the document was so complicated. I would have loved a single source of truth that anyone could use. In addition, I wanted an app to automatically pull any data from original sources so I didn’t have team members doing manual work that could be automated (I mean, duh, Puppet is an automation company).
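
To illustrate, here’s a sketch of the kind of updater I had in mind; nothing like this actually existed at Puppet, and fetch_metric() is a stand-in for whatever API the owning system (CRM, billing, CI) actually exposes:

```python
# A sketch of the automated updater described above, not anything we built.
# Quantitative key results declare where their data lives, and a scheduled
# job pulls the numbers instead of a human re-typing them into a spreadsheet.
from dataclasses import dataclass


@dataclass
class QuantitativeKR:
    description: str
    metric: str    # e.g. "bookings.q3" in some hypothetical metric store
    target: float


def fetch_metric(name: str) -> float:
    """Stand-in for pulling the current value from the owning system."""
    raise NotImplementedError


def weekly_update(kr: QuantitativeKR) -> dict:
    """Record one week's numbers for a key result, with no manual entry."""
    current = fetch_metric(kr.metric)
    return {
        "key_result": kr.description,
        "current": current,
        "target": kr.target,
        "attainment": current / kr.target,
    }
```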

I also wanted a significantly better retrospective process that truly helped us improve the business by laying bare how our wonderfully laid plans went wrong. We were good at the work of looking back and being transparent about where we were, but there was a lot of room for improving how we tied that work to how we operated.

Lastly, I hate that our goals were built around quarters. I think having a cadence for building and validating plans is critical, but it’s silly that this cadence got translated into the timelines for the goals themselves. It often implied that each of our goals would take exactly a single planning cycle. Some obviously do — we have quarterly sales targets that we need to hit during exactly a quarter — but many of our top-level objectives were shoehorned into a quarterly system. I’d much prefer a Kanban-style on-demand planning system that would allow us to have a high-fidelity plan for what we’re working on now, and a quality backlog for what we’ll do as goals complete.

Conclusion

I’m not convinced it matters much what planning and execution system you use, but I’m utterly convinced you should have one. In the end, it’s merely a team-wide mechanism for developing, communicating, and tracking what you’re trying to achieve. It’s obviously important to have goals. I think most of us would agree you should, in some way, share those goals with the team so everyone is working toward the same ends. And, of course, your goals tomorrow should probably be somehow related to your goals today. (This is surprisingly hard.)

If you don’t have one yet, you could do worse than building an operational rhythm from what we built at Puppet. You’ll have to work through a lot of initial discomfort as you translate vague words into technical terms whose meaning is widely agreed upon around your team. But it’ll be worth it.

Where does your work live?

Most of our software is confused about what job we’ve hired it for

I’ve really enjoyed playing Zelda: Breath of the Wild, but my life has been changed more by one of its reviews than by the game itself. The review had a unique view on what made the game so great. It contrasted Zelda to other games — Destiny, for example — saying that while others tended to distribute gameplay across multiple areas (e.g., in Destiny, the radar is a critical part of the game), Zelda really focuses the game into the main screen where you walk, glide, ride, and fight.

The review (which I unfortunately cannot find, because of the quantity of posts online that all use similar words) called this “where the game lives”. I love what this phrase evokes. I absolutely loved the game Borderlands, but I was deeply frightened of ever finding out how much time I spent at its store screen, because item collection and management was such an important part of the game. A lot of its fun was specifically from the collection, rather than the playing, but that meant a large chunk of the game lived in the store, as opposed to out in the world.

Most of our software could use a similar dissection.

Like Destiny and Borderlands (which are both great, and quite similar), the tools we use show a surprising distance between what they help us do and what we’ve hired them for. If I may be permitted to steal from this review, this distance is a sign that our software is confused about where our work lives.

To pick a counter-example, I’m writing this post in Ulysses. People who choose this software laud its simplicity, which makes it easy to focus. What they really mean is, all you can do with it is write. There’s almost no formatting, very little organization, very little anything but writing. The work lives in the writing. (My first draft was written on an iPad, which further simplifies that focus.)

Contrast that with any task or project management tool. My wife and I are in the middle of planning a bunch of camping, and we’re using Trello to organize many of the options. What is Trello’s opinion about where the work lives?

Last time I looked, my wife had three browser windows open, each with about fifteen tabs. She’s also working in RoadTrippers (Pro, natch). To get this work into Trello is a process of copying, pasting, writing copy about why you pasted it, and then using Trello to file it so you can find and manage it later.

In this operation, where does the work live? It’s scattered across maps, calendars, browsers, and applications like RoadTrippers. Does Trello know that? Does it agree? How does its opinion of where the work lives affect its utility? Brief introspection leads us to conclude Trello has no idea where the work lives, and the humans using it are entirely responsible for connecting the two.

Here’s a simple exercise for anyone using a task tracking app: Envision yourself going into that app and just marking everything done, even though you obviously haven’t done the work. It hurts to even consider, doesn’t it? Your brain has absorbed that these tasks are representations of work, and it’s your job to match the representation to the work, because you know the tool won’t do it for you. When you mark something done, of course nothing goes out and does the work; you’re just lying to your software about the state of the world. And it has no idea! This disconnect is what leads to an allergic response to the idea of marking work done in software that is not yet done in the real world.

I’d like to say that Trello was just a bad example, but I think all task tools share this confusion. Bug trackers and project management tools are specialized examples of this, and they obviously have no idea where the work lives. If I’m writing code, all of the work is done in my text editor, in files on disk, and maybe in my testing tools to ensure the work is done and done right. I then go somewhere entirely different to mark the work done. Why? Shouldn’t GitHub know it already? Why do I have to explain it? The answer is because these trackers think tracking is the work, when of course, the work is the work.

It’s no better in personal tools. I just started using Things 3 for my own tasks, nearly all of which end up being expressed in email or calendars, yet Things 3 has no conception of either. It has no idea where my work lives, and expects me to put out all of the effort necessary to connect them.

Speaking of email and calendars, they have their own role in this conversation.

Email is interesting. Everyone hates it, because it’s so important to everyone that we use it constantly, yet this animosity is a result of its utility and criticality. In other words, people hate it because it works so well. But when you’re doing email, what work are you actually doing?

I’m not sure I know. You’re communicating. But usually, you’re communicating about some other kind of work, like a document, a meeting, or some kind of activity that takes place outside of the inbox. A well designed application will remove the need for communication via email — Google Docs is a great example of this. Its sharing and commenting features have allowed many discussions to move from email to where the work is, in the document itself; their addition of suggestions has doubled down on focusing on the work, rather than talking about the work. (Note that this is completely different from Slack, which advertises that it gets rid of email, by which it means it moves the conversation, not that it does a better job of bringing the work into the software.)

Of course, how do you have Google Docs tell you someone commented on your document? Email. 🙂

What about calendars? Why do calendars exist? As a tool, where does their work live?

I am thankful to have had to try to explain to a friend my position on this, otherwise I’d think it was easy to understand. It’s so counter to how people work today that a relatively obvious truth is impossibly counter-intuitive: calendars are about how I spend my time.

When using a calendar, the work is what you actually do. You, a person, out in the world. That’s what the calendar is about. Its job is to ensure you do the right things at the right time, with the right people, in the right place. It’s about doing, not documenting, managing, or notifying. You can put something in a calendar and not do it, or do work that’s not in the calendar; any of us would say, obviously, that it’s what you do that matters, not what the calendar says. Merely creating an event has no effect, and thus no value; it only matters if it then affects your behavior. The work lives in what you do. But does your calendar make even the slightest attempt to directly manage how you spend your time? What would that even look like?

To pick a small example, my calendar apps seem to not care what city I’m currently in, or where I’m physically located. Isn’t this weird? The tool whose primary job is to manage where I am physically located makes no attempt to represent or take into account the core fact it is meant to control. It still dumbfounds me.

Yes, they can tell me in real time when I should leave for a meeting based on travel time (as long as travel involves driving, rather than walking down the hall to a conference room), but they can’t say, “Given that on Tuesday you’ll be in Portland, working from home, you should block out travel time to get downtown to lunch and back”. That is, they can alert me in the moment, but they can’t do their core job — reserving time to ensure I’ll be doing the right thing in the future. Because they can’t do this, I have to create those blocks myself, else I’ll find myself choosing between skipping one appointment or being late to another. The whole point of a calendar is to manage time, but in this simple example they fail to ensure I will have space to transition my corporeal existence between physical locations. Shouldn’t that be step one, rather than an exercise left to the human?
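
The behavior I’m asking for is mechanical enough to sketch. Here’s a rough Python version, where travel_minutes() stands in for a real routing service and events are assumed to be sorted by start time:

```python
# A sketch of the travel-blocking behavior described above: walk a day's
# events and reserve transit time whenever consecutive events happen in
# different places. travel_minutes() stands in for a real routing service.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Event:
    title: str
    start: datetime
    end: datetime
    location: str


def travel_minutes(origin: str, destination: str) -> int:
    """Stand-in for a routing API (driving, walking, or transit)."""
    raise NotImplementedError


def travel_blocks(events: list[Event]) -> list[Event]:
    """Build the transit reservations a calendar should make for itself."""
    blocks = []
    for prev, nxt in zip(events, events[1:]):
        if prev.location == nxt.location:
            continue
        minutes = travel_minutes(prev.location, nxt.location)
        depart = nxt.start - timedelta(minutes=minutes)
        # If there is no room to travel, the calendar should surface the
        # conflict now, instead of letting the human discover it in the moment.
        blocks.append(Event(
            title=f"Travel: {prev.location} to {nxt.location}",
            start=max(depart, prev.end),
            end=nxt.start,
            location="in transit",
        ))
    return blocks
```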

I also reserve time for tasks I do alone every day, like working out and writing. I do this primarily to ensure it gets done, rather than because those times are special (although I do get a bit jittery now if I don’t write first thing in the morning). There’s no way to explain to my calendar what I’ve actually blocked that time out for, and thus no way for it to respond to whether I’ve done it or not, even though my computer knows if I’ve done my writing, and my watch knows if I’ve worked out. Wouldn’t it be great to see your calendar dynamically rearranging your day because it noticed you missed your workout?

My calendar is confused about what work I’ve hired it to do, and therefore does not know it needs to look in those places.

We’re so used to the idea that our software represents the work that we seem to have lost hope that it will actually help us do it. Most of the tools we use are entirely disconnected from the work they’re supposed to help us with. Marking something done does not do it, deleting email does not indicate communication has happened, sitting at your computer while your calendar says you’re writing does not produce text. The representations are not the work, yet we forgive our tools for only dealing in representations, not actual work.

I don’t know if that reviewer was right about why Zelda: BoTW is so great. I can’t even imagine what all the software I use would look like if it were built around where my work lived, rather than merely being used to model and manage it.

What I do know is that our software can and should be built to help us do the jobs we’ve hired it for. But because it is confused about why we use it, what we do every day is lower quality, less fun, and just downright confusing.

This also shows just how much opportunity there is to improve the software we use on a daily basis.

Founding Myths are Pernicious Propaganda

It’s only safe to learn from true stories

Every startup has its founding myth, the story that it uses to help draw in and motivate employees, customers, and investors. In most cases, those myths were cultured years after the company’s founding, and bear little relationship to reality. When you dig deeply into a company’s true origins, what initially looked like the product of far-sighted genius deflates into a mix of insight and smart decisions meshed with a series of serendipitous events.

This gap between myth and reality in no way diminishes the achievement of these companies and their teams, but it collapses our ability to learn the true lessons from those who made it and those who did not. If we can shift our story-telling from creation myths to capturing the collisions actually necessary to germinate greatness, we can better recognize what it will take to support it next time. Even better, it will allow us to pull forth those organizations not lucky enough to get all the draws, enabling our ecosystem to address neglected founders attacking neglected markets.

As I was building Puppet, I assiduously sought founder stories, trying to understand what it really took to do what they did. Over time, this turned from a quest to build “Good to Great” for startups (which was silly anyway, given how stupid that book is) into a collection of more-true founding myths:

I find these funny, but the true goal is to puncture the story people want you to buy so you can understand what it really took. When you do that, you find that yes, people had to work hard, they had to be smart, they had to be creative. Those are necessary, but as testified by the thousands of failed companies full of hard-working, smart, creative people, they aren’t sufficient.

When you can find true founding stories, such as Phil Knight’s excellent Shoe Dog, you gain critical insight that might help you build your own business better, but you also realize how much of building a great company is expertly riding the tide of luck and opportunity.

At first blush, there’s no problem with this, other than the ridiculous levels of delusion necessary to turn serendipitous opportunism into genius founding myths that manage not to include the lucky bits.

On reflection, though, erasing the role chance plays causes systemic problems throughout the startup world.

We’ve already seen that human nature abhors a vacuum, refusing to accept that people don’t always deserve their circumstances. Luck is obviously not the only contributor to a startup’s success, because how they play the hand they’re dealt is at least as important as the hand itself. But let’s not lie to ourselves about the role of randomness in a startup’s circumstances.

When we do, it limits our ability to understand what it really takes to make a great company. This is similar to the deep structural flaws within “Good to Great”: If you just ask the great companies what they did and try to do the same, you sound smart but have only made your readers dumber. I recommend The Halo Effect for a more complete discussion on how this business-book analysis of success fails its readers. You can fatally pierce the concept by focusing on one kind of failure:

“The Delusion of the Wrong End of the Stick – successful companies may do various things but that does not mean that doing those things will make you successful”

As I was trying to build Puppet, I found this discrepancy between the stories and the realities immensely frustrating. I was uninterested in being valorized for something I felt lucky to be doing, but these myths were a screen obscuring information I knew was valuable. My goal in studying my predecessors was to improve my own chance of success, but it’s impossible to learn from propaganda. Instead of a community of founders learning from each other, with each iteration better than the last, you get cargo-cult companies that manage to copy every part of a successful business except the parts that made it work.

As just one example, Google’s own data now shows that its vaunted hiring practices did not actually help, but a whole generation of startups copied their worthless brain teasers and discriminatory Ivy League requirements. For years, Google’s dominance was partially explained by their great hiring, but what happens when it turns out they weren’t actually better at it than anyone else? Nothing to see here, please move along. Let’s just start another round, this time copying Netflix and Amazon. Forgive me for thinking that Amazon’s success has more to do with their taking Walmart’s extractive business strategy to the web than with their use of internal APIs.

What I found again and again when I peeled these myths back was far more genius in how people responded to their circumstances than in how they were created. Oracle is a perfect example: Larry Ellison was smart enough to realize that IBM had invented this great new database concept but weren’t trying to sell it, so he took advantage of that to build it himself. You can bet that’s not how Oracle tells its founding story, but only once you know it can you see that Ellison’s true strength was pushing hard and fast enough to ensure that Oracle was the first one to market with this free idea. I don’t look up to Oracle’s founder, but you can bet I learned from that story.

Google is another good one: One of the best CS schools in the world bashes together two random guys, who then go on to invent something and get lucky enough to fail to sell it. What you can learn from Google looks pretty different once you know that. Again, it does not diminish what they did after that failure, but it turns their stories of manifest destiny into a truer telling of wandering an icy landscape looking for safety.

When they say history is written by the victors, they mean both that only the stories of the winners end up getting recorded, and that the stories themselves get morphed, corrupted, to present those winners as just and deserving. That corruption is as pervasive in the world of startups as it is in the wider world.

There is tremendous value in understanding the real stories behind the great successes. Good luck getting companies and founders to tell them.

Apple Has a Focus Problem

But it’s not what you think

The iPad has made great strides in the last few years toward being a full-time computing platform. While many still cry that it’s just not very useful yet, others are out there using it as their only computer, or have at least shifted significant usage from desktops and laptops to it. I have been traveling at least a third of the time over the last few years, and the iPad is my only companion on the majority of those trips. It is by far my favorite computer, but it has one glaring defect.

Obviously, the heart of the iPad is its touch interface. The direct interaction is exactly what makes it such a joy to use for so much of what it can do. However, even Apple has finally acknowledged that a keyboard is a necessary companion for many iPad uses. While their keyboard has some issues (or rather, their keyboard connection technology does), the overall experience is quite nice. Except for one little detail.

Unlike touch, keyboards are inherently targeted. While touch is powerful specifically because of your ability to directly manipulate the software you’re using, keyboards must first be pointed at a place that needs text. They need focus. And here’s where the iPad falls down.

It has no concept of focus. Or rather, it obviously does, but its designers are in denial about it. Keyboard focus is littered throughout the platform, from the presence of a cursor when inputting text, to the software keyboard auto-hiding when no text field is in use. When you’re producing text, this generally works pretty well.

But the keyboard is used for far more than typing. Whether it’s command-tabbing between applications or using shortcuts within them, the keyboard is a critical control device. And it just does not work right on the iPad.

Honestly, it does not work that well on the Mac these days, either. Apple’s Mail application can’t seem to figure out how to pass focus between emails, especially when deleting them, and there are plenty of other situations where focus within a window — as opposed to between windows — can neither be perceived nor controlled.

But because of its orientation around touch, the iPad is much worse. The vast majority of the time that I want to, say, delete an email, the delete key just literally does nothing. I can even touch the email I want to delete to “ensure” it’s focused, and yet still the keyboard is useless. I have to resort to touch for interacting with applications the vast majority of the time. Many iPad-specific apps now advertise keyboard shortcuts, but they’re useless. I mean, sometimes they work. But that’s not good enough when you’re trying to get something done; we learn, eventually, to do what’s consistent, rather than what’s fastest when it works but often fails.

I have so many memories of using my keyboard to tell some app to do something, and just staring at the iPad until I realize nothing is going to happen. I tap around, hoping to tell it, “start here”, but it’s usually no use. My expectations collapse, and I feel crippled, reduced to the level of those who just click around their GUIs. It’s not wrong, but it can be so much better.

I follow every conversation I can find about the iPad as a productivity platform. I’m deeply interested in the thoughts of people who are bought into it, such as Federico Viticci, so I am cognizant of the general thread of how this is progressing. As so often happens to me, I seem to be the only person concerned about this problem. Here are all of these people talking and writing about the iPad as a general purpose computing platform, yet they don’t mention this problem of the inconsistency and invisibility of focus at all. Do they just not use keyboard shortcuts? That’s not possible, right?

And don’t tell me it’s my iPad. I’ve had literally every version ever made (I currently have both sizes of the iPad Pro), and every few iterations I do a clean install. I’ve tried every variety of keyboard, from Apple’s to Logitech’s to a $15 piece of junk I bought at Brookstone at the airport one time because I managed to leave mine at home. I’ve changed every variable. The only explanation seems to be that others just don’t care about this.

I’d also like to see people rallying for Apple to shift the default orientation of the iPad Pro to landscape, because the keyboard is so central to its use, along with an automatic disabling of the rotation lock when the keyboard is connected, but these are minor compared to the core utility of a keyboard.

To me, one of the central reasons to want a keyboard, one of the primary purposes of the keyboard on my computer, is to control the computer. At that, the iPad is currently failing miserably, and it deserves better.

I fear that this can only be fixed by introducing completely new design concepts to the iPad. Obviously the OS understands focus, but the design team has worked hard not to ever indicate that to you. That has meant application developers have not bothered to manage it effectively, and has trained users to rely on touch rather than the keyboard.

While Apple is a design powerhouse, they’re also into brutalist design, always opting for less, always preferring to forego a capability if they can’t make it perfect. This causes me to despair that they’ll make material strides on this any time soon.

I’d love to be proven wrong, but for Apple to start caring about this, its customers need to first.

What’s your story? Do you just not use keyboard shortcuts? Do they work great for you? Or they’re totally broken, but the other broken stuff is just so much more important you haven’t mentioned them yet?

My Computers Are All Broken

Software’s focus on world domination has crossed with my incessant tinkering to result in a computing environment that is failing me utterly.

I’ve tried everything, and it still doesn’t work.

My first computer was from basically the worst era Apple has ever produced, starting with Mac OS 7.6.1 and finally quitting around Mac OS 9, switching to BeOS. While I was using these for fun, I was learning Solaris (and AIX, and HP-UX, and FreeBSD, and Debian) for work. I know some are frustrated that the Mac has lost its spatial Finder, but I’m frustrated when I have to touch my mouse, because I spent years in a fine-tuned X-Windows interface that allowed me to do essentially everything from the keyboard. It was only marginally graphical; all of the windows except the browser were different kinds of text interfaces, from the terminal to vim to irc. I controlled everything with a Sun Type V keyboard, which relegated the caps lock and backslash keys to some far away corner like God intended, and gave prominence to control, escape, and delete, which matter quite a bit more. I honestly have no idea what mouse I used for the first ten or fifteen years; I don’t normally care about them, so I don’t normally notice them.

As I gradually switched from a sysadmin-turned-developer to a CEO, my computing needs changed dramatically. I spent all my life in email, calendar, and chat rooms, instead of text windows (notice they’re all still text, though). I did not even have a desktop computer for years, because a laptop was a better compromise.

As the iPad got more powerful, though, and Apple’s iMac computers turned from lickable toys into their most powerful computers, I moved most of my computing from laptops up to desktops or down to tablets. For a little while, all was fine, because I still wasn’t spending much time on the computer — as a CEO, most of my time was spent directly communicating with people, either in meetings or via email and chat. Being present means not being on your stupid devices. That was my computing experience.

Once I left my CEO role, I shrugged. I changed a few things (started writing in Ulysses, for example) but basically kept the tools and practices I had.

It’s become clear now that I have a solid decade of debt that’s built up in how I use my computers.

I really hate it.

Some of it is obviously just bugs. Or something. The keyboard (a KÜL tenkeyless with some kind of Cherry MX switches) and mouse (some large Logitech thing) just don’t work most of the time. The keyboard has to be unplugged and plugged back in 90% of the time the computer goes to sleep, and the mouse icon just does not seem to care about my needs, even after swapping mice, mousing platforms, hair shirts, and everything else I can think of (turns out that moving the mouse’s wireless adapter from a USB hub to the desktop might have fixed the mouse).

Beyond that, the software is out to get me.

Am I seriously the only person in the world who cares about keyboard focus?

I just deleted an email in Apple’s Mail.app on my desktop (running the latest OS and patches, but this problem has persisted for as long as I’ve used the app), and I literally have no idea where the focus went. I hit delete, the email goes away; I hit delete again, and literally nothing happens. An email is highlighted, but, oh, I see, it’s a gray highlight instead of a blue one. I click on it, now it’s blue, and suddenly delete works.

I have essentially the same problem on my iPad (which I work on at least as much as my desktop). My only feature request for the iPad is to make keyboard focus predictable and functional. You can be scrolling through emails with the arrow key, and suddenly it will just stop working. Delete an email, no idea where focus is. I don’t even know how to tell where the focus is.

I’m in this insane world where I can feel utterly defeated by the need to click on an email to delete it, but if I zoom out, just about every other part of my computing experience causes similar frustration. I’m apparently the only person in the world who has quality studio monitors on my desk, based on how much everyone is freaking out over the HomePod price. I have separate audio installations in 7 parts of my house, and the HomePod in my bathroom is the cheapest one, beating the thirty-year-old (purchased used, recently) NAD and M&K kit in my bedroom by about $100. Nothing can cause you to lose hope that your needs will be met like the entire internet agreeing your needs don’t exist.

I don’t consider myself an audiophile, but I know enough to know that decent speakers start arriving at around $300 (each, not a pair), rather than topping out there. I never bought into Sonos because that required my believing they could deliver a good speaker, amplifier, and software experience for the price of a single decent speaker. Oh, and I had to join their walled garden; I’m willing to consider that, but it’s got to be worth it, and it never was for them.

So now most rooms in my house have audio streamed to them from an unsupported device that is going to become obsolete any day now.

Getting back to computers, it just could not be more clear to me that I’m in the shitty middle ground of the computer world.

I’m not a specialist any more. When I was a sysadmin, I was a specialist and I could build my computing environment to match that. (Although good luck finding specialized sysadmin hardware these days.) When I was a developer, I was a specialist, and my computing showed it.

Now I do what everyone else does: I email, browse, handle my calendar, chat a bit. My writing is a touch specialized, but not really; I’m using a specialist tool that’s great for writing books, but I’m just producing blog posts instead.

Even though I’m not a specialist, I’m still weird. I know that I’m using all of my tools differently than most of you are. I know we all think we’re special, but I’ve been around enough to know I am. Not in a good way, just in a “why are you doing that?” way.

Take my calendar. I’ve now twice written tools to tell me whether I’m meeting my goals in terms of how I spend my time. Sure, you have some sort of tomato timer to remind you to stand up or something. Amateur. My calendar should be a statement of my priorities, of how I will and do spend my time, and I want to hold myself accountable to my goals. I appear to be the only person who wants this, based on the searching I’ve done. Thankfully Google Calendar has APIs available, but why should I have to write this?
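
The core of such a tool is simple enough to sketch. In the version below, fetch_events() stands in for the Google Calendar API call, and the one-category-per-event convention is entirely my own:

```python
# A sketch of the time-accounting tool described above. fetch_events() is a
# stub for the Google Calendar API; the categories and goals are examples.
from collections import defaultdict
from datetime import datetime


def fetch_events(start: datetime, end: datetime) -> list[dict]:
    """Stub: return events as dicts with 'start', 'end', and 'category'."""
    raise NotImplementedError


def hours_by_category(start: datetime, end: datetime) -> dict[str, float]:
    """Sum up how the calendar says the time was actually allocated."""
    totals: dict[str, float] = defaultdict(float)
    for event in fetch_events(start, end):
        duration = (event["end"] - event["start"]).total_seconds() / 3600
        totals[event["category"]] += duration
    return dict(totals)


WEEKLY_GOALS = {"writing": 10.0, "recruiting": 5.0, "customers": 8.0}


def report(start: datetime, end: datetime) -> None:
    """Compare actual hours against stated goals, category by category."""
    actual = hours_by_category(start, end)
    for category, goal in WEEKLY_GOALS.items():
        print(f"{category}: {actual.get(category, 0.0):.1f}h of {goal:.0f}h")
```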

I have tried every email client I can find, but they’re all written for someone else. They all seem to offer, “How can I help you do email without email?” I don’t want that. I want the vim of email clients. I want the Photoshop of email clients. Can you imagine telling a graphic designer that you want to help them make graphics without making graphics? It’s stupid. They’re designers. They design graphics. I’m a communicator. I communicate. I read and write. A lot. Make a client that’s better for that. But nope. Instead you have the modern equivalent of a lickable interface that still can’t do inline reply and only has 5 keyboard shortcuts. BZZZT! DELETED!

Some of it is just stupid. I have LED strips mounted to the back of my monitor, so I can get ambient lighting while I work. It’s actually an awesome feature, and I totally recommend it, but it’s a bit of a mess of wires with a ridiculous interface (a switch that keeps falling off my desk). I get not everyone has their monitor against a wall, but this seems so great it should ship by default. Can I just get all the LEDs you want in my keyboard moved back there? And, of course, I want it all connected to the computer so I can control it via software. Instead I’m wiring it all together myself and hoping 12V can’t catch my desk on fire. Yay.

I wish it were just that I’m a curmudgeon, that I can’t give up the old ways, but the truth is I like my iPad more than any other computing device I’ve ever owned, and honestly, I always hated the old ways. I’ve been hating software for as long as I’ve used it, and I’m proud I’ve been able to keep my edge as my career and the world it’s in have changed. I hated X-Windows. I really, really hated MacOS. I loved BeOS, but honestly, I held it to a very low standard. I loved every part of Solaris except the parts that actually existed or that I ever used, and I quit using it just when it stopped sucking quite so much (god, remember having to use Veritas to get decent clustering? Even worse, remember DiskSuite?).

So it’s not that I miss the old days. I want to live in a different universe. I want a different physics model for our software world.

I know I can’t have it. I know I won’t get there.

But I’m an optimist. I’ll keep fighting.

Software At the Coal Face

Software should be for the people who do the work

I’m not a technologist. I’m a software person, but not in the way people mean when they say that. I believe in the power of software to improve lives in big and small ways, and I have a hard time imagining doing much except through software.

I’m less fond of the software industry itself. I think it is too slow at solving problems that matter to people, too myopic to see how much it could really do.

I ran a software automation company for twelve years, and when I explained it to executives or salespeople, they would respond, “Oh, so you can fire sysadmins!” (Draw your own conclusions as to why these are the ones who consistently said this.)

In fact, our strategy was the exact opposite: I set out to turn sysadmins from tactical tools into strategic assets, to make the individuals more capable, and thus more valuable and harder to fire.

It was always easy to show my strategy was the right one. Let’s say I can produce software that can either reduce cost (fire some part of your team) or improve service quality (help that team do better work). Which do you pick?

Nearly every time, people pick higher service quality. “Wait, that’s an option?”, they would say. (The rare exceptions to this response usually came from organizations I did not want as customers.)

Software sucks. It really, really sucks. It’s so bad that when people look at their software investments, and especially the infrastructure layer my previous company lives in, they really only know how to talk about cost. Sure, they know the software is critical, and they can say how many nines they have in uptime, or what their NPS score is, but they can’t measure quality the way, say, a car maker or theme park would. Turns out, though, it’s easy as all get out to talk about cost. So people naturally gravitate from the stuff that matters to the stuff they can measure. It always reminds me of an old joke:

A policeman sees a drunk searching for something under a streetlight and asks what he has lost. The drunk says he lost his keys, and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies no, he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “This is where the light is.”

It might not matter, but it sure is easy.

You could argue that what Puppet did was weird, was special; that I found a seam of critical work that was undervalued, incorrectly measured, and ripe for both automation and an increase in human capital.

I don’t think sysadmins are that special.

There’s been this wicked, stupid, and brutal trend over the last few decades to devalue people. I don’t just mean replacing people with automation; I mean the practice of reducing the value of individuals in service to financial strategy. It might be my age, but I assume this all started in the 1980s, when corporations were rewarded for having fewer costs on the books (which also led to idiocy like building an office tower, selling it, then leasing it back; some realtor makes bank, and your stock goes up because your balance sheet looks better according to the skewed spreadsheets of some hedge fund manager). Many companies claim their people are their greatest asset, but their behavior says otherwise.

This becomes more true as companies get bigger. Most large corporations have spent decades finding ways to rely less on their people, and then use the results to demonstrate the people deserved it. Under-train them, then over-supervise them because they don’t have the skills. Don’t give them any power, then consider them easily replaceable because they can only do small things on their own. Refuse to invest in better tooling, then complain about their productivity.

I have a different perspective. You might think I’m antiquated, or silly, or just naive. Meh.

My perspective is that people have always mattered, and that they always will.

My perspective is that there will always, forever, be money to be made by making people more valuable.

Further, just as a nice bonus, I believe that the things that make people more valuable are generally congruent with the things that make them happier, which means that success for me helps the company and the individuals both financially and personally.

Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats. — Howard H. Aiken

Given that this perspective is what has led me to what success I’ve seen in life, and that I’ve done enough work by now to realize it’s also the only thing I care about enough to put another decade into, you might think it worth hiding this little gem, this little kernel of wisdom that others can’t seem to spot. I’m not worried. At the least, it goes against decades of corporate training: “People don’t matter, you should replace them, expect them to fail, they’re just cogs”. That kind of mental conditioning is hard to resist, especially when it’s imprinted on our financial system itself.

Beyond that, it just sounds too nice for others to see its value. Modern finance inherently distrusts anything that’s good for people, because they figure if you can do this thing and help people, you’ve got to be able to make more money by doing that thing and not helping people, right? Right?

No. In many cases, the thing that makes it valuable is exactly that it helps people. If anything, you could argue the entire software industry is about helping people spend more time on what makes them special, rather than helping people do something they could not do before.

Photoshop didn’t enable people to make drawings they could not have made before; if anything, in many cases it’s less functional than pens and pencils. It just enabled people to be far more efficient with the time they spent.

The difference between a great code editor and a bad one is not that you write better code with the great one; it’s that you can write good code that much faster, and thus get a lot more high-quality work done.

Often, the difference in speed is so great that the quality of work is actually dramatically better, but it’s a result of the user having far more time to spend on the problem, rather than the software itself directly improving the work.

It’s not a coincidence that boring, low-value work is easy to automate, or that people enjoy their jobs more when that work goes away. (Of course, this is assuming the job sticks around; in too many cases we use this as an excuse to fire someone rather than promote them.)

This was exactly Puppet’s goal.

Spend less time firefighting and doing menial work, and more time shipping great software.

I bet you can think of an industry or two whose people could stand to spend less time doing menial work, and more time on the things that matter.

I am not naïve enough to think that every role a human plays today could be improved by better productivity tools; some roles become obsolete over time (not a lot of chimney sweeps these days) because they really are just menial work, or they’re in an industry that cannot remain relevant. (Although, even in those cases I’d argue that both the people and the industries would be better off trying to find a way to work together on the transition, rather than the companies seeing the people as a sacrifice to the gods of productivity.) At the level of industries rather than individual roles, though, everything could be better through this kind of investment.

And in the end, I don’t have to believe every industry, every role is better off.

I just have to be able to find enough people, companies, and industries that are better off that I can keep doing great work.

And that’s what I plan on doing.

Strategy Is Culture

How you work determines your destiny

Peter Drucker famously said, “Culture eats strategy for breakfast”. This is often interpreted as culture being more important than strategy. These might not even be his words, which makes it hard to know his thoughts, but having built a company to almost 500 people, it’s obvious to me that culture isn’t more important than strategy, it is strategy.

I am confident he knew this. He worked with the Japanese manufacturing companies in developing their culture of kaizen, which helped them to dominate the automobile manufacturing world, and their practices have gone on to fundamentally change manufacturing as an industry.

Your strategy is not what you’re trying to do (those are your goals), but rather how you plan to do it, and your culture determines how you work. Pick any great, differentiated strategy, and you’ll see that how it translates into the company’s culture is what determines its efficacy. We tend to think of culture as being about how people feel, or how they treat each other, but that’s a bare shadow of its importance.

Southwest is an amazing airline. In an industry with a long history of losing money, they’ve managed to make money year after year. How? Their strategy is simple: Be the lowest-cost carrier. This isn’t something management says and the team then tries to interpret; it shows up in every decision made by everyone in the company. It’s baked into the culture. If it weren’t, costs would creep in at every level of the organization. They could not be the lowest-cost carrier without a culture of parsimony at every layer of the company. You will not be a culture fit at Southwest if you can’t be cheap.

Toyota is legendary for helping to drive a revolution in manufacturing, and their success is based on tight integration between company strategy and behavior at the front line. Many of their practices work against human nature, such as their “Five Whys” tactic, which seeks root causes for quality issues by refusing to blame individuals. Their culture is so unique that one of its standout features is how reliant they are on training and mentoring, from the front-line worker to top executives. Managers get promoted based on their ability to train their teams up, not just for driving results. Toyota understands its success is reliant on its management chain to connect company strategy to front-line execution, and it invests accordingly.

Atlassian has built an amazing business, not by being better at selling, but by ensuring the customer can sell themselves. The entire company maintains the cultural discipline of removing barriers to a sale, of asking why a customer needed help, hit a roadblock, or didn’t buy. This sounds easy, but it’s fundamentally hard, and more importantly it’s directly in conflict with the sales culture of modern software companies. You have to choose whether you care more about this deal or all deals, and Atlassian’s rare choice, and how that has permeated their culture, is a big part of why they’ve done so well. I expect Atlassian has lost countless great people who could not make that shift.

These companies demonstrate the inseparability of culture and strategy. Your strategy is how your company plans to win, and your culture is how people individually behave in alignment with it.

Some might argue that the above are the company missions, not their strategies, but they’re very different. Toyota’s mission is to build great cars, and a culture of kaizen is how they do it. Southwest’s mission is probably something like “Enable access to air travel to all”, and being low-cost is how they do that. Atlassian’s mission is clearly around enabling collaboration between teams, which they support by making their products easy to acquire.

If you’ve got competition, then your strategy needs to specify how, exactly, you’ll beat those competitors. You and your competition can have the exact same mission, but by definition your strategy needs to be different from theirs — you both have the same goal of taking the market, but you necessarily will be approaching it differently. If you don’t have competition, your strategy needs to clarify how you’ll win in a market where no one has thrived before. Neither of these is a simple question of setting goals; it’s about clearly stating how you’ll accomplish them.

Too often, strategies are placed on a pedestal, gesticulated at from afar and never allowed to get dirty, when a strategy’s value is entirely determined by how it hits the ground. Napoleon was one of history’s best military strategists, and he knew the importance of how strategy translated to individual soldiers. When he gave orders, he demanded they be repeated back word for word, and would do it again until they were right. He knew a strategy’s value was determined where it met the enemy, not when it was laid out in the command tent.

Contrary to what you might hear, I don’t think I’ve ever met a founder who didn’t have a strategy, and usually a pretty good one. In contrast, I have yet to meet an early-stage founder who can explain how that strategy will translate on the ground, or who invests hours every week in ensuring that his or her team operates according to it. Even basic goals like better usability are hand-wavy at best, relying on constant input from founders rather than permeating the organization’s culture.

Simon Sinek has helped lead the charge on the value of ‘why’ as a force for motivation and alignment. Your strategy for how you plan to win is just as important, and it’s much harder to align a team around because it necessarily involves daily discipline. It’s not enough to have a great strategy or a great culture; they have to be one and the same.

Your culture is how your strategy is executed, and you’ve got to put in the time to make it work.