Hold on to your why

Founders need to retain their own mission while they build out the company’s.


Managing a high-growth company is the hardest thing I’ve ever done. One big reason is that I only received problems that no one else could figure out. Some were organizational problems that should naturally route to the CEO, but a lot were functional issues that I was no more capable of solving than anyone else.

I eventually discerned a repeated pattern in solving these problems. At first I would just get a few issues. I’d muddle through — do a bit of research, ask for help, and sort things out. As we grew, more and more of my time would be spent on this one kind of problem. I’d become better and better at handling it, and just about the time I’d start feeling like I knew what I was doing, I’d realize, “Oh: There are people out there who specialize in this”. I could just hire someone to do it full time, and they’d be better at it than I ever would. Duh.

I’d then spend three months, or six, or twelve, hiring for the role, and bam, suddenly my time is freed up and I’ve got an actual expert in charge. Well, kind of. At this point I’m a self-taught semi-expert who does not buy into the orthodoxy of the role, and we’ve got a year of my weird solutions, so there’s a lot of friction as we sort out just how to add this new skill set to a growing org. But the point is, my time spent on this problem drops precipitously, and I no longer have much opportunity to put my new-found skills into practice.

Usually just in time for something new to come into focus.

This pattern — gain just enough expertise to hire someone — played out again and again, for me and for other founders I’ve talked to.

In some ways it’s thrilling. You get experience with all of the key areas at the company, and you’re always learning something new.

In other ways, though, it is soul-crushing. Over the eight years I managed Puppet while in fast hiring mode, I rarely got to spend time doing anything I was good at. Humans have a psychological need to feel competent, to feel like they are in control and know what’s going on. I don’t need this all the time, but please, just a little? Sometimes? Nope. Pretty much the second I started to feel like I understood something, I had to hire for it, and my problem changed from doing to managing.

After years of this, I knew just enough about everything to suck at it, but not enough to actually be useful to anyone.

Only as my tenure as CEO came to a close did I begin to see what I uniquely added to the organization. I began being comfortable not delegating certain problems, and felt justified in spending hours on something as an individual contributor, rather than seeking leverage in everything I did.

Only once this happened did I start to feel comfortable as a CEO. I wasn’t just routing problems, I was actually solving some of them. I was not spending 100% of my time in areas where I was incompetent; just most of it.

I know the advice as well as you: Great leaders delegate, they empower. If you’re doing the work yourself, you’re not a real leader.

Bullshit.

Yes, building and running a team absolutely requires that you empower the team. But that doesn’t mean you don’t get to do anything yourself, that you hand everything off and have nothing left.

Just like everyone else, you, too, need a reason to show up, to stay engaged. You have to hold on to your own why.

If you don’t remember why you, personally, are in the job, then you’ll look up in a few years and realize it’s not there any more. You’ve moved too far from what gets you up in the morning, and suddenly you can’t do it. Or worse, the company has developed but you haven’t. You’re no better at the thing you want to master than you were when you started, because you haven’t been spending time on the problems you care most about.

Some of this is that you need a place of safety. I am a highly fireable person, and raising venture capital made for downright tenuous tenure. The less confident I was about my own strengths, my own value, the less safe I felt. And humans need to feel safe to do great work.

More than that, though, I needed a platform for learning. I was pursuing mastery, but of what, exactly? Of not mastering things?

I know other leaders really are master delegators, hirers, organizers, etc. But that was never going to be me.

I had to peel things back, really understand why I was there, what I cared about, what I wanted to be the best in the world at. And, really, what I was good enough at that I ended up in this place, running this company. Then, as the problems rolled by, I could be sure to push that forward just a little bit, even if my focus was on the organization’s needs, not my own.

The times I lost this sense of why I was there and what I was getting better at were some of my most depressing days. But the days where I could connect what I felt good at, what I spent my time on, and what the company needed from me were the best days.

I don’t think that’s any different for me, or for other founders, than it is for anyone else.

But all the discussions of leadership I hear leave this bit out: You’re a human, too. You have to provide the why for the whole organization, but every individual deserves to be able to translate that into what they do every day. Even you.

The Morality of a Good Tool

Tools just get the job done, but products demand something in return


I love tools, the effortless balance of a well-known hammer in my hands, knowing exactly how to hold it and what it will do. Starting out clumsy is never fun, especially with the tools that crush fingers or spill, but I adore that feeling of developing expertise. It’s hard to conceive of a tool without also thinking of the experts who use it. I secretly wonder if I deserve most of the tools I have.

“That’s a mighty fine hammer you have there, but unless you can show me the calluses from using it, we’re going to have to confiscate it.” Home Depot would make a lot less money if we had to prove we got good usage out of the fancy stuff we buy.

This mythical tester doesn’t have to stick to checking you out. The tools themselves prove it when they’re not just for show. Knives shrink with sharpening, work pants thin, machines drink oil.

This is a feature. Preciousness is the antithesis of a good tool. “That knife is too expensive to use every day.” Ugh. Not my tools. I’d rather break something on the first day than be afraid to use it in real life.

Tools should be scratched. Dented. Aged. Their calluses should match yours. You and your friends should huddle around your tools, bragging about whose has the better patina. Precious tools are just toys, decoration. They live on a shelf, or more often in the attic, not on your workbench, by your side. In software form, they are so well designed they don’t even function.

Tools only deserve the label if they help you work. Given that that’s the heart of what motivates me, it’s natural I want to build tools. I’ve been madly rushing toward a plan to do so, but was recently pulled up short by a simple question: What do you mean by tools?

I really hate the easy questions.

This was put to me by Jordan Hayles of the Radical Brand Lab. The bit above is one kind of answer, but as I thought about it, it became clear that it’s insufficient.

I’ve been saying I want to build power tools for people. Why not power products? That’s a motor boat of alliteration: ‘power products for people.’ Awesome, right? Right?

Ok, maybe not.

Part of my choice of phrase is that I grew up building houses, and ‘power tools’ just meant the things you plugged in. You know? Because they needed power? It’s a common usage, maybe my word choice here did not mean much.

Except… I’ve spent more than a decade learning product management, describing myself as a product-oriented founder, managing that function in a growing company, and attempting to teach it to other founders. Just a few more years and I might have some clue what I’m doing. Yet here I am ignoring both the term and the field entirely. Why am I so quickly dumping my work of the last ten years? Is it just creative branding, getting that blue-collar shine? Mere cynicism about the software industry?

Product management as we know it began in the consumer goods industry. You’re handed a train car full of dish soap and told to sell it. I mean, not all at once. You’ve got to package it, price it, convince a local store to carry it, argue with them about location, move it away from competitors, all that. Every product you see in your local grocery store is loved by a product manager who fights for its shelf space, believes it is beautiful, and wants you to give it a good home.

Tide soap is one of the most commonly stolen items, but not because it’s soap. The strong brand makes it easy to sell, even allowing it to be used as a stand-in for money in drug deals. It shows just how far I have to go in product management. Unfortunately, it says nothing about the soap.

Inkjet printers are an example of this gone wrong. Laser printers, their predecessors, had toner cartridges you could refill. Not very clean, but cheap and reliable. The printers themselves were so expensive that this worked out well for everyone. Inkjet printers are instead fantastically cheap, and most people who buy them rarely use them, so they demand a very low price.

Printer manufacturers have found a way to make up for the money they have to give up in the initial purchase: Disposable cartridges. Initially these were just a consumable, an extra revenue stream, but over time, companies started putting more rules in place to prop up the cartridge prices, and to ensure all that money went to them, not third parties: You had to buy them from the manufacturer, they had to be replaced every year, you could not refill them.

This hurts the user in the name of making money for the vendor. People are unhappy enough about it that the US Supreme Court had to weigh in.

That’s good product management. Well, evil, but you know what I mean.

Reasonable people might disagree on the wisdom of this approach, but it begins to reveal a distinction between the simplicity of “tool” versus a more complete “product”.

When I think of a tool, it is uncomplicated. When I use a hammer, it just has to fit my hand and smash stuff. When I pick up my drill, it works with every bit I own, regardless of where or when I bought it. The battery and charger are proprietary, but the vendor’s most visible role in my life is color choice. My yellow drill works just fine with bits from the blue or green people. (Just mentioning the colors probably caused you to visualize these companies. Branding works, even for tools.) It does not matter whether I bought the drill from Home Depot or inherited it from my dad; once in my hands, it just works.

A product, however, exposes you to its business model. There’s no difference between the dish soap sold at retail and the one sold in bulk, yet they’re separate products, differentiated through packaging, shipping needs, and labeling. Those differences obviously impact price, and how you use the soap.

Tools now become a kind of counterpoint, a better offer:

It helps you do your job, and makes no demands of you in return.

I know how DeWalt and Makita make money, but I don’t think about it when I’m using their tools. I can comfortably recite that my canonical hammer is the Estwing 22oz waffle head with a straight claw1, but none of those details mean I need the vendor’s permission to hit a nail with it. I make a decision about the right tool, I buy it, I use it. End of story.

It is small. If you call something a tool, not a product, you’re saying it’s less, it’s not as complete a solution. “It’s just a tool.” You can see this as belittling, insulting, but it does not have to be. It’s also a statement of choice. Of freedom.

Products have an implicit, ongoing dependence on their vendor. As that vendor, this works well for me: I want you to pay me all the time, not just once. That ongoing relationship is how I afford to keep improving what I’ve built for you. But it’s not always a healthy relationship. These interactions often shift from helping you to sustaining a business model, as they did with inkjet printers.

When I say I like tools, I’m rejecting all of those interactions. I want something self-contained. Independent. Usage is a pragmatic decision, not a lifetime commitment.

That independence has downsides for founders. You don’t get any of those delicious growth-hacker buzzwords. Your product isn’t “sticky”, there’s no “moat.” Those are examples of my customers being forced to experience my business model, and their absence means my business is harder to build, to protect.

One might argue in return I’m better off because I treat my customers with more respect, and I’d probably agree. I think this is often the right answer, but it’s not a popular one. I might be accused of not “wanting to build a real company,” or I might be insulted in the most dire way possible: “That’s just a lifestyle business”.

Tell that to Adobe. And Autodesk. These companies are built on their tools. They are the behemoths we know today because they knuckled down and solved their customers’ problems. It was a different time, but people have not changed.

I don’t think that every product is compromised when the customer experiences the business model, but I think most are. Some of it is laziness, knowing you don’t need to finish because your product will cover for you. But a lot of it is strategy, recognizing the value of a product over a tool.

I want to build tools.

  1. We told with great pleasure the (most likely apocryphal) story that this hammer was illegal in Florida because the metal haft could cut your thumb off.

The power of better tools

There is a solution to wage and productivity stagnation. Just don’t call it automation


I don’t know what the rest of the world thinks when they use the phrase ‘power tool’, but for me it’s visceral, literal. My experiences using them and watching them transform my family’s work permeated my time building Puppet. These power tools aren’t little plugins to expensive frameworks, they’re large capital investments that dramatically change your job.

I grew up building houses with my dad. The worst task he gave me was trying to paint a set of louvre doors for a closet while in high school; I had to flip the doors over every 90 seconds to catch drips getting through the slats. After three days of misery, my father relented and rented a paint sprayer, with which we finished the job the same day, at a much higher quality.

Around the same time, my dad would rent a pneumatic nailer for big framing jobs. By the time I finished college a few years later, that critical tool went from borrowed to owned and traveled everywhere with him. Initially used only for large jobs, most contractors now have multiple nail guns to cover framing, trim, and every other use case, and the air compressor needed to power it is as important as electricity.

It might not be obvious, but both of these are examples of automation. You replaced a very manual process — applying paint, or nailing things together — with a machine. If this were a factory, these days you’d call those machines robots, but because it’s a construction site, we just call them tools.

And these tools were expensive. Even with how much faster we finished that painting job, I expect it cost more to rent the sprayer than to finish the work manually, because of how little he was paying me. (This does ignore the soft costs of listening to me complain, which were likely high.) Even today paint sprayers and nail guns are often rented rather than purchased, because good ones cost a lot of money and aren’t needed all the time.

It’s no surprise that discussions of tools and productivity are easier to understand from my experience as a carpenter than as a sysadmin. There’s plenty of room for arguments about what is or is not a software power tool, but when it costs more than a week’s wages, it trails a bright orange cord everywhere it goes, and it can nail your hand to the wall while you’re standing at the top of a ladder1? It’s a power tool.

There’s a common story about what robots and automation do to people like my dad (and both of my brothers, who followed in his footsteps): It steals their jobs and ruins their lives.

What utter poppycock.

If you think of your job as driving metal spikes into wood, then a nail gun is a mortal threat. But if this is your value add, your biggest danger was never automation. My dad never sold his ability to join raw materials together quickly; he sold homes, he sold the opportunity to enjoy your house and family more. How did these new power tools affect that?

They were awesome. Painting and nailing are classic examples of menial, low-value work, and yet we spent most of our time on them. All of the differentiation we offered to our customers was packed into a narrow slice of work, because implementation took so much time and money. As we were able to bring more powerful tools to bear, the menial work shrank and larger portions of our time could be spent on design work, customer interaction, and tuning our customers’ homes.

Interestingly, my father’s next career step was even more pointedly about experiences enabled by tooling. He took a job with a state hospital in Tennessee, fabricating custom furniture for severely disabled patients. Suddenly he was using industrial sewing machines for upholstery, and partnering with medical professionals to design multiple beds for each patient, enabling them to be happier and more comfortable (and also avoid bed sores, thus saving hundreds of thousands of dollars per patient). Given the tragically minimal budget allocation for this kind of work, every dollar saved through automation and tooling directly delivered health and happiness to his patients.

It’s no wonder I see the value in power tools, that I am more conscious of the benefit they can deliver than the loss of low-value menial work.

I had a similar experience as I was building Puppet. I would meet executives and salespeople (I don’t know why it was always them) who would say, “Oh, automation? Great, you can fire sysadmins!” No. Beyond the obvious reality that I was selling directly to my users, who would never buy on the promise to fire their coworkers, that was just not why we were valuable.

Puppet gave people a choice: lower your costs while keeping the current service quality, or keep your costs flat while providing a much better service. “Wait, making things better is an option? I didn’t know that!” Most companies were aware that their IT sucked, but they only knew how to measure and manage cost, so that’s what they did. Once you believed in the power to make things better, power tools turned out to be great investments for both the user and the buyer.

By letting people spend more time on the parts of their work they enjoyed, the work that makes them special, we also delivered higher quality experiences for their customers and constituents. “Spend less time firefighting and doing menial work, and more time shipping great software.” If the heart of your skillset is clicking buttons or responding to outages, Puppet might have been a threat to you, but our users knew where their real value was. We helped them spend more time there and less time on the boring, low value stuff. The sysadmins hated the work, the customers hated to need it, and the executives hated paying for it. Great, done, don’t worry about it.

When you look around the software market, though, power tools are out of style. There are big data companies building for the non-existent average user, minimalist companies building solutions that do little for almost everyone, and there are power tool companies of yesteryear still hanging around. There just aren’t that many modern software companies building large, clunky, expensive tools that just might cut your hand off if you’re not careful.

That’s partially why productivity has stagnated2. The world has not changed that much — some of the greatest improvements to productivity come from making large capital investments in tooling for your workers — but how we spend our money has. People balk at a $5k computer, when the Mac IIci would cost more than $13k in today’s dollars just for the hardware, yet was a powerhouse in desktop publishing. This is to say nothing of how the mobile app stores have driven down what people are willing to spend on software.

Yes, Adobe’s software is expensive, but it commands that price because it delivers so much value. If it didn’t, no one would buy it. Every large market should be so lucky as to have the collection of power tools that graphic designers get. It sounds crazy, but we’re suffering from not enough expensive software. Instead of building the most powerful software possible and finding customers who see its value, companies are building the simplest thing they can and trying to get everyone to use it.

There are bright spots in the industry, like Airtable and Superhuman. I’m hoping they help to shift momentum back to automating away the tedious work and enabling focus on what humans excel at.

More powerful tools improve your life, but they also make you happier even if you can’t buy them. They tantalize you, promising you great returns, if only you can come up with the cash. And they’re maybe just a little bit scary, warning you that buying them is not enough. You must master them.

  1. A friend of ours managed to do this when working alone in the time before cell phones.
  2. Yes, I might be being simplistic to make a point.

The Market Is Wrong About Your Problems

What Voltaire and the Flaw at the Heart of Economics Have to Teach Us About Software That Doesn’t Exist

Voltaire’s Candide juxtaposes an optimistic philosophy with unbelievable tragedy. He was angry at the 18th-century philosophers who proclaimed that we lived in the best of all possible worlds while destruction and death unfolded across Europe on an epic scale.

We might hear the claim that we live in the best of all possible worlds and scoff. Of course, we’re too enlightened to be such naïve optimists. But are we? Isn’t the belief tempting? Or even, doesn’t the behavior of those around you make more sense if you realize they believe this, at least a little bit?

Economists are theoretically rational, analytical, big picture thinkers, but at the root of modern economics is a belief shockingly close to Candide’s parody of optimism. They have what they call “The Efficient Market Hypothesis” (EMH), which roughly states that all assets are valued fairly. This is built off the idea that asset values in an open market are fair because they include all available information, and all the actors in that market are behaving rationally in regard to both the asset and the available information.

This theory tends not to trigger the cynicism that Voltaire does. Intuitively, it sounds not just right, but defined as so. Isn’t an open market essentially a mechanism for finding the fair value of an asset? It’s not so simple. And when it goes wrong, it does so spectacularly.

Modern economists cannot be as destructive as the great thinkers of the 18th century, whose big ideas justified eugenics and many other horrors. But just because their theories cannot as easily be used to justify mass murder does not mean economists should not be held accountable for the downsides of their obviously incorrect theory.

“No”, I hear you say, “the EMH is not wrong; it’s correct by definition.”

Economists have convinced us of what Voltaire was protecting us from: We live in the best of all possible markets, where all information is public and all assets are fairly valued. If the market does not value something, then it must actually be worthless.

But of course, if that were true, Warren Buffett would not have become a billionaire buying stocks that were worth more than the market was paying, the finance industry could not have been built on advising clients about public stocks, and you’d have no need for lemon laws or other regulations that fight information asymmetries. Nor would Kahneman have won the Nobel Prize for the work he did with Tversky demonstrating that actors in an economic system behave anything but rationally, puncturing the EMH for good. Thankfully, this has forced the field to begin to grapple with its flawed underpinnings, but many modern beliefs are implicitly built around these bankrupt theories.

You might be patting yourself on the back right now for not being silly enough to draw Voltaire’s ire, but it’s baked into the value system of the world around you, especially if you live in the US.

  • The market moves from irrationally ignoring new technologies like the blockchain to irrationally dumping money on them, without any fundamental change to justify the shift
  • Investments are made based on proximity and serendipity rather than rationality and opportunity size
  • We tend to claim that the rich earned their status through hard work, rather than recognizing the role of privilege, inheritance, and luck in their status

Of course, not everyone in the market operates with such optimism, but each of us is biased in this direction. It affects our thinking whether we want it to or not.

“Ok”, you say, “even if I accept some people make optimistic investment decisions, what does that have to do with software?”

Great question. If we live in the best of all possible markets, where all information is public and all assets are fairly valued, then we can trust the market’s assessment of what software should and should not exist. Lack of software to solve a problem is a sign that it’s not worth solving.

If, on the other hand, our world could be better, or if our market is imperfect at valuing assets, then we can’t trust intuitive conclusions about where value resides. This is most true when it comes to valuing unsolved problems. It might be that a given problem has no solutions because it is not worth solving, but mundane reasons are more likely to be at fault.

Most great companies exist because they provided something the market did not know it wanted. Their founders encountered a flaw, and managed to build something great in the opportunity created by it. Henry Ford claimed that if he’d given people what they wanted, it would have been a faster horse. The market knew how to value horses, but not cars. Before Apple, the market did not value personal computers. Before Google, the market valued directories but not search engines. Before the iPhone, the market valued expensive phones for professional use but not personal use.

These value statements were market failures, and their resolution generated billions of dollars for the companies resolving them. Now, of course, the market sees great value in what these founders have created, but not because the market is so smart; it’s because it can no longer fool itself.

It’s easy to grow despondent in the face of such obvious market failures. If the wisdom of the crowds, the great invisible hand of the market, can be so wrong, what hope does a lonely entrepreneur have? I take a different view.

I luxuriate in these misses.

They surround us. We bathe in them. Yes, many great companies have grown into critical market gaps, but even with all these successes, there are untold problems whose solution should be valued but is not.

Only once you reject the market’s flawed opinions about what matters do you begin to see nearly limitless opportunity. There are so many more unmet needs than there are perfect solutions. These are your opportunities.

Of course, just because the market dismisses a space doesn’t mean there’s a great opportunity there. It’s your job to know the problem, your customer, your user, your buyer well enough to draw your own conclusions, to develop enough certainty that you don’t need someone else to tell you what to believe.

Because that’s the real point: Trust yourself, not a bunch of paternalistic optimists.

Why We Hate Working for Big Companies

Modern capitalism raises the flag of the free market while pitting centrally planned organizations against each other

It’s quite a journey from being born on a commune to raising more than $87m in funding at a software company. This journey forced me to wrestle with existential questions about my true beliefs, and how they intersected my life as an entrepreneur. One’s work is rarely a pure reflection of ideology, but companies need a clear and authentic strategy, which requires a tight alignment between company operations and the founder’s philosophy. I have discovered more about the differences between what I believe and the best ways to grow a corporation while studying economics — that is, how money is made and exchanged — than in any other area.

A worldwide conflict between communism and capitalism defined the latter half of the twentieth century. The United States’ ideological battle was the central drama of my childhood, and it was with a combination of glee, pride, and “told you so!” that my fellow Americans watched the wall fall in Berlin, and the USSR dissolve shortly thereafter. I expect few would deny that the US is the standard bearer for capitalism.

Yet, there’s a flaw at the heart of this claim. While the United States operates as a free market economy, the key agent within modern capitalism — the corporation — works more like an authoritarian state. Given how much of our world is built around corporations, this truth and its impacts are critical.

I grew up apart from America’s passion for capitalism. In the era of Reagan, I was living on a commune. My parents did not earn money for their labor, and we didn’t have personal property. My family left the Farm when I was 8, and as I matured, my ideological roots were in conflict with the US’s nonstop pro-capitalism message. As I joined the workforce and eventually started my own company, I found myself attached to neither the communal roots of my childhood nor the Wolf of Wall Street world I moved into. My convictions grew slowly, as I encountered problems in the course of scaling a company.

The first real conflict came when it was time to hire managers. I founded a company primarily because I did not thrive as someone else’s employee, so what led me to think others would? More importantly, anyone who has ever operated at the front line is aware of the severe costs imposed by the separation between the people who do the work and the people who make the decisions in hierarchies. Hiring managers was just going to make the company do worse, not better, right? Right?

I expect three of you are gleefully shouting, “Yay, holacracy!” right now, while the rest are confused and either offended or think I’m an idiot. I did consider a manager-less world, but a little research provided only examples of disaster, because the only available options just replace an explicit power structure with an implicit one. In other words, it’s still hierarchical with the founder on top, but now decision making is opaque and the system is easy to exploit because of the lack of controls (which looks surprisingly like the cult/commune I grew up in).

Those who are confused or offended by the idea that managers make performance worse would be informed by a deep dive into economics. One of the core principles of the free market is that central planning committees can never be as efficient or as effective as the people doing the work. By definition a free market economy lacks a decision-making hierarchy; the ‘free’ means every agent (individual or corporation) can decide for themselves, without needing permission from a manager above.

While there are many aspects of modern American capitalism I reject, this one I wholeheartedly support1. The downsides of a strong central executive were taught to me early.

Like many other communes, the one I grew up on routinely failed to feed its people — my parents speak with horror of the ‘wheat berry winter’, when we lived on little else. While his people were short on food, the founder of the Farm was off touring Europe as the 3rd drummer in a band, “bringing our message to the world”.

Thankfully none of us starved to death, but the failing was similar to what most communist countries experienced: The central organization could not feed everyone. For years, I assumed this was just incompetence, whether at the scale of the Farm or China. The truth was far more structural. Millions starved during the Great Leap Forward because the central organization was trying something impossible: Managing the productive output of an entire country. The Planet Money podcast tells a great story of how this central planning was walked back in China, but the general point here is that these communist countries did not just nationalize the means of production, they tried to centrally control all of it from within a small group.2

When people talk about communist countries not being a free market, this is what they mean: They tell the farms what crops to produce and in what quantity, rather than letting them decide for themselves. China even went so far as to dictate what hours a farmer should start and stop working, and then directed managers to ring a bell for transition times to control every little group of farmers. Anyone who’s ever had to punch a clock into a rigid, dysfunctional hierarchy is likely getting painful flashbacks about now.

It should be immediately obvious why this fails miserably: The distance between the central planning committee and the farmer is so great that good decisions are nearly impossible. Critical feedback rarely makes it from the edge, where the farmers are working, to the central planning committee in time to affect decisions, and those decisions rarely make it back to the edge in time to be useful. The podcast mentioned above also points out how unmotivated the farmers were under this regime, cutting productivity even further. Those who have studied lean manufacturing, agile development, and DevOps are likely seeing parallels here.

The result was catastrophe. When a corporation is painfully inefficient it loses money and might have to do layoffs, but when a country fails at growing food, its people starve to death. I don’t mean to imply that central planning was the only cause of famine under communist rule — there were political operations that led to mass starvation, just like in the West — but learning more about these helped crystallize what I do truly prefer about capitalist models. It also converted the phrase ‘the free market’ from a catchy slogan into something meaningful to me.3

The most important feature of free market economies is that each person within them is able to make independent decisions in their own best interests4. If you’re a farmer, you can decide what to grow, how much to grow, and when to work to develop your crop. Heck, you can even choose not to be a farmer any more. Success is merely dependent on your finding a buyer for your work at a price you can tolerate. Any given year might not be perfect, but your decision making gets better over time as you learn to respond to customer demand.

This pattern is easy to understand in any system where the people doing the work make the decisions. If you’re a jeweler, you can decide what to make, how much to sell it for, and what to spend your time on. Same if you run a small restaurant, lead local tours, or are a one-person shop doing house remodeling. It’s a free market, where you can charge what the market will bear, and you can quickly and efficiently respond to its whims, ensuring that you are getting the best use of your time.

This was a powerful organizing principle for a long time. The history of human commerce developed largely this way: One person, or as many people as could fit in one shop, would turn labor into a product, then find a buyer for it. Most large-scale efforts were organized by the state of the time: Monarchs and the landed gentry, who were the only ones capable of marshaling enough resources to build palaces, roads, and other large construction projects.

This began to change in the 17th century when corporations like the Dutch East India Company were able to deliver massive windfalls to investors by pooling money and using it to extract resources from colonies. There was a step change in the 19th century, as corporations went from generating wealth to building and owning infrastructure. It’s one thing to outfit a single ship for a year-long voyage, yet another to maintain railroad schedules across the United Kingdom, or run a telegraph network around the whole US. These aren’t just short-term money-making exercises, they’re long-term commitments with big capital outlays and large returns over years and years.

We still live in a free market economy, but it’s not one Adam Smith would recognize. Instead of individual or small operators, ours is composed almost entirely of corporations. Really big corporations. And these companies, they use the same kind of central planning that we so despise in communist systems. I know. I’ve done it.

By the time my company got near 500 people, we had a multi-week planning process, where the leadership (i.e., me and my lieutenants) set out top-level goals, built a top-down plan to accomplish them, then drew information from the front line to see where it needed change. We called this a bottom-up plan, but it was only bottom-up from the perspective of numbers — how much money we’d have, what our costs were, etc. — rather than from the bottom of the organization. We could see no way to have a system where the people doing the work built a plan for the organization. Even thinking about it now, my reaction is, “How would they know what my goals are?”

That’s the kind of question you can only ask in an authoritarian state, not in a free market economy. My goals became my company’s goals, and the only real way to ensure people worked toward them was providing a plan. You might argue that a corporation should focus on shareholder value, but that doesn’t help make decisions about what the company should actually do.

Great leaders find a way to listen to everyone in the company, but in the end, leadership is about making decisions. That’s essentially the definition of the word. And we all know leaders who did not bother to listen, or just did not need to in order to be great; today’s most vaunted tech leader, Steve Jobs, was famously disrespectful of the opinions of others, yet made a lot of world-changing decisions (not all for the better).

This is exactly why working in a big corporation is so stifling. If you’re in a small company, the executives are close enough to the front line that it’s more like working in a tribe, but in a big company, the leadership is so removed from those who do the work that executive teams operate like the politburo we so decry in communist countries. Certainly the bureaucracies are no more enjoyable or forgiving.

I find it both ironic and painful that my inability to work for someone else resulted in my creating a company that involved a lot of smart, capable people working for someone else.

I wish I had a solution. If this were an easy problem, its solution would already be pervasive, because the benefits are massive. Just in terms of efficiency, we’ve seen how much better the free market is than planned economies, but it also has a hugely positive impact on quality of life. People are happier when they’re in control.

I know the solution is not more freelancing and contract work, which America’s corporations are addicted to. That’s the worst of both worlds: The exploitative nature of capitalism with the inefficient bureaucracies of communism. Transactions on the free market work because they’re good for both sides, but most people only accept part-time contract relationships today when they have no other real choices.

Holacracy certainly isn’t the answer. It’s fundamentally flawed because of its implicit power structure — Tony Hsieh still runs Zappos, even if he does not use a central planning committee to do it — but the biggest problem is it makes no mention of economics. Without a clear system for scoring the transactions (i.e., money) it’s impossible to build a free market.

This problem of how to handle economics within a non-hierarchical company might lead some to think of using blockchain tokens as an internal currency. This is impossible today, even setting aside the fact that the world of blockchain is mostly about fraud and black market sales. The biggest problem is that we have no idea how to value most of the work people do. I mean, we might know what a developer should get paid for a year’s work, but how much is that work worth? The majority of the work done in modern corporations is incredibly hard to value, which is partially why companies are so inefficient and make so many bad decisions.

That brings up an even bigger problem — companies today hire workers to make money from their labor. In other words, they generate profit because they pay their employees less than they’re worth. If everyone could trade their labor for exactly the amount of money it was worth, the corporations that employ them would have a much harder time making money. Instead, in modern corporations the shareholders and the executive team — again, the central planning committee we so despise — make the majority of the money, while the front line does all the work and makes very little. This is true even at the big tech firms; software developers might be well paid relative to hotel workers, but they’re paid a pittance compared to the founders and executives. This might speak to why we have no solution yet — free market corporations would tend to reduce concentrations of wealth, which would be terribly disruptive to the current system.

Like I said, I don’t have a solution. But at least now I know what makes the current system so painful, and it gives me some hope that we actually can come up with a better answer. I know I’ll be working harder in the future to manage the downsides of what we have today.

  1. Although I might stress the “well regulated” part more than most modern economists.
  2. Of course, capitalism is just as capable of killing its citizens, whether through starvation or lack of health care.
  3. Note that I’m not taking the capitalist side of the cold war here; while Americans were decrying the oppression of the Soviets, we were actively clawing back progress on civil rights and knocking over democratically elected governments. This article is about principles, which political regimes rarely show a great track record in following.
  4. But not so independent that you should be as pathological as Ayn Rand.

Putting OKRs Into Practice

The true story of trying to put Google’s planning system into use

When Google was less than a year old, they began using a planning system presented by legendary venture capitalist John Doerr of Kleiner Perkins. When I went to put it into practice at Puppet in the early days of growing the team, things were not as easy as they appeared. Success involved creation of a complete solution, not just a description of the documents you need to create.

When I went to try to use the system as described by Doerr, I had multiple questions it didn’t answer. Just to start with, when and how do you make and update these OKRs? It’s great to say you should have this record of your goals, but I could easily come up with multiple conflicting mechanisms for developing it, none of which are obviously better:

  • The CEO could develop them independently and deliver them to the team
  • The executive team could develop them collaboratively
  • They could be sourced from the front-line team

None of these is obviously right or wrong, and of course, neither are they sufficient explanations for how to do it. Do you do it in one sitting? Multiple revisions? How long should you spend on it? How often should you update them? Can you change them mid-stream if your situation obviously changes? There’s a lot left to the reader. You can say it doesn’t matter, but of course, it does, and even if you’re right, you still have to pick one. Why go through the effort of describing the output but skip the whole process you used to create and maintain it?

Here’s how we did it.

Startup Days

Start by reading John Doerr’s original presentation, even though it’s relatively thin. In summary, you should have three to five top-level objectives, and each of these should have a couple of key results associated with it. Together these constitute a company’s Objectives and Key Results, or “OKRs”. These should then cascade down to the rest of your team, so that each team and person has OKRs. This is a useful high-level tool for communication and focus, even in small teams. (Note that I’ll use ‘goals’ and ‘objectives’ interchangeably here; far more people use the shorter term in practice, and we treated them equivalently.)
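To make the shape concrete, here’s a minimal sketch of that structure in code; the field names, scoring scale, and parent link are my own assumptions for illustration, since Doerr describes the documents rather than a data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class KeyResult:
    description: str    # a measurable outcome, e.g. "ship v1 to ten beta customers"
    score: float = 0.0  # graded at period end; the 0.0-1.0 scale is an assumption

@dataclass
class Objective:
    name: str
    key_results: list[KeyResult] = field(default_factory=list)

@dataclass
class OKRSet:
    owner: str  # "company", a team, or a person
    objectives: list[Objective] = field(default_factory=list)
    parent: Optional["OKRSet"] = None  # the cascade: team OKRs derive from company OKRs

# Three to five objectives at the top, each with a couple of key results;
# each team's set points back at the company's.
company = OKRSet(owner="company", objectives=[
    Objective("Win the enterprise segment", [
        KeyResult("Close five enterprise deals"),
        KeyResult("Hit a 95% renewal rate"),
    ]),
])
engineering = OKRSet(owner="engineering", parent=company)
```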

At Puppet, we spoke of an operational rhythm, which is essentially the set of repetitious tasks we run and the cadence we execute them on to keep the business working. But the OKR system as presented includes no operational rhythm, no indication that people are involved in creating these goals or that doing so takes any time. So we invented our own rhythm:

  • As early as possible each period, the management team meets to decide the company OKRs. This started out as a 45-minute meeting that just recorded the goals that were in my head, but evolved over years into a two-day offsite where the leadership team first acquired a shared understanding of where the business was and what we needed to do, then built the goals from there. In retrospect we should have put in these longer days earlier; your team should frequently think deeply about what you should be working on, rather than just running all the time.
  • The rest of the company has some time to build its OKRs from the top-level goals. Initially this was a couple of days, but it eventually morphed into a couple of weeks.
  • These cascaded goals are then used to modify the company OKRs if needed. (In other words, we supported a merged top-down and bottom-up planning model.) This is when management would learn if our view of reality was materially different from that of the people at the front line.
  • At the end of every period, the management team records how we did against our goals. Again, this began as just writing down the score, but grew to become a more complete retrospective run by a project manager. This meeting is at most a couple of hours long, and just includes the leadership team.

When we began this process, we wanted short-term goals, so we ran this cadence eight times a year; thus, we called our planning periods “octaves.” As we matured and could think and execute in a more long-term fashion, we reduced this to quarterly.

I think this system is sufficient for most companies of 15 to 250 people. Some companies might grow out of this at relatively few people, whereas others might scale very well with it. I expect most people could scale this system successfully by gradually increasing the amount of time spent on each session, with more time in deep discussion, and also by assigning a project manager to run it. I ran the whole process until we were probably 250 people, which was a mistake that took too much of my time, resulted in too centralized of an organization, and limited our effectiveness because I suck at project management.

Note that these are pointedly not plans; that is, they are not step by step instructions for how to achieve a goal. We’re declaring what we want done, but not how we expect to do it. This is both a blessing and a curse. On the one hand, it provides a lot of freedom for people at the front line to figure out the right way of accomplishing something, but it also leaves a gaping hole in your organization. At some point, someone has to actually do the work, but where in your operational rhythm does a team translate goals into a plan for accomplishing them? Do you make that time? We didn’t until far too late, and it mattered.

Scaling

As we scaled the company and this system, we found a few critical gaps.

The biggest one is obvious enough that I cringe now just thinking about it. You would never try to build a product without being clear on who would do the work, and of course you shouldn’t try to accomplish your company’s goals without assigning each objective and key result to an individual. Yet our initial version (and the one presented by Doerr) had nothing to say about people. At some point we added the requirement that every objective had a name assigned to it, which was a huge change for us – and a really positive one.

The lack of accountability for each goal was exacerbated by the fact that we didn’t have any mechanism for in-quarter check-ins on the goals. We’d frequently only find out at the end of a quarter that a goal was going to be missed, when it was far too late to do anything about it. So we built a weekly operations review (“ops review”) where we reviewed progress against the goals. This meeting is a predictive exercise, not a status statement. Goals are green if you expect to accomplish them on time, even if you’re still two months away from the deadline. We mostly focus just on the areas we don’t expect to hit, which allows us to invest early in correcting our execution or changing our expectations.

It’s worth reiterating, because this was so hard to get people to understand: The goal of the ops review was not to describe the status of each goal; it was to build a shared understanding of whether we were likely to achieve our goals and then build an action plan to resolve the predicted misses. The majority of people entered that meeting with a belief that they needed to justify their paycheck, and it took a lot of education to get them to understand the real purpose.
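To illustrate the predictive framing, here’s a minimal sketch (my own illustration, not a tool we used) in which the weekly question for each goal is whether it will land, not what happened last week:

```python
from datetime import date

# Status here is a forecast, not a progress report: a goal two months from
# its deadline is "green" if its owner still expects to hit it on time, and
# "red" the moment they stop believing that.
def review_status(expected_completion: date, deadline: date) -> str:
    return "green" if expected_completion <= deadline else "red"

quarter_end = date(2016, 3, 31)
goals = {
    "Close five enterprise deals": review_status(date(2016, 3, 20), quarter_end),
    "Ship the new installer":      review_status(date(2016, 4, 15), quarter_end),
}

# The ops review skips the greens and spends its time on the reds, deciding
# whether to correct execution or change the expectation.
at_risk = [name for name, status in goals.items() if status == "red"]
```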

This addition to our rhythm was pretty awesome. In one move, it basically eliminated the firefighting that had driven so much of our execution. We still had fires periodically, but they were actual surprises, not just sudden surfacing of old information, or realizing at the end of the quarter that a goal never had an owner.

The downside of the ops review is that it’s expensive (it necessarily includes a lot of top people at the company) and it takes a lot of work to make this kind of meeting worthwhile every week. I got the idea for this meeting from the excellent American Icon, about how Alan Mulally turned around Ford. A long, weekly operations review with his senior team was one of his key tactics. My team often complained that weekly was too frequent, but if a company as big as Ford was responding weekly to the conditions on the ground, shouldn’t a small startup be at least that responsive?

Around this time, we integrated the budgeting process into the planning process. It’s important to recognize they’re different — you should build the plan you want and then find a way to budget for it, rather than building a budget for your departments and then letting them decide how to spend it. It’s important that you be good at both, though, and it was around this stage that we started to develop the budgeting skill and learn how to integrate it into planning. That was painful, to put it mildly.

As we scaled, the company goals tended to get expressed in terms of departmental targets within sales, marketing, engineering, etc. When we were small, this seemed like a feature because it had natural lines of ownership, but as we grew it became clear it was a critical flaw. It’s important to translate plans to people and teams, but this was dysfunctional. It discouraged people from building goals that relied on other teams, and thus encouraged silos in the company. Talk about a failure mode. When we added names to each objective, we rebuilt the whole process to be structured from the top down around company goals rather than team goals, which allowed us to crack this departmental view and force shared goals and collaborative execution.

We also eventually added a layer of OKRs above our annual goals, giving us a roughly three year time horizon. These became crucial in sharing and deciding what the priorities were for a given year.

What might come next?

The above roughly describes the system as it stood when I stepped down from Puppet in 2016. It was obvious at the time that we were in need of another step-change in capability in our planning system, but the new CEO took responsibility for driving that. By the time I left, we could see many opportunities to improve what we were doing.

The big one is that we needed to push all the local knowledge about this process into code. We were using multiple different formats and tools, because different meetings require different interactions, and it was too difficult for most people to track what was happening, where, and why. For instance, our source of truth for the OKRs themselves tended to reside in Trello, but it’s a poor fit for storing updates and presenting the predictions of whether a goal would land. I couldn’t imagine trying to run a report on quantitative goals based on Trello data. Thus, we ended up storing the weekly updates in spreadsheets, which are exactly as powerful and readable as shell scripts. It meant we couldn’t trust most people to update the data, because the document was so complicated. I would have loved a single source of truth that anyone could use. In addition, I wanted to have an app automatically pull any data from original sources so I didn’t have team members doing manual work that could be automated (I mean, duh, Puppet is an automation company).
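As a sketch of the kind of automation I mean (the endpoint and payload here are invented, not anything we ran), a quantitative key result’s current value could be pulled straight from the system that owns it instead of being re-typed into a spreadsheet each week:

```python
import json
import urllib.request

# Hypothetical: read a quantitative key result straight from the reporting
# API of the system that owns it. The URL and payload shape are invented.
def fetch_metric(url: str) -> float:
    with urllib.request.urlopen(url) as resp:
        return float(json.load(resp)["value"])

def kr_progress(current: float, target: float) -> float:
    return min(current / target, 1.0)

# e.g. quarterly bookings pulled from the CRM rather than re-typed weekly
bookings = fetch_metric("https://crm.example.com/api/metrics/quarterly-bookings")
print(f"Bookings KR: {kr_progress(bookings, 2_000_000):.0%} of target")
```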

I also wanted a significantly better retrospective process that truly helped us improve the business by laying bare how our wonderfully laid plans went wrong. We were good at the work of looking back and being transparent about where we were, but there was a lot of room for improving how we tied that work to how we operated.

Lastly, I hate that our goals were built around quarters. I think having a cadence for building and validating plans is critical, but it’s silly that this cadence got translated into the timelines for the goals themselves. It often implied that each of our goals would take exactly a single planning cycle. Some obviously do — we have quarterly sales targets that we need to hit during exactly a quarter — but many of our top-level objectives were shoehorned into a quarterly system. I’d much prefer a Kanban-style on-demand planning system that would allow us to have a high-fidelity plan for what we’re working on now, and a quality backlog for what we’ll do as goals complete.

Conclusion

I’m not convinced it matters much what planning and execution system you use, but I’m utterly convinced you should have one. In the end, it’s merely a team-wide mechanism for developing, communicating, and tracking what you’re trying to achieve. It’s obviously important to have goals. I think most of us would agree you should, in some way, share those goals with the team so everyone is working toward the same ends. And, of course, your goals tomorrow should probably be somehow related to your goals today. (This is surprisingly hard.)

If you don’t have one yet, you could do worse than building an operational rhythm from what we built at Puppet. You’ll have to work through a lot of initial discomfort as you translate vague words into technical terms whose meaning is widely agreed upon around your team. But it’ll be worth it.

Where does your work live?

Most of our software is confused about what job we’ve hired it for

I’ve really enjoyed playing Zelda: Breath of the Wild, but my life has been changed more by one of its reviews than by the game itself. The review had a unique view on what made the game so great. It contrasted Zelda to other games — Destiny, for example — saying that while others tended to distribute gameplay across multiple areas (e.g., in Destiny, the radar is a critical part of the game), Zelda really focuses the game into the main screen where you walk, glide, ride, and fight.

The review (which I unfortunately cannot find, because of the quantity of posts online that all use similar words) called this “where the game lives”. I love what this phrase evokes. I absolutely loved the game Borderlands, but I was deeply frightened of ever finding out how much time I spent at its store screen, because item collection and management was such an important part of the game. A lot of its fun was specifically from the collection, rather than the playing, but that meant a large chunk of the game lived in the store, as opposed to out in the world.

Most of our software could use a similar dissection.

Like Destiny and Borderlands (which are both great, and quite similar), the tools we use show a surprising distance between what they help us do and what we’ve hired them for. If I may be permitted to steal from this review, this distance is a sign that our software is confused about where our work lives.

To pick a counter-example, I’m writing this post in Ulysses. People who choose this software laud its simplicity, which makes it easy to focus. What they really mean is, all you can do with it is write. There’s almost no formatting, very little organization, very little anything but writing. The work lives in the writing. (My first draft was written on an iPad, which further simplifies that focus.)

Contrast that with any task or project management tool. My wife and I are in the middle of planning a bunch of camping, and we’re using Trello to organize many of the options. What is Trello’s opinion about where the work lives?

Last time I looked, my wife had three browser windows open, each with about fifteen tabs. She’s also working in RoadTrippers (Pro, natch). To get this work into Trello is a process of copying, pasting, writing copy about why you pasted it, and then using Trello to file it so you can find and manage it later.

In this operation, where does the work live? It’s scattered across maps, calendars, browsers, and applications like RoadTrippers. Does Trello know that? Does it agree? How does its opinion of where the work lives affect its utility? Brief introspection leads us to conclude Trello has no idea where the work lives, and the humans using it are entirely responsible for connecting the two.

Here’s a simple exercise for anyone using a task tracking app: Envision yourself going into that app and just marking everything done, even though you obviously haven’t done the work. It hurts to even consider, doesn’t it? Your brain has absorbed that these tasks are representations of work, and it’s your job to match the representation to the work, because you know the tool won’t do it for you. When you mark something done, of course nothing goes out and does the work; you’re just lying to your software about the state of the world. And it has no idea! This disconnect is what leads to an allergic response to the idea of marking work done in software that is not yet done in the real world.

I’d like to say that Trello was just a bad example, but I think all task tools share this confusion. Bug trackers and project management tools are specialized examples of this, and they obviously have no idea where the work lives. If I’m writing code, all of the work is done in my text editor, in files on disk, and maybe in my testing tools to ensure the work is done and done right. I then go somewhere entirely different to mark the work done. Why? Shouldn’t GitHub know it already? Why do I have to explain it? The answer is because these trackers think tracking is the work, when of course, the work is the work.
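
GitHub really does have the signal already; wiring it up is almost trivial. Here’s a sketch of a webhook receiver that marks a task done the moment a pull request merges, instead of waiting for a human to re-state it. The webhook fields are GitHub’s real ones; the task-ID convention and the tracker call are hypothetical stand-ins:

```python
import re
from flask import Flask, request

app = Flask(__name__)

@app.route("/github-webhook", methods=["POST"])
def on_pull_request():
    event = request.get_json()
    # GitHub's pull_request webhook sends action == "closed" with
    # merged == true when a PR actually lands.
    if event.get("action") == "closed" and event["pull_request"].get("merged"):
        task_id = extract_task_id(event["pull_request"]["title"])
        if task_id:
            mark_task_done(task_id)   # hypothetical tracker call
    return "", 204

def extract_task_id(title: str) -> str:
    """Hypothetical convention: a reference like 'PROJ-123' in the PR title."""
    match = re.search(r"[A-Z]+-\d+", title)
    return match.group(0) if match else ""

def mark_task_done(task_id: str) -> None:
    """Stand-in for whatever task tracker API you actually use."""
    print(f"marking {task_id} done")
```

That the glue is this small, and that almost nobody ships it, is the whole complaint: the tracker treats the representation as the work.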

It’s no better in personal tools. I just started using Things 3 for my own tasks, nearly all of which end up being expressed in email or calendars, yet Things 3 has no conception of either. It has no idea where my work lives, and expects me to put out all of the effort necessary to connect them.

Speaking of email and calendars, they have their own role in this conversation.

Email is interesting. Everyone hates it, because it’s so important that we all have to use it constantly; the animosity is a direct result of its utility and criticality. In other words, people hate it because it works so well. But when you’re doing email, what work are you actually doing?

I’m not sure I know. You’re communicating. But usually, you’re communicating about some other kind of work, like a document, a meeting, or some kind of activity that takes place outside of the inbox. A well-designed application will remove the need for communication via email — Google Docs is a great example of this. Its sharing and commenting features have allowed many discussions to move from email to where the work is, in the document itself; their addition of suggestions has doubled down on focusing on the work, rather than talking about the work. (Note that this is completely different from Slack, which advertises that it gets rid of email, by which it means it moves the conversation, not that it does a better job of bringing the work into the software.)

Of course, how do you have Google Docs tell you someone commented on your document? Email. 🙂

What about calendars? Why do calendars exist? As a tool, where does their work live?

I’m thankful I once had to explain my position on this to a friend; otherwise I’d still think it was easy to understand. It’s so counter to how people work today that a relatively obvious truth becomes impossibly counter-intuitive: calendars are about how I spend my time.

When using a calendar, the work is what you actually do. You, a person, out in the world. That’s what the calendar is about. Its job is to ensure you do the right things at the right time, with the right people, in the right place. It’s about doing, not documenting, managing, or notifying. You can put something in a calendar and not do it, or do work that’s not in the calendar; any of us would say, obviously, that it’s what you do that matters, not what the calendar says. Merely creating an event has no effect, and thus no value; it only matters if it then affects your behavior. The work lives in what you do. But does your calendar make even the slightest attempt to directly manage how you spend your time? What would that even look like?

To pick a small example, my calendar apps seem to not care what city I’m currently in, or where I’m physically located. Isn’t this weird? The tool whose primary job is to manage where I am physically located makes no attempt to represent or take into account the core fact it is meant to control. It still dumbfounds me.

Yes, they can tell me in real time when I should leave for a meeting based on travel time (as long as travel involves driving, rather than walking down the hall to a conference room), but they can’t say, “Given that on Tuesday you’ll be in Portland, working from home, you should block out travel time to get downtown to lunch and back”. That is, they can alert me in the moment, but they can’t do their core job — reserving time to ensure I’ll be doing the right thing in the future. Because they can’t do this, I have to create those blocks myself, or else I’ll find myself choosing between skipping one appointment and being late to another. The whole point of a calendar is to manage time, but in this simple example it fails to ensure I will have space to transition my corporeal existence between physical locations. Shouldn’t that be step one, rather than an exercise left to the human?
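
Everything needed for that travel block already exists in public APIs. Here’s a sketch of the whole feature against the real Google Calendar API, assuming OAuth credentials are already in hand; the fixed 30-minute buffer is a stand-in for actual travel-time estimation:

```python
from datetime import datetime, timedelta
from googleapiclient.discovery import build  # pip install google-api-python-client

def add_travel_buffers(creds, buffer_minutes: int = 30) -> None:
    """Insert a travel block before any event whose location differs from
    the previous one's. A sketch: a fixed buffer stands in for real
    travel-time math, which would need a maps API."""
    service = build("calendar", "v3", credentials=creds)
    events = service.events().list(
        calendarId="primary",
        timeMin=datetime.utcnow().isoformat() + "Z",
        maxResults=50,
        singleEvents=True,
        orderBy="startTime",
    ).execute().get("items", [])

    for prev, nxt in zip(events, events[1:]):
        locations_differ = (prev.get("location") and nxt.get("location")
                            and prev["location"] != nxt["location"])
        if locations_differ and "dateTime" in nxt.get("start", {}):
            start = datetime.fromisoformat(nxt["start"]["dateTime"])
            service.events().insert(calendarId="primary", body={
                "summary": f"Travel to {nxt['location']}",
                "start": {"dateTime": (start - timedelta(minutes=buffer_minutes)).isoformat()},
                "end": {"dateTime": start.isoformat()},
            }).execute()
```

If a few dozen lines of glue can do this crudely, a calendar vendor with actual location data has no excuse.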

I also reserve time for tasks I do alone every day, like working out and writing. I do this primarily to ensure it gets done, rather than because those times are special (although I do get a bit jittery now if I don’t write first thing in the morning). There’s no way to explain to my calendar what I’ve actually blocked that time out for, and thus no way for it to respond to whether I’ve done it or not, even though my computer knows if I’ve done my writing, and my watch knows if I’ve worked out. Wouldn’t it be great to see your calendar dynamically rearranging your day because it noticed you missed your workout?
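
The verification half is not even hard; my computer already knows. Here’s a sketch, with made-up paths and times, of the check my calendar refuses to do for me: a crude proxy that asks whether the draft actually changed during the block the calendar says was for writing.

```python
import os
from datetime import datetime

def writing_block_honored(path: str, block_start: datetime, block_end: datetime) -> bool:
    """Crude proxy: did the draft change during the time the calendar
    says I was writing? Uses the file's modification time."""
    mtime = datetime.fromtimestamp(os.path.getmtime(os.path.expanduser(path)))
    return block_start <= mtime <= block_end

# Hypothetical usage; the path and block times would come from the calendar itself.
if not writing_block_honored("~/drafts/post.md",
                             datetime(2018, 3, 6, 6, 0),
                             datetime(2018, 3, 6, 7, 0)):
    print("Calendar said writing, but the draft never changed. Rearrange the day?")
```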

My calendar is confused about what work I’ve hired it to do, and therefore does not know it needs to look in those places.

We’re so used to the idea that our software represents the work that we seem to have lost hope that it will actually help us do it. Most of the tools we use are entirely disconnected from the work they’re supposed to help us with. Marking something done does not do it, deleting email does not indicate communication has happened, sitting at your computer while your calendar says you’re writing does not produce text. The representations are not the work, yet we forgive our tools for only dealing in representations, not actual work.

I don’t know if that reviewer was right about why Zelda: BoTW is so great. I can’t even imagine what all the software I use would look like if it were built around where my work lived, rather than merely being used to model and manage it.

What I do know is that our software can and should be built to help us do the jobs we’ve hired it for. But because it is confused about why we use it, what we do every day is lower quality, less fun, and just downright confusing.

This also shows just how much opportunity there is to improve the software we use on a daily basis.

Founding Myths are Pernicious Propaganda

It’s only safe to learn from true stories

Every startup has its founding myth, the story it uses to help draw in and motivate employees, customers, and investors. In most cases, those myths were cultivated years after the company’s founding, and bear little relationship to reality. When you dig deeply into a company’s true origins, what initially looked like the product of far-sighted genius deflates into a mix of insight and smart decisions meshed with a series of serendipitous events.

This gap between myth and reality in no way diminishes the achievement of these companies and their teams, but it collapses our ability to learn the true lessons from those who made it and those who did not. If we can shift our story-telling from creation myths to capturing the collisions actually necessary to germinate greatness, we can better recognize what it will take to support it next time. Even better, it will allow us to pull forth those organizations not lucky enough to get all the draws, enabling our ecosystem to address neglected founders attacking neglected markets.

As I was building Puppet, I assiduously sought founder stories, trying to understand what it really took to do what they did. Over time, this turned from a quest to build “Good to Great” for startups (which was silly anyway, given how stupid that book is) into a collection of more-true founding myths.

I find these funny, but the true goal is to puncture the story people want you to buy so you can understand what it really took. When you do that, you find that yes, people had to work hard, they had to be smart, they had to be creative. Those are necessary, but as testified by the thousands of failed companies full of hard-working, smart, creative people, they aren’t sufficient.

When you can find true founding stories, such as Phil Knight’s excellent Shoe Dog, you gain critical insight that might help you build your own business better, but you also realize how much of building a great company is expertly riding the tide of luck and opportunity.

At first blush, there’s no problem with this, other than the ridiculous levels of delusion necessary to turn serendipitous opportunism into genius founding myths that manage not to include the lucky bits.

On reflection, though, erasing the role chance plays causes systemic problems throughout the startup world.

We’ve already seen that human nature abhors a vacuum, refusing to accept that people don’t always deserve their circumstances. Luck is obviously not the only contributor to a startup’s success, because how they play the hand they’re dealt is at least as important as the hand itself. But let’s not lie to ourselves about the role of randomness in a startup’s circumstances.

When we do, it limits our ability to understand what it really takes to make a great company. This is similar to the deep structural flaws within “Good to Great”: If you just ask the great companies what they did and try to do the same, you sound smart but have only made your readers dumber. I recommend The Halo Effect for a more complete discussion on how this business-book analysis of success fails its readers. You can fatally pierce the concept by focusing on one kind of failure:

“The Delusion of the Wrong End of the Stick – successful companies may do various things but that does not mean that doing those things will make you successful”

As I was trying to build Puppet, I found this discrepancy between the stories and the realities immensely frustrating. I was uninterested in being valorized for something I felt lucky to be doing, but these myths were a screen obscuring information I knew was valuable. My goal in studying my predecessors was to improve my own chance of success, but it’s impossible to learn from propaganda. Instead of a community of founders learning from each other, each iteration better than the last, you get cargo-cult companies that manage to copy every part of a successful business except the parts that made it work.

As just one example, Google’s own data now shows that its vaunted hiring practices did not actually help, but a whole generation of startups copied their worthless brain teasers and discriminatory Ivy League requirements. For years, Google’s dominance was partially explained by their great hiring, but what happens when it turns out they weren’t actually better at it than anyone else? Nothing to see here, please move along. Let’s just start another round, this time copying Netflix and Amazon. Forgive me for thinking that Amazon’s success has more to do with their taking Walmart’s extractive business strategy to the web than with their use of internal APIs.

What I found again and again when I peeled these myths back was far more genius in how people responded to their circumstances than in how those circumstances were created. Oracle is a perfect example: Larry Ellison was smart enough to realize that IBM had invented this great new database concept but wasn’t trying to sell it, so he took advantage of that and built it himself. You can bet that’s not how Oracle tells its founding story, but only once you know it can you see that Ellison’s true strength was pushing hard and fast enough to ensure that Oracle was the first to market with this free idea. I don’t look up to Oracle’s founder, but you can bet I learned from that story.

Google is another good one: One of the best CS schools in the world bashes together two random guys, who then go on to invent something and get lucky enough to fail to sell it. What you can learn from Google looks pretty different once you know that. Again, it does not diminish what they did after that failure, but it turns their stories of manifest destiny into a more true telling of wandering an icy landscape looking for safety.

When they say history is written by the victors, they mean both that only the stories of the winners end up getting recorded, and that the stories themselves get morphed, corrupted, to present those winners as just and deserving. That corruption is as pervasive in the world of startups as it is in the wider world.

There is tremendous value in understanding the real stories behind the great successes. Good luck getting companies and founders to tell them.

Apple Has a Focus Problem

But it’s not what you think

The iPad has made great strides in the last few years toward being a full-time computing platform. While many still cry that it’s just not very useful yet, others are out there using it as their only computer, or have at least shifted significant usage from desktops and laptops to it. I have been traveling at least a third of the time over the last few years, and the iPad is my only companion on the majority of those trips. It is by far my favorite computer, but it has one glaring defect.

Obviously, the heart of the iPad is its touch interface. The direct interaction is exactly what makes it such a joy to use for so much of what it can do. However, even Apple has finally acknowledged that a keyboard is a necessary companion for many iPad uses. While their keyboard has some issues (or rather, their keyboard connection technology does), the overall experience is quite nice. Except for one little detail.

Unlike touch, keyboards are inherently targeted. While touch is powerful specifically because of your ability to directly manipulate the software you’re using, keyboards must first be pointed at a place that needs text. They need focus. And here’s where the iPad falls down.

It has no concept of focus. Or rather, it obviously does, but its designers are in denial about it. Keyboard focus is littered throughout the platform, from the presence of a cursor when inputting text, to the software keyboard auto-hiding when no text field is in use. When you’re producing text, this generally works pretty well.

But the keyboard is used for far more than typing. Whether it’s command-tabbing between applications or using shortcuts within them, the keyboard is a critical control device. And it just does not work right on the iPad.

Honestly, it does not work that well on the Mac these days, either. Apple’s Mail application can’t seem to figure out how to pass focus between emails, especially when deleting them, and there are plenty of other situations where focus within a window — as opposed to between them — can neither be perceived nor controlled.

But because of its orientation around touch, the iPad is much worse. The vast majority of the time that I want to, say, delete an email, the delete key just literally does nothing. I can even touch the email I want to delete to “ensure” it’s focused, and yet still the keyboard is useless. Most of the time, I have to resort to touch for interacting with applications. Many iPad-specific apps now advertise keyboard shortcuts, but they’re useless. I mean, sometimes they work. But that’s not good enough when you’re trying to get something done; we learn, eventually, to do what’s consistent, rather than what’s fastest when it works but often fails.

I have so many memories of using my keyboard to tell some app to do something, and just staring at the iPad until I realize nothing is going to happen. I tap around, hoping to tell it, “start here”, but it’s usually no use. My expectations collapse, and I feel crippled, reduced to the level of those who just click around their GUIs. It’s not wrong, but it can be so much better.

I follow every conversation I can find about the iPad as a productivity platform. I’m deeply interested in the thoughts of people who are bought into it, such as Federico Viticci, so I am cognizant of the general thread of how this is progressing. As so often happens to me, I seem to be the only person concerned about this problem. Here are all of these people talking and writing about the iPad as a general purpose computing platform, yet they don’t mention this problem of the inconsistency and invisibility of focus at all. Do they just not use keyboard shortcuts? That’s not possible, right?

And don’t tell me it’s my iPad. I’ve had literally every version ever made (I currently have both sizes of the iPad Pro), and every few iterations I do a clean install. I’ve tried every variety of keyboard, from Apple’s to Logitech’s to a $15 piece of junk I bought at Brookstone at the airport one time because I managed to leave mine at home. I’ve changed every variable. The only explanation seems to be that others just don’t care about this.

I’d also like to see people rallying for Apple to shift the default orientation of the iPad Pro to landscape, because the keyboard is so central to its use, along with an automatic disabling of the rotation lock when the keyboard is connected, but these are minor compared to the core utility of a keyboard.

To me, one of the central reasons to want a keyboard, one of the primary purposes of the keyboard on my computer, is to control the computer. At that, the iPad is currently failing miserably, and it deserves better.

I fear that this can only be fixed by introducing completely new design concepts to the iPad. Obviously the OS understands focus, but the design team has worked hard not to ever indicate that to you. That has meant application developers have not bothered to manage it effectively, and has trained users to rely on touch rather than the keyboard.

While Apple is a design powerhouse, they’re also into brutalist design, always opting for less, always preferring to forego a capability if they can’t make it perfect. This causes me to despair that they’ll make material strides on this any time soon.

I’d love to be proven wrong, but for Apple to start caring about this, its customers need to first.

What’s your story? Do you just not use keyboard shortcuts? Do they work great for you? Or they’re totally broken, but the other broken stuff is just so much more important you haven’t mentioned them yet?

My Computers Are All Broken

Software’s focus on world-dominance has crossed with my incessant tinkering to result in a computing environment that is failing me utterly.

I’ve tried everything, and it still doesn’t work.

My first computer was from basically the worst era Apple has ever produced, starting with Mac OS 7.6.1 and finally quitting around Mac OS 9, switching to BeOS. While I was using these for fun, I was learning Solaris (and AIX, and HP-UX, and FreeBSD, and Debian) for work. I know some are frustrated that the Mac has lost its spatial Finder, but I’m frustrated when I have to touch my mouse, because I spent years in a fine-tuned X-Windows interface that allowed me to do essentially everything from the keyboard. It was only marginally graphical; all of the windows except the browser were different kinds of text interfaces, from the terminal to vim to IRC. I controlled everything with a Sun Type V keyboard, which relegated the caps lock and backslash keys to some far away corner like God intended, and gave prominence to control, escape, and delete, which matter quite a bit more. I honestly have no idea what mouse I used for the first ten or fifteen years; I don’t normally care about them, so I don’t normally notice them.

As I gradually switched from a sysadmin-turned-developer to a CEO, my computing needs changed dramatically. I spent all my life in email, calendar, and chat rooms, instead of text windows (notice they’re all still text, though). I did not even have a desktop computer for years, because a laptop was a better compromise.

As the iPad got more powerful, though, and Apple’s iMac computers turned from lickable toys into their most powerful computers, I moved most of my computing from laptops up to desktops or down to tablets. For a little while, all was fine, because I still wasn’t spending much time on the computer — as a CEO, most of my time was spent directly communicating with people, either in meetings or via email and chat. Being present means not being on your stupid devices. That was my computing experience.

Once I left my CEO role, I shrugged. I changed a few things (started writing in Ulysses, for example) but basically kept the tools and practices I had.

It’s become clear now that I have a solid decade of debt that’s built up in how I use my computers.

I really hate it.

Some of it is obviously just bugs. Or something. The keyboard (a KÜL tenkeyless with some kind of Cherry MX switches) and mouse (some large Logitech thing) just don’t work most of the time. The keyboard has to be unplugged and plugged back in 90% of the time the computer goes to sleep, and the mouse pointer just does not seem to care about my needs, even after swapping mice, mousing platforms, hair shirts, and everything else I can think of (turns out that moving the mouse’s wireless adapter from a USB hub to the desktop might have fixed the mouse).

Beyond that, the software is out to get me.

Am I seriously the only person in the world who cares about keyboard focus?

I just deleted an email in Apple’s Mail.app on my desktop (running the latest OS and patches, though this problem has persisted for as long as I’ve used the app), and I literally have no idea where the focus went. I hit delete, the email goes away; I hit delete again, and literally nothing happens. An email is highlighted, but, oh, I see, it’s a gray highlight instead of a blue one. I click on it, now it’s blue, and suddenly delete works.

I have essentially the same problem on my iPad (which I work on at least as much as my desktop). My only feature request for the iPad is to make keyboard focus predictable and functional. You can be scrolling through emails with the arrow key, and suddenly it will just stop working. Delete an email, no idea where focus is. I don’t even know how to tell where the focus is.

I’m in this insane world where I can feel utterly defeated by the need to click on an email to delete it, but if I zoom out, just about every other part of my computing experience causes similar frustration. I’m apparently the only person in the world who has quality studio monitors on my desk, based on how much everyone is freaking out over the HomePod price. I have separate audio installations in seven parts of my house, and the HomePod in my bathroom is the cheapest one, beating the thirty-year-old (purchased used, recently) NAD and M&K kit in my bedroom by about $100. Nothing can cause you to lose hope that your needs will be met like the entire internet agreeing your needs don’t exist.

I don’t consider myself an audiophile, but I know enough to know that decent speakers start at around $300 (each, not a pair), rather than topping out there. I never bought into Sonos because that required believing they could deliver a good speaker, amplifier, and software experience for the price of a single decent speaker. Oh, and I’d have to join their walled garden; I’m willing to consider that, but it’s got to be worth it, and it never was for them.

So now most rooms in my house have audio streamed to them from an unsupported device that is going to become obsolete any day now.

Getting back to computers, it just could not be more clear to me that I’m in the shitty middle ground of the computer world.

I’m not a specialist any more. When I was a sysadmin, I was a specialist and I could build my computing environment to match that. (Although good luck finding specialized sysadmin hardware these days.) When I was a developer, I was a specialist, and my computing showed it.

Now I do what everyone else does: I email, browse, handle my calendar, chat a bit. My writing is a touch specialized, but not really; I’m using a specialist tool that’s great for writing books, but I’m just producing blog posts instead.

Even though I’m not a specialist, I’m still weird. I know that I’m using all of my tools differently than most of you are. I know we all think we’re special, but I’ve been around enough to know I am. Not in a good way, just in a “why are you doing that?” way.

Take my calendar. I’ve now twice written tools to tell me whether I’m meeting my goals in terms of how I spend my time. Sure, you have some sort of tomato timer to remind you to stand up or something. Amateur. My calendar should be a statement of my priorities, of how I will and do spend my time, and I want to hold myself accountable to my goals. I appear to be the only person who wants this, based on the searching I’ve done. Thankfully Google Calendar has APIs available, but why should I have to write this?
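
For anyone curious, the heart of such a tool is small. Here’s roughly the shape of the version I keep rewriting, against the real Google Calendar API; the category keywords are just my own event-naming conventions, and I’m assuming OAuth credentials already exist:

```python
from collections import defaultdict
from datetime import datetime, timedelta
from googleapiclient.discovery import build

# My own naming conventions for events; yours would differ.
CATEGORIES = {"writing": "Writing", "workout": "Health", "1:1": "People"}

def hours_by_category(creds, days: int = 7) -> dict:
    """Sum the last week's event durations, bucketed by keyword in the title."""
    service = build("calendar", "v3", credentials=creds)
    now = datetime.utcnow()
    events = service.events().list(
        calendarId="primary",
        timeMin=(now - timedelta(days=days)).isoformat() + "Z",
        timeMax=now.isoformat() + "Z",
        singleEvents=True,
        orderBy="startTime",
    ).execute().get("items", [])

    totals = defaultdict(float)
    for event in events:
        if "dateTime" not in event.get("start", {}):
            continue  # skip all-day events
        start = datetime.fromisoformat(event["start"]["dateTime"])
        end = datetime.fromisoformat(event["end"]["dateTime"])
        for keyword, bucket in CATEGORIES.items():
            if keyword in event.get("summary", "").lower():
                totals[bucket] += (end - start).total_seconds() / 3600
    return dict(totals)
```

Compare the output to a target allocation and you have an accountability report. Which, again: why am I writing this?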

I have tried every email client I can find, but they’re all written for someone else. They all seem to ask, “How can I help you do email without email?” I don’t want that. I want the vim of email clients. I want the Photoshop of email clients. Can you imagine telling a graphic designer that you want to help them make graphics without making graphics? It’s stupid. They’re designers. They design graphics. I’m a communicator. I communicate. I read and write. A lot. Make a client that’s better for that. But nope. Instead you have the modern equivalent of a lickable interface that still can’t do inline reply and only has five keyboard shortcuts. BZZZT. DELETED!

Some of it is just stupid. I have LED strips mounted to the back of my monitor, so I can get ambient lighting while I work. It’s actually an awesome feature, and I totally recommend it, but it’s a bit of a mess of wires with a ridiculous interface (a switch that keeps falling off my desk). I get that not everyone has their monitor against a wall, but this seems so great it should ship by default. Can I just get all the LEDs manufacturers keep cramming into keyboards moved back there instead? And, of course, I want it all connected to the computer so I can control it via software, as in the sketch below. Instead I’m wiring it all together myself and hoping 12V can’t catch my desk on fire. Yay.
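
For the record, “wiring it together myself” currently means something like this: a microcontroller on the strip, a serial line to the computer, and a protocol I invented. Everything below, device path and opcode included, is my own made-up setup:

```python
import serial  # pyserial

def set_backlight(r: int, g: int, b: int, port: str = "/dev/ttyUSB0") -> None:
    """Send an RGB value to a hypothetical microcontroller driving the strip.
    The 0xFF 'set color' opcode is invented; real hardware would define its own."""
    with serial.Serial(port, 115200, timeout=1) as conn:
        conn.write(bytes([0xFF, r, g, b]))

set_backlight(255, 180, 120)   # warm-ish ambient light behind the monitor
```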

I wish it were just that I’m a curmudgeon, that I can’t give up the old ways, but the truth is I like my iPad more than any other computing device I’ve ever owned, and honestly, I always hated the old ways. I’ve been hating software for as long as I’ve used it, and I’m proud I’ve been able to keep my edge as my career and the world around it have changed. I hated X-Windows. I really, really hated Mac OS. I loved BeOS, but honestly, I held it to a very low standard. I loved every part of Solaris except the parts that actually existed or that I ever used, and I quit using it just when it stopped sucking quite so much (god, remember having to use Veritas to get decent clustering? Even worse, remember DiskSuite?).

So it’s not that I miss the old days. I want to live in a different universe. I want a different physics model for our software world.

I know I can’t have it. I know I won’t get there.

But I’m an optimist. I’ll keep fighting.