The Rights You Lost When the Document Died

There are many upsides to the era of the smartphone and the cloud. But I’ll never forgive them for killing documents.

Photo by Daniel Zurnau

The limitations of mobile devices perfectly complement the strength of the cloud, as foretold by Sun Microsystems two decades ago: Your computers will be weak and hold no data, and the servers will be powerful and store everything. They were just wrong about what form those weak computers took (and, of course, who would be selling the servers).

I obviously love the benefits of mobility, of having an amazing computer in my pocket and having access to the world’s information pretty much wherever I am. And there are many capabilities we take for granted that you just could not provide without large central collections of data that the cloud enables.

But many of the changes in our tech landscape are accidental outcomes of cloud + smartphone. I regret them. And I want to fix them.

One of those big changes is the demise of the document.

You might think, no, I still have documents. I mean, yeah, I used to have Microsoft Word documents, but now I have Google Documents. Right?

No. The content you have in Google Docs is stored in a big database. Sometimes, when they choose to, you can treat it like a collection of documents. But it’s not.

This is pretty obvious when you try to use Google Drive. Compare using documents there to a Dropbox folder full of Word (or Pages[1]) documents. One comfortably exists in a world of folders, hard drives, and file systems, and the other just feels… not quite right. That’s because Google Drive is wearing the camouflage of a filesystem, but it’s a database in the back end, and the truth leaks through. We’re not fooled that easily.

It starts with a miserable user experience, but doesn’t end there. Because Google is storing all of your data centrally, you need their permission to use it. This is new.

Until the smartphone and cloud took off, Microsoft had a comprehensive monopoly in digital documents, in text, spreadsheets, and presentations.[2] To participate in business, you pretty much had to own Office. Their position was so strong they built a Mac version just to prop that platform up enough for it to look like a viable competitor. The market just didn’t see an OS as competitive without Office.

But lo and behold, times change, and now you want all of your files online. Google wants to help you do it, and just happens to have a couple of fancy features you couldn’t (at the time) get without uploading everything. Real-time collaborative editing is actually pretty sweet.

Microsoft worked for years to prevent other apps from reading their documents, but they seem to have stopped that at some point. I don’t know if they just gave up the arms race, realized they had already won so it didn’t matter, or actually felt the need to reduce their market power. But by the time Google acquired Writely and rebranded it as Google Docs, it wasn’t that hard for other software to read those documents. This was a massive boost for Google (and theoretically for smaller companies, but it didn’t turn out that way).

After all, all the docs you care about were right there, on your computer. You didn’t need to ask Microsoft for a copy; you did not have to export them, wondering what data was included and what was held back. And the form you’d send to Google was the exact form you’d send to anyone else, via email or on a USB drive. That made it easy for Google to ingest all of your critical data.

But in 2019, things are very different. Want all of your data from Google Docs in the next new company’s fancy web app? Step 1: Export. That’s right. You have to ask Google to give you your data. Because, and I hate to belabor this, you don’t have it.

Then your fancy app needs the ability to import the special arbitrary 100% proprietary format Google exports in. It’s true that some apps might allow you to skip this step: They’ll authenticate directly to Google and slurp your data down. But just like when Facebook shut down data access for Twitter and other competitors after building its own network by copying data from Friendster and others, Google will only tolerate this kind of integration when they don’t feel threatened.
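To make that asymmetry concrete, here’s roughly what “Step 1: Export” looks like when you script it against Google’s Drive v3 API. This is a minimal sketch, not a recipe: it assumes you’ve already gotten through Google’s OAuth consent flow and hold a valid credentials object, and the helper name and output layout are mine, not Google’s. The point stands either way: every byte arrives only because Google agreed to hand it over.

```python
# Sketch: asking Google for copies of "your" documents via the Drive v3
# API (google-api-python-client). Assumes `creds` came from Google's
# OAuth flow; export_all_docs and the .docx output choice are mine.
import os
from googleapiclient.discovery import build

DOCX = "application/vnd.openxmlformats-officedocument.wordprocessingml.document"

def export_all_docs(creds, out_dir="exported"):
    os.makedirs(out_dir, exist_ok=True)
    drive = build("drive", "v3", credentials=creds)
    page_token = None
    while True:
        # Google-native docs have no file body of their own; they must
        # be "exported" into a format Google chooses to support.
        resp = drive.files().list(
            q="mimeType='application/vnd.google-apps.document'",
            fields="nextPageToken, files(id, name)",
            pageToken=page_token,
        ).execute()
        for doc in resp.get("files", []):
            data = drive.files().export(fileId=doc["id"], mimeType=DOCX).execute()
            with open(os.path.join(out_dir, doc["name"] + ".docx"), "wb") as fh:
                fh.write(data)
        page_token = resp.get("nextPageToken")
        if page_token is None:
            break
```

Compare that with a folder of Word files, where “export” is just called copying.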

You need their permission, their tolerance. Given their use of monopoly power to weaken Yelp, among many others, you can be sure they’ll have no qualms about quashing a budding competitor by making this hard if someone gets close.

So here we have two analogous situations, with almost identical data, but in one case you have your data, and in the other you have to ask permission for it. There are downsides to each, but there’s no disputing that they’re different.

Note that this isn’t really a question of data “ownership”. Google would probably argue that you do actually own your data, as might Facebook. You just can’t access it in a useful way.

I’m thrilled that the cryptocurrency/blockchain communities are driving a conversation around data ownership, but it’s still disappointingly naive. This concept runs up hard against the reality that digital copies are free, and it’s basically impossible to prevent people from copying data you’ve given them read access to. Conversely, “ownership” means nothing if I can’t get all — and I mean all — of my data in a useful form.

What they need to talk about instead is rights. Realistically, I can’t own my birthday. Would that be a copyright? Trademark? Patent? Of course not. It’s just a fact, and facts can’t be property. But we all know that my birthdate matters.[3] I need the ability to prevent you from, say, publishing it widely, or using it in combination with other facts to impersonate me. These are legal rights, not aspects of ownership.

I miss the rights that documents gave us, now that we no longer have them. Because these rights were implicit, a consequence of the technology reality at the time, we did not even know we were giving them up. But we’ve got to fight now to get them back.

The first thing you can do is be conscious of this when you choose your tools. All life is a compromise, and sometimes it’s the right answer to give up rights for functionality. But many apps are functionally equivalent, yet make vastly different choices about your rights.

As one example, I recently migrated away from Evernote, because their business model is shifting to a focus on businesses, which, well, I am not. It was a nightmare. Even though everything in my Evernote notebooks was either a text file or a PDF, I couldn’t export literally a single thing as text or PDF. Well, that’s not true. I could export each individual item that way. But not the whole collection. My choices were HTML or a proprietary format. It took hours of manual work, and a lot of it I just dumped in a folder, never to look at again unless disaster strikes, because it wasn’t worth it.
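If you’re facing the same migration, the HTML-or-proprietary escape hatch is at least scriptable. Below is a minimal sketch of the kind of conversion I ended up doing by hand, assuming Evernote’s .enex export (an XML file with each note’s body embedded as ENML markup). The function name is mine, and a real conversion still has to deal with attachments, inline images, and formatting that this happily ignores.

```python
# Sketch: flattening an Evernote .enex export into plain-text files.
# Only the .enex structure (<note>, <title>, <content>) is Evernote's;
# everything else here is illustrative.
import html
import os
import re
import xml.etree.ElementTree as ET

def enex_to_text(enex_path, out_dir="notes"):
    os.makedirs(out_dir, exist_ok=True)
    root = ET.parse(enex_path).getroot()
    for i, note in enumerate(root.iter("note")):
        title = note.findtext("title") or f"untitled-{i}"
        body = note.findtext("content") or ""
        # The note body is XHTML-ish markup; crudely strip the tags
        # and unescape entities to get readable text.
        text = html.unescape(re.sub(r"<[^>]+>", " ", body))
        safe_name = re.sub(r"[^\w\- ]+", "_", title).strip() or f"note-{i}"
        out_path = os.path.join(out_dir, safe_name + ".txt")
        with open(out_path, "w", encoding="utf-8") as fh:
            fh.write(text)
```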

Compare that to what I’m replacing it with: Keep It (as of today, anyway). I’m sure I’ll give up some features to pick it, but, ah, I haven’t found any yet. And all the files I put in it? They’re just — hold on to your seat, folks — files. I can open that directory on my Mac. I can add things to it. I can remove them. Then I can see them in Keep It. If I stopped using it tomorrow, I would have to, um, add the files to something else. Or use the Finder, or Dropbox, or something similar.

It’s obvious that Keep It respects the document, and they see their value as adding functionality on top of it, rather than subsuming it in some way.

This should be the gold standard. You should be able to adopt an app that gives you functionality, but does not take away rights.

In the age of documents, apps like Microsoft Word could try to curtail your rights, but other developers would be on your side trying to give them back. In the age of the cloud, and the smartphone, you don’t get that option. You no longer have rights, you have “permission”, with a side of binding arbitration.

I don’t think we can go back to the era of documents on a disk. But it’s worth looking back and asking: As we’ve gained so much, what have we lost?

And then demanding that our software providers begin to give some of that back.

  1. Although even Pages, and all of Apple’s productivity apps, weaken the definition of a document, because they use bundles instead of a single file.
  2. I was on team break-up.
  3. I can’t believe you forgot mine last year.

Follow your weird

To really win, you have to seem strange to your true peers, not just the world at large.

Photo by Elias Castillo

Look, I have to say it: You’re weird. Even if I don’t know you, I’m confident: Somewhere, maybe lurking deep inside, something about you is just not right. I don’t know what, specifically. For all I know, you might be one of those weirdos whose particular strangeness is just how authentically normal you are. *shudder*.

This might be insulting to you, calling you weird. It happens a lot: I think I’m complimenting someone and they get all huffy. Conversely, people are often afraid I’ll be hurt when they shyly let me know that I, ah, don’t really fit. Don’t worry; you’d need to know me a lot better to successfully offend me.

Society is not a huge fan of weirdness — I mean, the definition is pretty much, “does not fit into society” — and it trains you away from it. We’re social animals, so you probably do what you can to conceal, or at least downplay, anything different. It makes sense. It’s a basic survival mechanism.

I know I do it. I can’t hide everything — some stuff just can’t be covered up — but I can usually skate through a conversation or two before people back up a step and give me that funny, sometimes frightened, look. Being on the West Coast helps; I’m a little less weird here than I was in the South. It probably also helps that I cut my mohawk, and the spiked leather jacket and knee-high boots stay in the closet now.

I’ve written a bit about my struggles to balance authenticity and fitting in. I think it’s important to call it out, because those who experience this struggle rarely have the luxury of admitting it. I’m lucky enough in multiple ways that I can be up front about it now. But resolving this conflict matters for more than psychological reasons. Our own goals usually require that we learn to embrace our weird. Not just grab on to it, actually, but really live in it. Inhabit it.

That weirdness is how we win.

This is easiest to show in investing. We have a natural tendency to do what is proven to work, but that is only assured of getting “market” — in other words, mediocre — returns. If you study the best investors, they’re all doing something that seems weird. Or at least, it did when they started. The first people who paid to string fiber from NYC to Chicago to make trades a couple milliseconds faster were considered pretty weird, but they knew the truth: Normal behavior gets normal returns, anything more requires true weirdness. (Well, or fraud. There’s always that if you’re afraid to stand out.)

It’s the same way in life. You can’t say you want something different, you want to be special, but then follow the same path as everyone else. “I’ll embrace what makes me special just as soon as I get financial security via a well-trodden path to success.” Oh yeah. We definitely believe that.

There’s a nice sleight of hand you can do, where you can say you’re doing something different, but really you’re a rare form of normal. The first few doctors and nurses were really weird. Those who recommended you wash hands before surgery were literally laughed at, considered dangerous crackpots.[1] But now? Most people become a doctor in pretty much the same way. Being a doctor is normal now, even if it’s not common. That’s probably good.

But what if your job is innovation? What if your whole story revolves around being different? Can you still follow a common path?

Because that’s what too many entrepreneurs today are doing: Trying to succeed at something different, by doing what everyone else is doing.

I mean. Not literally everyone else. But close enough.

It starts out innocently enough. There aren’t many people starting tech companies at first, and boy howdy are they weird. Someone makes a ton of money, all their weirdness gets written up — “hah hah, see how he has no sense of humanity but is somehow still a billionaire?” — and now we’ve got something to compare to. Hmm. Well. We can’t consistently duplicate Jobs, Gates, Packard. But if we tell enough stories enough times, we find some kind of average path through them. Ah! Enlightenment!

Now that we know what “most” people do, we can try it too. I mean, we have no idea if the stories about those people have anything to do with why they succeeded, but why let that get in our way? Conveniently, every time it works we’ll loudly claim success, but silently skip publishing any failures. Just ask Jim Collins: He got rich by cherry-picking data in Good to Great to “prove” there was a common path to business success. It turned out to have as much predictive value as an astrological reading, and is just business garbage dressed up in intellectual rigor, but that doesn’t seem to have hurt him.

The business world keeps buying his books. They need to believe there’s a common path that anyone can travel to victory. Otherwise, what would they sell? What would they buy?

Obviously this doesn’t work. There is no standard playbook to winning an arms race. Once there’s even a sniff of one, people copy it until it doesn’t work any more. This is pretty much the definition of the efficient market hypothesis: There’s no standard way to get above-average results. Once Warren Buffett got sufficiently rich as a value investor, so many people adopted the strategy that, well, it’s hard to make money that way. Not impossible, but nowhere near as easy as it was fifty years ago.

Of course, you can go too far in being weird. There has to be something in your business, in your strategy, that makes you different enough that you just might win. But adding a lot of other strangeness for no good reason worsens already long odds. The fact that Steve Jobs did so well even though he was a raging asshole, even to his best friends, made his success just that much less likely. Most people are a bit more like Gates and Bezos: Utterly ruthless in business, and caring not a whit for the downsides of their success, but perfectly capable of coming off as a decent person whenever required.

I’m rarely accused of being a world-class jerk, but I don’t pass the smell test as normal for very long. Jim Collins might say that if I were more pathological, I would have succeeded more. With Jobs and Musk as examples, it seems reasonable, right? In truth, it’s just as reasonable that I would have done better by dropping out of Reed College, like Jobs did, rather than foolishly graduating from it. Think it’s too late to retroactively quit early?

Yes, you have to learn to love your weird, but it shouldn’t be arbitrary. You can’t realistically say that you’re going to rock it in business because you’re addicted to collecting gum wrappers from the 50s. I agree that that’s weird, but is it usefully so? Being a jerk is weird, and bad, but it’s not helpfully so. And really, dropping out of college isn’t that weird for someone in Jobs’s financial position at the time. It’s only if you have a bunch of money that it seems so.

I recommend you take the time to think deeply on what opinions you hold that no one else seems to, what beliefs you have that constantly surprise you by their absence in others. What do you find easy that others find impossible? What’s natural to you, but somewhere between confounding and an abomination to those who notice you doing it?

Those things aren’t all good. And in many cases, you’ll need to spend your entire professional life managing their downsides, like I have. But somewhere in that list is what sets you apart, what gives you the opportunity to truly stand out. They’re the ground you need to build your future on.

Unless you just want to be normal. In that case, I don’t think I can help you.

  1. This is an amazing example of sexism. The doctors’ wards had three times the fatality rate of the midwives’ wards, but of course, the doctors were doing nothing wrong at all.

Great advisors reveal your truth, not theirs

Giving answers is easy, and usually worthless

Photo by Blake Cheek

Being an advisor to other founders is a contradictory affair: Be helpful, but do not give advice. That is, I want to help you do your best work, but I don’t think I can or should do it by telling you what to do or think.

I obviously think I have value to add or I would not sign up to help. Well, maybe it’s not obvious; our industry is rife with advisors who attach their names and little else to projects. It’s true I’m motivated to join partly by the possible long-term reward, but mostly I’m helping because I enjoy it and am learning a lot.

While running Puppet, I was constantly confronted with a classic leadership struggle: How do I simultaneously help people improve their own answers, yet get them to do what I want? There are many who will say this is a false struggle, that I could have avoided it by focusing on empowering people instead of trying to get them to do what I wanted. Pfft. The literal definition of leadership is providing direction and getting people there, and that’s doubly so for a fast-growing startup where alignment is critical to execution. I spent a decade slowly, incrementally, getting better at this, but felt my incompetence as keenly at the end as I did at the beginning.[1]

Advising companies allows me to practice the empowerment half of this skill without the other complications. Unlike when I was a CEO, I know I should not be setting direction or making decisions. My job is not to provide answers, but to help people do their own best work.

My only explicit training for this was when I was an organic chemistry lab tech in college. My primary task was repeating questions back to the students: “I don’t know, which layer do you keep?”[2] When I started dating my now-wife in college, she told me her friends were bitter that I would not give them answers. I knew my job. I was there to help them get an education, which required they did the work on their own. This has also been helpful experience for being a parent: “I don’t know, what is 12 times 9?”

Advising CEOs has similar constraints, but it’s a lot more open-ended, and has no answer sheet. In the lab, there was one right answer, it was always the same, and you could reason it out with the information at hand. Labs were also usually a day of work, maybe three days, and mistakes were pretty cheap, in the grand scheme of things. I don’t expect one of those students to track me down later in life and lay at my feet all of their struggles or successes. Most importantly, we were studying an objective space that I did actually know more about. When push came to shove, I knew the answers, and I could reason out anything that wasn’t obvious.

Helping CEOs is considerably harder. I’m rarely asked about questions that have a single right answer. No competent CEO would bother getting advice on an easy question, or one whose answer wasn’t important. Wrap into this the fact that I can’t possibly know the company as well as the person asking me the question.[3] It’s inconceivable that I would often have answers available that the expert in the seat doesn’t.

That simplifies the challenge: Prod the questioner into getting to their own answer, no matter how much they complain. And they do sometimes get upset: I had a CEO exasperatedly demand to know what I would do, after a long session of forcing him to work through what he cared about, what he saw as the right answer. When I relented — only after he had already done all the hard work — he could see how thin and useless my answer was. By the time he’d decided what to do, he saw that what he learned from the process was at least as important as the answer, and that my just providing a solution could never have given him that.

There is still some risk. I’m by no means a master of this technique. I know I have at times presented people’s options in stark ways, which sometimes felt like no choice at all. My own predilections, such as toward a consumer-style sales model, are hard to separate from any guidance I might provide. It’s honestly just hard to know sometimes whether you’re successfully getting someone to express their own implicit belief or leading them to agree with one of yours.

It’s a skill I expect to spend the rest of my life trying to master. But it’s worth doing, and I’m enjoying the learning process.

Helping CEOs instead of running my own company provides a kind of repeatable laboratory environment. I still get to learn at the same time, though, because it’s much harder than being a lab tech.

It’s not enough to just parrot questions back. I spend my time listening closely and drawing out more information, then replaying back what I heard. Listening is a woefully underrated skill. I’ve been loving the opportunity to practice really hearing what people are saying, and trying to differentiate between the words they use, the meaning behind them, and their intent in saying it at all.

As you look for advisors, be sure you demand the same discipline from them. Don’t accept answers. They should hear you, understand your dilemma, and be able to point out where you haven’t thought completely, or clearly.

A great advisor should provide light, not direction.

  1. If this whole definition of leadership annoys or offends you, I’d ask how you differentiate between leadership and management, and also how you expect a company to align around a direction without someone picking the direction.
  2. Nearly every experiment in organic chemistry involves using liquids to separate chemicals, where part of the solution ends up in an aqueous (watery) layer, and the rest ends up in another layer, like separated oil and vinegar in salad dressing. One of those layers is now waste, and the other one has the chemical you’re working on. Don’t throw away the wrong one!
  3. This is another big difference from when I was the leader; I knew Puppet itself better than anyone, even if I could not know your specific area as well.

How to Neg a Founder

Is that a compliment, or an insult?

Photo by John Salvino

My experience growing and fundraising for Puppet was full of inspirational-sounding phrases that cut like a knife. Aggressive goals got praise for wanting to “build a real product” and “really scale this thing.” These are some of my favorites. And when I say “favorites,” what I mean is, I hate them. Deeply.

The one that I heard most often made me want to walk out of the room. I’d pitch an investor while fundraising, and he (always he) would say: “So you’re going to try to turn this into a real company, eh?” As if the thing that had been my full-time job for years was somehow not real. As if you were the arbiter of truth, not my customers. Or me.

If you want to make an entrepreneur feel small, or just really want to piss them off, try to inspire them this way. I assume most people who used it thought they were complimenting me, impressed that I was taking this big step or something. But it was a surefire way to trigger my defenses. When you diminish the work I’ve done so far, it’s hard to see you as a potential partner. I quit my full-time job five years ago, and have missed out on hundreds of thousands of dollars of earnings, but asking you for money is what shows I’m serious?

I’m convinced at least some investors did it on purpose, as a form of negging — trying to position themselves as an authority and me as someone who needed their help and wisdom. “That’s pretty cute. Why don’t you get some help from the professionals?” I’m good, thanks.

I know most people didn’t mean it that way, though. Their worldview is just so skewed that if you haven’t raised a ton of money, you’re not really trying. They can only conceive of success if it looks a specific way. You literally cannot succeed unless you do what they do, what all their friends do.

If you’re an investor, advisor, or executive, take a deep look at how you talk to founders. Are you truly complimenting them, or actually diminishing their work? Are you presenting yourself as the arbiter of success, even while you think you’re saying the other person has done so well?

If you’re a founder, know that you don’t have to take it. No one else gets to define success for you. There’s always an in-crowd, but by definition the best results come from being outside of it. Even if you decide you need their money, you don’t have to accept their framing.

The virtue of a tool

Vendor success should be about customer needs, not its own.

Photo by Philip Swinburn

I am a tool junkie. I love the effortless balance of a well-worn chef’s knife, like my hands know what to do all on their own. Heavy usage builds calluses and tunes muscles, a tool’s usefulness evidenced by scuff marks and changed infrastructure. Failure leaves blisters or even hospital visits in its wake.

A good tool proves its utility. Knives slowly shrink with sharpening, work pants thin, machines need oil. If they don’t, you’re either not maintaining your tools, or barely using them.

This wear is proof of your usage. They should be scratched. Dented. Aged. Patinas should be acquired from the shop, not factory treatments. Their calluses should pair with yours. Tools cannot be precious; precious tools just live on a shelf, then retire to your attic. You should seek that perfect middle ground, where you spend enough money that your kids can inherit them, but not so much that you are squeamish about giving them a job.

Tools only deserve the label if they help you work.

You might say I have strong feelings about them. I’m assuming this love led to my focus as a software entrepreneur on helping people work. Or maybe my experience with tools in the physical world led me to seek them in the digital world, learning to make what I could not buy.

Given my tool fetish, you’d think I’d have a solid grasp of what I mean when I use the word. Apparently, not so much. I was recently pulled up short by a simple question, asked by Jordan Hayles of the Radical Brand Lab: What do you mean by tools?

What do you mean, what do I mean? It’s a simple question, right? The above text gives one example, but I would have thought I could answer it in a bunch of reasonable ways, none of which seem terribly controversial.

But the more I explored, the less simple the question became.

I’ve been describing my goal as building power tools for people. This phrase comes from my time building houses with my dad, and ‘power tools’ just meant the things you plugged in. You know? Because they needed power? It’s a common usage, maybe the word choice here did not mean much.

Except… I’ve spent more than a decade learning product management, describing myself as a product-oriented founder, managing that function in a growing company, and attempting to teach it to others. Yet here I am ignoring both the term and the field entirely. Why am I so quickly dumping my work of the last ten years? Is it just creative branding? Cynicism about my industry?

Why not power products? That’s a motor boat of alliteration: ‘power products for people.’ Awesome, right?

Ok, maybe not.

Product management as we know it began in the consumer goods industry. You’re handed a train car full of dish soap and told to sell it. You’ve got to package it, set pricing, convince a local store to carry it, argue with them about location, move it away from competitors, all that. Every product you see in your local grocery store is loved by a product manager who fights for its shelf space, believes it is beautiful, and wants you to give it a good home.

Tide soap is one of the most commonly stolen consumer goods, but not because it’s soap. The strong brand makes it easy to resell, even allowing it to be used as a stand-in for money in drug deals. I wish I was that good at product management. For all that, it says nothing about the soap.

Product management can also be used for evil. Laser printers had toner cartridges you could just refill. Not very clean, but cheap and reliable to run once you plonked down the cash for the expensive printer. Modern inkjet printers instead use disposable cartridges. To sustain profit margins in a rapidly commoditizing industry, manufacturers started putting rules in place on the cartridges: You had to buy them from the manufacturer, they had to be replaced every year, you could not refill them, and you could not print in black and white if any color cartridge was empty.

The user was getting hurt so the vendor could make more money. People got pissed off enough that the US Supreme Court weighed in.

That’s good product management. Well, it’s evil, but you know what I mean. It’s effective. We’re talking billions-in-revenue effective. Hmm. A moral distinction begins to reveal itself.

These are examples of companies forcing their business model onto their customers. There’s no difference between the dish soap sold at retail and the one sold in bulk, yet they’re separate products, differentiated through packaging, shipping needs, and labeling. You pay much more for the retail package than the wholesale one, primarily because the business model behind them is so different.

But when I think of a tool, these complications are missing. When I use a hammer, it just has to fit my hand and smash stuff. When I pick up my drill, it works with every bit I own, regardless of the logo. The battery and charger are proprietary, but the vendor’s most visible role in my life is color choice. My yellow drill works just fine with bits from the blue or green companies. (You probably visualized specific brands just from my mentioning colors. Branding is still effective here.) It does not matter whether I bought the drill from Home Depot or inherited it from my dad; once in my hands, it just works.

I think this begins to answer the question of what a tool is.

It helps you do your job, without your worrying about the vendor’s needs.

I know that DeWalt and Makita need to make money to sell me a drill, but I don’t think about it when I’m using their tools. Even after more than two decades without one, I can comfortably recite that “my” hammer is the Estwing 22oz waffle head with a straight claw[1], but none of those details mean I need the vendor’s permission to hit a nail with it. I make a decision about the right tool, I buy it, I use it. End of story.

“Tool” is a small word. If you call something a tool, not a product, you’re saying it’s less, that it’s not as complete a solution. This can be belittling, insulting, but it does not have to be. It’s also a statement of independence. Of freedom. Of, and this is going to sound crazy, compatibility.

Products have an implicit, ongoing dependence on their vendor. If that’s me, I love it: I want you to pay me all the time, not just once. That ongoing relationship is how I afford to keep improving what I’ve built for you. This can be a great way to ensure we have a long-term, sustainable partnership. But it’s not always a healthy relationship. The more you have to deal with how I make money, the worse the experience is for you.

I think this is what I like about tools. They’re self-contained. Independent. Using them is fundamentally pragmatic, not a lifetime commitment.

That independence has downsides for me as a vendor. You don’t get any of those delicious growth-hacker buzzwords. Your product isn’t “sticky”, there’s no “moat.” Those are examples of my customers being constrained by my business model, and their absence means revenue is harder to build, to protect.

One might argue I’m better off because treating my customers with more respect makes a better business in the long term, and I’d probably agree. This kind of respectful partnership should deliver higher returns than one that traps and mistreats its customers. I think this is often the right answer, but it’s not a popular one. It’s harder to get funding, to get off the ground. I might be accused of not “wanting to build a real company,” or I might have Silicon Valley’s most dire insult hurled at me: “That’s just a lifestyle business”.

Tell that to Adobe. Or Autodesk. These are great tools companies. They are the behemoths we know today because they knuckled down and solved their customers’ problems. They worried about that, rather than how they could extract maximum revenue over time. It was a different time, but people have not changed.

I don’t think that every product is compromised when the vendor’s needs show up in the customer’s life, but I think most are. Some of it is laziness, shoring up product limitations with business model innovations, but a lot of it is strategy, recognizing the value of painting your customer into a corner.

Honestly, some of it is just survival. A lot of those inkjet printers are unaffordably cheap, but buyers care only about cost, not value. Some markets are intrinsically dysfunctional, with users and vendors slowly killing each other through bad deals and cynical behavior. But as a vendor, I get to make a choice about what markets to play in, and how to work with my customers.

I am a simple person with a simple dream: I want to build something that helps someone work. I have to make money while doing it, because that’s the nature of the job, but I’m more interested in my customers’ work than my own. I know I need a business model, a go-to-market strategy, a plan for growing and supporting my business. But my customers should not need to care about that, should they? If they like what I’m building, they should be able to buy it, and use it. And tell all their friends how great it is. They should not wake up one day to find they’ve accidentally gotten married to me.

I just want to build tools. And I’m proud of it.

  1. We told with great pleasure the (most likely apocryphal) story that this hammer was illegal in Florida because the metal haft could cut your thumb off.

The power of better tools

There is a solution to wage and productivity stagnation. Just don’t call it automation

Image courtesy of Washington Department of Transportation

I don’t know what the rest of the world thinks when they use the phrase ‘power tool’, but for me it’s visceral, literal. My experiences using them and watching them transform my family’s work permeated my time building Puppet. These power tools aren’t little plugins to expensive frameworks, they’re large capital investments that dramatically change your job.

I grew up building houses with my dad. The worst task he gave me was trying to paint a set of louvre doors for a closet while in high school; I had to flip the doors over every 90 seconds to catch drips getting through the slats. After three days of misery, my father relented and rented a paint sprayer, with which we finished the job the same day, at a much higher quality.

Around the same time, my dad would rent a pneumatic nailer for big framing jobs. By the time I finished college a few years later, that critical tool went from borrowed to owned and traveled everywhere with him. Initially rented only for large jobs, nail guns are now owned in multiples by most contractors, covering framing, trim, and every other use case, and the air compressor needed to power them is as important as electricity.

It might not be obvious, but both of these are examples of automation. You replaced a very manual process — applying paint, or nailing things together — with a machine. If this were a factory, these days you’d call those machines robots, but because it’s a construction site, we just call them tools.

And these tools were expensive. Even with how much faster we finished that painting job, I expect it cost more to rent the sprayer than to finish the work manually, because of how little he was paying me. (This does ignore the soft costs of listening to me complain, which were likely high.) Even today paint sprayers and nail guns are often rented rather than purchased, because good ones cost a lot of money and aren’t needed all the time.

It’s no surprise that discussions of tools and productivity are easier to understand from my experience as a carpenter than as a sysadmin. There’s plenty of room for arguments about what is or is not a software power tool, but when it costs more than a week’s wages, it trails a bright orange cord everywhere it goes, and it can nail your hand to the wall while you’re standing at the top of a ladder[1]? It’s a power tool.

There’s a common story about what robots and automation do to people like my dad (and both of my brothers, who followed in his footsteps): It steals their jobs and ruins their lives.

What utter poppycock.

If you think of your job as driving metal spikes into wood, then a nail gun is a mortal threat. But if that is all the value you add, automation was never your biggest danger. My dad never sold his ability to join raw materials together quickly; he sold homes, he sold the opportunity to enjoy your house and family more. How did these new power tools affect that?

They were awesome. Painting and nailing are classic examples of menial, low-value work, and yet we spent most of our time on them. All of the differentiation we offered to our customers was packed into a narrow slice of work, because implementation took so much time and money. As we were able to bring more powerful tools to bear, the menial work shrank and larger portions of our time could be spent on design work, customer interaction, and tuning our customers’ homes.

Interestingly, my father’s next career step was even more pointedly about experiences enabled by tooling. He took a job with a state hospital in Tennessee, fabricating custom furniture for severely disabled patients. Suddenly he was using industrial sewing machines for upholstery, and partnering with medical professionals to design multiple beds for each patient, enabling them to be happier and more comfortable (and also avoid bed sores, thus saving hundreds of thousands of dollars per patient). Given the tragically minimal budget allocation for this kind of work, every dollar saved through automation and tooling directly delivered health and happiness to his patients.

It’s no wonder I see the value in power tools, that I am more conscious of the benefit they can deliver than the loss of low-value menial work.

I had a similar experience as I was building Puppet. I would meet executives and salespeople (I don’t know why it was always them) who would say, “Oh, automation? Great, you can fire sysadmins!” No. Beyond the obvious reality that I was selling directly to my users, who would never buy on the promise to fire their coworkers, that was just not why we were valuable.

Puppet gave people a choice: lower your costs while keeping the current service quality, or keep your costs flat while providing a much better service. “Wait, making things better is an option? I didn’t know that!” Most companies were aware that their IT sucked, but they only knew how to measure and manage cost, so that’s what they did. Once you believed in the power to make things better, power tools turned out to be great investments for both the user and the buyer.

By letting people spend more time on the parts of their work they enjoyed, the work that makes them special, we also delivered higher quality experiences for their customers and constituents. “Spend less time firefighting and doing menial work, and more time shipping great software.” If the heart of your skillset is clicking buttons or responding to outages, Puppet might have been a threat to you, but our users knew where their real value was. We helped them spend more time there and less time on the boring, low value stuff. The sysadmins hated the work, the customers hated to need it, and the executives hated paying for it. Great, done, don’t worry about it.

When you look around the software market, though, power tools are out of style. There are big data companies building for the non-existent average user, minimalist companies building solutions that do little for almost everyone, and there are power tool companies of yesteryear still hanging around. There just aren’t that many modern software companies building large, clunky, expensive tools that just might cut your hand off if you’re not careful.

That’s partially why productivity has stagnated[2]. The world has not changed that much — some of the greatest improvements to productivity come from making large capital investments in tooling for your workers — but how we spend our money has. People balk at a $5k computer, when the Mac IIci would cost more than $13k in today’s dollars just for the hardware, yet was a powerhouse in desktop publishing. This is to say nothing of how the mobile app stores have driven down what people are willing to spend on software.

Yes, Adobe’s software is expensive, but it’s that price because it delivers so much value. If it didn’t, no one would buy it. Every large market should be so lucky as to have the collection of power tools that graphic designers get. It sounds crazy, but we’re suffering from not enough expensive software. Instead of building the most powerful software possible and finding customers who see its value, companies are building the simplest thing they can and trying to get everyone to use it.

There are bright spots in the industry, like Airtable and Superhuman. I’m hoping they help to shift momentum back to automating away the tedious work and enabling focus on what humans excel at.

More powerful tools improve your life, but they also make you happier even if you can’t buy them. They tantalize you, promising you great returns, if only you can come up with the cash. And they’re maybe just a little bit scary, warning you that buying them is not enough. You must master them.

  1. A friend of ours managed to do this when working alone in the time before cell phones.
  2. Yes, I might be being simplistic to make a point.

Why We Hate Working for Big Companies

Modern capitalism raises the flag of the free market while pitting centrally planned organizations against each other

It’s quite a journey from being born on a commune to raising more than $87m in funding at a software company. This journey forced me to wrestle with existential questions about my true beliefs, and how they intersected with my life as an entrepreneur. One’s work is rarely a pure reflection of ideology, but companies need a clear and authentic strategy, which requires a tight alignment between company operations and the founder’s philosophy. I have discovered more about the differences between what I believe and the best ways to grow a corporation while studying economics — that is, how money is made and exchanged — than in any other area.

A worldwide conflict between communism and capitalism defined the latter half of the twentieth century. The United States’ ideological battle was the central drama of my childhood, and it was with a combination of glee, pride, and “told you so!” that my fellow Americans watched the wall fall in Berlin, and the USSR dissolve shortly thereafter. I expect few would deny that the US is the standard bearer for capitalism.

Yet, there’s a flaw at the heart of this claim. While the United States operates as a free market economy, the key agent within modern capitalism — the corporation — works more like an authoritarian state. Given how much of our world is built around corporations, this truth and its impacts are critical.

I grew up apart from America’s passion for capitalism. In the era of Reagan, I was living on a commune. My parents did not earn money for their labor, and we didn’t have personal property. My family left the Farm when I was 8, and as I matured, my ideological roots were in conflict with the US’s nonstop pro-capitalism message. As I joined the workforce and eventually started my own company, I found myself attached to neither the communal roots of my childhood nor the Wolf of Wall Street world I moved into. My convictions grew slowly, as I encountered problems in the course of scaling a company.

The first real conflict came when it was time to hire managers. I founded a company primarily because I did not thrive as someone else’s employee, so what led me to think others would? More importantly, anyone who has ever operated at the front line is aware of the severe costs imposed by the separation between the people who do the work and the people who make the decisions in hierarchies. Hiring managers was just going to make the company do worse, not better, right? Right?

I expect three of you are gleefully shouting, “Yay, holacracy!” right now, while the rest are confused and either offended or think I’m an idiot. I did consider a manager-less world, but a little research provided only examples of disaster, because the only available options just replace an explicit power structure with an implicit one. In other words, it’s still hierarchical with the founder on top, but now decision making is opaque and the system is easy to exploit because of the lack of controls (which looks surprisingly like the cult/commune I grew up in).

Those who are confused or offended by the idea that managers make performance worse would be informed by a deep dive into economics. One of the core principles of the free market is that central planning committees can never be as efficient or as effective as the people doing the work. By definition a free market economy lacks a decision-making hierarchy; the ‘free’ means every agent (individual or corporation) can decide for themselves, without needing permission from a manager above.

While there are many aspects of modern American capitalism I reject, this one I wholeheartedly support[1]. The downsides of a strong central executive were taught to me early.

Like many other communes, the one I grew up on routinely failed to feed its people — my parents speak with horror of the ‘wheat berry winter’, when we lived on little else. While his people were short on food, the founder of the Farm was off touring Europe as the 3rd drummer in a band, “bringing our message to the world”.

Thankfully none of us starved to death, but the failing was similar to what most communist countries experienced: The central organization could not feed everyone. For years, I assumed this was just incompetence, whether at the scale of the Farm or China. The truth was far more structural. Millions starved during the Great Leap Forward because the central organization was trying something impossible: Managing the productive output of an entire country. The Planet Money podcast tells a great story of how this central planning was walked back in China, but the general point here is that these communist countries did not just nationalize the means of production, they tried to centrally control all of it from within a small group.[2]

When people talk about communist countries not being a free market, this is what they mean: They tell the farms what crops to produce and in what quantity, rather than letting them decide for themselves. China even went so far as to dictate what hours a farmer should start and stop working, and then directed managers to ring a bell for transition times to control every little group of farmers. Anyone who’s ever had to punch a clock into a rigid, dysfunctional hierarchy is likely getting painful flashbacks about now.

It should be immediately obvious why this fails miserably: The distance between the central planning committee and the farmer is so great that good decisions are nearly impossible. Critical feedback rarely makes it from the edge, where the farmers are working, to the central planning committee in time to affect decisions, and those decisions rarely make it back to the edge in time to be useful. The podcast linked above also points out how unmotivated the farmers were under this regime, cutting productivity even further. Those who have studied lean manufacturing, agile development, and DevOps are likely seeing parallels here.

The result was catastrophe. When a corporation is painfully inefficient it loses money and might have to do layoffs, but when a country fails at growing food, its people starve to death. I don’t mean to imply that central planning was the only cause of famine under communist rule — there were political operations that led to mass starvation, just like in the West — but learning more about these helped crystallize what I do truly prefer about capitalist models. It also converted the phrase ‘the free market’ from a catchy slogan into something meaningful to me.[3]

The most important feature of free market economies is that each person within them is able to make independent decisions in their own best interests[4]. If you’re a farmer, you can decide what to grow, how much to grow, and when to work to develop your crop. Heck, you can even choose not to be a farmer any more. Success is merely dependent on your finding a buyer for your work at a price you can tolerate. Any given year might not be perfect, but your decision making gets better over time as you learn to respond to customer demand.

This pattern is easy to understand in any system where the people doing the work make the decisions. If you’re a jeweler, you can decide what to make, how much to sell it for, and what to spend your time on. Same if you run a small restaurant, lead local tours, or are a one-person shop doing house remodeling. It’s a free market, where you can charge what the market will bear, and you can quickly and efficiently respond to its whims, ensuring that you are getting the best use of your time.

This was a powerful organizing principle for a long time. The history of human commerce developed largely this way: One person, or as many people as could fit in one shop, would turn labor into a product, then find a buyer for it. Most large-scale efforts were organized by the state of the time: Monarchs and the landed gentry, who were the only ones capable of marshaling enough resources to build palaces, roads, and other large construction projects.

This began to change in the 17th century when corporations like the Dutch East India Company were able to deliver massive windfalls to investors by pooling money and using it to extract resources from colonies. There was a step change in the 19th century, as corporations went from generating wealth to building and owning infrastructure. It’s one thing to outfit a single ship for a year-long voyage, yet another to maintain railroad schedules across the United Kingdom, or run a telegraph network around the whole US. These aren’t just short-term money-making exercises, they’re long-term commitments with big capital outlays and large returns over years and years.

We still live in a free market economy, but it’s not one Adam Smith would recognize. Instead of individual or small operators, ours is composed almost entirely of corporations. Really big corporations. And these companies, they use the same kind of central planning that we so despise in communist systems. I know. I’ve done it.

By the time my company got near 500 people, we had a multi-week planning process, where the leadership (i.e., me and my lieutenants) set out top-level goals, built a top-down plan to accomplish them, then drew information from the front line to see where it needed change. We called this a bottom-up plan, but it was only bottom-up from the perspective of numbers — how much money we’d have, what our costs were, etc. — rather than from the bottom of the organization. We could see no way to have a system where the people doing the work built a plan for the organization. Even thinking about it now, my reaction is, “How would they know what my goals are?”

That’s the kind of question you can only ask in an authoritarian state, not in a free market economy. My goals became my company’s goals, and the only real way to ensure people worked toward them was providing a plan. You might argue that a corporation should focus on shareholder value, but that doesn’t help make decisions about what the company should actually do.

Great leaders find a way to listen to everyone in the company, but in the end, leadership is about making decisions. That’s essentially the definition of the word. And we all know leaders who did not bother to listen, or just did not need to in order to be great; today’s most vaunted tech leader, Steve Jobs, was famously disrespectful of the opinions of others, yet made a lot of world-changing decisions (not all for the better).

This is exactly why working in a big corporation is so stifling. If you’re in a small company, the executives are close enough to the front line that it’s more like working in a tribe, but in a big company, the leadership is so removed from those who do the work that executive teams operate like the politburo we so decry in communist countries. Certainly the bureaucracies are no more enjoyable or forgiving.

I find it both ironic and painful that my inability to work for someone else resulted in my creating a company that involved a lot of smart, capable people working for someone else.

I wish I had a solution. If this were an easy problem, its solution would already be pervasive, because the benefits are massive. Just in terms of efficiency, we’ve seen how much better the free market is than planned economies, but it also has a hugely positive impact on quality of life. People are happier when they’re in control.

I know the solution is not more freelancing and contract work, which America’s corporations are addicted to. That’s the worst of both worlds: The exploitative nature of capitalism with the inefficient bureaucracies of communism. Transactions on the free market work because they’re good for both sides, but most people only accept part-time contract relationships today when they have no other real choices.

Holacracy certainly isn’t the answer. It’s fundamentally flawed because of its implicit power structure — Tony Hsieh still runs Zappos, even if he does not use a central planning committee to do it — but the biggest problem is it makes no mention of economics. Without a clear system for scoring the transactions (i.e., money) it’s impossible to build a free market.

This problem of how to handle economics within a non-hierarchical company might lead some to think of using blockchain tokens as an internal currency. This is impossible today, beyond the fact that the world of blockchain is mostly about fraud and black market sales. The biggest problem is that we have no idea how to value most of the work people do. I mean, we might know what a developer should get paid for a year’s work, but how much is that work worth? The majority of the work done in modern corporations is incredibly hard to value, which is partially why companies are so inefficient and make so many bad decisions.

That brings up an even bigger problem — companies today hire workers to make money from their labor. In other words, they generate profit because they pay their employees less than they’re worth. If everyone could trade their labor for exactly the amount of money it was worth, the corporations that employ them would have a much harder time making money. Instead, in modern corporations the shareholders and the executive team — again, the central planning committee we so despise — make the majority of the money, while the front line does all the work and makes very little. This is true even at the big tech firms; software developers might be well paid relative to hotel workers, but they’re paid a pittance compared to the founders and executives. This might speak to why we have no solution yet — free market corporations would tend to reduce concentrations of wealth, which would be terribly disruptive to the current system.

Like I said, I don’t have a solution. But at least now I know what makes the current system so painful, and it gives me some hope that we actually can come up with a better answer. I know I’ll be working harder in the future to manage the downsides of what we have today.

  1. Although I might stress the “well regulated” part more than most modern economists.
  2. Of course, capitalism is just as capable of killing its citizens, whether through starvation or lack of health care.
  3. Note that I’m not taking the capitalist side of the cold war here; while Americans were decrying the oppression of the Soviets, we were actively clawing back progress on civil rights and knocking over democratically elected governments. This article is about principles, which political regimes rarely show a great track record in following.
  4. But not so independent that you should be as pathological as Ayn Rand.

Great design is ruining software

The arrival of the smartphone has convinced the world of the value of great software design, but it’s not all good news

The smartphone has reached more people and delivered more value faster than any technology ever seen. Much of the world has had to adapt to this arrival, but software design suffered the greatest reckoning. As the smartphone ascended, developers finally adopted reasonable design principles, realizing that they could not pack every feature ever seen into the smartphone experience. This recognition of the value of design — and especially, minimal design — is a good thing. Mostly.

I could not be happier that the industry finally accepts that there are principles of design, and there is a practice and discipline behind building great software. It’s great that we’re seeing more focused software that does little, but does it very well, rather than the previous age of the GUI when software attempted to own large parts of our lives by doing anything and everything. For a long time, Microsoft Word was used by nearly everyone who had a computer, and their strategy was to ensure no one ever had a reason to choose something else by building every feature anyone might ever need; their toolbar was the canonical example of never saying no.

The smartphone changed all that. Those rows of icons would fill the screen on a phone and leave no room for typing, and of course, no one would use them anyway because of how different the usage patterns are. As people realized they could no longer just throw in the kitchen sink, they began hiring (and listening to!) actual designers, and those designers have been steeped in the culture of Dieter Rams and the minimalism of the Bauhaus movement, which is awesome. Mostly.

Unfortunately, the phone caused everyone to focus on the final design principle of Dieter Rams (“Good design is as little design as possible”), without apparently remembering the nine that came before it, or why they were earlier in his list. I get it; the design constraints in a phone are intense, and it might not be a good idea to minimize everything, but it sure is easy.

The consequence of this mobile brutalism is a new movement building simpleton tools: Software that anyone can use, but no one can become an expert in.

Trello is a great example. I adore Trello. I think it’s great software, and it’s clearly a success by any measure. However, for all that I’ve relied on Trello daily for years, I feel no more an expert than I did just after starting to use it. It’s not because I haven’t tried; it’s because there’s no depth. You can pretty much plumb the product’s depths in a couple of days.

That’s fantastic for getting new users up to speed quickly, but deeply frustrating after a couple of weeks. Or months. Or years. Compare that with Vim, which I still use for all of my code editing, yet it’s so complicated that most people don’t even know how to quit it, much less use it. I’m not going to claim its lack of user friendliness is a feature, but I will defend to the death that its complexity is.

Apple’s Notes is the ultimate expression of this trend in text editor form. It’s a fine text editor. I know some people have written huge, impressive programs in similarly simplistic editors like Notepad on Windows. But I personally could not imagine giving up keyboard navigation, selection, text munging, and everything else I do. The fact that complicated work can be done with simplistic tools speaks to the value of having them, but in no way invalidates the need for alternatives. Yet, on current trends, no one will even try to build the software I love, because they can’t imagine two billion people using it on a smartphone.

I think it’s fair to say that that’s an unfair standard, and even a damaging one.

I miss the rogue-esque exploration that tool mastery entails. It’s not that I want tools to be hard; I want them to be deep. I want to never run out of ways to invest in my tools. I don’t want to have to swap software to get upgrades; I want to upgrade my understanding instead.

But I look around my computer, and everything on it was designed for the “average” user. I was not average as a CEO with 40+ hours of meetings a week while receiving more than 200 emails a day, nor am I average now as someone who spends more time writing than in meetings. There’s no such thing as an average user, so attempting to build for one just makes software that works equally poorly for everyone.

It is a rookie mistake to conflate the basic user who will never plumb the depths of their tools with the expert user who will learn every nook and cranny of your software. It is a mistake to treat the person who sometimes has to solve a problem the same as a person who spends 80% of their time working on that problem.

I don’t want to be an expert in all of my tools — for all that I take thousands of photos a year, I don’t think I’m up for switching to Adobe Lightroom — but for those tools that I spend the most time in, that most differentiate me, I want the opportunity for true expertise. And I’d happily pay for it.

Back in the days when computer screens were tiny, there were plenty of stats that showed that paying for an extra screen would often give people a 10% or more boost in productivity. I know it did that for me. As a business owner, it was trivial to justify that expense. Monitors cost a lot less than 10% of a person’s salary, and don’t need to be replaced every year. Heck, the whole point of the automation company I built was to allow people to focus their efforts on the most valuable work they could do.

Yet, when it comes to software being built and purchased today, to the tools we use on a daily basis, somehow our software ecosystem is failing us. There is no calendar I can buy that makes me 10% better, no email client available that I can spend five years getting better at.

It’s great that people are finally making software that everyone can use, but that’s no excuse to stop making software for specialists, for experts, for people who could get the most advantage from that extra 10%.

Please. Go build it. I know I’ll buy it.

Where does your work live?

Most of our software is confused about what job we’ve hired it for

I’ve really enjoyed playing Zelda: Breath of the Wild, but my life has been changed more by one of its reviews than by the game itself. The review had a unique view on what made the game so great. It contrasted Zelda to other games — Destiny, for example — saying that while others tended to distribute gameplay across multiple areas (e.g., in Destiny, the radar is a critical part of the game), Zelda really focuses the game into the main screen where you walk, glide, ride, and fight.

The review (which I unfortunately cannot find, because of the quantity of posts online that all use similar words) called this “where the game lives”. I love what this phrase evokes. I absolutely loved the game Borderlands, but I was deeply frightened of ever finding out how much time I spent at its store screen, because item collection and management was such an important part of the game. A lot of its fun was specifically from the collection, rather than the playing, but that meant a large chunk of the game lived in the store, as opposed to out in the world.

Most of our software could use a similar dissection.

Like Destiny and Borderlands (which are both great, and quite similar), the tools we use show a surprising distance between what they help us do and what we’ve hired them for. If I may be permitted to steal from this review, this distance is a sign that our software is confused about where our work lives.

To pick a counter-example, I’m writing this post in Ulysses. People who choose this software laud its simplicity, which makes it easy to focus. What they really mean is, all you can do with it is write. There’s almost no formatting, very little organization, very little anything but writing. The work lives in the writing. (My first draft was written on an iPad, which further simplifies that focus.)

Contrast that with any task or project management tool. My wife and I are in the middle of planning a bunch of camping, and we’re using Trello to organize many of the options. What is Trello’s opinion about where the work lives?

Last time I looked, my wife had three browser windows open, each with about fifteen tabs. She’s also working in RoadTrippers (Pro, natch). To get this work into Trello is a process of copying, pasting, writing copy about why you pasted it, and then using Trello to file it so you can find and manage it later.

In this operation, where does the work live? It’s scattered across maps, calendars, browsers, and applications like RoadTrippers. Does Trello know that? Does it agree? How does its opinion of where the work lives affect its utility? Brief introspection leads us to conclude Trello has no idea where the work lives, and the humans using it are entirely responsible for connecting the two.

Here’s a simple exercise for anyone using a task tracking app: Envision yourself going into that app and just marking everything done, even though you obviously haven’t done the work. It hurts to even consider, doesn’t it? Your brain has absorbed that these tasks are representations of work, and it’s your job to match the representation to the work, because you know the tool won’t do it for you. When you mark something done, of course nothing goes out and does the work; you’re just lying to your software about the state of the world. And it has no idea! This disconnect is what leads to an allergic response to the idea of marking work done in software that is not yet done in the real world.

I’d like to say that Trello was just a bad example, but I think all task tools share this confusion. Bug trackers and project management tools are specialized examples of this, and they obviously have no idea where the work lives. If I’m writing code, all of the work is done in my text editor, in files on disk, and maybe in my testing tools to ensure the work is done and done right. I then go somewhere entirely different to mark the work done. Why? Shouldn’t GitHub know it already? Why do I have to explain it? The answer is because these trackers think tracking is the work, when of course, the work is the work.
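To be fair, GitHub gestures at the answer already: a merged pull request whose description says “Fixes #42” will close issue #42 automatically. Push that idea further and the tracker could learn “done” from the code instead of asking a human. Here is a minimal sketch against GitHub’s search API; the repository name, the token, and the reliance on the “Fixes #N” convention are all assumptions for illustration, not anyone’s shipping feature:

```python
# Sketch: a tracker that learns "done" from the code instead of a human.
# Assumes GitHub's REST search API and the "Fixes #N" convention; the
# repository and token below are placeholders.
import requests

REPO = "example-org/example-repo"  # hypothetical repository
TOKEN = "ghp_your_token_here"      # hypothetical personal access token

def issue_is_actually_done(issue_number: int) -> bool:
    """Treat an issue as done when a merged pull request claims to fix it."""
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f'repo:{REPO} type:pr is:merged "fixes #{issue_number}"'},
        headers={"Authorization": f"token {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()["total_count"] > 0
```

A tracker built around that kind of lookup would never need to be told the work is finished; it would notice.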

It’s no better in personal tools. I just started using Things 3 for my own tasks, nearly all of which end up being expressed in email or calendars, yet Things 3 has no conception of either. It has no idea where my work lives, and expects me to put out all of the effort necessary to connect them.

Speaking of email and calendars, they have their own role in this conversation.

Email is interesting. Everyone hates it, yet that animosity is a product of its utility and criticality: it’s so important that we all use it constantly. In other words, people hate email because it works so well. But when you’re doing email, what work are you actually doing?

I’m not sure I know. You’re communicating. But usually, you’re communicating about some other kind of work, like a document, a meeting, or some kind of activity that takes place outside of the inbox. A well designed application will remove the need for communication via email — Google Docs is a great example of this. Its sharing and commenting features have allowed many discussions to move from email to where the work is, in the document itself; their addition of suggestions has doubled down on focusing on the work, rather than talking about the work. (Note that this is completely different from Slack, which advertises that it gets rid of email, by which it means it moves the conversation, not that it does a better job of bringing the work into the software.)

Of course, how do you have Google Docs tell you someone commented on your document? Email. 🙂

What about calendars? Why do calendars exist? As a tool, where does their work live?

I’m thankful I once had to explain my position on this to a friend; otherwise I’d assume it was easy to understand. It’s so counter to how people work today that a relatively obvious truth becomes impossibly counter-intuitive: calendars are about how I spend my time.

When using a calendar, the work is what you actually do. You, a person, out in the world. That’s what the calendar is about. Its job is to ensure you do the right things at the right time, with the right people, in the right place. It’s about doing, not documenting, managing, or notifying. You can put something in a calendar and not do it, or do work that’s not in the calendar; any of us would say, obviously, that it’s what you do that matters, not what the calendar says. Merely creating an event has no effect, and thus no value; it only matters if it then affects your behavior. The work lives in what you do. But does your calendar make even the slightest attempt to directly manage how you spend your time? What would that even look like?

To pick a small example, my calendar apps seem to not care what city I’m currently in, or where I’m physically located. Isn’t this weird? The tool whose primary job is to manage where I am physically located makes no attempt to represent or take into account the core fact it is meant to control. It still dumbfounds me.

Yes, they can tell me in real time when I should leave for a meeting based on travel time (as long as travel involves driving, rather than walking down the hall to a conference room), but they can’t say, “Given that on Tuesday you’ll be in Portland, working from home, you should block out travel time to get downtown to lunch and back”. That is, they can alert me in the moment, but they can’t do their core job — reserving time to ensure I’ll be doing the right thing in the future. Because they can’t do this, I have to create those blocks myself, or else find myself choosing between skipping one appointment and being late to another. The whole point of a calendar is to manage time, but in this simple example they fail to ensure I will have space to transition my corporeal existence between physical locations. Shouldn’t that be step one, rather than an exercise left to the human?
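For what it’s worth, the core logic is not exotic. Here is a minimal sketch, assuming events that carry a location and a hypothetical travel_minutes() stand-in for a real travel-time lookup; a real calendar would also have to flag conflicts rather than silently overlapping:

```python
# Sketch: walk a day's events in order and reserve travel time whenever
# two adjacent events happen in different places. Event fields and
# travel_minutes() are illustrative stand-ins, not a real calendar API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    title: str
    start: datetime
    end: datetime
    location: str  # e.g., "home", "downtown"

def travel_minutes(origin: str, destination: str) -> int:
    """Stand-in for a real lookup (a maps API, transit schedules, etc.)."""
    return 0 if origin == destination else 30

def with_travel_blocks(events: list[Event]) -> list[Event]:
    padded: list[Event] = []
    for prev, nxt in zip(events, events[1:]):
        padded.append(prev)
        minutes = travel_minutes(prev.location, nxt.location)
        if minutes:
            padded.append(Event(
                title=f"Travel: {prev.location} to {nxt.location}",
                start=nxt.start - timedelta(minutes=minutes),
                end=nxt.start,
                location="in transit",
            ))
    padded.extend(events[-1:])  # keep the final event; handles an empty day too
    return padded
```

Even this toy version would have reserved the trip downtown before I could double-book it; everything else a calendar would need, such as knowing which city I am in that day, is just more of the same plumbing.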

I also reserve time for tasks I do alone every day, like working out and writing. I do this primarily to ensure it gets done, rather than because those times are special (although I do get a bit jittery now if I don’t write first thing in the morning). There’s no way to explain to my calendar what I’ve actually blocked that time out for, and thus no way for it to respond to whether I’ve done it or not, even though my computer knows if I’ve done my writing, and my watch knows if I’ve worked out. Wouldn’t it be great to see your calendar dynamically rearranging your day because it noticed you missed your workout?
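Closing that loop does not require magic either. A sketch, using the writing example and under loud assumptions: the calendar treats a writing block as honored if any commits landed in the writing repository during the reserved window. The repository path is a placeholder, and a watch’s workout data could feed the same check:

```python
# Sketch: check whether a reserved "writing" block was actually honored by
# looking for git commits inside the window. The repo path is hypothetical.
import subprocess
from datetime import datetime

def block_was_honored(repo_path: str, start: datetime, end: datetime) -> bool:
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         f"--since={start.isoformat()}",
         f"--until={end.isoformat()}",
         "--oneline"],
        capture_output=True, text=True, check=True,
    )
    return bool(log.stdout.strip())
```

From there, rearranging the day is ordinary scheduling work; the piece that has always been missing is this feedback loop.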

My calendar is confused about what work I’ve hired it to do, and therefore does not know it needs to look in those places.

We’re so used to the idea that our software represents the work that we seem to have lost hope that it will actually help us do it. Most of the tools we use are entirely disconnected from the work they’re supposed to help us with. Marking something done does not do it, deleting email does not indicate communication has happened, sitting at your computer while your calendar says you’re writing does not produce text. The representations are not the work, yet we forgive our tools for only dealing in representations, not actual work.

I don’t know if that reviewer was right about why Zelda: BotW is so great. I can’t even imagine what all the software I use would look like if it were built around where my work lived, rather than merely being used to model and manage it.

What I do know is that our software can and should be built to help us do the jobs we’ve hired it for. But because it is confused about why we use it, what we do every day is lower quality, less fun, and just downright confusing.

This also shows just how much opportunity there is to improve the software we use on a daily basis.

Founding Myths are Pernicious Propaganda

It’s only safe to learn from true stories

Every startup has its founding myth, the story it uses to help draw in and motivate employees, customers, and investors. In most cases, those myths were cultivated years after the company’s founding, and bear little relationship to reality. When you dig deeply into a company’s true origins, what initially looked like the product of far-sighted genius deflates into a mix of insight and smart decisions meshed with a series of serendipitous events.

This gap between myth and reality in no way diminishes the achievement of these companies and their teams, but it undermines our ability to learn the true lessons from those who made it and those who did not. If we can shift our storytelling from creation myths to capturing the collisions actually necessary to germinate greatness, we can better recognize what it will take to support greatness next time. Even better, it would let us lift up the organizations that weren’t lucky enough to win every draw, enabling our ecosystem to support neglected founders attacking neglected markets.

As I was building Puppet, I assiduously sought founder stories, trying to understand what it really took to do what they did. Over time, this turned from a quest to build “Good to Great” for startups (which was silly anyway, given how stupid that book is) into a collection of more-true founding myths.

I find those more-true myths funny, but their real goal is to puncture the story people want you to buy so you can understand what it really took. When you do that, you find that yes, people had to work hard, they had to be smart, they had to be creative. Those things are necessary, but as testified by the thousands of failed companies full of hard-working, smart, creative people, they aren’t sufficient.

When you can find true founding stories, such as Phil Knight’s excellent Shoe Dog, you gain critical insight that might help you build your own business better, but you also realize how much of building a great company is expertly riding the tide of luck and opportunity.

At first blush, there’s no problem with this, other than the ridiculous level of delusion necessary to turn serendipitous opportunism into genius founding myths that manage not to include the lucky bits.

On reflection, though, erasing the role chance plays causes systemic problems throughout the startup world.

We’ve already seen that human nature abhors a vacuum, refusing to accept that people don’t always deserve their circumstances. Luck is obviously not the only contributor to a startup’s success, because how they play the hand they’re dealt is at least as important as the hand itself. But let’s not lie to ourselves about the role of randomness in a startup’s circumstances.

When we do, it limits our ability to understand what it really takes to make a great company. This is similar to the deep structural flaws within “Good to Great”: If you just ask the great companies what they did and try to do the same, you sound smart but have only made your readers dumber. I recommend The Halo Effect for a more complete discussion on how this business-book analysis of success fails its readers. You can fatally pierce the concept by focusing on one kind of failure:

“The Delusion of the Wrong End of the Stick – successful companies may do various things but that does not mean that doing those things will make you successful”

As I was trying to build Puppet, I found this discrepancy between the stories and the realities immensely frustrating. I was uninterested in being valorized for something I felt lucky to be doing, but these myths were a screen obscuring information I knew was valuable. My goal in studying my predecessors was to improve my own chance of success, but it’s impossible to learn from propaganda. Instead of a community of founders learning from each other, with each generation better than the last, you get cargo-cult companies that manage to copy every part of a successful business except the parts that made it work.

As just one example, Google’s own data now shows that its vaunted hiring practices did not actually help, but a whole generation of startups copied their worthless brain teasers and discriminatory Ivy League requirements. For years, Google’s dominance was partially explained by their great hiring, but what happens when it turns out they weren’t actually better at it than anyone else? Nothing to see here, please move along. Let’s just start another round, this time copying Netflix and Amazon. Forgive me for thinking that Amazon’s success has more to do with taking Walmart’s extractive business strategy to the web than with their use of internal APIs.

What I found again and again when I peeled these myths back was far more genius in how people responded to their circumstances than in how those circumstances were created. Oracle is a perfect example: Larry Ellison was smart enough to realize that IBM had invented this great new database concept but wasn’t trying to sell it, so he took advantage of that to build it himself. You can bet that’s not how Oracle tells its founding story, but only once you know it can you see that Ellison’s true strength was pushing hard and fast enough to ensure that Oracle was first to market with this free idea. I don’t look up to Oracle’s founder, but you can bet I learned from that story.

Google is another good one: One of the best CS schools in the world bashes together two random guys, who then go on to invent something and get lucky enough to fail to sell it. What you can learn from Google looks pretty different once you know that. Again, it does not diminish what they did after that failure, but it turns their stories of manifest destiny into a truer telling of wandering an icy landscape looking for safety.

When they say history is written by the victors, they mean both that only the stories of the winners end up getting recorded, and that the stories themselves get morphed, corrupted, to present those winners as just and deserving. That corruption is as pervasive in the world of startups as it is in the wider world.

There is tremendous value in understanding the real stories behind the great successes. Good luck getting companies and founders to tell them.