disruption disrupted

Frank Rose is the author of The Art of Immersion and a senior fellow at Columbia University School of the Arts, where he leads an executive education seminar in digital storytelling strategy.

Illustrations by Jan Feindt

Published July 15, 2016.


In 1995, when he was a little-known assistant professor at Harvard Business School, Clayton Christensen published an article in the Harvard Business Review that would revolutionize the way we think about business. Written with his mentor, Joseph L. Bower, "Disruptive Technologies: Catching the Wave" offered an answer to an urgent question: what is wrong with corporate America?

The paper built on Christensen's research on the disk drive industry, which had seen one successful company after another fall victim to younger firms with innovative new products. But it wasn't just disk drives. Xerox. IBM. Digital Equipment Corporation. Time and again, once-proud standard bearers were being humbled, if not felled, by scrappy upstarts. Christensen's answer, which would find full expression two years later in his book The Innovator's Dilemma, was surprising. The giants weren't doing anything wrong, he wrote. They were doing it right – right for their existing customers, that is. And that was the problem, because in a time of rapidly advancing technology, what was right for customers today might soon be superseded by something that seemed at first too trifling to bother with.

Last December, 20 years after that landmark paper, Christensen published again in the HBR, this time under very different circumstances. Disruption theory had made him a superstar in the business world. And in the previous 18 months, it had made him a target as well.

At this point, Christensen has gotten disruption theory into such a tangle that even he is tripping over it.

Jill Lepore, a Harvard historian, had chewed over disruption theory at length in The New Yorker and spat it out like so much bad fish. A professor in the engineering school celebrated her takedown by publicly branding Christensen a "snake oil salesman." Bloomberg Businessweek headlined its coverage "The Innovator's New Clothes: Is Disruption a Failed Model?" Justin Fox, editorial director of the HBR, wondered in The Atlantic if disruption were even still happening – and noted in an aside the appearance of an extension for Web browsers that automatically replaces the word "disrupt" with "bullshit." Andrew King, a professor at Dartmouth's Tuck School of Business, was co-author of a paper in the MIT Sloan Management Review that examined 77 of Christensen's case studies and asked, "How Useful Is the Theory of Disruptive Innovation?" The answer, some 8,900 words later: not very.

But it may have been Christensen's response in the December HBR that was most telling. Written with Michael Raynor of Deloitte and Rory McDonald of Harvard Business School, "What Is Disruptive Innovation?" was an attempt to restate and rein in an idea that, for good or for ill, has taken on a life of its own. Among other things, Christensen and company raised the question of Uber, a company that, by any commonly accepted English-language definition of the word, is seriously disrupting the global taxi industry. But is Uber really disruptive? No, they maintained, because it fails to meet certain basic criteria of Christensen's theory, rooted in observations he made on the disk-drive industry when he was writing his doctoral dissertation a quarter-century ago. Uber would have to be considered a "sustaining" innovation – one that improves an existing product or service rather than challenging it with something that, at least at the outset, seems not as good.

When Christensen introduced the idea of sustaining innovations, they were described as working to the benefit of established companies. But if Uber is classified as a sustaining innovation, you have to wonder what, exactly, it is sustaining. Science and economics are rife with specialized terms (starting with the word "theory") that don't carry the same meaning they have in common parlance. But at this point, Christensen has gotten disruption theory into such a tangle that even he is tripping over it. This leaves a big hole for anyone looking for a viable explanation of how and why mighty corporations fall.


Scary Times

It's hard to overstate the impact Christensen has had on business thinking since The Innovator's Dilemma was published in 1997. Here was a catchily named, easily grasped theory (a little too easily grasped, perhaps) that appeared to explain a critical enigma in American business. "These are scary times for managers in big companies," Christensen and a co-author wrote in the March-April 2000 issue of the Harvard Business Review – an opening line that perfectly encapsulated disruption theory's appeal.

In an indirect, yet unsettling, way, the American corporate establishment's inability to compete echoed the muscle-bound powerlessness of the United States itself, which seemed as ill-equipped to fight insurgencies in less developed countries as Xerox was to fight Canon and its small personal copiers.

Later, as attention shifted to tech entrepreneurs and their fabulous success stories, the focus of disruption theory shifted along with it: the same theory that spelled out how big companies could lose showed how small companies could win. Either way you looked at it, Christensen offered an explanation – a solution to questions that ultimately involved not just disk drives or copiers or computers or even geopolitics, but fed on far deeper and more personal concerns. Whether it was the fear of growing old or the excitement of being young, disruption theory played to the rational and the subconscious alike.

One of the most compelling aspects of Christensen's analysis was the paradox it presented – a company at its zenith could be making the very decisions that would allow it to be undercut from below. In The Innovator's Dilemma, he cited a 1986 Businessweek article that compared Digital Equipment, which reigned supreme during the brief era of the minicomputer, to an oncoming freight train: competitors, watch out. Eight years later, the same magazine declared DEC "in need of triage." Sears, Christensen wrote, "received its accolades at exactly the time – in the mid-1960s – when it was ignoring the rise of discount retailing and home centers, the lower cost formats for marketing name-brand hard goods that ultimately stripped Sears of its core franchise."

You could argue, he pointed out, that these companies were never well managed – that they got to the top by luck alone. But his own explanation was more intriguing and ultimately more satisfying. It's not that the companies were poorly run, but "that there is something about the way decisions get made in successful organizations that sows the seeds of eventual failure." What that something might be was explained in the 200 pages that followed – and in the many books, scores of papers and case studies and countless speeches that came after that.

In the business world, it was as if Christensen had bottled lightning. The Innovator's Dilemma arrived with high praise from the likes of Michael Bloomberg and the techno-guru George Gilder. In 1999, Forbes put him on its cover with Intel's chairman, Andy Grove, under the headline, "Andy Grove's Big Thinker." After that, the accolades piled up. Eric Schmidt, executive chairman of Google's parent company, declared that Christensen's recommendations had helped make Google's success possible. In 2011, and again in 2013, he was named the world's top management thinker in a leading poll of academics and business executives. When he appeared on the cover of Forbes a second time in 2011, the magazine hailed him as "one of the most influential business theorists of the last 50 years" – a testimonial that put him in the company of Peter Drucker and few others.

As disruption theory grew in popularity, its scope widened as well. In The Innovator's Dilemma, Christensen applied the theory to consumer offerings and business products alike, but the focus was on technologies – smaller disk drives, smaller and more-personal computers, smaller and more-efficient steel mills. Even the failure of once-dominant Sears to defend itself against the likes of Kmart was cast in technological terms.

Then, the horizons started growing broader, in tandem with Christensen's ambitions. He set up a think tank, the Clayton Christensen Institute for Disruptive Innovation, and a consulting firm, Innosight. He lent his name, though apparently not much else, to the Disruptive Growth Fund, an investment vehicle that had the misfortune to be created just in time for the 2000 dot-com meltdown. In 2003, he and Raynor published a follow-up volume, The Innovator's Solution, which aimed not just to explain what was happening to disrupted companies but to show how they could counter it. He extended disruption theory to health care in the 2008 book The Innovator's Prescription, to public schools in Disrupting Class and to higher education in The Innovative University. "As I helped people to try and use the ideas, it became very clear there really isn't anything [it doesn't apply to]," he told Businessweek in 2007. "Disruption really is a business model innovation."


Grove had seen as much when they were discussing DEC in the late '90s. "He said, 'It wasn't a technology problem,'" Christensen recalled in a 2006 paper. "Digital's engineers could design a PC with their eyes shut. It was a business model problem, and that's what made the PC so difficult for DEC." This was not all that startling an insight, considering that Ken Olsen, DEC's longtime CEO, had famously dismissed the PC as a toy that would "fall flat on its face" in business. Nonetheless, it helped provide cover for disruption theory to go just about anywhere Christensen wanted to take it.

Yet Christensen endorsed only minor adjustments to his original definition. To him, disruptive innovation remained a process that starts at the low end of the market, or in new markets that don't seem lucrative enough to bother with, and only later advances to the point that it becomes a threat.

This is the pattern Sony followed again and again between 1955, when it introduced Japan's first commercially available transistor radio, and the early '80s, when a co-founder, Akio Morita, began to step back from day-to-day management. It's what enabled Toyota to post profits in the mid-2000s, Christensen reports, that exceeded those of every other auto manufacturer combined. But some innovations are anomalous; they seem disruptive, yet fail to fit the pattern. In the same 2006 paper, Christensen noted an exception in the case of Whole Foods, an upmarket company that took business away from established mass-market chains like Kroger. "Some have suggested that these are instances of high-end disruption," he wrote. He disagreed. Since disruption by definition occurs at the low end, whatever was happening at the high end had to be something else. Disruption theory was powerful, evangelistic – and rigid.

But disruption has been hard to get right, even for Christensen. A decade ago, when Apple was racking up win after win, he consulted his theory and saw failure right around the corner. The iPod, Apple's portable music player, struck him as ripe for disruption. Mobile phones didn't have as much storage and their user interfaces weren't as good, but for lots of people they would be "good enough." As it happened, not even the ROKR, the music phone Apple introduced in 2005 in partnership with Motorola, was good enough to interfere with iPod sales.

In 2007, when Apple was about to introduce the iPhone, Christensen saw it as too good for most people's needs; to him it was a sustaining technology, improving on existing phones rather than disrupting them. "The prediction of the theory would be that Apple won't succeed with the iPhone," he declared in the 2007 Businessweek interview. "It's not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited."

At the time, Apple was valued at approximately $105 billion. Five years later, the iPhone would be hailed as the world's most profitable product. And today, having sold more than one billion iPhones worldwide, Apple still derives two-thirds of its revenues from the device.

This became a problem when iPhone sales finally slowed last spring, producing the company's first quarter-over-quarter revenue decline since 2003. Apple's market cap dropped by $46 billion overnight on the news – but even then, at $540 billion, it was still the world's most valuable company.

Although Walter Isaacson reported in his biography that Steve Jobs was "deeply influenced" by The Innovator's Dilemma, this wasn't the first time Christensen had made a wrong call on Apple. In The Innovator's Solution, published as Apple was rebounding from the near-death experience it suffered in the '90s, he and Raynor wrote that it had been "relegated to niche-player status" as a result of the Mac's proprietary architecture, which put it at a disadvantage against "IBM's modular open-standard architecture."

This formulation ignores the real reasons for Apple's near-demise, which included the special appeal of the IBM brand to conservative corporate purchasers in the '80s, the board's 1985 decision to back CEO John Sculley at the expense of Jobs, overweening hubris on the part of all concerned and a long string of bad management decisions that included, in fact, opening up the Mac and letting rival manufacturers come out with cheap clones. (As for the modular architecture of the IBM PC, it ended up benefiting Microsoft, not IBM, which found itself undercut by Compaq, Dell, HP and pretty much any manufacturer that could slap together a circuit board.)

Christensen's explanation also fails the hindsight test, since Apple's comeback was based on the Mac and its once-again closed architecture. But it wasn't as egregiously wrong as the iPhone proclamation.

Christensen has explained that flub by saying he failed to realize that the iPhone was actually disrupting the laptop, not improving on existing smartphones. This is a bit like saying you didn't hit the barn door because you had your eye on the house. It also fits a pattern. In economics as in science, laws describe, theories explain. But over the past decade or so, disruption theory has failed to explain, among other things, the iPod, the iPhone, Whole Foods and Uber – four developments that have wrought havoc with established businesses. At this point it seems reasonable to wonder if disruption theory, as Christensen has formulated it, still holds, or whether it's an artifact of mid-to-late-20th-century industrial development that no longer applies in an era of ubiquitous digital media and ever-more-sophisticated consumer electronics.


When Good Enough Will No Longer Do

The "good enough" formulation, as in Christensen's assertion that pre-iPhone smartphones would be "good enough" as music players to disrupt sales of the iPod, has been critical to disruption theory from the start. Companies coming in at the low end – Japanese auto manufacturers, discount retail chains, early PC manufacturers – would gain a foothold because their offerings were "good enough" for some people. Only when those offerings improved would established corporations like GM, Sears and DEC start to feel the heat.

But over the past decade or so, that pattern has started to break down. Early smartphones, including the ill-fated ROKR, were not in fact good enough to counter the iPod. Whole Foods turned the idea of "good enough" on its head; it was actually too good, meeting a demand for high-end natural foods that had gone unnoticed by the big supermarket chains. Uber is too good in much the same way – taxis can't match the expectations of comfort and convenience it has created. Disruption has gone haywire.

One explanation for this is a rise in consumer expectations. For many of us, "good enough" will no longer do, especially in the absence of some trade-off to balance the equation. Transistor radios may have sounded tinny, but they were good enough for teenagers in the '50s because they let you listen to rock 'n' roll where your parents couldn't hear it. The first personal computers were clunky and underpowered, but they were affordable and convenient in a way that minicomputers could never be. Early Japanese auto imports were tinny, clunky and underpowered, but they were good enough for people who wanted cheap cars that didn't guzzle gas.

Today, however, such trade-offs hardly seem necessary. Standards of design and convenience have advanced to the point that people expect more, not less – and digital technology has advanced to the point that just about any startup can give it to them.

This is why the iPod was not disrupted by early music phones. It's why the iPhone, contrary to Christensen's prediction, did not "overshoot" already available smartphones from the likes of Nokia and RIM's BlackBerry. As Ben Thompson, a well-regarded tech blogger, has observed, "it is impossible for a user experience to be too good." Not only did the iPod and the iPhone have superior user interfaces; Apple employed them to browbeat two notoriously customer-unfriendly industries, music and mobile telephony respectively, into submission. Branding was another factor: Apple has been able to charge premium prices because iPhones offer a coolness factor that other smartphones lack.

For a while the iPhone seemed threatened by Google's Android mobile operating system, which was adopted by major smartphone manufacturers, including Samsung and LG. In 2014, Christensen asserted that the competition was "killing Apple," but this judgment, too, proved wrong. Samsung has risen to No. 1 in unit sales, but that hasn't stopped Apple from vacuuming up an ever-increasing share of the industry's profits – more than 90 percent at last report.

Lepore cited Christensen's Apple miscues in her 2014 New Yorker article, but that accounted for only a small portion of her overall barrage. Finding his sources "often dubious and his logic questionable," she accused him of relying on circular arguments ("If an established company doesn't disrupt, it will fail, and if it fails it must be because it didn't disrupt") and on carefully selected case studies that mainly buttress his theory in retrospect. She herself cited one case after another in which disruptees eventually failed, or supposedly disrupted companies ended up recovering (or at least surviving), or established companies tried to disrupt themselves and ended up in the toilet.

Unfortunately, many of her examples turn out to be problematic as well. As Raynor pointed out in rebuttal, Kmart's subsequent travails had no bearing on its successful disruption of the department-store business, nor should US Steel's survival be taken as evidence that Nucor, with its upstart mini-mill technology, did not actually disrupt its business.

As for Time Inc.'s hapless Pathfinder Web initiative, which Lepore offers as an example of in-house innovation run amok, on closer examination it doesn't look like the open-and-shut case of ill-advised auto-disruption she describes. Nor does Raynor's assessment, that it was a sustaining innovation undertaken "to improve the experience of the company's existing readers and the reach of its existing advertisers," bear scrutiny.

Existing readers and advertisers had little to do with it. Despite a promising start, Pathfinder was a misguided effort to fit a disruptive innovation (the Internet) into a sustaining role – help our magazines, or at least don't cannibalize them – by print executives who didn't have a clue what they were doing. If they had, they might not have found themselves on a road to nowhere.

But Pathfinder does serve as a prime example of the "innovator's dilemma" – that is, disrupt yourself or be disrupted. Time Inc., cast adrift by the corporate mother ship two years ago in a concrete canoe laden with $1.3 billion in below-investment-grade debt, has certainly been that.

Whatever the merits of Lepore's arguments, Christensen didn't help his case when he responded with an anguished wail in Businessweek. Referring repeatedly to himself in the third person ("Clayton Christensen") and to his critic as "Jill," he offered an inadvertent tipoff as to why he might be collecting detractors. The interview ends with this exchange:

You keep referring to Lepore by her first name. Do you know her?

I've never met her in my life.

King's dissection of disruption theory, which appeared a year later in MIT Sloan Management Review, provided a less writerly but arguably more rigorous examination of the record.

Recruiting a panel of experts to review 77 instances of disruption described in The Innovator's Dilemma and The Innovator's Solution, King and his co-author, Baljir Baatartogtokh, asked each expert to rate a single example according to four factors they deemed critical to disruption theory. Of the 77 examples, which ranged from Amazon to Xerox, only seven were judged to meet all four criteria. Most were determined to have been the result of one or two of the criteria plus additional factors that the theory doesn't account for – onerous pension obligations, for example.

The most recent authority to weigh in is Joshua Gans of the University of Toronto, whose new book, The Disruption Dilemma, comes with an endorsement from Christensen himself. Not surprisingly, it offers a friendlier assessment of his theory, but it also suggests that some adjustments are in order. In Gans's view, for example, organizational structure is key: companies that have different teams responsible for individual product lines tend to fare less well than those that take an integrated approach, largely because stand-alone teams lose sight of the bigger picture.

Christensen, however, has become increasingly focused on the need to bring "discipline" to disruption theory. The problem as he sees it is that people are using the word "disruption" to mean, well, disruption, when they should be using it in the narrow, technical sense of the term as he has defined it. Innovations that don't depend on being "good enough" don't fit, he told Forbes after publishing his most recent HBR paper. "We shouldn't ignore the existence of this phenomenon – it is important and notable – but we also shouldn't call it disruption."

This pretty well sums up the devolution of disruption theory over the past two decades, from breakthrough realization to squabble over semantics. In centering his most recent paper on Uber, which he contrasts with the companies that disrupted the copier market in the 1970s, Christensen unwittingly highlights the biggest problem with disruption theory today: it seems dated.

Some examples still fit the old pattern. Airbnb began with crash pads before migrating upriver to luxury homes. Streaming video players, which have been nibbling at cable and satellite providers for years, are finally delivering on the threat with the recent upgrades of Roku and Apple TV.

But 20 years on, disruption as defined by Christensen looks like something of a relic. "My research on disruptive innovation explains only how the world works in a very specific set of circumstances," Christensen explained to the Boston Globe last fall. Too specific, unfortunately, to take in the big, wide world that's out there now. What you really want to hear him say is, "Toto, I have a feeling we're not in disk drives anymore."