How Progress Ends:
Technology, Innovation, and the Fate of Nations
By Carl Benedikt Frey

*Princeton University Press (2025).
All rights reserved.
Carl Benedikt Frey, a Swedish-German economist-technologist who teaches at Oxford, is just 41 – young for a superstar in the dismal profession. But he caught the zeitgeist with “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, an article he co-authored (with Michael Osborne) back in 2013. That research represented the most serious (and disquieting) effort to date to quantify the likely impact of automation; the authors estimated that 47 percent of U.S. jobs were vulnerable. And Frey has been in the thick of the argument about how to adapt to artificial intelligence ever since. His new book, How Progress Ends: Technology, Innovation, and the Fate of Nations,* which is excerpted here, is about an entirely – well, mostly – different subject, but no less ambitious in its reach. Here Frey engages in the ever-more-relevant debate about whether free markets or top-down planning is better suited to create and sustain economic growth. And his answer, delivered in delightfully accessible prose, is as surprising as it is convincing.
— Peter Passell
Published April 30, 2026
Arguments about technological progress and economic development often fall along a familiar divide, an intellectual echo of a Cold War now more than 30 years behind us.
They either exalt decentralized systems, in which small firms experiment and proliferate with little interference from the government, or they extol centralized bureaucratic systems in which strong states direct the economy through rational industrial policy. Such arguments are bound to come up short, for they mistakenly assume that the optimal form of economic governance is invariant across time and place. Instead, I argue, these two ideal types each have their own ecological niche – that is, they are each well suited to different environments.
Stated simply, centralized bureaucratic management is most advantageous for exploiting low-hanging technological fruit and spearheading catch-up, while decentralized systems are better for exploring new technological trajectories – which is the only way to make progress once the technological frontier is reached.
A system that was optimal in one stage of development will almost inevitably prove ill-suited for what lies ahead. When this happens, it must either adapt or perish. This also means that the primary sources of stagnation that threaten progress will look quite different, depending on the form of governance and the prevailing stage of development.
Prohibiting Innovation
On January 17, 1920, the Volstead Act went into force, marking the start of national prohibition in America. In historical memory, the movement to prohibit alcohol is most strongly associated with women’s groups and Protestant denominations, but its most ardent proponents included another, perhaps more surprising sect: economists.
Among the most vocal was Irving Fisher, a prominent scholar and former president of the American Economic Association. When Fisher organized a roundtable discussion on the subject during the AEA Annual Meetings of 1927, he was unable to find a single economist to argue the case against it. All present agreed that alcohol was harmful, not just to health but to economic efficiency.
Much like during the Covid-19 pandemic, the poor, crammed into small apartments, endured the greatest hardships. Indeed, for the wealthy, the Roaring Twenties were perhaps the greatest, gaudiest spree in American history, as famously depicted in F. Scott Fitzgerald’s The Great Gatsby. Even before prohibition went into force, the upper classes commonly stockpiled alcohol for legal home consumption. President Woodrow Wilson, for instance, moved his stash to his private Washington residence at the end of his term, while his successor, Warren Harding, took his own large supply to the White House.
The disappearance of the saloon, meanwhile, disrupted ordinary people’s daily lives and social networks. And by bringing the habit of drinking back into the shadows, prohibition would have far-reaching consequences for the advancement of industry.
The saloon had a long social tradition in America. In the words of one contemporary, it was “the rooster-crow of the spirit of democracy,” rivaled only by the church as the place where the working class met after work. As is evident from saloon names like “Mechanics’ Exchange” and “Stonecutters’ Exchange,” many establishments catered to specific occupations or industries. Skilled workers and craftsmen went there not just to drink but to socialize and exchange ideas. And because these workers were responsible for developing the most inventive contrivances of the era, it should be no surprise that innovation took a hit as saloons across the country were forcibly shut down.
Taking advantage of the fact that U.S. states introduced prohibition at different times, economist Michael Andrews found that such bans were followed by an 18 percent decline in patenting. Not only did collaborative innovation suffer, but patenting among solo inventors plummeted as they ceased to socialize at the saloon and were exposed to fewer ideas. Patenting rebounded to its prior level only half a decade later, once people had rebuilt their social networks.
Although decisions are always easier to judge with the benefit of hindsight, economists of the Roaring Twenties should have anticipated the costs of this antisocial policy. Principles of Economics (1890), the dominant economic textbook of the time, had been in circulation for nearly three decades. In it, Alfred Marshall – one of the founding fathers of neoclassical economics and perhaps the most influential economist of his generation – famously wrote that “each man profits by the ideas of his neighbors: he is stimulated by contact with those who are interested in his own pursuit to make new experiments.”
Marshall’s writings, of course, focused on the social networks of industrial districts. But drinking establishments performed the same function during the Enlightenment, which preceded the rise of modern industry, though their importance faded in Europe as coffee houses sprang up across the continent.

Historian Brian Cowan has shown in some detail how this bitter Turkish beverage came with a “culture of curiosity” that accompanied a growing and increasingly interconnected commercial world. In Britain it first took root in academic circles. Oxford, with its vibrant experimental scientific community and unique strength in orientalist scholarship, provided particularly fertile soil for coffee consumption, although London, where a national virtuoso community began investigating this peculiar new beverage, was not far behind. On the other side of the Atlantic, however, neither tea houses nor coffee houses became as popular. The same role was filled by taverns and saloons.

The importance of social networks for innovation is no mystery. As Montesquieu wrote in his 1748 classic, The Spirit of the Laws, “commerce cures destructive prejudices.” Contact with other people turns the unfamiliar into the familiar, so that regular trading relationships make the prejudices that come with isolation disappear along the way. Higher trust, in turn, reduces what economists call transaction costs and allows societies to scale up beyond the family or even the nation.

Much early learning happens within the family. But over time, parents worldwide have outsourced more of the socialization process to schools and other institutions.
Google co-founders Sergey Brin and Larry Page, for instance, both had the fortune of having parents working in science and technology. Yet as they grew up on different sides of the Iron Curtain – Brin in Moscow, Page in the Midwest – it was Stanford that brought them together in 1996. While vertical learning, by which skills and knowledge are passed down within the family over generations, dominated knowledge transmission for much of human history, horizontal learning now reigns supreme.
In one of the most cited studies in sociology, published in 1973, Stanford’s Mark Granovetter demonstrated that a network with a plethora of weak ties generates a greater circulation of information than a network with a few strong ones. From an innovation perspective, this is paramount because the possibilities for new discoveries expand when populations become more interconnected. In a world where wealth is derived from ideas rather than land and objects, one of our most important resources is our social network, which acts as our “collective brain.” And when networked people are free to explore, they test more technological pathways.
As the case of Prohibition illustrates, innovation happens in serendipitous ways – which is probably why Irving Fisher and his peers ignored it. They were more concerned that Prohibition would reduce what economists call “static efficiency.” Benjamin Franklin’s dictum that “time is money” captures the essence of the concept. Static efficiency is achieved when machinery and labor are put to optimal use, so that as much as possible is produced at a given point in time. Under this logic, time spent at the local saloon was wasted – even if those barroom conversations sparked ideas that would increase productivity in the future.
The AEA roundtable participants simply reasoned that because alcoholism makes factory discipline harder to maintain, the economic consequences of failing to act would be dire. Yet the real engine of economic growth, which distinguishes modern societies from their earliest ancestors, is dynamic efficiency – the kind that comes with technological progress over time. And this kind of progress necessarily requires a loss of static efficiency. If everyone did nothing but repetitive assembly work, few new ideas would emerge. We have to sacrifice some output today to explore and develop new technologies that allow us to do things better tomorrow.
The Making of mRNA
The remarkable journey that led to the vaccines that rescued humanity from the Covid-19 pandemic is a testament to this point. Although the most efficacious vaccines were produced in months, the breakthrough technology behind them had been decades in the making. At the heart of this story is Katalin Karikó, a Hungarian biochemist whose pioneering research made the coronavirus vaccines of BioNTech and Moderna possible. Karikó, who joined BioNTech in 2014, had been studying RNA molecules since the 1980s, but her funding dried up in Hungary. Undeterred, she emigrated to the United States in 1985 to take up a post at Temple University in Philadelphia, overcoming a host of obstacles including strict currency controls because Hungary remained behind the Iron Curtain. Yet even in the United States, where there were significantly more funding opportunities, she found that harnessing mRNA to fight disease was “too far-fetched for government grants, corporate funding, and even support from her own colleagues.”
After six years on the faculty at the University of Pennsylvania, she was denied tenure in 1995. Her superiors, seeing that she could not attract funding for her research, believed mRNA was a dead end. But Karikó remained convinced that mRNA held the key to future therapeutics. It was a stroke of luck that brought her together with Drew Weissman in 1997; Weissman had recently joined the university to work on dendritic cells, which are critical to the body’s immune system.
They met not while drinking at the saloon, but during an equally serendipitous activity – taking turns on a Xerox machine. They began talking about their work and, eventually, in Weissman’s words, “decided to try adding her mRNA to my cells.”
The early results were not encouraging. They even suggested that it might be impossible to turn RNA molecules into therapeutics, which helps explain why mRNA research remained a scientific backwater for so long. “It was too inflammatory, too difficult to work with,” so people just gave up, Weissman explains. But Karikó and Weissman did not, and in 2005 a breakthrough came at last.
By making chemical modifications to mRNA, they discovered that they could insert it into the dendritic cells without triggering an immune response. This would allow them to trick the cells into thinking that the molecules had been made inside the body instead of the lab, which in theory meant that the technology could be used for therapeutic purposes.
In retrospect, this discovery should have turbocharged their careers. But even after the publication of their findings, their funding applications kept being rejected. Part of the challenge was that the technology was still experimental, and so its applications were still uncertain. A meeting between Karikó and an intellectual property officer at her university is telling in this regard. The officer kept asking, “What’s it good for?” without getting a clear answer.

Although a patent was eventually granted, the decisive next step was taken in 2008 by another scientist, Derrick Rossi, a researcher at Harvard Medical School who was trying to use mRNA to make stem cells. He had never heard of Karikó and Weissman at the time. Their 2005 research paper had gone largely unnoticed, even in the scientific community. Instead, Rossi was inspired by Shinya Yamanaka, a Japanese scientist, who, like Karikó, would go on to win the Nobel Prize in Medicine.
Yamanaka had demonstrated that it was possible to turn human cells into an embryonic stem cell-like state by inserting four genes. The problem was that the inserted genes ended up integrated in the cell’s DNA, which increased a person’s risk of cancer. Rossi figured that by using mRNA instead, it would be possible to reprogram human skin cells to act as though they were stem cells. But he soon ran into the same problem that had long perplexed Karikó and Weissman: “The cell was responding as though a virus was coming in, they were killing themselves.”
Looking for a solution, Rossi stumbled on their 2005 paper, and with some chemical modifications, he made their approach work. To turn his discovery into a medical reality, however, Rossi needed funding, and was introduced to Noubar Afeyan – a venture capitalist who would found a company, Moderna, to commercialize the science behind mRNA.
This path to discovery could not have been choreographed or planned. Chance played a big role, just as it did in America’s saloons before prohibition. Yet it is also true that what happened thereafter followed a much more predictable pattern. If the chain of events that culminated in Karikó and Weissman’s discovery, not to mention Rossi’s application of it, was impossible to conceive of beforehand, the development and rollout of the coronavirus vaccine posed a much narrower challenge. And because the challenge could be clearly defined, research efforts could be planned accordingly and executed at staggering speed.
On a Friday in late January 2020, Ugur Sahin, the co-founder and CEO of BioNTech, learned that a new coronavirus had been discovered in China. The following Monday, he summoned his board to make an announcement: BioNTech, which had previously focused on the next generation of cancer treatments, would make developing a Covid-19 vaccine its new priority. What he called Project Lightspeed started at BioNTech’s laboratories in Mainz, Germany, just days after the SARS-CoV-2 genetic sequence was first made public. From that point forward, the task was “to remove all elements of chance” by making innovation a regular process of disciplined attack. By late February, 20 vaccine candidates had been identified, of which four were selected for a trial.
Taking a candidate into production, however, required the capacity to test, develop, produce, and distribute vaccines at mass scale, which neither Moderna nor BioNTech possessed. To overcome this deficit, BioNTech partnered with pharmaceutical giant Pfizer, while Moderna relied on Operation Warp Speed, a U.S. government program set up by the Trump administration. If the process of exploration was largely horizontal, exploitation was almost entirely vertical: it required large-scale bureaucracy and managerial hierarchies to succeed.
From Hayek to Weber
The case of mRNA also underlines another critical point about technological progress, which is that every breakthrough begins life facing ubiquitous uncertainty. We cannot know if something new will catch on until someone has taken the risk of investing in it. In 1999, for example, the venture capital firms Sequoia Capital and Kleiner Perkins each invested $12.5 million in Google. When Sequoia sold its stake six years later, it was worth over $4 billion and had returned 320 times the initial investment.
Yet such numbers simply underscore that Google was not a sure bet in 1999. Other companies like Yahoo! and AltaVista dominated the search-engine space at the time, and several experienced venture capitalists decided not to invest. When Google’s marketing manager, Susan Wojcicki, asked a partner at Bessemer Venture Partners to meet with Page and Brin, who had rented space in her garage, he allegedly joked, “How can I get out of this house without going anywhere near your garage?”
In the uncertain world of discovery, not even the smartest people can be expected to get things right every time. But Bessemer’s failure to see the promise of Google’s search engine did not halt its rise. Brin and Page were fortunate to operate in a decentralized economic system where many investors could bet on different technologies.
Had Brin’s family not left Soviet Russia, chances are he would not be the co-founder of Google, just as Karikó would have been unlikely to pioneer mRNA had she remained in socialist Hungary.
Behind the Iron Curtain, inventors needed permission for almost anything, and if they were turned down by the state, they had few alternative options. So fewer bets were naturally made. This helps explain why none of the great commercial inventions of the 20th century were made in planned economies. Decentralized systems allow for thousands of barren trials so that one might eventually succeed; centrally planned ones do not. As Friedrich von Hayek put it, the dispersed nature of what people know cannot be overcome “by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders.” It must be solved by “some form of decentralization.”
Hayek, of course, was writing at the end of World War II, when bureaucratic planning had reached new commanding heights. But his insights remain as relevant as ever. As then, there are many instances where experts are better placed to make decisions about what must be done, pandemics and the climate crisis being two prominent examples. Yet in other cases, where no real consensus has formed, the decision to rely on expert opinion merely shifts the problem to selecting the experts. And when a field or a discovery is new, this is a particularly difficult task.
Karikó and Weissman’s seminal paper was long unknown, not only to the outside world but even within the scientific community. In fact, their tenacity depended on ignoring the expert naysayers. To make mRNA work, in Hayek’s words, they had to go through a “voyage of exploration into the unknown.” And because exploration requires sacrificing time and resources today in hopes of greater gains tomorrow, government planners and corporate managers alike struggle to oversee and motivate inventors who hold deeper expertise than anyone else.
So it is not surprising that, as MIT’s Daron Acemoglu and collaborators have shown, companies operating at the cutting edge choose to decentralize decision-making. When dealing with radically new technologies, whose benefits and applications are uncertain, bureaucratic planning almost always fails.
Hayek’s teacher, Ludwig von Mises, surely agreed. Writing in 1944, Mises opened his book Bureaucracy with the following line: “Nobody doubts that bureaucracy is thoroughly bad and that it should not exist in a perfect world.” If that was true, the world he observed around him was certainly not changing for the better. Even when the war ended, bureaucratic management persisted, not only through state control of strategic industries but in the private sector as well.
A paradox of the postwar era is that it symbolizes the historic confluence of oppressive factory work, cascading productivity, and shared prosperity in the West, as well as oppressive political regimes and rapid growth in the centrally planned economies of the East. This age of planning produced America’s Golden Age, Germany’s Wirtschaftswunder, Italy’s il miracolo economico, France’s les trente glorieuses, Spain’s el milagro, not to mention Japan’s and Korea’s great leaps, or indeed that of the Soviet Union. Around the world, “the visible hand of management replaced what Adam Smith referred to as the invisible hand of market forces,” to borrow historian Alfred Chandler Jr.’s memorable phrase.
Of course, even Smith’s pin factory, depicted in The Wealth of Nations (1776), had a visible hand of its own. In it, pin-making was divided into many small sequential steps, allowing workers to specialize and so boost productivity. But everything was done in house, not through the invisible hand of the market. In a pure market system, there would be no managers overseeing production. Each step would be handled by individuals buying and selling to one another based on changing prices.
For instance, in pin production, the wire-drawer would auction off the wire to a buyer, who would then take it to be cut, and later sell the cut pieces to a specialist in sharpening. At each stage, new bids, payments, and transport would be involved just to move from one part of pin-making to the next. Of course, no factory in the world is organized this way and for good reason: any gains from the division of labor would be swamped by endless rounds of haggling, transporting, and quality checks. That, as Ronald Coase explained in “The Nature of the Firm” (1937), is why companies and hierarchies exist. They let people cooperate under ongoing contracts rather than one-off transactions, and they permit the use of new technologies, like factories, on a massive scale.

To some students of history, this example might seem familiar. Indeed, one of the hidden truths about the capitalist enterprise is that its internal organization bears a striking resemblance to the crude material balance calculations used by Soviet planners. Inside the firm, there are no market prices to signal where time and resources are best allocated. Instead, individuals are given objectives that they strive to fulfill in exchange for a fixed salary, job security, and the prospect of career advancement. Employees engage in trading favors and disfavors to climb the ranks of the bureaucracy, while superiors use social engineering tactics, exert pressure, and shape incentives, much like an authoritarian state – though workers in a market system generally do have many more outside options, and so they can leave at will.

Moreover, one of the market economy’s great strengths, as Coase emphasized, is the ability of firms to choose between bureaucratic command-and-control and a system based on horizontal transactions. Under Soviet planning, there was no such choice. And in a decentralized system, firms are ultimately subject to market discipline, with those that incur losses shrinking or vanishing – a fate not shared by state-run bureaucracies that are permitted to operate at a loss. Yet it is also true that governments at times operate under intense geopolitical competition, which has its own disciplining effect. The Soviet Union, for instance, effectively exploited many technologies made in the West, and even made some strides of its own, while competing with America for global hegemony.
In the end, innovation is not only about ideation but also about converting these ideas into practical, reliable products that are available and affordable. And when efforts shift from exploration to exploitation, vertical lines of command trump horizontal lines of exchange. During the late 19th century, for example, independent inventors were the primary explorers of new technologies, and they relied on licensing their discoveries to the major corporations of the day that turned these patents into marketable products.
As these technologies matured, competition shifted from innovation to price, spurring centralization and consolidation, transforming what Hayek called a “market economy” consisting of individual actors and small firms into an economy of command-and-control organization in large companies connected by market exchange.
The simple insight that centralization and consolidation naturally follow periods of decentralization has profound implications for economic development. It means that the economic organization that excels at inventing the industrial future is not necessarily the most suitable for catching up to an existing target. In fact, backwardness creates opportunities for latecomers to leapfrog exploration and imitate innovators’ successes since they can use centralizing institutions from the outset. Latecomers, in other words, can take different paths to prosperity. The Crown of England did not gather a group including barons, bishops, bankers, and tinkerers to create the modern world. But after Britain had spearheaded the Industrial Revolution, that is effectively what Japan did with the Meiji Restoration.
To see how this transition from exploration to exploitation favors fundamentally different forms of organization, consider the postwar period, which, as noted, was a time when capitalism around the world bureaucratized. Many of the technologies underpinning global growth were made in America before the war and invented through decentralized exploration. Few breakthroughs have transformed the world more than the automobile, and no place had a more remarkable impact on its ascent than Detroit.
In the early 20th century, the dynamism of the Motor City was strikingly similar to that of Silicon Valley in the age of computers. In both, job hopping was the norm, which allowed ideas to spread like wildfire from one firm to another. Many inventors and engineers who left incumbents did so to set up their own shops. Yet if one obscure start-up deserves to be singled out, it would surely be the company of Henry Ford, which managed to survive the early shakeout and went on to build some of the wonders of the world.
When the Highland Park factory, which would soon house the moving assembly line, opened on Manchester Street in 1910, The New York Times marveled, “it offers a striking illustration of the solidity of this pioneering company and the methods it adopts for the care of its customers.” Indeed, Highland Park soon produced the Model T at a sufficiently low price for it to become the people’s vehicle.
Testifying to American technological leadership as well as to the transition from exploration to exploitation was the remarkable stream of visits from foreign delegations to Ford’s new factories. In the 1930s, countless foreigners spent weeks, sometimes months at Ford’s new, vertically integrated River Rouge factory.
In June 1937, Ferdinand Porsche – Adolf Hitler’s protégé designer, charged with producing a German “people’s car” – led his own group of Volkswagen engineers to Detroit on a state-sponsored mission. Rather than searching for new ideas, Porsche was on the hunt for machinery and skilled technicians for his own factory, which was already in the planning stage. Around the same time, the Italian carmaker Fiat sent another delegation, including Vittorio Bonadè Bottino, a leading architect in Fascist Italy, tasked by Benito Mussolini with designing the enormous Mirafiori factory on the outskirts of Turin. There were also convoys of Soviet bureaucrats, who through an extensive technology transfer agreement created Russia’s own “River Rouge” with the opening of the Gorky Automobile Factory (GAZ) in 1932.
As H. J. Freyn, a prominent consultant who had spent an extensive period in the Soviet Union advising on industrial development, astutely put it in 1931: “A modern business enterprise can scarcely be operated or managed by applying the principles of democracy.” Fascists on the right and communists on the left heralded a new century of central authority and planning.
For theorists of the interwar period like Karl Polanyi, there was a sense of a momentous institutional reversal as laissez-faire imploded and gave way to a new age of competitive technological upgrading orchestrated by activist governments. Mussolini, who started out as the editor of an Italian socialist newspaper only to become the first leader of the fascist world, was at least consistent in his hostility to decentralization. “The Doctrine of Fascism,” an essay he published in 1932, is emblematic in its attacks on liberalism, individualism, and democracy as “outgrown ideologies of the 19th century.”
Yet a closer look at the 19th century reveals a similar pattern of development. While England had clearly been moving closer to laissez-faire, latecomers also leveraged the advantages of backwardness by adopting technologies invented in Britain. And they relied heavily on the symbiotic relationship between big business and the state to do so.
Take Prussia, where Karl vom und zum Stein, Karl August von Hardenberg, and their successors prompted a revolution from above and used a variety of means to drive industrial catch-up. They handed over government-purchased technologies to private companies and created institutions to mobilize scarce resources “all in order to pressure Prussian industry to modernize its production method.” Similarly, after Bismarck’s unification, Germany – justly regarded as a liberal autocracy – did much to spur the growth of big business, which was capable of absorbing and exploiting new technologies at greater pace and scale than its British competitors.
Max Weber, writing in the wake of these developments, unsurprisingly emphasized the roles of both bureaucracy and capitalism in Western civilization. The well-defined hierarchical structure of bureaucracy, he noted, is “capable of attaining the highest degree of efficiency and is in this sense formally the most rational known means of exercising authority over human beings.”
Yet despite Weber’s admiration for the superior efficiency of bureaucracy, he also feared that its spread could stifle the dynamism of capitalism and thwart innovation over the long run. And in this he was right. While much of the productivity growth over the centuries can be attributed to incremental product and process improvements by large, established organizations, these alone are insufficient. Breakthrough innovations are the foundation upon which such incremental advancements are built.
One can improve a horse carriage in terms of design and functionality, but eventually one needs a radical innovation to create a motorcar, or progress will stall. And tellingly, none of the leading makers of either bicycles or horse carriages became leading producers of automobiles. Great leaps in technology usually come from challengers, as is evident from the endless churn among the companies that make up the S&P 500.
In recent times, this phenomenon has played out in various industries, from the development of highly effective Covid-19 vaccines by startups, to the rise of e-commerce championed by outsider Amazon, to the transformation of the media landscape by the likes of X (formerly Twitter), Meta, and YouTube. Even in capital-intensive industries such as space and electric vehicles, established players such as Boeing, Lockheed Martin, GM, and Volkswagen have been outpaced by challengers like SpaceX and Tesla.
The same appears true at the forefront of artificial intelligence, where startups like OpenAI and Anthropic are now challenging Meta and Google. In theory, large companies have the financial strength to make bigger and riskier bets. But in reality, they tend to play it safe. As Chicago’s Ufuk Akcigit and Harvard’s William Kerr have shown in a detailed study of patented inventions, challengers have a competitive advantage in spawning major technological breakthroughs.
Today’s tech giants may still innovate, but their growth comes at a cost. As they absorb more of the world’s top talent, those they hire become less inventive than they previously were at startups.
Weber understood why. Bureaucracy thrives on stability, not risk. As technologies mature and processes become standardized, managers gain greater control over production and efficiency. But when a field is still evolving and experimentation is key, measuring performance is much harder. In these uncertain conditions, strict oversight can backfire – surveillance discourages collaboration, keeping workers focused on narrow, well-defined tasks instead of exploring new ideas.
Centralized bureaucracies that operate on clear rules and predictable workflows therefore struggle when conditions change. Projects requiring multiple layers of approval or broad internal consensus rarely push boundaries. True innovation often requires breaking rules, not following them. When Lingfei Wu and colleagues analyzed some 65 million scientific papers, patents, and software projects from the postwar years to the Internet era, they found that solo inventors and decentralized teams consistently generated more disruptive ideas and technologies, whereas larger, hierarchical teams focused on developing existing ones. Like large movie studios, they produced sequels instead of new narratives.

Challengers tend to prevail for another strategic reason. Incumbents are often reluctant to pursue new breakthroughs for fear of putting existing revenues from older technologies at risk. Kodak is a classic example. Although the company moved into research on computers in the 1960s and developed a digital camera in 1975, the product was dropped. Executives feared cannibalizing the company’s main source of income, its photographic film business.
To protect revenues in the short run, management ended up sacrificing the company in the long run, filing for Chapter 11 bankruptcy in January 2012. The great inventor Thomas Edison was implacably hostile to the alternating electrical current systems that George Westinghouse was developing because they challenged the direct current system of his own General Electric. And when an industry is dominated by a few behemoths like GE, there is a real risk that conservatism inside one company ends up reducing the pace of innovation in the economy as a whole.
To be sure, Edison’s concern was not just that GE faced a lower return on investment relative to outsiders with no rents to cannibalize. It was also a matter of pride and recognition. It is therefore perhaps unsurprising that young inventors, who are less intellectually invested in the status quo, are typically more creative. Even in science, where the profit motive is less prevalent, established scholars act as guardians of the existing order.
Recent research shows that it often takes the passing of a star for outsiders to challenge the leadership in a field and so advance the frontiers of our knowledge. Max Planck was on to something when he suggested that science progresses one funeral at a time. Over the long run, progress entails creative destruction in both science and technology.
The Road to Stagnation
A longstanding debate in economics has revolved around the existence of hierarchies. While neoclassical economists such as Ronald Coase and Oliver Williamson have argued that corporations help achieve economies of scale and drive down costs, thus increasing the accessibility of goods to the populace, neo-Marxians like Stephen Hymer and Stephen Marglin have suggested that hierarchies emerge as a means of power and resource monopolization at the expense of society.
According to the latter group, the growth of corporations has resulted in a loss of human welfare. They echo the words of historian Eric Hobsbawm: “It is often assumed that an economy of private enterprise has an automatic bias towards innovation, but this is not so. It has a bias only towards profit.”
Yet these views are not mutually exclusive. In fact, they are both essential if we want to understand the rise and fall of growth. Early in the technology lifecycle, exploration thrives in a decentralized environment. But once a prototype proves viable, the focus shifts. The next challenge is scaling production, cutting costs, and increasing efficiency – and this is where centralization and corporate consolidation take over.
At first, this comes with tangible benefits to society. As Joseph Schumpeter pointed out, the capitalist achievement did not consist of providing “more silk stockings for queens but in bringing them within the reach of factory girls in return for steadily decreasing amounts of effort.” Beyond driving down consumer prices, consolidation streamlines research and development, reducing wasteful duplication across competing firms.
But the gains eventually peter out. As a vaccine’s efficacy nears 100 percent, for example, further investment in incremental improvements yields diminishing – or even negative – returns, diverting resources away from other critical areas. At that point the easy gains are gone, and the focus shifts once more, from perfecting production to protection.
When this happens, big companies stop innovating and start lobbying. Instead of investing in productive pursuits, they resort to anti-competitive tactics and pressure the government for regulation that shields them from competition. One need not be a cynic to think that it is no coincidence that politically connected companies take out fewer patents.
Economist Mancur Olson made perhaps the clearest argument about how vested interests can strangle progress. His conclusion was short but sour: when interest groups entrench themselves across industries, stagnation follows. A recurring theme of his book, The Rise and Decline of Nations, is that decentralization is necessary for further progress as a country approaches the technological frontier. But this is rarely in the interest of incumbents, who dislike competition and favor the status quo.
Stagnation happens when institutions fail to adapt to new technological realities or create the space to explore new avenues of progress, generally because incumbents seek to prevent competition from outsiders. In fact, the capacity for institutional change in the wake of new technological realities can go far in explaining why America has been the technology leader for over a century.
However, latecomers can skip the costly trial-and-error phase by adopting technologies developed elsewhere, often with state intervention accelerating the process. Indeed, what is entirely absent from Olson’s account is precisely the role of the state. Yet, constant revolution from above took the Soviet Union – where Stalin’s terror made sure that no interest groups could organize safely – some way toward the technological frontier under intense pressure from geopolitical competition.
Just as special interests can lobby governments to thwart competition in the private sector, states can check the clout of private enterprise – a policy Xi Jinping has vigorously pursued since coming to power in 2012. The challenge for powerful autocrats is a different one. To stay in power, they must invest heavily in surveillance technologies that allow them to control the private sector and society, lest opposition to the regime mobilize. This might, under specific circumstances, give the economy a boost in the short run, as some sputniks are delivered. But it also risks undermining its dynamism over the long run.