The Rise and Fall of American Growth



Robert Gordon, an economist at Northwestern University, specializes in productivity, growth – and contrarianism.

Illustrations by Steven Guarnaccia

Published January 19, 2016.


Worried about lagging economic growth? Never fear: Silicon Valley will save us. … The certainty that technological change is far and away the most important driver of economic growth in advanced industrialized economies is at least as old as Robert Solow's 1956 landmark analysis.

Robert Gordon, an economist at Northwestern known for his inclination to rain on his colleagues' parades, certainly doesn't dispute the point. But he does argue that there is nothing inevitable about technological change.

[Illustration: robot waiter]

Indeed, his new book, The Rise and Fall of American Growth,* excerpted here, makes the case that the genie has deserted us – that the slowdown in productivity over the past decade is prelude to a long dry spell in which promised breakthroughs ranging from artificial intelligence to self-driving vehicles will have only a modest impact on living standards. Read it and weep.
- Peter Passell

Can the Future Match the Great Inventions of the Past?

The epochal rise in the U.S. standard of living that occurred from 1870 to 1940, with continuing benefits to 1970, represents the fruits of the Second Industrial Revolution (IR #2). Many of the benefits of this unprecedented tidal wave of inventions show up in measured GDP and hence in output per person, output per hour and productivity, which grew more rapidly during the half-century 1920-70 than before or since. Beyond their contribution to the record of measured growth, these inventions also benefited households along countless dimensions that escaped measurement by GDP, including the convenience, safety and brightness of electric light compared to oil lamps; the freedom from the drudgery of carrying water made possible by clean piped water; and the value of human life itself made possible by the conquest of infant mortality.

The slower growth rate of measured productivity since 1970 constitutes an important piece of evidence that the Third Industrial Revolution associated with computers and digitalization has been less important than IR #2. Not only has the measured record of growth been slower since 1970 than before, but the unmeasured improvements in the quality of everyday life created by IR #3 are less significant than the unmeasured benefits of the earlier industrial revolution.

This chapter addresses the unknown future by closely examining the nature of recent innovations and by comparing them with future aspects of technological change that are frequently cited as most likely to boost the American standard of living over the next few decades. There is no debate about the frenetic pace of innovative activity, particularly in the sphere of digital technology, including robots and artificial intelligence. Instead, this chapter distinguishes between the pace of innovation and the impact of innovation on the growth rates of productivity.

Innovation Through History: The Ultimate Risk-Takers

The entrepreneurs who created the great inventions of the late 19th century – not just Americans, including Thomas Edison and the Wright Brothers, but also foreigners, such as Karl Benz – deserve credit for most of the achievements of IR #2, which created unprecedented advances in the American standard of living in the century after 1870. Individual inventors were the developers not just of new goods, from electric light to the automobile to processed corn flakes to radio, but also of new services such as the department store, mail-order catalog retailing and the motel by the side of the highway.

Most studies of long-term economic growth attempt to subdivide the sources of growth among the inputs, particularly the number of worker-hours, the amount of physical capital per worker-hour and the "residual" that remains after the contributions of labor and capital are subtracted out. That residual, defined initially in Robert Solow's pioneering work of the 1950s, often goes by its nickname, "Solow's residual," or by its more formal rubric, "total factor productivity" (TFP). Though primarily reflecting the role of innovation and technological change, increases in TFP also respond to other types of economic change going beyond innovation – for instance, the movement of a large percentage of the working population from low-productivity jobs on the farm to higher-productivity jobs in the city. To his own and others' surprise, Solow found that only 13 percent of the increase in U.S. output per worker between 1910 and 1950 resulted from an increase in capital per worker; this famous result seemed to "take the capital out of capitalism."
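Solow's decomposition can be written out explicitly. In a standard growth-accounting framework (a textbook formulation, not spelled out in the excerpt), with capital share α:

```latex
% Output growth split into the contributions of capital, labor and TFP (A):
\frac{\Delta Y}{Y} \;=\; \alpha\,\frac{\Delta K}{K} \;+\; (1-\alpha)\,\frac{\Delta L}{L} \;+\; \frac{\Delta A}{A}

% TFP growth -- "Solow's residual" -- is whatever is left over:
\frac{\Delta A}{A} \;=\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}
```

In per-worker terms, output per worker grows at α times the growth of capital per worker plus TFP growth; since measured capital deepening explained so little of the observed growth, nearly all of it landed in the residual.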

The usual association of TFP growth with innovation misses the point that innovation is the ultimate source of all growth in output per worker-hour, not just the residual after capital investment is subtracted out. Capital investment itself waxes and wanes depending not just on the business cycle but also on the potential profit made possible by investing to produce newly invented or improved products. As Evsey Domar famously wrote in 1961, without technical change, capital accumulation would amount to "piling wooden plows on top of existing wooden plows."

Technological change raises output directly and induces capital accumulation to create the machines and structures needed to implement new inventions. In addition, innovations are the source of improvements in the quality of capital – for example, the transition from the rotary-dial telephone to the iPhone, or from the Marchant calculator to the personal computer running Excel. The standard technique of aggregating capital input by placing a higher weight on short-lived capital, such as computers, than on long-lived capital, like structures, has the effect of hiding the contribution of innovation in shifting investment from structures to computers inside the capital input measure.


This leaves education and reallocation as the remaining sources of growth beyond innovation itself. However, both of these also depend on innovation to provide the rewards necessary to make the investment to stay in school or to move from farm to city. This is why there was so little economic growth between the Roman era and 1750, as peasant life remained largely unchanged. Peasants did not have an incentive to become educated because, before the wave of innovations that began around 1750, there was no reward to the acquisition of knowledge beyond how to move a plow and harvest a field.

Similarly, the reallocation of labor from farm to city required the innovations that began in the late 18th century and created the great urban industries to provide the incentive of higher wages to induce millions of farm workers to move. Thus every source of growth can be reduced back to the role of innovation and technological change.

The last three decades of the 19th century were the glory years of the self-employed American entrepreneur/inventor. A U-shaped interpretation of entrepreneurial history starts with a primary role for individual entrepreneurs, working by themselves or in small research labs, like Edison's. By the 1920s, the role of the individual entrepreneur reached the bottom part of the U, as innovation came to be dominated by large corporate research laboratories. Much of the early development of the automobile culminating in the powerful Chevrolets and Buicks of 1940-41 was achieved at the GM labs. Similarly, much of the development of the electronic computer was carried out in the laboratories of large corporations. The transistor, the fundamental building block of modern electronics and digital innovation, was invented by a team led by William Shockley at Bell Labs in late 1947.

The R&D division of IBM pioneered most of the advances of the mainframe computer era from 1950 to 1980. Improvements in consumer electric appliances occurred at large firms such as General Electric, General Motors and Whirlpool, while RCA led the early development of television.

But then the process began to climb the right side of the U, as the seminal developments of the transition from mainframes to personal computers and the Internet were pioneered by individual entrepreneurs. A pivotal point in this transition was the decision by IBM, the developer in 1981 of the first widely purchased personal computer, to farm out not just the creation of the operating system software but its ownership to two young entrepreneurs, Paul Allen and Bill Gates, who had founded Microsoft in 1975. The Third Industrial Revolution, which consists of the computer, digitalization and communication inventions of the past 50 years, has been dominated by small companies founded by individual entrepreneurs, each of whom created organizations that soon became very large corporations. Allen and Gates were followed by Steve Jobs at Apple, Jeff Bezos at Amazon, Sergey Brin and Larry Page at Google, Mark Zuckerberg at Facebook and many others.

The left side of the entrepreneurial "U" is well-documented. The percentage of all U.S. patents granted to individuals fell from 95 percent in 1880, to 73 percent in 1920, to 42 percent in 1940, and then gradually to 21 percent in 1970 and 15 percent in 2000. The decline in the role of individuals occurred not just because of the increased capital requirements of ever more complex products, but also because the individuals who developed the most successful products formed large business enterprises. Edison's early light bulb patents ran out in the mid-1890s, leading to the establishment of General Electric laboratories to develop better filaments. By the same time, Bell's initial telephone invention had become the giant AT&T, which established its own laboratory (later known as Bell Labs); by 1915, it had developed amplifiers that made nationwide long-distance telephone calls feasible.

Successive inventions were then credited to the firm rather than the individual. Furthermore, a natural process of diminishing returns occurred in each industry. The number of patents issued in three industries that were new in the early 20th century – the automobile, airplane and radio – exhibits an initial explosion of patent activity followed by a plateau or, in the case of automobiles after 1925, an absolute decline.

Individual inventors flourished in the United States in part because of the democratic nature of the patent system, which allowed them to develop their ideas without a large investment in obtaining a patent; once the patent was granted, even inventors who lacked personal wealth were able to attract capital funding and sell licenses.

The failure of the share of patents claimed by individuals to turn around after 1980 appears to contradict the U-shaped evolution of innovation. Instead, that share remains at 15 percent, down from 95 percent in 1880. This may be explained by the more rapid formation of corporations by individuals in the past three decades than in the late 19th century. Though the Harvard dropout Bill Gates may be said to have invented personal computer operating systems for the IBM personal computer, almost all Gates' patents were obtained after he formed Microsoft in 1975. The same goes for the other individuals who developed Google's search software and Facebook's social network.

The Historical Record: The Growth of Total Factor Productivity

The overwhelming dominance of the 1920-70 interval in making possible the modern world is clearly evident. Though the great inventions of IR #2 mainly took place between 1870 and 1900, at first their effect was small. Paul David provided a convincing case that almost four decades were required after Edison's first power station in 1882 for the development of the machines and methods that finally allowed the electrified factory to emerge in the 1920s. Similarly, Karl Benz's invention of the first reliable internal combustion engine in 1879 was followed by two decades in which inventors experimented with brakes, transmissions and other ancillary equipment needed to transfer the engine's power to axles and wheels. Even though the first automobiles appeared in 1897, they did not gain widespread acceptance until the price reductions made possible by Henry Ford's moving assembly line, which was introduced in 1913.

The digital revolution, IR #3, also had its main effect on TFP after a long delay. Even though the mainframe computer transformed many business practices starting in the 1960s, and the personal computer largely replaced the typewriter and calculator by the 1980s, the main effect of IR #3 on TFP was delayed until the 1994-2004 decade, when the Internet, Web browsers, search engines and e-commerce produced pervasive changes in every aspect of business practice.

That brings three questions front and center: First, why was the main effect of IR #3 on TFP limited to the 1994-2004 decade? Second, why was TFP growth so slow in the subsequent 2004-14 decade? Third, what are the implications of recent slow TFP growth for the future evolution of TFP and labor productivity over the next quarter century?

Achievements to Date of the Third Industrial Revolution

IR #3's main impact on TFP growth was driven by an unprecedented and never-repeated rate of decline in the price of computer speed and memory, and by a never-since-matched surge in the share of GDP devoted to investment in information and computer technology (ICT).

The mediocre record of TFP growth after 2004 underlines the temporary nature of the late 1990s revival. More puzzling is the absence of any apparent stimulus to TFP growth in the quarter century between 1970 and 1994. After all, mainframe computers created bank statements and phone bills in the 1960s and powered airline reservation systems in the 1970s. Personal computers, ATMs and bar code scanning were among the innovations that created productivity growth in the 1980s. Reacting to the failure of these innovations to boost productivity growth, Robert Solow quipped, "You can see the computer age everywhere but in the productivity statistics." The best explanation: The gains from the first round of computer applications were partially offset by a severe slowdown in productivity growth in the rest of the economy.

The achievements of IR #3 can be divided into two major categories: communications and information technology. Within communications, progress started with the 1983 breakup of the Bell Telephone monopoly. After a series of mergers, landline service was provided primarily by a new version of AT&T and by Verizon, soon to be joined by major cable television companies, such as Comcast and Time-Warner, which offered landline phone service as part of their cable TV and Internet packages.

The mobile phone, the major advance in the communications sphere, made a quick transition from heavyweight brick-like models in the 1980s to the sleek small instruments capable of phoning, messaging, e-mailing and photography by the late 1990s. The final communications revolution occurred in 2007 with the introduction of Apple's iPhone. By 2015, there were 183 million smartphone users in the United States, or roughly 60 per 100 members of the population.

The "I" and the "T" of ICT began in the 1960s with the mainframe computer, which eliminated routine clerical labor previously needed to prepare telephone bills, bank statements and insurance policies. Credit cards would not have been possible without mainframe computers to keep track of the billions of transactions. Gradually, electric memory typewriters, and later, personal computers, eliminated repetitive retyping of everything from legal briefs to academic manuscripts.

In the 1980s, three additional stand-alone electronic inventions introduced a new level of convenience into everyday life. The first of these was the ATM, which made personal contact with bank tellers unnecessary. In retailing, two devices greatly raised the productivity and speed of the checkout process: the bar code scanner, and the authorization devices that read credit cards and deny or approve a transaction within seconds.

The late 1990s, when TFP growth finally revived, witnessed the marriage of computers and communication. Within the brief half-decade between 1993 and 1998, the stand-alone computer was linked to the outside world through the Internet, and by the end of the 1990s, Web browsers and e-mail had become universal. The market for Internet services exploded, and by 2004, most of today's Internet giants had been founded. Throughout every sector, paper and typewriters were replaced by flat screens running powerful software.

Although IR #3 was indeed revolutionary, its effect was felt in a limited sphere of human activity – in contrast to IR #2, which changed everything. Categories of personal consumption little affected by the ICT revolution included food eaten at home and away, clothing and footwear, motor vehicles and motor fuel, furniture, household supplies and appliances. In 2014, fully two-thirds of consumption expenditures went for services, including rent, health care, education and personal care. But here, the ICT revolution had virtually no effect. A pedicure is a pedicure whether the customer is reading a magazine or surfing the Web on a smartphone.

This brings us back to Solow's quip that we can see the computer age everywhere but in the productivity statistics. The final answer to Solow's computer paradox is that computers are not everywhere. We don't eat computers or wear them or drive to work in them or let them cut our hair. We live in dwellings that have appliances much like those of the 1950s, and we ride in vehicles that perform the same functions as in the 1950s, albeit with more convenience and safety.

What are the implications of the uneven progress of TFP? Should the lugubrious 0.40 percent growth rate of the most recent 2004-14 decade be considered the most relevant basis for future growth? Or should our projection for the future be partly or largely based on the 1.02 percent average TFP growth achieved by the decade 1994-2004? There are several reasons beyond the temporary nature of the TFP growth recovery in 1994-2004 to regard those years as unique and not relevant for the next several decades.

Could the Third Industrial Revolution Almost Be Over?

What factors caused the TFP growth revival of the late 1990s to be so temporary and to die out so quickly? Most of the economy has already benefited from the Internet revolution, and in this sphere of economic activity, methods of production have been little changed over the past decade. The revolutions in everyday life made possible by e-commerce and search engines were already well established – Amazon dates back to 1994, Google to 1998 and Wikipedia and iTunes to 2001.

Will future innovations be sufficiently powerful and widespread to duplicate the relatively brief revival in productivity growth between 1994 and 2004? Examination of the evidence does not lead to optimism.

The slowing transformation of business practices. The digital revolution centered on 1970-2000 utterly changed the way offices function. In 1970, the electronic calculator had just been introduced, but the computer terminal was still in the future. Office work required innumerable clerks to operate the keyboards of electric typewriters that had no ability to download content from the rest of the world and that, lacking a memory, required repetitive retyping of everything from legal briefs to academic research papers.

By 2000, every office was equipped with Web-linked personal computers that not only could perform any word-processing task, but could also perform any type of calculation virtually instantaneously as well as download multiple varieties of content. By 2005, flat screens had completed the transition to the modern office, and broadband service had replaced dial-up service at home.

In the past decade, business practices, while relatively unchanged in the office, have steadily improved outside of the office as smartphones and tablets have become standard business equipment. The cable guy arrives not with a paper work order and clipboard, but with a multipurpose smartphone. Product specifications and communication codes are available on the phone, and the customer completes the transaction by scrawling a signature on the screen. Paper has been replaced almost everywhere outside of the office. Airlines are well along in equipping pilots with smart tablets that contain all the information previously provided by large paper manuals. Maintenance crews at Exelon's six nuclear power stations in Illinois are the latest to be trading in their three-ring binders for iPads.

A leading puzzle of the current age is why the near-ubiquity of smartphones and tablets has been accompanied by such slow economy-wide productivity growth, particularly since 2009. One answer is that smartphones are used in the office for personal activities. Some 90 percent of office workers, whether using their office personal computers or their smartphones, visit recreational Web sites during the workday. Almost the same percentage admit that they send personal e-mails and more than half report shopping for personal purposes during work time.

Stasis in retailing. Since the development of "big-box" retailers in the 1980s and 1990s, and the conversion to bar code scanners, little has changed in the retail sector. Payment methods have gradually changed from cash and checks to credit and debit cards. In the early years of credit cards in the 1970s and 1980s, checkout clerks had to make voice phone calls for authorization. Then there was a transition to terminals that would dial the authorization phone number, and now the authorization arrives within a few seconds. The big-box retailers brought with them many other aspects of the productivity revolution. Walmart and others transformed supply chains, wholesale distribution, inventory management, pricing and product selection, but that productivity-enhancing shift away from small-scale retailing is largely over. The retail productivity revolution is high on the list of the many accomplishments of IR #3 that are largely completed and will be difficult to surpass in the next several decades.

What is often forgotten is that we are well into the computer age, and many Home Depots and local supermarkets have self-checkout lines that allow customers to scan their paint cans or groceries through a standalone terminal. But except for small orders, doing so takes longer, and customers still voluntarily wait in line for a human instead of taking the option of the no-wait, self-checkout lane. The same theme – that the most obvious uses of electronic devices have already been adopted – pervades commerce. Airport baggage sorting belts are mechanized, as is most of the process of checking in for a flight. But at least one human agent is still needed at each airline departure gate to deal with seating issues and stand-by passengers.

Restaurants have largely completed the transition to point-of-sale terminals that allow waitstaff to enter customer orders on screens spaced around the restaurant with no need to make a separate trip into the kitchen with a paper order form. But the waitstaff and the cooks remain human, with no robots in sight.

A plateau of activity in finance and banking. The ICT revolution changed finance and banking along many dimensions, from the humble street-corner ATM to the development of fast trading on the stock exchanges. Both the ATM and billion-share trading days are creations of the 1980s and 1990s. Average daily shares transacted on the New York Stock Exchange increased from only 3.5 million in 1960 to 1.7 billion in 2005 and then declined to around 1.2 billion per day in early 2015.

Nothing much has changed in more than a decade. And despite all those ATMs – and a transition by many customers to managing their bank accounts online – the nation still maintains a system of 97,000 bank branches, and employment of bank tellers has only declined from 484,000 in 1985 to 361,000 recently.

James Bessen, an economist at Boston University, explains the longevity of bank branches in part by the effect of ATMs in reducing the number of employees needed per branch from about 20 in 1988 to about 13 in 2004. That meant it was less expensive for a bank to open a branch, leading banks to increase the number of branches by 43 percent over the same period. This illustrates how the role of robots (in this case ATMs) in causing a destruction of jobs is often greatly exaggerated. Bessen also shows that the invention of bookkeeping software did not prevent the number of accounting clerks from growing substantially between 1999 and 2009.
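Bessen's branch arithmetic is easy to check. A back-of-the-envelope sketch using the figures quoted above (the figures are the text's; the calculation is illustrative):

```python
# Figures quoted above (Bessen): tellers per branch fell from about 20 in 1988
# to about 13 in 2004, while the number of branches rose by about 43 percent.
tellers_per_branch_1988 = 20
tellers_per_branch_2004 = 13
branch_growth = 1.43  # branches in 2004 relative to 1988

# Index total teller employment (1988 = 1.0):
# total tellers = branches x tellers per branch.
employment_1988 = 1.0
employment_2004 = branch_growth * (tellers_per_branch_2004 / tellers_per_branch_1988)

change = employment_2004 / employment_1988 - 1
print(f"Implied change in total teller employment: {change:+.0%}")
```

On these rough numbers, total teller employment falls by only a few percent: the ATM's job-destroying effect per branch was largely offset by the opening of new branches, consistent with the point that the role of robots in destroying jobs is often exaggerated.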

Home and consumer electronics. In contrast to the decade or so of stability in procedures at work, life inside the home has been stable for nearly a half century. By the 1950s, all the major household appliances (washer, dryer, refrigerator, range, dishwasher and garbage disposal) had been invented, and by the early 1970s, they had reached most American households. Besides the microwave oven, the most important change has been air-conditioning; by 2010, almost 70 percent of American dwelling units were equipped with central AC.

Other significant changes in the home since 1965 were all in the categories of entertainment, communication and information. Television made its transition to color between 1965 and 1972, then variety increased with cable television in the 1970s and 1980s, and finally picture quality was improved with high-definition signals and receiving sets. Variety increased even further when Blockbuster and then Netflix made it possible to rent an almost infinite variety of motion picture DVDs, and now movie and video streaming has become common. For the past decade, homes have had access to entertainment and information through fast broadband connections to the Web, and smartphones have made the Web portable. But now that smartphones and tablets have saturated their potential market, further advances in consumer electronics have become harder to achieve.

Decline in business dynamism. Recent research has used the word dynamism to describe the process of "creative destruction" by which new startups and young firms are the source of productivity gains as they shift resources away from old low-productivity firms. The share of all business firms consisting of young firms (aged five years or younger) declined from 14.6 percent in 1978 to only 8.3 percent in 2011, even as the share of firms exiting (going out of business) remained roughly constant in the range of 8-10 percent. It is notable that the share of young firms had already declined substantially before the 2008-9 financial crisis.

Measured another way, the share of total employment accounted for by firms no older than five years has declined by almost half, from 19.2 percent in 1982 to 10.7 percent in 2011. This decline was pervasive across retailing and services, and after 2000 the high-tech sector experienced a large decline in startups and fast-growing young firms. In another measure of the decline in dynamism, the percentage of people younger than 30 who owned stakes in private companies declined from 10.6 percent in 1989 to 3.6 percent in 2014.

Related research on labor market dynamics points to a decline in "fluidity" as job reallocation rates fell more than a quarter after 1990, and worker reallocation rates fell more than a quarter after 2000. Slower job and worker reallocation means that new job opportunities are less plentiful and it is harder to gain employment after long jobless spells.

Objective Measures of Slowing Economic Growth

We now turn to objective measures that uniformly depict an economy that experienced a spurt of productivity and innovation in the 1994-2004 decade, but that has slowed since then, in some cases to a crawl.

Manufacturing capacity. The growth rate of manufacturing capacity proceeded at an annual rate between 2 and 3 percent from 1972 to 1994, surged to almost 7 percent in the late 1990s, and then came back down, becoming negative in 2012.

The role of ICT investment in temporarily driving up the growth rate of manufacturing capacity in the late 1990s is well known. Brookings senior fellows Martin Baily and Barry Bosworth have emphasized that if the production of ICT equipment is stripped from the manufacturing data, TFP growth in manufacturing was an unimpressive 0.3 percent per year between 1987 and 2011. MIT economist Daron Acemoglu and co-authors have also found that the impact of ICT on productivity disappears once the ICT-producing industries are excluded. And among the remaining industries, there is no tendency for labor productivity to grow faster in industries that have a relatively high ratio of expenditures on computer equipment to expenditures on total capital equipment.

Net investment. The second reason that the productivity revival of the late 1990s is unlikely to be repeated anytime soon is the behavior of net investment (gross investment less depreciation). The ratio of net investment to the capital stock has been trending down since the 1960s relative to its 1950-2007 average value of 3.2 percent. In fact, during the entire period 1986-2013, the ratio exceeded that 3.2 percent average value for only four years, 1999-2002, that were all within the interval of the productivity growth revival. The 1.0 percent value of the five-year moving average in 2013 was less than half of the value in 1994 and less than a third of the 3.2 percent 1950-2007 average. Thus the investment needed to support a repeat of the late 1990s productivity revival has been missing during the past decade.

Computer performance. The 1996-2000 interval witnessed the most rapid rate of decline in performance-adjusted prices of ICT equipment recorded to date. The faster the rate of decline in the ICT equipment deflator, the more quickly the price of computers is declining relative to their performance, or the more quickly computer performance is increasing relative to its price. The rate of decline of the ICT equipment deflator peaked at 14 percent in 1999, but then steadily diminished to barely 1 percent in 2010-14. The slowing rate of improvement of ICT equipment has been reflected in a sharp slowdown in the contribution of ICT as a factor of production to growth in labor productivity. The latest estimates of the ICT contribution by French economist Gilbert Cette and co-authors show it declining from 0.52 percentage points per year during 1995-2004 to 0.19 points per year during 2004-2013.

Moore's Law. The late 1990s were not only a period of rapid decline in the price of computer power, but simultaneously a period of rapid change in the progress of computer chip technology. Moore's Law was originally formulated in 1965 as a forecast that the number of transistors on a computer chip would double every two years. This predicted what actually happened between 1975 and 1990 with uncanny accuracy. Then the doubling time crept up to three years during 1992-96, followed by a sharp reversal: a plunge in the doubling time to less than 18 months between 1999 and 2003. Indeed, this acceleration of technical progress in chip technology was the underlying cause of the rapid decline in the ratio of price to performance for computer equipment.

The doubling time reached a trough of 14 months in 2000, roughly the same time as the peak rate of decline in the computer deflator. But since 2006, Moore's Law has gone off the rails: The doubling time soared to eight years in 2009 and then returned gradually to four years in 2014.
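A doubling time of T years implies an annual growth rate of 2^(1/T) − 1, so the swings in doubling time quoted above translate into large swings in the implied annual rate of improvement. A short illustration:

```python
# Convert a doubling time (in years) into the implied annual growth rate:
# growth such that (1 + g)^T = 2, i.e. g = 2**(1/T) - 1.
def annual_growth_from_doubling(t_years):
    return 2 ** (1 / t_years) - 1

# The trough of ~14 months (2000), the classic two years, and the slower
# post-2006 doubling times of four and eight years.
for t in (14 / 12, 2, 4, 8):
    print(f"doubling every {t:.2f} years -> "
          f"{annual_growth_from_doubling(t):.0%} per year")
```

The same exponential formula run in both directions shows why the post-2006 slowdown matters: moving from a two-year to an eight-year doubling time cuts the implied annual improvement from roughly 41 percent to under 10 percent.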

Kenneth Flamm of the University of Texas examines the transition toward a substantially slower rate of improvement in computer chips and in the quality-corrected performance of computers themselves over the past decade. His data show that the "clock speed," a measure of computer performance, has been on a plateau of no change at all since 2003, despite a continuing increase in the number of transistors squeezed onto computer chips.

These factors unique to the late 1990s – the surge in manufacturing capacity, the rise and subsequent decline in the contribution of ICT capital to labor productivity growth, and the shift in the timing of Moore's Law – all create a strong case that the dot-com era of the late 1990s was unique in its conjunction of factors that boosted growth in labor productivity and in TFP well above the rates achieved during both 1970-94 and 2004-14. There are no signs in recent data that anything like the dot-com era is about to recur: manufacturing capacity growth turned negative during 2011-12, and the net investment ratio fell during 2009-13 to barely a third of its postwar average.

We wanted flying cars; instead we got 140 characters.
Peter Thiel
Can Future Innovation Be Predicted?

What's in store for the next 25 years? The usual stance of economic historians, notably my Northwestern colleague Joel Mokyr, is that the human brain is incapable of forecasting innovations. He states without qualification that "history is always a bad guide to the future, and economic historians should avoid making predictions." His reasoning rests on the premise that new scientific instruments must precede the discoveries they make possible.

As an example, it would have been impossible for Pasteur to discover his germ theory of disease if Joseph Jackson Lister had not developed the achromatic-lens microscope in the 1820s. Mokyr's own optimism about future technological progress rests partly on the dazzling array of new tools that have arrived recently to create further research advances: "DNA sequencing machines and cell analysis," "high-powered computers" and "astronomy, nano-chemistry and genetic engineering." Central among the tools Mokyr sees facilitating scientific advance are "blindingly fast search tools" that make all of human knowledge instantly available.


Mokyr's examples of future progress do not center on digitalization but rather involve fighting infectious diseases, and the need for technology to reduce the environmental damage caused by excess fertilizer use and global warming. It is notable that innovations to fight local pollution and global warming involve fighting "bads" rather than creating "goods." Instead of raising the standard of living in the same manner as the past two centuries of innovations that have brought a wealth of new goods and services for consumers, innovations to stem the consequences of pollution and global warming seek to prevent the standard of living from declining.

I believe the common assumption that future innovation cannot be forecast is wrong. Indeed, there are historical precedents of correct predictions made as long as 50 or 100 years in advance.

An early forecast of the future of technology is contained in Jules Verne's 1863 manuscript, Paris in the Twentieth Century, in which Verne made bold predictions about Paris in 1960. Writing before Edison or Benz, Verne had already conceived of the basics of the 20th century. He predicted rapid transit running on overhead viaducts, motorcars with gas combustion engines and streetlights connected by underground wires.

In fact, much of IR #2 should not have been a surprise. Looking ahead in the year 1875, inventors were feverishly working on turning the telegraph into the telephone, trying to find a way to transform electricity coming from batteries into electric light, trying to find a way of harnessing the power of petroleum to create a lightweight and powerful internal combustion engine. The atmosphere of 1875 was suffused with "we're almost there" speculation. After the relatively lightweight internal combustion engine was achieved, flight – humankind's dream since Icarus – became a matter of time and experimentation.

Some of the most important sources of human progress over the 1870-1940 period were not new inventions at all. Running water had been achieved by the Romans, but it took political will and financial investment to bring it to every urban dwelling place. A separate system of sewer pipes was not an invention, but implementing it over the interval 1870-1930 required resources, dedication and a commitment to using public funds for infrastructure investment.

A set of remarkable forecasts appeared in December 1900 in an unlikely medium: Ladies' Home Journal. Some of the predictions were laughably wrong and unimportant, such as strawberries the size of baseballs. But enough were accurate in a page-long article to suggest that much of the future can be known. Among the more interesting forecasts:

  • Hot and cold air will be turned on from spigots to regulate the temperature of the air just as we now turn on hot and cold water from spigots to regulate the temperature of the bath.
  • Ready-cooked meals will be purchased from establishments much like our bakeries of today.
  • Liquid-air refrigerators will keep large quantities of food fresh for long intervals.
  • Photographs will be telegraphed from any distance. If there is a battle in China a century hence, photographs of the events will be published in newspapers an hour later.
  • Automobiles will be cheaper than horses are today. Farmers will own automobile hay-wagons, automobile truck-wagons … automobiles will have been substituted for every horse-vehicle now known.
  • Persons and things of all types will be brought within focus of cameras connected with screens at opposite ends of circuits, thousands of miles at a span. ...[T]he lips of a remote actor or singer will be heard to offer words or music when seen to move.
It has been estimated recently that the combined assets under management by robo-advisers still amount to less than $20 billion, against $17 trillion managed by flesh-and-blood advisers.
The Inventions That Are Now Forecastable

Despite the slow growth of TFP recorded since 2004, commentators view the future of technology with great excitement.

Economist Nouriel Roubini writes, "There is a new perception of the role of technology. Innovators and tech CEOs both seem positively giddy with optimism." For their part, Erik Brynjolfsson and Andrew McAfee, authors of The Second Machine Age, assert that "we're at an inflection point" between a past of slow technological change and a future of rapid change.

They remind us that Moore's Law predicts endless exponential growth of the performance capability of computer chips – but they ignore the fact that chips fell behind the predicted pace of Moore's Law after 2005. Exponential increases in computer performance will continue, but at a slower rate than in the past, not at a faster rate.

Since 2004, the pace of innovation in general has been slower, but it has certainly not been zero. When we examine the likely innovations of the next several decades, we are not doubting that many will occur, but rather are assessing them in the context of the past two decades of fast (1994-2004) and then slow (2004-2014) growth in TFP.

The advances forecast by Brynjolfsson and McAfee can be divided into four main categories: medical advances; small robots and 3-D printing; big data; and driverless vehicles. It is worth examining the potential of each to boost TFP growth back to the pace achieved in the late 1990s.

Medical and pharmaceutical advances. The most important sources of longer life expectancy in the 20th century were achieved in the first half of that century, when life expectancy rose at twice the rate it did in the second half.

This was the interval when infant mortality was conquered and life expectancy was extended by the dissemination of the germ theory of disease, the development of an antitoxin for diphtheria, and the near-elimination of contamination of milk and meat as well as the near-elimination of air- and water-distributed diseases through the construction of urban sanitation infrastructure. Many of the current basic tools of modern medicine were developed between 1940 and 1980, including antibiotics, the polio vaccine, procedures to treat coronary heart disease and the basic tools of chemotherapy and radiation to treat cancer – all advances that contribute to productivity growth.

Medical technology has not ceased to advance since 1980, but rather has continued at a slow and measured pace along with life expectancy. It is likely that life expectancy will continue to improve at a rate not unlike that of the past few decades. There are new issues, however. As described by Jan Vijg, an eminent geneticist, progress on physical disease and ailments is advancing faster than on mental disease, which has led to widespread concern that there will be a steady rise in the burden of care of elderly Americans who are afflicted by dementia.

Pharmaceutical research has hit a brick wall of rapidly increasing costs and declining benefits, with the number of major drugs approved declining in each successive two-year period over the past decade (as documented by Vijg). Drugs are being developed that will treat esoteric types of cancer at costs that no medical insurance system can afford. The upshot is that over the next few decades, medical and pharmaceutical advances will doubtless continue at a modest pace, while the increasing burden of Alzheimer's care will be a significant contributor to the increased cost of the medical care system.

Small robots and 3-D printing. Industrial robots were introduced by General Motors in 1961. By the mid-1990s, robots were welding automobile parts and replacing workers in the lung-killing environment of the automotive paint shop. Until recently, however, robots were large and expensive and needed to be separated from humans for reasons of safety. The ongoing reduction in the cost of computer components has made ever-smaller and increasingly capable robots feasible.

Former DARPA executive Gill Pratt enumerates eight "technical drivers" that are advancing at steady exponential rates. Among those relevant to the development of more capable robots are exponential growth in computer performance, improvements in electromechanical design tools and electrical energy storage. Others on his list involve more general capabilities of all digital devices, including exponential expansion of local wireless communications, in the scale and performance of the Internet, and in data storage.

As an example of the effects of these increasing technical capabilities, inexpensive robots suitable for use by small businesses have been developed and brought to public attention by a 2012 segment on the TV program 60 Minutes featuring Baxter, a $25,000 robot. The appeal of Baxter is that it is cheap and can be reprogrammed to do a different task every day. But these attributes of small robots are no different in principle from the distinctive advances in machinery dating back to the textile looms and spindles of the early British industrial revolution.

Most workplace technologies are introduced with the intention of substituting machines for workers. Because this has been going on for two centuries, why are there still so many jobs? Why in mid-2015 was the U.S. unemployment rate close to 5 percent instead of 20 or 50 percent?

MIT economist David Autor has posed this question as well as answered it: machines, including futuristic robots, not only substitute for labor, but also complement it:

Most work processes draw upon a multifaceted set of inputs: labor and capital; brains and brawn; creativity and rote repetition; technical mastery and intuitive judgment; perspiration and inspiration; adherence to rules and judicious application of discretion. Typically, these inputs each play essential roles; that is, improvements in one do not obviate the need for the other.

The complementarity between robots and human workers is illustrated by the cooperative work ritual that is taking place daily in warehouses, often cited as a frontier example of robotic technology. Far from replacing all human workers, the Kiva robots in these warehouses do not actually touch any of the merchandise, but are limited to lifting shelves containing the objects and moving the shelves to the packer, who lifts the object off the shelf and performs the packing operation by hand.

The tactile skills needed for the robots to distinguish the different shapes, sizes and textures of the objects on the shelves are beyond the capability of current robot technology. Other examples of complementarities include ATMs, which, as already noted, have been accompanied by an increase, rather than a decrease, in the number of bank branches, and the bar code retail scanner, which works along with the checkout clerk, with little traction thus far for self-checkout lanes.

Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory, summarizes some of the limitations of the robots developed to date: "the scope of the robot's reasoning is entirely contained in the program. … Tasks that humans take for granted – for example, answering the question, 'Have I been here before?' – are extremely difficult for robots." Further, if a robot encounters a situation that it has not been specifically programmed to handle, "it enters an error state and stops operating."

Surely multi-function robots will be developed, but it will be a long and gradual process before robots outside of the manufacturing and wholesaling sectors become a significant factor in replacing human jobs in the service, transportation or construction sectors. And it is in those sectors that the slow pace of productivity growth is a problem.

3-D printing is another revolution described by the techno-optimists. Its most important advantage is the potential to speed up the design of new products. New prototypes can be designed in days or even hours rather than months, and can be created at relatively low cost, lowering one major barrier to entry for entrepreneurs. New design models can be simultaneously produced at multiple locations around the world. 3-D printing also excels at one-off customized operations, such as the ability to create a crown in a dentist's office instead of having to send out a mold, thereby reducing the process of adding a dental crown from two office visits to one.

3-D printing may thus contribute to productivity growth by reducing certain inefficiencies and lowering barriers to entrepreneurship, but these are unlikely to be huge effects felt throughout the economy. 3-D printing is not expected to have much impact on mass production and thereby on how most U.S. consumer goods are produced.

Big data and artificial intelligence. The optimists' case lies not with physical robots or 3-D printing but with the growing sophistication and human-like abilities of computers – often described as artificial intelligence. Brynjolfsson and McAfee provide many examples to demonstrate that computers are becoming sufficiently intelligent to supplant a growing share of human jobs. They wonder "if automation technology is near a tipping point, when machines finally master traits that have kept human workers irreplaceable."

Thus far, it appears that the vast majority of big data is being analyzed within large corporations for marketing purposes. The Economist reported recently that corporate IT expenditures for marketing were increasing at three times the rate of other corporate IT expenditures. The marketing wizards use big data to figure out what their customers buy, why they change their purchases from one category to another and why they move from merchant to merchant. With enough big data, Corporation A may be able to devise a strategy to steal market share from Corporation B, but B will surely fight back with an onslaught of more big data.

An excellent current example involves the large legacy airlines with their data-rich frequent flyer programs. The analysts at these airlines are constantly trolling through their big data trying to understand why they have lost market share in a particular city or with a particular demographic group of travelers.

Every airline has a "revenue management" department that decides how many seats on a given flight on a given day should be sold at cheap, intermediate and expensive prices. Vast amounts of data are analyzed, and computers examine historical records, monitor day-by-day booking patterns, factor in holidays and weekends and come out with an allocation. But at a medium-size airline, JetBlue, 25 employees are still required to monitor the computers. And the director of revenue management at JetBlue describes his biggest surprise since taking over his job as "how often the staff has to override the computers."
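The allocation problem the revenue-management department solves can be caricatured in a few lines. This is a deliberately toy version with hypothetical fares and forecasts: it fills the highest fare buckets first and lets the cheap bucket absorb whatever capacity remains, whereas real systems work from probabilistic demand models rather than point forecasts (which is precisely why human analysts still override them):

```python
# Toy seat-allocation sketch (hypothetical figures). Real revenue-management
# systems use probabilistic demand forecasts; this uses point forecasts only.
def allocate_seats(capacity, buckets):
    """buckets: list of (fare, forecast_demand) pairs, in any order.
    Returns {fare: seats allocated}, filling expensive fares first."""
    allocation = {}
    remaining = capacity
    for fare, demand in sorted(buckets, reverse=True):  # expensive first
        seats = min(demand, remaining)
        allocation[fare] = seats
        remaining -= seats
    return allocation

# 150-seat aircraft; cheap, expensive and intermediate fare buckets.
print(allocate_seats(150, [(129, 80), (399, 30), (229, 60)]))
# the $399 and $229 buckets get their full forecasts; the $129 bucket gets the rest
```

When actual bookings deviate from the forecasts, the allocation has to be redone, which is the day-by-day monitoring the JetBlue staff performs.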

Marketing is just one form of artificial intelligence that has been made possible by big data. Computers are working in fields such as medical diagnosis, crime prevention and loan approvals. In some cases, human analysts are replaced. But often the computers speed up a process and make it more accurate while working alongside human workers.

New software allows consumer-lending officers to "know borrowers as never before, and more accurately predict whether they will repay." Vanguard and Charles Schwab have begun to compete with high-priced human financial advisers by offering "robo-advisers" – online services that offer automated investment management via software. They use computer algorithms to choose assets consistent with the client's desired allocation at a cost that is a mere fraction of the fees of traditional human advisers. But this application of artificial intelligence has not yet made much of a dent in advising high-net-worth individuals.
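The core of such a robo-adviser is an unglamorous calculation. A minimal sketch, with hypothetical holdings and target weights, of the rebalancing step that moves a portfolio toward the client's desired allocation:

```python
# Minimal robo-adviser rebalancing sketch (hypothetical portfolio and targets):
# compute the dollar trades that bring holdings to the target allocation.
def rebalance(holdings, target_weights):
    """holdings: {asset: current dollars}; target_weights: fractions summing to 1.
    Returns {asset: dollars to buy (+) or sell (-)}."""
    total = sum(holdings.values())
    return {asset: round(weight * total - holdings.get(asset, 0.0), 2)
            for asset, weight in target_weights.items()}

trades = rebalance({"stocks": 70_000, "bonds": 30_000},
                   {"stocks": 0.6, "bonds": 0.4})
print(trades)  # sell $10,000 of stocks, buy $10,000 of bonds
```

That this logic can be automated at negligible marginal cost is exactly why robo-advisers can undercut traditional advisory fees, and also why their edge so far lies in routine allocation rather than in the bespoke advice sought by high-net-worth clients.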

Advanced search technology and artificial intelligence are indeed happening now, but they are nothing new. The quantity of electronic data has been rising exponentially for decades without pushing TFP growth out of its post-1970 lethargy, except for the temporary productivity revival period of 1994-2004. The sharp slowdown in productivity growth in recent years has overlapped the introduction of smartphones and iPads, which consume huge amounts of data. These sources of innovation have disappointed in what counts in the statistics on productivity growth: their ability to boost output per hour in the American economy.

Driverless cars. The enthusiasm of techno-optimists for driverless cars leaves numerous issues unanswered. As pointed out by David Autor, the experimental Google car "does not drive on roads" but rather proceeds by comparing data from its sensors with "painstakingly hand-curated maps." Any deviation of the actual environment from the pre-processed maps, such as a road detour or a crossing guard in place of the expected traffic signal, causes the driving software to blank out and requires instant resumption of control by the human driver. At present, tests of driverless cars are being carried out on multi-lane highways, but test models so far are unable to judge when it is safe to pass on a two-lane road or to navigate winding rural roads in the dark.

Even if the technology can be perfected, it is unclear how much it can raise productivity. An important distinction here is between cars and trucks.

People are in cars to go from A to B, much of that travel for essential aspects of life, such as commuting or shopping. Thus the people must be inside the driverless car to achieve their objectives; the additions to consumer surplus from being able to commute without driving are relatively minor. Instead of merely listening to the current panoply of options, including Bluetooth phone calls, radio news or Internet-provided music, drivers will be able to look at computer screens, read books or keep up with e-mail.

The use of driverless cars is predicted to reduce the incidence of automobile accidents, continuing the steady decline in accidents and fatalities that has been occurring for decades. Driverless car technology may also help to foster a shift from nearly universal ownership to widespread car-sharing in cities and perhaps suburbs, leading to reductions in gasoline consumption, air pollution and the amount of land devoted to parking – all of which should have positive effects on quality of life. But none of this will have much impact on productivity growth.

That leaves the advantages offered by driverless trucks. This is a potentially productivity-enhancing innovation, albeit within the small slice of U.S. employment consisting of truck drivers. However, driving from place to place is only half of what many truck drivers do. Those driving Coca-Cola and bread delivery trucks do not just stop at the back loading dock and wait for a store employee to unload the goods. The drivers are responsible for loading the cases of Coke or the stacks of loaves onto dollies and manually placing the goods on the store shelves.

Remarkably, in this late phase of the computer revolution, almost all placement of individual product cans, bottles and tubes on retail shelves is done by humans rather than robots. Thus, driverless delivery trucks will not save labor unless the tasks are reorganized so that unloading and placement of goods from the driverless trucks are taken over by workers at the destination location.

* * *

The problem created by the computer age is not mass unemployment but the gradual disappearance of good, steady, mid-level jobs that have been lost not just to robots and algorithms but to globalization and outsourcing, together with the concentration of job growth in routine manual jobs that offer relatively low wages.
