To understand how tomorrow’s technology will change our lives, we need to look at what yesterday’s futurists got wrong—and right.
Peering into the automated future that he saw quickly approaching in the early 1960s, sociologist David Riesman had one big worry: What would we do with all our leisure time? Riesman had been thinking about the problem ever since coauthoring The Lonely Crowd in 1950, the book that seemed to define a whole decade of American life. He proposed creating a federal Office of Recreation to help Americans cope with the coming banquet of free time and imagined that instead of a New Deal-like Works Progress Administration, a future recession might require a Play Progress Administration.
Riesman was not alone. “By 2000,” Time magazine promised, “the machines will be producing so much that everyone in the U.S. will, in effect, be independently wealthy.” The pessimists shared with the optimists a certainty that the coming change would be nothing short of revolutionary. Francis B. Sayre, Dean of the Washington Cathedral, said that automation worried him more than the specter of nuclear war. “Scientists say that if they construct machines to run society, they must have theologians to tell them what kind of society we want,” he observed, and he wasn’t sure the nation’s religious institutions were up to the job. Marshall McLuhan, the prophet of the new age of media, didn’t seem to feel the need for the dean’s advice. “As unfallen Adam in the Garden of Eden was appointed the task of the contemplation and naming of creatures, so with automation. We have now only to name and program a process or a product in order for it to be accomplished,” he enthused in Understanding Media (1964). We were being freed to lead lives of artistic self-creation and “imaginative participation in society.”
It’s easy to poke fun at McLuhan and the others who pondered the automation “crisis” of his era. But as we experience a fresh wave of alarm over the rise of artificial intelligence and other new technologies, it’s important to ask why and how so many clear-eyed, serious people could have been so wrong, what we can learn from their mistakes—and how, in unexpected ways, they were at least partly right.
The importance of a certain modesty is an obvious lesson of the early debate. The mid-century thinkers got ahead of themselves. The first general-purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), which was rolled out in 1946 and remained in service until 1955, weighed 30 tons and occupied a footprint larger than two Levittown Cape Cods. When it appeared, it had only enough memory to store 20 ten-digit numbers. By the mid-1960s, when the automation debate suddenly ended, there were only about 16,000 computers in the United States, and the best of them could not come close to matching the performance of a modern $600 laptop.
Today, when Alexa fulfills the whims of toddlers, and employers can virtually eliminate a white-collar job by buying a robotic process automation bot with a few mouse clicks, we have a better purchase on the future, as well as more than half a century of experience to contemplate. Yet that hasn’t stopped us from falling prey to hyperbolic thinking and all the other pitfalls that plagued the mid-century prophets. They overestimated the speed of change even as they underestimated the difficulties it would encounter—or the way it would be shaped by humans. They extrapolated into the future from anecdotes and momentary phenomena. These mistakes are all being repeated today. But worse than any mistake the prophets themselves made was the fact that when automation began to take a heavy toll on American industry after the severe recession of 1973-74, the visions of apocalypse and utopia had dissolved, and there was no longer much debate at all. America had moved on to other preoccupations and taken its eye off the ball.
The tenor of the mid-century debate tended to follow the ups and downs of the economy. When times were good, many seers worried about the problems of abundance Americans would face 50 years in the future. When the economy slumped and unemployment spread, the future looked more like an automated nightmare. That pattern has not changed.
Pessimism struck early, in 1950, with Norbert Wiener’s The Human Use of Human Beings: Cybernetics and Society. Wiener was a brilliant, eccentric MIT mathematician and philosopher who had pioneered the field of information theory during World War II. Now he took up his pen to popularize the implications of what he called “cybernetics.” To vastly oversimplify, machines supplied with feedback from sensors (such as photoelectric cells and thermostats) that communicated and self-corrected without human intervention would eventually render humans largely superfluous in many settings.
The shape of the future could be seen in the handful of manufacturing processes that had already incorporated the new technologies, such as steel rolling and canning. The automatic machine is “the precise equivalent of slave labor,” Wiener wrote. “Any labor which competes with slave labor must accept the economic conditions of slave labor. It is perfectly clear that this will produce an unemployment situation, in comparison with which the present recession and even the depression of the thirties will seem a pleasant joke.”
“The machine,” he added ominously, “plays no favorites between manual labor and white-collar labor.” That warning was often repeated. Economist Herbert Simon, a future Nobel Prize winner, declared in 1956 that “machines will be capable, within 20 years, of doing any work a man can do.”
Cooler heads prevailed closer to the front lines of automation. Walter Reuther, the canny liberal president of the United Auto Workers of America, and other labor leaders saw automation as “more blessing than curse,” writes historian David Steigerwald, in part because it promised to erase industry’s dirtiest and most tedious jobs. They believed that automation would create new jobs, allowing workers to upgrade their skills and assume new roles as overseers of the machines, or perhaps rise into the white-collar ranks. The AFL-CIO campaigned for a 30-hour workweek (for 40 hours of pay), but Reuther was not enthusiastic about the idea, believing that Americans were not prepared to use leisure time “intelligently.” At the bargaining table, he pushed for a guaranteed annual wage that would reduce employers’ incentive to replace men with machines.
Opinion on automation split into two camps: Expansionists saw unemployment as a product of weak demand; structuralists argued that automation was the chief cause. Reuther came down squarely in the middle. But as manufacturing jobs continued to disappear, he shifted toward a more structuralist response, emphasizing an array of new federal policies, including job training and targeted aid for distressed communities, some of which were incorporated into President Lyndon B. Johnson’s short-lived Great Society.
It’s worth noting that industrial robots—machines that could be reprogrammed for various uses—first appeared at about this time, and utterly failed to gain traction. In 1961, the Unimate debuted without fanfare at a GM factory near Trenton, New Jersey, where it was put to work as a die-caster, plucking red-hot door handles from a line and dropping them in pools of cooling liquid. Workers were glad to be relieved of the task. A few years later, a Unimate opened and poured a can of beer for an astounded Johnny Carson on The Tonight Show. But that was the height of its success. Labor was plentiful and manufacturers were reluctant to adopt the costly and unproven technology. The American robotics industry fizzled out, and the momentum shifted to Japan. Today, the United States ranks seventh in the world in robot density (robots per 10,000 workers).
Unlike in our own time, entrepreneurs and technologists were little heard from during the first automation crisis. The new technologies were being born in the nurseries of staid corporate giants like IBM and Sperry Rand, which didn’t owe their existence to swashbuckling founders like Steve Jobs or Salesforce’s Marc Benioff. One of the few seers to emerge from the world of business and technology was an entrepreneurial consultant named John Diebold, who was acclaimed (or scorned) as the chief evangelist of automation. Diebold, however, was relatively measured in his pronouncements. A classic science nerd, he had corresponded with scientists and curators as a boy, built his own rocket, and, at the age of 15, founded the Diebold Research Laboratory in the basement of his parents’ New Jersey home. While serving in the merchant marine during World War II, he became interested in the new radar-controlled anti-aircraft tracking and firing systems (just as Wiener had), and that led him to investigate automation. After the war, he went to Harvard Business School, where he wrote what would become a defining work, Automation: The Advent of the Automatic Factory (1952).
Diebold gave the word “automation” its modern meaning. An automated factory wasn’t one in which a few new machines or controls were plugged into the manufacturing process, he said. It meant redesigning the factory’s processes, machines, and sometimes its products around the concept of automation. Diebold built an international consulting firm on the rock of automation and became the chief publicist of the movement. He did not shrink from the charge that automation would increase unemployment, but he rejected the notion that computers would quickly sweep through all sectors of the economy. He regularly complained that critics were making arguments based on anecdotes and lamented the absence of hard facts about the spread of automation. In a refrain that sounds familiar today, Diebold argued that education and training would mute the human economic costs of technology (at a time when, astonishingly, a third of the unemployed had not gone beyond grade school). He did not share our present-day infatuation with STEM education, calling it “the easiest mistake one can make.” A degree of technical facility was needed, but learning to think was more important than specialized knowledge.
That is not to say that Diebold had 20/20 vision. He was a sober analyst when testifying before congressional committees but a bit of an arm-waver when giving a commencement address. Despite his soothing words about unemployment, Diebold believed that the three-day workweek was a distinct possibility—though not in the near future. Like everyone who dared to make specific predictions, he put his name to some howlers. He said that videophones would be commonplace by 1969, for example, and took it for granted that technology would soon allow humans to control the weather.
Yet Diebold’s upbeat perspective looks positively timid compared to those that emerged as the economy turned vigorously upward after the recession of 1958. In The Challenge of Abundance (1961), for example, futurist Robert Theobald asked, “Do we know how to deal with the revolution that will result by the year 2000 if we use our ability to multiply our standard of living by a factor of three or four, and decrease our hours of work to the same extent?”
Theobald was one of the 35 mostly left-wing intellectuals, activists, and technologists who formed the Ad Hoc Committee on the Triple Revolution, which sent a highly publicized open letter to President Johnson in 1964. The letter quickly passed over the “revolutions” in weaponry and human rights to the one that most excited the writers: the “cybernation revolution.” They held it responsible for a list of ills that sounds as if it could have been written yesterday, including unemployment, inequality, and a growing number of people dropping out of a labor force that had “no place for them.” But not to worry.
The “almost unlimited capacity” of the coming cybernetic economy would make all things possible. There is “no question,” they said, that “cybernation would make possible the abolition of poverty at home and abroad. . . . The economy of abundance can sustain all citizens in comfort and economic security, whether or not they engage in what is commonly reckoned as work.” Their reform agenda included a guaranteed national income and many other ambitious programs. Americans would need to invent a new way of life free of “meaningless and repetitive” toil and devoted to leisure and learning.
That a new day was dawning seemed obvious to the committee. Productivity growth had surged to a rate of more than 3.5 percent for three years running, and the writers assured the President that an even greater rate of increase “can be expected from now on.” At the same time, unemployment remained stubbornly high after the short but severe 1958 recession. The obvious explanation was that machines had displaced humans. The machines were already producing more than Americans could consume, they reasoned, and the surplus was only going to increase. It was time to shed traditional notions of economic policy and face Theobald’s challenge of abundance. “The major economic problem is not how to increase production but how to distribute the abundance that is the great potential of cybernation,” wrote the Triple Revolution authors.
A few years later, Herman Kahn, the polymath nuclear strategist who inspired Dr. Strangelove by thinking about the unthinkable, weighed in with an only slightly less audacious view of the future. In the table- and chart-laden The Year 2000: A Framework for Speculation on the Next Thirty-three Years (1967), he and coauthor Anthony Wiener allowed that the onrush of technology could bring authoritarian surveillance, environmental calamity, and other misfortunes, but their perspective was overwhelmingly hopeful. In their main “quantitative scenario,” members of the lower middle class would make between $82,000 and $164,000 (in inflation-adjusted dollars) in 2000 while working only two-thirds as many hours as they did in 1965. Kahn and Wiener speculated in great detail about the shape of work in this halcyon future. Perhaps the five-day workweek would endure, but the day would be reduced to seven hours, and vacations would be eight weeks long. They sketched out nine different scenarios for a four-day workweek, none involving more than 30 hours of labor.
The two writers worried that tomorrow’s leisured lower middle class would be prone to vulgar pastimes and “conservative national policies and political jingoism,” but they had little doubt that “70 or 80 percent of people [will] become gentlemen and put a great deal of effort into various types of self-development. . . . for example, a very serious emphasis on sports, on competitive ‘partner’ games (chess, bridge), on music, art, languages, or serious travel, or on the study of science, philosophy, and so on.”
Even before The Year 2000 saw print, however, the tide was turning against this brand of mid-century optimism. In response to the growing debate over technology, President Johnson had established the National Commission on Technology, Automation, and Economic Progress, and in 1966 it delivered a response of distinctly un-commission-like bluntness to the notion that the United States was on the verge of economic transformation: “We dissent from this view.”
The puzzling stickiness of unemployment amid prosperity that had excited so much speculation had already vanished, the commission noted. Now, with the war in Vietnam growing, the worry was about tight labor supplies and inflation. The sudden jump in productivity that had inspired the Ad Hoc Committee’s dreams was not as significant as it had appeared and had other causes in addition to technology. Finally, a close survey of the nation’s 36 largest industries revealed that automation was not spreading nearly as quickly as anecdotal reports suggested. Sixteen of the sectors were not likely to experience any automation in the next decade.
The real problem, according to the commission and others in the expansionist camp, was that government had failed to use the fiscal techniques available to it to keep the economy expanding. They were confident that Keynesian economic tools gave government the means to this end. Daniel Bell, a commission member and a sociologist then at Columbia University, declared, “Full employment is no problem, without war, in any modern economy where the government is willing to use the fiscal powers available to it.”
Then, almost in an instant, the debate ended.
Thanks to LBJ’s war, unemployment dropped to 3.4 percent (a rate it would not approach again until last year). The Vietnam War, civil rights, and other issues now occupied center stage. America got a taste of the abundance that had excited so much speculation, but it didn’t seem so delicious. A distaste for materialism had been growing during the 1950s, and with the rise of the counterculture and the New Left, it broke with full force. Automation and the computer, once viewed with a mixture of fear and hope, increasingly seemed instruments of the power structure. At the University of California at Berkeley, notes historian Steven Lubar, the Free Speech Movement made the computer punch card a symbol of the depersonalization and regimentation the students felt all around them. Some burned their course registration punch cards along with their draft cards.
The pivot was complete by 1972, when the Club of Rome’s famous report, The Limits to Growth, decisively changed the terms of debate. In its quest for prosperity, the report warned, humankind was ravaging the planet and racing toward economic collapse. The abundance that had seemed so tantalizingly close only a few years before now looked like poisoned fruit. The computer and automation did not figure directly in this new drama, even though the book derived much of its authority from the fact that it was based entirely on a computer simulation. Population growth and resource depletion were the new threats. Industry, once the focus of so much concern, now became for many an agent of evil. Pollution and other ills needed to be heavily regulated and taxed, and if industry suffered and jobs were lost, so be it. Automation no longer seemed such a bad idea. When factory jobs slid away, politicians looked abroad for culprits, despite economists’ evidence of automation’s impact. It was easier to blame Japan. Today, it’s easier to blame China—or Europe or Mexico.
Four years after the publication of The Limits to Growth, Steve Jobs and Steve Wozniak launched a little startup in a garage in Cupertino, California, that would put the computer back on the side of the angels. As the debate that had lapsed in the mid-1960s revived, the computer and the internet emerged as instruments of liberation and creativity. Utopia beckoned once again.
Remember the Long Boom, cyber-utopianism, and the Twitter revolution? It was easy to see the new technologies of the late 20th century as benign world-changers in part because they didn’t immediately threaten jobs. McKinsey & Co. estimates that the PC created 16 million net new jobs in its first four decades, mostly in occupations outside of technology, from financial analysts to call center operators. Yet beneath the glittery surface, wages stagnated, workers left the labor force, and inequality grew. Obvious signs of the machines’ impact started to show in the wake of the severe 1973-74 recession, just a few years after the debate over automation was cut short. MIT economist David Autor called it labor market “polarization”: Wage gains were now going disproportionately to workers at the top and bottom of the “income and skill distribution,” while those in the middle, with jobs more subject to automation, suffered.
Today, the pendulum has swung back toward Norbert Wiener’s dire view. Despite an unemployment rate of 3.6 percent and a long stock market boom, there is a sense of things going wrong. Wages have been rising very slowly, inequality rankles, and segments of the population are in deep distress. And now, some say, robots are stealing our jobs. Nobody is safe. “Will We Still Need Novelists When AI Learns to Write?” a Financial Times headline recently asked. In 2018, Democratic presidential contender Andrew Yang predicted that within a few years a million truck drivers would be thrown out of work by autonomous vehicles. “That one innovation,” he declared, “will be enough to create riots in the street.” There are still plenty of tech enthusiasts, such as Peter H. Diamandis, author of Abundance: The Future Is Better Than You Think (2014). And there are also Triple Revolution die-hards, now called “post-workists,” who argue that automation will bring down capitalism and clear the way for a beneficent socialism. From the far Left comes British writer Aaron Bastani’s Fully Automated Luxury Communism (2019).
The academic world has also entered the fray. MIT economist Daron Acemoğlu, in research with a variety of collaborators, has done some of the foundational work on the impact of automation. Like his colleague David Autor, who argues that labor market polarization “is unlikely to continue very far” into the future, Acemoğlu takes a relatively optimistic view of automation. Other recent academic works include John Danaher’s Automation and Utopia: Human Flourishing in a World without Work (2019) and Carl Benedikt Frey’s The Technology Trap: Capital, Labor, and Power in the Age of Automation (2019).
Frey helped reignite the pessimistic case as the coauthor of a 2013 Oxford University study that concluded that 47 percent of U.S. jobs are at risk of being automated away in the foreseeable future. Other well-publicized studies don’t offer much more comfort. In 2017, the McKinsey Global Institute gave a midpoint estimate of 23 percent by 2030, though it said that new jobs and occupation switching by workers would greatly reduce the impact, just as Reuther and others had hoped in the 1950s.
The best of the new books on the dark side are those whose authors don’t succumb to millenarian despair. In The Rise of the Robots: Technology and the Threat of a Jobless Future (2015), software entrepreneur Martin Ford makes a strong case that technology will eliminate many jobs. Nevertheless, he holds out hope that people with few attractive employment prospects, when provided with a guaranteed annual income, may be freed to pursue entrepreneurial ideas or a refined sort of leisure. Economist Tyler Cowen is more granular and imaginative—and therefore somewhat less encouraging—in Average Is Over: Powering America Beyond the Age of the Great Stagnation (2013). Those who can collaborate adeptly with the new “genius machines” will prosper, he predicts, while the bulk of the population just manages to get by providing services to this growing affluent class, consoled by a declining cost of living and increasingly available “cheap fun.”
Today’s truest dystopians cite the specter of artificial intelligence run amok, with genetic engineering and other technologies sometimes stirred in. Elon Musk called artificial intelligence humanity’s “biggest existential threat” and other luminaries in science and technology have issued equally alarming warnings. Popular historian Yuval Noah Harari paints a vivid picture of civilizational collapse before the onslaught of algorithms in his bestselling Homo Deus: A Brief History of Tomorrow (2017). Harari believes that liberal humanism and all other value systems may fall before the worship of data flow—something he calls “dataism.” “If humankind is indeed a single data-processing system, what is its output? Dataists would say that its output will be the creation of a new and even more efficient data-processing system. . . . Once this mission is accomplished, Homo sapiens will vanish.”
Is this time different? Or are we seeing only another turn in a cycle of speculation about the future?
Neither proposition is entirely on target. The seers of the 1950s and ’60s got a few things partly right, and we can learn from those successes, just as we can learn from the failures. They put their fingers on some of the right issues even if their predictions about how they would play out were badly flawed. As for today’s seers, there is every reason to think they will have the same mixed record.
The mid-century thinkers were partly correct about productivity. Overall, the U.S. economy has a dismal record on this score. Since the 1960s, labor productivity has generally grown at an anemic annual rate of two percent or less. However, productivity in the manufacturing sector, where automation is most easily implemented, rose at close to the three percent rate that caused so much excitement in the 1960s. The story of U.S. manufacturing since that time looks at least a little like what the Ad Hoc Committee and others suggested it would be for the whole economy. Output rose briskly, but manufacturing jobs shrank as a share of total employment; during the dot-com bust of 2000, the number of jobs began to drop in absolute terms (though there has been a slight increase in the past ten years). Yet even this partially successful forecast comes with an asterisk, because quite a few of those jobs—a majority of them, some economists say—were lost to outsourcing and trade, not automation.
Which brings us to leisure. On the face of it, the mid-century seers look like fools. Far from falling, the labor force participation rate rose, and today, at 61 percent, it stands five points above its mid-century rate. The length of the average workweek hasn’t changed much in the past 50 years, but because some groups are working far more hours and the number of two-worker families has grown, the old visions of carefree Americans dabbling in philosophy between tennis matches sound like a bad joke.
Yet one group has seen a substantial increase in leisure: the elderly. They are retiring earlier, living longer, and are a growing share of the population. The 65-and-over demographic has nearly doubled in size, growing from nine percent of the total when the Triple Revolution was trumpeted to 16 percent today. A man who retired at age 65 in 1960 could expect to enjoy another 13 post-work years; today, most men have already hit their easy chairs by 65, and they can look forward to another 18 years of Social Security checks. This was a revolution that was barely talked about, even though the demographic future was one of the few things that could be discerned with reasonable clarity. Advances in medicine guarantee that the elderly will absorb a significant share of any new leisure that machines yield in the future.
Artificial intelligence poses a qualitatively different challenge, threatening finally to realize the dreams and nightmares of mid-century America. The automation that writers such as David Riesman saw sweeping through the white-collar workforce could finally be coming, along with the computer-dominated society some envisioned. But the chances are that our powers of foresight and imagination haven’t grown much in seven decades.
What lessons can we draw from the record of the mid-century prophets of automation? Here are a few:
Tomorrow will not look like today. For all their emphasis on radical future change, many seers conveniently assumed that a few key things would not change. The Ad Hoc Committee and other futurists thought that the robust productivity increases of their time were a permanent feature of the economy. It is a category of mistake that is often repeated. When unemployment remained high in the long hangover after the Great Recession, for example, even some of the more sober-minded futurists concluded that technological change would likely prevent it from ever falling significantly. Unemployment is now 3.6 percent, a rate not seen since the days of the mid-century prophets.
Everything takes longer than you think. Change is often slower than anticipated, and adjustment easier. The inroads of automation into white-collar employment predicted decades ago look more imminent today, but they largely remain over the horizon. Many new technologies take time to develop and integrate into the economy, and some never make it. Autonomous vehicles were supposed to own the roads by now, but their delivery date keeps receding into the future. Corporations, meanwhile, don’t seem to be pouring more money into technology. After rising steadily as a percentage of gross domestic product, investment in information processing equipment and software dropped during the dot-com bust two decades ago and has not resumed its upward march.
Change is unevenly distributed. Technology enthusiasts like to quote science fiction writer William Gibson’s line, “The future is already here—it’s just not very evenly distributed.” The assumption is that it will soon spread everywhere, but in the 70 years since the first automation crisis began, that has not been borne out.
“Acceleration” is a buzzword. People have been talking excitedly about the acceleration of technological change at least since the Industrial Revolution and the idea is liberally sprinkled throughout the current debate. While bursts of speed in particular technologies are often dramatically visible, and it is mandatory to genuflect before Moore’s Law, reliable metrics of long-term broad-based acceleration are hard to find. Has the world changed more rapidly in the 70 years since 1950 than it did in the seven decades from 1880 to 1950, which brought the automobile, electrification, and airplanes? Indeed, some economists argue that our rate of innovation has slowed. Talk of accelerating change is like the pop-ups on travel websites that tell you 36 people have booked rooms at this hotel today—usually meant to excite a reaction. It’s enough to know that change is occurring rapidly.
Technology is not destiny. This should be obvious but is often forgotten. Just because a technology exists doesn’t mean it will be widely or quickly embraced, or that it will be left unregulated. Civilian nuclear power has been stopped in its tracks. Many states do not allow speed cameras. While the broad power of technological change is undeniable, individual technologies can be shaped and redirected. Daron Acemoğlu and Pascual Restrepo write that artificial intelligence can be used to create technologies that enhance human productivity or technologies that eliminate jobs. The latter is currently favored, they argue, because of a market failure: the dominance of a handful of technology firms bent on labor-killing innovation. Why, then, do our tax and legal systems treat the two paths as if they were equivalent?
A healthy skepticism about the rhetoric surrounding today’s automation debates should not prevent anyone from taking the questions posed by the rise of artificial intelligence and other technologies with great seriousness. Fast or slow, unevenly or not, they are coming. The grandiloquence of the mid-century Episcopal clergyman Francis Sayre may elicit a chuckle even today. Still, he was right when he said that technological change requires us to decide what kind of society we want.
This article originally appeared in The American Interest.