By Peter Richardson
“Rise of the Robots: Technology and the Threat of a Jobless Future”
A book by Martin Ford
There’s good reason to believe that robots will replace more and more workers, especially those who perform routine tasks, in the coming years. According to one analysis, up to 47 percent of jobs in the United States now performed by humans will be performed by machines within two decades. In the past, this sort of job loss was attributed to “creative destruction” — the destruction of something old by something new, which economist Joseph Schumpeter considered “the essential fact about capitalism.” Continuous innovation sustained economic growth even as it destroyed older modes of production. Predictions of long-term joblessness went awry because innovation also transformed our wants and needs. When I was running Fortran programs using punched cards, I didn’t realize I would eventually need a handheld computer that also doubled as my telephone, camera and navigational guide.
In “Rise of the Robots: Technology and the Threat of a Jobless Future,” Martin Ford, a software developer and computer designer, predicts that the next wave of job losses will be different. The main reason for this difference, he argues, is Moore’s Law. Formulated in the mid-1960s by Intel co-founder Gordon Moore, that axiom predicts that advances in chip technology will increase computing power exponentially. When combined with advances in robotics and artificial intelligence, these gains will make robots the most efficient way to perform routine work now allocated to humans.
We shouldn’t underestimate the consequences of accelerating automation, but Ford’s thesis is vulnerable to two sets of counterarguments. The first set is economic. Yes, American jobs are disappearing, but Ford never makes a convincing case that automation — rather than the neoliberal policies we have pursued for decades — is the main culprit. Indeed, it often seems he has read but not fully digested the relevant literature in labor economics, international trade, economic history and political economy. This lack of expertise doesn’t prevent him from offering a wide range of policy prescriptions.
Much of Ford’s book considers the role of automation in various sectors of the economy, not all of which conform to his thesis. By putting robots at the center of his health care discussion, for example, he seems to overlook the real reasons Americans pay so much (and show worse health outcomes) compared with residents of other advanced countries. In the second half of the chapter, however, Ford turns to the well-known drawbacks of the American approach. He concludes that “health care is a broken market and no amount of technology is likely to bring down costs until the structural problems of the industry are resolved.” This conclusion appears to be a setback for his argument, but instead of modifying his thesis, he suggests “a brief detour from our technology narrative” to offer his policy solutions for this sector. Then, having acknowledged only a tenuous connection between automation and health care costs, which account for 18 percent of the American economy, Ford returns to his robo-centric discussion.
The other main challenge to Ford’s analysis, one he never addresses, is philosophical. More than four decades ago, philosopher Hubert Dreyfus outlined the conceptual limits of artificial intelligence in “What Computers Can’t Do” (1972). Those limits revolve around the difference between computation, which machines do very well, and consciousness, which machines don’t possess. Many in the high-tech community ignored or mocked Dreyfus’ argument, but by the early 1990s, most had conceded that his critique was on point. It resurfaced in 1999, when Dreyfus’ former colleague, John Searle, assessed Ray Kurzweil’s book, “The Age of Spiritual Machines: When Computers Exceed Human Intelligence.” Searle called that work “an extended reflection on the implications of Moore’s Law” and argued that Kurzweil, an accomplished high-tech inventor and controversial futurist, had left a “huge gulf between the spectacular claims advanced and the weakness of the arguments given in their support.” The main drawback, Searle claimed, was that Kurzweil had failed to distinguish between artificial intelligence and consciousness. When Kurzweil complained about the review in print, Searle made quick work of him.
Although Ford clearly harbors misgivings about Kurzweil’s techno-optimism, he nowhere distinguishes between computation and consciousness. For that reason, he can imagine a world in which robots will furnish at least some of our journalism. By way of example, he offers a computer-generated report of a baseball game. I don’t doubt that robots can supply certain kinds of sporting news, nor do I doubt that they will do so with increasing sophistication. I’m certain, too, that they can produce routine weather reports and business news. That trend may be worrisome for some beat reporters and meteorologists, but Ford’s analysis is another way of saying that not all news is journalism. Although that field is shedding jobs for reasons less directly related to robots, I suspect we will demand conscious journalism (as well as news) for some time to come.
Ford’s discussion of higher education takes a similar line. After discussing the disappointing results of massive open online courses (MOOCs), Ford indulges his own techno-optimism, hoping against all early evidence that the highly touted MOOCs will “bring high-quality education to hundreds of millions of the world’s poor.” He then turns to the rising costs of, and bleak employment picture in, higher education. “Thus far, colleges and universities have largely been immune to the substantial increases in productivity that have transformed other industries,” Ford reports. “The benefits of information technology have not yet scaled across the higher-education sector. This, at least in part, explains the extraordinary increase in the cost of college in recent decades.”
Actually, the main reasons for such cost increases are simpler and closer at hand. In many states, public investments in higher education have taken a back seat to spending on prisons and health care. But there’s a more fundamental misunderstanding here. Like many casual observers, Ford thinks higher education is primarily about information, which has never been cheaper or more abundant. But elite higher education is primarily about transformation, which is best produced in small batches. This distinction, which resembles the one between news and journalism, is lost on Ford. That is perhaps clearest when he considers the machine scoring of student essays. “English professors have little reason to fear that the algorithms are poised to invade upper-level creative writing seminars,” he assures us. “However, their deployment in introductory courses might eventually displace the graduate teaching assistants who now perform these routine grading tasks.”
I’ve graded college essays for decades, and I happen to believe that any paper worth evaluating requires a conscious reader. But let’s say I’m wrong; what does this suggest about the papers we’re asking students to write? If these assignments aren’t designed to sharpen their thinking, or if writing is seen as a testable skill rather than a mode of discovery, maybe we should let computers write the essays as well as grade them. The real problem with this conception of higher education isn’t robots acting like people, but requiring young people to act like robots. If this problem bothers Ford, he masks his discomfort well.
As Ford works his way through each economic sector, his main claim begins to bifurcate. The weak version (increased automation) is obviously true, and the strong version (robot takeover) is probably false. Yet even as Ford’s argument fragments, his discussion moves toward intriguing policy questions and solutions, the most important of which is a guaranteed income. As Ford notes, this isn’t a new idea, and many of its earlier exponents were unterrified by increased automation. Indeed, many were delighted that robots would relieve us of tedious labor. One such person was Buckminster Fuller, who celebrated the contemplative possibilities of leisure. “The true business of people,” he claimed, “should be to … think about whatever it was they were thinking about before somebody came along and told them they had to make a living.”
In some ways, Ford’s argument resembles a MacGuffin, Alfred Hitchcock’s term for a plot device that launches a narrative but is otherwise irrelevant to its climax and meaning. To illustrate this point, suppose we granted the strongest possible version of Ford’s thesis: robots eliminated all human labor, and we still had abundant food, housing, health care, education, entertainment, etc. How would we divide the robotic output? I suspect few would argue that robot owners should control 100 percent of the wealth. Now replace full automation with the level we have now, re-pose the question and try to explain why 1 percent of the population should control half the world’s wealth. What makes this arrangement — which happens to be the current state of affairs — more defensible?
Ford’s analysis is an indirect route to such foundational questions — not only in political economy, but also in our understanding of what it is to be human. But if he and his high-tech readers are ready to make that trip, I have no interest in discouraging them.