Monday, June 18, 2012

How Capitalism Steered Innovation Toward Social Control Rather Than Technological Wonders

Why Don't We Have Flying Cars?
The technologies that did emerge proved most conducive to surveillance and work discipline.
By David Graeber, The Baffler
June 18, 2012

The following article has been adapted from The Baffler. To read the piece in its entirety, visit TheBaffler.com.

A secret question hovers over us, a sense of disappointment, a broken promise we were given as children about what our adult world was supposed to be like. I am referring not to the standard false promises that children are always given (about how the world is fair, or how those who work hard shall be rewarded), but to a particular generational promise—given to those who were children in the fifties, sixties, seventies, or eighties—one that was never quite articulated as a promise but rather as a set of assumptions about what our adult world would be like. And since it was never quite promised, now that it has failed to come true, we’re left confused: indignant, but at the same time, embarrassed at our own indignation, ashamed we were ever so silly to believe our elders to begin with.

Where, in short, are the flying cars? Where are the force fields, tractor beams, teleportation pods, antigravity sleds, tricorders, immortality drugs, colonies on Mars, and all the other technological wonders any child growing up in the mid-to-late twentieth century assumed would exist by now? Even those inventions that seemed ready to emerge—like cloning or cryogenics—ended up betraying their lofty promises. What happened to them?

We are well informed of the wonders of computers, as if this is some sort of unanticipated compensation, but, in fact, we haven’t moved even computing to the point of progress that people in the fifties expected we’d have reached by now. We don’t have computers we can have an interesting conversation with, or robots that can walk our dogs or take our clothes to the Laundromat.


***


The technologies that did emerge proved most conducive to surveillance, work discipline, and social control. Computers have opened up certain spaces of freedom, as we’re constantly reminded, but instead of leading to the workless utopia Abbie Hoffman imagined, they have been employed in such a way as to produce the opposite effect. They have enabled a financialization of capital that has driven workers desperately into debt, and, at the same time, provided the means by which employers have created “flexible” work regimes that have both destroyed traditional job security and increased working hours for almost everyone. Along with the export of factory jobs, the new work regime has routed the union movement and destroyed any possibility of effective working-class politics.

Meanwhile, despite unprecedented investment in research on medicine and life sciences, we await cures for cancer and the common cold, and the most dramatic medical breakthroughs we have seen have taken the form of drugs such as Prozac, Zoloft, or Ritalin—tailor-made to ensure that the new work demands don’t drive us completely, dysfunctionally crazy.

With results like these, what will the epitaph for neoliberalism look like? I think historians will conclude it was a form of capitalism that systematically prioritized political imperatives over economic ones. Given a choice between a course of action that would make capitalism seem the only possible economic system, and one that would transform capitalism into a viable, long-term economic system, neoliberalism chooses the former every time. There is every reason to believe that destroying job security while increasing working hours does not create a more productive (let alone more innovative or loyal) workforce. Probably, in economic terms, the result is negative—an impression confirmed by lower growth rates in just about all parts of the world in the eighties and nineties.

But the neoliberal choice has been effective in depoliticizing labor and overdetermining the future. Economically, the growth of armies, police, and private security services amounts to dead weight. It’s possible, in fact, that the very dead weight of the apparatus created to ensure the ideological victory of capitalism will sink it. But it’s also easy to see how choking off any sense of an inevitable, redemptive future that could be different from our world is a crucial part of the neoliberal project.

At this point all the pieces would seem to be falling neatly into place. By the sixties, conservative political forces were growing skittish about the socially disruptive effects of technological progress, and employers were beginning to worry about the economic impact of mechanization. The fading Soviet threat allowed for a reallocation of resources in directions seen as less challenging to social and economic arrangements, or indeed directions that could support a campaign of reversing the gains of progressive social movements and achieving a decisive victory in what U.S. elites saw as a global class war. The change of priorities was introduced as a withdrawal of big-government projects and a return to the market, but in fact the change shifted government-directed research away from programs like NASA or alternative energy sources and toward military, information, and medical technologies.

Of course this doesn’t explain everything. Above all, it does not explain why, even in those areas that have become the focus of well-funded research projects, we have not seen anything like the kind of advances anticipated fifty years ago. If 95 percent of robotics research has been funded by the military, then where are the Klaatu-style killer robots shooting death rays from their eyes?

Obviously, there have been advances in military technology in recent decades. One of the reasons we all survived the Cold War is that while nuclear bombs might have worked as advertised, their delivery systems did not; intercontinental ballistic missiles weren’t capable of striking cities, let alone specific targets inside cities, and this fact meant there was little point in launching a nuclear first strike unless you intended to destroy the world.

Contemporary cruise missiles are accurate by comparison. Still, precision weapons never do seem capable of assassinating specific individuals (Saddam, Osama, Qaddafi), even when hundreds are dropped. And ray guns have not materialized—surely not for lack of trying. We can assume the Pentagon has spent billions on death ray research, but the closest they’ve come so far are lasers that might, if aimed correctly, blind an enemy gunner looking directly at the beam. Aside from being unsporting, this is pathetic: lasers are a fifties technology.

Phasers that can be set to stun do not appear to be on the drawing boards; and when it comes to infantry combat, the preferred weapon almost everywhere remains the AK-47, a Soviet design named for the year it was introduced: 1947.

The Internet is a remarkable innovation, but all we are talking about is a super-fast and globally accessible combination of library, post office, and mail-order catalogue. Had the Internet been described to a science fiction aficionado in the fifties and sixties and touted as the most dramatic technological achievement since his time, his reaction would have been disappointment. Fifty years and this is the best our scientists managed to come up with? We expected computers that would think!

Overall, levels of research funding have increased dramatically since the seventies. Admittedly, the proportion of that funding that comes from the corporate sector has increased most dramatically, to the point that private enterprise is now funding twice as much research as the government, but the increase is so large that the total amount of government research funding, in real-dollar terms, is much higher than it was in the sixties. “Basic,” “curiosity-driven,” or “blue skies” research—the kind that is not driven by the prospect of any immediate practical application, and that is most likely to lead to unexpected breakthroughs—occupies an ever smaller proportion of the total, though so much money is being thrown around nowadays that overall levels of basic research funding have increased.

Yet most observers agree that the results have been paltry. Certainly we no longer see anything like the continual stream of conceptual revolutions—genetic inheritance, relativity, psychoanalysis, quantum mechanics—that people had grown used to, and even expected, a hundred years before. Why?

Part of the answer has to do with the concentration of resources on a handful of gigantic projects: “big science,” as it has come to be called. The Human Genome Project is often held out as an example. After spending almost three billion dollars and employing thousands of scientists and staff in five different countries, it has mainly served to establish that there isn’t very much to be learned from sequencing genes that’s of much use to anyone. Even more, the hype and political investment surrounding such projects demonstrate the degree to which even basic research now seems to be driven by political, administrative, and marketing imperatives that make it unlikely anything revolutionary will happen.

Here, our fascination with the mythic origins of Silicon Valley and the Internet has blinded us to what’s really going on. It has allowed us to imagine that research and development is now driven, primarily, by small teams of plucky entrepreneurs, or the sort of decentralized cooperation that creates open-source software. This is not so, even though such research teams are most likely to produce results. Research and development is still driven by giant bureaucratic projects.

What has changed is the bureaucratic culture. The increasing interpenetration of government, university, and private firms has led everyone to adopt the language, sensibilities, and organizational forms that originated in the corporate world. Although this might have helped in creating marketable products, since that is what corporate bureaucracies are designed to do, in terms of fostering original research, the results have been catastrophic.

My own knowledge comes from universities, both in the United States and Britain. In both countries, the last thirty years have seen a veritable explosion of the proportion of working hours spent on administrative tasks at the expense of pretty much everything else. In my own university, for instance, we have more administrators than faculty members, and the faculty members, too, are expected to spend at least as much time on administration as on teaching and research combined. The same is true, more or less, at universities worldwide.

The growth of administrative work has directly resulted from introducing corporate management techniques. Invariably, these are justified as ways of increasing efficiency and introducing competition at every level. What they end up meaning in practice is that everyone winds up spending most of their time trying to sell things: grant proposals; book proposals; assessments of students’ job and grant applications; assessments of our colleagues; prospectuses for new interdisciplinary majors; institutes; conference workshops; universities themselves (which have now become brands to be marketed to prospective students or contributors); and so on.

As marketing overwhelms university life, it generates documents about fostering imagination and creativity that might just as well have been designed to strangle imagination and creativity in the cradle. No major new works of social theory have emerged in the United States in the last thirty years. We have been reduced to the equivalent of medieval scholastics, writing endless annotations of French theory from the seventies, despite the guilty awareness that if new incarnations of Gilles Deleuze, Michel Foucault, or Pierre Bourdieu were to appear in the academy today, we would deny them tenure.

There was a time when academia was society’s refuge for the eccentric, brilliant, and impractical. No longer. It is now the domain of professional self-marketers. As a result, in one of the most bizarre fits of social self-destructiveness in history, we seem to have decided we have no place for our eccentric, brilliant, and impractical citizens. Most languish in their mothers’ basements, at best making the occasional, acute intervention on the Internet.

If all this is true in the social sciences, where research is still carried out with minimal overhead largely by individuals, one can imagine how much worse it is for astrophysicists. And, indeed, one astrophysicist, Jonathan Katz, has recently warned students pondering a career in the sciences. Even if you do emerge from the usual decade-long period languishing as someone else’s flunky, he says, you can expect your best ideas to be stymied at every point:

You will spend your time writing proposals rather than doing research. Worse, because your proposals are judged by your competitors, you cannot follow your curiosity, but must spend your effort and talents on anticipating and deflecting criticism rather than on solving the important scientific problems. . . . It is proverbial that original ideas are the kiss of death for a proposal, because they have not yet been proved to work.


That pretty much answers the question of why we don’t have teleportation devices or antigravity shoes. Common sense suggests that if you want to maximize scientific creativity, you find some bright people, give them the resources they need to pursue whatever idea comes into their heads, and then leave them alone. Most will turn up nothing, but one or two may well discover something. But if you want to minimize the possibility of unexpected breakthroughs, tell those same people they will receive no resources at all unless they spend the bulk of their time competing against each other to convince you they know in advance what they are going to discover.
