"More than machinery, we need humanity."
Lest there be any doubt, the summer of 2017 was characterized by a string of disasters, tragedies, calamities, and almost apocalyptic events. Against such a grim backdrop of lost lives and destroyed homes it can come off as rather crass, privileged, or even uncaring to dwell on other occurrences. And yet, one variety of worrisome trend does not cease unfolding merely because even more dangerous trends are unfolding elsewhere. In the wake of the ruin wrought by this summer’s series of climate-change-exacerbated storms, some have insisted that such storms should be seen as “man-made disasters,” but there is another variety of “man-made disaster” that is also becoming harder to ignore.
The summer’s storms went by names like Harvey, Irma, and Maria – but there’s another set of calamities that go by names like Equifax and Facebook.
The massive hack of the consumer credit reporting agency Equifax is the sort of thing from which cautionary tales are constructed. Hackers gained access to the sensitive personal information (social security numbers, credit card numbers, birth dates) for some 143 million people in the US alone. The bleak irony of this event is that Equifax had that information precisely because it is one of the companies that uses such details to check whether an individual has good credit and therefore can be trusted to repay credit cards, loans, mortgages, and the like. Equifax’s tagline is “Powering the World with Knowledge,” but the hack makes it maddeningly clear that such “knowledge” and the power that comes with it is liable to be stolen. And then a feeling of power can swiftly turn into a feeling of acute powerlessness.
Facebook finds itself in the muck of a different sort of controversy. It was not so much that Facebook was hacked as that some are accusing the company of having played a prominent role in the “hacking” of the 2016 US election. While Facebook had previously been assailed for helping to disseminate news of questionable veracity, the social network’s problems were only deepened as it admitted that it had accepted ad buys from groups that were likely fronts for the Russian government (to say nothing of revelations that the site was allowing blatantly anti-Semitic ad targeting). Mark Zuckerberg has been busy traversing the country for a series of highly stage-managed encounters with “regular folks” that may well signal his personal political ambitions, but recent stories make it clear that Zuckerberg hardly needs to be sitting in elected office for Facebook to wield unsettling political influence.
The Equifax hack of 2017 and Facebook’s role in the 2016 election should be seen as “man-made disasters” of a particular sort. These occurrences are the products of a society that invested its trust and hope in Internet-connected computer technologies and now finds itself at the mercy of those very systems. In the wake of any calamity it is inevitable that, amidst the accusations and recriminations, various voices will wail “who could have seen this coming?” This is the sort of lamentation that seeks to protect the event from criticism by acting as if it were a wholly unforeseen occurrence. However, of the many things that can be said about these category 5 high-tech tempests, one cannot say that they weren’t predicted.
In his 1977 book, Autonomous Technology, Langdon Winner explored the recurrence of the theme of “technology-out-of-control” throughout political thought and history. From Greek philosophers, to Mary Wollstonecraft Shelley’s Frankenstein, to twentieth-century critics of technology, Winner traced a lengthy legacy of warnings about the dangers of technologies and technological systems slipping out of the control of their human creators and masters – often with disastrous consequences. Though technology, as Winner clearly demonstrates, has been a topic of much concern for a variety of writers (novelists, poets, philosophers, historians, social critics), it has remained something of a fringe issue in politics. Certainly, governments have directed impressive quantities of money towards technical research, but responsibility for technology has generally been entrusted to engineers, technicians, and business owners. Beginning in 1972 there was an Office of Technology Assessment, created to advise members of the US Congress about scientific and technological developments, but this office was shuttered in 1995 – and not because the need for it had suddenly vanished.
Early in Autonomous Technology, Winner outlines the social and political problem posed by technology clearly:
“Developments in the technical sphere continually outpace the capacity of individuals and social systems to adapt. As the rate of technological innovation quickens, it becomes increasingly important and increasingly difficult to predict the range of effects that a given innovation will have.” (Winner, 3)
This is a key insight, and it is one that seems every bit as pertinent today as when it was written. From robots, to AI, to human enhancement technologies, to the Internet of Things, to the rise of a handful of massive tech conglomerates, to constant surveillance, to wastelands filled with decaying e-waste, to the abandonment of reality in favor of VR…there is no shortage of “developments in the technical sphere” that today challenge the capacity of “individuals and social systems” to “adapt.” Indeed, one can easily frame the Equifax hack and the manipulation of Facebook in precisely these terms. A company like Equifax brings together and conveniently centralizes a mountain of sensitive information, but the capacity to bring all of this information together seems to have outpaced the capacity to protect it – or to figure out what to do should this information be accessed by malicious groups. Similarly, a massive social network like Facebook – a company that knows from having conducted its own experiments that it can be used to manipulate users – is an obvious candidate for what happens when a technical system develops at a speed and in ways that challenge users to adapt.
The cases of Facebook and Equifax demonstrate that, as Winner argued, the question of responsibility for minding the machines has generally been left to the technology companies themselves. They say “you can trust us,” and many people do wind up trusting them – perhaps recognizing that they don’t have much choice in the matter. You can trust Facebook, or you can abstain from using it, but opting out personally doesn’t protect you from the social impact of the platform. Similarly, many individuals were shocked to realize that their information was bound up in Equifax, despite their having never made use of that company.
Thus, it can swiftly turn out that such trust was unwarranted, as recent incidents reveal that these companies really don’t have everything under control. While the hack of Equifax was a calamity, the company’s handling of the event (and some fishy business activity) made it obvious that the company simply did not know what it was doing. Or take the case of Facebook’s Mark Zuckerberg who, as the public face of the company, has built a public persona on being the affable tech-wiz who wants to “connect the whole world,” a figure who can be trusted because he was so smart that he created Facebook! But as the news of Facebook users being easily conned by bogus stories, and of dodgy agents targeting users with manipulative ads, began piling up, Zuckerberg ceased looking like the wise old sorcerer and began resembling the horror-struck sorcerer’s apprentice who realizes too late that he does not actually know how to stop or control the chain of events he has set off.
Alas, perhaps the clearest sign that it is time to worry about Facebook is that Zuckerberg himself seems to have realized that it is time to worry about Facebook.
The fear that technology has slipped out of control can easily turn into a sentiment that the machinery has become genuinely autonomous (a la Terminator or HAL 9000), but one should avoid making this sort of jump. For not only does it oversimplify matters, it also hides the fact that there are still people behind many of these problems. Equifax is a company built by people, ostensibly to meet the needs of other people, and it was hacked by people. Yes, Equifax deserves plenty of blame, but we need to find out which humans are responsible for building Equifax in such a way that it became so easy a target. Equifax should have been more secure (indeed, given the information it housed, it had a responsibility to be more secure), but in finding out where the blame lies we will inevitably find human choices undergirding the technical ones. As for Facebook, Zuckerberg may be the public face, but it is a company that employs no shortage of other people. It is a company that clearly wants to claim the mantle of being the twenty-first century “public sphere,” but it does not seem to want to shoulder any of the responsibility that comes with that. Facebook is a company built upon a set of human choices which are then reified in the technical systems (and organizational procedures) the company constructs. A company that is diligent in ensuring that no women’s nipples appear on the site (how scandalous!) is surely capable of filtering out bogus clickbait or determining the questionable sources behind a divisive advertisement.
Facebook and Equifax are not technical systems out of control. They are technical systems operating exactly as they were built to function – albeit built by people who don’t seem to have been particularly concerned with the consequences of what they were building, or of how they were building it.
Of course, Winner was correct in noting that it is a challenge to predict the impacts of a given technological development – especially if one has a mind for genuine specificity. Yet it is worth noting that many a critic of technology has come pretty close to the mark in their predictions. Indeed, there is often something prescient about a dash of pessimism. Neil Postman warned that conglomeration and the rise of mass media technologies would produce a public unable (and unwilling) to tell fact from fiction. Shoshana Zuboff wrote of the “information panopticon” while highlighting that the programs that can be used for surveillance almost certainly will wind up being used for those purposes. A host of thinkers have emphasized that an unthinking embrace of everything technological has devastating ecological consequences. Paul Virilio focused his attention on accidents to warn that every technology carries within itself the premonition of the catastrophe it will cause. And the computer stood as an object of particular concern for some of the twentieth century’s most vital critics of technology – even those writing decades before Zuckerberg’s birth. Throughout his oeuvre, Jacques Ellul warned of the rise of technologies like the computer, casting doubt on the actual need for such gadgets, and noting that when it comes to massive technological systems and organizations:
“The bigger an organization becomes, the more accident prone it is.” (Ellul, 113)
And that is a line that seems confirmed by Equifax and Facebook. Granted, Ellul had little confidence that even such accidents would spark much in the way of needed change. Much like Lewis Mumford, he believed that technological systems generally managed to secure the acquiescence of the public by buying it off with a steady stream of pleasurable gadgets. This prompted Ellul to despairingly note:
“The human race has never given historical proof of the wisdom that is needed. We always want more, no matter what the damage or the costs.” (Ellul, 225)
And the present moment is one which provides an opportunity to ask whether Ellul was right. Are occurrences like the Equifax hack and the proof that Facebook is perfectly capable of undermining democracy events that will prompt people to reevaluate the role that we allow technology to play in society…or shall we just double down on technology?
Winner ends Autonomous Technology by staking out a provocative challenge aimed at unearthing and reclaiming the power that has been sapped by technological developments. He proposed what was, and certainly remains, an unsettlingly radical solution. It is one that will surely provoke sneers and scoffs, but it is also the type of response that avoids the risk of society simply being sucked deeper into the technological morass. As Winner explains:
“The idea is that in certain instances it may be useful to dismantle or unplug a technological system in order to create the space and opportunity for learning.” (Winner, 331)
If these recent occurrences wrought by technological developments are not clear cases of just such “certain instances,” how would such “certain instances” look? There are certainly many individuals who enjoy platforms like Facebook and who would chafe at the idea that it be dismantled or unplugged. Yet our society faces a choice: we can begin to carefully dismantle and unplug some of these technologies, or we can do nothing and allow them to continue dismantling our society.
We, unfortunately, don’t have to choose wisely. But, at the very least, we should stop pretending that we haven’t been warned.
Ellul, Jacques. The Technological Bluff. Grand Rapids: Eerdmans Publishing, 1990 (originally published in French in 1988).
Winner, Langdon. Autonomous Technology. Cambridge: The MIT Press, 1989 (first published 1977).