"More than machinery, we need humanity."
“The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!” – Joseph Weizenbaum
In a work of dystopian fiction, those responsible for designing the technological systems upon which the repressive state relies are rarely represented. True, there may be the occasional executive, or a reference to the machinery’s creator, but little page space or screen time is generally devoted to the programmers, engineers, and designers who created these technologies. As a result, audiences often encounter the technological systems of these works as naturalized components of their settings. It is easy not to think about how these systems came into being, because these works rarely dwell on the matter. From Zamyatin’s We to Collins’s The Hunger Games to whichever episode of Black Mirror you found most unnerving, these works all feature complex technologies that are key to the dystopia, but the audience rarely sees the legion of workers who built them. This can easily give rise to a question along the lines of: didn’t the people who built these systems realize what they were creating? Perhaps, seeing as some of these dystopias are set in authoritarian worlds, the programmers had no choice.
But we don’t live in a dystopia, at least not yet. So who, then, bears responsibility for creating the technologies that carry clear dystopian potential?
A wave of recent activism from tech company workers has demonstrated that some of the individuals working for these companies are wary about what they are being asked to create, and displeased about the ends to which their work is being directed. Or, at the very least, they’re angry about the relationships the companies for which they work have with various aspects of the military-industrial complex and the repressive apparatus of the state. In recent weeks: Alphabet (Google) employees have expressed anger over their company’s role in drone warfare, Amazon employees have demanded that their company stop creating and selling facial-recognition software to the police, and Microsoft employees have insisted that their company cut ties with ICE. These stories have unsettled the popular image of tech company employees as libertarian tech bros with no sense of responsibility for what they create. They’ve displaced the image of Mark Zuckerberg’s blasé responses when testifying before Congress with a sense that tech companies are sites of resistance wherein the workers are challenging the direction their companies (and their work) are taking the world. It’s a feel-good story in the midst of a harrowing moment, even for those who don’t directly share these workers’ politics, for it says something about people working in tech that we rarely hear, namely: they care.
Lest there be any doubt, the workers at these tech companies who are organizing, speaking out, and pushing back are to be applauded and commended. It can be unsettling to realize that you are a cog (or that you’re building the cogs) in a vast repressive machine, and picking activism over lethargy is rarely the safe route to take – especially as it isn’t particularly clear just how many of their co-workers those speaking out really represent, or what costs these workers may have to pay should they genuinely start creating problems for their employers.
Yet even while applauding these activists it is worth taking a step back to consider the broader context. At risk of being unfair: it is encouraging when arsonists begin to speak out against particular acts of arson, but they’ve already poured the gasoline and thrown the lit match.
There has always been something rather problematic about the analysis that paints all of the tech industry as being populated by white libertarian bros. This is not to say that there is no truth to this viewpoint, the analysis put forth by Richard Barbrook and Andy Cameron in “The Californian Ideology” remains influential for a reason, but it has always been somewhat insufficient. Of course, there are other types of individuals and viewpoints present at these companies. Indeed, this analysis might be best if it is narrowly directed at the people running these tech companies – it isn’t necessarily true of the average programmer (though it is certainly true of some of them). And for every tech executive or money-eyed young programmer there is likely a company or person who genuinely believes that their start-up or gadget will make the world a better place – the Californian Ideology has its counterpart in the New York Ideology.
All of which is to say, these tech company activists are making it clear that the CEOs do not speak for all of their employees. Which is good.
And one should not doubt the sincerity of these activist employees. It’s great that they’re speaking out against drone warfare, against selling facial-recognition software to the police, and against corporate complicity with ICE. Yet there is also an admittedly cynical analysis that sees these stories about tech company workers as being exactly what the tech companies need right now.
Silicon Valley’s tech behemoths have not had a particularly pleasant couple of months. Sure, the companies are still raking in massive sums of money, but the companies are aware that their once shiny auras have become tarnished. When Mark Zuckerberg found himself dragged before Congress he was not just representing Facebook, but the whole of the tech industry. And though Google executives were likely pleased that they weren’t sitting before the cameras, the public fury at the tech sector for its actions during the 2016 Presidential campaign is not only directed at Facebook. While it is true that Zuckerberg emerged largely unscathed from his testimony, largely because Congress lacks the political will to actually do anything about Facebook or its ilk, there is one major area in which his testimony was a complete failure. Zuckerberg needed to seem like he really cared. And he didn’t. He repeated his carefully rehearsed talking points, mouthed his standard techno-utopian blather, let slip a few details that likely made Facebook’s lawyers cringe, but he ultimately failed to seem contrite or caring. Given the frequency with which Facebook has to apologize one would think the company would be better at it by now.
Luckily, to the rescue come the activist tech workers! And one of the things they are rescuing is the image of the companies for which they work. Because the message that these activists broadcast loud and clear, and which major media outlets have been happy to help them disseminate, is: we care.
This is a message which it’s hard for tech CEOs to make these days, because they’ve spent years demonstrating by their actions that they really don’t care. But in the hands of their employees this message helps revitalize the image of these companies. No longer do they appear as monopolistic octopi getting their suckers onto everything, once more they appear like spunky ragtag operations filled with activist exuberance and a willingness to take on “the man.”
There is much to protest when it comes to these massive tech companies. Certainly, the particular issues that seem to be the current focus of activist ire are deserving of that outrage. But it is hardly as if the recent topics that seem to have set off these activists are the first galling things done by these companies. To consider but a few: Amazon’s Alexas (like most high-tech gadgets) are manufactured in sweatshops; Amazon’s warehouses are sites where highly exploited workers barely earn a living wage; Amazon has huge contracts to provide services to the NSA and CIA; Google (as well as Facebook) has constructed a surveillance apparatus that would make Big Brother blush; and, without meaning to be crass, you can rest assured that many repressive elements of the government are reliant on Microsoft’s humdrum products like Word, Excel, and Windows. This list could go on.
So, why this burst of activism now? Or, to put it differently, why is this burst of activism getting so much attention right now?
It may be that there has previously been quieter activism within these companies around the aforementioned issues, but what makes the current activism noteworthy is the amount of coverage it’s generating. Thus, it is quite possible that this activism is in keeping with the general wave of activism being seen in the US right now. In which case this activism seems to be less about tech employees taking on the tech companies, and more about tech employees taking on the Trump administration. And it should be noted that in the three current cases where this activism is getting a lot of attention the workers aren’t really challenging the tech companies (as such), rather they’re challenging the ways in which these companies are cooperating with the Trump administration. It’s not that this is a bad thing. But it pulls a bit of a bait-and-switch. Yes, the things that the Trump administration (and police forces unleashed by that administration) could do with these powerful technologies are bad, but it means that the problem winds up getting framed as the Trump administration (which is certainly a problem) and not the tech companies themselves (they are certainly also a problem).
It is not the intent to suggest here that these activists are puppets or pawns, and to reiterate their actions should be applauded; however, it is worth considering that at a moment when the tech companies desperately need to seem like they care, these activist employees are giving the companies precisely that opportunity. The script in this situation flips back and forth in an odd way: Google is bad for playing a role in the drone war, but Google is good because Google’s employees are going to hold it to account; Amazon is bad for selling bias-laden facial-recognition software to law enforcement, but Amazon is good because Amazon’s employees are going to hold it to account; Microsoft is bad for working with ICE, but Microsoft is good because its employees are going to hold it to account; and so forth. The danger is that this becomes another case wherein Silicon Valley’s guilty conscience is held up as the solution to the things of which Silicon Valley is guilty. It is a narrative in which outsiders don’t need to care about what’s going on at these companies, because these activist campaigns suggest that these companies can regulate themselves. Whereas we are in the present situation because these companies have repeatedly demonstrated that they aren’t particularly concerned with ethics or the negative implications of their actions.
Thus, the risk posed by these well-publicized campaigns is that they distract from just how deeply problematic these companies truly are by holding up a handful of employee reformers as the solution. Or, to put it another way, it’s certainly bad that Google was getting involved in drone warfare, but even if employees get Google to halt its involvement (at least for a time) Google remains a highly problematic company. And these companies can easily take this moment to bow to the pressure from concerned employees in order to gain some positive progressive PR. Sure, they might lose some valuable contracts in the short term, but what they’re really trying to head off is the mounting public frustration that could culminate in a genuine push for these companies to be broken up. These stories make these companies seem rebellious and cool again, and they make it seem like these companies can hold themselves accountable. But from exploited workers, to displaced residents, to experiments on users, to planned obsolescence, to mountains of e-waste, to an inability to foresee consequences, to the refusal of responsibility, to rampant corporate surveillance, to…it is horridly obvious that these companies are not particularly interested in being held to account by themselves or anyone else.
It is vital to recognize that just because we use devices and platforms created by these companies, it does not mean that these companies are our friends. And just because it looks like there are a few employees at these companies who could be our friends, it still doesn’t mean these companies are our friends.
Lingering in the background of this whole matter is a more complex and significant issue, namely: to what extent are tech company employees responsible for the uses to which their companies put their work? Are people working on facial-recognition technology responsible for the uses to which police departments (or outright authoritarian regimes) put that which they created? This is a more complicated question than can easily be answered here, though if one digs into the thought of many past social critics concerned with technological issues the answer would seem to be “yes.” This becomes a question of means and ends: are those who create the means responsible for the ends to which the companies for which they work put them? Considering a similar sort of conundrum, Jacques Ellul once wrote:
“today everything has become ‘means.’ There is no longer an ‘end;’ we do not know whither we are going. We have forgotten our collective ends and we possess great means: we set huge machines in motion in order to arrive nowhere.” (Ellul, 51)
Those comments seem a fitting indictment of today’s tech companies, and of many of those who work for them. For, alas, these companies routinely demonstrate that they are in possession of “great means” and they certainly “set huge machines in motion,” but the problem is that the “nowhere” we are arriving at is not a utopian “no-place” but a dystopian void. This late in the game it requires an increasing level of naïveté to ignore the nowhere to which we are heading – the grim cyberpunk future of a blighted environment and authoritarian corporate control – but too many still act as if they do not know that this is where we are going. This is what happens when complex technologies, and those who create them, are untethered from a concern with “collective ends” and allowed to see themselves (read: technology itself) as the end that matters. Thus, the means become the means for more means. And this situation is held up by those who still want to frame technology as neutral: it isn’t that facial-recognition is itself bad, but that it can be used for good or bad. But, to return to Ellul,
“In reality, when we say that we regard technics as neutral, we really think, at bottom, that it is good. The very fact that it extends man’s powers show that technics is good. Today the means are justified by the power which they give to man.” (Ellul, 59)
Ellul in these comments is speaking of “man” in the abstract, and his point is couched in a larger argument that prompts us to consider which people benefit and which people lose out.
This returns us to the conundrum facing the activists at these tech companies. While it is unwise to attempt to speak for them, their employment by these companies (and their attempt to push these companies in a different direction) suggests that they believe in these means, and that they believe that what they are doing is more about “the good” than just “the goods.” This is not to paint these employees as starry-eyed, but at this point to believe that companies like Google and Facebook can be redeemed is to demonstrate that one still harbors a genuine faith in the positive potential of the sorts of technologies these companies disseminate. And to be clear, it seems that many people still hold to that faith – even if it has been slightly shaken of late.
But frankly, that just isn’t good enough. Especially from the people who work for these companies.
While these employee activists, to say it again, should be applauded for what they are doing, the situation we are in speaks to a stunning lack of imagination on their parts. Silicon Valley is filled with individuals who are able to imagine all manner of different doodads and gadgets, but these same people seem woefully inept at imagining the negative potential of these same things. What’s more, by focusing on the hoped-for positive potential, the dangers often go overlooked, ignored, or pushed aside out of the belief that the gains will surpass the losses. But the truth is that, even if they might have some neat potential, there are some technologies where the potential risks simply outweigh the potential benefits. And in such cases it is not enough to demand that a parent company not sell these things to the police; if tech workers are serious they need to refuse to build these things altogether. They need to begin the slow and careful work of dismantling these systems, and they need to rally the tech community to realize that these means are contrary to the “collective end” of humanity. Or, to put it clearly, nobody should be working on facial recognition systems. At least not until there is a definite and clear framework for the ways in which these systems can be used. But even then it may be necessary to state that the potential societal harms from this technology outweigh any potential gains. This is not to say that there are no potential gains, but these gains cannot be isolated from the risks, and where those potential risks are greater than the potential gains, things need to be stopped. What makes this problem particularly pernicious is that many of those who are making these systems, as a result of their societal position, see themselves only as beneficiaries of these technologies.
It’s easy to be enthusiastic about facial recognition when you know that the police won’t be using this system to profile you – and that may well describe the ethical blinders being worn by too many in the tech world.
It may seem foolish, or itself naïve, to make such a suggestion, to say that some things simply should not be built. When it comes to technology, the precautionary principle has rarely been more than a Cassandra call, and yet if we are serious about checking the power of these dangerous technologies then we need to be willing to do more than politely applaud tech company workers when they sign a strongly worded petition. The idea that “someone else will just build these” may have some truth to it, but it also serves to absolve the tech sector (and the rest of us) of responsibility for what’s going on. In the present moment those who work to design and create new technologies need to be seriously considering the repressive potential of the things they are making. Those of us on the outside of the tech sector need to help them see these dangers if they refuse to consider them. And, though it may be cynical, when the media is suddenly filled with stories about tech company workers demonstrating how much they care, we need to be aware of the way in which these stories ultimately help to shore up the strength of these companies.
For, the issue is not that the road to a high-tech dystopia is paved with good intentions. The issue is that people find themselves walking along the road to high-tech dystopia, but just keep walking in the same direction.
Ellul, Jacques. The Presence of the Kingdom. Colorado Springs: Helmers and Howard, 1989.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman and Company, 1976. Pg. 241.