"More than machinery, we need humanity."
Amazon may have been expecting lots of public attention when it announced where it would establish its new headquarters – but like many technology companies recently, it probably didn’t anticipate how negative the response would be. In Amazon’s chosen territories of New York and Virginia, local politicians balked at taxpayer-funded enticements promised to the company. Journalists across the political spectrum panned the deals – and social media filled up with the voices of New Yorkers and Virginians pledging resistance.
Similarly, revelations that Facebook exploited anti-Semitic conspiracy theories to undermine its critics’ legitimacy indicate that instead of changing, Facebook would rather go on the offensive. Even as Amazon and Apple saw their stock-market values briefly top US$1 trillion, technology executives were dragged before Congress, struggled to coherently take a stance on hate speech, got caught covering up sexual misconduct and saw their own employees protesting business deals.
In some circles this is being seen as a loss of public trust in the technology firms that promised to remake the world – socially, environmentally and politically – or at least as frustration with the way these companies have changed the world. But the technology companies need to do much more than regain the public’s trust; they need to prove that they deserved it in the first place – which, judged against the long history of technology criticism and skepticism, they didn’t.
Big technology companies used to frame their projects in vaguely utopian, positive-sounding lingo that obscured politics and public policy, transcended partisanship and – conveniently – avoided scrutiny. Google reminded its workers “Don’t be evil.” Facebook worked to “make the world more open and connected.” Who could object to those ideals?
Scholars warned about the dangers of platforms like these long before many of their founders were even born. In 1970, social critic and historian of technology Lewis Mumford predicted that the goal of what he termed “computerdom” would be “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system.” That same year a seminal essay by feminist thinker Jo Freeman warned about the inherent power imbalances that remained in systems that appeared to make everyone equal.
Similarly, in 1976, the computer scientist Joseph Weizenbaum predicted that in the decades ahead people would find themselves in a state of distress as they became increasingly reliant on opaque technical systems. Countless similar warnings have been issued ever since, including important recent scholarship such as information scholar Safiya Noble’s exploration of how Google searches replicate racial and gender biases and media scholar Siva Vaidhyanathan’s declaration that “the problem with Facebook is Facebook.”
The technology companies are powerful and wealthy, but their days of avoiding scrutiny may be ending. The American public seems to be starting to suspect that the technology giants were unprepared, and perhaps unwilling, to assume responsibility for the tools they unleashed upon the world.
In the aftermath of the 2016 U.S. presidential election, concern remains high that Russian and other foreign governments are using any available social media platform to sow discord and discontent in societies around the globe.
Facebook has still not solved the problems in data privacy and transparency that caused the Cambridge Analytica scandal. Twitter is the preferred megaphone for President Donald Trump and home to disturbing quantities of violent hate speech. The future of Amazon’s corporate offices is shaping up to be a multi-sided brawl among elected officials and the people they supposedly represent.
Viewing the present situation with the history of critiques of technology in mind, it’s hard not to conclude that the technology companies deserve the crises they are facing. These companies ask people to entrust them with their emails, personal data, online search histories and financial information, to the point that many of these companies proudly tout that they know individuals better than they know themselves. They promote their latest systems, including “smart speakers” and “smart cameras,” seeking to ensure that users’ every waking moment – and sleeping moments too – can be monitored, feeding more data into their money-making algorithms.
Yet seemingly inevitably these companies go on to demonstrate how unworthy of trust they actually are, leaking data, sharing personal information and failing to prevent hacking, as they slowly fill the world with a disturbing techno-paranoia worthy of an episode of “Black Mirror.”
Technology firms’ responses to each new revelation fit a standard pattern: After a scandal emerges, the company involved expresses alarm that anything went wrong, promises to investigate, and pledges to do better in the future. Some time – days, weeks or even months – later, the company reveals that the scandal was a direct result of how the system was designed, and trots out a dismayed executive to express outrage at the destructive uses bad people found for their system, without admitting that the problem is the system itself.
Facebook CEO Mark Zuckerberg himself told the U.S. Senate in April 2018 that the Cambridge Analytica scandal had taught him “we have a responsibility to not just give people tools, but to make sure that those tools are used for good.” That’s a pretty fundamental lesson to have missed out on while creating a multi-billion-dollar company.
Using any technology – from a knife to a computer – carries risks, but as technological systems increase in size and complexity, the scale of these risks tends to increase as well. A technology is only useful if people can use it safely, in ways whose benefits outweigh the dangers, and if they can feel confident that they understand, and accept, the potential risks. A couple of years ago, Facebook, Twitter and Google may have appeared to most people as benign communication methods that brought more to society than they took away. But with every new scandal, and bungled response, more and more people are seeing that these companies pose serious dangers to society.
As tempting as it may be to point to the “off” button, there’s not an easy solution. Technology giants have made themselves part of the fabric of daily life for hundreds of millions of people. Suggesting that people just quit is simple, but fails to recognize how reliant many people have become on these platforms – and how trapped they may feel in an increasingly intolerable situation.
As a result, people buy books about how bad Amazon is – by ordering them on Amazon. They conduct Google searches for articles about how much information Google knows about each individual user. They tweet about how much they hate Twitter and post on Facebook articles about Facebook’s latest scandal.
The technology companies may find themselves ruling over an increasingly aggravated user base, as their platforms spread the discontent farther and wider than possible in the past. Or they might choose to change themselves dramatically, breaking themselves up, turning some controls over to the democratic decisions of their users and taking responsibility for the harm their platforms and products have done to the world. So far, though, it seems the industry hasn’t gone beyond offering half-baked apologies while continuing to go about business as usual. Hopefully that will change. But if the past is any guide, it probably won’t.
Note: I was asked to write this piece by The Conversation, and it was published there under a Creative Commons license. You can read the original article on their site. I would like to thank Jeff Inglis for his excellent editorial work.
“Technology firms’ responses to each new revelation fit a standard pattern: After a scandal emerges, the company involved expresses alarm that anything went wrong, promises to investigate, and pledges to do better in the future. Some time – days, weeks or even months – later, the company reveals that the scandal was a direct result of how the system was designed, and trots out a dismayed executive to express outrage at the destructive uses bad people found for their system, without admitting that the problem is the system itself.”
Reading this bit, I was struck by how it seems to be a nigh-direct copy of the way the USian gun lobby operates. But then, if the model is proven to be effective, why wouldn’t your “reputation management” consultants sell it to you over and over again?
Great article! I don’t use facebook (henceforth called fakebot) but I am concerned about privacy. I watched the Zuck being questioned in Congress and wasn’t sure of his integrity after learning that just the day before he paid a visit to the congress folks, chequebook in hand, paying out hundreds of thousands of dollars in lobby money for a “favorable” question period — where the hard questions would not be asked.
But, that is just the tip of the iceberg. Read “Pentagon Kills ‘Lifelog’ Project Same Day Facebook Founded” (gopreload.org/pentagon-kills-lifelog-project-same-day-facebook-founded), which seems to indicate that the global “government” is using fakebot for a covert data mining operation — globally. Fakebot’s “friendly” presentation is anything but friendly. That is enough to let me know that the planet is undergoing the greatest experiment in “free speech” in our history. Especially after realizing that bots are not really “free speech” — bots are being used as a propaganda mill by governments — globally. Anyone thinking 1984 — anyone besides me?
That leaves me with a question: How much of fakebot’s growth (new users) is actually bots? That means a room full of phones with fake accounts, pretending to be real people, but manned by perhaps one person, “ ‘producing’ interesting content for you.” How many folks are manning a room full of phones, “ ‘producing’ interesting content for you”? Producing — yes. Interesting — maybe. Inflaming — yes, and not necessarily true. Spreading false narratives?
Fakebot imposes a global government’s value system on this world wide community.
So, first it’s bots, then false flags using mercenaries to take down entire governments opposed to the New World Order’s way of seeing their fake world. Fakebot is being used to inflame divisions to speed up an evil agenda — and our governments are complicit? That spells serious trouble for anyone standing in their way.