"More than machinery, we need humanity."
The decisions that one makes in the course of a day are subtly and overtly influenced by a host of factors. From the opinions of others to long-term goals, from societal pressures to the need to satisfy essential needs – the options a person chooses may well be shaped by influences the individual would not recognize as their own. Life in a society opens one up to manipulating factors, along with countless quiet nudges and shouted commands. At what point does personal responsibility in such situations become so foggy that the outlines of the choices one makes become hard to differentiate from the effects of manipulation? How does one gauge responsibility when the very notion seems anachronistic?
This is hardly an idle concern for those who live in technological societies – particularly ones wherein individuals find themselves spending a large portion of their time accessing the Internet (through a range of devices). The memory of Facebook’s emotional experiment had not yet fully vanished from the proverbial news feed before the dating website OkCupid admitted, albeit with a shrug, that they too experimented upon their users. While initial outrage greeted the Facebook admission, OkCupid’s misfired arrow was met with an inured remission of outrage – quiet compliance with the idea that “yes, social media platforms are going to experiment on their users; evidently we accepted this by clicking ‘I agree,’ but we will reap the benefits of improved services.” As these are large corporations that engaged in manipulating many people, it can be challenging to consider those impacted as individuals. Thus, lost in the uproar (raucous or receding) was the question of what such manipulation means for the individuals who use these platforms (and, by extension, the other platforms that likely engage in the same or similar behavior).
Or, to put it another way, to what extent can a person take responsibility for their actions online if the actions they take are the result of algorithms and policies designed to provoke particular actions or evoke certain emotional states? If a battery of tests and unseen matching calculations tells two people they are “a good match” and they proceed to go on a date – how much responsibility can each member of the duo truly claim? Certainly, each one had to perform certain steps, like asking for the date or accepting the offer – but was this done simply because “the website said we’re a 94% match”? Likewise, if one feels depressed (or elated) after an afternoon on a social network and thus fires off a negative (or positive) message – are the choices that led to this posting truly personal, or were they guided by the other posts the site chose to show that person? Where does one locate the responsibility – with the individual human or with the algorithm (designed by other humans)? Both? If so, is it an even split?
Granted, we are so bombarded with decisions and information that the very source of this influx presents itself as the only way to manage it. Far from being a spontaneous development, this problem was foreseen many decades ago by the historian and philosopher Lewis Mumford who wrote of the predicament:
“We have multiplied the mechanical demands without multiplying in any degree our human capacities for registering and reacting intelligently to them.” (273)
Thus online platforms take up the management and curatorial function for us – after all, we would be incapable of reading all of the updates, browsing all of the shelves, reading all of the tweets. So much more is seemingly available – from information, to products, to people we could potentially befriend/date – but it is more than we can logically sort out on our own. We thus become reliant upon products – generally proprietary and for-profit – to give us a sense of increased “human capacities,” though this desire for increased capacities is itself just a response to the way that “mechanical demands” have multiplied the expectations we must navigate. If such seemingly increased capabilities are sometimes cast as “super powers” – it nevertheless remains the case that such powers are barely sufficient for keeping up with the things for which we are now expected to be responsible.
Though Facebook and OkCupid provide particularly pertinent examples – especially given their recent activities – the examples need not remain isolated to them: the online experience we have is generally a catered one, filled with personalized advertisements and recommendations based on our friends or other shoppers who have looked at a given item. Our online interactions – when filtered through proprietary platforms and devices – are built not upon a framework of responsibility but upon the ability of given platforms and devices to shape the responses available. Consider things as seemingly innocuous as the “like” button, the 140-character limit, the hashtag, the ability to “pin it,” emoji, and so forth – when one is only able to respond in particular ways, then responsibility must wind its way through the options available. Once we press the on button, tap the screen, or input our password – we move into a world where every decision we make is shaped by the type of decisions that we are allowed to make.
It is therefore tempting to see – and thus locate – personal responsibility at an earlier level of engagement with a piece of technology: at the moment we turn the device on, or at the moment when we hit “agree” on a Terms of Service agreement. What emerges in such a situation is that responsibility is linked to the initial choice to use a given device, and all later moments where we question the degree of our responsibility wind up being reflections back upon that initial choice. For example: we are still responsible for what we do on Facebook because we are responsible for the initial choice to go on Facebook. While there is some truth to this, and something appealing about it, it nevertheless fails to capture the true extent of technological pressure people experience every day. Indeed, the pressure to use a given platform rarely comes overtly from the platform itself – instead it is couched in the appeals of friends or family: rare is the individual who uses a social media platform for the sake of using a social media platform; more common is the person who uses it in order to connect with friends or family. That this sounds like a platform’s PR slogan only evinces the reason why such slogans can be so emotionally powerful.
As a few major online platforms (Google, Facebook, etc.) rise to increasing positions of dominance, our ability to engage with certain online areas becomes dependent on our willingness to use one of those platforms. And while it is not the case that one “must” use a given platform or device – it is nevertheless the case that not using a given platform may mark one as something of an outsider, prey to such comments as “you’re not on Facebook?” or “you don’t have a smart phone?” As online platforms become the way in which people engage and participate with one another in society – usage of such devices and platforms seems to become a requirement for participation. It is a level of participation that extends across multiple technological spheres: for example, to use Instagram one does not only need the app, one also needs a device like a smart phone that enables picture taking. In other words – the responsibility seemingly inherent in choosing to use a platform or turn on the device does not occur in a vacuum but is the result of a host of societal pressures, and appeasing one set of social pressures sets one up to be pressured by new ones. We are still responsible for making the choice to go online – but if we want to know what is going on in the world, we find that we must first go online. When an individual tries to distance themselves from the online realm, this may be interpreted by others as a shirking of responsibility – as to not be online is to not accept responsibility for everything that is going on in that realm. Media refusal may be interpreted by some as a social affront – to not be on a given social network may come across as an insult to those on it. Legion are the individuals who detest a given platform or company but stay because it has become their primary mode of communicating with others.
An important aspect of responsibility involves knowingly engaging with things in the world, and making informed decisions. This matter is only made more difficult because most of us know precious little about the functioning of the devices and platforms that guide, shape, and warp the decisions that we are able to make. This challenge is only increased by the fact that we simply cannot know what many of these algorithms and devices really do (thanks to trade secrets, intellectual property, copyright). Even the tech company executive may not truly understand the way the algorithms function, even the individual who wrote the algorithm may not truly understand the functioning of the device upon which the algorithm runs, even the person who designed the device may not know the way it will be assembled in a factory, even the…and so on and so on. We are hardly even cognizant of the functioning of the device or platform at the level with which we interact with it, let alone the other levels for which we bear some ethical responsibility. We have enough trouble sorting out our responsibility for going online; issues like mineral extraction, labor conditions, e-waste reclamation, and profit accrual present a different cast of concerns – a set of responsibilities that we can scarcely begin to contemplate. It is not that people do not want to take responsibility for technology, it is that modern technology pulls people into a vast labyrinth in which we may never even encounter that fabled sprite. This is not a new maze in which we have been dropped – citizens of technological society have been attempting to navigate these pathways for decades. In attempting to envision a “sane society,” the philosopher and psychologist Erich Fromm dourly observed the way people become alienated from the technical tools they make use of:
“We are surrounded by things of whose nature and origin we know nothing. The telephone, radio, phonograph, and all other complicated machines are almost as mysterious to us as they would be to a man from a primitive culture; we know how to use them, that is, we know which button to turn, but we do not know on what principle they function, except in the vaguest terms of something we once learned at school…We consume, as we produce, without any concrete relatedness to the objects with which we deal; we live in a world of things, and our only connection with them is that we know how to manipulate or to consume them.” (130)
To the above lines, today could be added the words “those things manipulate us back” – even as the devices and platforms related to the Internet neatly fall into Fromm’s mention of “other complicated machines.” And yet the technological world of today is even more all-encompassing than it was in Fromm’s day (the above words were first published in 1955); after all, the telephone mounted on the wall is a very different device – from a social and technological perspective – than the smart phone. But as demands have multiplied along with devices – the increased complexity has not given rise to an increase in understanding. Indeed, it may be precisely the opposite, as new technologies invite us to pass even more tasks and responsibilities off to the machine. Our devices manage our schedules, tell us where we are, keep track of how many steps we have taken, keep us in constant communication with others, and open up a vast world of informational possibilities – but in increasing the responses we are able to have, what responsibilities have we passed off to the devices? There is something to this that remains resonant with Mumford’s claim that – as technology swelled:
“The belief that values could be dispensed with constituted the new system of values.” (283)
And it may well be the case that responsibility is becoming not simply devalued, but altogether antiquated. Not because responsibility is unimportant, but because we find ourselves in situations where the opportunities to take responsibility may lock us into a range of choices in which we are able to respond in fewer and fewer ways, and where the meaning of those options is steadily eroded.
Yet if technological responsibility seems to be falling victim to planned obsolescence – the question remains: whose plan is this?
Fromm, Erich. The Sane Society. Routledge Classics, 2002.
Mumford, Lewis. Technics and Civilization. University of Chicago Press, 2010.