"More than machinery, we need humanity."

An island of reason in the cyberstream – on the life and thought of Joseph Weizenbaum

The juxtaposition between the potential of technology and its actual manifestation can be rather jarring. Tools that promise to make tasks easier are used to automate people out of their jobs, devices that trumpet their power of connectivity leave their users feeling alienated, and the machines that propel humans to the stars are the cousins of rocket delivery systems that could rain doom upon humanity. In writing about the “paradoxical role” of technology, Joseph Weizenbaum captured this disjunction eloquently, noting: “Our adventure with science and technology has brought us to the very brink of self-destruction…and also to unprecedented comfort and even self-fulfillment to many of us. Some of us are beginning to think it is not such a fair bargain after all.”[1]

As a computer scientist and longtime professor at the Massachusetts Institute of Technology (MIT), Joseph Weizenbaum has secured himself a well-deserved place in the annals of computer history for his program ELIZA and for his role in the development of the programming language SLIP. Yet, what sets Weizenbaum apart from his peers and colleagues is not his successes as a computer scientist but his recognition of the effects that advances in his field were having upon the wider society. From a position wherein he found himself surrounded by people more interested in the technological than the human, Weizenbaum embraced his role as an iconoclastic, even heretical, figure – pushing back against the ideological embrace of technology at its very source.

For Weizenbaum, the computer could not be separated from the social context in which these machines were couched. Therefore he rejected the mantle of computer critic, preferring to cast himself as a social critic.[2] Like a magician revealing the secrets of the craft, Weizenbaum cautioned computer users against becoming mystified by mechanical magic, even as he reassured them that they too could come to understand the workings of the machines that worked over their lives. Holding to an ethos that emphasized the need for computer programmers to accept responsibility for their creations, Weizenbaum set himself in opposition to those he derided as “compulsive programmers,” the “artificial intelligentsia,” and those who refused to think through the implications and applications of their work.

Having lived from 1923 to 2008, Joseph Weizenbaum witnessed significant societal, political and technological shifts. These life experiences left indelible marks on Weizenbaum’s view of the world – especially as he was not an idle observer of the changes around him, but an active participant in these shifts, particularly where advances in technology were concerned. Not only the computer as such, but the Internet, and the range of objectives for which computers were utilized all came under Weizenbaum’s impassioned analysis. As a social critic and computer scientist, Weizenbaum’s critiques have lost little of their power with time. And the present volume demonstrates the richness of Weizenbaum’s thought along with his belief that issues involving computers are far too important to be left to computer scientists alone.

After all, computers and technology still perform a paradoxical role in society.

[Note – the preceding, and following, text is the introductory essay for the book Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society, which is a lengthy interview between the computer scientist and social critic Joseph Weizenbaum and Gunna Wendt (translated by Benjamin Fasching-Gray). In writing this essay, my goal was to provide readers of Islands in the Cyberstream with an introduction to Weizenbaum’s life and thought. It is posted here, with the publisher’s permission, in the hopes of introducing more people to this important and challenging thinker. If you find this introduction interesting I highly recommend that you pick up a copy of Islands in the Cyberstream – it is a truly exceptional book. Though the introduction is posted here in its entirety, it is quite long, and thus you can also download and/or read it as a pdf if you prefer.]


From Berlin to Michigan to Massachusetts

Joseph Weizenbaum was born in Berlin on January 8, 1923. While his father hailed from an orthodox Jewish background, Weizenbaum’s upbringing was not particularly religious, though he and his older brother both received religious instruction. While the Weizenbaums were able to escape Germany before the worst of the Nazis’ atrocities – leaving Germany for the United States in 1936 – the experience of growing up amidst the rise of fascism left an impression on Weizenbaum that would doggedly follow him throughout the rest of his life, wherever he went.[3]

As his family left Germany when he was only thirteen years old – in fact, they left Germany on his thirteenth birthday – Weizenbaum was only tentatively aware of the changes that were befalling the country of his birth. The Weizenbaum family departed Germany in the period when, at least as he perceived it, the main groups being hounded by the Nazis were political opponents rather than those of Jewish descent. Nevertheless, the rise of the Nazis and their anti-Semitic policies did have direct impacts on Joseph’s life, as the Nazis’ laws forced him to leave the public school he had been attending and enroll in a school for Jewish boys. Berlin, and the world around Weizenbaum, increasingly came to be a site of insecurity. The policeman on the corner transformed from a figure a child could turn to for assistance into a man whom a Jewish child had to avoid. Berlin had become home to bars frequented by the SA where horrid things occurred in backrooms, and streets where members of the Hitler Youth lurked waiting to attack Weizenbaum on his way home from school – and yet these things just appeared as evidence to the young boy that “we simply lived in a cruel society.”[4] Though the specific reasons for his family’s departure were not clear to Weizenbaum at the time of their emigration, he was still aware that his family “had just escaped something evil.”[5] The sense of growing unease only served to make Joseph anxiously aware that many of his former friends and classmates were staying behind in Berlin even as he made his way first to England and then across the Atlantic.[6]

Arriving in the United States, Weizenbaum was acutely aware of his difference from his peers. Immigration was not something for which he had been carefully preparing himself – it happened suddenly and by necessity – and thus he arrived in the US speaking no English. Having been educated in German schools, and Jewish schools in Germany, Weizenbaum found himself quickly having to overcome the knowledge gap between himself and his new peers. He was not only learning how to live in a new country but simultaneously had to learn the history of his new home. Yet, for Joseph, being different was a source of strength as he adjusted to life in Detroit, Michigan. While he was still struggling with the English language, his nascent interest in mathematics grew rapidly, as it was a subject he could make sense of: mathematics is a universal language. It was Joseph’s affection for mathematics that would eventually lead him to the computer.[7]

After graduating from high school, Weizenbaum went on to Detroit’s Wayne State University where he studied mathematics and from which he would earn a BS and an MS – though his studies were disrupted by a stint as a meteorologist for the Army Air Corps during WWII, he returned to his studies at the war’s end. Joseph’s introduction to the computer entailed becoming genuinely involved in what was then a still inchoate field, as he was offered the opportunity to “contribute as an assistant on the construction of a computer”[8] whilst he was still at Wayne State. The personal computers of the twenty-first century bear only a passing similarity to the computer that Weizenbaum helped construct at Wayne State. Indeed, the computer he participated in building “filled an entire lecture hall” and was nicknamed “Whirlwind” while the next one was given the equally imposing moniker “Typhoon.”[9] Upon graduating from Wayne State, Weizenbaum went on to work briefly in the private sector where, while in the employ of the General Electric Corporation, he helped develop the automatic bookkeeping and proofing system ERMA (Electronic Recording Machine-Accounting) for Bank of America. He left the corporate world in 1962 when the Massachusetts Institute of Technology (MIT) offered him a position as a visiting professor.

It was at MIT that Weizenbaum would create ELIZA and where he would gradually grow increasingly concerned about the impact that computers were having upon society.



ELIZA

While at MIT, Joseph Weizenbaum developed a computer program that secured his place in the annals of computer history – that program was ELIZA. The name ELIZA was chosen for this “language analysis program because, like the Eliza of Pygmalion fame, it could be taught to ‘speak’ increasingly well.”[10] The program allowed a person to communicate with a computer using natural, conversational language. This resulted in responses from the computer that could give the impression that the computer understood what was being said, and that the computer was even conversing back. An individual “conversing” with ELIZA would type a message in natural language on a typewriter connected to a computer running the program. After they typed their message the computer would generate a response that would be displayed on the same machine.[11]

The early script which ELIZA ran was such that the program responded to questions in a way similar to the methods employed by Rogerian psychotherapists – meaning that ELIZA would generally respond to a user’s message by taking the very words of that message and echoing them back in the form of a question. Indeed, for this incarnation of ELIZA – which was sometimes referred to as DOCTOR – the human user was actually directed to engage with the program as though they were genuinely speaking to a psychiatrist. The reason for this instruction was that it allowed ELIZA to seem as though it was truly participating in the conversation, as “psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world.”[12] What ELIZA accomplished was the appearance of awareness, a result of its human interlocutor projecting onto the program a belief that they were being understood. When ELIZA responded with a sentence like “Tell me more about your family”[13] the context did not make it seem that this question came from ignorance about families, but just the opposite.

The ELIZA program was able to give the appearance of engaging in real discussion through the execution of a “transformation rule” which was applied when the program detected certain keywords in the text. If ELIZA received a message containing certain keywords the program would decompose the string of text in which the word occurred and reassemble it in such a way as to prompt a further reply. By following these rules “any sentence of the form ‘I am BLAH’ can be transformed to ‘How long have you been BLAH,’ independently of the meaning of BLAH.”[14] A user would feed in comments and statements filled with keywords, and ELIZA would take these sentences and, by following the rules of its script (which would do things like swap out first-person pronouns for second-person pronouns), give fitting replies. Furthermore, in cases wherein ELIZA did not detect appropriate keywords the script was constructed such that “an earlier content-free remark, or under certain conditions, an earlier transformation” could be given as an answer.[15]
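The mechanics described above can be sketched in a few lines of modern code. What follows is a toy illustration of keyword detection, decomposition, pronoun reflection, and reassembly; it is emphatically not Weizenbaum’s original implementation (which was written in MAD-SLIP), and the particular rules, reflection table, and canned replies here are invented for the sake of the example:

```python
import re

# Toy sketch of ELIZA-style "transformation rules" (not Weizenbaum's
# original code). The keyword rules and canned replies are illustrative.

# Pronoun swaps used when echoing a fragment back to the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(message):
    cleaned = message.lower().strip().rstrip(".!?")
    # Rule: any sentence of the form "I am BLAH" becomes
    # "How long have you been BLAH?", independently of what BLAH means.
    match = re.match(r"i am (.*)", cleaned)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    # Keyword rule: "my <word>" triggers a request to elaborate.
    match = re.search(r"\bmy (\w+)", cleaned)
    if match:
        return f"Tell me more about your {match.group(1)}."
    # No keyword detected: fall back on a content-free remark.
    return "Please go on."

print(respond("I am unhappy."))                    # How long have you been unhappy?
print(respond("Everyone in my family hates me."))  # Tell me more about your family.
print(respond("Hello there"))                      # Please go on.
```

Even this crude version shows why the illusion depends so heavily on the human participant: the program would produce “Tell me more about your family” with exactly the same confidence for any noun following “my,” knowing nothing of families at all.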

In the very first paragraph of Weizenbaum’s article on ELIZA he acknowledges that computers can seem as though they are performing magic; however, “once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.”[16] In the article that followed this quotation, Weizenbaum explained in clear detail precisely the way in which ELIZA functioned – he went step by step to show that its performance was not the result of magic or genuine understanding on the part of the program, but was instead attributable to clever programming. Granted, for purposes of convincing its conversant that it understood, ELIZA relied heavily on something quite outside its script. In a conversation with ELIZA “the human speaker will, as has been said, contribute much to clothe ELIZA’S responses in the vestments of plausibility.”[17] And even if the human knew they were simply exchanging messages with the computer, even if they were aware of the script and programming that were producing the particular answers, the magic of ELIZA demonstrated “how easy it is to create and maintain the illusion of understanding.”[18]

Weizenbaum was emphatic that ELIZA did not actually understand the messages it was receiving – even if its script-generated replies made it seem that the contrary was true. A primary aim of the ELIZA program was, once the “conversation” began, to keep the discussion going. It achieved this both by concealing “any misunderstanding on its own part” and by relying on the good faith of the human discussant to not prematurely step away upon being presented with hiccup-like evidence that the program did not truly understand the messages they were typing.[19] Communication between two human beings relied, in Weizenbaum’s estimation, on a belief on the part of each that the other understood what they were saying. Such understanding could be difficult, as the varied contexts from which people came meant that two people might have very different frames of reference. People conversing with each other were thus always limited in their understanding of others; as Weizenbaum stated, “there can be no total understanding and no absolutely reliable test of understanding.”[20] For Weizenbaum the key was that humans understood each other “to within acceptable tolerances” but what programs like ELIZA could achieve was only to “deal with such ideas symbolically.”[21] The machine could, through clever programming, mimic the appearance of human understanding. This was not in and of itself proof of understanding, but simply evidence of the successful execution of the script.

Even if Weizenbaum was skeptical of the degree to which two humans could ever truly understand each other, as its creator, he was confident that he fully understood ELIZA – and thus he was rather surprised by the ways in which other people seemed to misunderstand ELIZA. As Weizenbaum wrote, “people who knew very well that they were conversing with a machine soon forgot that fact,” some people would even “demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them.”[22] Furthermore, the degree to which ELIZA was able to successfully mimic the work of Rogerian psychotherapists greatly impressed many psychiatrists, some of whom went so far as to suggest that the program could be used for working with real patients.[23] Puzzled by the responses to ELIZA, Weizenbaum found himself disquieted by certain tendencies playing out elsewhere in the computer science field, such as tendencies that attempted to cast humans as similar to computers and claims that cast the human brain as “merely a meat machine.”[24]

What became clear to Weizenbaum was that the computer had become not only a powerful tool in people’s lives, but that “we have permitted technological metaphors…and technique itself to so thoroughly pervade our thought processes that we have finally abdicated to technology the very duty to formulate questions.”[25] Part of the rise of this technological metaphor was the result of computer scientists shirking responsibility for what they had created – even as the spread of computers was allowing the technological metaphor to diffuse widely amongst a public that did not fully understand the way in which computers worked.

It was towards redressing these challenges – more a feat of “social criticism” than of “computer criticism” – that Weizenbaum turned in the wake of ELIZA’s success. Weizenbaum steadily became a prominent critic of the technological metaphor from within one of the very sites from which the metaphor was disseminated.


Of Computer Power and Human Reason

Joseph Weizenbaum’s book Computer Power and Human Reason: From Judgment to Calculation is many things: an introductory lecture on the basic workings of computers, an accessible statement of mathematical principles in computer science, an attempt to demystify computers, an ethical challenge to those working in the field of computing, and an unflinching critique that states that just because a computer can do something does not mean it necessarily should. Though the text dallies in math and science – even warning uninitiated readers that they might find such sections difficult[26] – it is a book that does not seek to speak solely to an academic audience. Computer Power was not the first book, by any means, to offer a rebuke to the effects of technology on society. The historian and prominent critic of technology Lewis Mumford clearly influenced Weizenbaum’s thinking;[27] however, a key element that set Weizenbaum apart as a critic of technology is that – unlike Mumford – Weizenbaum was actually a computer scientist.

Computer Power and Human Reason begins with Weizenbaum explaining his technological qualifications and expressing how his experiences with computers led him to write a book so critical of those machines. He begins the book with ELIZA, but unlike the article in which he explained the workings of the program in heavy technical detail to an audience of scientific peers, in Computer Power Weizenbaum recounts the story of ELIZA in a manner that highlights his own surprise at the reactions the program elicited. Weizenbaum notes his surprise that practicing psychiatrists genuinely believed the program had therapeutic potential, admits to being startled by the ease with which people became emotionally invested in their communication with the computer, and highlights his astonishment at the number of individuals in his own field who seemed to believe that ELIZA represented a program that could genuinely understand the prompts it was receiving in natural language.[28] Yet, Weizenbaum did not simply shrug off these quizzical confrontations. Instead, as he put it, the experiences “gradually convinced me that my experience with ELIZA was symptomatic of deeper problems.”[29]

Weizenbaum emphasized that the computer was not the problem, but was instead only reifying what had long been a dangerous societal tendency to view human beings in an ever more mechanized manner. A debate was breaking out, in Weizenbaum’s estimation, and “on one side are those who, briefly stated, believe computers can, should, and will do everything, and on the other side those who, like myself, believe there are limits to what computers ought to be put to do.”[30] The presence of the “ought” is of particular importance for Weizenbaum’s argument, as it changes the discussion from one focused on what functionality computers can have to whether or not they should be built to execute such functions in the first place. For Weizenbaum it was a matter of “the proper place of computers in the social order,” with this question also signifying that there was an improper place for computers.[31] And though Weizenbaum wrote as a respected professor at MIT he had no qualms stating that “science may also be seen as an addictive drug” which, “taken in increasing dosages,” had “gradually converted into a slow-acting poison.”[32]

What set the computer apart from other tools people used was the degree to which these machines were autonomous – meaning that once they were turned on they functioned without needing further human control. While clocks, in a nod to Mumford, were important early examples of such autonomous machines, computers were similarly autonomous but capable of much more significant functions than merely keeping track of time.[33] The significance of such machines was that they functioned in accordance with a model of some part of the real world – such as delineating a day into a quantity of 24 hours, with each hour consisting of sixty minutes and each minute consisting of sixty seconds. Gradually what transpired was that, in mimicking some aspect of reality, these autonomous machines would come to impose that model of reality upon the humans who had initially built the machine. The model came to replace that which it was modeling. Thus, under the auspices of technology, “experiences of reality had to be representable as numbers in order to appear legitimate in the eyes of the common wisdom.”[34]

Emerging in the years around WWII, and breaking out further in the years after the end of that conflict, the computer was presented by the military, industry, and businesses as the tool needed to address a host of issues that were too great for humans to take care of without significant technical assistance. As miniaturization allowed computers small enough to fit into everything from offices to aircraft, a transition steadily occurred wherein the computer came to be seen as an indispensable component of the emerging modern society. The eventual result of this trend was that it became nearly unthinkable to return to the way things had been before. Yet, just because the computer had come to seem indispensable did not mean that it genuinely was. Rather, what occurred was that the computer had merely become “essential to society’s survival in the form that the computer itself had been instrumental in shaping.”[35]

The computer, with its close ties to military needs, may have appeared to be radically changing society, but in Weizenbaum’s estimation the computer “was used to conserve America’s social and political institutions. It buttressed them, at least temporarily, against enormous pressures for change.”[36] Indeed, in Weizenbaum’s view, the computer allowed the retrenchment of the social, political and economic status quo even as the widespread dispersion of the computer allowed the mechanistic worldview to take hold in ever more areas – while computers were simultaneously used to help shore up a post-war explosion in consumerism. Whether or not the computer was necessarily the correct solution, it was embraced in a range of areas “for reasons of, say, fashion or prestige” – if one’s competitors, be they business rivals or competing superpowers, had computers then one could not allow oneself to be left behind.[37] While the computer, as the machine’s name suggests, excels at computational tasks, there are societal challenges that have persisted for reasons other than a lack of computing power – “the validity of a technique is a question that involves the technique and its subject matter.”[38] But the adoration for computers fawned over “the technique” while too often ignoring “its subject matter.”[39] And yet, even writing in 1976, Weizenbaum was aware that the computer had already become complexly interwoven into society; what therefore mattered was recognizing how “the society’s newly created ways to act often eliminate the possibility of acting in older ways.”[40] The computer relied on certain types of information, excelled at specific types of tasks, and was guided by particular social, political and economic forces – and though the computer might have been portrayed as opening doors, Weizenbaum emphasized that it had also closed many.

One particular problem that the computer represented was that, unlike with simpler tools, many of those who used computers had little understanding of the way the machines actually worked. A reassuring aspect of computers was the perception of their regularity and the way in which they stuck to routines – but “if we depend on that machine, we have become servants of a law we cannot know, hence of a capricious law. And that is the source of our distress.”[41] To alleviate this distress, in Computer Power and Human Reason, Weizenbaum delved into an explanation of the way in which computers worked, laying out complex computational scripts by way of explanations of relatively simple games, highlighting that one must remember that the computer is completely bound to follow the rules of the game. While Weizenbaum’s discussion of “Where the Power of the Computer Comes From” and “How Computers Work” is not sufficient to turn a novice into an instant programmer, these chapters still help to elucidate what is actually going on inside a computer.[42] In dense prose that tends towards the technical, Weizenbaum explains Turing machines and explicates the ways a computer can stack multiple routines atop one another, stating, “the alphabet of the machine languages of all modern computers is the set consisting of the two symbols ‘0’ and ‘1.’ But their vocabularies and their transformation rules differ widely…a computer is a superb symbol manipulator.”[43] Despite the efficiency of the computer it remains bound to follow the rules of its programming and to rely upon its own specific language; that a program can successfully execute its script does not mean that it has any real understanding of the world. Indeed, as Weizenbaum puts it, “a real reason that programming is very hard is that, in most instances, the computer knows nothing of those aspects of the real world that its program is intended to deal with.”[44]

Computers, and the programs these machines run, do not appear organically in nature. Instead, the computer and its programs are the physical manifestations of sets of choices that have been made by human beings. As a computer scientist and professor at MIT, Weizenbaum was well acquainted with the types of people responsible for making the decisions that resulted in the computer systems that the broader public would eventually use. And though Weizenbaum was himself a programmer, he was unflinching in critiquing his peers. For Weizenbaum there was a difference between the “professional” like himself who “regards programming as a means toward an end, not as an end in itself”[45] and what Weizenbaum identified as “the compulsive programmer.” Comparing this figure to a professional gambler,[46] Weizenbaum casts “the compulsive programmer” as an individual who sees interacting with the computer as an end in and of itself[47] – and though this type of person may be working on numerous projects their main goal is to simply continue working with – or “hacking”[48] – the computer.  The unsavory description that Weizenbaum presents of “these computer bums” who “exist…only through and for computers”[49] in many respects outlines an unflattering stereotype of a computer programmer that lingers to this day – but what makes Weizenbaum’s description particularly stinging is that he is not imagining this type of individual but is describing a type of person he has seen frequently in his time working as a computer scientist and a professor at MIT.

What draws “the compulsive programmer” to the computer, in Weizenbaum’s estimation, is a fascination and adoration for the power manifested in computer systems. While, “the quest for control is inherent in all technology” the computer provides a space wherein the skilled programmer can delight in seizing control.[50] For “the compulsive programmer…life is nothing but a program running on an enormous computer,” and thus “every aspect of life can ultimately be explained in programming terms.”[51] To Weizenbaum, the danger of computers could be found in the way that ever more of the individuals working on them had come to represent these “compulsive programmers” whose loyalty to the computer, as such, had transcended any other values. Nevertheless, Weizenbaum did not attribute this to any wickedness so much as to a certain vacuous fecklessness whereby certain programmers – along with other modern scientists and technologists – had confused their techno-scientific means with being ends in and of themselves. And though “compulsive programmers” might not lack for skill, this “skill is…aimless, even disembodied. It is simply not connected with anything other than the instrument on which it may be exercised.”[52] Yet, perhaps the key detail in Weizenbaum’s description of “the compulsive programmer” is that this is a figure who believes the world in all of its complexity can be reduced to something simple enough to be captured by a computer program.

In some respects what the computer represented was a type of stage for which a programmer could compose a script that would then be performed thereupon.[53] Computers excelled at meticulously executing scripts, following the rules to exact specificity, and thus what a computer does could be understood by examining the code it was following. Humans presented a trickier situation. True, humans process and respond to a tremendous amount of information and people’s behavior follows some “laws that science can discover and formalize within some scientific framework.”[54] Still, Weizenbaum bridled at the idea that all human intelligence and understanding could be reduced to rules that fit neatly inside a scientific framework – even as the belief in the existence of such rules guided the confidence of some Artificial Intelligence (AI) researchers.[55] After all, as Weizenbaum put it, what type of equipment would a machine need “in order to think about such human concerns as, say, disappointment in adolescent love”?[56] Yet the theories held by computer programmers had a particularly important quality thanks to the machines that allowed these theories to be more than merely texts. For “a theory written in the form of a computer program is thus both a theory and, when placed on a computer and run, a model to which the theory applies”[57] – the computer provided the stage on which the script was performed. And though a computer might present an impressive result, Weizenbaum cautioned “a model is always a simplification, a kind of idealization of what it is intended to model.”[58] Unfortunately, the complexity of computers and the simplification inherent in models often only resulted in widespread misunderstandings of what computers can and had accomplished.

“The computer has become a source of truly powerful and often useful metaphors” however “the public embrace of the computer metaphor rests on only the vaguest understanding of a difficult and complex scientific concept.”[59] For Weizenbaum the prevalence of this metaphor represented a perilous trend, wherein people who did not fully understand the workings of computers came to gradually believe that everything in the world could be turned into a computer model. The computer metaphor allowed for the ideology of the “compulsive programmer” to overtake those who had no experience in programming and thus were susceptible to the exhortations of those celebrating the latest technological feats achieved by computers. The result being that “the computer metaphor has become another lamppost under whose light, and only under whose light, men seek answers to burning questions.”[60] Certain questions, of course, did lend themselves well to computational solutions – and some of these were even matters that seemed to evidence an achievement of a certain human intelligence, such as skill at chess, but Weizenbaum emphasized that such victories were related to the computers’ strength at performing calculations swiftly and executing logic programs. For some computer scientists, the types of problems that computers excelled at solving were cast as nearly synonymous with the types of problems humans tried to solve, but for Weizenbaum “it is precisely this unwarranted claim to universality that demotes their use of the computer, computing systems, programs, etc., from the status of scientific theory to that of a metaphor.”[61]

Weizenbaum’s personal experience with the problematic misunderstandings of the computer metaphor related largely to his experiences with ELIZA. To make a computer do something, the computer had to be told to do something. This did not necessarily mean the computer understood what it had been told, only that its script enabled it to execute a given command. Humans excel at understanding “communications couched in natural languages” but computers, by contrast, required “the precision and unambiguousness of ordinary programming languages.”[62] Though Weizenbaum’s program ELIZA could respond to prompts in natural language, the program itself did not understand what was being said to it – rather it was simply following a script – and the illusion of understanding seemed to be strongest amongst those who were unfamiliar with the workings of computers.[63]
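The kind of script-following described above can be suggested with a toy sketch – a hypothetical modern illustration in Python, not Weizenbaum’s original DOCTOR script (which he wrote in MAD-SLIP), with rules invented here for demonstration. A handful of keyword patterns plus a pronoun-swapping “reflection” table are enough to produce replies that feel attentive while the program understands nothing at all:

```python
import re

# Swap first- and second-person words so a mechanical echo sounds responsive.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Illustrative keyword rules: a pattern to match and a reply template.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def reflect(fragment: str) -> str:
    """Replace each word with its first/second-person counterpart, if any."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with the reflected match."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT
```

Given “I need my mother” the script mechanically echoes back “Why do you need your mother?” – the transformation is purely textual, which is precisely Weizenbaum’s point: nothing in the machinery models what a mother is.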

ELIZA was many things, but intelligent was not one of them. For Weizenbaum, ELIZA was a demonstration of the way in which people were eager to ascribe intelligence to machines when such was not warranted – a tendency he saw strongly amongst some of his colleagues working in the AI field, whom he came to label “the artificial intelligentsia.” While Weizenbaum recognized that AI scientists had created programs that could perform many tasks, he cast as hubristic the tendency to believe that any shortcomings of AI were simply “to be found in the limitations of the specific system’s program.”[64] Those unfamiliar with computers had a reason for being mesmerized by ELIZA’s performance, but what was the excuse of “the artificial intelligentsia”? Weizenbaum had borne witness to tremendous leaps in computer power over his life, and was cognizant that what was impossible for computers today could be possible for computers tomorrow. And yet the question increasingly became not so much a matter of what computers could do, but of what they should do – were there “appropriately human objectives that are inappropriate for machines”?[65]

“Man is not a machine…Computers and men are not species of the same genus” was Weizenbaum’s stinging retort to what he perceived as the beliefs that had allowed “artificial intelligence’s perverse grand fantasy to grow.”[66] Intelligence, for Weizenbaum, was a complex and difficult concept, one that did not lend itself to simplistic reductions. Thus attempts to conclusively measure intelligence, such as IQ tests,[67] were bound to capture at best only a partial glimpse of it. That computers could succeed in feats that seemed demonstrative of intelligence was clear, but to treat such feats as proof of human-like intelligence was a gross oversimplification. A computer might win at chess but that did not mean it could change a baby’s diaper. For Weizenbaum these were matters of vastly different intelligences that “cannot be compared.”[68] Computers excelled at tasks involving quantification, but for Weizenbaum there was much about human beings that simply could not be quantified. Weizenbaum was aware that the members of the “artificial intelligentsia” could take this criticism as little more than a scientific challenge, and that they could attempt to meet the thrown gauntlet by building ever more sophisticated machines. Yet for Weizenbaum this was not a matter of a scholarly duel but a moral one – as “the question is not whether such a thing can be done, but whether it is appropriate to delegate this hitherto human function to a machine.”[69]

Much remained, and remains, unknown about the human condition – and much else about being human is learned only through the crucible of human experience – such things could not be easily quantified and programmed. Should computers, or robots, eventually reach such sophistication that they demonstrated an intelligence genuinely similar to humans, theirs would be a rather “alien” intelligence, for it would have been shaped under different social conditions and would involve a different set of experiences.[70] Attempts to model human intelligence might appear impressive, but they remain incomplete. Beyond intelligence also lies the matter of emotions and of the unconscious, matters that could not be easily subjected to calculation and cold logic – and thus Weizenbaum counseled restraint, for “there are some things beyond the power of science to fully comprehend.”[71] Such sentiments functioned both as a warning to his scientific colleagues and as a reminder to the wider public to look askance at claims of omnipotence. For Weizenbaum this was essentially tied to the belief that there were certain tasks that were simply improper for a computer – “since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.”[72]

Alas, Weizenbaum’s warning was knowingly given at a point when many tasks inappropriate for computers had already been passed on to them. By the time Weizenbaum was writing Computer Power and Human Reason the computer had already become a commonplace feature of businesses and large organizations – even if it had not yet become a fixture of every home. Though these machines had originally been sold on the promise that they would be helpful, what had transpired was that the machines had “both surpassed the understanding of their users and become indispensable to them.”[73] That which was supposed to help had become that without which people were helpless. This was an affliction that was particularly dangerous for those who were unacquainted with the inner workings of the computer, those who had been led to believe that the computer was infallible, and who felt as though their ability to act, their responsibility, had been usurped by the machine. Yet, as Weizenbaum fiercely reminded his readers, “the myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!”[74] The world dreamt up by the celebrants of the computer metaphor was thus portrayed as inevitable – and, feeling powerless in the face of the machine, people came to doubt that there was any point in voicing their opposition.[75] Better to ride the tide than be compared to King Canute.

Yet Weizenbaum refused to accept inured tranquility. Science, and “the rhetoric of the technological intelligentsia,” portrayed itself as logical and reasonable, but for Weizenbaum this was “instrumental reasonings, not authentic human rationality.”[76] In the worldview concocted under the aegis of the computer metaphor the vast complexities of the world were turned into things that are “‘programmed,’ one speaks of ‘inputs’ and ‘outputs,’ of feedback loops, variables, parameters, processes and so on, until eventually all contact with concrete situations is abstracted away.”[77] This was a recipe for passivity, apathy, and impotence – as human life became just a matter of data to be fed into a machine and analyzed. Instrumental reason, and the technologies that best embodied it, had become a “fetish surrounded by black magic. And only the magicians have the rights of the initiated.”[78] As a successful computer scientist, Weizenbaum was himself a figure who could have taken up membership amongst these “magicians,” and yet he had not been bewitched by instrumental reason. Granted, Weizenbaum was aware that his objections would be cast by technologists as “anti-technological, anti-scientific, and finally anti-intellectual,” but Weizenbaum emphasized, “I am arguing for rationality. But I argue that rationality may not be separated from intuition and feeling. I argue for the rational use of science and technology, not for its mystification, let alone its abandonment. I urge the introduction of ethical thought into science planning. I combat the imperialism of instrumental reason, not reason.”[79]

That Weizenbaum’s call for ethical reevaluation sounded not unlike a Cassandra cry was not lost on him – nor did he forget that such cries generally went unheeded.[80] Science and technology by no means represented new forces in human civilization, but the scale of their effects had grown massively – the power that had so seduced many was more than could be safely contained and managed, and it was steadily eroding the ability to choose a different path.[81] That many people enjoyed at least the superficial benefits of computer-driven technological advances, and would be hesitant to give up such devices, was a serious matter, but Weizenbaum held to the view that “ethics, at bottom, deals with nothing so much as renunciation.”[82] What good was technological and scientific power if it turned humans into little more than entertained robots? And in a world already being swamped by ever more computers, what reason was there to truly believe that serious social and political issues still existed due to a lack of computer power? If Weizenbaum called for renunciation of the computer, in certain contexts, it was because the embrace of the computer in all contexts had led to a renunciation of the human. As Weizenbaum reiterated throughout the text, “there are some human functions for which computers ought not to be substituted. It has nothing to do with what computers can or cannot be made to do. Respect, understanding, and love are not technical problems.”[83]

A keen and powerful sense of responsibility motivated Weizenbaum’s defiant text on computers and the individuals enthralled by those machines. Joseph Weizenbaum was a computer scientist and one who taught computer science to others, and he hoped that the message of Computer Power and Human Reason would resonate with his peers and colleagues.[84] As Weizenbaum saw it, “scientists and technologists have, because of their power, an especially heavy responsibility, one that is not to be sloughed off behind a façade of slogans such as technological inevitability.”[85] Therefore, what Weizenbaum saw as essential was for scientists and technologists to think through the consequences of their actions, to rediscover their “own inner voice,” and most of all to “learn to say ‘No!’”[86]


The Need for Responsibility

Haunting Weizenbaum’s social criticism is his memory of fleeing the Nazis as a child. Throughout his work figures associated with fascism appear not as ghastly bogeymen or monoliths of inexplicable evil but as examples of real people who lost sight of their responsibility to the rest of humanity. Adolf Hitler and the leadership of the Nazi party are not mulled over at length by Weizenbaum; rather his focus is upon the personage of “the good German” – those who pleaded ignorance of the horrors taking place all around them. The excuse that the careful organization of the Nazi regime kept “the good German” in a state of ignorance does not convince Weizenbaum, who instead proposes that “the real reason the good German didn’t know is that he never felt it to be his responsibility to ask what had happened to his Jewish neighbor whose apartment suddenly became available.”[87] After all, when the Weizenbaum family fled Berlin their domicile suddenly became available – and what of all the apartments that opened up when their residents disappeared for much more harrowing reasons? When Weizenbaum first returned to Germany in the 1950s, he found himself looking around at the Germans who had lived through the Nazi years and “wondering, sometimes more, sometimes less, what did you do? Who were you? Did you keep your mouth shut? Or did you resist? Or did you eagerly participate?”[88]

Beyond “the good German,” Weizenbaum’s professional linkages led him to reflect particularly upon the lives of the scientists who had worked for the Nazi government. In Weizenbaum’s estimation many German scientists had themselves assumed a “good German” aura of obliviousness during the Nazi regime, an attitude of “We are scientists; Politics has nothing to do with us; the Führer decides.”[89] This was a position that many German scientists were able to conveniently hide behind in the years after the war, as figures like Wernher von Braun went on to assume important roles in the Cold War that provided ideological cover for their past associations. The Nazism that had swept over a continent and ended millions of lives had become something for which nobody seemed to be responsible – a dark chapter that was to be held up as a warning, but never read. The paradox that Weizenbaum observed in post-war Germany was that suddenly nobody seemed to be responsible for what had occurred – the citizens claimed ignorance while the scientists acted as though the uses to which their inventions would be put were not their concern.

The phrase “never again” is often invoked regarding the Nazis’ murderous reign, but in Weizenbaum’s work this “never again” takes on another quality, as if to say, “never again must we scientists shirk our responsibility.” As Weizenbaum himself put it: “Accepting responsibility is a moral matter. It requires, above all, recognition and acceptance of one’s own limitations and the limitations of one’s tools. Unfortunately, the temptations to do exactly the opposite are very great.”[90] From the computers that were essential to waging the war in Vietnam, to the advances for developing ever more destructive atomic weapons, the work of scientists and technologists was, to Weizenbaum, essential in undergirding and enabling the violence in the world. These buildups of the means of ending lives relied upon the acquiescence of the sciences, and so Weizenbaum exhorted his peers to remember: “It cannot go on without us! Without us the arms race, especially the qualitative arms race, could not advance another step.”[91]

Granted, it would be quite difficult for the sciences to divorce themselves from the military – as many a scientist’s research relied heavily upon generous funding from various elements of the defense department. Weizenbaum knew how this touched his own work and that of his colleagues; as a professor at MIT he was only too aware that, “At MIT, we invented weapons and weapon systems for the Vietnam War…MIT is closely connected with the Pentagon.”[92] This challenge for the scientific community extended well beyond the university at which Weizenbaum taught, and he was forthright in expressing that scientists had to be aware of the ends to which their projects were being used: “Today we know with virtual certainty that every scientific and technical result will, if at all possible, be put to military use in military systems.”[93] Even programs that in and of themselves seemed harmless could still wind up serving the cause of violence[94] – for Weizenbaum it was essential for scientists to realize and face this implication. That computers and science could serve humanitarian ends did not obviate concern for their anti-human effects.

Weizenbaum was committed to never allowing himself to become like the German scientists who acted as though they were not responsible for the impacts of their work – even as he observed that many of his colleagues were unwilling to heed his call for rigorous responsibility. Thus Weizenbaum became an outspoken voice on the obligations of scientists, an opponent of militarization, and a figure who refused to sit quietly in his office during times of injustice – throughout this he was aware that he had become “a kind of fig leaf” for MIT, but he remained committed to being an iconoclast holding fast to ethical principles.[95] Willingly calling out “compulsive programmers,” the “technological intelligentsia,” and the “artificial intelligentsia,” Weizenbaum was willing to challenge his fellow scientists. While other social critics fiercely concerned with technology, such as Lewis Mumford and Hans Jonas, had written about the need for scientists to take responsibility, this critique was invested with particular vigor when voiced by a member of the scientific community like Weizenbaum. When Weizenbaum exhorted scientists to take responsibility he offered himself as an example of what that might look like.

It was this commitment that led Weizenbaum to engage in multiple areas of the debates around science and technology, not limiting himself to discussions of computers and weapons systems but also challenging the “artificial intelligentsia.” Weizenbaum reacted strongly against the tendency whereby “the artificial intelligentsia shouts that the human being is a machine. Their central thesis is that the whole person can be understood in terms of science alone.”[96] For Weizenbaum such thinking displayed not only a dangerous hubris but a “profound contempt for life.”[97] One comment from Marvin Minsky, also a professor at MIT, particularly irked Weizenbaum: the claim that “the brain is merely a meat machine” was a statement Weizenbaum would return to repeatedly in his work. The “meat machine” seemed a neat summary of the view of the artificial intelligentsia and of the principle undergirding the technological metaphor.[98] The belief that something as complex as the human mind could be reduced to an easily quantifiable amount of information struck Weizenbaum as ridiculous, though he was unhappily aware of the prevalence of this view in “important sectors of the AI community, the artificial intelligentsia, as well as many scientists, engineers, and ordinary people.”[99] Intelligence, be it human or artificial, was challenging to define and thus even more difficult to quantify – but declarations about “artificial intelligence” often gave the impression that there was a conclusive consensus around the meaning of intelligence.[100] Having personally observed, through his work on ELIZA, the types of misunderstandings about computer intelligence that could crop up, and having witnessed the views put forth by prominent scientists in the AI field, like Minsky, Weizenbaum recognized that the technologically optimistic, if rather misanthropic, “spirit of artificial intelligence pervades the ethos of so much of the rest of the computer practicum.”[101]

Weizenbaum did not see himself primarily as a critic of computers and technology, but of society; however, his societal critique focused heavily upon the impact that computers and technology were having upon society.[102] Thus Weizenbaum joined with other thinkers who had “over the years expressed grave concern about the conditions created by the unfettered march of science and technology.”[103] Though in his youth Weizenbaum had enjoyed the early thrill of computers, and had been able to play an important role in their development, his experiences working alongside the “technological intelligentsia” and observing the impacts of technology on society dampened his initial enthusiasm. Speaking at a conference held in Cambridge, Massachusetts in 1979, Weizenbaum dourly observed, “I think our culture has a weak value system and little use of collective welfare, and is therefore disastrously vulnerable to technology.”[104] In such a setting it was all too easy for systems that overvalued the scientific and technological to take hold – especially as they simultaneously presented a reassuring explanation for the feeling of powerlessness that many individuals experienced. The juxtaposition between the potential of technology and its realization was a continuing paradox: “on the one hand the computer makes it possible in principle to live in a world of plenty for everyone, on the other hand we are well on the way to using it to create a world of suffering and chaos.”[105] At numerous points in his work Weizenbaum refers to the bargain with technology as a sort of “Faustian pact”[106] – and yet this still left open the question of who was responsible for signing the contract with Mephistopheles.

At the conference in Cambridge, Weizenbaum focused particular attention on the way that terms such as “we” and “us” wind up being used in discussions around technology and science. A machine could emerge from the decisions of a few people in a lab and wreak huge global consequences – and yet the price was still often framed in terms of “it will serve us right.”[107] This all-encompassing “us” represented an odd assignment of guilt. Weizenbaum had been critical of the “good German” type who pleaded ignorance, but his criticism of such individuals had been a result of his sense that their claims of ignorance were something of a sham, a willful self-delusion. Yet in the case of technological changes flowing out of corporations and universities, people’s ignorance of the changes afoot was no sham. As a child Weizenbaum had borne witness to the barbarity inherent in the early Nazi regime, but the dangerous risks of technologies were not a matter open for public debate or vote. When, still at the conference in Cambridge, Albert Hirschman responded to Weizenbaum’s questioning of the identity of the “us” by saying “Every country has the technology it deserves,” Weizenbaum offered the retort: “I don’t think countries deserve things any more than peoples do, especially things that are imposed on them by others.”[108]

If Weizenbaum advised caution in identifying “we” and “us,” the identity of those he considered to be the “others” is clearer – for these are the members of the “technological intelligentsia.” As the scientists behind the machines, wielding the technological metaphor, these were the individuals whose decisions wound up being “imposed” upon the broader public. What Weizenbaum prescribed as essential was for these “others” to accept responsibility – “the scientist and the technologist can no longer avoid the responsibility for what he does by appealing to the infinite powers of society to transform itself in response to the new realities and to heal the wounds he inflicts on it. Certain limits have been reached. The transformations the new technologies may call for may be impossible to achieve, and the failure to achieve them may mean the annihilation of all life. No one has the right to impose such a choice on mankind.”[109] Having presented itself as a panacea, science and technology had transformed the world in such a way as to make other remedies no longer effective. And thus Weizenbaum sought to reawaken the ethical imagination of his peers.

Weizenbaum sought to encourage his colleagues to think in terms of harnessing technology and science to ensure that “every human being has available to him or herself all material goods necessary for living with dignity,” and to those who would balk at such an aim as quixotic he defiantly replied: “the impossible goals I mention here are possible, just as it is possible that we will destroy the human race.”[110]


Joseph Weizenbaum Today 

When Joseph Weizenbaum passed away in 2008, at the age of 85, he had secured a place for himself in the history of computer science as both an important scientist and as a major critic of the role of computers in society. Over the course of his life Weizenbaum was on the frontlines of the significant shifts computers underwent: from behemoths that required entire rooms, to personal computers, to the early incarnations of the smartphone. And as he watched computers become ever smaller, ever more powerful, and ever more bound up in the activities of everyday life, his criticism remained firm. For him the military origins of the computer could not simply be forgotten as an inconvenient historical detail,[111] and though he did not deny the impressive potential of computers he remained aware that this potential often went awry.

A prolific writer Weizenbaum was not, but his articles and his book Computer Power and Human Reason continue to have an important influence on scholars writing about the impact of computers on society. In The Closed World: Computers and the Politics of Discourse in Cold War America, Paul N. Edwards argues that “tools and metaphors are linked through discourse”[112] and he draws upon Weizenbaum’s consideration of the technological metaphor as he discusses how “tools and their uses thus form an integral part of human discourse and, through discourse, not only shape material reality directly but also mold the mental models, concepts, and theories that guide that shaping.”[113] Weizenbaum’s rather scathing description of the “compulsive programmer” also proved to be more than a stereotype that could be simply dismissed. Sherry Turkle credits Weizenbaum with helping expose the figure of the “hacker,” as “many people first became aware of the existence of hackers in 1976 with the publication of Joseph Weizenbaum’s Computer Power and Human Reason.”[114] Weizenbaum’s description of the hacker/compulsive programmer has resonated over the years, even as such figures have migrated from the dark corners of university computer labs into the boardrooms of major corporations. “Weizenbaum argues that programming creates a new mental disorder: the compulsion to program,”[115] writes Wendy Hui Kyong Chun, and though she pushes back against Weizenbaum’s accusation that the compulsive programmer is pleasureless in their pursuit, she affirms that there are individuals – she singles out Richard Stallman – “who [fit] Weizenbaum’s description of a hacker.”[116] Evidently Weizenbaum’s proximity to compulsive programmers allowed him to identify something genuine, even as his work on those embracing the technological metaphor enabled him to anticipate the impact it would have upon discourse.

Aspects of Weizenbaum’s thought have been challenged over the years, and some of his predictions have proven simply incorrect – such as his comment, posed in 1978, “Will the home computer be as pervasive as today’s television sets?” to which Weizenbaum supplied the retort “The answer almost certainly is no.”[117] This “no” proves false in a world of smartphones, tablets, laptop computers, and televisions that are themselves hooked up to the Internet. Yet, even if some of Weizenbaum’s comments about computers have become dated, his arguments have lost nothing of their moral weight. In Computer Power and Human Reason, Weizenbaum presented a list of critics who had spoken out against “the unfettered march of science and technology”[118] – a group amongst whose number Weizenbaum has himself come to be counted by later writers. David Golumbia, in The Cultural Logic of Computation, critiques the rise of “computationalism,” which he defines “as a commitment to the view that a great deal, perhaps all, of human and social experience can be explained via computational processes,”[119] and notes that “Weizenbaum publicly dissented from the computationalist view and went on to write a compelling volume about its problems.”[120] When Golumbia draws together his own list of “established scholars” who have criticized “computationalism” – not unlike a list of those challenging “the unfettered march of science and technology” – Golumbia places Weizenbaum alongside many of the same individuals who had appeared in Weizenbaum’s own list.[121]

The present volume thus provides an important overview of the life and thought of Joseph Weizenbaum. In this wide-ranging discussion Weizenbaum speaks with candor to Gunna Wendt about the issues that occupied him over the course of his life – from ELIZA to AI to the technological metaphor – revealing the ways in which his personal experience informed the positions he took; meanwhile, Benjamin Fasching-Gray’s translation preserves Weizenbaum’s barbed wit. Most importantly, this volume demonstrates Weizenbaum’s continuing vitality as a social critic regarding the role of technology in contemporary society. Whereas other prominent critics, such as Mumford, passed away before they could write anything regarding the Internet, the interview presented in this book makes it quite clear that Weizenbaum was unwilling to be taken in by the utopian finery in which some attempt to drape the Internet. Weizenbaum lost nothing of his critical fire in the face of technological advances. As Weizenbaum writes, with biting humor, “the Internet is a big garbage dump – admittedly with some pearls in it, but you have to find them first.”[122] The present book shows Weizenbaum as self-aware and self-deprecating, but still steadfastly committed to holding fast to ethical principles – it is not an overview of Joseph Weizenbaum as simply a computer scientist but of Joseph Weizenbaum as a complex human being who celebrated the complexity of other human beings.

Those looking for a simple celebration of technology will not find it in the work of Joseph Weizenbaum. As a social critic he found, and in this volume continues to demonstrate, a commitment to stating aloud uncomfortable opinions: “the computer is embedded in our crazy society, just like the television. Everything is embedded in this society, and this society is obviously insane.”[123] Yet this observation is not mired in nihilistic misanthropy, or in fatalistic despair, but instead in the belief that people can overcome their feelings of powerlessness and retake responsibility for their lives. For Weizenbaum it was essential for people to become “Islands of Reason” – even if doing so made one isolated and lonely, for the potential remained that such islands could attract like-minded people willing to recognize that “perhaps we are now addicted to modern science and technology and need to practice withdrawal therapies.”[124] Importantly, this is not a call for a monastic withdrawal from the world but a call for a deeper investment in the moral quandaries of the day. People must recognize the madness in the world around them, and when such a recognition arises “we should speak out, we should share what we have realized with others.”[125]

And this is precisely what Joseph Weizenbaum had endeavored to accomplish. This book is a map for discovering the Island of Reason that is Joseph Weizenbaum.





[1] Joseph Weizenbaum, “The Paradoxical Role of the Computer.” Holst Memorial Lecture (1983), 13.

[2] Joseph Weizenbaum and Gunna Wendt. Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society (Duluth: Litwin Books, 2015), “We Have a Choice.”

[3] Weizenbaum and Wendt, Islands in the Cyberstream, “Childhood in Berlin.”

[4] Weizenbaum and Wendt, Islands in the Cyberstream, “Emigration to the USA.”

[5] Weizenbaum and Wendt, Islands in the Cyberstream, “Emigration to the USA.”

[6] Weizenbaum and Wendt, Islands in the Cyberstream, “Emigration to the USA.”

[7] Weizenbaum and Wendt, Islands in the Cyberstream, “Emigration to the USA” and “Being Different as an Opportunity.”

[8] Weizenbaum and Wendt, Islands in the Cyberstream, “About Computer History.”

[9] Weizenbaum and Wendt, Islands in the Cyberstream, “About Computer History.”

[10] Joseph Weizenbaum, Computer Power and Human Reason (San Francisco: W. H. Freeman and Company, 1976), 3.

[11] Joseph Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” Communications of the ACM 9 (1966), 36.

[12] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 42.

[13] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 37. In the original text this quotation appears in all caps to denote that it was a reply coming from ELIZA and not the human interlocutor.

[14] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 37.

[15] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 37.

[16] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 36.

[17] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 42.

[18] Weizenbaum, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine,” 42.

[19] Joseph Weizenbaum, “Contextual Understanding by Computers,” Communications of the ACM 10 (1967), 475.

[20] Weizenbaum, “Contextual Understanding by Computers,” 476.

[21] Weizenbaum, “Contextual Understanding by Computers,” 476.

[22] Weizenbaum, Computer Power and Human Reason, 189.

[23] Weizenbaum, Computer Power and Human Reason, 5.

[24] Joseph Weizenbaum, “On the Impact of the Computer on Society,” Science 176, (1972), 612.

[25] Weizenbaum, “On the Impact of the Computer on Society,” 611.

[26] Weizenbaum, Computer Power and Human Reason. A footnote on page 39 reads, “Chapters 2 and 3 are somewhat technical. The reader who is not comfortable with technical material might either skim these two chapters or postpone reading them until after the rest of the book has been read.”

[27] Weizenbaum, Computer Power and Human Reason, x. – Weizenbaum states of the manuscript that “Lewis Mumford read all of it.” Mumford’s influence is also clear in the development of Weizenbaum’s thinking about the technological metaphor which he links – in the article “On the Impact of the Computer on Society” – to “what Mumford (4) calls the ‘Myth of the Machine,’” 611.

[28] Weizenbaum, Computer Power and Human Reason, 5-7.

[29] Weizenbaum, Computer Power and Human Reason, 11.

[30] Weizenbaum, Computer Power and Human Reason, 11.

[31] Weizenbaum, Computer Power and Human Reason, 13.

[32] Weizenbaum, Computer Power and Human Reason, 13.

[33] Weizenbaum, Computer Power and Human Reason, 23-24.

[34] Weizenbaum, Computer Power and Human Reason, 25.

[35] Weizenbaum, Computer Power and Human Reason, 29.

[36] Weizenbaum, Computer Power and Human Reason, 31.

[37] Weizenbaum, Computer Power and Human Reason, 34.

[38] Weizenbaum, Computer Power and Human Reason, 35.

[39] Weizenbaum, Computer Power and Human Reason, 35.

[40] Weizenbaum, Computer Power and Human Reason, 37.

[41] Weizenbaum, Computer Power and Human Reason, 41.

[42] Weizenbaum, Computer Power and Human Reason, 39-110.

[43] Weizenbaum, Computer Power and Human Reason, 98.

[44] Weizenbaum, Computer Power and Human Reason, 109.

[45] Weizenbaum, Computer Power and Human Reason, 117.

[46] Weizenbaum, Computer Power and Human Reason, 121-123.

[47] Weizenbaum, Computer Power and Human Reason, 116.

[48] Weizenbaum, Computer Power and Human Reason, 118.

[49] Weizenbaum, Computer Power and Human Reason, 116. Weizenbaum goes on to describe them physically in the following terms: “Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and to the world in which they move.”

[50] Weizenbaum, Computer Power and Human Reason, 126.

[51] Weizenbaum, Computer Power and Human Reason, 126.

[52] Weizenbaum, Computer Power and Human Reason, 118.

[53] Weizenbaum, Computer Power and Human Reason, 126.

[54] Weizenbaum, Computer Power and Human Reason, 139-140.

[55] Weizenbaum, Computer Power and Human Reason, 138.

[56] Weizenbaum, Computer Power and Human Reason, 139.

[57] Weizenbaum, Computer Power and Human Reason, 145.

[58] Weizenbaum, Computer Power and Human Reason, 149.

[59] Weizenbaum, Computer Power and Human Reason, 157.

[60] Weizenbaum, Computer Power and Human Reason, 158.

[61] Weizenbaum, Computer Power and Human Reason, 176.

[62] Weizenbaum, Computer Power and Human Reason, 183.

[63] Weizenbaum, Computer Power and Human Reason, 188-189.

[64] Weizenbaum, Computer Power and Human Reason, 196-197.

[65] Weizenbaum, Computer Power and Human Reason, 197-198.

[66] Weizenbaum, Computer Power and Human Reason, 203.

[67] Weizenbaum, Computer Power and Human Reason, 204.

[68] Weizenbaum, Computer Power and Human Reason, 205.

[69] Weizenbaum, Computer Power and Human Reason, 207. Italics in original text.

[70] Weizenbaum, Computer Power and Human Reason, 213.

[71] Weizenbaum, Computer Power and Human Reason, 233.

[72] Weizenbaum, Computer Power and Human Reason, 227.

[73] Weizenbaum, Computer Power and Human Reason, 236.

[74] Weizenbaum, Computer Power and Human Reason, 241. Italics in original text.

[75] Weizenbaum, Computer Power and Human Reason, 242.

[76] Weizenbaum, Computer Power and Human Reason, 253.

[77] Weizenbaum, Computer Power and Human Reason, 253.

[78] Weizenbaum, Computer Power and Human Reason, 255.

[79] Weizenbaum, Computer Power and Human Reason, 255-256. Italics in original text.

[80] Weizenbaum, Computer Power and Human Reason, 256.

[81] Weizenbaum, Computer Power and Human Reason, 259.

[82] Weizenbaum, Computer Power and Human Reason, 264.

[83] Weizenbaum, Computer Power and Human Reason, 270.

[84] Weizenbaum, Computer Power and Human Reason, 276.

[85] Weizenbaum, Computer Power and Human Reason, 273.

[86] Weizenbaum, Computer Power and Human Reason, 276.

[87] Weizenbaum, Computer Power and Human Reason, 240.

[88] Weizenbaum and Wendt, Islands in the Cyberstream, “To Look Closely.”

[89] Weizenbaum and Wendt, Islands in the Cyberstream, “Who Takes Responsibility?”

[90] Joseph Weizenbaum, “Once more—a computer revolution,” Bulletin of the Atomic Scientists (1978), 17.

[91] Joseph Weizenbaum, “Not Without Us.” ETC: A Review of General Semantics (1987), 43.

[92] Weizenbaum and Wendt, Islands in the Cyberstream, “Islands of Reason.”

[93] Weizenbaum, “Not Without Us,” 46.

[94] Weizenbaum and Wendt, Islands in the Cyberstream, “About Computer History.”

[95] Weizenbaum and Wendt, Islands in the Cyberstream, “Islands of Reason.”

[96] Joseph Weizenbaum, “Social and Political Impact of the Long-term History of Computing,” IEEE Annals of the History of Computing 30 (2008), 41.

[97] Weizenbaum, “Social and Political Impact of the Long-term History of Computing,” 41.

[98] Weizenbaum, “Social and Political Impact of the Long-term History of Computing,” 41. Weizenbaum also refers to this quotation in the present work, “Eliza Today.” Weizenbaum further discusses, and critiques, Minsky’s work in Computer Power and Human Reason (157-158, 233-236) and in “Once more—a computer revolution” (16-18).

[99] Weizenbaum, “Social and Political Impact of the Long-term History of Computing,” 41.

[100] Weizenbaum, “Once more—a computer revolution,” 18.

[101] Weizenbaum, “Once more—a computer revolution,” 17.

[102] Weizenbaum and Wendt, Islands in the Cyberstream, “We Have a Choice.”

[103] Weizenbaum, Computer Power and Human Reason, 11. The list of thinkers Weizenbaum counts amongst this number includes “Mumford, Arendt, Ellul, Roszak, Comfort, and Boulding.” Over the course of Computer Power, he also draws upon the work of Max Horkheimer and Erich Fromm – two other individuals who raised concerns regarding “the unfettered march of science and technology.”

[104] Harvey Brooks et al., “Modern Technology: Problem or Opportunity,” Daedalus 109 (1980), 3.

[105] Weizenbaum, “The Paradoxical Role of the Computer,” 10.

[106] Weizenbaum, “The Paradoxical Role of the Computer,” 13.

[107] Brooks et al., “Modern Technology: Problem or Opportunity,” 22. Italics in original text.

[108] Brooks et al., “Modern Technology: Problem or Opportunity,” 22.

[109] Weizenbaum, Computer Power and Human Reason, 272.

[110] Weizenbaum, “Not Without Us,” 48.

[111] Weizenbaum, “Once more—a computer revolution,” 18.

[112] Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge: The MIT Press, 1996), 27.

[113] Edwards, The Closed World, 30.

[114] Sherry Turkle, The Second Self: Computers and the Human Spirit (Cambridge: The MIT Press, 2005), 189.

[115] Wendy Hui Kyong Chun, Programmed Visions: Software and Memory (Cambridge: The MIT Press, 2013), 48.

[116] Chun, Programmed Visions, 49.

[117] Weizenbaum, “Once more—a computer revolution,” 13. Italics in original text.

[118] See footnote 97.

[119] David Golumbia, The Cultural Logic of Computation (Cambridge: Harvard University Press, 2009), 8.

[120] Golumbia, The Cultural Logic of Computation, 53.

[121] Golumbia, The Cultural Logic of Computation, 4-5.

[122] Weizenbaum and Wendt, Islands in the Cyberstream, “Television and Internet.”

[123] Weizenbaum and Wendt, Islands in the Cyberstream, “About Computer History.”

[124] Weizenbaum, “The Paradoxical Role of the Computer,” 13.

[125] Weizenbaum and Wendt, Islands in the Cyberstream, “Islands of Reason.”


About Z.M.L

“I do not believe that things will turn out well, but the idea that they might is of decisive importance.” – Max Horkheimer @libshipwreck
