
My Name Is Legion: The Danger of a Single, Replicated Voice

Three years ago, my department decided to address a problem: teaching grammar. Our anecdotal evidence told us that our traditional method of teaching “proper” English simply wasn’t working. First, we were turning a lot of students off to the joy of language. Second, contrary to our hopes, the students who earned the highest grades—those who technically mastered the lessons—were sounding dry and alike rather than finding their own voices. Worst of all, we were building a class of kids who felt superior for knowing that data is plural or that whom is an object, and those kids often enjoyed mocking those who didn’t get it.

Our department decided to embrace a new model—one of inquiry where we isolated grammatical concepts across the work of a host of powerful writers and asked students to consider and experiment with some of the many ways those writers employed or subverted the technique. The result was powerful and immediate: students started writing more effectively and more originally, not because they were writing “correctly” (which, ironically, they more often were), but because they were exploring how to communicate their ideas in ways that played into and played with their readers’ expectations. In short, we were asking them to explore the expansive nature of language rather than to whittle it down to a single, “right” expression.

A decade ago, novelist Chimamanda Ngozi Adichie warned of the danger of a single story in a TED talk that now boasts over five million views. It’s hard to imagine any digital humanists who would disagree with her, even those who dedicated that decade to works in the Eurocentric canon. And yet, Roopika Risam’s piece, “What Passes for Human? Undermining the Universal Subject in Digital Humanities Praxis,” reminds us how quick we often are to accept a singular perspective when it comes to methodology. We may be expansive when it comes to narratives or content, actively seeking to broaden our scope. But we often remain expediently reductive when it comes to process. (Harrell takes that one step further: we are reductive even in the logic that drives our computing.) In our search to create lifelike machines of the future or algorithms that decide what images are memorable, the field assumes a universal human—one that springs too quickly from the white legacy of the ivory tower from which many DH centers spring themselves—and we often build from there without reflection.

Risam’s examples of the results of such single-view methodology are harrowing. She relays the disastrous effects of assuming a universal ideal of language, of beauty, of humanity. Tay, Microsoft’s AI chatbot, became a racist Holocaust denier within hours of “learning” from American social media. Hanson Robotics, in an attempt to make its humanoid Sophia “beautiful” (and therefore approachable), created a thin, white female akin to a sentient Barbie. I found a New York Times article about a similarly flawed project, Google Photos, which, because of the preponderance of white faces fed into the original data set, mistagged pictures of at least two black Americans as gorillas. That article conveys Google’s response to this singular thinking: “The idea is that you never know what problems might arise until you get the technologies in the hands of real-world users.”

But Risam says otherwise. She notes that we can anticipate such problems if we re-examine the lens through which we come to our processes and methodologies. In her words, “Given the humanity ascribed to such technologies, it is incumbent on digital humanities practitioners to engage with the question of what kinds of subjectivities are centered in the technologies that facilitate their scholarship.” (O’Donnell et al. note that those questions are relevant to the processes we use to seek out collaborators or to choose locations for conferences, and Harrell notes that those questions are relevant to the very electronic processes of our CPUs.)

Perhaps worse still is that even with evidence that a given praxis is fraught with gross cultural bias, many companies choose to eliminate the symptom rather than grapple with the problem. Instead of coding Tay to engage meaningfully with challenging subjects, she was re-versioned as a vapid chatbot who avoids any suggestion of political talk, even when the prompt is mundane, such as, “I wore a hijab today.” Instead of feeding more and better pictures of humans of color into their photo-recognition data set, Google deleted the tags of “gorilla,” “monkey,” and “primate” altogether. (The term “whitewashing” seems appropriate on a variety of levels here.)

Perhaps the most seemingly benign example of this proliferation of a singular view of what it is to be human—and, in many ways therefore, the most insidious—is Jill Watson, a digital TA built on IBM’s Watson platform that now responds to college students. There is the very fact of her: a narrowly defined version of a human (like Sophia and Tay) who is, beyond being born out of a Eurocentric mindset, also now a gatekeeper to Western knowledge. But more frightening still, she is scalable. That same, singular voice could respond to hundreds of thousands of college students. She is, in effect, legion. She and projects like her (from Siri and Alexa to even Google, I suppose) repopulate the globe with a monolithic way of thinking, despite more expansive shifts in national and world demographics, replicating exponentially a sliver of all that humanity has to offer.