26 March 2019

Anthropological/Artificial Intelligence & the HAI

Last week Stanford launched the Institute for Human-Centered Artificial Intelligence (HAI), and to kick things off James Landay posted about the roles AI could play in society and the importance of exploring smart interfaces.

I’ve followed the HAI’s development in passing, and I watched the inaugural event in the background on Monday last week while I was doing other work. I study algorithmic systems that make important decisions about us - which I call “street-level algorithms” in reference to Michael Lipsky’s street-level bureaucracies - and some of my past work has taken a more careful look at historical parallels to things we see today (between the quantified self movement and piecework, for example) to see if we can learn anything useful, whether for making sense of phenomena from a sociological perspective or for informing the design of systems from an engineering perspective. James is a professor in the Human-Computer Interaction group at Stanford, and I’m a PhD student in that group.

So I was worried to find that James had left details out of a series of anecdotes - details that would seriously undermine the point he seemed to be trying to make in his post. I started writing notes to call out how a more cynical perspective might describe the future, or remember the past, that James writes about; but with the launch of the HAI, the reaction from people around the world, and specifically the responses from people in the HAI, it seems there’s a more serious point that needs to be made.

The voices, opinions, and needs of disempowered stakeholders are being ignored today in favor of stakeholders with power, money, and influence - as they have been historically; our failure to listen promises to doom initiatives like the HAI.

James opens with a story of an office that senses you slouching, registers that you’re fatigued, intuits that your mood has shifted, and alters the ambiance accordingly to keep you alert throughout the day. This, James promises, is “a glimpse of the true potential of AI”. Fair enough, I suppose. I believe that he believes in a future of work wherein his environment conforms to his desires, and makes his life better.

But here’s another glimpse: someday you may have to work in an office where the lights are carefully programmed and tested by your employer to hack your body’s natural production of melatonin through the use of blue light, eking out every drop of energy you have while you’re on the clock and leaving you physically and emotionally drained when you leave work. Your eye movements may someday come under the scrutiny of algorithms unknown to you that classify you on dimensions such as “narcissism” and “psychopathy”, determining your career and indeed your life prospects. Systems with access to photos of your face may someday classify you on the basis of their hunches about your sexuality. Your social networks can - and it turns out do - aim to manipulate your emotional state with neither your knowledge nor your consent.

That the effect size of the social contagion study was negligible may be some consolation; it turns out computer scientists don’t really know how to effect any particular emotional state except for outrage.

James’s vision of the future calls for some amount of tracking of physiological measures like your posture and eye movement. Do you feel comfortable being measured, evaluated, and ultimately managed by your employer at that level? Teachers in West Virginia answered that they did not, pushing back on a policy rollout that seemed all but inevitable. What sort of collective action might take place to push back against the unilateral, unregulated, unaccountable manipulation of people’s emotions, uninvited guesses about people’s sexualities and genders, and opaque decisions about the lives of all people? I and others have thought about this problem - collective action - and the short version is that it’s incredibly hard (Francesca Polletta aptly summarized this in the title of her book on participatory democracy, Freedom Is an Endless Meeting); there are no panaceas.

What’s the point of digging into this one anecdote, if it was meant as little more than an offhand comment to segue into the deeper point? The point is that this vision is stunningly unaware of the barbs that envelop a suggestion like tracking every aspect of your life at work. It’s a casual, even optimistic, vision for someone whose career wasn’t principally characterized by monitoring, surveillance, and punishment; for drivers who can’t afford to sleep, for Amazon delivery workers who have to urinate in bottles while they make deliveries, and for domestic workers who have no idea whether they’re going to be safe in the next home they clean, this future is a threatening one. Stefan Helmreich wrote about this 20 years ago in Silicon Second Nature, and it seems to remain true today.

… researchers are encouraged to take their privileges for granted, even to the point where these become invisible […] ignor[ing] how much labor is done for them, labor that allows them to be flexible, self-determining, and independent.

- Helmreich 1999

James goes on to write about Engelbart’s “mother of all demos” in 1968, the introduction of a host of features of modern computing that we use every day: text editing (including the ability to copy/paste), a computer mouse, a graphical user interface, dynamic reorganizations of data, hyperlinks, real-time group editing (think Google Docs), video conferencing, the forerunner to the internet, the list goes on. What he doesn’t write about - what few of us talk about - is the funding the Stanford Research Institute got from the Department of Defense, the role the DoD played in the development of the internet and of Silicon Valley itself, and the uncomfortable readiness with which we collaborate with power. We’re shaping our work toward the interests of organizations - interests that are at best neutral and at worst in opposition to the interests of the public.

John Gledhill wrote about the work of political anthropologists in the 1940s and 1950s in Power and Its Disguises, arguing that “the subject matter … seemed relatively easy to define,” and outlining that the ultimate motivation of government-sponsored political anthropology like E. E. Evans-Pritchard’s study of the Nuer people was that “… authority was to be mediated through indigenous leaders and the rule of Western law was to legitimate itself through a degree of accommodation to local ‘customs’” (Gledhill 2000). The danger of aligning our work with existing power is the further subjugation and marginalization of the communities we ostensibly seek to understand.

the cruelty and everyday violence of our world is the result of dominant people and institutions abusing the kind of people [we] habitually study.

- Gledhill 2000

One of the most frustrating aspects of human-computer interaction isn’t the common refrain that we haven’t yet settled on a definitive core body of work that every practitioner should know. That would at least be a tractable problem. It’s that we’re not all on the same page about important facts about the origin of our field. For some people, Engelbart’s demonstration was a singular vision of the future of computing; for others, it was the product of more than a decade of very carefully managed and guided work leading up to that point.

James’s post was a springboard for the launch of the HAI. In the same way, my critique starts with his post, but my thoughts since then have shifted to the HAI more generally. I’ve said elsewhere that the notion of a human-centered AI institute strikes me as a contradiction in terms. Human-centered design doesn’t start with a solution as the premise; it starts with humans, and brings to bear whatever solution makes sense for the problem at hand. An initiative that starts with AI as a premise is, maybe tautologically, AI-centered; it will always look for ways to argue for the continued existence, development, and application of AI.

Computer scientists have utterly failed to learn from the history of other fields, and in doing so we’re replicating the same morally objectionable, deeply problematic relationships that other fields could have warned us to avoid - indeed, have tried to warn others to avoid. Political anthropologists of the 1940s “tended to take colonial domination itself for granted” (Gledhill 2000), and in doing so the field fashioned itself principally as a tool to further that hegemonic influence, finding ways to shape indigenous cultures to the interests of colonial powers.

We should be thinking about the relationship we have with institutional powers; do we enhance their hegemony, do we stand by and do nothing, or do we actively resist it?

This isn’t the first time we’ve faced such a crossroads. In the mid-20th century, anthropologists substantively informed intelligence operations during World War II. We came out of that with blood on our hands, but there was consensus that what we had done was morally right. It was World War II, and Nazism threatened the “psychic unity of humankind”. Anthropologists conducted interviews, reviewed historical works, studied philosophical texts, and ultimately produced classified ethnographic accounts of Japanese and other cultures. We produced manuscripts detailing how to undermine cultures and to secure American dominance in war. We even reflected on how we had annihilated Native American cultures, and whether that had served our own ends: “in an attempt to hit at what was supposed to be the sole or main function of the chief, his many other functions were overlooked, social balance was seriously disrupted and a disintegration for which we had not bargained took place.” (see Janssens 1995).

Almost incomprehensibly, we came out of that war exuberant in our optimism about anthropology’s potential. We had weaponized our field and found that we had in our hands the power to achieve widespread manipulation of entire cultures. It was an awful power, but we believed that we could use it to make the world a better place. That optimism carried us well into the 60s - past the dissolution of the Office of Strategic Services (the predecessor to the CIA) and the creation of the CIA itself - as we continued to use our skills to reshape the cultures we had once set out to understand.

It took us more than a decade to realize that the proxy wars, covert actions, and coups of the Cold War were morally very different from fighting Nazis - “a unique situation at that time, which may never recur”, as Mead put it (quoted in Price 2008). Slowly, we developed a vocabulary and a foundation for addressing the ethics of “applied anthropology”. Slowly, we distanced ourselves from the violence we had helped commit around the world, hoping to forget what we had done.

I bring all this up because anthropologists, like computer scientists today, had the attention of the government - and specifically the military - and were drowning in lucrative funding arrangements. We were asked to do something that seemed reasonable at the time. It took us decades to realize that we weren’t listening to, and certainly weren’t thinking about, the people we were “studying” - people whom we asked to trust us, and whose confidences we violated. We eventually came to appreciate some of the things that we value today - respect for participants and an overriding commitment to abide by the wishes of the people we study - but that lesson was hard-learned.

Today it isn’t the government so much as private corporations that direct research agendas and funding. Facebook, Google, Amazon, Twitter, and others offer substantial funding to people who conform to their ethics - an ethics that fundamentally answers to shareholders, but not necessarily to the people whose lives are wrapped up in these systems. Numerous laws on the regular disclosure of the financial state of publicly traded companies carefully ensure that those companies are responsibly pursuing the best business decisions, but the United States still has almost no laws concerning the handling of data about us, the ethical conduct of research on or about us, or even the negligent mishandling of private data.

The conflicts of interest are almost innumerable and mostly obvious; it should be blindingly obvious that organizations discussing the ethical application of AI should not be composed mostly of venture capitalists, AI researchers, and corporate executives whose businesses are built on the unregulated (or least-regulated) deployment of AI. And yet, here we are. Somehow.

Anthropology continues to think deeply and carefully about building trust and being transparent precisely because we were suspect and untrustworthy for so many years, particularly in Central and South America, where we endorsed espionage conducted under the guise of anthropological fieldwork and, more subtly, shaped our work and our ethics to contemporary power structures. In doing so, we allowed countless people to be harmed or killed as a result of the work we did.

All this because we didn’t work with the people we came to study and learn from. Instead, we sought to manipulate them for a greater good that we were sure we knew better than they did. We deliberately excluded them from our discussions of ethics, failed to respect them in our prioritization of goals, and fashioned codes of ethics that reflected our needs, but never theirs. I see all of this in my field - computer science - today, and it scares me: I’ve attended more panels than I can count about the ethical concerns of studying users by manipulating their news feeds, but vanishingly few - if any - that solicited the input of anyone other than researchers, whose understanding of computing and whose interests are “niche”. The Association for Computing Machinery’s code of ethics vaguely outlines general guidelines that can be interpreted too loosely to be practicably useful, even for someone who wants to be ethical. For bad actors, the threat of being struck off the roster of the ACM - a consequence that has almost no bearing on designers of computing systems - is toothless.

I’m telling this story to emphasize that these dilemmas aren’t new; to show how another field faced similar temptations to pay attention to what institutional, financial, and governmental powers wanted, how that field gave in to those temptations, and how we came to regret it; and to underscore that failing to represent the needs of disempowered stakeholders - the “sources of data” in our ethnographies or in our algorithmic models - will come back to haunt us. Anthropologists weaponized our skills for a relatively brief period - from a few years before Engelbart enlisted in the Navy until around the time he demonstrated the NLS - but we all still pay for it all these years later.

Let’s be clear: inviting Henry Kissinger and Condoleezza Rice to the inaugural event was stupid, but it is a microcosm of the broader issue. Are we chasing the influence and prestige of people in power, with no regard for the violence they’ve committed or the people they’ve committed that violence upon? Breaking bread with a war criminal suggests that we are. Are we going to continue to ignore the people whose data we use to train our machine learning models, and upon whom we impose our AI? Appointing no publicly accountable stakeholders to the board, and filling the leadership of the HAI with white men who have no experience of being marginalized by technological systems, suggests that we are. Are we going to fashion weakly worded, half-hearted codes of ethics that give us latitude to interpret them in ways that would support the objectification of others, so long as they don’t find out? The appointment of numerous researchers and thinkers in this space whose work can substantively be characterized as manipulating and harming others suggests that, regrettably, we are.

So what should the HAI do? There are a few things that the people within the HAI leadership should do to begin building credibility as a human-centered organization, if that is the ultimate goal:

  • If the HAI did reach out to people from marginalized groups and the public and got turned down, they should stop for a moment and reflect on why. Maybe even ask those people what gave them pause. And they should listen.
  • The HAI should reconsider the leadership appointments of anyone with vested interests in the proliferation of AI. My position is not that we should exclude AI from our lives and from society, but that it’s not credible that an organization is human-centered if it starts from a place presupposing AI’s necessity. That’s AI-centered, not human-centered.
  • The HAI should also think (much) more critically about where and from whom it gets its money and what those entities expect to gain from bankrolling the HAI, and it should be open to the prospect of turning away donations and grants, no matter how substantial. Organizations whose reason for existing is predicated on the deployment and expansion of AI are simply too compromised; the HAI should reject and return funding from those sources. (And the fact that other institutions have taken ethically hazardous funding, or that we’ve done it in the past, doesn’t make it okay - that’s a “tu quoque” fallacy; it’s at best a weak moral stance, and more likely it will be regarded as moral cowardice hiding among the unremarkable of our time.)

If you’re not in the HAI but you’ve been following along all this time, first: thank you for reading.

Second, the question remains of what you can do. I have some thoughts on that:

  • The worst possible outcome for us is not that the HAI fails. The worst possible outcome for all of us is that the HAI succeeds, accumulating influence, power, and money that should have gone to honestly human-centered projects. The best way to resist that outcome is to deny it authority on the subject of ethics in AI; as long as it doesn’t make any credible effort to represent all of us, we cannot allow it to speak for any of us.
  • Find organizations that use their influence for the benefit of people who have been disempowered and oppressed. Black in AI is a good example. I’ll update this page as more come to mind or to my attention.
If you have something to say about this, contact me.