15 January 2021

To Live In Their Utopia (and loose threads)

I’m not sure exactly what I’m waiting for (I mean, I know why I’m waiting, but that’s another post), but in the meantime, before I make the paper more publicly available, I’d like to unpack a few things that I didn’t find time to talk about in the paper itself. It’s a weird experience to have this much I wish I could have talked about so soon after wrapping up a paper (usually I’m so wiped out that I just need a break), but I’ll take it as a good, or at least generative, sign.

To Live In Their Utopia

If you’ve read my post about “digital forests”, then you have a sense of some part of the paper that’ll be coming out. I unpacked the idea that James Scott introduces in Seeing Like a State, trying to find useful language to talk about how and why algorithmic systems cause the harms they do, to the specific people they do.

If you’ve read my street-level algorithms paper, then you might be thinking “you talked about this, didn’t you?” and the answer there is basically “no”; my feeling about the street-level algorithms paper was that it explains why an algorithmic system makes the wrong calls once it’s in one of those marginal situations, but it didn’t give me a good answer for why AIs end up in those situations in the first place.

The basic idea that I came away with is that it might be most useful to think about AIs as constructing a world from the data we give them to represent the world. People construct their worlds, and to some extent I want to elicit the feeling that the world, in any meaningful sense, is constructed by AIs as well, but I didn’t go very far with that thought because the way we talk about what and how an algorithm “understands” (see, that word is already problematic) is a little clumsy - or rather, I feel clumsy trying to navigate this area. There’s a whole body of scholarship working through what we mean by “responsibility” (there are different meanings, and they’re not as obviously distinct when we talk about people as when we talk about machines; again, another digression), but I didn’t (and still don’t) want to get into that. Talking about algorithms having a tacit understanding of something, being “responsible” for something, etc… sometimes descends into madness depending on whether your definitions of those words come from a social context or a psychological one (which is sort of a theme, isn’t it?).

All this is to say that AIs harm people - specifically marginalized groups - because the algorithms that construct models of the world from data construct worlds in which everything they observe is explained by other things in the data that went into the model. But Bowker & Star and tons of others have written about how shallow and incomplete the data we use to describe the world is compared to our actual, rich experiences of things.

That oversimplification is okay when it’s just me looking at step counts and recognizing that there was a 2-week chunk of time when I wasn’t running because I had recently moved and needed to self-quarantine. There’s all this surrounding information about what’s going on in the world, what the particular details of the pandemic are, what the public health regimes are for the places I was living and am now living, etc… that step_count can’t possibly express. But as a person I can look at that metric and (not quite, but almost) disregard the period of time where I know there’s more to the story than what’s represented in the data.
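If it helps to make that concrete, here’s a toy sketch (entirely made-up numbers, nothing from the paper): a person summarizing their own step counts can set aside a window they know isn’t representative, but that exclusion rests on context - the move, the quarantine - that never appears anywhere in the data.

```python
# Toy sketch with made-up step counts (hypothetical numbers, not from the paper).
# A naive summary treats every day the same; the "in-context" summary excludes a
# quarantine window that only the person - not the data - knows about.

daily_steps = [9500, 10200, 8800] + [1200] * 14 + [9900, 10400, 9700]

quarantine_days = range(3, 17)  # known only from lived context, not from step_count

naive_avg = sum(daily_steps) / len(daily_steps)
contextual_avg = sum(
    steps for day, steps in enumerate(daily_steps) if day not in quarantine_days
) / (len(daily_steps) - len(quarantine_days))

print(f"naive average:      {naive_avg:.0f} steps/day")   # dragged down by the gap
print(f"in-context average: {contextual_avg:.0f} steps/day")
```

The arithmetic isn’t the point; the point is that quarantine_days comes from lived context outside the dataset, which is exactly the kind of information a model trained only on step_count never gets.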

Algorithmic systems don’t necessarily have that information. Or rather, computational models are always missing something. What they’re missing is never obvious until you look at it later, and even then you may not realize it until much later - and you may never realize it at all if you’ve never lived that experience.

John Jackson wrote about “thin description” in 2013, writing about African Hebrew Israelites, to get across that even people don’t really understand everything about one another; the idea of doing a “thick description” in the sense that Geertz meant is hubris and folly. Ruha Benjamin pulls that point further to talk about technology in Race After Technology, which is also an important part of my paper (it doesn’t take up as many lines in the paper, but I consider it central in the same way that Seeing Like a State and Utopia of Rules are central).

I’d hate myself if I didn’t underscore the conceit of the paper’s title: “To Live In Their Utopia” is the beckoning call that AIs make to people whose lives meaningfully don’t fit into the boxes that trained the AI. AIs might construct a world where race isn’t a thing of its own, where white supremacy and slavery aren’t features of the training data. And so the world that the AI constructs is this delusional utopia where those things aren’t factors that explain why someone has $8 to their name while their neighbors have $250,000. It’s something about the ZIP code, or something else; the direct answer - “because their ancestors were slaves” - just isn’t something a computational model can wrap itself around.
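To gesture at what I mean with a toy sketch (synthetic numbers I’m inventing for illustration, not an analysis from the paper): when the history that actually produced a wealth gap never appears in the training data, a model will happily “explain” the gap with whatever correlated proxy it does have - like a ZIP code.

```python
# Toy sketch with synthetic data (hypothetical numbers, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# "history" stands in for the real cause of the gap; it drives wealth,
# but it's never handed to the model.
history = rng.integers(0, 2, size=n)

# Segregated geography: zip_code lines up with history about 90% of the time.
zip_code = history ^ (rng.random(n) < 0.1)

# The wealth gap is produced by history, plus a little noise.
wealth = 250_000 * (1 - history) + 8 * history + rng.normal(0, 1_000, n)

# The model only ever sees zip_code, so zip_code is what "explains" the gap.
X = np.column_stack([np.ones(n), zip_code])
coef, *_ = np.linalg.lstsq(X, wealth, rcond=None)
print(f"the model's world: wealth = {coef[0]:,.0f} {coef[1]:+,.0f} * zip_code")
```

The fit comes out looking great, and in the world the model constructs, geography explains the gap; the history that actually produced it simply doesn’t exist in that world.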

Back in June 2020 I was listening to an online conversation with Frank Wilderson III, the author of Afropessimism, who said (quoting from a frantic hand-written transcription):

The brown person embodies the demand of the return of the southwestern US; the red person embodies the demand of the return of the midwestern US; but when the Black person enters, here is the embodiment of the demand for which words can never express.

How can computational systems express these things, if people struggle to express the depths of this injustice to one another? What hope do neural networks have? What good are they if they’re this far from expressing, let alone understanding, let alone perceiving, the depths of the underlying reasons behind all of these observations we feed into the systems we use?

I’m getting away from the reason I wanted to write this post, which was to talk about some of the things I decided not to cover because I just didn’t have the space to really unpack them, or the time or expertise to do them justice.

Loose threads

There were two or maybe three major threads I wished I could have pulled on.

How do people work around these increasingly ubiquitous regimes?

Forrest Stuart wrote about drillers in The Ballad of the Bullet, mostly focusing on the aspirations drillers express and pursue to get out of Chicago and get rich and famous. But he also briefly talks about how drillers learned about YouTube analytics and figured out what kinds of behaviors trigger YouTube’s promotion systems (naming other drillers, doing collaborations, posting at certain times and frequencies, etc…), and I think this would have been a really interesting area to dig into.

Brooke Erin Duffy has written about cosmetics/fashion YouTubers struggling to make careers out of the videos they post, and I’ve been seeing more and more Instagram influencers talk with each other about what “the algorithm” does (and how it’s changed, when they think it’s been updated in a decisive way). I think the closest people that come to mind are Motahhare Eslami (who’s done a lot of work on perceptions of news feeds and algorithmic news feed curation), maybe Megan French (who’s written about folk theories of algorithmic systems), Sophie Bishop (who’s actually written about discussions of how YouTube’s algorithms work - “algorithmic gossip”), and Kelley Cotter (who’s been thinking about Instagram influencers trying to be visible, which I just mentioned).

All this is to say that there’s definitely a ton of work in this area, coming from various places (a lot of Comm, some Anthro, some STS, some HCI), and I think if we’re going to treat this stuff as consequential to the design of systems (in vaguely the way that people need to study and talk to people experiencing homelessness before designing programs meant to help them), then there’s room for someone to bring all of this work together, or at least for a paper that tries to. Framing this as something we need to understand in the terms of the people experiencing these systems (that is, meeting people where they are), and framing us as designers as akin to people who nominally serve the public, would be a different framing than the way we tend to talk about designing algorithmic systems (where we talk about things in terms that trace benefits back to the interests of the company running the system).


I’m going to breeze through the other topics because this post is getting long and I didn’t want to spend this long on it. I’m sure there’s work on this stuff, but it doesn’t immediately come to mind, so feel free to point it out by contacting me.


How are rules applied or felt differently?

I’ve been talking about this a bit on Twitter recently, but the whole discourse about whether suspending the president’s Twitter account violates his First Amendment right to speech is so stupid it’s baffling. He has other platforms, and Twitter doesn’t have an obligation to preserve anyone’s right to speech; it’s their platform. We should definitely talk about how messed up it is that one platform’s decision is so consequential, but Twitter being too big for its own good isn’t a good reason to say that the president is above the rules.

It doesn’t make sense to say that the people who are influential, who have power, who have voice, should necessarily get a pass to be more problematic on your platform, and yet that’s the tack Twitter has taken. Is the logic that because he already has all this mass swirling around him, we have to feed it with more mass? That’s ridiculous. If he’s newsworthy, let him go publish a book or write missives on White House stationery.

If anything, my concerns are less about the president than about everyone else. A year ago none of these platforms were wringing their hands about deplatforming sex workers, for whom the little slivers of information about potential clients (shared by talking to one another on forums) meant the difference between life and death. Sex workers have died since these platforms gave up the fight against FOSTA/SESTA.

Platforms talk about applying rules equitably, but they know that for people with resources, any kind of bureaucratic obstacle is trivial, whereas for people whose lives hang by a few threads, the bureaucratic hurdles they have to jump over to get access to their money (PayPal) or to their audiences (Patreon, Instagram, Twitter, TikTok, etc…) are tremendous. Applying the rules evenly, without regard for how consequential that enforcement will be in a person’s life, is sadistic, and I think we could draw from scholarship in the social sciences on how requirements like IDs and drivers’ licenses and other administrative demands are disproportionately felt by people working hourly jobs, who are already at the margins and now have to answer yet another bureaucratic demand.

People talk about how a single parking ticket could wipe out a Black woman while hundreds of parking tickets would mean nothing to Jeff Bezos; we should talk about how Twitter temporarily suspending a Black woman who’s trying to use that platform as part of her career is far more disruptive to her life than permanently suspending Donald Trump is to his, because at the end of the day he’ll be fine (if he can stop having a tantrum).

I’d be remiss if I didn’t note that Uber tracks your phone’s battery level and has gone to the trouble of verifying that you’re willing to pay more if your battery is going to die. They insist that they don’t use this information to set prices; I think I speak for everyone when I say I categorically don’t believe them. But beyond that, what am I supposed to do with the knowledge that when Uber quotes a surge rate of 3x or 4x, it’s very aware that it’s quoting that rate to someone who’s more desperate than usual to get a ride home? More to the point: how likely do you think you’d be to get away with this kind of behavior in a small-scale app you made, selling something like rides home to people who may be vulnerable - drunk, say, or worried about their safety?

How do people with power buy or flex their way out of algorithmic absurdity?

This kind of dovetails with the previous question. I’ve heard rumors of people having inside contacts at YouTube to talk to when their videos get demonetized; it seems like the kind of thing that only people with enormous followings get to have, while people with a few hundred thousand subscribers (or even a million or more?) don’t get to benefit from it; instead they have to deal with the algorithmic appeals processes.

The same sort of stuff happens with Apple’s app review process; some developers (Facebook, Uber, etc…) not only seem to get special access to Apple when something breaks for them, but when the apps these companies produce are found to be doing egregiously inappropriate things, they get to negotiate their way out of it in ways that, as I understand it, smaller devs don’t. After it came to light that Facebook had been paying teens to install spyware on their phones, and after Apple suspended Facebook’s developer credentials, Facebook was able to get back into Apple’s good graces; does anyone believe Marco Arment would’ve gotten that kind of accommodation if Overcast (arguably the best podcast player on iOS) had been found doing shady stuff like that? And Marco is fairly prominent, with a large audience of tech enthusiasts (and undoubtedly a number of people within Apple). How do you feel about your chances of keeping your developer account if you pulled that kind of stunt?

Who are these systems games for?

Again, touching on some of the earlier ideas: Graeber wrote about the appeal of games and play in Utopia of Rules, talking about how games are constructed rules that people can walk away from whenever they want. I think this might actually be a really effective way to frame the difference between algorithmic systems as they’re felt by vulnerable groups and as they’re felt by people with power. It shines a light on the disparate experiences people have of these systems, and on how people struggle with the decisions those systems make.

For people with power they can draw on (or return to), the rules and algorithmic decisions of any single place or system don’t really matter that much. But for someone whose life and voice and power are so strongly mediated by this one platform, suspension isn’t a game; it’s a serious threat. People who don’t invest in their identities on Twitter, who can create new sockpuppet accounts on a whim, don’t have to care about the rules or about the arbitrariness of Twitter’s appeals policy.

I think this ties into a bunch of the questions I just outlined, but I hope that this conveys it more as a way to arrive at a generalizable theory than as a series of ethnographic accounts or vignettes.


Anyway, that’s where my head’s been at recently; a lot of these are very generally in the same kind of space, asking how this kind of framing changes what kinds of questions we’re asking, and how. I’m thinking about how I would pursue a few of these thoughts, but for now I’m focusing on the paper as it exists and as it tries to express the framing, rather than on all the new ways I could think about how algorithmic systems manifest and express values and legibility.

If you have something to say about this post, email me or tweet at me.