29 March 2024

AI Kremlinology

I’ve been trying to dig myself out of a pretty deep depression for some time now, and while I still don’t feel like I’m out of it, I have at least found some sense of where some of this depression is coming from. I mean… in addition to all the other stuff.

A while back I wrote this paper where I tried to describe the kinds of violence that I was observing in complex algorithmic systems (let’s call them “AI” or “machine learning systems”, but this will all probably apply more broadly than either of those terms, whatever they actually mean). It seemed like people were experiencing a lot of violence that wasn’t mapping easily to a naive conceptualization of harms that was popular to talk about at the time, so I was looking for something that could kind of… bring into focus what felt like something sharp cutting obliquely - not along the dimensions that we were used to talking about.

I ultimately wrote about bureaucratic theory and administrative systems because the idea of a state - and all the history, theory, and insights about state violence - being brought to bear on seemingly new systems really appealed to me. It made sense of a lot of the kinds of harm that otherwise seemed hard to parse for someone who had spent most of the preceding five years in a PhD program working among relatively ahistorical computer scientists.

It also appealed to a sense I had that these systems were new in a strict sense… but didn’t feel new, in that we were seeing black people denied bail, disabled people rejected from jobs, and trans people run out of groups and spaces for talking about gender identity… all stuff that was decidedly un-novel. Whatever you wanted to say about how these systems work in new or remarkable ways, they sorta seemed great at replicating unremarkable, old clichés of violence.

Historically minoritized and particularly vulnerable people arguably know how bureaucratic systems work - certainly better than any political scientist without direct experience does. To reference a recent episode of the 5-4 podcast, it’s not totally surprising that black rappers predicted the incomprehensible rationale of the Supreme Court ruling that drug-sniffing dogs don’t infringe on your constitutional protections against unreasonable searches; Jay-Z is a black man, and in that way he knows the law. Which is to say, he knows the law as a function of power and oppression, in a way that the half-dozen constitutional law scholars quoted in the New York Times a while ago - all in the midst of various existential crises over the Supreme Court’s more transparent politicization - didn’t really know it.

Constitutional law scholars, generally, don’t have an experience with the law as quotidian, as visceral, as life-or-death as black people in the US often do. Maybe it’s all to say that law professors need to know their subject in a professional way; black people, trans people, disabled people, people who are routinely harassed and targeted by cops all need to know the same subject in an existential way.

I digress; the point is that people with real experience pressed up against police, bureaucratic violence, etc… know administrative violence in a lot of the same ways that minoritized people know AI systems and their harms. Which is to say that there’s a community of technical people who know about AI professionally, and then there are people who actually know these systems because not knowing could have existential consequences.

At the end of my paper, I pointed out that two things make this algorithmic harm really possible:

  1. the capture of people within those systems, often against their will; and
  2. the lack of consequences for the system making the “wrong call”.

I think I didn’t unpack my condemnation of systems that both capture people in these domains and also face no consequences for doing terrible things to those people, but I felt like I was painting a sufficiently clear picture of a kind of incoherent, senseless, bureaucratic dystopia - a utopia for the systems themselves rather than for us - and I hoped that people would instinctively pull back from that cliff upon seeing the precipitous fall.

Instead, it sort of seems like people embraced that incoherent, senseless, bureaucratic dystopia… and dove right in.

In the time since then, the ascent of generative AI has turned a lot of human processes and activities - learning, art, basic correspondence; job, rental, and school applications; and myriad other things - into a mess of noise and garbage. And it doesn’t seem like that’s getting any better, to be honest.

For some of that time, I was director of the Center for Applied Data Ethics, where I had some space to think about and try to engage with this stuff. But wave after wave of AI systems, whether built for generation or for discrimination & classification, made everything seem increasingly absurd and senseless. Because these systems weren’t constructing worlds that made sense according to any intrinsic reason; they were creating rules entirely extrinsically, without any reason or rationale behind them, purely informed by the patterns of observations in datasets - datasets not designed for these applications, to be clear.

Not being able to make things make sense made me depressed, which made me less able to make things make sense, which made me more depressed, which made me less able to make things make sense, which made me- You see where this is going.

None of this is the insight. Welcome to where my head was at, like… 2 years ago.

Over time, I felt haunted by a sense that these AIs were like a Kafkaesque nightmare. There was nothing that made sense about why these systems made any decisions that they did - it was just arbitrary, stochastic garbage. They just optimized to replicate patterns that they observed. And again, rather than pull back from the meaningless, senseless, dystopian crap that I felt was pretty clearly awful… A lot of people, and a lot of tech companies, and a lot of governments all seemed to be lured in by the tantalizing prospect of a bullshit generator of their own, finally doing the work for them.


I’d like to tell you a story; would that be okay? If not, you can skip to the next horizontal line.

In 2009 or 2010, filing tax returns online seemingly wasn’t as common as it is now. Maybe it’s more accurate to say it was something I wasn’t aware I could do. I remember filling out my tax information, printing out some documents, and driving my car over to the nearest regional processing facility - this huge shipping facility in San Jose - where you could drop off the mail.

This bit requires some explanation. Post offices and mailboxes all indicate a “last pickup” time. Sometimes it’s as early as 1pm, sometimes as late as 7pm. The idea is that if you drop off mail in a mailbox before its “last pickup” time, then the mail will be postmarked with that day’s date, and if the recipient cares about that sort of thing, they’ll cut you some slack if your envelope arrives late.

This was one of those things. The IRS wouldn’t penalize you as long as your tax returns were postmarked April 15 or earlier. So if you blew past 1pm or 4pm or 7pm, you might think you were out of luck… but regional processing facilities tended to be open later, and on Tax Day this one would push its “last pickup” time to 11:59pm.

(It was a bit of ridiculous fiction, but this is the fiction we all chose and continue to live in. We just have the internet now, so you can file your returns without embarrassing yourself the way I did.)

The San Jose Processing Facility was a bit far, maybe 30 or 40 minutes away if there wasn’t traffic, or about an hour if there was. So it was worth filling out forms until 10 or 11pm and driving to the processing center in the industrial part of San Jose, near the airport.

This wasn’t a building that regular people were necessarily supposed to access all the time. When I say “USPS regional processing facility”, whatever industrial-looking picture you’re thinking of in your head is probably not industrial enough. Picture a shipping yard sweeping up a sprawling parking lot; it was basically that.

I cannot put into words the chaos that you would have seen if you visited that post office at 11:30pm on April 15th in any given year. People stopped their cars in the middle of the street, and ran up to the wrong side of the building where all the trailers were, their tax returns clenched in their hands, held above their heads, as if desperately looking for a relay partner to hand a baton off to.


I think my depression came from an intuition that documenting and describing the peculiarities of AIs that cause harm seemed… too peculiar to me. I know and respect a lot of the people who are leading the way on AI auditing work, but writing papers auditing algorithmic systems is a prospect that I could best describe as a kind of “AI Kremlinology”. The idiosyncratic, outsider observations of what little clues the AI systems dropped, and a lot of the writing about how and why these systems did what they did… felt a little too speculative and uncertain.

Worse, it seemed like anything I wrote about how these systems worked (or didn’t) could be modified and rendered irrelevant in the span of a few hours. And not because anyone at Google or Anthropic or any other company had any idea what they were doing, but merely to paper over the observations that I and others like me might have made. You would end up with some clunky system full of papered-over potholes, generating images of Nazis who are exclusively people of color. Again, just incoherent, nonsensical garbage.

Maybe I also feel apprehensive, or maybe just frustrated, about the overarching idea of auditing in the sense of trying to construct knowledge about these arbitrary, senseless, harmful systems. The entire idea of “rationalizing” a senseless bureaucracy seems like a kind of desperate ploy to reinstate “our kind of knowledge” after being kicked out for having built these shitty systems in the first place. I don’t know if people need the language or the frameworks of computer scientists to formalize, rationalize, or legitimize the knowledge that oppressed people already have of the ways these systems suck and cause harm.

I think black rappers predicted the reality of their rights better than a constitutional law scholar would have at the time because they fundamentally needed to know how the system worked, without any of the high-minded nonsense that the ivory tower sometimes coddles and nurtures. I think incoming law students have a better sense of the reality of constitutional law and the Supreme Court because they’re more tethered to the world than law professors are. They have to be; law students don’t have tenure.

I don’t really want to write about the strange and exceptional traffic patterns outside the post office distribution center in San Jose on April 15th. I’m not saying it wasn’t chaotic, or dangerous; I’m certainly not saying it wasn’t important. But even as I write this out, it feels like I’m a strange old man telling you about how people used to have ice delivered every week for their iceboxes. Like, okay, that’s wild, but I don’t know what to do with that story anymore because refrigerators haven’t worked that way for like a hundred years. It’s such a dated and strange anecdote, and it lost relevance rapidly as electric refrigerators replaced the icebox.

By far, the more important and enduring story would be one that sees through the ice delivery truck and sees the rearrangement of natural resources away from indigenous communities for the benefit and comfort of white people who are, it should be said, living on stolen land in the first place. And that analysis is actually pretty indifferent to the rise or fall of the ice delivery truck. More importantly, the details of that analysis are not so anachronistic that you get distracted by them.

I could write about how ChatGPT or Claude or Gemini or whatever else can’t give you an accurate count of the countries in Africa that have a “K” in their names. But then, within… an hour? Someone at one of these companies could push an update that renders my entire analysis irrelevant. And it would be almost categorically worthless, because describing how the system works or doesn’t work is the currency that I’m offering. If I don’t have the facts, or if I’m describing a system that doesn’t exist and never existed long enough for people to even remember it, then it’s perhaps even worse than the story about post office tax return shenanigans, or a refrigerator and the ice block delivery man. At least some people remember that shit.

All this is to say:

I feel like AIs are themselves a kind of senseless, Kafkaesque nightmare, where rules exist for no intrinsic reason except perhaps to justify the system itself and the power we give to that system; and it is that power that makes the rules have any bearing on reality, through oppressive and coercive force - not adherence by those rules to the lived reality of… anyone.

And also: AIs, or more accurately the people and companies putting AI into everything… are making the world an increasingly senseless, nightmarish, frustrating place. They’re not the only thing making the world senseless and nightmarish, but they’re pulling their weight.

And finally: the idea of settling into a career where I document and speculate about little particular details of these senseless, arbitrary, unhinged-from-reality systems… just feels hopelessly depressing. Like… I don’t want a career in “AI Kremlinology”.

I have no idea where or how this piece was supposed to end. I actually do have some thoughts that I’d love to share, depending on how pitches and proposals go. One of them is… I wouldn’t say optimistic, because look at me. But I would say that it has a target on the horizon that I advocate working towards, rather than a spirit looming over me, haunting and hectoring me to keep running away.

If you have something to say about this, contact me