On techno-optimism
04 May 2026

I’ve got a longer draft in the works, but I don’t think that’s going to become a blog post, so I figure it might be useful to publish something here and solicit feedback, or just plant a flag that this was a thing I did. Several weeks ago I gave a talk at Wayne State University titled “the uncanny pessimism of techno-optimism”. In this post I’m going to breeze through some of the stuff I talked about and give a sense of what’ll be coming. Eventually.

The gist is that techno-optimism, especially what we see in this era of AI’s ascent, is neither particularly “techno” nor “optimistic”, but something else entirely. In the talk I picked apart some issues with treating “AI” as a purely technological project (stuff I’ve written about elsewhere), and then I talked about how techno-optimism, taken on its own terms, seems to foreclose on our self-determination - that is, our ability to shape our own futures and to decide what those futures look like and are made of.

We might argue about whether “techno-something” makes sense, and I don’t particularly want to die on that hill, but the something is definitely not “optimism”. What we’re looking at, I’d argue, is techno-fatalism.

This sense of fatalism is both ubiquitous and conspicuous, making it kind of overstimulating to exist in the world today and to be aware of discourses about AI and billionaires’ and tech companies’ plans for our collective future. But I found some clarity and relief in comments Dr. Tressie McMillan Cottom made at the Urban Consulate back in November, when she posited that her audacious vision of the future is one with refusal.

So in my talk I asked my audience to imagine (referencing Ruha Benjamin’s work) what it would require to refuse something, and, more interestingly, what kinds of futures refusal unlocks. You could think of this as what goes into that project, and what comes out of it. I touched on a bunch of examples of groups pushing back against this AI project in a bunch of different ways - with lawsuits, with protests, with moratoriums on data center infrastructure, with sabotage, and so on.

I also got to talk a little about the Stop the Data Center group that I’m connected to via the AI Skeptics Reading Group, and about some of the conspicuously fucked up parts of the UM-LANL data center (as if it’s not enough that the University of Michigan is angling to build a billion-dollar data center with the folks who made the atomic bomb).

I ended my talk by observing that pushing back against this data center proposal doesn’t defeat the overarching project, with its fatalism and proto-fascistic qualities (as Dan McQuillan has written about in Resisting AI, and as other people have observed in the years since that book came onto the scene). We’re not looking to destroy a Flock camera and go home - we’re not even looking to destroy Flock the company and go home. Even if we burn Flock headquarters to the ground, the value proposition of selling surveillance services to cops means we’ll never be done with entrepreneurs looking to do essentially what Flock has done. Maybe they’ll crowdsource the capital investment the way Amazon has done with Ring; maybe they’ll deploy their technology on roving surveillance systems like Waymo has done with cars that can be queried en masse based on where they were at any given moment.

We need to change the conditions that make it profitable to do this stuff. If we stop this UM-LANL military facility, we’ll still have AI companies looking to pick us off by offering palliative solutions to make up for austerity cuts, defunded resources, and shuttered programs.

It seems like we see AI flourishing in certain industries where resources are being hollowed out and where people’s needs and refusal can be discarded - see Palestine, of course; but you can also see it every day in schools, in healthcare, in prisons & policing. It feels as if AI solutions are opportunistic infections or predatory creatures, in that they thrive where human rights and civil society have been weakened.

All this was to say, if we’re going to “destroy AI” in a sense that matters, we need to build counter-institutions that sustain us and make us stronger; we need to support one another, meaningfully, so that exploitative palliative non-solutions like chatbot counselors are incomprehensible to us because they don’t prey on our desperation and our vulnerability, because we have made ourselves collectively less desperate, less vulnerable to these predators.

Scolding people for using chatbots is easy. It’s less easy to put myself in the shoes of someone who has so utterly given up on getting help from anyone that they turn to a chatbot for that help, and to walk through the steps of what it will take to restore their trust in asking people for help - to say nothing of the work we have to do for that help to actually be available to be mobilized.

That sounds awfully hard, and maybe it is. At the end of my notes I admit that yes, this is hard as hell, but if we’re going to work for anything, let’s make it a future defined in terms centered on people, on our liberation. A future of a free Palestine; a future where trans people are not persecuted by transphobic, mold-huffing billionaires; a future where the sick get real care. I’d rather struggle for our liberation and die than live a lifetime of abject surrender.


Anyway, there’s still a lot of revising and annotating that needs to happen before I can share the full talk notes, so I figured it’d be worth sharing these condensed ones somewhere.


If you have something to say about this, contact me