07 December 2020

On Google And Intellectual Freedom

There’s a lot to unpack about Jeff Dean firing Timnit Gebru after what appears to have been a substantial amount of time spent trying to find excuses to push her out of the company, and failing to do so because she was, very annoyingly, very good at her job, managing a team that seemed to like Dr. Gebru a lot and that consistently published important work. I’ve never worked at Google and I can’t comment on how awesome her team was internally, but I can unpack part of what’s shaken the foundation for me with regard to a point I made on Twitter a while ago - that I no longer know how to situate the work I read that comes out of Google’s Research arm, and specifically Google Brain. I don’t even think I can come up with a model for how Jeff Dean probably wants me to read the work that comes out of Brain, which is my way of saying that I’m completely lost.

There are two threads that I’m going to try to pull together here. One is that it’s hard to overstate the consequences of Brain’s actions. I’ll enumerate a few, but these will just be the ones that came to mind for me immediately. The second thread is more of a response to the people on Twitter who have been saying that of course corporations are cynical and business-minded, and that we should have been taking the work of industry researchers with this kind of caution all along. The short version of that second thread is “I’ve been concerned about how we relate to industry research, and especially funding, for a while; this is meaningfully more toxic”. Look for that in another post. This is exhausting.

How should Google employees read this, Jeff?

Okay, let’s talk about the first part of this, in part because I want to quickly unload all of the anxieties I’m feeling right now. We need to say at the outset that the paper focused on the environmental and ethical dimensions of the harms AI does in the world. That’s a bit of an oversimplification, but I’m going to need to come back to keywords later, so let’s remember that this submission was engaging with the environmental and ethical/justice aspects of AI.

I don’t know how people at Brain are supposed to do their work without fear of finding out that Jeff or someone at Google objects to their work in some unknown way, and that their pushing to get the paper published will only put them out past a cliff. Jeff claimed that the submission insufficiently engaged with relevant scholarship, but with more than 120 references the paper sits at the very high end for CS/engineering papers. My piecework paper at CHI had about 160 references, and that was a substantial lead over most other papers that year. Papers that cite more than 100 other works are rare.

Does that number of references mean with certainty that they cited enough relevant work? No. But at the very least it calls for further explanation - something like “you did cite a lot of work, but almost none of it was from xyz field” - because at face value this claim is preposterous. If I got a ticket for riding a bicycle at 70 miles per hour in a 40mph zone, the cop would probably at least write “downhill” in parentheses for the judge’s benefit, to explain something that sounds too absurd to believe without further explanation.

But beyond that, this is such a peculiarly trivial thing to complain about, and so clearly the kind of thing you would expect a reviewer to note only in passing in their review of the paper, that to hear it come from the company, to see it in Jeff’s letter, is disorienting. It’s so bizarre to make a claim on this issue at all, and to make this claim in particular, that it demands an explanation, and yet Jeff offers little to substantiate a claim that he must know is absurd.

It’s such a weird flex that Jeff would make a claim that we all simply can’t believe, that he must know we can’t believe, and that in any kind of sane world would diminish his credibility in a real way. But it’s a flex exactly because he’s working with the bureaucratic, apparently Kafkaesque, power that Google projects. We’ve moved beyond whether his claim describes reality in a compelling way, or even in a plausible way.

So I don’t know how people at Google can do their work without this looming threat. It’s not just the thought that there are things they can’t discuss or pursue; it’s that the boundaries are so unclear, and the explanations can be so detached from anyone’s reality, that it’s hard for me to imagine what anyone doing challenging work should do.

How should we read this, Jeff?

I also don’t know how Jeff wants people in the broader research community to read this. I do critical work and sometimes engage with or build on the work of scholars who are affiliated with Google. Up until now, I’ve assumed that there are cultural and institutional biases that motivate people to be in certain places - for instance, I wouldn’t be surprised to find that people who want to build systems to address problems will join CS programs, whereas people who want to draft legislation to address problems might join a more policy-minded program - but I can contextualize someone having a worldview that would make them amenable to working at Google, see how that would motivate a softer critique of AI, and go from there.

That kind of generic caveat can’t possibly suffice for Google now. It’s not merely that they’re a company with a financial stake in AI (and that their financial interests are the main drivers for their work). It’s not just that people affiliated with Google may dispositionally regard AI as something that starts out neutral or even good, rather than something bad, or that this disposition might start the ball rolling toward fixing and improving biased AIs rather than dismantling them. Now I have to deal with the fact that Google had one of the leading researchers in AI - especially on justice, ethics, and environmental justice - and fired her for reasons surrounding a paper that engaged with exactly those subjects. Reasons that I can’t accept at face value, because they’re too absurd.

I can’t even begin to imagine what interference people at Google have experienced in the past, are experiencing now, or will experience going forward. I can’t be confident that any radical recommendation a paper makes hasn’t been carefully pruned by someone at Google who may have actively threatened to “resignate” the author if they didn’t go along with it.

I don’t know how Jeff wants me to read papers that come out of Brain, and I don’t know how I can expect to cite a paper that comes out of Brain in 2021 or later unless I take a substantial amount of time and energy to discuss the circumstances surrounding that paper in my own paper: that these people worked at Google at the time, and that just a year earlier Google had fired one of their top researchers for dubious reasons that appear to be related to a paper critiquing AI. Over and above the caveats I’d already make to point out that citations [1] & [2] & [3] are affiliated with industry and therefore have particular motives, I’ll now have to specifically unpack the possibility that this toxic atmosphere at Google stifled the work.

And this is aside from the question of reviewing Google-affiliated papers. If a submission at CHI makes a recommendation about AI that touches on environmental justice, I think the authors of that submission will have to be a little reflexive, discussing some of the culture surrounding the recommendations they make, and acknowledging that any part of their recommendation that doesn’t go far enough might be perceived as a corporate agenda rather than a conclusion that’s been honestly earned. Anonymous review was a mess before, but I don’t know how this works at all. The only relief for me personally is that a movement to boycott reviewing Google papers is picking up steam, because if I were reviewing a submission on AI and ethics written by a Google employee, I’d need to know that context to review it fairly. I have no idea how these complications would unfold, and I think it’ll depend on every single paper in a way that problematizes the whole review process (and certainly anonymous review).

At this point if you’re exhausted, welcome to my last few days. I’m at the end of my rope trying to figure out how I can cite the work of critical scholars I have tremendous respect for when I literally don’t have enough pages to pause, every time I cite them, to tell readers to take the work with a grain of salt because of all of these events that almost certainly affected it. And where usually I could say that I can imagine how the other person wants me to engage with their work, I have no clue how Jeff expects me to cite the work that comes out of Google Brain going forward, except with massive caveats.

If you have something to say about this post, email me or tweet at me.