This list might not be comprehensive.
The promise AI’s proponents have made for decades is one in which our needs are predicted, anticipated, and met, often before we even realize it. Instead, algorithmic systems, particularly AIs trained on large datasets and deployed at massive scale, seem to keep making the wrong decisions, causing harm and rewarding absurd outcomes. Attempts to make sense of why AIs make wrong calls in the moment explain individual instances of error, but how the environment surrounding these systems precipitates those instances remains murky. This paper draws from anthropological work on bureaucracies, states, and power, translating these ideas into a theory describing the structural tendency of powerful algorithmic systems to cause tremendous harm. I show how administrative models and projections of the world create marginalization, just as algorithmic models cause representational and allocative harm. The paper concludes with a recommendation for avoiding the absurdity algorithmic systems produce: deny them power.
Links: paper (pdf)
This BibTeX entry will change when the ACM generates a page and snippet for this publication. Please use this incomplete snippet tentatively until that update is available (~May 2021).
Recently, Apple and Google discussed developing and distributing a digital contact-tracing system that will inform people when they’ve been exposed to someone who has contracted Covid-19, and that will notify the people you’ve been near if you later test positive yourself. Apple has since deployed a beta of iOS 13 that exposes the first parts of this system to developers and users. At the time of this writing, in late April and early May, we’re desperate for information and weary from not knowing who has caught Covid-19, who’s still vulnerable, who gets it worse and why, or even how to treat it. This proposal from Apple and Google promises information we can finally dig into. Unfortunately, this system of digital tracing isn’t going to work, and we need to stop the plan before it gets off the ground.
Errors and biases are earning algorithms increasingly malignant reputations in society. A central challenge is that algorithms must bridge the gap between high-level policy and on-the-ground decisions, making inferences in novel situations where the policy or training data do not readily apply. In this paper, we draw on the theory of street-level bureaucracies: how human bureaucrats such as police officers and judges interpret policy to make on-the-ground decisions. We present, by analogy, a theory of street-level algorithms: the algorithms that bridge the gaps between policy and decisions about people in a socio-technical system. We argue that unlike street-level bureaucrats, who reflexively refine their decision criteria as they reason through a novel situation, street-level algorithms at best refine their criteria only after the decision is made. This loop-and-a-half delay results in illogical decisions when handling new or extenuating circumstances. This theory suggests designs for street-level algorithms that draw on historical design patterns for street-level bureaucracies, including mechanisms for self-policing and recourse in the case of error.
The “gig economy” has transformed the ways in which people work, but in many ways these markets stifle workers’ growth and the autonomy and protections they have come to expect. We explored the viability of a “worker-centric peer economy” — a system in which workers benefit as well as consumers — and conducted ethnographic fieldwork across fields ranging from domestic labor to home health care. We identified seven facets that system designers ought to consider when designing a labor market for “gig workers”: providing constructive feedback, assigning work fairly, managing customer expectations, protecting vulnerable workers, reconciling worker identities, assessing worker qualifications, and communicating worker quality. We discuss these considerations and provide guidance toward the design of a mutually beneficial market for gig workers.
Links: paper (pdf)
The internet is empowering the rise of crowd work, gig work, and other forms of on-demand labor. A large and growing body of scholarship has attempted to predict the socio-technical outcomes of this shift, especially addressing three questions: (1) What are the complexity limits of on-demand work? (2) How far can work be decomposed into smaller microtasks? (3) What will work and the place of work look like for workers? In this paper, we look to the historical scholarship on piecework — a similar trend of work decomposition, distribution, and payment that was popular at the turn of the 20th century — to understand how these questions might play out with modern on-demand work. We identify the mechanisms that enabled and limited piecework historically, and identify whether on-demand work faces the same pitfalls or might differentiate itself. This approach introduces theoretical grounding that can help address some of the most persistent questions in crowd work, and suggests design interventions that learn from history rather than repeat it.
By lowering the costs of communication, the web promises to enable distributed collectives to act around shared issues. However, many collective action efforts never succeed: while the web’s affordances make it easy to gather, these same decentralizing characteristics impede any focus towards action. In this paper, we study challenges to collective action efforts through the lens of online labor by engaging with Amazon Mechanical Turk workers. Through a year of ethnographic fieldwork, we sought to understand online workers’ unique barriers to collective action. We then created Dynamo, a platform to support the Mechanical Turk community in forming publics around issues and then mobilizing. We found that collective action publics tread a precariously narrow path between the twin perils of stalling and friction, balancing with each step between losing momentum and flaring into acrimony. However, specially structured labor to maintain efforts’ forward motion can help such publics take action.
The term “Quantified Self” refers to people who track, measure, and analyze qualitative experiences using quantitative means. While various endeavors to monitor and track the self have existed throughout history, “Quantified Self” in its modern context refers to a form of automated self-tracking that has emerged in the last decade as a recognizable subculture defined by shared practices and worldviews. Many of the metrics involved are defined by affordances created, and indeed popularized, by recent wearable self-tracking technologies, rather than by the areas of interest held by the community itself. This ethnographic study surveys the Quantified Self culture to investigate what motivates “Quantified Selfers” to track and measure their lives. From a literature review spanning historical and contemporary sources, as well as participant observation, this research makes three findings: first, that lifelogging and self-tracking, deeply interconnected with one another, owe more to historical cultures than previously imagined; second, that self-quantification provides introspective reflection and insight on the self; third, that incongruence between the Quantified Self culture and mainstream culture on numerous issues problematizes the adoption of self-quantification technologies by those in the mainstream.