Examining Crowd Work and Gig Work Through The
Historical Lens of Piecework
Ali Alkhatib, Michael S. Bernstein, Margaret Levi
Computer Science Department and CASBS
Stanford University
{ali.alkhatib, msb}@cs.stanford.edu, mle[email protected]
ABSTRACT
The internet is empowering the rise of crowd work, gig work,
and other forms of on–demand labor. A large and grow-
ing body of scholarship has attempted to predict the socio–
technical outcomes of this shift, especially addressing three
questions: 1) What are the complexity limits of on–demand
work?, 2) How far can work be decomposed into smaller mi-
crotasks?, and 3) What will work and the place of work look
like for workers? In this paper, we look to the historical schol-
arship on piecework — a similar trend of work decomposition,
distribution, and payment that was popular at the turn of the
20th century — to understand how these questions might play
out with modern on–demand work. We identify the mech-
anisms that enabled and limited piecework historically, and
identify whether on–demand work faces the same pitfalls or
might differentiate itself. This approach introduces theoretical
grounding that can help address some of the most persistent
questions in crowd work, and suggests design interventions
that learn from history rather than repeat it.
ACM Classification Keywords
H.5.3. Information Interfaces and Presentation (e.g. HCI):
Group and Organization Interfaces
Author Keywords
Crowd work; gig work; on–demand work; piecework
INTRODUCTION
The past decade has seen a flourishing of computationally–
mediated labor. A framing of work into modular, pre–defined
components enables computational hiring and management
of workers at scale [68, 17, 83]. In this regime, distributed
workers engage in work whenever their schedules allow, often
with little to no awareness of the broader context of the work,
and often with fleeting identities and associations [104, 94].
For years, such labor was limited to information work such as
data annotation and surveys [82, 161, 166, 51, 119]. However,
physically embodied work such as driving and cleaning has
now spawned multiple online labor markets as well [94, 3, 1,
2]. In this paper we will use the term on–demand labor to
capture this pair of related phenomena: first, crowd work [83],
on platforms such as Amazon Mechanical Turk (AMT) and
other sites of (predominantly) information work; and second,
gig work [48, 118], often on platforms for one–off jobs, like
driving, courier services, and administrative support.
The realization that complex goals can be accomplished by
directing crowds of workers has spurred firms to explore sites
of labor such as AMT to find the limits of this distributed,
on–demand workforce. Researchers have also taken to the
space in earnest, developing systems that enable new forms of
production (e.g. [14, 18, 117]) and pursuing social scientific
inquiry into the workers on these platforms [128, 138]. This
research has identified the sociality of gig work [54], as well
as the frustration and disenfranchisement that these systems
effect [72, 104, 106]. Others have focused on the responses
to this frustration, reflecting on the resistance that workers
express against digitally–mediated labor markets [94, 133].
This body of research has broadly worked toward the answer to
one central question: What does the future hold for on–demand
work and those who do it? Researchers have offered insights
on this question along three major threads: First, what are the
complexity limits of on–demand work: specifically, how
complex are the goals that crowd work can accomplish, and
what kinds of industries may eventually utilize it [142, 79, 165,
164, 110, 59]? Second, how far can work be decomposed into
smaller microtasks [27, 100, 92, 29, 111]? And third, what
will work and the place of work look like for workers [72, 73,
54, 106]?
This research has largely sought to answer these questions by
examining extant on–demand work phenomena. So far, it has
not offered an ontology to describe or understand the develop-
ments in worker processes that researchers have developed, or
the emergent phenomena in social environments; nor has any
research gone so far as to anticipate future developments.
Piecework as a lens to understand on–demand work
In this paper, we offer a framing for on–demand work as a
contemporary instantiation of piecework, a work and payment
structure which breaks tasks down into discrete jobs, wherein
payment is made for output, rather than for time. We are not
the first to relate on–demand work to piecework: in 2013, for
example, Kittur et al. referenced crowd work as piecework
briefly as a loose analogy [83].

Table 1. Piecework and on–demand work have both wrestled with
questions of how complex work can get, how finely–sliced tasks can
become, and what the workplace will look like for workers. For each
question, we connect an observation from piecework's history to the
mechanism that determined its outcome, in order to derive an
implication for modern on–demand work.

Complexity
  Observation: Growth from simple tasks such as sewing to more
  complex composite outcomes on the assembly line floor.
  Mechanism: Complexity was limited to tasks that could be easily
  measured and evaluated for payment by the piece.
  Implication: Measurement and verification will remain persistent
  challenges that will limit complexity unless solved.

Decomposition
  Observation: Work began sliced such that non–experts could perform
  each piece, but over time was sliced such that non–overlapping
  expertise was required for each step.
  Mechanism: Scientific Management and Taylorism informed and drove
  decomposition by measuring and facilitating the optimization of
  smaller tasks.
  Implication: After scientific management matured, piecework began
  specialized training to create experts in narrow tasks. A similar
  shift seems feasible with on–demand work.

Workers
  Observation: Firms antagonized and exploited workers, leading
  workers to support one another independently, ultimately resulting
  in strong advocacy groups counterbalancing firms.
  Mechanism: The features of piecework (independence and transience)
  were both the fulcrum managers used to exploit workers as well as
  the focal point around which workers bonded.
  Implication: While worker frustrations are similar, the
  decentralized nature of on–demand work will limit collective action
  until there exist platforms to coordinate and exert pressure.

Our goal in this paper is to
inspect the relationship much more closely. But more than
this, the framing of on–demand labor as a reinstantiation of
piecework gives us years of historical material to help us make
sense of this new form of work, and allows us to study on–
demand work through a theoretical lens that is informed by
years of rigorous, empirical research.
More concretely, by positioning on–demand labor as an instan-
tiation or even a continuation of piecework, we can make sense
of past events as part of a much larger series of interrelated
phenomena (Table 1). We can reflect on differences in the
features that impacted piecework historically and on–demand
work today. And, to some extent, we can use these differences
to offer some predictions of what on–demand work researchers
and workers themselves might expect to see on the horizon.
For example, we will draw on piecework’s scholarship on task
decomposition, which was historically limited by shortcom-
ings in measurement and instrumentation, and leverage that
insight to suggest how modern technology affects this mecha-
nism in on–demand work: namely, enabling precise tracking
and measurement via algorithms and software.
We organize this paper as follows: first, we review the defi-
nition and history of piecework to make clear the analogy to
on–demand work; and second, we examine the three major
research questions above using the lens of piecework. For
each question, we will contrast the perspective the piecework
scholarship offers with on–demand labor's body of research,
identify similarities and differences, and then offer predictions
for on–demand work.
A REVIEW OF PIECEWORK
The HCI community has used the term “piecework” to describe
myriad instantiations of on–demand labor, but researchers
have generally made this allusion in passing. Since we trace
a much stronger parallel between (historical) piecework and
(contemporary) on–demand work, a more comprehensive back-
ground on piecework will be useful. Specifically, first, we’ll
define “piecework” as researchers in its field understand it;
and second, we’ll trace the rise and fall of piecework at a high
level, identifying key figures and ideas during this time. This
section is not intended to be comprehensive: instead, it sets
up the scaffolding necessary for our later investigations of
on–demand work’s three questions: complexity limits, task
decomposition, and worker relationships.
What is piecework? A primer and timeline
Aligning on–demand work with piecework requires an un-
derstanding of what piecework is. While it has had several
definitions over the years, we can trace a constellation of char-
acteristics that recur throughout the literature. We’ll follow
this research, collecting descriptions, examples, and defini-
tions, to develop a sense of piecework.
Piecework’s history traces back further than most would likely
expect. Grier describes the process astronomers adopted of
hiring teenage men to calculate equations in order to better
predict the trajectories of various celestial bodies in the night
sky [55]. In the first half of the 19th century, George Airy
was perhaps the first to rigorously put piecework–style de-
composition to work; by breaking complex calculations into
constituent parts, and training young men to solve simple al-
gebraic problems, Airy could distribute work to many more
people than could otherwise complete the full calculations.
Piecework began in the intellectual domain of astronomical
calculations and projections, but it found its foothold in manual
labor. Piecework took hold in farm work [120], in textiles [12,
123], on railroads [22], and elsewhere in manufacturing [134]
by the mid–19th century. By 1847 we find a concise definition
of piecework in Raynbird's essay on the subject, particularly
driven toward encapsulating the manual labor of farm work.
He does this by contrasting two paradigms: “the chief differ-
ence lies between the day–labourer, who receives a certain
sum of money . . . for his day's work, and the task–labourer,
whose earnings depend on the quantity of work done” [120].
Chadwick offers a number of illustrative examples: “pay-
ment is made for each hectare which is pronounced to be well
ploughed . . . for each living foal got from a mare; . . . for each
living calf got” [28]. This framing gives us an intuitive sense
of piecework; “payment for results,” as he calls it, is not only
common in practice, but well–studied in labor economics [46,
154, 155, 64].
It’s worth acknowledging that “this distinction [between piece–
rates and time–rates] was not completely clear–cut” [63]. Em-
ployers adopted piece–rates in some aspects and time–rates in
others. The Rowan premium system, for example, essentially
paid workers a base rate for time plus additional pay depend-
ing on output [129]. As Rowan’s premium system guaranteed
an hourly rate regardless of the worker’s productive output as
well as additional compensation tied to performance, work-
ers were in some senses “task–labourers”, but in other senses
“day–labourers”. This was just one of several alternatives to
strict time– and piece–rate remuneration paradigms.
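To make the mixed model concrete, here is a minimal sketch of the bonus formula commonly attributed to the Rowan plan; the function and variable names are ours, not drawn from [129].

```python
def rowan_pay(hourly_rate: float, time_allowed: float, time_taken: float) -> float:
    """Pay under the Rowan premium plan: a guaranteed time wage plus a
    bonus proportional to the fraction of the allowed time saved."""
    base = hourly_rate * time_taken                    # the "day-labourer" guarantee
    time_saved = max(0.0, time_allowed - time_taken)   # no penalty for running over
    bonus = (time_saved / time_allowed) * time_taken * hourly_rate
    return base + bonus

# A worker paid 1 shilling/hour who finishes a 10-hour job in 6 hours
# earns 6 + (4/10) * 6 = 8.4 shillings.
print(rowan_pay(1.0, 10.0, 6.0))  # 8.4
```

Because the bonus fraction is always below one, hourly earnings under this formulation can never quite double the base rate, preserving a predictable wage floor while still rewarding output.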
In the late years of the 19th century, Taylor — a mechanical
engineer with an interest in work efficiency — began studying
and formalizing the decomposition, tracking, and management
of tasks [144]. In 1911 he published The principles of scien-
tific management, concretizing an idea that had nebulously
been forming, and which he had been working out himself,
for years [145]. Scientific Management (and Fordism) thrust
piecework into higher gear, especially as mass manufacturing
and a depleted wartime workforce forced industry to find new
ways to eke out more production capacity.
It may be worth thinking about piecework through the lens of
its emergent properties. Raynbird argues
for the merits of piecework, pointing out that “piece work
holds out to the labourer an increase of wages as a reward for
his skill and exertion . . . he knows that all depends on his own
diligence and perseverance . . . [and] so long as he performs
his work to the satisfaction of his master, he is not under that
control to which the day–labourer is always subject”. The
argument that “task–labourers” enjoy freedom from control
crops up in Raynbird’s and later Rowan’s works [120, 129].
We see this sense of independence in myriad times, locales,
and industries. Satre offers a look into the lives and culture
of “match–girls”, teenage women who assembled matchsticks
in the late 19th century in London. Of interest was their repu-
tation “. . . for generosity, independence, and protectiveness,
but also for brashness, irregularity, low morality, and little
education” [134]. Hagan and Fisher document piecework
from 1850 through 1930 in Australia, finding similar notions
of independence and autonomy among piecework newspaper
compositors: “If a piece–work compositor . . . decided that he
did not want to work on a particular day or night, the man-
agement recognised his right to put a ‘substitute’ or ‘grass’
compositor in his place” [58]. This sense of independence and
autonomy appears to be a common thread of piecework.
Since workers could now choose their own schedule and style,
a discussion arose on how best to manage pieceworkers. This
conversation came to regard workers antagonistically [130], a
far cry from the earlier rhetoric on piecework, which promised
that pieceworkers would gladly work diligently and for as long
as possible, as incentive–based pay rewarded exactly, and thus
aligned the goals of both managers and workers [35].
Piecework opened the door for people who previously couldn’t
participate in the labor market to do so, and to acquire job
skills incrementally. During World War II, women received
training in narrow subsets of more comprehensive jobs, en-
abling work in capacities similar to conventional (male) work-
ers [63]. Women previously had virtually no opportunities
to engage in engineering and metalworking apprenticeships
as men did; now, they could be trained quickly on narrowly
scoped tasks, demonstrate proficiency, and become experts.
“Rosie the Riveter”, an icon of 20th century America who
represented empowerment and opportunity for women [66],
would have been a pieceworker [38].
Piecework’s popularity in the United States and Europe fell
almost as quickly as it had climbed. Between 1938 and 1942,
the proportion of metal workers under piecework systems had
climbed steeply from 11% to 60% [61]. By 1961, Carlson
finds, the proportion dropped to 8% [26]. He notes that, from
1973 to 1980, the holdouts of piecework — where more than
50% worked under incentive wage plans — were principally in
clothes–making (e.g. hosiery, footwear, and garments). Hart
and Roberts oer a number of explanations for the sudden
demise of piecework. The salient suggestions include: 1) the
emergence of more effective, more nuanced incentive models
— rewarding teams for complex achievements, for instance;
2) the shifting of piecework industries such as manufacturing
and textiles to other countries; and 3) the quality of “multidi-
mensional” work, which was too difficult to evaluate [63].
In summary, piecework: 1) paid workers for quantity of work
done, rather than time spent, but occasionally mixed the two
payment models; 2) afforded workers a sense of freedom and
independence; and 3) structured tasks in such a way as to
facilitate more narrowly scoped training and education.
Viewing on–demand work as a modern instantiation of piece-
work is relatively straightforward by this definition. First,
platforms such as Amazon Mechanical Turk (AMT), Uber,
Upwork, and TaskRabbit pay by the task, though some mix
systems in similar ways to the Rowan system’s combination
of piece rate and time rate pay. Second, workers are attracted
to these platforms by the freedom they offer to pick the time
and place of work [104, 21]. Third, system developers on
platforms such as Mechanical Turk typically assume no professional skills in
transcription or other areas, and attempt to build that exper-
tise into the workflow [112, 14]. Given this alignment, many
of the same historical properties of piecework will apply to
on–demand work as well.
Case studies in piecework
Throughout the paper, we will return to four case studies to
frame our analyses: Airy’s use of human computers; domestic
and farm workers; the “match–girls” strike; and industrial and
assembly–line workers. In introducing these cases at a high
level, we’ll trace the history of piecework while also framing
the later analysis of the leading research threads we named
earlier: complexity, decomposition, and relationships.
Airy’s computers
In the 19th century, the calculation of celestial bodies had
become a competitive field, and Airy needed to compute tables
that would allow sailors to locate themselves by starlight from
sea. This work ostensibly called for educated people who
comprehensively understood mathematics. Airy realized that
he could break the tasks down and delegate the constituent
parts to human computers, or people who could compute
basic functions. These human computers “. . . possessed the
basic skills of mathematics, including ‘Arithmetic, the use of
Logarithms, and Elementary Algebra’ ” [55]. As a result, many
of Airy’s computers had relatively rudimentary educations
compared to those that typically worked in the calculation of
solar tables. Airy distributed tasks by mail, allowing work
to be completed by a somewhat geographically distributed
workforce, and paid for each piece of work completed.
The human computers captured several aspects of task decom-
position that would become common. First, the work was
designed such that it could be done independently and without
collaboration. Second, the work was designed so that interme-
diate results could be quickly verified: Airy would have two
workers each do the calculation, and another person compare
their answers. Third, Airy identified ways to decompose the
large task into narrowly–trainable subtasks.
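As a minimal illustration of the second property, the sketch below mimics the duplicate-and-compare scheme; the function names and the logarithm example are hypothetical stand-ins, not a reconstruction of Airy's actual procedure [55].

```python
import math

def verify_by_duplication(task_input, compute_a, compute_b):
    """Airy-style quality control: route the same calculation to two
    independent workers and accept the answer only if they agree."""
    a, b = compute_a(task_input), compute_b(task_input)
    if a == b:
        return a  # agreement: accept the piece and pay for it
    # disagreement: the piece is simply redone, not debugged
    raise ValueError(f"mismatch on {task_input!r}: {a} vs {b}")

# Two hypothetical "computers" evaluating the same logarithm entry.
entry = verify_by_duplication(
    2.0,
    lambda x: round(math.log10(x), 4),
    lambda x: round(math.log10(x), 4),
)
print(entry)  # 0.301
```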
Some of Airy’s policies were more controversial, for exam-
ple, firing computers once they reached age 23. This practice
ensured two outcomes that disfavored workers. First, it drasti-
cally reduced professional advancement, as workers’ careers
ended quickly, and without conventional backgrounds in math-
ematics they later struggled to find work for which their expe-
rience was meaningful. And second, it limited workers’ ability
to organize by ensuring that workers were in little communica-
tion with each other, and that they had almost no opportunity
to recognize their circumstances and to coordinate.
Domestic and farmhand labor
The application of piecework to farm work in the late 19th
century, and later to the manufacturing of small goods such as
garments and matches at the turn of the 20th century, marked
a formative period for piecework as we would come to
know it. Piecework regimes in farms and in homes engaged
workers in assembling clothing. Textile manufacturers found
that they could deliver fabric to people at their homes, asking
them to sew together clothing. The manufacturers would later
return to retrieve the finished garments, paying these workers
for each piece of clothing completed. Farm work applied the
idea of piecework by paying workers for tasks like picking
bushels of fruit or bringing animals to birth [28].
Workers could, in principle, assemble as much or as little
clothing as they wanted; the reality was more grim, as Riis
documented in How the Other Half Lives in 1901 [123]. He
found that workers endured bleak living conditions and worked
long hours attempting to scrape together a living.
The match–girls’ strike
Match–makers were some of the first workers in mass man-
ufacturing to successfully rally for political causes. At the
end of the 19th century, manufacturers had begun to employ
teenage women to assemble matchsticks in factories. These
women rallied first in the form of a march on parliament in
1871 to protest a proposed tax, and later (more famously) in
what was later called “the match–girls strike of 1888” [134].
This later strike was sparked by the arbitrary docking of a
worker's pay, but much deeper resentment had been simmering for
years. Match–girls were already frustrated with the arbitrari-
ness of management, poor working conditions, and having to
work with hazardous materials such as white phosphorus, the
improper handling of which caused serious, painful, disfigur-
ing medical conditions in the bones and ultimately death.
Regardless of what prompted it, the lasting impact of the
match–girls strike of 1888 was profound. This was one of
the earliest and most famous successful worker strikes, and
perhaps the beginning of “militant trade unionism” [134]. As
Webb and Webb described, “the match–girls’ victory turned
a new leaf in Trade Union annals”: in the 30 years after the
match–girls strike, the Trade Union Movement enrollment
grew from 20% of eligible workers to over 60% [153].
Match–girls were some of the earliest to have formed a trade
union, according to Booth’s account in 1903. Satre noted
that match–girls “. . . pooled their resources to purchase their
plumes and clothes . . . and expressed their solidarity through
small [and major] strikes” [20]. But they were also, as Satre
confesses, known for “brashness, irregularity, low morality,
and little education” [134]. These were workers who treasured
their independence, but also fiercely protected one another.
“Brashness” may have detracted from their public image, but
almost undoubtedly contributed to their sense of solidarity,
making their propensity to act against such unfair treatment
and poor conditions understandable and maybe predictable.
Industrial workers
Piecework might be most familiar in the context of indus-
trial and factory work, which largely defined manufacturing
through the 20th century. Before the factory assembly line
arose, however, railway companies adopted piecework regimes
at the turn of the 20th century. What followed was a flourish-
ing of management practices, as railway companies worked to
find effective ways to motivate and evaluate this skilled work-
force of engineers. Graves takes up a case study of the Santa
Fe Railway, finding that they employed “efficiency experts” to
develop a “standard time” to determine pay for each task at the
company, informed by “thousands of individual operations”;
Graves goes on to list some of the roles required to facilitate
piecework in the early 20th century — among them, “piece-
work clerks, inspectors, and ‘experts’ ” [52]. This oversight,
while controversial (especially among workers [75]), paved
the way for piecework to grow substantially.
The 1930s represented a boom for piecework on an unprece-
dented scale, especially among engineering and metalworking
industries. Hart and Roberts characterize the 1930s, and
more broadly the first half of the 20th century, as the “hey-
day” of piecework. They attribute this to the shortage of
male workers, who would have gone through a conventional
apprenticeship process affording them more comprehensive
knowledge of the total scope of work.
Piecework found its way into the war effort during World
War II. With the vast majority of men drafted into service, fac-
tories found themselves turning to a mostly female workforce
that had neither the formal training nor years of experience
that men would have had from apprenticeships. Rather than
attempting to train this new labor force in every aspect of in-
dustrial work, these women were trained for individual tasks
and correspondingly assigned to that or a similar task.
RESEARCH QUESTIONS
Research in crowdsourcing has spent the better part of a decade
exploring how to grow its limits. This has largely involved
iteratively identifying barriers to high–quality, complex work,
then overcoming them through novel designs of systems, work-
flows, and processes (e.g. [14, 121, 84]). The question has
become whether there are limits to on–demand work, and if
so, what factors determine them. To this question, a number
of contributions to the field have pressed for answers.
The exploration of on–demand labor’s potential and limits has
principally navigated three dimensions: First, what are the
complexity limits of on–demand work? Second, how far can
work be decomposed into smaller microtasks? And third, what
will work and the place of work look like for workers? We’ll
explore these aspects of on–demand labor by connecting to
corresponding piecework literature and comparing its lessons
to the current state of on–demand labor.
Complexity Limits of On–Demand Work
A key question to the future of on–demand work is what pre-
cisely will become part of this economy. Paid crowdsourcing
began with simple microtasks on platforms such as Amazon
Mechanical Turk, but microtasks are only helpful if they build
up to a larger whole. So, our first question: how complex can
the work outcomes from on–demand work be?
The perspective of on–demand work
Kittur et al. were among the first to ask whether crowdsourc-
ing could be used for more than parallelizing tasks [84]. Their
work showed that it could, with proof–of–concept crowdsourc-
ing of encyclopedia articles and news summaries: tasks
which could be verified or repeated with reasonable expecta-
tions of similar results. Seeking to raise the complexity ceiling,
researchers have since created yet more applications and tech-
niques, including conversational assistants [90], medical data
interpreters [90], and idea generation [163, 164].
To achieve complex work, this body of research has often
applied ideas from Computer Science to design new workflows.
System designers leverage techniques such as MapReduce [84]
and sequence alignment algorithms [87], arranging humans
as computational black boxes. This approach has proven a
compelling one because it leverages the inherent advantages of
scale, automation, and programmability that software affords.
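To make "humans as computational black boxes" concrete, the sketch below outlines a partition/map/reduce workflow in the spirit of CrowdForge [84]; the post_task callable is a hypothetical stand-in for a platform API, not a real library call.

```python
from typing import Callable

def crowd_write_article(topic: str, post_task: Callable[[str], str]) -> str:
    """Partition/map/reduce over human workers, in the spirit of
    CrowdForge [84]. `post_task` is a hypothetical stand-in for a
    platform call that publishes one microtask and returns one answer."""
    # Partition: a single worker proposes section headings.
    outline = post_task(f"List section headings for a short article on {topic}.")
    sections = [s for s in outline.splitlines() if s.strip()]
    # Map: independent workers draft each section with no shared context.
    drafts = [post_task(f"Write one paragraph for the section: {s}")
              for s in sections]
    # Reduce: a final worker stitches the independent drafts together.
    return post_task("Merge these paragraphs into a coherent article:\n\n"
                     + "\n\n".join(drafts))
```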
It is now clear that this computational workflow approach
works with some classes of complex tasks, but the broader
wicked problems largely remain unsolved. As a first example,
idea generation shows promise [163, 164], but there is as yet
no general crowdsourced solution for the broader goal of in-
vention and innovation [49]. Second, focused writing tasks are
now feasible [80, 14, 110, 147, 5], but there is no general solu-
tion to create a cross–domain, high–quality crowd–powered
author. Third, data analysis tasks such as clustering [34], cat-
egorization [10], and outlining [99] are possible, but there is
no general solution for sense–making. It is not yet clear what
insights would be required to enable crowdsourced solutions
for these broader wicked problems.
Restricting attention to non–expert, microtask workers proved
limiting. So, Retelny et al. introduced the idea of crowdsourc-
ing with online paid experts from platforms such as Upwork.
Expert crowdsourcing enables access to a much broader set of
workers, for example designers and programmers. The same
ideas can then be applied to expert “macro–tasks” [32, 57],
enabling the crowdsourcing of goals such as user–centered
design [121], programming [91, 45, 30], and mentorship [142].
However, there remains the open question of how complex the
work outcomes from expert crowds can be.
The perspective of piecework
Piecework’s body of research most squarely addresses com-
plexity in two of the cases we looked at earlier: Airy's human
computers and industrial workers.
Airy’s work on astronomical charts opened the door to greater
task complexity by encoding the intelligence into the process
rather than the people. Airy’s computers had relatively limited
education in mathematics, but by combining simple mathemat-
ical operations, Airy was able to create a complex composite
outcome [55]. Likewise, in Ford’s factories, no individual
could build the entire car, but the process could emergently
produce one.
But when piecework initially entered the American economy,
it was not used for complex work. Without having designed
complex work processes, piecework managers were restricted
to available workers’ skills such as sewing: it was infeasible to
provide new pieceworkers with the comprehensive education
that apprenticeships imparted [63]. So, initially piecework
arose for farm work, and as Raynbird and others discuss,
the practice remained relatively obscure until it blossomed
in the textile industry [120]. Complexity levels remained
low at the turn of the 20th century as piecework saturated
densely populated urban areas such as London and New York
City [123].
Measurement also limited the complexity of piecework: only
tasks that could be measured and priced could be completed
via piecework. Earlier we discussed Graves’s and later
Brown’s analysis of railway workers. They identified task
homogeneity and measurement as key requirements for piece-
work to be successful. However, complex, creative work —
which is inherently heterogeneous and difficult to routinize —
was unsuitable [52].
Brown’s description of “efficiency experts” would corroborate
this: efficiency experts can effectively gauge how long known
tasks should take, but would find themselves overwhelmed
if they attempted to assess creative tasks like scientific re-
search, which can take an arbitrary number of iterations before
proceeding to a subsequent step.
Moreover, piecework was limited to tasks that could be quickly
and accurately evaluated. Hart argues that evaluation limited
piecework’s complexity: at some point, evaluating multidimen-
sional work for quality (rather than for quantity) becomes in-
feasible. In his words, “if the quality of the output is more dif-
ficult to measure than the quantity [. . . ] then a piecework system
is likely to encourage an over–emphasis on quantity . . . and
an under–emphasis on quality” [62]. Complex work, which is
often subjective to evaluate, falls victim to this pitfall.
Comparing the phenomena
The research on piecework tells us that we should expect it to
thrive in industries where the nature of the work is limited in
complexity [22], and become less common as work becomes
more complex. Has computation shifted piecework’s previous
limits of expertise, measurement, and evaluation?
In some ways, yes: technology increases non–experts’ lev-
els of expertise by giving access to information that would
otherwise be unavailable. For example, taxi drivers in Lon-
don endure rigorous training to pass a test known as “The
Knowledge”: a demonstration of the driver’s comprehensive
familiarity with the city’s roads. This test is so challenging that
veteran drivers develop significantly larger regions of the
brain associated with spatial functions such as navigation [101,
102, 140, 141, 160, 159]. In contrast, with on–demand plat-
forms such as Uber, services such as Google Maps and Waze
make it possible for people entirely unfamiliar with a city to op-
erate professionally [139, 65]. Other examples include search
engines enabling information retrieval, and word processors
enabling spelling and grammar checking. By augmenting the
human intellect [43], computing has shifted the complexity of
work that is possible with minimal or no training.
Algorithms have automated some tasks that previously fell to
management. Computational systems now act as “piecework
clerks” [52] to inspect and modify work [72, 106]. However,
these algorithms are less competent than humans at evalu-
ating subjective work, as well as in their ability to exercise
discretion, causing new problems for workers and managers.
Implications for on–demand work
Algorithms are undoubtedly capable of shepherding more
complex work than the linear processes available to Airy and
Ford. However, as work becomes more complex, it becomes
increasingly difficult to codify a process to achieve it [44,
41]. So, while algorithms will increase the complexity ceiling
beyond what was possible previously with piecework, there is
a fundamental limit to how complex such work can become.
Technology’s ability to support human cognition will enable
stronger assumptions about workers’ abilities, increasing the
complexity of on–demand work outcomes. Just as the shift
to expert crowdsourcing increased complexity, so too will
workers with better tools increase the set of tasks possible.
Beyond this, further improvements would most likely come
from replicating the success of narrowly–sliced education for
expert work, which Grier describes for human computation [55]
and Hart and Roberts describe for wartime piecework [63],
and from drastically reformulating macro–tasks given the
constraints of piecework. An argument might be made that MOOCs
and other online education resources provide crowd workers
with the resources that they need, but it remains to be seen
whether that work will be appropriately valued, let alone prop-
erly interpreted by task solicitors [7]. If we can overcome this
obstacle, we might be able to empower more of these workers
to do complex work such as engineering, rather than doom
them to “uneducated” match–girl reputations [134]. However,
many such experts are already available on platforms such as
Upwork, so training may not directly increase the complex-
ity accessible to on–demand work unless it makes common
expertise more broadly available.
Evaluation remains as difficult for crowd work as it was for
the efficiency experts. Reputation systems for crowdsourcing
platforms remain notoriously inflated [67]. Ultimately, many
aspects of assessment remain subjective: whether a logo made
for a client is fantastic or terrible may depend on taste.
So, in the case of complexity, the history of piecework does
not yet offer compelling evidence that on–demand work will
achieve far more complex outcomes than piecework did. Im-
provements in workflows, measurement, and evaluation have
already been made, and it’s not immediately clear that the re-
maining challenges are readily solvable. However, on–demand
work will be far more broadly distributed than piecework his-
torically was — reaching many more tasks and areas of exper-
tise by virtue of the internet.
Decomposing Work
At its core, on–demand work has been enabled by decom-
position of large goals into many small tasks. As such, one
of the central questions in the literature is how finely–sliced
these microtasks can become, and which kinds of tasks are
amenable to decomposition. In this section, we place these
questions in the context of piecework’s Taylorist evolution.
The perspective of on–demand work
Many contributions to the design and engineering of crowd
work consist of creative methods for decomposing goals. Even
when tasks such as writing and editing cannot be reliably per-
formed by individual workers, researchers have demonstrated
that the decomposition of these tasks into workflows can suc-
ceed [84, 14, 147, 110]. These decompositions typically take
the form of workflows, instantiated as algorithmically man-
aged sequences of tasks that resolve interdependencies [17].
Workflows often utilize a first sequence of tasks to identify an
area of focus (e.g., a paragraph topic [84], an error [14], or a
concept [163, 164]) and a second sequence of tasks to execute
work on that area. This decomposition style has been success-
fully applied across many areas, including food labeling [112],
brainstorming [137, 163], and accessibility [90, 87, 88].
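A minimal sketch of this identify-then-execute pattern follows, in the style of a find-and-fix workflow; as before, post_task is a hypothetical platform call, and the prompts are illustrative only.

```python
from typing import Callable, List

def identify_then_execute(paragraphs: List[str],
                          post_task: Callable[[str], str]) -> List[str]:
    """Two-stage decomposition: a first wave of microtasks flags where
    to work; a second wave performs the work itself. `post_task` is the
    same hypothetical platform call as in the earlier sketch."""
    fixed = list(paragraphs)
    for i, paragraph in enumerate(paragraphs):
        # Stage 1 (identify): a cheap yes/no judgment on each fragment.
        verdict = post_task(f"Answer yes or no: does this text contain "
                            f"an error?\n{paragraph}")
        if verdict.strip().lower().startswith("yes"):
            # Stage 2 (execute): a different worker repairs only flagged text.
            fixed[i] = post_task(f"Rewrite this text to fix its error:\n{paragraph}")
    return fixed
```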
If decomposition is key to success in on–demand work, the
question arises: what can, and can’t, be decomposed? More
pointedly, how thinly should work be sliced and subdivided
into smaller and smaller tasks? The general trend has been
that smaller is better, and the microtask paradigm has emerged
as the overwhelming favorite [148, 146]. This work illus-
trates a broader sentiment in both the study and practice of
crowd work, that microtasks should be designed resiliently
against the variability of workers, preventing a single errant
submission from impacting the agenda of the work as a whole,
fully exploiting the abstracted nature of each piece of work
[71, 89, 149]. In this sense, finer decompositions are seen as
more robust — both to interruptions and errors [32] — even if
they incur a fixed time cost. At the extreme, recent work has
demonstrated microtasks that take seconds [150, 23] or even
fractions of a second [85]. However, workers perform better
when similar tasks are strung together [89], or chained and
arranged to maximize the attention threshold of workers [24].
Despite this, we as a community have leaned into the peril of
low–context work, “embracing error” in crowdsourcing [85].
The general lesson has been that the more micro the task,
and the finer the decomposition, the greater the risk that
workers lose context necessary to perform the work well. For
example, workers edit adjacent paragraphs in inconsistent
ways [14, 80], interpret tasks in different ways [76], and exhibit
lower motivation [81] without sufficient context. Research has
sought to ameliorate this issue by designing workflows to help
workers “act with global understanding when each contributor
only has access to local views” [151], typically by automati-
cally or manually generating higher–level representations for
the workers to reflect on [34, 151, 80].
As the additional context necessary to complete a task dimin-
ishes, the invisible labor of finding tasks [104] has arisen as a
major issue. Chilton et al. illustrate the task search challenges
on AMT [33]. Workers seek out good requesters [104] and
then “streak” to perform many tasks of that same type [33].
Researchers have reacted by designing task recommendation
systems (e.g. [36]) and minimizing the amount of time that
people need to spend doing anything other than the work for
which they are paid [25].
The perspective of piecework
Four major stages characterize decomposition in the history of
piecework. The first stage was decomposition of an expert task
such that it could be done by non–experts. This was arguably
the main innovation of Airy’s human computers. Rather than
hire expert computers, Airy identified ways to break down
astronomical calculations into steps that could be completed
with only a basic knowledge of mathematics. Likewise, Brown
argued that piecework arose in industries with homogeneous
tasks and low fixed costs of machinery and training [22].
After decomposing tasks for amateurs, the second major stage
was to apply the same methods to domain experts. Unlike
Airy’s human computers, railway engineers had significant
expertise [22]. As Brown noted, however, it was still possible
to discretize and measure their work. Thus, experts such as
railway engineers became pieceworkers as well.
Third, decomposition led to quantification and scientific man-
agement. What can be modularized can be measured, and what
can be measured can be optimized. With Taylor’s formaliza-
tion of scientific management in Taylorism (and Henry Ford’s
eponymously named Fordism), piecework in the early and mid–
20th century surged, especially in industrial work. Scientific
management promised that the careful measurement of work-
ers would yield higher efficiency and output [145, 97]. While
Brown points out that piecework dramatically advanced the
instrumented measurement of workers, in Taylor’s time highly
instrumented, automatic measurement of workers was all but
impossible [22]. Instead, managers conducted “stop watch
time studies” [109], using completion times to inform per–task
compensation, similar to the efficiency experts hired by the
Santa Fe Railway, but substantially more precise. The distilla-
tion of work into smaller units ultimately bottomed out with
tasks as small as could be usefully measured [52].
The fourth and final stage was narrow expertise training.
Even after work was decomposed and measured, there were
not enough qualified workers available to do it. So, as World
War II raged and there was a dearth of skilled workers, man-
agers trained women just enough to be able to complete their
tasks [63]. Over time, these women could gain proficiency
and develop broader expertise.
Comparing the phenomena
Where measurement and instrumentation were limiting fac-
tors for historical piecework, computation has changed the
situation so that a dream of scientific management and Tay-
lorism — to measure every motion at every point throughout
the workday and beyond — is not only doable, but trivial [152].
Where Graves directly implicates measurement as preventing
scientific management from being fully utilized [52], modern
crowd work is measuring and modeling every click, scroll, and
keyboard event [132, 131]. The result is that on–demand work
can articulate and track far more carefully than piecework
historically could.
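To illustrate this shift, the sketch below derives an automated "stop watch time study" from a raw event log, the kind of per-keystroke instrumentation Taylor's clerks could never have collected; the Event record and the idle-gap heuristic are our own assumptions, not a description of any particular platform's telemetry [132, 131].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One hypothetical low-level log record: a click, scroll, or keypress."""
    kind: str         # e.g. "click", "scroll", "key"
    timestamp: float  # seconds since the task was displayed

def active_seconds(events: List[Event], idle_gap: float = 5.0) -> float:
    """An automated 'stop watch time study': sum the intervals between
    consecutive input events, discarding idle gaps longer than idle_gap."""
    times = sorted(e.timestamp for e in events)
    return sum(b - a for a, b in zip(times, times[1:]) if b - a <= idle_gap)

log = [Event("click", 0.0), Event("key", 1.2), Event("key", 2.0),
       Event("scroll", 30.0)]  # the 28-second idle gap is excluded
print(active_seconds(log))     # 2.0
```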
A second shift is the relative ease with which the metaphorical
“assembly line” can be experimented with and measured. His-
torical manufacturing equipment could not quickly be assem-
bled, edited, and redeployed [69]. In contrast, today system–
designers can share, modify, and instantiate environments like
sites of labor in a few lines of code [95, 98]. This opportunity
has spurred an entire body of work investigating the effects of
ordering, pacing, interruptions, and other factors accelerating
scientific management that would have been all but impossible
as few as 20 years ago [37, 24, 32, 31, 85].
Implications for on–demand work
If decomposition in piecework progressed in four stages, we
have seen three of them in on–demand work so far. First,
as with piecework, on–demand work began by decomposing
tasks so that anyone could complete them, as with data la-
beling on Amazon Mechanical Turk. Second, we began to
modularize and measure external expertise (e.g., software engi-
neering, design) so that it could be brought into crowdsourcing
systems [121, 30]. Third, we used measurement to mathemati-
cally optimize workers’ behaviors so that we could make the
systems more efficient [156].
The fourth stage, then, appears likely to occur: narrow train-
ing of workers for these decomposed expert tasks. There is
demand for skilled workers in many crowdsourcing tasks, and
systems to help train workers [142]. We might expect to see
the rise of systems that scaffold workers into extremely narrow
areas of expertise, for instance using online courses as proof
of expertise in a specific domain necessary for a microtask.
Finally, improved measurement and lowered costs of produc-
tion have made it feasible to apply piecework methods to many
domains where it may not have historically been possible. The
limit is no longer measurement precision, but human cogni-
tion. Task switching and other cognitive costs make it difficult
to work on tasks so far decontextualized from their original
intention [89]. There will of course be tasks that can be de-
composed without much context, and these will form the most
fine–grained of microtasks. However, other tasks cannot be
freed from context — for example, logo design requires a deep
understanding of the client and their goals.
Workers’ Relationships to their Work
HCI and CSCW have historically framed themselves around
supporting work. While all artifacts have politics, the recent
shift into computational labor systems has directly impacted
the lives and livelihoods of workers in new ways. So, it’s im-
perative to ask: What will the future look like for the workers
who use these systems?
The perspective of on–demand work
Who are the crowd workers and what draws them to crowd
work? Early literature emphasized motivations like fun and
spare change, but this narrative soon shifted to emphasize
that many workers use platforms such as Amazon Mechanical
Turk as a primary source of income [77, 70, 11]. Despite
this, Mechanical Turk is a disappointingly low–wage worksite
for most people in the United States [70, 104, 56]. Thus,
those who choose to opt out of the traditional labor force
and spend significant time on Mechanical Turk are especially
motivated by the opportunity for autonomy and transience
between tasks [77]. While some describe Turkers as powerless
victims or even unaware of what’s going on, this framing
is increasingly being rejected by workers and designers as
“cast[ing] Turkers as dopes in the system” [73].
Workers’ relationships with requesters are fraught. The unbri-
dled power that requesters have over workers, and the resultant
frustration that this generates, has motivated research into the
tense relationships between workers and requesters [53, 133].
Workers are often blamed for any low–quality work, regardless
of whether they are responsible [104, 106]. Some research
is extremely open about this position, blaming unpredictable
work on “malicious” workers [50] or those with “a lack of
expertise, dedication [or] interest” [136]. Workers resent this
position, and for good reason. Irani and Silberman highlighted
the information asymmetry between workers and requesters
on AMT, which led to the creation of Turkopticon, a site which
allows Turkers to rate and review requesters [72]. Dynamo
then took this critique on information asymmetry and power
imbalances further, designing a platform to facilitate collective
action among Turkers to effect changes to their circumstances [133].
Researchers have also begun to appreciate the sociality of
crowd workers. Because the platforms do not typically in-
clude social spaces, workers instead congregate off–platform
in forums and mailing lists. There, Turkers exchange advice
on high–paying work, talk about their earnings, build social
connections, and discuss requesters [104]. Many crowd work-
ers know each other through offline and online connections,
coordinating behind–the–scenes despite the platforms encour-
aging independent work [54, 162]. However, the frustration
and mistrust that workers experience with requesters does
occasionally boil over on the forums.
The perspective of piecework
Early observers believed that workers were strongly motivated
by the autonomy of working in the piecework model. Clark
observed textile mill pieceworkers and reported, “When he
works by the day the Italian operative wishes to leave before
the whistle blows, but if he works by the piece he will work as
many hours as it is possible for him to stand” [35]. However,
the emergent trend contrasted with this early rhetoric, as when
workers began instituting “The Fix”, deliberately slow work
to game efficiency experts [130]. Pieceworkers, Roy found,
would form acrimonious relationships with their managers.
Soon, workers began resisting piecework regimes. The match–
girls engaged in their famous strike of 1888, particularly push-
ing to abolish the fines that were taken out of their wages.
Soon others followed suit, including women in the garment
industry in Philadelphia who established collective bargaining
rights [42] and national coal miners who effected an individual
minimum wage in 1912 [125].
Many worker organizations began weighing in against piece-
work and the myriad oversights it made in valuing workers’
time [75, 122]. As mounting attention increasingly revealed
problems in piecework’s treatment of workers, workers them-
selves began to speak out about their frustration with this new
regime. Organizations representing railway workers, mechani-
cal engineers, and others began to mount advocacy in defense
of workers [75, 122]. Pieceworkers’ relationships with their
employers eventually developed a pattern of working through
labor advocacy groups [96, 8, 105, 74]. Following the template of
the match–girls, collective action grew to become a central
component of negotiating with managers [60, 115].
Relative to modern on–demand workers, there is a notice-
able dearth of information on the interpersonal relationships
among pieceworkers beyond the match–girls at the end of the
19th century. Nevertheless, we can offer some observations:
primary sources indicate that labor organizations wished for
workers to identify as a collective group, “not only as rail-
road employees but also as members of the larger life of the
community” [75]. Doing this, Ostrom and others later argued,
would facilitate collective action and perhaps collective gov-
ernance [116, 60, 115]. Riis also contributed to this sense of
shared struggle and endurance by documenting pieceworkers
in their home–workplaces, literally bringing to light the grim
circumstances in which pieceworkers lived and worked [123].
Comparing the phenomena
There was generally less written about work quality concerns
for historical pieceworkers than there is in modern on–demand
work. Why the difference? One possibility is that, by writing
web scripts and applying them to many tasks, a small number
of spammers have an outsized influence on the perception of
bad actors. Another possibility is that historical pieceworkers
faced much more risk in shirking: it was much harder for
pieceworkers to move to a new location and find a new job.
Today, Mechanical Turk workers can work for a dozen or
more different groups in the span of a day. A third possibility:
online anonymity breeds distrust [47], and where pieceworkers
could be directly observed by foremen and known to them,
online workers are known by little more than an inscrutable
alphanumeric string, like A2XJMS2J2FMVXK.
The relationship between workers and employers has also
shifted: while historically the management of workers had to
be done through a foreman, foremen of the 20th century have
largely been replaced by algorithms of the 21st century [94].
Consequently, the agents managing work are now cold, logi-
cal, and unforgiving. While a person might recognize that the
“attention check” questions proposed by Le et al. and others
ensure that malicious and inattentive workers are stopped [93,
113], some implementations of these approaches only seem
to antagonize workers [106]. As Anderson and Schmittlein
wrote in 1984, “when performance is difficult to evaluate, im-
perfect input measures and a manager’s subjective judgment
are preferable to defective (simple, observable) output mea-
sures” [9]. This frustration has only grown as requesters have
had to rely on automatic management mechanisms. Only a
few use the equivalent of human foremen [57, 86].
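For concreteness, here is a minimal sketch of the gold-standard "attention check" mechanism discussed above; the data layout is hypothetical, and the blunt pass/fail rule illustrates exactly the lack of discretion at issue.

```python
def passes_attention_checks(answers: dict, gold: dict) -> bool:
    """Accept a submission only if every known-answer item matches.
    `gold` maps planted check-question ids to their expected answers."""
    return all(answers.get(qid) == expected for qid, expected in gold.items())

# A submission that misses one planted check is rejected outright,
# with no discretion applied: the rigidity workers resent.
print(passes_attention_checks({"q1": "cat", "check1": "blue"},
                              {"check1": "red"}))  # False
```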
Relative to the history of collective action for piecework-
ers, on–demand workers have struggled to make their voices
heard [133, 73, 72]. With workers constantly drifting through
these platforms, and with many part–time members, it’s ex-
tremely difficult to corral the group to make a collective de-
cision [133]. Even when they can, enforcement remains a
challenge: while pieceworkers could physically block access
to a site of production and convince other workers to join
them, online labor markets provide no facilities for workers to
change the experience of other workers. This is a key limita-
tion — without such facilities, workers cannot enforce a strike.
Implications for on–demand work
The decentralization and anonymization of on–demand work,
especially online crowd work, will continue to make many of
its social relationships a struggle. While some workers get
to know each other well on forums [104, 54], many never
engage in these social spaces. Without intervention, worker
relationships and collectivism are likely to be inhibited by this
decentralized design. One option is to build worker central-
izing points into the platform, for example asking workers to
vote on each others’ reputation or allowing groups of workers
to collectively reject a task from the platform [157].
The history of piecework further suggests that relationships be-
tween workers and employers might be improved if employers
engaged in more human management styles. Instead of dele-
gating as many management tasks as possible to an algorithm,
it might be possible to build dashboards and other information
tools that empower modern crowd work foremen [86]. If the
literature on piecework is to be believed, more considerate
human management may resolve many of the tensions.
Reciprocally, crowd work may be able to inform piecework
research in this domain. There exists far less literature about
pieceworkers' relationships than about on–demand workers'
relationships today. Two reasons stand out: first,
modern platforms are visible to researchers in ways that the
sites of piece work labor were not. Second, Anthropology
stands on a firmer theoretical and methodological basis than it
did at the turn of the 20th century. Malinowski, Boas, Mead,
and other luminaries throughout the first half of the 20th cen-
tury effectively defined Cultural Anthropology as we know it
today; participant–observation, the etic and the emic under-
standing of culture, and reflexivity did not begin to resemble
their contemporary forms until these works [103, 19,
108]. On–demand labor today may give us an opportunity to
revisit open questions in piecework with a more refined lens.
DISCUSSION
In our analysis of on–demand work via the piecework lens,
three issues arise: 1) the hazards of predicting the future,
2) utopian and dystopian visions, and 3) a research agenda.
We will attempt to grapple with these questions here explicitly.
The Hazards of Predicting the Future
The past isn’t a perfect predictor for the future; as Scholz
cautions, “it would be wrong to conclude that in the realm of
digital labor there is nothing new under the sun” [135]. Our
analysis is limited by the differences, foreseen and unforeseen,
between historical piecework and modern on–demand work.
For example, unlike physical work environments, people can
(and often do) make one–off contributions to online communi-
ties [107]. While we have attempted to identify some likely
parallels and divergences between piecework and on–demand
work, we can’t claim to have accounted for everything.
But this does not mean that attempting to draw meaningfully
from historical scholarship would be folly; enough of piece-
work can and does inform on–demand work that HCI and
CSCW researchers might seek out historical framings for other
phenomena of study as well. While we can only speculate about
one of (perhaps many) possible futures, history does allow us to
articulate and bound which futures appear more likely.
Rosenberg and others have contributed substantially to the
practice in part by clearly limiting the extent of their claims —
only offering, for instance, “to narrow our estimates and thus
to concentrate resources in directions that are more likely to
have useful payoffs” [127]. Using this approach, our method
of relating history to modern socio–technical systems may
be a useful tool for researchers attempting to make sense of
ostensibly new phenomena. In other words, offering “that past
history is an indispensable source of information to anyone
interested in characterizing technologies” [126].
Utopian and Dystopian Visions
An easy narrative is to characterize the future of on–demand
labor as lying at one of two extremes. On one hand, crowd work
researchers imagine the application of crowdsourcing as a
potentially bright future that enables the achievement of near–
impossible goals and career opportunities [143, 83, 18, 142].
On the other hand, researchers warn that on–demand labor
will create exploitative sites of dispossession [135], discrimi-
nation [40], and invisible, deeply frustrated workers [72, 16].
A uniquely challenging facet of this domain is the public
attention that it has garnered. Activists have described on–
demand workers as having “essentially been turned into modern–day
slaves” [13]. Meanwhile, advocates have described it as “a
project of sharing aimed at providing ordinary people with
more economic opportunities and improving their lives” [39].
Piecework teaches us that, without appropriate norms and poli-
cies, the dystopian outcome has happened before and may happen
again. The piecework nature of on–demand work induces us
“to neglect tasks that are less easy to measure” [6], rewarding
us not for creativity but for predictability; payment for this work
may ultimately be determined by algorithms that fundamen-
tally don’t understand people; the layers between us and our
managers might eventually become “defective (simple, observ-
able)” algorithms [9], just like those which already frustrate
on–demand workers [94, 133, 72]. However, social policy has
advanced since the early 1900s, so as on–demand work grows,
a repeat of How the Other Half Lives [123] seems less likely.
On the other hand, while piecework’s nascent years were grim,
what followed was a century of some of the most potent labor
advocacy organizations in modern history [63, 105]. Even
today, the geist of the labor union revolution inspires collective
action and worker empowerment around the world. Recently,
in India, workers across the nation engaged in the largest labor
strike in human history [4]. If labor advocacy groups can find
ways to effect change in on–demand work as some have called
for [78], then the future of on–demand labor may follow the
same trajectory of worker empowerment that piecework saw.
The history of piecework suggests that the utopian and
dystopian outcomes will both occur, in dierent parts of the
world and to dierent groups of people. When piecework
largely disappeared in the United States, outsourcing appeared
— creating major labor issues around the world. It’s entirely
possible that we will create a new brand of flexible online
career in developed countries, while simultaneously fueling an
unskilled decentralized labor force in developing nations. As
designers and researchers, this prompts the question: which
outcome are we attempting to promote or avoid for whom?
A Research Agenda
Piecework also helps bring into focus the areas of research
that might bear the most fruit. We return to the three questions
that motivated this paper: 1) What are the complexity limits of
on–demand work? 2) How far can work be decomposed into
smaller microtasks? 3) What will work and the place of work
look like for workers?
While we have arguably outpaced piecework with regard to
the limits on the complexity of work, the most complex and
open–ended wicked problems [124] remain the domain of
older human collectives such as governments and organiza-
tions. In addition, we can learn from the piecework literature
as it relates to the stymieing effect that mismanagement has on
workers; research into complexity limits should focus on find-
ing new ways to manage workers, in particular using humans
(perhaps other crowd workers) to act as modern “foremen”.
Piecework researchers looking into decomposition pointed out
long ago that piecework is saddled by a lower limit on decom-
position: as Bewley mentions, “piecework does not compen-
sate workers for time spent switching tasks” [15]. We’ve since
studied this phenomenon in crowd work both observationally
[33] and experimentally [89]. We should consider whether
this remains a worthwhile area to explore; unless the work we
put forth directly affects the costs of task switching (for
instance, the cost of suboptimal task search, or the cognitive
burden of changing tasks), we may only make incremental
advances in micro–task decomposition. When the cognitive
cost of understanding a task and its inputs outstrips the eort
required to complete it, decomposition seems a poor choice.
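To make this tradeoff concrete, consider a toy cost model (the
notation here is ours and purely illustrative, not drawn from the
literature above). Suppose a macrotask requiring total effort E
is split into n microtasks, where completing microtask i costs a
worker e_i units of productive effort plus an overhead of c_s for
task search and switching and c_u for comprehending the task and
its inputs. Decomposition pays off only while

\[ \sum_{i=1}^{n} e_i + n\,(c_s + c_u) < E . \]

Interventions that leave c_s and c_u untouched can shrink only
the first term, which is why we expect merely incremental advances
from them; and once c_u exceeds a typical e_i, comprehension
overhead dominates and the inequality fails.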
Finally, we turn to the relationships of crowd workers. The
crowd work literature here can convincingly speak back to the
piecework scholarship perhaps more than in the other sections.
The tools that are available to us today — not just technical,
but methodological — make it possible to discover, study, and
partner with crowd workers in ways that were unimaginable to
piecework researchers. Bigham engages in crowd work [16]
not just because it’s possible, but because our community
appreciates the importance of approaches such as participant–
observation and ethnography as a whole [114].
We should also take the opportunity to discuss the ethics of
on–demand labor, as Williamson does [158].
The literature on the history of piecework does not frame the
question as whether piecework is inherently ethical or unethi-
cal, instead asking what conditions render it exploitative. The
literature we have brought to bear suggests that exploitation
occurs when conditions harm workers directly or indirectly,
such as in sweatshops and agricultural work with pesticides,
or where employers systematically underpay or overwork la-
borers by contemporary standards.
The question then is whether given socio–technical infrastruc-
tures systematically harm, underpay, or overwork workers.
For example, Amazon Mechanical Turk does not directly re-
quire any rate of payment for work, but its design encourages
employers to engage in exploitative behavior: piece–rate pay,
for example, does not value workers’ task search time; addi-
tionally, task design interfaces undeniably frame workers as
unreliable by recommending replication with multiple workers
rather than trusting and paying individual workers more.
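A back–of–the–envelope calculation makes the search–time point
concrete (the figures here are hypothetical, chosen only to
illustrate). A worker’s effective wage is

\[ w_{\mathit{eff}} = \frac{p}{t_{\mathit{search}} + t_{\mathit{work}}} , \]

where p is the piece rate. A $0.10 task requiring 40 seconds of
work implies a nominal $9.00 per hour; add 40 seconds of unpaid
task search per task, and the effective wage halves to $4.50 per
hour, without any change to the posted rate.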
Piecework also has lessons for a number of other research
questions in crowdsourcing. For example, future work could
more deeply explore the evolution of scientific management
as it relates to crowdsourcing optimization; quality control
approaches between the two eras; and further analysis of
incentive structures.
CONCLUSION
On–demand work is not new, but a contemporary instantia-
tion of piecework. In this paper, we reconsider three major
research questions in on–demand work using piecework as a
lens: 1) What are the complexity limits of on–demand work?
2) How far can work be decomposed into smaller microtasks?
3) What will work and the place of work look like for workers?
We draw on piecework scholarship to inform analyses of what
has changed, what hasn’t, and what may change soon. Reciprocally,
we believe that modern on–demand work can teach us about
the broader phenomenon of piecework as well. If history really
does repeat itself, the best we can do is be prepared.
ACKNOWLEDGEMENTS
This work was supported by the Stanford Cyber Initiative and
National Science Foundation award IIS-1351131.
REFERENCES
1. 2015. House Cleaning, Handyman, Lawn Care Services
in Austin, Denver, Kansas City, Minneapolis and San
Francisco — Zaarly. (9 2015). https://www.zaarly.com/
2. 2015. TaskRabbit connects you to safe and reliable help
in your neighborhood. (9 2015).
https://www.taskrabbit.com/
3. 2015. Uber. (9 2015). https://www.uber.com/
4. 2016. 150 Million Indian Workers Take Part In Largest
Strike in Centuries. (9 2016).
http://therealnews.com/t2/index.php?option=
com_content&task=view&id=31&Itemid=74&jumival=17170
5. Elena Agapie, Jaime Teevan, and Andrés
Monroy-Hernández. 2015. Crowdsourcing in the field: A
case study using local crowds for event reporting. In
Third AAAI Conference on Human Computation and
Crowdsourcing.
6. Jonas Agell. 2004. Why are Small Firms Different?
Managers’ Views. Scandinavian Journal of Economics
106, 3 (2004), 437–452. DOI:
http://dx.doi.org/10.1111/j.0347-0520.2004.00371.x
7. J Ignacio Aguaded-Gómez. 2013. The MOOC
Revolution: A new form of education from the
technological paradigm. Comunicar 41, 21 (2013), 7–8.
8. John S Ahlquist and Margaret Levi. 2013. In the interest
of others: Organizations and social activism. Princeton
University Press.
9. Erin Anderson and David C. Schmittlein. 1984.
Integration of the Sales Force: An Empirical
Examination. The RAND Journal of Economics 15, 3
(1984), 385–395. http://www.jstor.org/stable/2555446
10. Paul André, Aniket Kittur, and Steven P. Dow. 2014.
Crowd Synthesis: Extracting Categories and Clusters
from Complex Data. In Proceedings of the 17th ACM
Conference on Computer Supported Cooperative Work &
Social Computing (CSCW ’14). ACM, 989–998. DOI:
http://dx.doi.org/10.1145/2531602.2531653
11. Judd Antin and Aaron Shaw. 2012. Social Desirability
Bias and Self-reports of Motivation: A Study of Amazon
Mechanical Turk in the US and India. In Proceedings of
the SIGCHI Conference on Human Factors in Computing
Systems (CHI ’12). ACM, 2925–2934. DOI:
http://dx.doi.org/10.1145/2207676.2208699
12. Peter Baker. 1993. Production restructuring in the textiles
and clothing industries. New Technology, Work and
Employment 8, 1 (1993), 43–55. DOI:
http://dx.doi.org/10.1111/j.1468-005X.1993.tb00033.x
13. Jeff Bercovici. 2011. AOL–HuffPo Suit Seeks $105M:
‘This Is About Justice’. (4 2011).
http://www.forbes.com/sites/jeffbercovici/2011/04/12/
aol-huffpo-suit-seeks-105m-this-is-about-justice
14. Michael S. Bernstein, Greg Little, Robert C. Miller,
Björn Hartmann, Mark S. Ackerman, David R. Karger,
David Crowell, and Katrina Panovich. 2010. Soylent: A
Word Processor with a Crowd Inside. In Proceedings of
the 23rd Annual ACM Symposium on User Interface
Software and Technology (UIST ’10). ACM, 313–322.
DOI:http://dx.doi.org/10.1145/1866029.1866078
15. Truman F Bewley. 1999. Why wages don’t fall during a
recession. Harvard University Press.
16. Jeffrey Bigham. 2014. My MTurk (half) Workday. (7
2014). http://www.cs.cmu.edu/~jbigham/posts/2014/half-
workday-as-turker.html
17. Jeffrey P. Bigham, Michael S. Bernstein, and Eytan Adar.
2015. Human-Computer Interaction and Collective
Intelligence. In Handbook of Collective Intelligence. MIT
Press, 57–84. http:
//repository.cmu.edu/cgi/viewcontent.cgi?article=1264
18. Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg
Little, Andrew Miller, Robert C. Miller, Robin Miller,
Aubrey Tatarowicz, Brandyn White, Samual White, and
Tom Yeh. 2010. VizWiz: Nearly Real-time Answers to
Visual Questions. In Proceedings of the 23rd Annual
ACM Symposium on User Interface Software and
Technology (UIST ’10). ACM, 333–342. DOI:
http://dx.doi.org/10.1145/1866029.1866080
19. Franz Boas. 1940. Race, language, and culture.
University of Chicago Press.
20. Charles Booth. 1903. Life and Labour of the People in
London. Vol. 8. Macmillan and Company.
21. Robin Brewer, Meredith Ringel Morris, and Anne Marie
Piper. 2016. "Why Would Anybody Do This?":
Understanding Older Adults’ Motivations and Challenges
in Crowd Work. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, 2246–2257. DOI:
http://dx.doi.org/10.1145/2858036.2858198
22. Charles Brown. 1990. Firms’ Choice of Method of Pay.
Industrial & Labor Relations Review 43, 3 (1990),
165S–182S. DOI:
http://dx.doi.org/10.1177/001979399004300311
23. Carrie J. Cai, Philip J. Guo, James R. Glass, and
Robert C. Miller. 2015. Wait-Learning: Leveraging Wait
Time for Second Language Education. In Proceedings of
the 33rd Annual ACM Conference on Human Factors in
Computing Systems (CHI ’15). ACM, 3701–3710. DOI:
http://dx.doi.org/10.1145/2702123.2702267
24. Carrie J. Cai, Shamsi T. Iqbal, and Jaime Teevan. 2016.
Chain Reactions: The Impact of Order on Microtask
Chains. In Proceedings of the 2016 CHI Conference on
Human Factors in Computing Systems (CHI ’16). ACM,
3143–3154. DOI:
http://dx.doi.org/10.1145/2858036.2858237
25. Chris Callison-Burch. 2014. Crowd-workers:
Aggregating information across turkers to help them find
higher paying work. In Second AAAI Conference on
Human Computation and Crowdsourcing.
26. Norma W Carlson. 1982. Time rates tighten their grip on
manufacturing industries. Monthly Lab. Rev. 105 (1982),
15.
27. L. Elisa Celis, Sai Praneeth Reddy, Ishaan Preet Singh,
and Shailesh Vaya. 2016. Assignment Techniques for
Crowdsourcing Sensitive Tasks. In Proceedings of the
19th ACM Conference on Computer–Supported
Cooperative Work & Social Computing (CSCW ’16).
ACM, 836–847. DOI:
http://dx.doi.org/10.1145/2818048.2835202
28. Edwin Chadwick. 1865. Opening Address, at the Meeting of the
National Association for the Promotion of Social Science,
held at York, in September, 1864. Journal of the
Statistical Society of London 28, 1 (1865), 1–33.
http://www.jstor.org/stable/2338394
29. Joseph Chee Chang, Aniket Kittur, and Nathan Hahn.
2016. Alloy: Clustering with Crowds and Computation.
In Proceedings of the 2016 CHI Conference on Human
Factors in Computing Systems (CHI ’16). ACM,
3180–3191. DOI:
http://dx.doi.org/10.1145/2858036.2858411
30. Yan Chen, Steve Oney, and Walter S. Lasecki. 2016.
Towards Providing On-Demand Expert Support for
Software Developers. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, 3192–3203. DOI:
http://dx.doi.org/10.1145/2858036.2858512
31. Justin Cheng, Jaime Teevan, and Michael S. Bernstein.
2015a. Measuring Crowdsourcing Effort with Error–Time
Curves. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems
(CHI ’15). ACM, 1365–1374. DOI:
http://dx.doi.org/10.1145/2702123.2702145
32. Justin Cheng, Jaime Teevan, Shamsi T. Iqbal, and
Michael S. Bernstein. 2015b. Break It Down: A
Comparison of Macro- and Microtasks. In Proceedings of
the 33rd Annual ACM Conference on Human Factors in
Computing Systems (CHI ’15). ACM, 4061–4064. DOI:
http://dx.doi.org/10.1145/2702123.2702146
33. Lydia B. Chilton, John J. Horton, Robert C. Miller, and
Shiri Azenkot. 2010. Task Search in a Human
Computation Market. In Proceedings of the ACM
SIGKDD Workshop on Human Computation (HCOMP
’10). ACM, 1–9. DOI:
http://dx.doi.org/10.1145/1837885.1837889
34. Lydia B. Chilton, Greg Little, Darren Edge, Daniel S.
Weld, and James A. Landay. 2013. Cascade:
Crowdsourcing Taxonomy Creation. In Proceedings of
the SIGCHI Conference on Human Factors in Computing
Systems (CHI ’13). ACM, 1999–2008. DOI:
http://dx.doi.org/10.1145/2470654.2466265
35. William Alexander Graham Clark. 1908. Cotton Textile
Trade in Turkish Empire, Greece, and Italy. Vol. 10. US
Government Printing Office.
36. Dan Cosley, Dan Frankowski, Loren Terveen, and John
Riedl. 2007. SuggestBot: Using Intelligent Task Routing
to Help People Find Work in Wikipedia. In Proceedings
of the 12th International Conference on Intelligent User
Interfaces (IUI ’07). ACM, 32–41. DOI:
http://dx.doi.org/10.1145/1216295.1216309
37. Peng Dai, Jeffrey M. Rzeszotarski, Praveen Paritosh, and
Ed H. Chi. 2015. And Now for Something Completely
Different: Improving Crowdsourcing Workflows with
Micro-Diversions. In Proceedings of the 18th ACM
Conference on Computer Supported Cooperative Work &
Social Computing (CSCW ’15). ACM, 628–638. DOI:
http://dx.doi.org/10.1145/2675133.2675260
38. Andrea Rees Davies and Brenda D Frink. 2014. The
origins of the ideal worker: The separation of work and
home in the United States from the market revolution to
1950. Work and Occupations 41, 1 (2014), 18–39.
39. Jan Drahokoupil and Brian Fabo. 2016. The Sharing
Economy That Is Not: Shaping Employment In Platform
Capitalism. (7 2016).
https://www.socialeurope.eu/2016/07/sharing-economy-
not-shaping-employment-platform-capitalism/
40. Benjamin G Edelman, Michael Luca, and Dan Svirsky.
2015. Racial Discrimination in the Sharing Economy:
Evidence from a Field Experiment. Harvard Business
School NOM Unit Working Paper 16-069 (2015).
41. Amy C. Edmondson. 2012. Teaming: How organizations
learn, innovate, and compete in the knowledge economy.
John Wiley & Sons, San Francisco, California.
42. Boris Emmet. 1918. Trade Agreements In The Women’s
Clothing Industries Of Philadelphia. Monthly Review of
the U.S. Bureau of Labor Statistics 6, 1 (1918), 27–39.
http://www.jstor.org/stable/41829256
43. Douglas C Engelbart. 2001. Augmenting human intellect:
a conceptual framework (1962). PACKER, Randall and
JORDAN, Ken. Multimedia. From Wagner to Virtual
Reality. New York: WW Norton & Company (2001),
64–90.
44. Samer Faraj and Yan Xiao. 2006. Coordination in
Fast-Response Organizations. Management Science 52, 8
(2006), 1155–1169. http:
//mansci.journal.informs.org/content/52/8/1155.short
45. Ethan Fast and Michael S. Bernstein. 2016. Meta:
Enabling Programming Languages to Learn from the
Crowd. In Proceedings of the 29th Annual Symposium on
User Interface Software and Technology (UIST ’16).
ACM, 259–270. DOI:
http://dx.doi.org/10.1145/2984511.2984532
46. David N Figlio and Lawrence W Kenny. 2007. Individual
teacher incentives and student performance. Journal of
Public Economics 91, 5 (2007), 901–914.
47. Batya Friedman, Peter H. Khan, Jr., and Daniel C. Howe.
2000. Trust Online. Commun. ACM 43, 12 (12 2000),
34–40. DOI:http://dx.doi.org/10.1145/355112.355120
48. Gerald Friedman. 2014. Workers without employers:
shadow corporations and the rise of the gig economy.
Review of Keynesian Economics 2 (2014), 171–188.
49. Mark Fuge, Kevin Tee, Alice Agogino, and Nathan
Maton. 2014. Analysis of collaborative design networks:
A case study of openideo. Journal of Computing and
Information Science in Engineering 14, 2 (2014), 021009.
50. Ujwal Gadiraju, Ricardo Kawase, Stefan Dietze, and
Gianluca Demartini. 2015. Understanding Malicious
Behavior in Crowdsourcing Platforms: The Case of
Online Surveys. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems
(CHI ’15). ACM, 1631–1640. DOI:
http://dx.doi.org/10.1145/2702123.2702443
51. David Geiger, Stefan Seedorf, Thimo Schulze, Robert C
Nickerson, and Martin Schader. 2011. Managing the
Crowd: Towards a Taxonomy of Crowdsourcing
Processes. In AMCIS.
52. Carl Graves. 1981. Applying Scientific Management
Principles to Railroad Repair Shops — the Santa Fe
Experience, 1904-18. Business and Economic History 10
(1981), 124–136. http://www.jstor.org/stable/23702539
53. Mary Gray. 2015. Fixing the Chaotic Crowdworker
Economy. (8 2015).
http://www.bloombergview.com/articles/2015-08-12/
fixing-the-chaotic-crowdworker-economy
54. Mary L. Gray, Siddharth Suri, Syed Shoaib Ali, and
Deepti Kulkarni. 2016. The Crowd is a Collaborative
Network. In Proceedings of the 19th ACM Conference on
Computer–Supported Cooperative Work & Social
Computing (CSCW ’16). ACM, 134–147. DOI:
http://dx.doi.org/10.1145/2818048.2819942
55. David Alan Grier. 2013. When computers were human.
Princeton University Press.
56. Neha Gupta, David Martin, Benjamin V. Hanrahan, and
Jacki O’Neill. 2014. Turk-Life in India. In Proceedings of
the 18th International Conference on Supporting Group
Work (GROUP ’14). ACM, 1–11. DOI:
http://dx.doi.org/10.1145/2660398.2660403
57. Daniel Haas, Jason Ansel, Lydia Gu, and Adam Marcus.
2015. Argonaut: macrotask crowdsourcing for complex
data processing. Proceedings of the VLDB Endowment 8,
12 (2015), 1642–1653.
58. J. Hagan and C. Fisher. 1973. Piece Work and Some of
Its Consequences in the Printing and Coal Mining
Industries in Australia, 1850-1930. Labour History 25
(1973), 19–39. http://www.jstor.org/stable/27508091
59. Nathan Hahn, Joseph Chang, Ji Eun Kim, and Aniket
Kittur. 2016. The Knowledge Accelerator: Big Picture
Thinking in Small Pieces. In Proceedings of the 2016
CHI Conference on Human Factors in Computing
Systems (CHI ’16). ACM, 2258–2270. DOI:
http://dx.doi.org/10.1145/2858036.2858364
60. Russell Hardin. 1982. Collective action. Resources for
the Future.
61. Robert A Hart. 2005. Piecework Versus Timework in
British Wartime Engineering. (2005).
62. Robert A Hart and others. 2016. The rise and fall of
piecework. IZA World of Labor (2016).
63. Robert A Hart and J Elizabeth Roberts. 2013. The rise
and fall of piecework–timework wage differentials:
market volatility, labor heterogeneity, and output pricing.
(2013).
64. John S. Heywood, W. S. Siebert, and Xiangdong Wei.
1997. Payment by Results Systems: British Evidence.
British Journal of Industrial Relations 35, 1 (1997), 1–22.
DOI:http://dx.doi.org/10.1111/1467-8543.00038
65. Sam Hind and Alex Gekker. 2014. ’Outsmarting Traffic,
Together’: Driving as Social Navigation. Exchanges: the
Warwick Research Journal 1, 2 (2014), 165–180.
66. Maureen Honey. 1985. Creating Rosie the Riveter: class,
gender, and propaganda during World War II. Univ of
Massachusetts Press.
67. John J. Horton, Leonard N. Stern, and Joseph M. Golden.
2015. Reputation Inflation: Evidence from an Online
Labor Market. (2015).
68. Jeff Howe. 2008. Crowdsourcing: How the power of the
crowd is driving the future of business. Random House.
69. Te C Hu. 1961. Parallel Sequencing and Assembly Line
Problems. Operations Research 9, 6 (1961), 841–848.
DOI:http://dx.doi.org/10.1287/opre.9.6.841
70. Panagiotis G Ipeirotis. 2010. Demographics of
mechanical turk. (2010).
71. Shamsi T. Iqbal and Brian P. Bailey. 2008. Effects of
Intelligent Notification Management on Users and Their
Tasks. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI ’08). ACM,
93–102. DOI:http://dx.doi.org/10.1145/1357054.1357070
72. Lilly C. Irani and M. Six Silberman. 2013. Turkopticon:
Interrupting Worker Invisibility in Amazon Mechanical
Turk. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI ’13). ACM,
611–620. DOI:http://dx.doi.org/10.1145/2470654.2470742
73. Lilly C. Irani and M. Six Silberman. 2016. Stories We
Tell About Labor: Turkopticon and the Trouble with
"Design". In Proceedings of the 2016 CHI Conference on
Human Factors in Computing Systems (CHI ’16). ACM,
4573–4586. DOI:
http://dx.doi.org/10.1145/2858036.2858592
74. Sanford M Jacoby. 1983. Union–management
cooperation in the United States: Lessons from the 1920s.
Industrial & Labor Relations Review 37, 1 (1983), 18–33.
75. B.M. Jewell. 1921. The problem of piece work. Nos.
1–16 in The Problem of Piece Work. Bronson
Canode Print. Co.
https://books.google.com/books?id=NN5NAQAAIAAJ
76. Sanjay Kairam and Jerey Heer. 2016. Parting Crowds:
Characterizing Divergent Interpretations in
Crowdsourced Annotation Tasks. In Proceedings of the
19th ACM Conference on Computer-Supported
Cooperative Work & Social Computing (CSCW ’16).
ACM, 1637–1648. DOI:
http://dx.doi.org/10.1145/2818048.2820016
77. Nicolas Kaufmann, Thimo Schulze, and Daniel Veit.
2011. More than fun and money. Worker Motivation in
Crowdsourcing–A Study on Mechanical Turk. In AMCIS,
Vol. 11. 1–11.
78. Sarah Kessler. 2015. What Does A Union Look Like In
The Gig Economy? (2 2015).
http://www.fastcompany.com/3042081/what-does-a-union-
look-like-in-the-gig-economy
79. Joy Kim and Andrés Monroy-Hernández. 2016. Storia:
Summarizing Social Media Content Based on Narrative
Theory Using Crowdsourcing. In Proceedings of the 19th
ACM Conference on Computer–Supported Cooperative
Work & Social Computing (CSCW ’16). ACM,
1018–1027. DOI:
http://dx.doi.org/10.1145/2818048.2820072
80. Joy Kim, Sarah Sterman, Allegra Argent Beal Cohen, and
Michael S Bernstein. 2017. Mechanical Novel:
Crowdsourcing Complex Work through Revision. In
Proceedings of the 20th ACM Conference on Computer
Supported Cooperative Work & Social Computing.
81. Peter Kinnaird, Laura Dabbish, and Sara Kiesler. 2012.
Workflow Transparency in a Microtask Marketplace. In
Proceedings of the 17th ACM International Conference
on Supporting Group Work (GROUP ’12). ACM,
281–284. DOI:http://dx.doi.org/10.1145/2389176.2389219
82. Aniket Kittur, Ed H. Chi, and Bongwon Suh. 2008.
Crowdsourcing User Studies with Mechanical Turk. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (CHI ’08). ACM, 453–456.
DOI:http://dx.doi.org/10.1145/1357054.1357127
83. Aniket Kittur, Jerey V. Nickerson, Michael Bernstein,
Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt
Lease, and John Horton. 2013. The Future of Crowd
Work. In Proceedings of the 2013 Conference on
Computer Supported Cooperative Work (CSCW ’13).
ACM, 1301–1318. DOI:
http://dx.doi.org/10.1145/2441776.2441923
84. Aniket Kittur, Boris Smus, Susheel Khamkar, and
Robert E. Kraut. 2011. CrowdForge: Crowdsourcing
Complex Work. In Proceedings of the 24th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’11). ACM, 43–52. DOI:
http://dx.doi.org/10.1145/2047196.2047202
85. Ranjay A. Krishna, Kenji Hata, Stephanie Chen, Joshua
Kravitz, David A. Shamma, Li Fei-Fei, and Michael S.
Bernstein. 2016. Embracing Error to Enable Rapid
Crowdsourcing. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, 3167–3179. DOI:
http://dx.doi.org/10.1145/2858036.2858115
86. Anand Kulkarni, Philipp Gutheim, Prayag Narula, David
Rolnitzky, Tapan Parikh, and Björn Hartmann. 2012.
Mobileworks: Designing for quality in a managed
crowdsourcing architecture. IEEE Internet Computing 16,
5 (2012), 28–35.
87. Walter Lasecki, Christopher Miller, Adam Sadilek,
Andrew Abumoussa, Donato Borrello, Raja Kushalnagar,
and Jeffrey Bigham. 2012. Real-time Captioning by
Groups of Non-experts. In Proceedings of the 25th
Annual ACM Symposium on User Interface Software and
Technology (UIST ’12). ACM, 23–34. DOI:
http://dx.doi.org/10.1145/2380116.2380122
88. Walter S. Lasecki, Kyle I. Murray, Samuel White,
Robert C. Miller, and Jeffrey P. Bigham. 2011. Real-time
Crowd Control of Existing Interfaces. In Proceedings of
the 24th Annual ACM Symposium on User Interface
Software and Technology (UIST ’11). ACM, 23–32.
DOI:
http://dx.doi.org/10.1145/2047196.2047200
89. Walter S. Lasecki, Jeffrey M. Rzeszotarski, Adam
Marcus, and Jeffrey P. Bigham. 2015. The Effects of
Sequence and Delay on Crowd Work. In Proceedings of
the 33rd Annual ACM Conference on Human Factors in
Computing Systems (CHI ’15). ACM, 1375–1378. DOI:
http://dx.doi.org/10.1145/2702123.2702594
90. Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, Anand
Kulkarni, James F. Allen, and Jeffrey P. Bigham. 2013.
Chorus: A Crowd-powered Conversational Assistant. In
Proceedings of the 26th Annual ACM Symposium on User
Interface Software and Technology (UIST ’13). ACM,
151–162. DOI:http://dx.doi.org/10.1145/2501988.2502057
91. Thomas D. LaToza, W. Ben Towne, Christian M.
Adriano, and André van der Hoek. 2014. Microtask
Programming: Building Software with a Crowd. In
Proceedings of the 27th Annual ACM Symposium on User
Interface Software and Technology (UIST ’14). ACM,
43–54. DOI:http://dx.doi.org/10.1145/2642918.2647349
92. Edith Law, Ming Yin, Joslin Goh, Kevin Chen,
Michael A. Terry, and Krzysztof Z. Gajos. 2016.
Curiosity Killed the Cat, but Makes Crowdwork Better.
In Proceedings of the 2016 CHI Conference on Human
Factors in Computing Systems (CHI ’16). ACM,
4098–4110. DOI:
http://dx.doi.org/10.1145/2858036.2858144
93. John Le, Andy Edmonds, Vaughn Hester, and Lukas
Biewald. 2010. Ensuring quality in crowdsourced search
relevance evaluation: The effects of training question
distribution. In SIGIR 2010 workshop on crowdsourcing
for search evaluation. 21–26.
94. Min Kyung Lee, Daniel Kusbit, Evan Metsky, and Laura
Dabbish. 2015. Working with Machines: The Impact of
Algorithmic and Data–Driven Management on Human
Workers. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems
(CHI ’15). ACM, 1603–1612. DOI:
http://dx.doi.org/10.1145/2702123.2702548
95. Lawrence Lessig. 2006. Code. Lawrence Lessig.
96. Margaret Levi, David Olson, Jon Agnone, and Devin
Kelly. 2009. Union democracy reexamined. Politics &
Society 37, 2 (2009), 203–228.
97. Alain Lipietz. 1982. Towards Global Fordism? New Left
Review 0, 132 (3 1982), 33. http://search.proquest.com/
docview/1301937328?accountid=14026
98. Greg Little, Lydia B. Chilton, Max Goldman, and
Robert C. Miller. 2010. TurKit: Human Computation
Algorithms on Mechanical Turk. In Proceedings of the
23rd Annual ACM Symposium on User Interface
Software and Technology (UIST ’10). ACM, 57–66.
DOI:
http://dx.doi.org/10.1145/1866029.1866040
99. Kurt Luther, Nathan Hahn, Steven P Dow, and Aniket
Kittur. 2015. Crowdlines: Supporting Synthesis of
Diverse Information Sources through Crowdsourced
Outlines. In Third AAAI Conference on Human
Computation and Crowdsourcing.
100. Ioanna Lykourentzou, Angeliki Antoniou, Yannick
Naudet, and Steven P. Dow. 2016. Personality Matters:
Balancing for Personality Types Leads to Better
Outcomes for Crowd Teams. In Proceedings of the 19th
ACM Conference on Computer–Supported Cooperative
Work & Social Computing (CSCW ’16). ACM, 260–273.
DOI:http://dx.doi.org/10.1145/2818048.2819979
101. Eleanor A. Maguire, David G. Gadian, Ingrid S.
Johnsrude, Catriona D. Good, John Ashburner, Richard
S. J. Frackowiak, and Christopher D. Frith. 2000.
Navigation-related structural change in the hippocampi of
taxi drivers. Proceedings of the National Academy of
Sciences 97, 8 (2000), 4398–4403. DOI:
http://dx.doi.org/10.1073/pnas.070039597
102. Eleanor A. Maguire, Rory Nannery, and Hugo J. Spiers.
2006. Navigation around London by a taxi driver with
bilateral hippocampal lesions. Brain 129, 11 (2006),
2894–2907. DOI:http://dx.doi.org/10.1093/brain/awl286
103. Bronislaw Malinowski. 2002. Argonauts of the Western
Pacific: An account of native enterprise and adventure in
the archipelagoes of Melanesian New Guinea. Routledge.
104. David Martin, Benjamin V. Hanrahan, Jacki O’Neill,
and Neha Gupta. 2014. Being a Turker. In Proceedings of
the 17th ACM Conference on Computer Supported
Cooperative Work & Social Computing (CSCW ’14).
ACM, 224–235. DOI:
http://dx.doi.org/10.1145/2531602.2531663
105. Jamie K McCallum. 2013. Global unions, local power:
the new spirit of transnational labor organizing. Cornell
University Press.
106. Brian McInnis, Dan Cosley, Chaebong Nam, and Gilly
Leshed. 2016a. Taking a HIT: Designing Around
Rejection, Mistrust, Risk, and Workers’ Experiences in
Amazon Mechanical Turk. In Proceedings of the 2016
CHI Conference on Human Factors in Computing
Systems (CHI ’16). ACM, 2271–2282. DOI:
http://dx.doi.org/10.1145/2858036.2858539
107. Brian James McInnis, Elizabeth Lindley Murnane,
Dmitry Epstein, Dan Cosley, and Gilly Leshed. 2016b.
One and Done: Factors Affecting One-time Contributors
to Ad-hoc Online Communities. In Proceedings of the
19th ACM Conference on Computer-Supported
Cooperative Work & Social Computing (CSCW ’16).
ACM, 609–623. DOI:
http://dx.doi.org/10.1145/2818048.2820075
108. Margaret Mead and Franz Boas. 1973. Coming of age in
Samoa. Penguin.
109. Milton J Nadworny. 1955. Scientific management and
the unions, 1900-1932; a historical analysis. Harvard
University Press.
110. Michael Nebeling, Alexandra To, Anhong Guo,
Adrian A. de Freitas, Jaime Teevan, Steven P. Dow, and
Jeffrey P. Bigham. 2016. WearWrite: Crowd–Assisted
Writing from Smartwatches. In Proceedings of the 2016
CHI Conference on Human Factors in Computing
Systems (CHI ’16). ACM, 3834–3846. DOI:
http://dx.doi.org/10.1145/2858036.2858169
111. Edward Newell and Derek Ruths. 2016. How One
Microtask Affects Another. In Proceedings of the 2016
CHI Conference on Human Factors in Computing
Systems (CHI ’16). ACM, 3155–3166. DOI:
http://dx.doi.org/10.1145/2858036.2858490
112. Jon Noronha, Eric Hysen, Haoqi Zhang, and
Krzysztof Z. Gajos. 2011. Platemate: Crowdsourcing
Nutritional Analysis from Food Photographs. In
Proceedings of the 24th Annual ACM Symposium on User
Interface Software and Technology (UIST ’11). ACM,
1–12. DOI:http://dx.doi.org/10.1145/2047196.2047198
113. David Oleson, Alexander Sorokin, Greg Laughlin,
Vaughn Hester, John Le, and Lukas Biewald. 2011.
Programmatic Gold: Targeted and Scalable Quality
Assurance in Crowdsourcing. (2011). http:
//www.aaai.org/ocs/index.php/WS/AAAIW11/paper/view/3995
114. Judith S Olson and Wendy A Kellogg. 2014. Ways of
Knowing in HCI. Springer.
115. Mancur Olson. 1965. The Logic of Collective Action:
Public Goods and the Theory of Groups. Rev. ed.
116. Elinor Ostrom. 1990. Governing the commons: The
evolution of institutions for collective action. Cambridge
university press.
117. Gabriele Paolacci, Jesse Chandler, and Panagiotis G
Ipeirotis. 2010. Running experiments on amazon
mechanical turk. Judgment and Decision making 5, 5
(2010), 411–419.
118. Paolo Parigi and Xiao Ma. 2016. The Gig Economy.
XRDS 23, 2 (12 2016), 38–41. DOI:
http://dx.doi.org/10.1145/3013496
119. Alexander J. Quinn and Benjamin B. Bederson. 2011.
Human Computation: A Survey and Taxonomy of a
Growing Field. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems
(CHI ’11). ACM, 1403–1412. DOI:
http://dx.doi.org/10.1145/1978942.1979148
120. Hugh Raynbird. 1847. Essay on Measure Work, locally
known as task, piece, job, or grate work (in its
application to agricultural labour).
121. Daniela Retelny, Sébastien Robaszkiewicz, Alexandra
To, Walter S. Lasecki, Jay Patel, Negar Rahmati, Tulsee
Doshi, Melissa Valentine, and Michael S. Bernstein.
2014. Expert Crowdsourcing with Flash Teams. In
Proceedings of the 27th Annual ACM Symposium on User
Interface Software and Technology (UIST ’14). ACM,
75–85. DOI:http://dx.doi.org/10.1145/2642918.2647409
122. Frank Richards. 1904. Is Anything the Matter with
Piecework. ASME.
123. Jacob August Riis. 1901. How the other half lives:
Studies among the tenements of New York. Penguin.
124. Horst WJ Rittel and Melvin M Webber. 1973.
Dilemmas in a general theory of planning. Policy
sciences 4, 2 (1973), 155–169.
125. D. H. Robertson. 1912. A Narrative of the Coal Strike.
The Economic Journal 22, 87 (1912), 365–387.
http://www.jstor.org/stable/2221944
126. Nathan Rosenberg. 1982. Inside the black box:
technology and economics. Cambridge University Press.
127. Nathan Rosenberg. 1994. Exploring the black box:
Technology, economics, and history. Cambridge
University Press.
128. Joel Ross, Lilly Irani, M. Six Silberman, Andrew
Zaldivar, and Bill Tomlinson. 2010. Who Are the
Crowdworkers?: Shifting Demographics in Mechanical
Turk. In CHI ’10 Extended Abstracts on Human Factors
in Computing Systems (CHI EA ’10). ACM, 2863–2872.
DOI:http://dx.doi.org/10.1145/1753846.1753873
129. James Rowan. 1901. A Premium System of
Remunerating Labour. Proceedings of the Institution of
Mechanical Engineers 61, 1 (1901), 865–882.
130. Donald Roy. 1954. Efficiency and “the fix”: Informal
intergroup relations in a piecework machine shop.
American journal of sociology (1954), 255–266.
131. Jeffrey Rzeszotarski and Aniket Kittur. 2012.
CrowdScape: Interactively Visualizing User Behavior
and Output. In Proceedings of the 25th Annual ACM
Symposium on User Interface Software and Technology
(UIST ’12). ACM, 55–62. DOI:
http://dx.doi.org/10.1145/2380116.2380125
132. Jeffrey M. Rzeszotarski and Aniket Kittur. 2011.
Instrumenting the Crowd: Using Implicit Behavioral
Measures to Predict Task Performance. In Proceedings of
the 24th Annual ACM Symposium on User Interface
Software and Technology (UIST ’11). ACM, 13–22.
DOI:
http://dx.doi.org/10.1145/2047196.2047199
133. Niloufar Salehi, Lilly C. Irani, Michael S. Bernstein, Ali
Alkhatib, Eva Ogbe, Kristy Milland, and Clickhappier.
2015. We Are Dynamo: Overcoming Stalling and
Friction in Collective Action for Crowd Workers. In
Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’15). ACM,
1621–1630. DOI:
http://dx.doi.org/10.1145/2702123.2702508
134. Lowell J. Satre. 1982. After the Match Girls’ Strike:
Bryant and May in the 1890s. Victorian Studies 26, 1
(1982), 7–31. http://www.jstor.org/stable/3827491
135. Trebor Scholz. 2012. Digital labor: The Internet as
playground and factory. Routledge.
136. Victor S. Sheng, Foster Provost, and Panagiotis G.
Ipeirotis. 2008. Get Another Label? Improving Data
Quality and Data Mining Using Multiple, Noisy Labelers.
In Proceedings of the 14th ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining
(KDD ’08). ACM, 614–622. DOI:
http://dx.doi.org/10.1145/1401890.1401965
137. Pao Siangliulue, Kenneth C. Arnold, Krzysztof Z. Gajos,
and Steven P. Dow. 2015. Toward Collaborative Ideation
at Scale: Leveraging Ideas from Others to Generate More
Creative and Diverse Ideas. In Proceedings of the 18th
ACM Conference on Computer Supported Cooperative
Work & Social Computing (CSCW ’15). ACM, 937–945.
DOI:http://dx.doi.org/10.1145/2675133.2675239
138. Six Silberman. 2015. Stop citing Ross et al. 2010, “Who
are the crowdworkers?”. (3 2015).
https://medium.com/@silberman/stop-citing-ross-et-al-
2010-who-are-the-crowdworkers-b3b9b1e8d300
139. Thiago H Silva, Pedro OS Vaz de Melo, Aline Carneiro
Viana, Jussara M Almeida, Juliana Salles, and
Antonio AF Loureiro. 2013. Traffic condition is more
than colored lines on a map: characterization of waze
alerts. In International Conference on Social Informatics.
Springer, 309–318.
140. Walter Skok. 1999. Knowledge Management: London
Taxi Cabs Case Study. In Proceedings of the 1999 ACM
SIGCPR Conference on Computer Personnel Research
(SIGCPR ’99). ACM, 94–101. DOI:
http://dx.doi.org/10.1145/299513.299625
141. Walter Skok. 2000. Managing knowledge within the
London taxi cab service. Knowledge and Process
Management 7, 4 (2000), 224.
142. Ryo Suzuki, Niloufar Salehi, Michelle S. Lam, Juan C.
Marroquin, and Michael S. Bernstein. 2016. Atelier:
Repurposing Expert Crowdsourcing Tasks As
Micro–internships. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems
(CHI ’16). ACM, 2645–2656. DOI:
http://dx.doi.org/10.1145/2858036.2858121
143. John C. Tang, Manuel Cebrian, Nicklaus A. Giacobe,
Hyun-Woo Kim, Taemie Kim, and Douglas “Beaker”
Wickert. 2011. Reflecting on the DARPA Red Balloon
Challenge. Commun. ACM 54, 4 (4 2011), 78–85. DOI:
http://dx.doi.org/10.1145/1924421.1924441
144. Frederick Winslow Taylor. 1896. A piece rate system.
Economic Studies 1, 2 (1896), 89.
145. Frederick Winslow Taylor. 1911. The principles of
scientific management. Harper.
146. Jaime Teevan, Shamsi T. Iqbal, Carrie J. Cai, Jerey P.
Bigham, Michael S. Bernstein, and Elizabeth M. Gerber.
2016b. Productivity Decomposed: Getting Big Things
Done with Little Microtasks. In Proceedings of the 2016
CHI Conference Extended Abstracts on Human Factors
in Computing Systems (CHI EA ’16). ACM, 3500–3507.
DOI:http://dx.doi.org/10.1145/2851581.2856480
147. Jaime Teevan, Shamsi T. Iqbal, and Curtis von Veh.
2016a. Supporting Collaborative Writing with
Microtasks. In Proceedings of the 2016 CHI Conference
on Human Factors in Computing Systems (CHI ’16).
ACM, 2657–2668. DOI:
http://dx.doi.org/10.1145/2858036.2858108
148. Jaime Teevan, Daniel J. Liebling, and Walter S. Lasecki.
2014. Selfsourcing Personal Tasks. In CHI ’14 Extended
Abstracts on Human Factors in Computing Systems (CHI
EA ’14). ACM, 2527–2532. DOI:
http://dx.doi.org/10.1145/2559206.2581181
149. Rajan Vaish, Peter Organisciak, Kotaro Hara, Jeffrey P
Bigham, and Haoqi Zhang. 2014a. Low Effort
Crowdsourcing: Leveraging Peripheral Attention for
Crowd Work. In Second AAAI Conference on Human
Computation and Crowdsourcing.
150. Rajan Vaish, Keith Wyngarden, Jingshu Chen, Brandon
Cheung, and Michael S. Bernstein. 2014b. Twitch
Crowdsourcing: Crowd Contributions in Short Bursts of
Time. In Proceedings of the 32nd Annual ACM
Conference on Human Factors in Computing Systems
(CHI ’14). ACM, 3645–3654. DOI:
http://dx.doi.org/10.1145/2556288.2556996
151. Vasilis Verroios and Michael S Bernstein. 2014. Context
trees: Crowdsourcing global understanding from local
views. In Second AAAI Conference on Human
Computation and Crowdsourcing.
152. E. Waltz. 2012. How I quantified myself. IEEE
Spectrum 49, 9 (9 2012), 42–47. DOI:
http://dx.doi.org/10.1109/MSPEC.2012.6281132
153. Sidney Webb and Beatrice Webb. 1894. The History of
Trade Unionism. (1894).
154. Martin L Weitzman. 1976. The new Soviet incentive
model. The Bell Journal of Economics (1976), 251–257.
155. Martin L. Weitzman. 1980. The “Ratchet Principle” and
Performance Incentives. The Bell Journal of Economics
11, 1 (1980), 302–308.
http://www.jstor.org/stable/3003414
156. Peng Dai, Mausam, and Daniel S. Weld. 2010.
Decision–theoretic control of crowd–sourced workflows.
In Twenty–Fourth Association for the Advancement of
Artificial Intelligence Conference on Artificial
Intelligence.
157. Mark E. Whiting, Dilrukshi Gamage, Aaron Gilbee,
Snehal Gaikwad, Shirish Goyal, Alipta Ballav, Dinesh
Majeti, Nalin Chhibber, Freddie Vargus, Teo Moura,
Angela Richmond Fuller, Varshine Chandrakanthan,
Gabriel Bayomi Tinoco Kalejaiye, Tejas Seshadri Sarma,
Yoni Dayan, Adam Ginzberg, Mohammed Hashim
Kambal, Kristy Milland, Sayna Parsi, Catherine A.
Mullings, Henrique Orefice, Sekandar Matin, Vibhor
Sehgal, Sharon Zhou, Akshansh Sinha, Jeff Regino,
Rajan Vaish, and Michael S. Bernstein. 2017. Crowd
Guilds: Worker-led Reputation and Feedback on
Crowdsourcing Platforms. In CSCW:
Computer-Supported Cooperative Work and Social
Computing.
158. Vanessa Williamson. 2016. On the Ethics of
Crowdsourced Research. PS: Political Science & Politics
49, 1 (1 2016), 77–81. DOI:
http://dx.doi.org/10.1017/S104909651500116X
159. Katherine Woollett and Eleanor A Maguire. 2011.
Acquiring “the Knowledge” of London’s layout drives
structural brain changes. Current biology 21, 24 (2011),
2109–2114.
160. Katherine Woollett, Hugo J. Spiers, and Eleanor A.
Maguire. 2009. Talent in the taxi: a model system for
exploring expertise. Philosophical Transactions of the
Royal Society of London B: Biological Sciences 364,
1522 (2009), 1407–1416. DOI:
http://dx.doi.org/10.1098/rstb.2008.0288
161. Shao-Yu Wu, Ruck Thawonmas, and Kuan-Ta Chen.
2011. Video Summarization via Crowdsourcing. In CHI
’11 Extended Abstracts on Human Factors in Computing
Systems (CHI EA ’11). ACM, 1531–1536. DOI:
http://dx.doi.org/10.1145/1979742.1979803
162. Ming Yin, Mary L Gray, Siddharth Suri, and
Jennifer Wortman Vaughan. 2016. The Communication
Network Within the Crowd. In Proceedings of the 25th
International Conference on World Wide Web.
International World Wide Web Conferences Steering
Committee, 1293–1303.
163. Lixiu Yu, Aniket Kittur, and Robert E. Kraut. 2016a.
Distributed Analogical Idea Generation with Multiple
Constraints. In Proceedings of the 19th ACM Conference
on Computer-Supported Cooperative Work & Social
Computing (CSCW ’16). ACM, 1236–1245. DOI:
http://dx.doi.org/10.1145/2556288.2557371
164. Lixiu Yu, Aniket Kittur, and Robert E. Kraut. 2016b.
Encouraging “Outside-the-Box” Thinking in Crowd
Innovation Through Identifying Domains of Expertise. In
Proceedings of the 19th ACM Conference on
Computer-Supported Cooperative Work & Social
Computing (CSCW ’16). ACM, 1214–1222. DOI:
http://dx.doi.org/10.1145/2818048.2820025
165. Alvin Yuan, Kurt Luther, Markus Krause, Sophie Isabel
Vennix, Steven P Dow, and Bjorn Hartmann. 2016.
Almost an Expert: The Eects of Rubrics and Expertise
on Perceived Value of Crowdsourced Design Critiques. In
Proceedings of the 19th ACM Conference on
Computer–Supported Cooperative Work & Social
Computing (CSCW ’16). ACM, 1005–1017. DOI:
http://dx.doi.org/10.1145/2818048.2819953
166. M. C. Yuen, I. King, and K. S. Leung. 2011. A Survey
of Crowdsourcing Systems. In 2011 IEEE Third
International Conference on Privacy, Security, Risk and
Trust (PASSAT) and 2011 IEEE Third International
Conference on Social Computing (SocialCom). 766–773. DOI:
http://dx.doi.org/10.1109/PASSAT/SocialCom.2011.203