
Chapter Title: Introduction

Book Title: The Atlas of AI

Book Subtitle: Power, Politics, and the Planetary Costs of Artificial Intelligence

Book Author(s): KATE CRAWFORD

Published by: Yale University Press (2021)

Stable URL: https://www.jstor.org/stable/j.ctv1ghv45t.3


Introduction

The Smartest Horse in the World

At the end of the nineteenth century, Europe was captivated by a horse called Hans. “Clever Hans” was nothing less than a marvel: he could solve math problems, tell time, identify days on a calendar, differentiate musical tones, and spell out words and sentences. People flocked to watch the German stallion tap out answers to complex problems with his hoof and consistently arrive at the right answer. “What is two plus three?” Hans would diligently tap his hoof on the ground five times. “What day of the week is it?” The horse would then tap his hoof to indicate each letter on a purpose-built letter board and spell out the correct answer. Hans even mastered more complex questions, such as, “I have a number in mind. I subtract nine and have three as a remainder. What is the number?” By 1904, Clever Hans was an international celebrity, with the New York Times championing him as “Berlin’s Wonderful Horse; He Can Do Almost Everything but Talk.”¹

Hans’s trainer, a retired math teacher named Wilhelm von Osten, had long been fascinated by animal intelligence.


Von Osten had tried and failed to teach kittens and bear cubs cardinal numbers, but it wasn’t until he started working with his own horse that he had success. He first taught Hans to count by holding the animal’s leg, showing him a number, and then tapping on the hoof the correct number of times. Soon Hans responded by accurately tapping out simple sums. Next von Osten introduced a chalkboard with the alphabet spelled out, so Hans could tap a number for each letter on the board. After two years of training, von Osten was astounded by the animal’s strong grasp of advanced intellectual concepts. So he took Hans on the road as proof that animals could reason. Hans became the viral sensation of the belle époque.

But many people were skeptical, and the German board of education launched an investigative commission to test von Osten’s scientific claims. The Hans Commission was led by the psychologist and philosopher Carl Stumpf and his assistant Oskar Pfungst, and it included a circus manager, a retired schoolteacher, a zoologist, a veterinarian, and a cavalry officer. Yet after extensive questioning of Hans, both with his trainer present and without, the horse maintained his record of correct answers, and the commission could find no evidence of deception. As Pfungst later wrote, Hans performed in front of “thousands of spectators, horse-fanciers, trick-trainers of first rank, and not one of them during the course of many months’ observations are able to discover any kind of regular signal” between the questioner and the horse.²

The commission found that the methods Hans had been taught were more like “teaching children in elementary schools” than animal training and were “worthy of scientific examination.”³ But Stumpf and Pfungst still had doubts. One finding in particular troubled them: when the questioner did not know the answer or was standing far away, Hans rarely gave the correct answer. This led Pfungst and Stumpf to consider whether some sort of unintentional signal had been providing Hans with the answers.

As Pfungst would describe in his 1911 book, their intuition was right: the questioner’s posture, breathing, and facial expression would subtly change around the moment Hans reached the right answer, prompting Hans to stop there.⁴ Pfungst later tested this hypothesis on human subjects and confirmed his result. What fascinated him most about this discovery was that questioners were generally unaware that they were providing pointers to the horse. The solution to the Clever Hans riddle, Pfungst wrote, was the unconscious direction from the horse’s questioners.⁵ The horse was trained to produce the results his owner wanted to see, but audiences felt that this was not the extraordinary intelligence they had imagined.

Wilhelm von Osten and Clever Hans

The story of Clever Hans is compelling from many angles: the relationship between desire, illusion, and action, the business of spectacles, how we anthropomorphize the nonhuman, how biases emerge, and the politics of intelligence. Hans inspired a term in psychology for a particular type of conceptual trap, the Clever Hans Effect or observer-expectancy effect, to describe the influence of experimenters’ unintentional cues on their subjects. The relationship between Hans and von Osten points to the complex mechanisms by which biases find their ways into systems and how people become entangled with the phenomena they study. The story of Hans is now used in machine learning as a cautionary reminder that you can’t always be sure of what a model has learned from the data it has been given.⁶ Even a system that appears to perform spectacularly in training can make terrible predictions when presented with novel data in the world.
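The story of Hans maps neatly onto a failure mode that machine learning researchers now call the “Clever Hans effect”: a model latches onto an incidental cue that tracks the labels in training and collapses when that cue disappears. Below is a minimal, hypothetical sketch of the trap, assuming Python with numpy and scikit-learn (my choice of tools; the book gives no code): a classifier is trained on data in which a spurious “cue” feature shadows the label, then evaluated on novel data where the cue is uninformative.

    # A minimal sketch of the "Clever Hans effect" in machine learning.
    # Hypothetical illustration; not code from the book.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Training data: the genuine task signal is weak, while a spurious
    # cue (like the questioner's unconscious posture shifts) tracks the
    # label almost perfectly.
    y_train = rng.integers(0, 2, n)
    task_signal = y_train + rng.normal(0, 3.0, n)  # noisy, weakly informative
    cue = y_train + rng.normal(0, 0.1, n)          # near-perfect proxy
    X_train = np.column_stack([task_signal, cue])

    # Novel data: the cue is now random, as when Hans's questioner stood
    # far away or did not know the answer.
    y_test = rng.integers(0, 2, n)
    task_signal_test = y_test + rng.normal(0, 3.0, n)
    cue_test = rng.normal(0, 1.0, n)               # uninformative
    X_test = np.column_stack([task_signal_test, cue_test])

    model = LogisticRegression().fit(X_train, y_train)
    print(f"train accuracy: {model.score(X_train, y_train):.2f}")  # near 1.0
    print(f"test accuracy:  {model.score(X_test, y_test):.2f}")    # near chance

The sketch mirrors Pfungst’s own test: remove the unintended signal, and the apparent intelligence collapses.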

This opens a central question of this book: How is intelligence “made,” and what traps can that create? At first glance, the story of Clever Hans is a story of how one man constructed intelligence by training a horse to follow cues and emulate humanlike cognition. But at another level, we see that the practice of making intelligence was considerably broader. The endeavor required validation from multiple institutions, including academia, schools, science, the public, and the military. Then there was the market for von Osten and his remarkable horse—emotional and economic investments that drove the tours, the newspaper stories, and the lectures. Bureaucratic authorities were assembled to measure and test the horse’s abilities. A constellation of financial, cultural, and scientific interests had a part to play in the construction of Hans’s intelligence and a stake in whether it was truly remarkable.

We can see two distinct mythologies at work. The first myth is that nonhuman systems (be it computers or horses) are analogues for human minds. This perspective assumes that with sufficient training, or enough resources, humanlike intelligence can be created from scratch, without addressing the fundamental ways in which humans are embodied, relational, and set within wider ecologies. The second myth is that intelligence is something that exists independently, as though it were natural and distinct from social, cultural, historical, and political forces. In fact, the concept of intelligence has done inordinate harm over centuries and has been used to justify relations of domination from slavery to eugenics.⁷

These mythologies are particularly strong in the field of artificial intelligence, where the belief that human intelligence can be formalized and reproduced by machines has been axiomatic since the mid-twentieth century. Just as Hans’s intelligence was considered to be like that of a human, fostered carefully like a child in elementary school, so AI systems have repeatedly been described as simple but humanlike forms of intelligence. In 1950, Alan Turing predicted that “at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”⁸ The mathematician John von Neumann claimed in 1958 that the human nervous system is “prima facie digital.”⁹ MIT professor Marvin Minsky once responded to the question of whether machines could think by saying, “Of course machines can think; we can think and we are ‘meat machines.’ ”¹⁰ But not everyone was convinced. Joseph Weizenbaum, early AI inventor and creator of the first chatbot program, known as ELIZA, believed that the idea of humans as mere information processing systems is far too simplistic a notion of intelligence and that it drove the “perverse grand fantasy” that AI scientists could create a machine that learns “as a child does.”¹¹

This has been one of the core disputes in the history of artificial intelligence. In 1961, MIT hosted a landmark lecture series titled “Management and the Computer of the Future.” A stellar lineup of computer scientists participated, including Grace Hopper, J. C. R. Licklider, Marvin Minsky, Allen Newell, Herbert Simon, and Norbert Wiener, to discuss the rapid advances being made in digital computing. At its conclusion, John McCarthy boldly argued that the differences between human and machine tasks were illusory. There were simply some complicated human tasks that would take more time to be formalized and solved by machines.¹²

But philosophy professor Hubert Dreyfus argued back, concerned that the assembled engineers “do not even consider the possibility that the brain might process information in an entirely different way than a computer.”¹³ In his later work What Computers Can’t Do, Dreyfus pointed out that human intelligence and expertise rely heavily on many unconscious and subconscious processes, while computers require all processes and data to be explicit and formalized.¹⁴ As a result, less formal aspects of intelligence must be abstracted, eliminated, or approximated for computers, leaving them unable to process information about situations as humans do.

Much in AI has changed since the 1960s, including a shift from symbolic systems to the more recent wave of hype about machine learning techniques. In many ways, the early fights over what AI can do have been forgotten and the skepticism has melted away. Since the mid-2000s, AI has rapidly expanded as a field in academia and as an industry. Now a small number of powerful technology corporations deploy AI systems at a planetary scale, and their systems are once again hailed as comparable or even superior to human intelligence.

Yet the story of Clever Hans also reminds us how narrowly we consider or recognize intelligence. Hans was taught to mimic tasks within a very constrained range: add, subtract, and spell words. This reflects a limited perspective of what horses or humans can do. Hans was already performing remarkable feats of interspecies communication, public performance, and considerable patience, yet these were not recognized as intelligence. As author and engineer Ellen Ullman puts it, this belief that the mind is like a computer, and vice versa, has “infected decades of thinking in the computer and cognitive sciences,” creating a kind of original sin for the field.¹⁵ It is the ideology of Cartesian dualism in artificial intelligence: where AI is narrowly understood as disembodied intelligence, removed from any relation to the material world.

What Is AI? Neither Artificial nor Intelligent

Let’s ask the deceptively simple question, What is artificial intelligence? If you ask someone in the street, they might mention Apple’s Siri, Amazon’s cloud service, Tesla’s cars, or Google’s search algorithm. If you ask experts in deep learning, they might give you a technical response about how neural nets are organized into dozens of layers that receive labeled data, are assigned weights and thresholds, and can classify data in ways that cannot yet be fully explained.¹⁶ In 1978, when discussing expert systems, Professor Donald Michie described AI as knowledge refining, where “a reliability and competence of codification can be produced which far surpasses the highest level that the unaided human expert has ever, perhaps even could ever, attain.”¹⁷ In one of the most popular textbooks on the subject, Stuart Russell and Peter Norvig state that AI is the attempt to understand and build intelligent entities. “Intelligence is concerned mainly with rational action,” they claim. “Ideally, an intelligent agent takes the best possible action in a situation.”¹⁸
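To make that deep-learning description concrete, here is a minimal sketch of such a layered classifier, written with PyTorch as an assumed framework (the book names none): labeled data flows through stacked layers of weights, with nonlinear activations serving as the “thresholds,” and the weights are adjusted until the network classifies the training examples.

    # A small layered neural classifier, sketched with PyTorch.
    # Hypothetical illustration of "layers, weights, and thresholds";
    # not code from the book or any system it describes.
    import torch
    from torch import nn

    # Toy labeled data: 2-D points, labeled 1 if inside the unit circle.
    X = torch.randn(512, 2)
    y = (X.pow(2).sum(dim=1) < 1.0).long()

    # Stacked layers of learned weights, with threshold-like
    # nonlinearities (ReLUs) between them.
    model = nn.Sequential(
        nn.Linear(2, 16),   # weights: 2 inputs -> 16 hidden units
        nn.ReLU(),
        nn.Linear(16, 16),
        nn.ReLU(),
        nn.Linear(16, 2),   # scores for the two classes
    )

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(500):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)  # compare predictions to the labels
        loss.backward()              # compute weight adjustments
        optimizer.step()             # update the weights

    accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
    print(f"training accuracy: {accuracy:.2f}")

Even this toy example shows why the experts’ answer ends where it does: the learned weights classify the data, but they do not come with an explanation of what, exactly, has been learned.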

Each way of defining artificial intelligence is doing work, setting a frame for how it will be understood, measured, valued, and governed. If AI is defined by consumer brands for corporate infrastructure, then marketing and advertising have predetermined the horizon. If AI systems are seen as more reliable or rational than any human expert, able to take the “best possible action,” then it suggests that they should be trusted to make high-stakes decisions in health, education, and criminal justice. When specific algorithmic techniques are the sole focus, it suggests that only continual technical progress matters, with no consideration of the computational cost of those approaches and their far-reaching impacts on a planet under strain.

In contrast, in this book I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.

In this book we’ll explore how artificial intelligence is made, in the widest sense, and the economic, political, cultural, and historical forces that shape it. Once we connect AI within these broader structures and social systems, we can escape the notion that artificial intelligence is a purely technical domain. At a fundamental level, AI is technical and social practices, institutions and infrastructures, politics and culture. Computational reason and embodied work are deeply interlinked: AI systems both reflect and produce social relations and understandings of the world.

It’s worth noting that the term “artificial intelligence” can create discomfort in the computer science community. The phrase has moved in and out of fashion over the decades and is used more in marketing than by researchers. “Machine learning” is more commonly used in the technical literature. Yet the nomenclature of AI is often embraced during funding application season, when venture capitalists come bearing checkbooks, or when researchers are seeking press attention for a new scientific result. As a result, the term is both used and rejected in ways that keep its meaning in flux. For my purposes, I use AI to talk about the massive industrial formation that includes politics, labor, culture, and capital. When I refer to machine learning, I’m speaking of a range of technical approaches (which are, in fact, social and infrastructural as well, although rarely spoken about as such).

But there are significant reasons why the field has been focused so much on the technical—algorithmic breakthroughs, incremental product improvements, and greater convenience. The structures of power at the intersection of technology, capital, and governance are well served by this narrow, abstracted analysis. To understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide. Then we can trace the implications of those choices.

Seeing AI Like an Atlas

How can an atlas help us to understand how artificial intelligence is made? An atlas is an unusual type of book. It is a collection of disparate parts, with maps that vary in resolution from a satellite view of the planet to a zoomed-in detail of an archipelago. When you open an atlas, you may be seeking specific information about a particular place—or perhaps you are wandering, following your curiosity, and finding unexpected pathways and new perspectives. As historian of science Lorraine Daston observes, all scientific atlases seek to school the eye, to focus the observer’s attention on particular telling details and significant characteristics.¹⁹ An atlas presents you with a particular viewpoint of the world, with the imprimatur of science—scales and ratios, latitudes and longitudes—and a sense of form and consistency.

Yet an atlas is as much an act of creativity—a subjective, political, and aesthetic intervention—as it is a scientific collection. The French philosopher Georges Didi-Huberman thinks of the atlas as something that inhabits the aesthetic paradigm of the visual and the epistemic paradigm of knowledge. By implicating both, it undermines the idea that science and art are ever completely separate.²⁰ Instead, an atlas offers us the possibility of rereading the world, linking disparate pieces differently and “reediting and piecing it together again without thinking we are summarizing or exhausting it.”²¹

Perhaps my favorite account of how a cartographic approach can be helpful comes from the physicist and technology critic Ursula Franklin: “Maps represent purposeful endeavors: they are meant to be useful, to assist the traveler and bridge the gap between the known and the as yet unknown; they are testaments of collective knowledge and insight.”²²

Maps, at their best, offer us a compendium of open pathways—shared ways of knowing—that can be mixed and combined to make new interconnections. But there are also maps of domination, those national maps where territory is carved along the fault lines of power: from the direct interventions of drawing borders across contested spaces to revealing the colonial paths of empires. By invoking an atlas, I’m suggesting that we need new ways to understand the empires of artificial intelligence. We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it. These are the shifting tectonics of power in AI. A topographical approach offers different perspectives and scales, beyond the abstract promises of artificial intelligence or the latest machine learning models. The aim is to understand AI in a wider context by walking through the many different landscapes of computation and seeing how they connect.²³

There’s another way in which atlases are relevant here. The field of AI is explicitly attempting to capture the planet in a computationally legible form. This is not a metaphor so much as the industry’s direct ambition. The AI industry is making and normalizing its own proprietary maps, as a centralized God’s-eye view of human movement, communication, and labor. Some AI scientists have stated their desire to capture the world and to supersede other forms of knowing. AI professor Fei-Fei Li describes her ImageNet project as aiming to “map out the entire world of objects.”²⁴ In their textbook, Russell and Norvig describe artificial intelligence as “relevant to any intellectual task; it is truly a universal field.”²⁵ One of the founders of artificial intelligence and early experimenter in facial recognition, Woody Bledsoe, put it most bluntly: “in the long run, AI is the only science.”²⁶ This is a desire not to create an atlas of the world but to be the atlas—the dominant way of seeing. This colonizing impulse centralizes power in the AI field: it determines how the world is measured and defined while simultaneously denying that this is an inherently political activity.

Instead of claiming universality, this book is a partial account, and by bringing you along on my investigations, I hope to show you how my views were formed. We will encounter well-visited and lesser-known landscapes of computation: the pits of mines, the long corridors of energy-devouring data centers, skull archives, image databases, and the fluorescent-lit hangars of delivery warehouses. These sites are included not just to illustrate the material construction of AI and its ideologies but also to “illuminate the unavoidably subjective and political aspects of mapping, and to provide alternatives to hegemonic, authoritative—and often naturalized and reified—approaches,” as media scholar Shannon Mattern writes.²⁷

Models for understanding and holding systems accountable have long rested on ideals of transparency. As I’ve written with the media scholar Mike Ananny, being able to see a system is sometimes equated with being able to know how it works and how to govern it.²⁸ But this tendency has serious limitations. In the case of AI, there is no singular black box to open, no secret to expose, but a multitude of interlaced systems of power. Complete transparency, then, is an impossible goal. Rather, we gain a better understanding of AI’s role in the world by engaging with its material architectures, contextual environments, and prevailing politics and by tracing how they are connected.

My thinking in this book has been informed by the disciplines of science and technology studies, law, and political philosophy and from my experience working in both academia and an industrial AI research lab for almost a decade. Over those years, many generous colleagues and communities have changed the way I see the world: mapping is always a collective exercise, and this is no exception.²⁹ I’m grateful to the scholars who created new ways to understand sociotechnical systems, including Geoffrey Bowker, Benjamin Bratton, Wendy Chun, Lorraine Daston, Peter Galison, Ian Hacking, Stuart Hall, Donald MacKenzie, Achille Mbembé, Alondra Nelson, Susan Leigh Star, and Lucy Suchman, among many others. This book benefited from many in-person conversations and reading the recent work by authors studying the politics of technology, including Mark Andrejevic, Ruha Benjamin, Meredith Broussard, Simone Browne, Julie Cohen, Sasha Costanza-Chock, Virginia Eubanks, Tarleton Gillespie, Mar Hicks, Tung-Hui Hu, Yuk Hui, Safiya Umoja Noble, and Astra Taylor.

As with any book, this one emerges from a specific lived experience that imposes limitations. As someone who has lived and worked in the United States for the past decade, my focus skews toward the AI industry in Western centers of power. But my aim is not to create a complete global atlas—the very idea invokes capture and colonial control. Instead, any author’s view can be only partial, based on local observations and interpretations, in what environmental geographer Samantha Saville calls a “humble geography” that acknowledges one’s specific perspectives rather than claiming objectivity or mastery.³⁰

Just as there are many ways to make an atlas, so there are many possible futures for how AI will be used in the world. The expanding reach of AI systems may seem inevitable, but this is contestable and incomplete. The underlying visions of the AI field do not come into being autonomously but instead have been constructed from a particular set of beliefs and perspectives. The chief designers of the contemporary atlas of AI are a small and homogenous group of people, based in a handful of cities, working in an industry that is currently the wealthiest in the world. Like medieval European mappae mundi, which illustrated religious and classical concepts as much as coordinates, the maps made by the AI industry are political interventions, as opposed to neutral reflections of the world. This book is made against the spirit of colonial mapping logics, and it embraces different stories, locations, and knowledge bases to better understand the role of AI in the world.


Topographies of Computation

How, at this moment in the twenty-first century, is AI conceptualized and constructed? What is at stake in the turn to artificial intelligence, and what kinds of politics are contained in the way these systems map and interpret the world? What are the social and material consequences of including AI and related algorithmic systems into the decision-making systems of social institutions like education and health care, finance, government operations, workplace interactions and hiring, communication systems, and the justice system? This book is not a story about code and algorithms or the latest thinking in computer vision or natural language processing or reinforcement learning. Many other books do that. Neither is it an ethnographic account of a single community and the effects of AI on their experience of work or housing or medicine—although we certainly need more of those.

Heinrich Bünting’s mappa mundi, known as The Bünting Clover Leaf Map, which symbolizes the Christian Trinity, with the city of Jerusalem at the center of the world. From Itinerarium Sacrae Scripturae (Magdeburg, 1581)

Instead, this is an expanded view of artificial intelligence as an extractive industry. The creation of contemporary AI systems depends on exploiting energy and mineral resources from the planet, cheap labor, and data at scale. To observe this in action, we will go on a series of journeys to places that reveal the makings of AI.

In chapter 1, we begin in the lithium mines of Nevada, one of the many sites of mineral extraction needed to power contemporary computation. Mining is where we see the extractive politics of AI at their most literal. The tech sector’s demand for rare earth minerals, oil, and coal is vast, but the true costs of this extraction are never borne by the industry itself. On the software side, building models for natural language processing and computer vision is enormously energy hungry, and the competition to produce faster and more efficient models has driven computationally greedy methods that expand AI’s carbon footprint. From the last trees in Malaysia that were harvested to produce latex for the first transatlantic undersea cables to the giant artificial lake of toxic residues in Inner Mongolia, we trace the environmental and human birthplaces of planetary computation networks and see how they continue to terraform the planet.

Chapter 2 shows how artificial intelligence is made of human labor. We look at the digital pieceworkers paid pennies on the dollar clicking on microtasks so that data systems can seem more intelligent than they are.³¹ Our journey will take us inside the Amazon warehouses where employees must keep in time with the algorithmic cadences of a vast logistical empire, and we will visit the Chicago meat laborers on the disassembly lines where animal carcasses are vivisected and prepared for consumption. And we’ll hear from the workers who are protesting against the way that AI systems are increasing surveillance and control for their bosses.

Labor is also a story about time. Coordinating the actions of humans with the repetitive motions of robots and line machinery has always involved a controlling of bodies in space and time.³² From the invention of the stopwatch to Google’s TrueTime, the process of time coordination is at the heart of workplace management. AI technologies both require and create the conditions for ever more granular and precise mechanisms of temporal management. Coordinating time demands increasingly detailed information about what people are doing and how and when they do it.

Chapter 3 focuses on the role of data. All publicly accessible digital material—including data that is personal or potentially damaging—is open to being harvested for training datasets that are used to produce AI models. There are gigantic datasets full of people’s selfies, of hand gestures, of people driving cars, of babies crying, of newsgroup conversations from the 1990s, all to improve algorithms that perform such functions as facial recognition, language prediction, and object detection. When these collections of data are no longer seen as people’s personal material but merely as infrastructure, the specific meaning or context of an image or a video is assumed to be irrelevant. Beyond the serious issues of privacy and ongoing surveillance capitalism, the current practices of working with data in AI raise profound ethical, methodological, and epistemological concerns.³³

And how is all this data used? In chapter 4, we look at the practices of classification in artificial intelligence systems, what sociologist Karin Knorr Cetina calls the “epistemic machinery.”³⁴ We see how contemporary systems use labels to predict human identity, commonly using binary gender, essentialized racial categories, and problematic assessments of character and creditworthiness. A sign will stand in for a system, a proxy will stand for the real, and a toy model will be asked to substitute for the infinite complexity of human subjectivity. By looking at how classifications are made, we see how technical schemas enforce hierarchies and magnify inequity. Machine learning presents us with a regime of normative reasoning that, when in the ascendant, takes shape as a powerful governing rationality.

From here, we travel to the hill towns of Papua New Guinea to explore the history of affect recognition, the idea that facial expressions hold the key to revealing a person’s inner emotional state. Chapter 5 considers the claim of the psychologist Paul Ekman that there is a small set of universal emotional states that can be read directly from the face. Tech companies are now deploying this idea in affect recognition systems, as part of an industry predicted to be worth more than seventeen billion dollars.³⁵ But there is considerable scientific controversy around emotion detection, which is at best incomplete and at worst misleading. Despite the unstable premise, these tools are being rapidly implemented into hiring, education, and policing systems.

In chapter 6 we look at the ways in which AI systems are used as a tool of state power. The military past and present of artificial intelligence have shaped the practices of surveillance, data extraction, and risk assessment we see today. The deep interconnections between the tech sector and the military are now being reined in to fit a strong nationalist agenda. Meanwhile, extralegal tools used by the intelligence community have now dispersed, moving from the military world into the commercial technology sector, to be used in classrooms, police stations, workplaces, and unemployment offices. The military logics that have shaped AI systems are now part of the workings of municipal government, and they are further skewing the relation between states and subjects.

The concluding chapter assesses how artificial intelligence functions as a structure of power that combines infrastructure, capital, and labor. From the Uber driver being nudged to the undocumented immigrant being tracked to the public housing tenants contending with facial recognition systems in their homes, AI systems are built with the logics of capital, policing, and militarization—and this combination further widens the existing asymmetries of power. These ways of seeing depend on the twin moves of abstraction and extraction: abstracting away the material conditions of their making while extracting more information and resources from those least able to resist.

But these logics can be challenged, just as systems that perpetuate oppression can be rejected. As conditions on Earth change, calls for data protection, labor rights, climate justice, and racial equity should be heard together. When these interconnected movements for justice inform how we understand artificial intelligence, different conceptions of planetary politics become possible.

Extraction, Power, and Politics

Artificial intelligence, then, is an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing; it’s also a manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet. All these things are part of what artificial intelligence is—a two-word phrase onto which is mapped a complex set of expectations, ideologies, desires, and fears.

AI can seem like a spectral force—as disembodied computation—but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.

It’s important for us to contend with these many aspects of artificial intelligence—its malleability, its messiness, and its spatial and temporal reach. The promiscuity of AI as a term, its openness to being reconfigured, also means that it can be put to use in a range of ways: it can refer to everything from consumer devices like the Amazon Echo to nameless back-end processing systems, from narrow technical papers to the biggest industrial companies in the world. But this has its usefulness, too. The breadth of the term “artificial intelligence” gives us license to consider all these elements and how they are deeply imbricated: from the politics of intelligence to the mass harvesting of data; from the industrial concentration of the tech sector to geopolitical military power; from the deracinated environment to ongoing forms of discrimination.

The task is to remain sensitive to the terrain and to watch the shifting and plastic meanings of the term “artificial intelligence”—like a container into which various things are placed and then removed—because that, too, is part of the story.

Simply put, artificial intelligence is now a player in the shaping of knowledge, communication, and power. These reconfigurations are occurring at the level of epistemology, principles of justice, social organization, political expression, culture, understandings of human bodies, subjectivities, and identities: what we are and what we can be. But we can go further. Artificial intelligence, in the process of remapping and intervening in the world, is politics by other means—although rarely acknowledged as such. These politics are driven by the Great Houses of AI, which consist of the half-dozen or so companies that dominate large-scale planetary computation.

Many social institutions are now influenced by these tools and methods, which shape what they value and how decisions are made while creating a complex series of downstream effects. The intensification of technocratic power has been under way for a long time, but the process has now accelerated. In part this is due to the concentration of industrial capital at a time of economic austerity and outsourcing, including the defunding of social welfare systems and institutions that once acted as a check on market power. This is why we must contend with AI as a political, economic, cultural, and scientific force. As Alondra Nelson, Thuy Linh Tu, and Alicia Headlam Hines observe, “Contests around technology are always linked to larger struggles for economic mobility, political maneuvering, and community building.”³⁶

We are at a critical juncture, one that requires us to ask hard questions about the way AI is produced and adopted. We need to ask: What is AI? What forms of politics does it propagate? Whose interests does it serve, and who bears the greatest risk of harm? And where should the use of AI be constrained? These questions will not have easy answers. But neither is this an irresolvable situation or a point of no return—dystopian forms of thinking can paralyze us from taking action and prevent urgently needed interventions.³⁷ As Ursula Franklin writes, “The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power.”³⁸

This book argues that addressing the foundational problems of AI and planetary computation requires connecting issues of power and justice: from epistemology to labor rights, resource extraction to data protections, racial inequity to climate change. To do that, we need to expand our understanding of what is under way in the empires of AI, to see what is at stake, and to make better collective decisions about what should come next.
