Algorithms are not impartial

As new technology is shaped by old biases, stereotypes, and prejudices, users must remain vigilant

By Branka Marijan

Joy Buolamwini, now a PhD student at MIT's Center for Civic Media, was an undergraduate when she first encountered a problem with facial recognition software. She was trying to teach a robot to play peek-a-boo, but the robot did not seem to recognize her (Couch 2017). The robot's facial recognition software seemed to detect her colleagues, but not her. Buolamwini needed the help of a roommate to finish her assignment (Buolamwini 2016).

Discriminatory data

As a graduate student several years later, Buolamwini, who is African-American, encountered the problem again. She decided to test the software by putting on a white mask. Then the software recognized her. Buolamwini realized that she was the victim of algorithmic bias. The math-based process or set of rules (the algorithm) used in this machine-learning system reflected implicit human values.

By then, facial recognition technology was entering the mainstream (Lohr 2018) and Buolamwini knew that she had to speak out. She has been at the forefront of discussions on how algorithms can lead to discriminatory practices and why the data used in new technologies must be transparent.

In Buolamwini's case, the software's dataset was predominantly white and male. This is not uncommon. One widely used facial recognition system is built on data that is more than 75 per cent male and 80 per cent white (Lohr 2018). In her research, Buolamwini finds that facial recognition software achieves 99 per cent accuracy when the subject is a white man, but its error rate climbs to 35 per cent when the subject is a woman with darker skin.
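The skew in the underlying data is straightforward to check when demographic labels are available. The short Python sketch below is a hypothetical illustration, not code from any real benchmark; the attribute names and toy numbers are invented to mirror the proportions reported above.

```python
# Hypothetical sketch: tallying the demographic make-up of a labelled face
# dataset. The attribute names and toy numbers are invented for illustration.
from collections import Counter

def composition(records, attribute):
    """Return the share of each value of `attribute` across the records."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# A toy dataset mirroring the skew described above:
# more than 75 per cent male, roughly 80 per cent lighter-skinned.
records = (
    [{"gender": "male", "skin": "lighter"}] * 65
    + [{"gender": "male", "skin": "darker"}] * 12
    + [{"gender": "female", "skin": "lighter"}] * 17
    + [{"gender": "female", "skin": "darker"}] * 6
)

print(composition(records, "gender"))  # {'male': 0.77, 'female': 0.23}
print(composition(records, "skin"))    # {'lighter': 0.82, 'darker': 0.18}
```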

Facial recognition software illustrates only some of the possible problems of biased machine learning systems. A system using a historical dataset, in which certain groups were excluded or particularly targeted, will replicate these biases. Biases can be compounded if the teams doing the coding are not diverse and fail to consider how the software could be used against different members of society.

Consider this: police in the United States are making more use of facial recognition software that was originally used by the military in war zones and to combat terrorism abroad.

Why bias matters

Experts are telling us that the data and mathematical models on which innovative and disruptive technologies are based are not neutral, but are shaped by the views of their creators. Included in these views are some very old prejudices, stereotypes, and structural inequalities.

As mathematician Cathy O'Neil says in her new book, Weapons of Math Destruction, we trust mathematical models too much and are afraid to question the math because we believe we lack the requisite expertise (Chalabi 2016). O'Neil notes that some of the algorithms impacting people's lives are secret and the interests they reflect are hidden. She urges everyone to question how decisions are made and the ways in which they impact certain populations.

Prof. Laura Forlano of the Illinois Institute of Technology points out that algorithms are not impartial. "Rather, algorithms are always the product of social, technical, and political decisions, negotiations and tradeoffs that occur throughout their development and implementation. And, this is where biases, values, and discrimination disappear into the black box behind the computational curtain" (Forlano 2018).

In her 2018 book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks traces how new algorithms are further embedding biases about the poor and putting these vulnerable populations in an ever more precarious position. The political and socioeconomic forces long at play are reinforced by new technologies.

The effects on our security

Bias in justice, military, and security applications is particularly worrisome.

Some U.S. judges use a new system to help them determine if parole should be granted. Already disadvantaged people are being given longer sentences because an algorithm indicates that they have a higher chance of reoffending. An investigation into one such system revealed that it may be biased against minorities (Knight 2017). Similarly, predictive policing, which uses algorithms to determine where and when certain criminal activity will occur, has been shown to replicate racial bias.

[Photo: Joy Buolamwini gives a TED talk on the bias of algorithms. Photograph: TED]

Algorithms and machine learning appeal to developers of new military technologies and weapons. Based on neutral and impartial data, autonomous weapons systems, they claim, will be more responsible and accountable than human soldiers. The argument is that these systems, coded to respect international humanitarian law and protect non-combatants, will improve security for civilians. But no developer should be allowed to hide behind supposedly objective models, even though few companies and governments appear to be willing to deal with algorithmic bias (Knight 2017).

We also can't simply adopt the view of Google AI chief John Giannandrea, who has suggested that algorithmic bias and not killer robots should be of greatest concern to the public (Knight 2017). We know that militaries are interested in developing autonomous systems, and we have no reason to believe that they are dedicated to removing bias. As a society, we can't know precisely how algorithmic bias will be encoded in new weapons systems, but we can be reasonably certain that bias will be present.

How to get algorithmic accountability

Some people are pressing for algorithmic accountability. Buolamwini founded the Algorithmic Justice League to involve the tech community and engaged citizens in identifying bias in different technologies.

Some governments are starting to consider the implications of the latest tech. The Canadian government has conducted several consultations on using AI in governance.

More must be done. Tech companies need to be attuned to bias and held accountable. Yes, it can be difficult for all parties to understand how certain algorithms work and how machine learning systems make certain determinations. But ignorance cannot be used as an excuse: the fallout from a lack of consideration could be too great.

Much can be done to ensure that checks are in place to prevent bias or a badly designed algorithm from being used to make decisions and determinations that impact people's lives. As O'Neil points out, all models can be interrogated for accuracy. Just as we audit and evaluate other products and systems, we must be able to do the same with emerging technologies.
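One concrete form such an audit can take is disaggregated evaluation: reporting a system's accuracy separately for each affected group rather than as a single overall score. The sketch below is a minimal, hypothetical illustration of that check; the group names and toy predictions echo the kind of gap reported in Buolamwini's research but are not real measurements.

```python
# Hypothetical sketch of a disaggregated audit: measure accuracy per group
# instead of one overall number. Groups, labels, and results are invented.
from collections import defaultdict

def accuracy_by_group(examples):
    """examples: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in examples:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

# Toy predictions: near-perfect for one group, one error in three for another.
examples = (
    [("lighter-skinned men", 1, 1)] * 99 + [("lighter-skinned men", 1, 0)] * 1
    + [("darker-skinned women", 1, 1)] * 65 + [("darker-skinned women", 1, 0)] * 35
)

for group, accuracy in accuracy_by_group(examples).items():
    print(f"{group}: {accuracy:.0%}")  # prints 99% and 65%
```

An overall accuracy figure for this toy model would be 82 per cent, which hides the disparity entirely; the per-group breakdown is what makes the bias visible.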

Civil society organizations must pay closer attention to AI use in their respective fields. Ordinary citizens need to be more informed about how decisions that impact their lives are being made. They should have the right to demand that businesses be more transparent about the types of data and algorithms that they use.

And, as security and military uses of artificial intelligence increase, all of us will need to become even more vigilant—about the uses of AI and machine learning and about the existence of bias in new technology applications.

There is still much we can and must do to counter bias, and to regulate and control the new technology. In cases involving weapons systems, a minimal requirement should be that humans control critical decisions, such as the decision to kill. This is a clear moral and ethical imperative.


References

Buolamwini, Joy. 2016. How I’m fighting bias in algorithms. TED Talk, November.

Chalabi, Mona. 2016. Weapons of Math Destruction: Cathy O’Neil adds up the damage of algorithms. The Guardian, October 27.

Couch, Christina. 2017. Ghosts in the Machine. PBS, October 25.

Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

Forlano, Laura. 2018. Invisible algorithms, invisible politics. Public Books, February 1.

Knight, Will. 2017. Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead. MIT Technology Review, October 3.

Lohr, Steve. 2018. Facial recognition is accurate, if you’re a white guy. The New York Times, February 9.

Branka Marijan is a Program Officer with Project Ploughshares.

[email protected]
