
5 CASES OF ALGORITHMIC BIAS

In this presentation, we will look at 5 cases of algorithmic bias to understand its causes and consequences. These cases will demonstrate how prejudices about gender and race can easily infiltrate Artificial Intelligence programs, reinforcing systems of oppression, perpetuating dominant ideologies and power relations, and affecting the lives of historically marginalized groups. Finally, you'll find a set of questions to help you think critically about algorithmic bias and discuss its implications.

DISCUSSION
After exploring these cases, discuss the following questions:
1. Why is the study of algorithmic bias important for our society? What can we learn about ourselves through it? Instead of perpetuating disparities, how could it be used in a positive way?
2. In the design process of algorithms, how can we ensure that they remain objective and transparent? What are the main challenges to achieving this goal?
3. Biases in technology inevitably shape our behavior and our view of reality. How has your life been impacted by biased technologies? Can you identify how they have influenced your thinking? How can you undo such conditioning?
4. Biases are a reflection of our ideologies. How could algorithms reflect ideologies that are just? Can you find any examples of that in your own experience with technology?
5. We've described cases of algorithmic bias that affect women and people of color. What other minority groups could be at risk from algorithmic bias, and how?
6. Many algorithms learn patterns from correlations in historical data. If machine learning necessarily builds on data from the past, and that data is biased, how can we approach creating patterns that are bias-free? (A minimal sketch of this trap follows these questions.)
7. What are the underlying forces behind algorithmic bias that might be difficult to break? What kind of socioeconomic change would breaking them require?
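
To make question 6 concrete (and to illustrate the kind of audit that question 2 calls for), here is a minimal sketch in Python under entirely made-up assumptions: the data is synthetic, the "model" is a deliberately naive lookup table rather than any real hiring system, and names like historical_decision and model_says_hire are hypothetical. It shows only how bias in historical labels survives training, and how a simple demographic-parity check can surface it.

# A minimal, self-contained sketch of the feedback loop in question 6.
# All data is synthetic; the point is only that a model fit to biased
# historical decisions reproduces the bias.
import random
from collections import defaultdict

random.seed(0)

# Synthesize "historical" hiring decisions with a built-in double standard:
# past decision-makers hired group A at skill >= 60 but held group B to a
# stricter bar of skill >= 80. Skill is identically distributed in both groups.
def historical_decision(skill, group):
    return skill >= (60 if group == "A" else 80)

history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    skill = random.uniform(0, 100)
    history.append((skill, group, historical_decision(skill, group)))

# "Train" by memorizing the empirical hire rate per (group, 10-point skill band).
# Any learner fit to these labels can only recover the biased mapping, because
# the bias lives in the target variable itself.
counts = defaultdict(lambda: [0, 0])  # (group, band) -> [hires, applicants]
for skill, group, hired in history:
    key = (group, int(skill // 10))
    counts[key][0] += int(hired)
    counts[key][1] += 1

def model_says_hire(skill, group):
    hires, total = counts[(group, int(skill // 10))]
    return total > 0 and hires / total >= 0.5

# Audit the trained model: compare selection rates on fresh applicants whose
# skill is identically distributed across groups (a demographic-parity check).
applicants = [(random.uniform(0, 100), random.choice(["A", "B"]))
              for _ in range(10_000)]
for g in ("A", "B"):
    pool = [s for s, grp in applicants if grp == g]
    rate = sum(model_says_hire(s, g) for s in pool) / len(pool)
    print(f"group {g}: selection rate {rate:.1%}")
# Prints roughly 40% for group A and roughly 20% for group B: the model has
# learned the historical double standard, not merit.

Because skill is distributed identically across the two synthetic groups, the entire gap in selection rates comes from the biased labels the model was trained on, which is exactly the trap question 6 describes.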