
5 CASES OF ALGORITHMIC BIAS


In this presentation, we will look at 5 cases of algorithmic bias to understand its causes and consequences. These cases demonstrate how prejudices about gender and race can easily infiltrate Artificial Intelligence programs, reinforcing systems of oppression, perpetuating dominant ideologies and power relations, and affecting the lives of historically marginalized groups. Finally, you'll find a set of questions to think critically about algorithmic bias and discuss its implications.



1. COMPUTATIONAL CRIMINOLOGY


Algorithmic decision-making tools have become popular in the criminal justice system. They promise to shrink budgets, increase the legitimacy of decisions, and address the overload of cases (Završnik, 2021, p. 625). More importantly, these systems are used in bail and parole procedures to assess the likelihood that offenders will reoffend.
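To make the mechanics concrete, here is a minimal sketch of how such a risk assessment tool might score a defendant, assuming a simple logistic regression. The feature names, weights, and thresholds are invented purely for illustration and do not reflect any real tool such as COMPAS.

```python
import math

# Hypothetical feature weights for an illustrative recidivism risk model.
# Real tools use proprietary features and weights; these numbers are
# invented purely to show the mechanics of the scoring step.
WEIGHTS = {
    "prior_arrests": 0.35,
    "age_at_first_arrest": -0.04,
    "failed_appearances": 0.50,
}
BIAS = -1.2

def risk_score(defendant: dict) -> float:
    """Return a pseudo-probability of reoffending via logistic regression."""
    z = BIAS + sum(WEIGHTS[k] * defendant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def risk_band(p: float) -> str:
    """Map the probability onto illustrative low/medium/high bands."""
    return "high" if p >= 0.7 else "medium" if p >= 0.4 else "low"

defendant = {"prior_arrests": 4, "age_at_first_arrest": 19, "failed_appearances": 1}
p = risk_score(defendant)
print(f"risk = {p:.2f} ({risk_band(p)})")  # risk = 0.49 (medium)
```

Even in this toy version, the inputs are proxies: a count of prior arrests reflects policing intensity as much as individual behavior, which is one channel through which bias enters the score.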


2. JOB RECRUITMENT


In the last decade, hiring algorithms have been used to help managers spend less time reading resumes that don't match job requirements. Predictive tools parse and score resumes to assess candidates' competencies in new ways and, ostensibly, to make more objective decisions. More complex hiring algorithms use data science to correlate the performance of existing employees with data from candidates.
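As a rough illustration of the "parse and score" step, the sketch below ranks resumes by how much of a job ad's vocabulary they cover. The job ad, resumes, and scoring rule are all invented; production systems instead learn weights from historical hiring and performance data.

```python
from collections import Counter
import re

def tokens(text: str) -> Counter:
    """Lowercase word counts; a stand-in for real resume parsing."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def match_score(resume: str, job_ad: str) -> float:
    """Fraction of the job ad's vocabulary that the resume covers.

    A toy version of 'parse and score': real systems learn weights
    from past employees' performance data instead of counting words.
    """
    resume_words, ad_words = tokens(resume), tokens(job_ad)
    overlap = sum(min(resume_words[w], c) for w, c in ad_words.items())
    return overlap / max(1, sum(ad_words.values()))

job_ad = "python developer with sql and data pipeline experience"
resumes = {
    "A": "built data pipeline tooling in python and sql",
    "B": "managed retail store staff and inventory",
}
ranked = sorted(resumes, key=lambda r: match_score(resumes[r], job_ad), reverse=True)
print(ranked)  # resumes ordered by keyword coverage: ['A', 'B']
```

The reliance on historical data is exactly where bias enters: Amazon's experimental recruiting tool, trained on a decade of resumes from a male-dominated workforce, learned to penalize resumes containing the word "women's" (Lauret, 2019).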


3. HEALTHCARE ALLOCATION


Algorithms are used in the healthcare sector to improve and standardize decisions made in the delivery of medical care. Medical algorithms help standardize the selection and application of treatment regimens, reduce the potential for introducing errors, and predict outcomes in critical care scoring systems. Advocates argue that using algorithms enables better diagnostics and more sophisticated patient care.
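The sketch below shows the general shape of a point-based critical care score of the kind the paragraph mentions. The vital sign cut-offs and triage bands are invented for illustration and are not clinical values.

```python
# A toy severity score in the spirit of point-based critical-care scoring
# systems. The variables and cut-offs below are invented for illustration,
# not clinical values.

def severity_points(heart_rate: int, systolic_bp: int, resp_rate: int) -> int:
    """Sum points for each vital sign that falls outside a 'normal' band."""
    points = 0
    points += 2 if heart_rate > 120 else 1 if heart_rate > 100 else 0
    points += 2 if systolic_bp < 90 else 1 if systolic_bp < 100 else 0
    points += 2 if resp_rate > 30 else 1 if resp_rate > 22 else 0
    return points

def triage(points: int) -> str:
    """Standardize the treatment decision the paragraph describes."""
    return "ICU review" if points >= 4 else "ward monitoring" if points >= 2 else "routine care"

score = severity_points(heart_rate=118, systolic_bp=92, resp_rate=26)
print(score, triage(score))  # 3 ward monitoring
```

The design choice that matters most is the target the score predicts: Obermeyer et al. (2019) showed that an algorithm using past healthcare costs as a proxy for health needs systematically understated the needs of Black patients, who historically incurred lower costs at the same level of illness.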


4. FACIAL RECOGNITION


Facial recognition is a biometric technology that identifies and verifies a person's identity by mapping the data points of their facial features. It uses AI algorithms to learn how to identify a person, verify them against a single image in a database, and identify them against multiple images (Floyd, 2021). Facial recognition is used for preventing retail crime, finding missing people, smarter advertising, protecting law enforcement, identifying people on social media platforms, and facilitating secure transactions, among other uses (FaceFirst, n.d.).
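The two tasks Floyd distinguishes, verification against a single image and identification against many, can be sketched as comparisons between numeric face embeddings. The embeddings and the 0.8 similarity threshold below are invented; a real system would obtain the vectors from a trained neural network.

```python
import math

# Minimal sketch of 1:1 verification vs. 1:N identification, assuming a
# face has already been mapped to a numeric embedding by some model.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, enrolled, threshold=0.8) -> bool:
    """1:1 verification: does the probe match a single enrolled image?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, database: dict, threshold=0.8):
    """1:N identification: find the best match across many enrolled images."""
    name, embedding = max(database.items(), key=lambda kv: cosine_similarity(probe, kv[1]))
    return name if cosine_similarity(probe, embedding) >= threshold else None

database = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.8, 0.5]}
probe = [0.85, 0.15, 0.45]
print(verify(probe, database["alice"]), identify(probe, database))  # True alice
```

Where the threshold sits, and how accurate the embeddings are across different faces, is where bias shows up: Buolamwini and Gebru (2018) found that commercial gender classifiers performed far worse on darker-skinned women than on lighter-skinned men.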


5. SEARCH ENGINES


Search engines help people find the information they are looking for online from keywords or phrases, using ranking algorithms to order the results. According to Choudhary and Burdak (2012), "ranking algorithms are used by the search engines to present the search results by considering the relevance, importance and content score and web mining techniques to order them according to the user interest" (p. 3). Popular page ranking algorithms include Google's PageRank, Weighted PageRank, and the HITS algorithm, among others.
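Of these, PageRank is the easiest to sketch: a page is important if important pages link to it. The minimal implementation below uses power iteration on an invented four-page link graph; 0.85 is the damping factor commonly cited for PageRank.

```python
# A minimal PageRank sketch using power iteration on a tiny link graph.
# The four-page graph is invented for illustration; dangling pages
# (pages with no outlinks) are not handled in this sketch.

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """Rank pages by the stationary probability of a random surfer."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page keeps a base share, plus a damped share of the rank
        # of every page that links to it.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],  # d links out but nothing links to d
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

Because the ranking is computed from who links to whom (and, in practice, from user behavior), it inherits the biases of the web it crawls; Noble (2018) documents how this plays out in search results about women of color.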


DISCUSSION


After exploring these cases, discuss the following questions:

1. Why is the study of algorithmic bias important for our society? What can we learn about ourselves through it? Instead of perpetuating disparities, how could algorithmic decision-making be used in a positive way?

2. In the design process of algorithms, how could we ensure that they remain objective and transparent? What are the main challenges to achieving this goal?

3. Biases in technology inevitably shape our behavior and our view of reality. How has your life been affected by biased technologies? Can you identify how they have influenced your thinking? How can you undo such conditioning?

4. Biases are a reflection of our ideologies. How could algorithms reflect ideologies that are just? Can you find any examples of this in your experience with technology?

5. We've described cases of algorithmic bias that affect women and people of color. What other minority groups could be endangered by algorithmic bias, and how?

6. Many algorithms are built on patterns derived from correlations in old data. If algorithms rely on machine learning, which means training on biased data from the past, how can we approach creating patterns that are bias-free?

7. What are the underlying forces behind algorithmic bias that might be difficult to break? What type of socioeconomic change would overcoming them require?


REFERENCES


Algorithmic bias. (2022, January). In Wikipedia. https://en.wikipedia.org/wiki/Algorithmic_bias

 

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

 

Barbican Centre (2021). Joy Buolamwini: Examining racial and gender bias in facial analysis software. Google Arts & Culture. https://artsandculture.google.com/story/BQWBaNKAVWQPJg?hl=en

 

Bogen, M. (2019, May 6). All the ways hiring algorithms can introduce bias. Harvard Business Review. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias

 

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR.

 

Choudhary, L., & Burdak, B. S. (2012). Role of ranking algorithms for information retrieval. International Journal of Artificial Intelligence & Applications, 3(4). https://doi.org/10.5121/ijaia.2012.3415

 

Crawford, K. (2021). The Atlas of AI. Yale University Press.

 

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580

 

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.

 

FaceFirst (n.d.). Amazing uses for face recognition. https://www.facefirst.com/blog/amazing-uses-for-face-recognition-facial-recognition-use-cases/

 

Floyd, B. (2021, Oct 20). Algorithmic bias in facial recognition. Keyo. https://www.keyo.co/biometric-news/algorithmic-bias-in-facial-recognition

 

Gawronski, Q. (2019, November 7). Racial bias found in widely used health care algorithm. NBC News. https://www.nbcnews.com/news/nbcblk/racial-bias-found-widely-used-health-care-algorithm-n1076436

 

Google (2022). How Search Algorithms Work. https://www.google.com/search/howsearchworks/algorithms/

 

Hao, K. (2019, January 21). AI is sending people to jail - and getting it wrong. MIT Technology Review. https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/

 

Harcourt, B. (2015). Exposed: Desire and disobedience in the digital age. Harvard University Press.

 

Illing, S. (2018, April 6). How search engines are making us more racist. Vox. https://www.vox.com/2018/4/3/17168256/google-racism-algorithms-technology

 

Lauret, J. (2019). Amazon's sexist AI recruiting tool: How did it go so wrong? Becoming Human. https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e

 

Monster (n.d.). Algorithmic hiring: Complex hiring by numbers? https://hiring.monster.com/resources/recruiting-strategies/workforce-planning/hiring-algorithms/

 

Noble, S. U. (2018). Algorithms of oppression. New York University Press. 

 

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science (American Association for the Advancement of Science), 366(6464), 447-453. https://doi.org/10.1126/science.aax2342

 

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

 

Yong, E. (2018, January 18). A popular algorithm is no better at predicting crimes than random people. The Atlantic. https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/

 

Završnik, A. (2021). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18(5), 623-642. https://doi.org/10.1177/1477370819876762
