SHE100: Joy Buolamwini, The Scientist Who Proved AI Could Be Racist

Owobu Maureen

In the early 2010s, artificial intelligence (AI) was emerging as a transformative force, promising breakthroughs in healthcare diagnostics, autonomous vehicles, and automated decision‑making across industries.

Most engineers and executives saw AI as neutral, a tool guided solely by data and code. But one researcher's discovery at the heart of one of the world's most respected technology laboratories revealed something unsettling: AI can fail people in starkly unequal ways, reflecting and amplifying social bias rather than erasing it.

That researcher was Joy Buolamwini, whose work exposed racist and sexist patterns in facial recognition systems.

Her findings not only advanced academic understanding of algorithmic fairness but also forced major tech companies and policymakers to rethink how AI is developed, tested, and regulated.

A Life Shaped by Movement Between Worlds

Joy Adowaa Buolamwini was born on January 23, 1990, in Edmonton, Alberta, Canada. Her parents were Ghanaian immigrants, and her childhood was marked by frequent movement between cultures and continents. She lived in Ghana; Barcelona, Spain; and various parts of the United States, giving her early exposure to diverse languages, identities, and worldviews.

These early experiences seem to have shaped more than just her adaptability; they seeded a sensitivity to how people of different backgrounds are seen and unseen by dominant systems. This theme would later become central to her work.

Buolamwini’s academic trajectory combined technical rigor with creative exploration. She earned her undergraduate degree in Computer Science at the Georgia Institute of Technology, where she also studied dance. This dual focus, logical computation alongside artistic expression, provided a unique foundation for understanding not just how machines operate but how they interpret the human body and identity.

Image Credit: Wikipedia | Joy Buolamwini at Wikimania 2018 in Cape Town

She was later named a Rhodes Scholar, which allowed her to study at Oxford University. Following that, she pursued graduate work at the Massachusetts Institute of Technology (MIT), where she completed a Master’s degree and a Ph.D. in Media Arts and Sciences at the MIT Media Lab, a hub known for interdisciplinary research blending technology, design, and humanistic inquiry.

The Moment the Code Failed to See Her

The seminal moment that pivoted Buolamwini’s research occurred during her graduate studies at MIT. While experimenting with facial recognition software, systems designed to detect and identify faces in digital images, she observed something puzzling: the technology struggled to detect her face.

Image Credit: AP News

At times, the software failed altogether. Only when she wore a white theatrical mask, which artificially lightened her features, did the system begin to recognize her as a face. This realization was not simply amusing; it was deeply concerning. It suggested the technology was not equally effective for all faces.

This moment, in which a graduate student standing in a lab was counterintuitively more “visible” to the algorithm with a white mask than without one, became the seed for a major research project.

Buolamwini would later describe her discovery as a moment of seeing the “coded gaze,” a term she uses to capture how algorithms perceive the world not as humans do, but through patterns in data that can reflect social inequalities.

Gender Shades: A Comprehensive Audit of AI Vision

Buolamwini’s insight evolved into a rigorous scientific investigation. In 2018, she and co‑researcher Timnit Gebru published a landmark study titled “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.”

This paper was not small or speculative. It systematically evaluated leading commercial facial recognition systems created by companies such as:

  • Microsoft

  • IBM

  • Face++ (a widely used platform from the Chinese tech company Megvii)

The study assessed how accurately these systems classified gender across different combinations of skin tone and gender. To do this rigorously, the researchers developed an evaluation dataset with balanced representation across darker and lighter skin tones.

The results were unequivocal:

  • For lighter‑skinned males, some systems achieved error rates below 1%.

  • For darker‑skinned females, error rates soared as high as 34.7%.

In real terms, this meant that systems widely used for identifying people performed reasonably for lighter‑skinned men but were far less reliable for women with darker skin. The disparities were not subtle; they were statistically significant, consistent, and striking.

Crucially, the study made clear that these were not isolated bugs but structural issues rooted in training data. The systems had been trained on image datasets that lacked diversity: a majority of images featured lighter‑skinned people, particularly men. As a result, the algorithms “learned” to see some types of faces far better than others.
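The core of an intersectional audit like “Gender Shades” is simple to sketch: group a classifier’s predictions by demographic subgroup and compare per‑group error rates. The snippet below is a minimal illustration of that idea; the group labels and predictions are hypothetical placeholders, not data from the actual study.

```python
# Minimal sketch of an intersectional error-rate audit, in the spirit of
# the "Gender Shades" methodology. All records below are illustrative.

from collections import defaultdict

def error_rates_by_group(records):
    """Compute the gender-classification error rate for each
    (skin_tone, gender) subgroup in `records`."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for rec in records:
        group = (rec["skin_tone"], rec["gender"])
        totals[group] += 1
        if rec["predicted_gender"] != rec["gender"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical classifier outputs:
sample = [
    {"skin_tone": "lighter", "gender": "male",   "predicted_gender": "male"},
    {"skin_tone": "lighter", "gender": "male",   "predicted_gender": "male"},
    {"skin_tone": "darker",  "gender": "female", "predicted_gender": "male"},
    {"skin_tone": "darker",  "gender": "female", "predicted_gender": "female"},
]

rates = error_rates_by_group(sample)
print(rates)  # one error rate per (skin tone, gender) subgroup
```

Aggregate accuracy can look excellent while hiding exactly this kind of subgroup gap, which is why the study reported disaggregated numbers rather than a single overall score.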

This wasn’t just a theoretical problem. Facial recognition technology was already being deployed in high‑stakes contexts: airport security, law enforcement, hiring algorithms, and identity verification for financial services.

Algorithmic Justice League: Turning Research Into Advocacy

Buolamwini’s findings might have remained academic if not for her decision to build a platform for broader impact. In 2016, before “Gender Shades” was published, she founded the Algorithmic Justice League (AJL). The organization’s mission is to expose and combat bias in automated systems, not just in research journals but in public discourse, policy, and corporate practice.

AJL uses research, visual media, public speaking, and collaborative events to push for:

  • Transparency in algorithmic systems

  • Inclusive and representative training data

  • Ethical standards in AI development

  • Regulatory oversight where appropriate

Through AJL, Buolamwini bridged the gap between technical discovery and societal impact. The organization participated in academic conferences, testified before governmental bodies, and engaged with journalists to bring complex technical findings into the public arena.

Policy and Regulation

Buolamwini’s research influenced discussions in legislative and regulatory environments. Her work was cited in hearings on Capitol Hill and referenced by civil liberties organizations calling for stricter controls on facial recognition in policing and public surveillance.

Cities like San Francisco and Boston moved to restrict the use of facial recognition by city agencies, citing civil rights concerns; these debates were shaped in part by the empirical evidence Buolamwini helped produce.

Buolamwini also appeared in the documentary Coded Bias, which premiered at the Sundance Film Festival and brought the issue of algorithmic discrimination into popular culture. The film traced the development of facial recognition technologies and highlighted how seemingly neutral code can reflect deep‑rooted social biases.

Logo of the Algorithmic Justice League

Her TED Talk, appearances on major media outlets, and invitations to speak at forums like the World Economic Forum further spread awareness of algorithmic bias as not merely a technical problem but a societal one.

Recognition and Legacy

Buolamwini’s influence has been acknowledged across industries:

  • She has been named to Forbes’ 30 Under 30 list.

  • She appeared on the Bloomberg 50 list of influential figures shaping global trends.

  • The BBC included her in its 100 Women list, recognizing her leadership at the intersection of technology and justice.


Her work is now central to the field known as AI ethics, a domain that includes technical fairness research, public policy, legal frameworks, and social critique.

In 2023, she published Unmasking AI: My Mission to Protect What Is Human in a World of Machines, a book that recounts her journey and explores the implications of bias in automated systems.

Conclusion

Joy Buolamwini did more than reveal a technical flaw in software. She exposed how human biases can become codified into systems that increasingly mediate our lives.

Her work has changed the way industry, policymakers, and the public think about artificial intelligence, not as an impartial oracle but as a human construct that can reflect the inequalities of the world that produced it.

By proving that AI can be racist, she opened a path toward making it fair — not perfect, not unerring, but more representative of the full diversity of human experience.
