SHE100: Joy Buolamwini, The Scientist Who Proved AI Could Be Racist
In the early 2010s, artificial intelligence (AI) was emerging as a transformative force, promising breakthroughs in healthcare diagnostics, autonomous vehicles, and automated decision‑making across industries.
Most engineers and executives saw AI as neutral, a tool guided solely by data and code. But one researcher’s discovery at the heart of one of the world’s most respected technology laboratories revealed something troubling: AI can fail people in starkly unequal ways, reflecting and amplifying social bias rather than erasing it.
That researcher was Joy Buolamwini, whose work exposed racist and sexist patterns in facial recognition systems.
Her findings not only advanced academic understanding of algorithmic fairness but also forced major tech companies and policymakers to rethink how AI is developed, tested, and regulated.
A Life Shaped by Movement Between Worlds
Joy Adowaa Buolamwini was born on January 23, 1990, in Edmonton, Alberta, Canada. Her parents were Ghanaian immigrants, and her childhood was marked by frequent movement between cultures and continents. She lived in Ghana, Barcelona, Spain, and various parts of the United States, giving her early exposure to diverse languages, identities, and worldviews.
These early experiences seem to have shaped more than just her adaptability; they seeded a sensitivity to how people of different backgrounds are seen and unseen by dominant systems. This theme would later become central to her work.
Buolamwini’s academic trajectory combined technical rigor with creative exploration. She earned her undergraduate degree in Computer Science at the Georgia Institute of Technology, where she also studied dance. This dual focus — logical computation alongside artistic expression — provided a unique foundation for understanding not just how machines operate but how they interpret the human body and identity.
She was later named a Rhodes Scholar, which allowed her to study at Oxford University. Following that, she pursued graduate work at the Massachusetts Institute of Technology (MIT), where she completed a Master’s degree and a Ph.D. in Media Arts and Sciences at the MIT Media Lab, a hub known for interdisciplinary research blending technology, design, and humanistic inquiry.
The Moment the Code Failed to See Her
The seminal moment that pivoted Buolamwini’s research occurred during her graduate studies at MIT. While experimenting with facial recognition software — systems designed to detect and identify faces in digital images — she observed something puzzling: the technology struggled to detect her face.
At times, the software failed altogether. Only when she wore a white theatrical mask, which artificially lightened her features, did the system begin to recognize her as a face. This realization was not simply amusing; it was deeply concerning. It suggested the technology was not equally effective for all faces.
This moment, a graduate student standing in a lab, counterintuitively more “visible” to the algorithm with a white mask than without, became the seed for a major research project.
Buolamwini would later describe her discovery as a moment of seeing the “coded gaze”, a term she uses to capture how algorithms perceive the world not as humans do, but through patterns in data that can reflect social inequalities.
Gender Shades: A Comprehensive Audit of AI Vision
Buolamwini’s insight evolved into a rigorous scientific investigation. In 2018, she and co‑researcher Timnit Gebru published a landmark study titled “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.”
This paper was not small or speculative. It systematically evaluated leading commercial facial recognition systems created by companies such as:
Microsoft
IBM
Face++ (a widely used platform from the Chinese tech company Megvii)
The study assessed how accurately these systems classified gender across subgroups defined by both skin tone and gender. To do this rigorously, the researchers developed an evaluation dataset with balanced representation across darker and lighter skin tones and across genders.
The results were unequivocal:
For lighter‑skinned males, some systems achieved error rates below 1%.
For darker‑skinned females, error rates soared as high as 34.7%.
In real terms, this meant that systems widely used for identifying people performed reasonably well for lighter‑skinned men but were far less reliable for women with darker skin. The disparities were not subtle; they were statistically significant, consistent, and striking.
Crucially, the study made clear that these were not isolated bugs but structural issues rooted in training data. The systems had been trained on image datasets that lacked diversity: a majority of images featured lighter‑skinned people, particularly men. As a result, the algorithms “learned” to see some types of faces far better than others.
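The kind of intersectional audit described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of the technique — computing a classifier’s error rate separately for each (skin tone, gender) subgroup — using invented labels and toy data, not the actual Gender Shades benchmark or its results:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute the classification error rate per (skin_tone, gender) subgroup.

    records: iterable of dicts with keys 'skin_tone', 'gender',
    'true_label', and 'predicted_label'.
    """
    totals = defaultdict(int)  # images seen per subgroup
    errors = defaultdict(int)  # misclassifications per subgroup
    for r in records:
        key = (r["skin_tone"], r["gender"])
        totals[key] += 1
        if r["predicted_label"] != r["true_label"]:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# Toy, invented data. A real balanced benchmark would include many
# images per subgroup; two per subgroup keep the example short.
sample = [
    {"skin_tone": "lighter", "gender": "male",
     "true_label": "male", "predicted_label": "male"},
    {"skin_tone": "lighter", "gender": "male",
     "true_label": "male", "predicted_label": "male"},
    {"skin_tone": "darker", "gender": "female",
     "true_label": "female", "predicted_label": "male"},
    {"skin_tone": "darker", "gender": "female",
     "true_label": "female", "predicted_label": "female"},
]

rates = subgroup_error_rates(sample)
```

The point of disaggregating this way is that a single overall accuracy number can hide exactly the disparities Gender Shades documented: a system can look accurate on average while failing badly on one subgroup.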
This wasn’t just a theoretical problem. Facial recognition technology was already being deployed in high‑stakes contexts: airport security, law enforcement, hiring algorithms, and identity verification for financial services.
Algorithmic Justice League: Turning Research Into Advocacy
Buolamwini’s findings might have remained academic if not for her decision to build a platform for broader impact. In 2016, before “Gender Shades” was published, she founded the Algorithmic Justice League (AJL). The organization’s mission is to expose and combat bias in automated systems, not just in research journals but in public discourse, policy, and corporate practice.
AJL uses research, visual media, public speaking, and collaborative events to push for:
Transparency in algorithmic systems
Inclusive and representative training data
Ethical standards in AI development
Regulatory oversight where appropriate
Through AJL, Buolamwini bridged the gap between technical discovery and societal impact. The organization participated in academic conferences, testified before governmental bodies, and engaged with journalists to bring complex technical findings into the public arena.
Policy and Regulation
Buolamwini’s research influenced discussions in legislative and regulatory environments. Her work was cited in hearings on Capitol Hill and referenced by civil liberties organizations calling for stricter controls on facial recognition in policing and public surveillance.
Cities like San Francisco and Boston moved to restrict the use of facial recognition by city agencies, citing civil rights concerns; these debates were shaped in part by the empirical evidence Buolamwini helped produce.
Buolamwini also appeared in the documentary Coded Bias, which premiered at the Sundance Film Festival and brought the issue of algorithmic discrimination into popular culture. The film traced the development of facial recognition technologies and highlighted how seemingly neutral code can reflect deep‑rooted social biases.
Her TED Talk, appearances on major media outlets, and invitations to speak at forums like the World Economic Forum further spread awareness of algorithmic bias as not merely a technical problem but a societal one.
Recognition and Legacy
Buolamwini’s influence has been acknowledged across industries:
She has been named to Forbes’ 30 Under 30 list.
She appeared on the Bloomberg 50 list of influential figures shaping global trends.
The BBC included her among its 100 Women list, recognizing her leadership at the intersection of technology and justice.
Her work is now central to the field known as AI ethics, a domain that includes technical fairness research, public policy, legal frameworks, and social critique.
In 2023, she published Unmasking AI: My Mission to Protect What Is Human in a World of Machines, a book that recounts her journey and explores the implications of bias in automated systems.
Conclusion
Joy Buolamwini did more than reveal a technical flaw in software. She exposed how human biases can become codified into systems that increasingly mediate our lives.
Her work has changed the way industry, policymakers, and the public think about artificial intelligence, not as an impartial oracle but as a human construct that can reflect the inequalities of the world that produced it.
By proving that AI can be racist, she opened a path toward making it fair — not perfect, not unerring, but more representative of the full diversity of human experience.