At the end of a tumultuous year for both her professional life and her country of ancestry, Dr Timnit Gebru decided to do more than complain about the impact that technology was having on political discourse: she established an institute specifically to address the harms that artificial intelligence (AI) causes to marginalised groups.
Not everyone would have the courage to take on a company so large that its name is a verb — over both its human resources record and its policies — but in a year in which the choices of tech companies have dominated political discourse, it is one of the more urgent questions of our time, and Gebru is on it.
Through her public criticism of Google, Gebru has highlighted an important and developing debate in tech policy research. Her public confrontations online with senior management at Google underscore how tech platforms that claim to encourage research on their own systems end up producing hackneyed and partial accounts, because they are unwilling to allow that work to stand up to genuine, rigorous academic scrutiny.
This is the challenge faced by researchers trying to understand the impact that algorithms are having on the way we receive, consume and respond to political information curated by proprietary AI models: How can we truly understand the impact that technology is having on our public sphere if the tech companies won’t let anyone see what’s under the bonnet?
Gebru could easily have chosen to focus her complaints on her own experience of Google's dismal human resources responses. Instead, she has remained steadfast in defending her research (conducted in partnership with Emily Bender) and in holding Big Tech accountable to marginalised populations.
The Distributed Artificial Intelligence Research Institute has received financial backing from some of the largest philanthropic organisations in the world, and it promises to shake up the practice of throwing AI at all of the world's problems even where there is no clear evidence that it works.
But Gebru is also challenging her constructive dismissal at Google, where she says she was forced out for showing that the company's facial recognition AI was loaded with racial biases that were harming people of colour broadly and black people specifically. Google didn't just dispute her research; it punished her for it, pushing her out of the company because she stood by her findings and their implications. This is also part of a wider conversation about how technology companies in the West can be a hostile environment for women and people of colour.
Gebru has also emerged as a vocal critic of the war in Ethiopia, particularly because the technologies she criticises are at the centre of the accusations of malpractice levelled against social media companies for their inability to rein in hate speech. For these comments, she has received a barrage of attacks from supporters and defenders of the various parties to the conflict.
This once again highlights how choices that may seem abstract within the tech companies — to allow certain kinds of speech to go unmonitored, to underinvest in the systems that keep online users safe — can have real-world effects halfway around the world. One hopes that with her new institute, Gebru’s voice will only grow louder and clearer in defence of the voices that technology routinely ignores.