Coded Bias: how AI threatens our civil rights

Dystopian reality — Director Shalini Kantayya discusses her new film shedding light on the urgent threats machine learning poses to individual freedoms and democracy, and what society must do to combat these sinister technologies.

In 2018, Amazon was forced to scrap its AI recruitment tool after discovering one crucial problem: the system did not like women. The algorithm had been trained to vet applicants based on the CVs submitted to the company over the past 10 years, most of which came from men – a reflection of the male-dominated tech industry. As a result, the supposedly ‘neutral’ system taught itself to penalise female applicants.
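
To see how this kind of bias creeps in, consider a toy sketch (the data and feature names below are hypothetical, not Amazon’s actual system): a simple classifier trained on historically skewed hiring decisions learns a negative weight on a proxy for gender, even though gender itself is never an input.

```python
# Toy sketch of learned hiring bias. All data and feature names are
# hypothetical; this is not Amazon's system, just an illustration of
# how a model absorbs bias from skewed historical decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# One genuine signal (a skill score) and one gender proxy, e.g. the
# CV mentions a women's chess club or a women's college.
skill = rng.normal(size=n)
gender_proxy = rng.integers(0, 2, size=n)

# Historical labels: past recruiters favoured men, so CVs with the
# proxy were hired less often, regardless of skill.
hired = (skill - 1.5 * gender_proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, gender_proxy])
model = LogisticRegression().fit(X, hired)

# The 'neutral' model reproduces the historical bias: a large
# negative coefficient on the gender proxy.
print(dict(zip(["skill", "gender_proxy"], model.coef_[0])))
```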

This is just one startling example featured in a new documentary, Coded Bias, illustrating how algorithmic bias can dangerously infiltrate our everyday lives. Often without our knowledge, AI-based systems are making decisions about who gets housing, or a car loan – or a job. There are also the more punitive forms of the technology, from racist facial recognition to predictive policing systems, which target some of the most marginalised sections of society.

Coded Bias follows Joy Buolamwini, a computer scientist at MIT who realised that most facial recognition systems she encountered were more likely to detect her face if she wore a white mask. This initial finding prompted Buolamwini to dig deeper, and to uncover the true extent to which algorithms both uphold and amplify our racist, sexist, capitalist world – research that has since led her to take on tech behemoths such as Amazon and Microsoft. 

Following the UK release of the film earlier this week (April 5), Huck spoke to Coded Bias’s director, Shalini Kantayya, about the extent of unbridled tech in today’s world, building fairer systems and the grassroots organisations leading the fight against sinister technologies. 

What first sparked your interest in this topic? 

To be honest, three years ago, I didn’t know what an algorithm even was. Everything I knew about AI came from the imagination of Steven Spielberg, or Stanley Kubrick, or Ridley Scott, or Terry Gilliam. But my background as a science fiction fanatic didn’t really prepare me for how AI is being used in the now. 

I stumbled down the rabbit hole when I read a book called Weapons of Math Destruction by Cathy O’Neil. I don’t think I had really realised the ways in which algorithms and machine learning are increasingly becoming gatekeepers of opportunity, deciding such important things as who gets hired, who gets healthcare, how long a prison sentence someone may serve…

What I really began to realise is that everything that we love as people in a democracy – access to information, fair elections, fair housing, equal rights – and all of the advances that we’ve made over 50 years of civil rights, could essentially be rolled back by these black-box algorithms, under the guise of these machines being neutral, when they’re not.

Why was it important for you to show in the film the range of ordinary people AI can harm – from a Houston teacher to Brooklyn renters?

I think what was so startling to me in the making of this film is that we’re all vulnerable to the impacts of AI, and that it impacts almost every sector of society. 

Sometimes, we don’t even know when we’ve been denied an opportunity, because an AI gatekeeper has made a decision about us. And I think that’s what’s incredibly frightening – that we don’t know when we’ve been cut out of a job search in the first round, because an AI sorting system like the one Amazon designed has screened us out for a reason that is opaque to us.

I suppose a lot of people still think of ‘privacy’ as something that doesn’t impact them, or that they have nothing to hide… 

I’m someone who very much does not identify with the word ‘privacy’. The phrase that I think is more accurate is ‘invasive surveillance’. It’s the kind of data that companies like Facebook and Google have about us that makes the East German Stasi look like it had a light touch.

Before, I didn’t really understand how complete a psychological profile can be built about us online, and that these systems can, with some degree of confidence, predict things as intimate as our sexuality, even before we’re aware of what our sexuality is.

It can predict our behaviour and market to us in very predatory ways, based on all the things that it knows about us. With the Cambridge Analytica scandal, you can see how this could be a threat to the democratic process and civil rights. And so I think that what I’m concerned about, more broadly, is that we are essentially picking up the tools of authoritarian states, oftentimes with no democratic rules in place.

In the film, you capture the police trial of facial recognition in London – something which is well-documented in the UK. Are these technologies something you’d be able to document as easily in the US? 

I actually had to go to Europe [to film police surveillance], because there is no way that I could have filmed those scenes in the United States; there are no laws that would make that process transparent to me. In the US, police are using facial recognition essentially in secret, with no government oversight.

I’m curious to see if the UK will still have the protections of the General Data Protection Regulation as you exit Europe, but I feel so grateful that that process is made transparent by UK laws. I was able to capture those moments by following the work of Big Brother Watch, and my hat is really off to them for doing the research that we essentially can’t do in the States, because of the wild, wild west that we live in when it comes to data protection.

In Coded Bias, the point is made, both implicitly and explicitly, that places like the US and UK are much more similar to China in terms of surveillance than we might actually think. Why was it important to make that comparison? 

I showed that vignette in China as kind of a Black Mirror episode inside of a documentary. But I think that it’s a reality that’s much closer to us in democracies than we actually know and think about, and we haven’t yet begun to question this invisible nudge of big tech and the way that it is changing behaviour and reshaping society.

I think there’s a part of us that looks at that vignette in China and says, ‘Wow, I could buy a candy bar with my face. How cool would that be?’ And I think that when it comes to technology, we really have to ask ourselves: is the goal of human civilisation to be as efficient as possible, to go as fast as possible? Or is the goal of human civilisation to build a society that honours the inherent value and dignity of every human being? And if it’s the latter, then we need a radically different approach to designing technology than the path we’re on right now.

It is often pointed out that tech’s predominantly white male workforce feeds bias into AI. How do we make sure that the scientists building these systems are drawn from a more diverse pool? 

One of the things that I’ve grown compassion for is that human bias does not live in just a few bad people. It’s not something that exists just in white men. Bias is an innate human condition that we all have, and it is oftentimes unconscious to us. So it becomes absolutely vital and urgent that we build inclusive teams when we’re building technology that is deployed in the world.

I do think campaigns that are grassroots really make a difference. But it’s also a pipeline problem. There is big tech money in the colleges and universities that are teaching computer science. And what gives me hope is that I’m seeing a new generation of computer science students leading the way and saying, ‘I think I need an ethics course’ and ‘I cannot be prepared to design for society if I don’t know anything about society.’

Shortly after the film’s completion, AI ethicist Timnit Gebru, who is featured in the film, was forced out of her role at Google. Does this strengthen the message of the film, or change how people might view Coded Bias?

I will say that the problem is so much bigger than Timnit, in terms of the pattern of big tech perpetually discrediting and dismissing research that shines a light on bias in AI before there is a groundswell of people saying, ‘you need to act on this’.

What is heartening about what happened to Dr. Gebru is that thousands of Google employees staged a virtual walkout and signed a petition, and there have been resignations in the wake of her firing. It strengthens the urgency to tackle these issues, and points to the fact that we cannot trust big tech to regulate itself.

Coded Bias spotlights some of the grassroots organisations leading the fight for change. Did you want to ensure there was a sense of hope underpinning this film? 

It definitely has a fair amount of dystopia. But it is also, I think, ultimately a beautiful film. I make documentaries because they remind me that everyday people can make a difference.

I saw that in the making of this film, for example, with the Brooklyn residents, who not only successfully stopped their landlord from installing invasive surveillance technology that’s proven to be racially biased, but also inspired the first legislation in the state of New York that would allow other tenants to do the same.

Oftentimes, people ask me, ‘What is the future hope for this? Are you optimistic about the technology?’ And I really believe it’s a script we’re all writing together.

What I hope Coded Bias will do is raise awareness, and let us know that we all have a seat at the table to question and to shape the way these technologies that we interact with every day are designed. It’s my deep belief that we have a moonshot moment to call for greater ethics and more humanity in the technologies that will shape and define the future.

Coded Bias is now available to stream on Netflix. Find out more about the film by visiting the official website.
