Sandra Wachter

Me, Myself and AI: Privacy, fairness and explainability in the age of algorithms     

In this short piece, Sandra Wachter, Associate Professor and Senior Research Fellow at the Oxford Internet Institute, shares some of her research on AI-driven decision-making.

  1. What was the problem that you were looking to address?

AI-driven decision-making is becoming more commonplace in our society. Algorithms decide who should be admitted to university, who should get insurance or a loan, and who should be sent to prison. AI ranks the search results we see when we browse the web, sorts the posts on our social media feeds, and chooses the news, advertisements, and prices we are shown.

Algorithmic decision-making works by sifting through large amounts of historical data to find patterns and similarities that can be used to predict whether we will get sick, how well we will do at university, and whether we will repay a loan or buy a certain product.
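To make this concrete, here is a minimal sketch, in Python, of the pattern-finding described above: a model is fitted to historical records and then used to score a new case. The features (income and debt) and all data are synthetic and purely illustrative; nothing here reflects any real system discussed in this piece.

```python
# A minimal, hypothetical sketch of prediction from historical data.
# Features (income and debt, in thousands) and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: [income_k, debt_k] and whether the
# applicant repaid a past loan (1 = repaid, 0 = defaulted).
X = rng.normal(loc=[50.0, 10.0], scale=[15.0, 5.0], size=(1000, 2))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(0.0, 10.0, 1000) > 20.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Scoring a new applicant: the prediction comes from patterns in past
# cases that resemble this one, not from an individual assessment.
new_applicant = np.array([[42.0, 12.0]])
print(f"Estimated repayment probability: {model.predict_proba(new_applicant)[0, 1]:.2f}")
```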

Cost and time savings, as well as accuracy, are chief among the advantages for companies and the public sector. AI can also improve the lives of customers by offering them more diverse services and personalised goods.

However, there are also very significant ethical challenges that need to be kept in mind. These algorithms often operate as inscrutable black boxes whose decisions we cannot fully understand. The large datasets needed to make accurate predictions raise issues of privacy as well as discrimination. And while many agree on the nature of these challenges, it is very often assumed that existing laws will be capable of guarding against them.

  2. What was your argument?

After assessing European laws on data protection, non-discrimination, trade secrets, competition, and consumer protection, it became clear that these legal frameworks are not prepared to deal with such new challenges. I was able to show that we have neither sufficient protections against algorithmic decision-making nor a right to understand how decisions are made about us. This is problematic. One of my papers showed that our friends, interests, hobbies, clicks, and likes, or in other words our online presence and behaviours, say more about us than we think. They can reveal sensitive attributes such as sexual orientation, gender, religion, ethnicity, ability, and political opinions or beliefs. Algorithms can infer very sensitive details about us without us ever being aware of what they have learned. Yet, in a different paper, I was also able to show that we have very little control over the information that AI infers about us (inferred data). Finally, in another piece, I demonstrated that the unintuitive and subtle ways in which algorithms discriminate against people risk leaving already oppressed communities, as well as new groups, without legal protection.

  3. Why is this research important?

Following on from my work identifying these legal loopholes, I started to look for solutions. My co-authors and I came up with the idea of ‘counterfactual explanations’, which allow you to understand why a certain decision was made, give you grounds to contest it, and tell you how you would need to change your situation in order to get the desired or better outcome (e.g. receive the loan, be admitted to university). Many companies, including Google and Vodafone, have implemented our work.
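As a toy illustration of the idea (my own sketch, not the method proposed in the paper): a counterfactual explanation points to the smallest change in your circumstances that would have flipped the decision. Below, a hand-written approval rule stands in for the black-box model, and the features are the hypothetical income and debt from the earlier sketch.

```python
# Toy sketch of a counterfactual explanation: the smallest change to one
# feature that flips the decision. The decision rule is hand-written and
# hypothetical; a real system would query the deployed model instead.
import numpy as np

def approve(x):
    # Hypothetical decision rule: approve if income - 2*debt > 20 (thousands).
    return x[0] - 2 * x[1] > 20

def counterfactual(predict, x, names, step=0.5, max_steps=200):
    """Search each feature, one at a time, for the nearest approved point."""
    best = None
    for i in range(len(x)):
        for direction in (+1.0, -1.0):
            x_cf = x.copy()
            for _ in range(max_steps):
                x_cf[i] += direction * step
                if predict(x_cf):
                    dist = abs(x_cf[i] - x[i])
                    if best is None or dist < best[0]:
                        best = (dist, i, x_cf[i])
                    break
    if best is None:
        return "No single-feature counterfactual found."
    _, i, val = best
    return f"If your {names[i]} were {val:.1f} rather than {x[i]:.1f}, you would be approved."

applicant = np.array([42.0, 12.0])  # income 42k, debt 12k -> currently denied
print(counterfactual(approve, applicant, ["income (k)", "debt (k)"]))
```

The output here reads along the lines of "if your debt were 10.5 rather than 12.0, you would be approved", which is exactly the kind of actionable, contestable statement described above.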

Similarly, we wanted to tackle the problem of algorithmic bias and discrimination. We proposed a novel bias test called Conditional Demographic Disparity, which is particularly well suited to detecting unintuitive bias and heterogeneous, minority-based, and intersectional discrimination, whilst at the same time allowing contextual, case-by-case interpretation of the law. Most recently, in March 2021, I published new work explaining how to prevent biased decision-making depending on the jurisdiction, application, and sector. Amazon has decided to implement our bias test in its bias toolkit SageMaker Clarify, which is available to all customers of Amazon Web Services. This is tremendously exciting. I am delighted by these great steps towards greater algorithmic accountability: making AI explainable and preventing biased outcomes.
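As a rough sketch of the metric, based on its publicly documented formulation: demographic disparity (DD) compares a group's share of rejected outcomes with its share of accepted outcomes, and Conditional Demographic Disparity averages DD within the strata of a conditioning attribute (here, hypothetically, the department applied to), weighted by stratum size. Conditioning matters because an apparent overall disparity can stem from group members disproportionately applying to more competitive strata, as in the well-known Berkeley admissions example. The data below is synthetic.

```python
# Minimal sketch of Conditional Demographic Disparity (CDD): per-stratum
# demographic disparity, averaged with weights proportional to stratum size.
# The toy admissions data is entirely synthetic.
import pandas as pd

def demographic_disparity(df, group, outcome, protected_value):
    """DD = P(protected | rejected) - P(protected | accepted)."""
    rejected = df[df[outcome] == 0]
    accepted = df[df[outcome] == 1]
    p_rej = (rejected[group] == protected_value).mean() if len(rejected) else 0.0
    p_acc = (accepted[group] == protected_value).mean() if len(accepted) else 0.0
    return p_rej - p_acc

def conditional_demographic_disparity(df, group, outcome, condition, protected_value):
    """Size-weighted average of DD across the strata of the conditioning attribute."""
    total = len(df)
    return sum(
        len(stratum) / total * demographic_disparity(stratum, group, outcome, protected_value)
        for _, stratum in df.groupby(condition)
    )

# Synthetic applications: gender, department applied to, admitted (1) or not (0).
df = pd.DataFrame({
    "gender":     ["f", "f", "f", "f", "m", "m", "m", "m", "f", "m"],
    "department": ["A", "A", "B", "B", "A", "A", "B", "B", "B", "A"],
    "admitted":   [1,   1,   0,   0,   1,   0,   0,   1,   0,   1],
})

print("DD  (unconditioned):", demographic_disparity(df, "gender", "admitted", "f"))
print("CDD (by department):", conditional_demographic_disparity(df, "gender", "admitted", "department", "f"))
```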

  4. What further questions need to be addressed?

Going forward, interdisciplinary fundamental work and creative solutions will be more important than ever. The versatile uses of AI are penetrating almost every sector and legal domain. The way we understand data protection, non-discrimination, competition, human rights, contract and tort law, to name a few, will change dramatically. In the future, it will be critical for the tech community to understand the legal and ethical implications of their work. At the same time, the legal community needs to understand how technology works in order to govern it effectively. If we can break down the silos between these disciplines, I believe we can create technologies that benefit individuals and society while respecting fundamental rights and freedoms.