For the longest time, algorithms were thought to be objective and neutral. After all, computer code doesn’t have a moral framework or lived experiences that could introduce bias, right? Apparently not. One of the most worrying recent revelations has been the biases built into algorithms, which make them discriminate in a hidden, automatic way that is difficult to recognize. That is the first problem in holding algorithms accountable at all: how do you even spot what’s wrong before you can talk about accountability?
This is where the work of journalists like Julia Angwin becomes really important. Angwin has covered issues like data privacy, surveillance, and algorithms extensively, most recently at ProPublica, where she worked on a series on algorithmic injustice. She is often described as one of the pioneers of a new field of algorithmic accountability journalism. Although she wasn’t personally present at the session, her presentation and the discussion that followed with the moderator, Fabio Chiusi, brought up issues that should worry all of us, not just as journalists but also as users of the platforms that run these algorithms. Chiusi is a freelance journalist who writes about digital democracy and the consequences of innovation.
Angwin started her presentation by discussing two of her investigations in this area, both of which made it more than evident that algorithmic bias should be a serious concern for everyone. The first concerned Compas, a tool used by the criminal justice system in the United States that relies on algorithms to predict the risk of recidivism, that is, the likelihood that a person will commit another crime in the future. Her investigation showed that even when all other factors, such as past records, were held constant, black defendants were 45 percent more likely than white defendants to be given a higher risk score, revealing an inherent racial bias in the algorithm. The results of the second investigation, a study based in Chicago, were just as conclusive: it showed that car insurance prices were higher for equally safe drivers in minority neighbourhoods than in wealthy, white neighbourhoods.
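To make the method concrete, here is a minimal, hypothetical sketch of what a “held constant” analysis of risk scores might look like, in the spirit of the approach described above. It is not Angwin’s or ProPublica’s actual code, and the file name and column names are placeholders invented purely for illustration.

```python
# Hypothetical sketch of a "controlling for other factors" analysis of risk
# scores. The file name and columns (race, sex, age, priors_count, score)
# are placeholders, not real data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("risk_scores.csv")                  # hypothetical dataset
df["high_score"] = (df["score"] >= 5).astype(int)    # assume a 1-10 scale

# Logistic regression: the race coefficient is read with past records,
# age and sex held constant.
model = smf.logit("high_score ~ C(race) + priors_count + age + C(sex)",
                  data=df).fit()
print(model.summary())

# Odds ratios; a value of roughly 1.45 on a race term would correspond to
# being "45 percent more likely" to receive a high score.
print(np.exp(model.params))
```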
The next part of the conversation was, consequently, about the impact of her investigations: whether any changes had been made to the algorithms and, more importantly, who is responsible for holding the platforms that use them to account. Chiusi then brought up the concept of ‘fairness’, which brings ethics and philosophy into the picture: what does a fair algorithm even look like, and are we trying to answer questions about fairness that have been asked for centuries without satisfactory answers? Angwin agreed that the discussions in the aftermath of the investigations often ended up becoming philosophical debates about the definition of fairness. However, she pointed to the possibility of continuing this debate from the perspective of the impact that these algorithms have on people, and of the new data that is now available for research.
Naturally, Facebook came up several times during the course of the discussion. Chiusi mentioned Mark Zuckerberg’s view that Artificial Intelligence will solve problems like hate speech. Interestingly, Angwin mentioned that when she had spoken to people from Facebook while reporting on hate speech, they had been quick to say that AI would not solve the problem.
A big question during the session was what to ask of platforms like Facebook that use these algorithms. The algorithmic code itself is a contested answer: not only is it the intellectual property of those companies, it would also not be of much use to lawmakers or journalists without specialized programming knowledge. “I do think that we can audit the outcomes of these algorithms… I believe that the way to look at these algorithms is to look at what decisions they’re making, and audit those decisions. Are they making the right decisions? Are the decisions they’re making fair?” suggested Angwin. While she remarked that she wasn’t good at solutions, she still offered a few basic, practical ways to make the use of algorithms more accountable. Her first was to fine platforms like Facebook for data breaches, a measure so obvious that one wonders why it hasn’t already been enforced. She also backed independent ethics reviews for the platforms, and suggested that journalists consider educating themselves in basic data analysis so they can report on algorithmic injustice.
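As a rough illustration of what auditing decisions rather than code could look like, the sketch below compares an algorithm’s decision rates and error rates across demographic groups. The data and column names are invented for the example, and this is not a tool Angwin described, just one simple way such an outcome audit can be framed.

```python
# Toy sketch of an outcome audit: given an algorithm's decisions and the
# eventual real-world outcomes, compare how each group is treated.
import pandas as pd

def audit_decisions(df, group_col, decision_col, outcome_col):
    """For each group, report how often people were flagged as high risk,
    and how often those flags turned out to be wrong (false positives)."""
    rows = []
    for group, sub in df.groupby(group_col):
        flagged = sub[decision_col] == 1
        no_reoffence = sub[outcome_col] == 0
        rows.append({
            group_col: group,
            "flag_rate": flagged.mean(),
            "false_positive_rate": (flagged & no_reoffence).sum()
                                   / max(no_reoffence.sum(), 1),
        })
    return pd.DataFrame(rows)

# Invented example data: 1 = flagged high risk / reoffended, 0 = not.
toy = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "decision": [1,   0,   0,   1,   1,   0],
    "outcome":  [0,   0,   1,   0,   1,   0],
})
print(audit_decisions(toy, "group", "decision", "outcome"))
```

A large gap in false positive rates between groups is exactly the kind of disparity an outcome audit is meant to surface, without ever needing access to the proprietary code.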