Friday links

  • The ACLU, together with four researchers in algorithmic accountability, is challenging the CFAA (the Computer Fraud and Abuse Act), arguing that its provisions make it illegal to do the auditing of algorithms necessary to test them for discrimination and bias.
  • The popular word2vec method for learning word embeddings can pick up biased associations, such as linking the word ‘nurse’ with the gender ‘female’. A new paper seeks to fix this problem (a small sketch of how such associations can be probed appears after this list).
  • Diversity in teams that build AI might help the algorithms themselves be less biased.
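
A quick way to see what such biased associations look like is to query a pretrained embedding directly. The sketch below is illustrative only: it assumes gensim and a locally downloaded copy of the pretrained GoogleNews word2vec vectors (the file name is a placeholder), and the exact neighbours returned depend on that model, not on the paper above.

    # Probing a word embedding for gendered associations (illustrative sketch).
    # Assumes gensim and a locally downloaded copy of the GoogleNews vectors.
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin",  # placeholder path to pretrained vectors
        binary=True,
    )

    # Is 'nurse' closer to 'she' than to 'he' in the embedding space?
    print(vectors.similarity("nurse", "she"), vectors.similarity("nurse", "he"))

    # Analogy-style query: doctor - he + she -> ?
    # Biased embeddings tend to return stereotyped completions for queries like this.
    print(vectors.most_similar(positive=["doctor", "she"], negative=["he"], topn=5))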

Testing algorithmic decision-making in court

Well that was quick!

On the heels of the ProPublica article about bias in algorithmic decision-making in the criminal justice system, a lawsuit now before the Wisconsin Supreme Court could mark the first legal determination about the use of algorithmic methods in sentencing.

The first few paragraphs of the article summarize the issue at hand:

When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Mr. Loomis has challenged the judge’s reliance on the Compas score, and the Wisconsin Supreme Court, which heard arguments on his appeal in April, could rule in the coming days or weeks. Mr. Loomis’s appeal centers on the criteria used by the Compas algorithm, which is proprietary and as a result is protected, and on the differences in its application for men and women.

Racist risk assessments, algorithmic fairness, and the issue of harm

By now, you are likely to have heard of the fascinating report (and white paper) released by ProPublica describing how risk assessment algorithms in the criminal justice system appear to affect different races differently, and are not particularly accurate in their predictions. What’s more, they are worse at predicting outcomes for black subjects than for white ones. Notice that this is a separate problem from ensuring equal outcomes (as in disparate impact): it is the problem of ensuring equal failure modes as well.
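
To make “equal failure modes” concrete, here is a small sketch, using invented records rather than ProPublica’s data, of how one might compare false positive and false negative rates across two groups instead of looking only at overall accuracy:

    # Comparing failure modes (false positive / false negative rates) by group.
    # The records below are invented for illustration; they are not ProPublica's data.
    from collections import defaultdict

    # Each record: (group, predicted_high_risk, actually_reoffended)
    records = [
        ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
        ("B", False, True), ("B", True, True), ("B", False, False), ("B", False, True),
    ]

    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1

    for group, c in sorted(counts.items()):
        fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
        fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
        print(f"group {group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

In this toy data both groups see the same overall accuracy (half the predictions are right for each), but the errors fall in opposite directions: one group is mostly mislabeled as high risk, the other mostly mislabeled as low risk. That asymmetry, rather than accuracy alone, is the kind of failure-mode disparity the ProPublica analysis highlights.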

There is much to pick apart in this article, and you should read the whole thing yourself. But from the perspective of research in algorithmic fairness, and how this research is discussed in the media, there’s another very important consequence of this work.

It provides concrete examples of people who have possibly been harmed by algorithmic decision-making. 

We talk to reporters frequently about the larger set of questions surrounding algorithmic accountability, and eventually they always ask some version of:

Can you point to anyone who’s actually been harmed by algorithms?

and until now we’ve never been able to point to specific instances. After this article, we can.

Algorithmic Fairness at the LSE

In April, I attended (virtually) a workshop organized by the Media Policy Project of the London School of Economics on “Automation, Prediction and Digital Inequalities”. 

As part of the workshop, I was asked to write a “provocation”, which I read during the session. This was subsequently converted into a blog post for the MPP’s blog, and here it is.

The case I make here (and will expand on in the next post) is for trying to develop a mathematical framework for thinking about fairness in algorithms. As a computer scientist, I find this idea second nature, but I recognize that, to the larger community of people thinking about fairness in society, this case needs to be argued.
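
To give a sense of what such a framework can look like (these are standard criteria from the fairness literature, offered only as illustrations and not as the specific framework the provocation argues for), a formal definition pins down, for a predictor Ŷ, a true outcome Y, and a protected attribute A, exactly which conditional probabilities must match across groups:

    % Demographic parity: the prediction is independent of the protected attribute A.
    \Pr[\hat{Y} = 1 \mid A = a] \;=\; \Pr[\hat{Y} = 1 \mid A = b] \qquad \text{for all groups } a, b

    % Equal failure modes: error rates match across groups, conditioned on the true outcome Y.
    \Pr[\hat{Y} = 1 \mid A = a, Y = y] \;=\; \Pr[\hat{Y} = 1 \mid A = b, Y = y] \qquad \text{for } y \in \{0, 1\}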