By now, you are likely to have heard of the fascinating report (and white paper) released by ProPublica describing the way that risk assessment algorithms in the criminal justice system appear to affect different races differently, and are not particularly accurate in their predictions. Worse still, they are less accurate at predicting outcomes for black subjects than for white ones. Notice that this is a separate problem from ensuring equal outcomes (the disparate impact question): it’s the problem of ensuring equal failure modes as well.
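To make that distinction concrete, here is a minimal sketch in Python, using invented toy data (not anything from the ProPublica analysis), of the two questions side by side: the rate at which each group is flagged as high risk, versus the rate at which those flags are wrong for each group.

```python
# Hypothetical illustration of two different fairness questions:
#   1. Disparate impact: are the groups flagged "high risk" at equal rates?
#   2. Equal failure modes: do the errors (false positives / false negatives)
#      fall on the groups at equal rates?
# In this toy data both groups are flagged at the same rate, yet the flags
# are wrong far more often for group A than for group B.

records = [
    # (group, predicted_high_risk, actually_reoffended) -- invented values
    ("A", True,  False), ("A", True,  False), ("A", False, True), ("A", False, False),
    ("B", True,  True),  ("B", True,  True),  ("B", False, False), ("B", False, False),
]

def rates(group):
    rows = [(p, y) for g, p, y in records if g == group]
    flagged = sum(p for p, _ in rows) / len(rows)          # share labeled "high risk"
    neg = [p for p, y in rows if not y]                     # did not reoffend
    pos = [p for p, y in rows if y]                         # did reoffend
    fpr = sum(neg) / len(neg) if neg else float("nan")      # flagged but did not reoffend
    fnr = sum(not p for p in pos) / len(pos) if pos else float("nan")  # missed reoffenders
    return flagged, fpr, fnr

for g in ("A", "B"):
    flagged, fpr, fnr = rates(g)
    print(f"group {g}: flagged={flagged:.2f}  "
          f"false-positive rate={fpr:.2f}  false-negative rate={fnr:.2f}")
```

Run on this toy data, both groups are flagged half the time, but group A's false-positive rate is much higher: a predictor can pass an equal-outcomes check while failing one group far more often than the other.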
There is much to pick apart in this article, and you should read the whole thing yourself. But from the perspective of research in algorithmic fairness, and how this research is discussed in the media, there’s another very important consequence of this work.
It provides concrete examples of people who have possibly been harmed by algorithmic decision-making.
We frequently talk to reporters about the larger set of questions surrounding algorithmic accountability, and eventually they always ask some version of:
Can you point to anyone who’s actually been harmed by algorithms?
and until now we’ve never been able to point to specific instances. After this article, we can.