Testing algorithmic decision-making in court.

Well, that was quick!

On the heels of the ProPublica article about bias in algorithmic decision-making in the criminal justice system, a lawsuit now before the Wisconsin Supreme Court could mark the first legal determination about the use of algorithmic methods in sentencing.

The first few paragraphs of the article summarize the issue at hand:

When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Mr. Loomis has challenged the judge’s reliance on the Compas score, and the Wisconsin Supreme Court, which heard arguments on his appeal in April, could rule in the coming days or weeks. Mr. Loomis’s appeal centers on the criteria used by the Compas algorithm, which is proprietary and as a result is protected, and on the differences in its application for men and women.

Racist risk assessments, algorithmic fairness, and the issue of harm

By now, you have likely heard of the fascinating report (and white paper) released by ProPublica describing how risk assessment algorithms in the criminal justice system appear to affect different races differently, and are not particularly accurate in their predictions. Worse still, they are less accurate at predicting outcomes for black defendants than for white defendants. Notice that this is a separate problem from ensuring equal outcomes in the sense of disparate impact: it's the problem of ensuring equal failure modes as well.
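To make that distinction concrete, here is a minimal sketch in Python with made-up numbers (not ProPublica's data, and not any real risk tool): two groups can be labeled "high risk" at the same overall rate, so there is no disparate impact in that narrow sense, while the false positive rate, the rate at which people who will not reoffend are wrongly flagged, differs sharply between them.

# Illustrative only: hypothetical numbers, not ProPublica's data.
# The point is that "equal rates of high-risk labels" and
# "equal failure modes" are different fairness criteria.

def false_positive_rate(labels, predictions):
    """Fraction of truly non-reoffending people wrongly labeled high risk."""
    flagged = [p for y, p in zip(labels, predictions) if y == 0]
    return sum(flagged) / len(flagged)

# y = 1 means the person actually reoffended; pred = 1 means "labeled high risk".
group_a = {"y":    [0, 0, 0, 0, 1, 1, 1, 1],
           "pred": [0, 0, 0, 1, 1, 1, 1, 0]}
group_b = {"y":    [0, 0, 0, 0, 0, 0, 1, 1],
           "pred": [0, 1, 1, 1, 0, 0, 1, 0]}

for name, g in (("A", group_a), ("B", group_b)):
    high_risk_rate = sum(g["pred"]) / len(g["pred"])
    fpr = false_positive_rate(g["y"], g["pred"])
    print(f"group {name}: labeled high risk {high_risk_rate:.0%}, "
          f"false positive rate {fpr:.0%}")

# Both groups are labeled high risk 50% of the time, yet group B's false
# positive rate (50%) is twice group A's (25%): equal outcomes on one
# measure do not imply equal failure modes.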


There is much to pick apart in this article, and you should read the whole thing yourself. But from the perspective of research in algorithmic fairness, and how this research is discussed in the media, there’s another very important consequence of this work.

It provides concrete examples of people who have possibly been harmed by algorithmic decision-making. 

We frequently talk to reporters about the larger set of questions surrounding algorithmic accountability, and eventually they always ask some version of:

Can you point to anyone who’s actually been harmed by algorithms?

and so far we have not been able to point to specific instances. Now, after this article, we can.


Algorithmic Fairness at the LSE

In April, I attended (virtually) a workshop organized by the Media Policy Project of the London School of Economics on “Automation, Prediction and Digital Inequalities”. 

As part of the workshop, I was asked to write a “provocation,” which I read there. It was subsequently converted into a post for the MPP’s blog, and here it is.

The case I make here (which I will expand on in the next post) is for trying to develop a mathematical framework for thinking about fairness in algorithms. As a computer scientist, I find this idea second nature, but I recognize that to the larger community of people thinking about fairness in society, the case needs to be argued.

White House Report on Algorithmic Fairness

The White House has put out a report on big data and algorithmic fairness (announcement, full report).  From the announcement:

Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.

The table of contents for the report gives a good overview of the issues addressed:

Big Data and Access to Credit
The Problem: Many Americans lack access to affordable credit due to thin or non-existent credit files.
The Big Data Opportunity: Use of big data in lending can increase access to credit for the financially underserved.
The Big Data Challenge: Expanding access to affordable credit while preserving consumer rights that protect against discrimination in credit eligibility decisions.

Big Data and Employment
The Problem: Traditional hiring practices may unnecessarily filter out applicants whose skills match the job opening.
The Big Data Opportunity: Big data can be used to uncover or possibly reduce employment discrimination.
The Big Data Challenge: Promoting fairness, ethics, and mechanisms for mitigating discrimination in employment opportunity.

Big Data and Higher Education
The Problem: Students often face challenges accessing higher education, finding information to help choose the right college, and staying enrolled.
The Big Data Opportunity: Using big data can increase educational opportunities for the students who most need them.
The Big Data Challenge: Administrators must be careful to address the possibility of discrimination in higher education admissions decisions.

Big Data and Criminal Justice
The Problem: In a rapidly evolving world, law enforcement officials are looking for smart ways to use new technologies to increase community safety and trust.
The Big Data Opportunity: Data and algorithms can potentially help law enforcement become more transparent, effective, and efficient.
The Big Data Challenge: The law enforcement community can use new technologies to enhance trust and public safety in the community, especially through measures that promote transparency and accountability and mitigate risks of disparities in treatment and outcomes based on individual characteristics.

Obama invokes Rawls

In a commencement speech at Howard University, Obama gives an implicit shout-out to Rawls and the Veil of Ignorance:

If you had to choose one moment in history in which you could be born, and you did not know ahead of time who you were going to be, what nationality, what gender, what race, whether you’d be rich or poor, gay or straight, what faith you’d be born into, you would not choose 100 years ago. You would not choose the 1950s, the 1960s, or the 1970s. You would choose right now. If you had to choose a time to be young, gifted, and black in America, you would choose right now.

While I don’t necessarily agree with his conclusion, the fact that he invokes the Veil of Ignorance is what I find interesting.

See the clip here (starts at 12:45)

http://www.c-span.org/video/?409107-1/president-obama-delivers-commencement-address-howard-university&start=704

Keynote at ICWSM

I’m deeply honored that the organizers of the 10th ICWSM (the AAAI conference on weblogs and social media) have invited me to kick off the conference with an opening keynote on May 18. Here’s what I’ll be talking about.

Algorithmic Fairness: From social good to mathematical framework

Machine learning has taken over our world, in more ways than we realize. You might get book recommendations, or an efficient route to your destination, or even a winning strategy for a game of Go. But you might also be admitted to college, granted a loan, or hired for a job based on algorithmically enhanced decision-making. We believe machines are neutral arbiters: cold, calculating entities that always make the right decision, that can see patterns that our human minds can’t or won’t. But are they? Or is decision-making-by-algorithm a way to amplify, extend, and make inscrutable the biases and discrimination that are prevalent in society?

To answer these questions, we need to go back — all the way to the original ideas of justice and fairness in society. We also need to go forward — towards a mathematical framework for talking about justice and fairness in machine learning. I will talk about the growing landscape of research in algorithmic fairness: how we can reason systematically about biases in algorithms, and how we can make our algorithms fair(er).