Wisconsin Supreme Court decision on COMPAS

We finally have the first legal ruling on algorithmic decision making. This case comes from Wisconsin, where Eric Loomis challenged the use of COMPAS in his sentencing.

While the Wisconsin Supreme Court denied the appeal, it made a number of interesting observations and recommendations:

  • “risk scores may not be considered as the determinative factor in deciding whether the offender can be supervised safely and effectively in the community.”
  • “the following warning must be given to sentencing judges: ‘(1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; and (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations.’”

Like Danielle Citron (the author of the Forbes article), I’m a little skeptical that this will be enough. Warning labels on cigarette boxes didn’t really stop people from smoking. But I think that, as part of a larger effort to increase awareness of the risks and to make people stop and think a little before blindly forging ahead with algorithms, this is a decent first step.

At the AINow Symposium in New York (which I’ll say more about later), one proposed extreme along the policy spectrum for algorithmic decision-making was a complete moratorium on the use of algorithms. I don’t know if that makes complete sense. But a heavy, heavy dose of caution is definitely warranted, and rulings like this might lead to a patchwork of caveats and speed bumps that helps us flesh out exactly where algorithmic decision making makes more or less sense.

 


Testing algorithmic decision-making in court

Well that was quick!

On the heels of the ProPublica article about bias in algorithmic decision-making in the criminal justice system, a lawsuit now before the Wisconsin Supreme Court could mark the first legal determination about the use of algorithmic methods in sentencing.

The first few paragraphs of the article summarize the issue at hand:

When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Mr. Loomis has challenged the judge’s reliance on the Compas score, and the Wisconsin Supreme Court, which heard arguments on his appeal in April, could rule in the coming days or weeks. Mr. Loomis’s appeal centers on the criteria used by the Compas algorithm, which is proprietary and as a result is protected, and on the differences in its application for men and women.

An FDA for algorithms?

Andrew Tutt put out an interesting position paper where he argues that we need the equivalent of an FDA for algorithms. The paper

explains the diversity of algorithms that already exist and that are soon to come. In the future most algorithms will be “trained,” not “designed.” That means that the operation of many algorithms will be opaque and difficult to predict in border cases, and responsibility for their harms will be diffuse and difficult to assign. Moreover, although “designed” algorithms already play important roles in many life-or-death situations (from emergency landings to automated braking systems), increasingly “trained” algorithms will be deployed in these mission-critical applications.

It’s an interesting argument. Two things that come to mind when I think about this:

  • The FDA ultimately still deals with drugs that operate on the body. I feel that algorithms that apply across multiple domains will require much more varied domain expertise, and it might be hard to do this within a single agency.
  • A regulatory agency is slow. The FDA has been slow to react to the demands of personalized medicine, especially for rare diseases where the normal expectations of drug protocols might not be possible to achieve. How would a regulatory agency be nimble enough to adjust to the even more rapidly changing landscape of algorithm design?
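
To make the paper’s “trained, not designed” distinction concrete, here is a small, hypothetical contrast (assuming scikit-learn is available; the data and thresholds are made up): the designed rule’s behavior at border cases can be read straight off the code, while the trained rule’s behavior depends entirely on whatever splits the training data happened to produce.

```python
# "Designed": an explicit rule whose border-case behavior is legible.
def designed_brake(distance_m, speed_mps):
    # Brakes whenever the gap is under two seconds of travel; the border
    # case (distance_m == 2 * speed_mps) is right there in the source.
    return distance_m < 2.0 * speed_mps

# "Trained": the rule is whatever falls out of the data.
from sklearn.tree import DecisionTreeClassifier

X = [[50, 10], [40, 25], [9, 5], [8, 20], [30, 15], [3, 2]]  # [distance_m, speed_mps]
y = [0, 1, 1, 1, 0, 1]                                       # 1 = brake
trained_brake = DecisionTreeClassifier().fit(X, y)

# Near the decision boundary, the learned splits -- not any written-down
# policy -- determine the outcome, and they shift whenever the training
# data changes. That opacity is what the paper argues a regulator must confront.
print(trained_brake.predict([[20, 10]]))
```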

Fairness: The view from abroad

Research in algorithmic fairness is inextricably linked to the legal system. Certain approaches that might seem algorithmically sound are illegal, and other approaches rely on specific legal definitions of bias.

This means that it’s hard to do research that crosses national boundaries. Our work on disparate impact is limited to the US. In fact, the very idea of disparate impact appears to be US-centric.

Across the ocean, in France, things are different, and more complicated. I was at the Paris ML meetup organized by the indefatigable Igor Carron, and heard a fascinating presentation by Pierre Saurel.

I should say ‘read’ instead of ‘heard’. His slides were in English, but the presentation itself was in French. It was about the ethics of algorithms, as seen by the French judicial system, and was centered around a case where Google was sued for defamation as a result of the autocomplete suggestions generated during a partial search.

Google initially lost the case, but the ruling was eventually overturned by the French Cour de Cassation, the final court of appeals. In its judgment, the court argued that algorithms are by definition neutral and cannot exhibit any sense of intention, and that Google therefore can’t be held responsible for the results of automatic, algorithm-driven suggestions.

This is a fine example of defining the problem away: if an algorithm is neutral by definition, then it cannot demonstrate bias. Notice how the idea of disparate impact gets around this by thinking about outcomes rather than intent.
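
To make “outcomes rather than intent” concrete, here is a minimal sketch (in Python, with made-up numbers) of the kind of outcome-based test that disparate impact relies on. The 0.8 threshold follows the EEOC’s four-fifths rule; the data and function names are purely illustrative.

```python
# Hypothetical illustration: measuring disparate impact from outcomes alone.
# We never ask what the decision-maker "intended" -- only at what rates each
# group receives the favorable outcome.

def disparate_impact_ratio(outcomes, groups, favorable=1, protected="minority"):
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    rate_prot = sum(o == favorable for o in prot) / len(prot)
    rate_rest = sum(o == favorable for o in rest) / len(rest)
    return rate_prot / rate_rest

# Toy data: 1 = favorable decision, 0 = unfavorable.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
groups = ["minority"] * 5 + ["majority"] * 5

ratio = disparate_impact_ratio(outcomes, groups)
# The EEOC "four-fifths" rule flags a ratio below 0.8 as evidence of
# disparate impact, regardless of the algorithm's (or anyone's) intent.
print(f"ratio = {ratio:.2f}, flagged = {ratio < 0.8}")  # ratio = 0.50, flagged = True
```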

But a consequence of this ruling is that bringing cases of algorithmic bias in French courts will now be much more difficult.

The jury is still out on this issue across the world. In Australia, Google was held liable for search results that pointed to defamatory content: in this case, an algorithm was producing the results, but the company was still viewed as liable.

 

Should algorithms come under the purview of FOIA?

Nick Diakopoulos studies computational and data journalism, and has long been concerned about algorithmic transparency as an aid to journalism. In the link above, he points to a case in Michigan where the city of Warren was being sued to reveal the formula it used to calculate water and sewer fees.

Thinking about FOIA (Update: the Freedom of Information Act) for algorithms (or software) brings up all kinds of interesting issues, legal and technical:

  • Suppose we do require that the software be released. Couldn’t it just be obfuscated so that we can’t really tell what it’s doing, except as a black box?
  • Suppose we instead require that the algorithm be released. What if it’s a learning algorithm that was trained on some data? If we release the final trained model, that might tell us what the algorithm is doing, but not why.
  • Does it even make sense to release the training data (as Sorelle suggests)? What happens if the algorithm is constantly learning (like an online learning algorithm)? Would we then need to timestamp the data so we can roll back to whichever version is under litigation? (This last suggestion was made by Nick in our Twitter conversation; see the sketch after this list.)
  • But suppose the algorithm instead makes use of reinforcement learning, and adapts in response to its environment. How on earth can we capture the entire environment used to influence the algorithm?
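
As a rough illustration of the timestamping idea above, here is a sketch (all class and function names are hypothetical) of an online learner whose training examples are logged as they arrive, so that the exact model in force at the moment of a contested decision can be reconstructed later.

```python
import json
import time

class AuditLog:
    """Append-only log of training examples, timestamped at ingestion."""
    def __init__(self, path):
        self.path = path

    def record(self, features, label):
        entry = {"t": time.time(), "x": features, "y": label}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def replay_until(self, cutoff):
        """Yield every example ingested at or before `cutoff`, in order."""
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                if entry["t"] > cutoff:
                    break
                yield entry["x"], entry["y"]

class OnlineModel:
    """A toy online learner: least-mean-squares updates on a linear model."""
    def __init__(self, dim):
        self.w = [0.0] * dim

    def update(self, x, y, lr=0.1):
        pred = sum(wi * xi for wi, xi in zip(self.w, x))
        err = y - pred
        self.w = [wi + lr * err * xi for wi, xi in zip(self.w, x)]

def model_as_of(log, dim, cutoff):
    """Rebuild the model exactly as it stood when the contested decision was made."""
    model = OnlineModel(dim)
    for x, y in log.replay_until(cutoff):
        model.update(x, y)
    return model

# Usage: log examples as they stream in, then roll back for litigation.
log = AuditLog("training_log.jsonl")
log.record([1.0, 0.0], 1.0)
log.record([0.0, 1.0], 0.0)
decision_time = time.time()
model = model_as_of(log, dim=2, cutoff=decision_time)
print(model.w)
```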

If we replaced ‘algorithm’ with ‘human’, none of these requirements would make sense. When deciding whether a human decision-maker erred in some way, we don’t need to know their life story and life experiences. So we shouldn’t need to know all of this for an algorithm either.

But a human can document their decision-making process in a way that’s interpretable by a court. Maybe that’s what we need to require from an algorithmic decision-making process.
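
For instance, one could imagine requiring a scoring system to emit a structured, court-readable rationale alongside every decision. A minimal sketch, with purely illustrative field names and weights:

```python
# Hypothetical sketch: every decision carries its own documentation, analogous
# to a human decision-maker's written notes.

def score_and_explain(features, weights, threshold=0.5):
    """Return a decision plus a per-factor breakdown a court could examine."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "decision": "high risk" if score >= threshold else "low risk",
        "score": round(score, 3),
        "threshold": threshold,
        "factor_contributions": contributions,  # the documented "reasoning"
    }

record = score_and_explain(
    features={"prior_offenses": 2, "age_at_first_arrest": 0.3},
    weights={"prior_offenses": 0.2, "age_at_first_arrest": -0.1},
)
print(record)  # {'decision': 'low risk', 'score': 0.37, ...}
```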

Code, Speech and Action

Neil Richards writes in Technology Review about Apple’s attempt to defend itself in the San Bernardino iPhone case by claiming protection under the First Amendment (i.e., code = speech).

As Kate Crawford points out, the danger of this approach is that equating code with speech could allow algorithms to act in discriminatory ways under First Amendment protection.

It seems to me that a key distinction here is speech vs. action. Algorithms that make decisions might be viewed as “acting,” and there’s no First Amendment protection for actions that discriminate. But then again, I’m not a lawyer.

Dominique Cardon on algorithmic fairness

Issues of fairness in algorithms are not limited to the US. There is a robust research effort in many parts of Europe on issues of fairness and discrimination. The regulatory regimes and legal frameworks are different in Europe, which makes the discussions quite different. But the concerns are universal.

The linked article (in French) is an interview with Dominique Cardon, a French sociologist who’s written a new book on algorithms and big data. Since I don’t speak French, my understanding of the interview is limited to the translation. However, he makes a number of interesting points about our new algorithmic world:

  • The effect of algorithmic governance in our world cannot be thought of in terms of traditional controls like censorship. It’s more of a bottom-up, ‘nudge’-based system that constructs elaborate and invisible rewards. To use the somewhat trite but still-useful analogy, it’s not 1984 but Brave New World, and it is all the more insidious for it.
  • We can’t disengage from an algorithmic world: the whole “if you don’t like being watched, don’t go on Facebook” argument is rapidly losing credibility, because eventually we will have no choice but to participate and contribute to the floods of data being generated. This makes “opening the black box” even more crucial.

Not surprisingly, I was happy to hear his points about fairness in algorithms: specifically,

You cannot ask an algorithm to be “neutral.” However, it must be “fair.” […] And for that, it is useful for researchers and civil society to create instruments of verification and control.