Bloomberg profile of Richard Berk

Richard Berk is one of the founding fathers of automated risk assessment, and systems based on his work are being deployed in Pennsylvania and other locations. This Bloomberg profile of him has many interesting (and terrifying) nuggets. As always, you should read the whole thing (if Bloomberg’s horrible page rendering doesn’t trigger a headache), but here are some highlights.

What’s interesting about the system he designed is that it’s optimized for the cost of incarceration rather than for accuracy. In the particular case described in the article, this actually makes the system less harsh, because a finding of a problem triggers expensive therapy. On the other hand, though, there’s a political component: it’s far riskier politically to release someone who might commit a crime than to keep someone incarcerated who might be reformed. As Berk puts it:

The policy position that is taken is that it’s much more dangerous to release Darth Vader than it is to incarcerate Luke Skywalker

The problem of course is that incarcerating Luke Skywalker could turn him into a new Darth Vader, and I don’t know if this is factored into the analysis.
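To make the cost asymmetry concrete, here’s a toy sketch of cost-sensitive decision-making. This is my own illustration, not Berk’s actual model: the function name and the cost numbers are invented purely to show how lopsided costs push the decision threshold down.

```python
# Hypothetical illustration of cost-sensitive detention decisions.
# The costs below are made-up numbers, not values from any real system.

def detain(p_reoffend: float,
           cost_false_release: float = 10.0,  # cost of releasing someone who reoffends ("Darth Vader")
           cost_false_detain: float = 1.0) -> bool:  # cost of detaining someone who wouldn't ("Luke Skywalker")
    """Detain iff the expected cost of release exceeds the expected cost of detention."""
    expected_cost_release = p_reoffend * cost_false_release
    expected_cost_detain = (1 - p_reoffend) * cost_false_detain
    return expected_cost_release > expected_cost_detain

# With a 10:1 cost ratio the threshold drops to 1/11 ≈ 0.09, so even a
# 10% predicted chance of reoffending is enough to trigger detention.
print(detain(0.10))  # True
print(detain(0.05))  # False
```

The point of the sketch is that the policy choice lives entirely in the cost ratio: crank up the penalty for a bad release and the system detains almost everyone with any measurable risk, regardless of how accurate the underlying probabilities are.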

The article also quotes him later:

Berk argues that eliminating sensitive factors weakens the predictive power of the algorithms. “If you want me to do a totally race-neutral forecast, you’ve got to tell me what variables you’re going to allow me to use, and nobody can, because everything is confounded with race and gender,” he said.

This seems a little binary to me. It’s not an either-or where you either have to keep all sensitive attributes or throw them all out. There are ways to quantify and even subtract out the influence of certain problematic attributes without having to throw out all the information: in fact, we have a paper on this!
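As one illustration of the kind of thing that’s possible (a generic sketch, not the specific method from our paper): you can residualize each feature against the protected attribute, which removes its linear association with that attribute while keeping the rest of the information.

```python
# Minimal sketch: "subtract out" the linearly predictable part of each feature
# with respect to a protected attribute, keeping only the residuals.
import numpy as np

def residualize(X: np.ndarray, protected: np.ndarray) -> np.ndarray:
    """Remove the component of each column of X that is linearly
    predictable from the protected attribute (plus an intercept)."""
    Z = np.column_stack([np.ones(len(protected)), protected])  # intercept + protected attribute
    coef, *_ = np.linalg.lstsq(Z, X, rcond=None)               # least-squares fit per feature
    return X - Z @ coef                                        # residuals: X minus its projection on Z

# Toy example: one feature is confounded with the protected attribute.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500).astype(float)
X = np.column_stack([2.0 * protected + rng.normal(size=500),  # confounded feature
                     rng.normal(size=500)])                   # unrelated feature
X_clean = residualize(X, protected)
print(np.corrcoef(X_clean[:, 0], protected)[0, 1])  # ≈ 0: linear association removed
```

This only removes linear dependence, and more careful approaches deal with nonlinear confounding and with combinations of features that jointly encode the attribute; the point is simply that “use everything” and “throw everything out” are not the only two options.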

As the article notes, Berk is heading to Norway:

Berk wants to predict at the moment of birth whether people will commit a crime by their 18th birthday, based on factors such as environment and the history of a new child’s parents. This would be almost impossible in the U.S., given that much of a person’s biographical information is spread out across many agencies and subject to many restrictions. He’s not sure if it’s possible in Norway, either, and he acknowledges he also hasn’t completely thought through how best to use such information.

The idea that data can be collected to make such predictions is certainly alluring. But everything we’re beginning to understand about algorithmic prediction suggests that making such forecasts without any understanding of how the model behaves and why it makes its decisions is a recipe for disaster.

I’ll note that the recidivism predictions typically work 6 months to 2 years out, and are not particularly accurate! Trying to predict 18 years out is rather scary.


Wisconsin Supreme Court decision on COMPAS

We finally have the first legal ruling on algorithmic decision making. This case comes from Wisconsin, where Eric Loomis challenged the use of COMPAS for sentencing him.

While the Supreme Court denied the appeal, it made a number of interesting observations and recommendations:

  • “risk scores may not be considered as the determinative factor in deciding whether the offender can be supervised safely and effectively in the community.”
  • “the following warning must be given to sentencing judges: ‘(1) the proprietary nature of COMPAS has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined; (2) risk assessment compares defendants to a national sample, but no cross-validation study for a Wisconsin population has yet been completed; (3) some studies of COMPAS risk assessment scores have raised questions about whether they disproportionately classify minority offenders as having a higher risk of recidivism; and (4) risk assessment tools must be constantly monitored and re-normed for accuracy due to changing populations and subpopulations.’”

Like Danielle Citron (the author of the Forbes article), I’m a little skeptical that this will be enough. Warning labels on cigarette boxes didn’t really stop people from smoking. But as part of a larger effort to increase awareness of the risks, and to make people stop and think a little before blindly forging ahead with algorithms, this is a decent first step.

At the AINow Symposium in New York (which I’ll say more about later), one proposed extreme along the policy spectrum regarding algorithmic decision-making was to place a moratorium on the use of algorithms entirely. I don’t know if that makes complete sense. But a heavy, heavy dose of caution is definitely warranted, and rulings like this might lead to a patchwork of caveats and speedbumps that help us flesh out exactly where algorithmic decision-making makes more or less sense.