Everything you need to know about evidence-based sentencing

Moneyballing Justice: “Evidence-Based” Criminal Reforms Ignore Real Evidence

There are many issues with so-called evidence-based sentencing reforms – from the lack of basic statistical validity to the lack of transparency to their discriminatory impact – and this article surveys all of them, with detailed links for much more information. Here’s some of the high-profile criticism these methods have received:

Attorney General Eric Holder has warned that use of predictive data in sentencing is likely to adversely affect communities of color. University of Michigan legal scholar Sonja Starr explains that risk scores are based primarily or wholly on an individual’s prior characteristics, including criminal history – some instruments include not only convictions, but arrests and failure to appear in court. Other allegedly criminogenic factors “unrelated to conduct” often include homelessness, “unemployment, marital status, age, education, finances, neighborhood, and family background, including family members’ criminal history.” Starr asserts that because poor people and people of color bear the brunt of mass incarceration, “[p]unishment profiling will exacerbate these disparities.”

Yet some proponents still seem to have missed the point, including the folks behind the Public Safety Assessment – Court from the Laura and John Arnold Foundation:

Importantly, because it does not rely on factors like neighborhood or income, the PSA-Court is helping deliver these results without discriminating on the basis of race or gender.

That might be true, or it might not be. Neighborhood and income certainly aren’t the only possible attributes correlated with race and gender. The group wouldn’t release the list of nine attributes when the authors of this article asked, so the claim isn’t verifiable. Yet the title of the article about their tool is “Data and Research for Increased Safety and Fairness.” If they really care about fairness, I hope they release their data, or at least the methodology by which they determined the tool isn’t discriminatory.

More Facebook nymwars

Say my name: Facebook’s unfair “real names” policy continues to harm vulnerable users

Another community is hit by the Facebook nymwars. We don’t know for sure that Facebook is using an algorithm to evaluate whether users’ names are real or fake, but given the large number of users involved, it seems likely. So:

Dear Facebook – please evaluate your “fake” name recognition algorithm using class-conditioned error measures. Finding that it works well on most users may still mean that it works terribly for most users who are drag queens.
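As a sketch of what that evaluation would look like (assuming nothing about Facebook’s actual system; the labels and group assignments below are invented for illustration), compute the error rate per subgroup rather than one aggregate number:

```python
from collections import defaultdict

def class_conditioned_error_rates(y_true, y_pred, groups):
    """Error rate computed separately for each subgroup, instead of one
    aggregate number over all users."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / counts[g] for g in counts}

# Invented example: 0 = accepted as "real", 1 = flagged as "fake".
# Every one of these 100 names is actually real (y_true is all zeros).
y_true = [0] * 100
y_pred = [0] * 95 + [1] * 5                 # the classifier flags 5 users
groups = ["majority"] * 95 + ["drag queen"] * 5
print(class_conditioned_error_rates(y_true, y_pred, groups))
# {'majority': 0.0, 'drag queen': 1.0} -- 95% accurate overall,
# 100% wrong for the drag queens.
```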

Facebook, nymwars, and real names

Last year, Moritz Hardt wrote this in his article on fairness:

Suppose a social network attempted to classify user names into ‘real’ and ‘fake’. Anybody still remember Nymwars? White male American names are pretty straightforward to deal with compared with ethnic names. In some ethnic groups, names tend to be far more diverse. Fewer people (if any) carry the same name—a typical sign of a ‘fake’ profile among white Americans.

The lesson is that statistical patterns that apply to the majority might be invalid within a minority group.
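One hypothetical way to make Hardt’s point concrete: suppose the detector flags names that few other accounts share. A toy simulation (all numbers invented) shows how a cutoff tuned on the majority misfires on a group with more diverse names:

```python
import random
from collections import Counter

random.seed(0)

# Toy populations: majority names are drawn from a small shared pool,
# minority names are far more diverse.
majority = [f"common-{random.randrange(50)}" for _ in range(10_000)]
minority = [f"rare-{random.randrange(5_000)}" for _ in range(1_000)]
freq = Counter(majority + minority)

def looks_fake(name, cutoff=3):
    # Heuristic tuned on the majority: a rarely shared name looks "fake".
    return freq[name] < cutoff

for label, names in [("majority", majority), ("minority", minority)]:
    rate = sum(looks_fake(n) for n in names) / len(names)
    print(f"{label}: {rate:.1%} flagged")
# Every name here belongs to a real person, yet the minority group is
# flagged at a far higher rate, because the "rare name means fake" pattern
# only holds within the majority.
```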

Last month, this happened on Facebook:

My mother’s is Lone Hill, my father’s is Lone Elk. Even though word wise they are very different, the meanings are worlds apart as to how they were both given. And both sides of my family carry the name with pride. We also still practice the ceremony of individual name giving and I have often included my Lakota name in the parenthesis or nickname option on facebook. That name is Oyate Wachinyanpi, given to me by my father, meaning People Depend On. My children each have their own individual Lakota names as do my brother and his children, all given to us by our father, grandfather.

That being said Facebook shut me out for using my father’s and my mother’s last names. I switched it back to my mother’s last name and they let me sign on for a few hours, then shut me back out again when I was trying to comment. When I tried to log back in same message as before except they wanted proof of ID. To date I have sent 3 forms of ID, one with a picture, my library card, and a piece of mail in file form. I received a generated message to be patient while they investigate to see if I am a real person.

Hiring by Algorithmic Speech Analysis is Automated Linguistic Profiling

“Now Algorithms Are Deciding Whom To Hire, Based On Voice”

“The benefit of computer automation isn’t just efficiency or cutting costs. Humans evaluating job candidates can get tired by the time applicant No. 25 comes through the door. Those doing the hiring can discriminate. But algorithms have stamina, and they do not factor in things like age, race, gender or sexual orientation. ‘That’s the beauty of math,’ Salazar says. ‘It’s blind.’”

Jobaline helps companies determine whom to hire based on a voice analysis. But contrary to the company’s claims, that’s not inherently fair. Machine learning algorithms that pattern match against human preferences are likely to replicate the potentially prejudicial decisions of those people. In this case, that’s likely to lead to automated linguistic profiling – discrimination based on accent and dialect as correlated with race and ethnicity. The algorithms may be literally blind to the race of the applicants, but that won’t keep them from making discriminatory hiring decisions.
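To see why “blindness” doesn’t prevent this, here’s a minimal simulation in Python. Everything in it is invented for illustration: the model never sees race, only an accent score correlated with race, and it is fit to past human decisions that favored one accent.

```python
import random

random.seed(1)

# Invented data: each applicant has a race (never shown to the model) and
# an "accent score" extracted from their voice, correlated with race.
def make_applicant():
    race = random.choice(["group_a", "group_b"])
    accent = random.gauss(1.0 if race == "group_a" else -1.0, 1.0)
    hired_by_human = accent > 0      # past screeners favored one accent
    return race, accent, hired_by_human

train = [make_applicant() for _ in range(5_000)]

# "Train" the simplest possible model on accent alone: pick the threshold
# that best reproduces the human decisions it is asked to imitate.
best_t = max((t / 10 for t in range(-30, 31)),
             key=lambda t: sum((a > t) == h for _, a, h in train))

test = [make_applicant() for _ in range(5_000)]
for group in ["group_a", "group_b"]:
    rows = [(a, h) for r, a, h in test if r == group]
    rate = sum(a > best_t for a, _ in rows) / len(rows)
    print(f"{group}: hired at rate {rate:.1%}")
# The model never sees race, yet hiring rates differ sharply by group,
# because it faithfully learned the humans' accent preference.
```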

While we can’t blame Jobaline for trying to tout the wonders of their “algorithm” (and yes, those are scare quotes there), we’d expect NPR to be at least a little bit more critical of claims that “[math is] blind”. At the very least, the reporters should read all the things that Cathy O’Neil has been saying for a very long time.

The Dangers of Evidence-based Sentencing

States predict inmates’ future crimes with secretive surveys

“…Across the country, states have turned to a data-driven movement to drive down prison populations, reduce recidivism and save billions of dollars. One emerging practice is the use of risk-and-needs assessment tools, which are questionnaires that explore issues beyond criminal history. They are based on surveys of offenders making their way through the justice system.”

The article highlights two problems with evidence-based sentencing models. First, they are based on the results of surveys filled out by inmates, so they can be gamed. Second, they might punish people merely for being poor or uneducated, or for not being part of the “right” groups.

A bigger issue is that there is no regulation of the features being used to make these predictions, which raises concerns about bias and disparate impact.
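Even a basic audit could catch some of this. Here’s a minimal sketch, with invented numbers, of the “four-fifths rule” used in US employment-discrimination practice: compare each group’s rate of favorable outcomes to the best-off group’s rate, and treat ratios below 0.8 as a red flag.

```python
def disparate_impact_ratio(outcomes_by_group):
    """Four-fifths (80%) rule of thumb: each group's favorable-outcome
    rate divided by the highest group's rate; ratios below 0.8 are a
    red flag for disparate impact."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical risk classifications by group (1 = scored low-risk).
outcomes = {
    "group_a": [1] * 60 + [0] * 40,   # 60% scored low-risk
    "group_b": [1] * 30 + [0] * 70,   # 30% scored low-risk
}
print(disparate_impact_ratio(outcomes))
# {'group_a': 1.0, 'group_b': 0.5} -- well under 0.8, a disparate-impact flag
```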