A funny thing happened on the way to the arXiv….

As I mentioned in the previous post, Sorelle Friedler, Carlos Scheidegger and I just posted a note to the arXiv on worldviews for thinking about fairness and nondiscrimination.

We uploaded the article last Friday, and it appeared on the arXiv on Sunday evening. By Monday late morning (less than 24 hours after the article was posted), we received this email:

I’m a reporter for Motherboard, VICE Media’s technology news site who frequently covers bias in machine learning. I read your paper posted to arXiv and would love to interview one of you for a piece on the work.

I assumed the reporter was referring to one of the two papers we’ve written so far on algorithmic fairness. But no, from the subject line it was clear that the reporter was referring to the article we had just posted! 

I was quite nervous about this: on the one hand it was flattering and rather shocking to get a query that quickly, and on the other hand this was an unreviewed preprint.

In any case, I did the interview. And the article is now out!

Algorithmic Fairness at the LSE

In April, I attended (virtually) a workshop organized by the Media Policy Project of the London School of Economics on “Automation, Prediction and Digital Inequalities”. 

As part of the workshop, I was asked to write a “provocation” that I read at the workshop. This was subsequently converted into a blog post for the MPP’s blog, and here it is.

The case I make here (and that I will expand on in the next post) is for trying to develop a mathematical framework for thinking about fairness in algorithms. As a computer scientist, I find this idea second nature, but I recognize that to the larger community of people thinking about fairness in society, this case needs to be argued.

“Investigating the algorithms that govern our lives”

This is the title of a new Columbia Journalism Review article by Chava Gourarie on the role of journalists in explaining the power of algorithms. She goes on to say

But when it comes to algorithms that can compute what the human mind can’t, that won’t be enough. Journalists who want to report on algorithms must expand their literacy into the areas of computing and data, in order to be equipped to deal with the ever-more-complex algorithms governing our lives.

I’m quoted in this article, as are other researchers, and Moritz Hardt’s Medium article on how big data is unfair is mentioned as well.

As they say, read the rest 🙂

NPR: Can Computers be Racist?

Yes.

As will come as no surprise to readers of this blog, algorithms can make biased decisions.  NPR tackles this question in their latest All Tech Considered (which I was interviewed for!).

They start by talking to Jacky Alcine, the software engineer who discovered that Google Photos had tagged his friend as an animal:

As Jacky points out: “One could say, ‘Oh, it’s a computer,’ I’m like OK … a computer built by whom? A computer designed by whom? A computer trained by whom?” It’s a short segment, but we go on to talk a bit about how that bias could come about.

What I want to emphasize here is that, while hiring more Black software engineers would likely help and would make it more likely that these issues are caught quickly, it is not enough. As Jacky implies, the training data itself is biased: in this case, likely because it included more photos of white people and of animals than of Black people. In other cases, it is because the labels were created by people whose past racist decisions are being purposefully used to guide future decisions.

Consider the automated hiring algorithms now touted by many startups (Jobaline, Hirevue, Gild, …). If an all-white company attempts to use their current employees as training data, i.e., attempts to find future employees who are like their current employees, then they’re likely to continue being an all-white company. That’s because the data about their current employees encodes systemic racial bias, such as the gap in SAT scores between white and Black test-takers even when controlling for ability. Algorithmic decisions will find and replicate this bias.
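
To make this concrete, here is a minimal, self-contained sketch (my own synthetic toy numbers, not any vendor's actual system) of how a model trained on a company's past, skewed hiring decisions reproduces that skew on new candidates. The group gap in the "score" feature and the 0.5 hiring threshold are assumptions invented purely for illustration.

```python
# Toy illustration with synthetic data: a classifier trained on past
# hiring decisions that favored one group keeps favoring that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n):
    # group: 0 = majority, 1 = minority; underlying ability is identical.
    group = rng.integers(0, 2, size=n)
    ability = rng.normal(0.0, 1.0, size=n)
    # Proxy feature (think: a test score) shifted down for the minority
    # group, echoing the SAT gap mentioned above.
    score = ability - 0.8 * group + rng.normal(0.0, 0.5, size=n)
    return group, score

group, score = make_cohort(5000)
# "Training labels": past hires were made from the raw score, so the
# historical decisions already encode the group gap.
hired = (score > 0.5).astype(int)

model = LogisticRegression().fit(score.reshape(-1, 1), hired)

# Apply the learned model to a fresh cohort drawn the same way.
group_new, score_new = make_cohort(5000)
pred = model.predict(score_new.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group_new == g].mean():.2f}")
```

Even though both groups are generated with identical ability, the model's predicted hire rate for the minority group comes out far below the majority group's, because the historical labels it learned from already encoded the gap.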

We need to be proactive to keep such biases from influencing algorithmic decisions.

The quantified life

Rose Eveleth hosts the podcast Flash Forward, a show built around taking a concept from the future and imagining what it would be like to live in such a world. She’s done a number of fascinating episodes, including one on living with perfect lie detectors and one on the future of sex robots.

Her latest is on the quantified life, in which algorithmic fairness makes a brief cameo. The episode ranges far and wide, covering the degree to which you could quantify your life, futuristic fiction about machine-driven perfection, and whether calorie counting even makes sense anymore.