Being Hopeful about Algorithms

I’ve been attending “think-events” around algorithmic fairness of late, first in Philadelphia (courtesy of the folks at UPenn) and then in DC (courtesy of the National Academy of Sciences and the Royal Society).

At these events, one doesn’t see the kind of knee-jerk reaction to the idea of fairness in learning that I’ve documented before. But there’s a very thoughtful critique that comes from people who’ve spent a lot of time themselves thinking and working on these topics. And it goes something like this.

Do we expect more from algorithms than we expect from people? And is that reasonable?

I first heard this critique much earlier at a Dagstuhl meeting on this topic, when I was asked this question by H. V. Jagadish (who has a great course on ethics in data mining). It came up indirectly during discussions at the Philadelphia event (about which I hope to say something later) and was phrased in this form by Vint Cerf at the Sackler Forum.

I found myself unable to answer it convincingly. We’ve had thousands of years to set up institutions based on human decision making. These processes have been flawed and biased. People have made decisions with implicit and explicit bias.

Why then do we demand that algorithms do more? Why do we demand that they account for themselves and explain themselves in ways that we don’t ask human judges to do?

I used to have an answer. I argued that algorithms speak the language of mathematics and so we need to translate all our human ideals – of ethics, fairness and justice – into a form that an algorithm could understand. But then we start talking about accountability, interpretability, how an algorithm might explain itself, and what that might even mean.

Jon Kleinberg has this analogy of a learning algorithm as this incredibly obtuse friend that you bring to a party, that you have to explain EVERYTHING to. Where the food is, what the drinks are, what people are saying, and so on. We don’t have to do this for real people because they have a vast body of prior context to work with. Indeed, this prior context is what decides how they function in the world, and is made up of all kinds of heuristics and “biasing” of the space of possible outcomes (as Joanna Bryson puts it).

So it would seem that asking an algorithm for its “audit trail” is the equivalent of asking (say) a human judge “give me the entire story of your life experiences that explains why you made this decision”.

And of course we never do this. In fact, all we really do is set out a series of guidelines and expect the judges to be more or less consistent with them. Similarly for hiring, or credit decisions, or any other kind of decision making. In other words, we expect a certain degree of procedural consistency while accepting that individuals may apply discretion based on their own perspective.

So I return to the question from before. Why do we expect an automated decision making process to be any better?

There’s an optimistic take on this. We can’t expect an audit trail from a human decision maker because we don’t have the capacity to generate one. That my verdict on a dog owner might in part be due to being bitten by a dog as a child is something that I’m unlikely to be able to cogently articulate. But it is at least a little unfair that I sentence dog owners more harshly for this reason.

But if we are able to produce such an audit trail from an algorithmic decision maker, we do have the hope of revealing implicit preferences and biases based on the *algorithm’s* “life experiences”, a.k.a. its training data. And so we can expect more because we have the ability to do so.

An alternate perspective on this goes as follows. We’ve built up over the decades and centuries a system of checks and balances and accountability procedures for evaluating the quality of human decision making. We have laws that require non-discrimination, we have ways to remove decision-makers who make arbitrary decisions, and we have a social structure that makes decision-makers feel a sense of responsibility for their decisions.

None of these exist for algorithmic decision-making, nor realistically can they. We can’t call an algorithm to account for a bad decision: ultimately all liability rests with legal persons. So the next best thing is to have an algorithm assist in the decision-making process, but require transparency so that the human decision-maker can’t blame the algorithm for bad decisions (“It’s not me, it’s the algorithm!”), a story that played out in Cory Doctorow’s Human Readable.

There’s a tension between “let’s use automation wherever reasonable” and “wait, how are you shifting the harm?”. We don’t want to stop the deployment of algorithms in decision-making, and frankly I doubt that one could even if one wanted to. But it’s also not unreasonable to express some caution (and perhaps some humility) when doing this. We’re not expecting perfection from automated decision-making: it’s perfectly reasonable just to expect that we do better than human decision makers. But why not expect that as well as a decision that we can understand? Why essentially give up by saying “the algorithm cannot both be powerful and understandable”? To me, that’s the real failure of hope.

A funny thing happened on the way to the arXiv….

As I mentioned in the previous post, Sorelle Friedler, Carlos Scheidegger and I just posted a note to the arXiv on worldviews for thinking about fairness and nondiscrimination.

We uploaded the article last Friday, and it appeared on the arXiv on Sunday evening. By Monday late morning (less than 24 hours after the article was posted), we received this email:

I’m a reporter for Motherboard, VICE Media’s technology news site who frequently covers bias in machine learning. I read your paper posted to arXiv and would love to interview one of you for a piece on the work.

I assumed the reporter was referring to one of the two papers we’ve written so far on algorithmic fairness. But no, from the subject line it was clear that the reporter was referring to the article we had just posted! 

I was quite nervous about this: on the one hand it was flattering and rather shocking to get a query that quickly, and on the other hand this was an unreviewed preprint.

In any case, I did the interview. And the article is now out!

On the (im)possibility of fairness…

Ever since we started thinking about algorithmic fairness and the general issue of data-driven decision-making, there’s always been this nagging issue of “well what if there are cues in data that seem racist/sexist/(–)-ist and yet provide a good signal for a decision?”

There’s no shortage of people willing to point this out: see for example my post on the standard tropes that appear whenever someone discovers bias in some algorithmic process. Most of the responses betray an unexamined belief in the truth of what algorithms discover in data, and that is not satisfying either.

So the problem we’ve faced is this. If you examine the computer science literature on fairness and bias closely, it becomes clear that people are talking at cross-purposes: essentially arguing about why your orange is not more like my apple. And this is because of different assumptions about the world (how biased it is, how unbiased certain features are, and so on).

Here’s the pitch:

Can we separate out assumptions and beliefs about fairness from mechanisms that we deploy to ensure it? And in doing so, can we provide a useful vocabulary for talking about these issues within a common framework?

Here’s the result of our two-year long quest:

On the (im)possibility of fairness

What does it mean for an algorithm to be fair? Different papers use different notions of algorithmic fairness, and although these appear internally consistent, they also seem mutually incompatible. We present a mathematical setting in which the distinctions in previous papers can be made formal. In addition to characterizing the spaces of inputs (the “observed” space) and outputs (the “decision” space), we introduce the notion of a construct space: a space that captures unobservable, but meaningful variables for the prediction.
We show that in order to prove desirable properties of the entire decision-making process, different mechanisms for fairness require different assumptions about the nature of the mapping from construct space to decision space. The results in this paper imply that future treatments of algorithmic fairness should more explicitly state assumptions about the relationship between constructs and observations.
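
For readers who want the setup in symbols, here is a minimal sketch of the framework as the abstract describes it; the notation below is mine for illustration, not necessarily the paper’s.

    % Three spaces and two maps (illustrative notation, not the paper's):
    %   C = construct space (unobservable but meaningful features)
    %   O = observed space  (what the algorithm actually sees)
    %   D = decision space  (the outputs)
    \[
      \mathcal{C} \xrightarrow{\;g\;} \mathcal{O} \xrightarrow{\;f\;} \mathcal{D}
    \]
    % An algorithm can only operate on observations, so it only chooses the
    % decision map f : O -> D. The map g : C -> O, i.e. how constructs show up
    % as observations, is an assumption about the world. Claims of fairness are
    % then claims about the composition f \circ g, which is why different
    % assumptions about g lead to mutually incompatible notions of fairness.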

This paper has been a struggle to write. It’s a strange paper in that the main technical contribution is conceptual: establishing what we think are the right basic primitives that can be used to express (mathematically) concepts like fairness, nondiscrimination, and structural bias.

We owe a great debt to our many friends in the social sciences, as well as to the decades of research on these questions in their fields. Much of the conceptual development we outline has been laid out in prose form by the many theories of social justice starting with Rawls, and particularly by Roemer. Our main goal has been to mathematize some of these ideas so that we can apply them to algorithms.

We release this with a great deal of trepidation: it’s in many ways a preliminary work that raises more questions than it answers. But we’ve benefited from lots of feedback from within CS and without, and we hope that this might clarify some of the discussions swirling around algorithmic fairness.

White House Report on Algorithmic Fairness

The White House has put out a report on big data and algorithmic fairness (announcement, full report). From the announcement:

Using case studies on credit lending, employment, higher education, and criminal justice, the report we are releasing today illustrates how big data techniques can be used to detect bias and prevent discrimination. It also demonstrates the risks involved, particularly how technologies can deliberately or inadvertently perpetuate, exacerbate, or mask discrimination.

The table of contents for the report gives a good overview of the issues addressed:

Big Data and Access to Credit
The Problem: Many Americans lack access to affordable credit due to thin or non-existent credit files.
The Big Data Opportunity: Use of big data in lending can increase access to credit for the financially underserved.
The Big Data Challenge: Expanding access to affordable credit while preserving consumer rights that protect against discrimination in credit eligibility decisions.

Big Data and Employment
The Problem: Traditional hiring practices may unnecessarily filter out applicants whose skills match the job opening.
The Big Data Opportunity: Big data can be used to uncover or possibly reduce employment discrimination.
The Big Data Challenge: Promoting fairness, ethics, and mechanisms for mitigating discrimination in employment opportunity.

Big Data and Higher Education
The Problem: Students often face challenges accessing higher education, finding information to help choose the right college, and staying enrolled.
The Big Data Opportunity: Using big data can increase educational opportunities for the students who most need them.
The Big Data Challenge: Administrators must be careful to address the possibility of discrimination in higher education admissions decisions.

Big Data and Criminal Justice
The Problem: In a rapidly evolving world, law enforcement officials are looking for smart ways to use new technologies to increase community safety and trust.
The Big Data Opportunity: Data and algorithms can potentially help law enforcement become more transparent, effective, and efficient.
The Big Data Challenge: The law enforcement community can use new technologies to enhance trust and public safety in the community, especially through measures that promote transparency and accountability and mitigate risks of disparities in treatment and outcomes based on individual characteristics.

“Investigating the algorithms that govern our lives”

This is the title of a new Columbia Journalism Review article by Chava Gourarie on the role of journalists in explaining the power of algorithms. She goes on to say:

But when it comes to algorithms that can compute what the human mind can’t, that won’t be enough. Journalists who want to report on algorithms must expand their literacy into the areas of computing and data, in order to be equipped to deal with the ever-more-complex algorithms governing our lives.

I’m quoted in this article, as are other researchers, and Moritz Hardt’s Medium article on how big data is unfair is mentioned as well.

As they say, read the rest 🙂

“Racist algorithms” and learned helplessness

Twitter user Dan Hirschman posts another example of search results that are — let’s just say — questionable:

Aside from the problematic search results (and again, this is an image search), what’s interesting about this is the predictable way in which the discussion unfolds.

There’s a standard pattern of discourse that I see when talking about bias in algorithms (I’ll interject commentary in between the elements).

It starts with the example:

Which is usually quickly followed by the retort:

It’s true that if we interpret “racist algorithm” as “an algorithm that cackles evilly as it intentionally does racist things”, then an algorithm is not racist. But the usage here is in a Turing-test sense, i.e., the algorithm does something that would be considered racist if a human did it. At least in the US, it is not necessary (even for humans) to show racist intent for actions to be deemed discriminatory; this is essentially the difference between disparate treatment and disparate impact.

Unlike France.
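
To make the disparate impact side concrete, here’s a toy calculation (in Python, with made-up numbers) of the “four-fifths rule” that US agencies use as a rough screen for disparate impact; the function and the numbers are purely illustrative, not a legal test.

    def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
        """Ratio of the lower selection rate to the higher one.

        The EEOC's "four-fifths rule" flags a process when one group's
        selection rate falls below 80% of the other's. Note that intent
        appears nowhere in this calculation; only outcomes do.
        """
        rate_a = selected_a / total_a
        rate_b = selected_b / total_b
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    # Made-up numbers: 50 of 100 applicants selected in one group, 30 of 100 in the other.
    ratio = disparate_impact_ratio(50, 100, 30, 100)
    print(f"impact ratio = {ratio:.2f}")  # 0.60, below the 0.80 threshold, so flagged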

The retort is often followed by “algorithms don’t discriminate, people discriminate”:

and also “garbage in, garbage out”:

This is, strictly speaking, correct. One important source of bias in an algorithm is the training data it’s fed, and that of course is provided by humans. However, this still points to a problem in how the algorithm is used: it needs better training examples and a better learning procedure. We can’t absolve ourselves of responsibility here, nor can we absolve the algorithm.
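
As a toy illustration of this point (entirely synthetic data, and a deliberately crude “learner” of my own construction, not any real system): if the historical labels come from a biased process, even a procedure that never sees group membership will reproduce the bias through a proxy feature.

    import random
    from collections import defaultdict

    random.seed(0)

    # Synthetic data: the "learner" never sees group membership, but zip code is
    # a strong proxy for it, and the historical labels come from a biased process
    # that rejects many qualified applicants from group B.
    def make_example():
        group = random.choice(["A", "B"])
        zip_code = "90210" if group == "A" else "10001"  # proxy feature
        qualified = random.random() < 0.6                # same rate in both groups
        approved = qualified and (group == "A" or random.random() < 0.4)
        return zip_code, int(approved)

    train = [make_example() for _ in range(10_000)]

    # A deliberately crude "learner": approve a zip code if its historical
    # approval rate exceeds 50%.
    approvals, counts = defaultdict(int), defaultdict(int)
    for zip_code, label in train:
        approvals[zip_code] += label
        counts[zip_code] += 1
    rule = {z: approvals[z] / counts[z] > 0.5 for z in counts}

    print(rule)  # Group A's zip code maps to True, group B's to False: the learned
                 # rule reproduces the historical bias even though qualification
                 # rates were identical by construction.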

But eventually, we always end up with “data is truth”:

There is a learned helplessness in these responses. The sentiment is, “yes there are problems, but why blame the helpless algorithm, and in any case people are at fault, and plus the world is racist, and you’re trying to be politically correct, and data never lies, and blah blah blah”.

Anything to actually avoid engaging with the issues.

Whenever I’ve had to talk about bias in algorithms, I’ve tried to be careful to emphasize that it’s not that we shouldn’t use algorithms in search, recommendation and decision making. It’s that we often just don’t know how they’re making their decisions to present answers, make recommendations or arrive at conclusions, and it’s this lack of transparency that’s worrisome. Remember, algorithms aren’t just code.

What’s also worrisome is the amplifier effect. Even if “all an algorithm is doing” is reflecting and transmitting biases inherent in society, it’s also amplifying and perpetuating them on a much larger scale than your friendly neighborhood racist. And that’s the bigger issue. As Zeynep Tufekci points out

That is to say, even if the algorithm isn’t creating bias, it’s creating a feedback loop that has powerful perception effects. Try doing an image search for ‘person’ and look carefully at the results you get.

Predictive policing in action

Predictive policing is the idea that by using historical data on crime, one might be able to predict where crime will happen next, and intervene accordingly. Data & Society has put together a good primer on this from the 2015 Conference on Data and Civil Rights that they organized last year (which I attended: see this discussion summary).

If you’re not in the know about predictive policing, you’d be shocked to hear that police jurisdictions all around the country are already using predictive policing software to manage their daily beats. PredPol, one of the companies that provides software for this, says (see the video below) that their software is used in 60 or so jurisdictions.

Alexis Madrigal from Fusion put together a short video explaining the actual process of using predictive policing. It’s a well-done video that in a short time explores many of the nuances and challenges of this complex issue. Some thoughts I had after watching the video:

  • Twice in the episode (once by the CEO of PredPol and once by a police officer) we hear the claim “We take demographics out of the decision making”. But how? I have yet to see any clear explanation of how bias is eliminated from the model used to build predictions, and as we know, this is not an easy task. In fact, the Human Rights Data Analysis Group has done new research illustrating how PredPol can AMPLIFY biases rather than remove them.

  • At some point, the video shows what looks like an expression of a gradient and says that PredPol constructs an “equation” that predicts where crime will happen. I might be splitting hairs, but I’m almost certain that PredPol constructs an algorithm, and as we already know, an algorithm has nowhere near the sense of certainty, determinism and precision that an equation might have. So this is a little lazy: why not just show a picture of scrolling code instead if you want a visual?
  • The problems we’ve been hearing about with policing over the past few years have in part been due to over-aggressive responses to perceived behavior. If an algorithm is telling you that there’s a higher risk of crime in an area, could that exacerbate this problem?
  • Another point that HRDAG emphasizes in their work is the difference between crime and the reporting of crime. If you put more police in some areas, you’ll see more crime being reported in those areas. It doesn’t mean that more crime is actually being committed there. (A toy simulation of this feedback loop is sketched below.)
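
Here’s that toy simulation of the reporting feedback loop. All the numbers, area names, and detection rates below are made up; this is a sketch of the mechanism HRDAG describes, not their model and not PredPol’s.

    import random

    random.seed(1)

    TRUE_RATE = 0.1          # same per-spot chance of a crime in both areas
    DETECT_PATROLLED = 0.9   # fraction of crimes recorded where the patrol goes
    DETECT_ELSEWHERE = 0.2   # fraction of crimes recorded elsewhere

    # The historical records start out slightly uneven; that is the only asymmetry.
    recorded = {"north": 12, "south": 10}

    for day in range(365):
        # "Prediction": send the single patrol wherever the records show more crime.
        patrolled = max(recorded, key=recorded.get)
        for area in recorded:
            crimes = sum(random.random() < TRUE_RATE for _ in range(100))
            detect = DETECT_PATROLLED if area == patrolled else DETECT_ELSEWHERE
            recorded[area] += sum(random.random() < detect for _ in range(crimes))

    print(recorded)
    # After a year the records are wildly lopsided, even though both areas had the
    # same underlying crime rate: the data reflects where we looked, not what happened.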