On centering, solutionism, justice and (un)fairness.

Centering

One of the topics of discussion in the broader conversation around algorithmic fairness has been the idea of decentering: that we should move technology away from the center of attention – as the thing we build to apply to people – and towards the periphery – as a tool that instead helps people.

This idea took me a while to understand, but it makes a lot of sense. After all, we do wish to use “tech for good” – to help us flourish – the idea of eudaimonia that dates back to Aristotle and the birth of virtue ethics.

We can’t really do that if technology remains at the center. Centering the algorithm reinforces structure; the algorithm becomes a force multiplier that applies uniform solutions to all people. And that kind of flattening – treating everyone the same way – is what leads both to procedural ideas of fairness as consistency and to systematically unequal treatment of those who are different.

Centering the algorithm feeds into our worst inclinations towards tech solutionism – the idea that we should find the “one true method” and apply it everywhere.

So what should we, as computer scientists, do instead? How can we avoid centering the algorithm and instead focus on helping people flourish, while still allowing ourselves to be solution-driven? One idea that I’m becoming more and more convinced of is that, as Hutchinson and Mitchell argue in their FAT* 2019 paper, we should make the shift from thinking about fairness to thinking about (un)fairness.

Unfairness

When we study fairness, we are necessarily looking for something universal. It must hold in all circumstances — a process cannot be fair if it only works in some cases. This universality is what leads to the idea of an all-encompassing solution – “Do this one thing and your system will be fair”. It’s what puts the algorithm at the center.

But unfairness comes in many guises, to paraphrase Tolstoy. And it looks different for different people under different circumstances. There may be general patterns of unfairness that we can identify, but they often emerge from the ground up. Indeed, as Hutchinson and Mitchell put it,

Individuals seeking justice do so when they believe that something has been unfair

Hutchinson & Mitchell. 50 Years of Test (Un)fairness: Lessons for Machine Learning. ACM FAT* 2019.

And to the extent that our focus should be on justice rather than fairness, this distinction becomes very important.

How does a study of unfairness center the people affected by algorithmic systems while still satisfying the computer scientist’s need for solutions? I think the answer lies in an analogy with “threat models” in computer security.

Threat Models

When we say that a system is secure, it is always with respect to a particular collection of threats. We don’t allow a designer to claim that a system is universally secure; the claim only extends to the threats explicitly accounted for. Similarly, we should think of different kinds of unfairness as attacks on society at large, or even as attacks on specific groups of people. We can design tools to detect these attacks and possibly even protect against them – these are the solutions we seek. But addressing one kind of attack does not mean that we can fix a different “attack” the same way; that might require a different solution.
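To make the analogy a little more concrete, here is a rough sketch – purely illustrative, with hypothetical threat names, detector, and data – of what treating kinds of unfairness as scoped threats might look like in code. Each threat names the group it concerns and carries its own detector, and passing one check says nothing about the others.

```python
# A hypothetical sketch of "unfairness threat models": each threat is scoped
# to a group and supplies its own detector, rather than assuming one
# universal fairness check. All names and numbers here are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class UnfairnessThreat:
    name: str                                        # e.g. "denial disparity for group A"
    affected_group: str                              # who the threat targets
    detect: Callable[[Dict[str, List[int]]], bool]   # group -> observed decisions


def denial_rate_gap(group: str, reference: str, tolerance: float):
    """Detector: flag if `group` is denied noticeably more often than `reference`."""
    def _detect(outcomes: Dict[str, List[int]]) -> bool:
        def denial_rate(g: str) -> float:
            decisions = outcomes[g]                  # 1 = approved, 0 = denied
            return 1 - sum(decisions) / len(decisions)
        return denial_rate(group) - denial_rate(reference) > tolerance
    return _detect


# Each threat is audited separately: addressing one does not address another.
threats = [
    UnfairnessThreat("denial disparity, group A", "A", denial_rate_gap("A", "B", 0.1)),
    UnfairnessThreat("denial disparity, group C", "C", denial_rate_gap("C", "B", 0.1)),
]

outcomes = {"A": [0, 0, 1, 0], "B": [1, 1, 0, 1], "C": [1, 0, 1, 1]}
for threat in threats:
    status = "detected" if threat.detect(outcomes) else "not detected"
    print(f"{threat.name}: {status}")
```

The point of structuring it this way is that the list of threats grows from the ground up, one observed harm at a time, rather than being derived from a single universal criterion.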

Identifying these attacks requires the designer to actually pay attention to the subject of the threat — the groups or individuals being targeted. Because if you don’t know their situation, how on earth do you expect to identify where their harms are coming from? This allows us a great deal more nuance in modeling, and I’d even argue that it pushes the level of abstraction for our reasoning down to the “right” level.

This search for nuance in modeling is precisely where I think computer science can excel. Our solutions here would be the conception of different forms of attack, how they relate to each other, and how we might mitigate them.

We’re already beginning to see examples of this way of thinking. One notable example is the set of strategies that fall under what has been termed POTs (“Protective Optimization Technologies”), due to Overdorf, Kulynych, Balsa, Troncoso and Gürses (one, two). They argue that in order to counter the many problems introduced by optimization systems – a general framework that goes beyond decision-making to things like representations and recommendations – we should design technology that users (or their “protectors”) can use to subvert the behavior of the optimization system.

POTs have challenges of their own – for one thing, they can also be gamed by players with access to more resources than others. But they are an example of what decentered, solution-focused technology might look like.

I wrote this essay partly to help myself understand what decentering might even mean in a tech context, and why current formulations of fairness might be missing out on novel perspectives. I’ll have more to say on this in a later post.
