In Conversation

Elisa Celis and the fight for fairness in artificial intelligence

Celis, an assistant professor of statistics and data science, discusses her work to advance fairness, inclusion — and ethics — in AI research.

Elisa Celis doesn’t see any reason why artificial intelligence (AI) can’t reflect a society’s best values, as well as its commercial interests.

It’s the idea at the heart of her efforts as a data scientist: fairness and inclusion can be written into algorithms right alongside speed and efficiency, says Celis. The hard part is finding the civic will to make it happen.

Celis, an assistant professor in the Department of Statistics and Data Science, has experience in both the theoretical and practical sides of AI. She’s conducted cutting-edge research at the École Polytechnique Fédérale de Lausanne in Switzerland, worked as head of crowdsourcing at the Xerox Research Centre in India, and last year helped implement an algorithm to fairly select a commission that will rewrite the state constitution in a Swiss canton.

Since joining Yale at the beginning of the year, she’s also helped organize a full-day AI ethics workshop on campus and taught a new course, “Data Science Ethics.”

YaleNews spoke with Celis about her research and her thoughts on the future of AI in government, law enforcement, social media, and business. The following is an edited version of the conversation.

What do you mean by “fairness” when discussing artificial intelligence?

There’s no one definition I can point to; it’s very context dependent. But broadly speaking, the algorithms we now use are built around data, and this data encodes societal biases. So, when we collect and analyze this data, the first questions we want to ask are: “Which of those biases are being captured by the algorithm, and are those biases being exacerbated?” Exacerbation is what we tend to find in out-of-the-box models. We certainly don’t want to make inequality worse than what already exists in the world and in the data. By removing bias from algorithms, we can help rectify injustices rather than propagate them.

We have actual people being affected by these algorithms. We see things in the news such as algorithms that predict recidivism — whether someone will commit another crime — and set a bail amount, or pass that information on to a judge who decides whether or not to set bail. The algorithms used to make these predictions end up relying on correlations with socioeconomic status, or race, or gender. So someone who might have a very similar background to you, but differs across race or gender, might have a very different outcome because of what the algorithm predicts.
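
One way to see the kind of disparity Celis describes is to audit a model’s predictions group by group. The sketch below uses entirely synthetic data and a generic scikit-learn classifier (it is not the recidivism software she refers to), and shows how a protected attribute can drive predictions even when it is never used as a feature.

```python
# Illustrative audit of group-level disparity in a classifier's predictions.
# All data here is synthetic; this is not any deployed recidivism tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # protected attribute: 0 or 1 (synthetic)
prior = rng.poisson(1.5 + 0.5 * group)   # feature correlated with the group
age = rng.normal(35, 10, n)

# Historical labels that already encode a bias against group 1.
y = (0.3 * prior - 0.02 * age + 0.4 * group + rng.normal(0, 1, n)) > 0

# The protected attribute is deliberately left out of the features,
# yet bias leaks in through the correlated 'prior' feature.
X = np.column_stack([prior, age])
pred = LogisticRegression().fit(X, y).predict(X)

for g in (0, 1):
    print(f"predicted high-risk rate, group {g}: {pred[group == g].mean():.2f}")
```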

Do you think people are generally aware of the degree to which these algorithms are already part of everyday life?

Largely, no. When I started working in this area I was surprised by the extent to which algorithms are already used in various parts of governance — such as financial institutions deciding who gets a loan and at what rate. They’re also involved in the judicial system and in policing, in deciding where police cars are going to be sent to patrol. At first pass, this seems like a good thing to do — we want to have more police where there’s crime. On the other hand, when you send more police cars somewhere, you tend to find more crime there. You get into these reinforcement loops. And, of course, you find this in the online world, in everything from the news you get, to the job ads you see, to which friends are suggested to you on Facebook. These might seem like very subtle things, but they’re all completely controlled by algorithms.
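
The reinforcement loop she describes can be reproduced in a toy simulation: if recorded crime rises with patrol presence, and patrols are then allocated according to recorded crime, a small initial imbalance keeps growing even when two districts have identical underlying crime. The numbers below are invented purely for illustration.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
true_crime = [10.0, 10.0]   # two districts with identical underlying crime
patrols = [6, 4]            # slightly uneven starting allocation of 10 cars

for step in range(5):
    # Recorded crime rises with patrol presence: more officers observe more incidents.
    recorded = [c * 0.1 * p for c, p in zip(true_crime, patrols)]
    # Reallocate: shift one car toward the district with more recorded crime.
    hi = recorded.index(max(recorded))
    lo = 1 - hi
    if patrols[lo] > 0:
        patrols[hi] += 1
        patrols[lo] -= 1
    print(f"step {step}: recorded crime = {recorded}, patrols = {patrols}")
```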

I’ve always been interested in the social effects of technology. Before I was working on this topic I headed up the crowdsourcing research team at Xerox Research. That was a unique space, because there you see one way that data is being generated and how algorithms are being used on that data. Then you come full circle and see how the predictions of those algorithms affect the way businesses and corporations make decisions and impact society. I really got a sense of how all these parts of the pipeline have the potential for bias, and when I had the opportunity to take on a more academic role I knew this was the main topic I wanted to focus on.

Your work does seem to bridge theory and practice. Which side of the equation interests you the most?

Both are crucial. You can’t have one without the other. That is definitely the main thing that drew me to Yale, because the type of work I want to do is not possible at a purely technical institution. I need my colleagues in sociology, and in law, and in psychology, and in business. Some of the technical fields suffer from a bit of a grandiosity complex, where people think the world’s problems will be solved by technical solutions alone. While those technical solutions are important, they don’t look at the big picture. As soon as something is a human problem, you need the human component for solutions as well.

Before joining the Yale faculty, you helped design an algorithm for an election in Switzerland. What was that experience like?

This was last year in November in a canton called Valais. A canton is like a state in the U.S. In Valais, they wanted to rewrite their state constitution. The question is: “Who is going to rewrite it?” They needed to elect a body of people to do this rewriting, and there was a lot of concern in terms of what this group was going to look like. It’s a very interesting canton because it has one big city but also a significant rural population. There were many concerns as to how you make sure the people writing this constitution are representative of the entire population.

We first looked at some of the attributes they might want to consider in diversifying the elected committee — there are many possibilities, from gender to race to political affiliation. The first part of the process was to have people vote on the question: “Which attributes do you want diversified?” The ones voted in were gender, locality, and age — meaning that whatever committee was finally selected should have diverse representation across these three categories. Only then did people vote on individual candidates. The next part of the process was to select the winners of the election in a way that captured as many votes as possible while remaining diverse across the selected attributes. To do this we developed a new algorithm and made all of the code open source so that anyone can look at it and test it.
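
A minimal way to picture “as many votes as possible, but diverse across the selected attributes” is as a constrained selection problem. The brute-force sketch below uses invented candidates and quotas and is only meant to illustrate the idea; the open-source algorithm actually used in Valais is a different, more scalable method.

```python
# Constrained committee selection (illustrative only): maximize total votes
# subject to minimum representation quotas. Candidates and quotas are invented.
from itertools import combinations

# (name, votes, gender, locality, age group)
candidates = [
    ("A", 900, "F", "urban", "young"),
    ("B", 850, "M", "urban", "old"),
    ("C", 800, "F", "urban", "old"),
    ("D", 650, "M", "rural", "young"),
    ("E", 600, "F", "rural", "old"),
    ("F", 550, "M", "rural", "old"),
]
committee_size = 3
# Each (attribute index, value) pair must appear at least this many times.
quotas = {(2, "F"): 1, (2, "M"): 1, (3, "rural"): 1, (4, "young"): 1}

def feasible(committee):
    return all(sum(c[attr] == val for c in committee) >= need
               for (attr, val), need in quotas.items())

best = max((c for c in combinations(candidates, committee_size) if feasible(c)),
           key=lambda c: sum(member[1] for member in c))
print("winners:", [member[0] for member in best],
      "total votes:", sum(member[1] for member in best))
```

Without the locality and age quotas, the three highest vote-getters (all urban) would win outright; with them, the selection trades a few votes for representation across the chosen attributes.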

It was very exciting to see the process from start to finish, and people seemed happy with the results. It’s a multi-year process so they’ve not written the constitution yet. My hope is there will be some discussion of building this same process into the constitution so that this is the way they elect high-level committees.

What can you tell us about the new undergraduate course you’re teaching at Yale?

It’s called “Data Science Ethics.” I came in with an idea of what I wanted to do, but I also wanted to incorporate a lot of feedback from students. The first week was spent asking: “What is normative ethics? How do we even go about thinking in terms of ethical decisions in this context?” With that foundation, we began talking about different areas where ethical questions come up throughout the entire data science pipeline — everything from how data is collected, to how the algorithms themselves end up encoding these biases, to how the results of biased algorithms directly affect people. The goal is to introduce students to all the things they should have in mind when talking about ethics in the technical sphere.

The class doesn’t require coding or a technical background, which allows students from other departments to participate. We have students from anthropology, sociology, economics, and other departments, which broadens the discussion. That’s very valuable when grappling with these inherently interdisciplinary problems.

How much appetite do you see in the tech world to create more of a balance between commerce and equality?

I’ve seen a positive trend over the past year or so — in part due to public pressure — in which there has been an effort by most of the big tech companies to have internal research teams or advisory boards looking at this. But it’s unclear where that will lead, so we need to have parallel conversations about it, including conversations about regulation. What would that look like? What fairness standards do we want tech companies to have? I’m seeing more openness to working with researchers in this area.

What are you working on now?

I’ve been building connections to other departments. In the long term, something I’m very excited about is partnering with people from sociology, psychology, and law to start bridging this gap between the technical component and the societal component. When I build an algorithm, I need a bunch of parameters before deploying it. These parameters determine not just what my optimization function is, but also what I mean when I say I want to balance it to make it fair. What parameters should we work with if we’re talking about race, or about gender? What are the legal considerations? I want to be able to adapt my algorithms to these problems in a way that’s both meaningful and useful. As part of this, I recently co-founded the Computation and Society Initiative at Yale along with Nisheeth Vishnoi (computer science) and Alan Gerber (political science). I am very excited about the creation of this interdisciplinary space and the people it is bringing together.
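
As a generic illustration of where such parameters enter, the sketch below (not Celis’s method, and using made-up data) scores a decision threshold by accuracy minus a fairness penalty, with a parameter lambda controlling how heavily the gap between groups is weighed.

```python
# Generic illustration: a fairness parameter inside an objective function.
# objective(threshold) = accuracy - lam * |selection-rate gap between groups|
# Synthetic data; this is not any specific deployed or published algorithm.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
model_score = rng.normal(0.0, 1.0, n) + 0.5 * group   # scores skewed by group
y = (model_score + rng.normal(0, 0.5, n)) > 0.5       # synthetic ground truth

def objective(threshold, lam):
    pred = model_score > threshold
    accuracy = (pred == y).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy - lam * gap

# Larger lam trades raw accuracy for a smaller gap between the two groups.
for lam in (0.0, 1.0, 5.0):
    best_t = max(np.linspace(-1, 2, 61), key=lambda t: objective(t, lam))
    pred = model_score > best_t
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"lambda={lam}: threshold={best_t:.2f}, "
          f"accuracy={(pred == y).mean():.2f}, gap={gap:.2f}")
```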


Media Contact

Fred Mamoun: fred.mamoun@yale.edu, 203-436-2643