‘Uncovered, unknown, and uncertain’: Guiding ethics in the age of AI

Luciano Floridi, director of Yale’s new Digital Ethics Center, describes his approach to creating ethical frameworks for AI and other new technologies.
Luciano Floridi (Photo by Mara Lavitt)

Well before the rise of Google, Amazon, Facebook, and other tech behemoths, philosopher Luciano Floridi contemplated the ethical and conceptual implications of the information age, producing work that presciently addressed the world-changing benefits and potential risks of digital technology.

For example, a seminal paper he published in 1996 foresaw many of the ways people would use the internet to spread disinformation.

“You could see that the internet was going to be an amazing communication channel, and new modes of communication have always brought disinformation with them,” said Floridi, the founding director of Yale’s Digital Ethics Center (DEC) and professor in the practice in the Cognitive Science Program in the Faculty of Arts and Sciences. “I laid out what I thought would happen.”

Floridi, who joined Yale last summer from Oxford University, has brought this pioneering approach to the new center, where he leads a team of 12 postdoctoral and post-graduate researchers in studying the governance of digital innovation and technologies, their ethical, legal, and social implications, and their human, societal, and environmental consequences. In its work, the center seeks to identify the benefits of digital innovations, enhance their potential as a force for good, and mitigate their risks.

The DEC has internal and external roles. A campus hub for helping Yale scholars with questions and projects relating to digital ethics and the social impact of digital technologies, it is also an international research center that aims to detect and address ethical issues concerning artificial intelligence and other technological innovations before they arise. And it provides advice to governments, companies, and nongovernmental organizations on novel questions concerning digital ethics. 

In a recent conversation with Yale News, Floridi discussed his work with the European Union to establish protections in the use of artificial intelligence, the center’s advisory role with businesses and governments, and the interplay between philosophy and digital innovation.

The interview has been edited and condensed.

One of your goals in establishing the center was to create an international center of excellence working on the impact and governance of new digital technologies. What does success look like?

Luciano Floridi: We try to anticipate problems by looking to the future. That is, we aim to do pioneering work that expands knowledge and informs better policy, laws, or business strategies. What do we do with new technology? What is the right thing to do? Do we need legislation? What kind of frameworks should be erected? The answers can determine whether a company invests here or there, or whether the government issues some regulation of this kind rather than that kind. It moves history at the end of the day.

We don’t study and comment on current trends. Rather, we want to be, if we can, the first to step into areas that were previously uncovered, unknown, and uncertain. Now, that comes with risks. If you take this pioneering approach and get things right half of the time, that is a huge success. It means we’re publishing a paper once or twice a year examining a particularly important issue before anyone else does. That’s huge.

But that also means that sometimes we’ll miss. We will publish papers that don’t really pan out, which is okay. It’s the cost of being pioneering in our approach. But if you develop a successful track record with those papers that really land, you become a leader in the space as opposed to following what everybody else is doing. That’s our goal.

What is one of the most significant examples of the ways your work has been influential?

Floridi: I’ve been involved in setting an ethical framework for the governance and use of artificial intelligence in the European Union. I was one of the initiators of the AI Act, legislation that will ensure that AI systems used in the E.U. are safe and respect fundamental rights. It’s close to being enacted.

As part of that work, my colleagues and I created the first auditing model for evaluating AI systems according to European legislation. The act establishes rules on the use of AI that will apply in each of the E.U.’s 27 member states. Companies will have to comply with the rules, and someone will have to perform audits to assess whether they are doing so. We began working on that five years ago. Our paper laying out the auditing model has been downloaded 12,000 times by researchers in just a few weeks, which is extraordinary.

Working with colleagues in Europe, we also published the first paper on how to model AI risk in accordance with the scale defined by E.U. legislation, from zero to five. Zero is completely safe. Five is dangerous and shouldn’t be done.

What’s an example of an AI application that falls somewhere between completely safe and clearly dangerous?

Floridi: One complicated use involves anything to do with biometrics — biological measurements, like facial recognition, that can be used to identify and monitor people. Biometrics carry serious risks. Bad actors can use them to steal people’s identity. There are privacy risks when governments use biometrics to surveil their populations.

At the same time, we use facial recognition to unlock our smartphones, which is much more secure than a password. Biometrics can be used to prevent and respond to terrorist attacks. Our risk model provides a way to know exactly when to intervene against a use of AI and when not to intervene.

In building the model, we borrowed some techniques from risk modeling on climate change and adapted them to AI-related risks, such as deepfakes and other methods for spreading disinformation. Some argue that the risk of these things is inflated because people have always used propaganda and disinformation to manipulate public opinion. That’s true, but the quantity, quality, and cheapness of AI-generated misinformation are extraordinary. You can produce it industrially, and it costs next to nothing. Clearly this is a greater threat than the misinformation produced, say, in the 1950s, when you might publish a pamphlet of misinformation about a political candidate.

The center advises governments and businesses on ethical questions involving technological innovations. What’s an example of that work?

Floridi: We’re not a consultancy, but when a government or a company has an interesting ethical issue, they often ask for our advice. If you anticipate problems and tackle them early, they cost a fraction of what they would in human suffering and financial resources if you wait until they explode. Then you’re left with a big mess and, sometimes, irreparable human suffering.

For instance, I served on the U.K. government’s advisory board for the National Health Service’s COVID-19 app, which was a voluntary contact tracing app intended to monitor the spread of the virus. You could see immediately that there was a serious problem concerning the interplay between privacy, safety, and the [application programming interface]. The app would be monitoring users’ movements 24 hours a day, seven days a week. It was very close to something you’d encounter in a surveillance state. I warned them that a tradeoff needed to be found that provided sufficient data but also protected people’s privacy. Unfortunately, they released the app without incorporating my recommendations and those of other experts, which led to disaster. Headlines decried the threat to privacy and technical failures.

They went back to the drawing board, incorporated the suggestions, and released a new version. But they’d thrown away millions of dollars and we don’t know how many people suffered in the absence of an app to monitor spread.

What are some other issues the center is currently working on?

Floridi: We’re working with other colleagues here at Yale on brain implants. The work concerns the development of new chips that can be more flexible. But there are also implications in terms of who controls the chips, or what to do if the company that produces a chip goes bust, which has happened [after a clinical trial in Australia]. In that case, the court decided the chip had to be removed. What are the social implications of this technology? Someone could opt to have a brain implant to enhance their performance. All this sounds a little bit futuristic, but it’s already here. Chips have been implanted in people for a while, especially to address serious cognitive challenges.

We also work on digital infrastructure. We just finished a paper on the governance and control of undersea cables. Companies are being reined in by governments, which are slowly but firmly realizing that almost all internet traffic goes through these cables. Whoever controls the cables controls the internet. Today, governments want to control the infrastructure. We’re studying the implications of that.

How does philosophy lend itself to addressing the ethical issues of the digital revolution?

Floridi: I belong to the Greek tradition of philosophers, who involved themselves in the real world. Plato and Aristotle established and ran institutions, the Academy and the Lyceum, respectively. These were people committed to pursuing the truth and grappling with real-world problems. Socrates died for his ideas, not that I want to go down that road. The Greek tradition is highly intellectual, but also practical and pragmatic. It’s concerned with philosophical problems, not philosophers’ problems. Sooner or later, everyone faces philosophical problems. Life is full of them. They are small, big, and immense. Why are we here? Is there an afterlife? That’s human essence.

The digital revolution is truly historical. Humanity hasn’t often made this kind of enormous leap. The agricultural revolution and the industrial revolution are other examples. So much is changing so dramatically that a lot of the conceptual tools that we have need to be completely rethought and others need to be designed. It’s a magical time for philosophy. It’s an opportunity we cannot miss.
