We’re still learning what artificial intelligence can do to public discourse, and to the places where we talk to each other these days.
It’s a big question: how will AI shape the way we debate, and the arguments we make to one another about our political beliefs?
Some initiatives are starting to grapple with these issues. The EU AI Act, for instance, while perhaps not directly addressing the moderation of debate, establishes transparency guidelines for digital interactions and data. As European Commission spokespeople explain:
“As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy… (the act) says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.”
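To make that risk-based idea concrete, here is a small, illustrative sketch of the Act’s four risk tiers. The tier names reflect the Act itself; the example systems attached to each tier are simplified illustrations, not legal guidance:

```python
# Illustrative sketch of the AI Act's risk-based classification.
# Tier names follow the Act; example systems are simplified illustrations.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring by public authorities)"
    HIGH = "strict obligations before deployment (e.g., hiring or credit-scoring tools)"
    LIMITED = "transparency duties (e.g., chatbots must disclose they are AI)"
    MINIMAL = "largely unregulated (e.g., spam filters, game AI)"

def obligations(tier: RiskTier) -> str:
    """Return the rough regulatory consequence for a given tier."""
    return tier.value

for tier in RiskTier:
    print(f"{tier.name}: {obligations(tier)}")
```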
At a Davos Imagination in Action panel moderated by MIT Connection Science professor Sandy Pentland, we had a number of important voices talking about these issues, including Gabriele Mazzini, who not only worked on the AI Act at the European Commission but is also an MIT Connection Science fellow, having spent time in Boston researching information technology.
In the panel, we talked about some of these possibilities…
Sandy started with Lily Tsai, director of MIT’s GOV/LAB program and Ford Professor of Political Science.
“There’s a lot of economic and social uncertainty,” she said of the current era in which AI is emerging. “We’re in a period of global polycrises – this makes people anxious and vulnerable to populists and demagogues who want to take advantage of our vulnerability, and it means that if they promise us social and moral order, we might take their offer, and be okay with dismantling democratic institutions.”
Social media companies, she suggested, have also impacted the scenario in their own ways. “(They) have stumbled into creating the perfect conditions for autocrats and demagogues to take advantage,” she said.
When platforms like Facebook and Twitter make people angry and afraid, she added, they often keep them from coming together to work on solutions to problems. She described a “moderate majority,” exhausted by online flame wars and anxious about being scapegoated by the extremes, that may withdraw from public discourse altogether.
“We need different kinds of online spaces that are slower, cooler and less viral,” Tsai said.
At MIT, she noted, scientists are looking at genAI tools for online spaces that foster deliberation and democracy, where, for example, chatbots may advise people on how to craft effective rhetoric or more cohesive responses to questions.
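Here is a minimal sketch of the kind of deliberation assistant Tsai describes: before a heated comment is posted, a chatbot offers a cooler, more constructive draft. The prompt wording and the `llm_complete` helper are hypothetical stand-ins, not anything MIT has published:

```python
# Minimal sketch of a "deliberation assistant" for an online forum.
# `llm_complete` is a hypothetical placeholder for a real text-generation API.

COACH_PROMPT = (
    "Rewrite the following forum comment so it makes the same point "
    "without insults or inflammatory language, and invites a response:\n\n{comment}"
)

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in whatever API the platform uses."""
    raise NotImplementedError("plug in a real text-generation client here")

def suggest_rewrite(comment: str) -> str:
    """Return a calmer draft the user can accept, edit, or ignore."""
    return llm_complete(COACH_PROMPT.format(comment=comment))
```

The key design choice, in Tsai’s framing, is that the tool coaches rather than censors: the user keeps full control over what is actually posted.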
“A lot of this can really work,” Pentland said, turning to Mazzini for an explanation of some of the ways that the European Commission worked on the AI Act.
“We did not have in mind this topic of questions around democracy as we started thinking (about it),” Mazzini said, enumerating some of the challenges that the AI Act meant to address. “The essence (of the planning) was – for safety, impact on fundamental rights, and so on. … We foresaw transparency obligations when it comes to generated content.”
Mazzini cited the Act’s broad definition of AI, which helps ensure transparency for media, and said the AI Act is just one tool in the toolbox for responding to “a risk of deception” and dealing with things like deepfakes.
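As a toy illustration of that transparency obligation, generated media could ship with a machine-readable disclosure. The field names below are my own invention, not taken from the AI Act or any real provenance standard:

```python
# Hedged sketch: attach a simple "this was AI-generated" label to content.
# Field names are illustrative, not from the AI Act or a provenance standard.
import json
from datetime import datetime, timezone

def disclosure_label(model_name: str, content_type: str) -> str:
    """Build a machine-readable disclosure to attach to generated media."""
    return json.dumps({
        "ai_generated": True,
        "model": model_name,
        "content_type": content_type,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(disclosure_label("example-image-model", "image"))
```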
“It was a quick process,” he said: work on the file started in 2019, and the Commission’s proposal was published in April of 2021.
Sandy also talked to MIT Media Lab PhD student Robert Mahari, who discussed the goal of making AI tools better regulated, by design.
“AIs are good at being optimized to certain goals,” he said. “We can (attempt to) figure out what kinds of harms we really want to guard against – design metrics for these, and then optimize systems around preventing these harms.”
He likened it to a classic example: elevator design.
“If anything goes wrong, you stop,” he said, citing a need to deal with issues like privacy on the front end, and to design AI to achieve regulatory objectives. He also called for dialogue between regulators and innovators, and for a focus on “measurable risks.”
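As a toy version of Mahari’s point: once a harm can be measured, it can sit in the training objective alongside task performance. The harm metric and the weighting below are illustrative assumptions, not anything presented on the panel:

```python
# Toy "regulation by design" objective: task performance plus a harm penalty.
# The harm metric and the weight `lam` are illustrative assumptions.
import numpy as np

def task_loss(predictions: np.ndarray, targets: np.ndarray) -> float:
    """Ordinary accuracy-style objective (mean squared error here)."""
    return float(np.mean((predictions - targets) ** 2))

def harm_metric(predictions: np.ndarray, group: np.ndarray) -> float:
    """Example measurable harm: gap in average prediction between two groups."""
    return abs(float(predictions[group == 0].mean()) -
               float(predictions[group == 1].mean()))

def regulated_objective(predictions, targets, group, lam: float = 1.0) -> float:
    """Optimize task performance while penalizing the measured harm."""
    return task_loss(predictions, targets) + lam * harm_metric(predictions, group)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.random(100)
    targets = rng.random(100)
    group = rng.integers(0, 2, size=100)  # two hypothetical groups
    print(regulated_objective(preds, targets, group, lam=0.5))
```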
In her concluding remarks, responding to the discussion, Tsai returned to the idea of appealing to the “better angels of our nature,” and how AI might be made to cultivate them.
Mazzini pointed to an inherent limitation of the law: its slowness makes it hard to keep pace with something as fast-moving as artificial intelligence. Even so, he argued for its utility in the AI era.
“The law has the benefit of democratic legitimacy,” he said.
Listening to all of this, I kept thinking about the double-edged sword that AI poses: on one hand, all sorts of applications could genuinely improve how we talk to one another and how we collaborate across our differences; on the other, we have to work hard to make sure these tools don’t come to dominate our thinking in inappropriate ways.
In other blog posts, we see experts in this field talking about exactly how to manage that divide and strike a happy medium, balancing all of these values when addressing the impact of AI in our lives. This panel was especially instructive on the transatlantic deliberation process that is integral to what entrepreneurs and others are doing right now.