Businesses are already using AI, but how are they addressing its risks?
This fall, Kevin Werbach, professor of legal studies and business ethics at the Wharton School of the University of Pennsylvania, is leading a new course as part of the school's executive education program that will focus on "accountable AI," a term he uses to refer to the practice of understanding and addressing AI risks and limitations.
Some 18% of S&P 500 companies mention AI in their 2023 annual reports as a "risk factor," Werbach explains on the first episode of his podcast, The Road to Accountable AI, which launched in April.
"That probably should have been higher," he says on the podcast.
In an interview with The Inquirer, Werbach explains some AI issues and what accountable AI entails.
"There have been example after example of failures of AI systems, systems that are biased based on race and gender and other kinds of characteristics, systems that make very significant errors," he said. "The ordinary person out there who is hearing about AI is probably hearing at least as much about the failures as the benefits today. The reality is both are real, but to me that means we need to think about how to maximize the benefits and minimize the dangers."
The interview has been condensed for brevity and clarity.
What is accountable AI?
Accountable AI is the phrase I prefer to describe the practice of understanding and addressing the various kinds of risks and limitations that come with implementing AI. People sometimes talk about AI ethics, which is a piece of it, but there's a danger that focusing only on ethics leads people to emphasize principles rather than practical steps organizations can take. Sometimes there's talk about responsible AI, which is closer, but that still doesn't necessarily get organizations focused on how to create effective structures of accountability. So I talk about accountable AI as the entire set of frameworks for understanding issues with AI systems, figuring out how to manage them, govern them, and mitigate risks, and putting into place mechanisms of accountability so people feel that they are the ones who have the obligation to take the actions that are necessary.
In your podcast you mention a Pew 2023 survey in which a majority of respondents were more concerned than excited about AI in daily life. Why are you excited about AI, and do you think others should be, too?
I'm excited about AI because there are countless ways we can use it either to do things humans are already doing, better or faster, or in some cases to do things at scales that humans really can't do effectively. That's especially true with the rise of generative AI, where this is not just machine learning developed and implemented by data scientists; it's something anyone in an organization can interact with directly. There's just an infinite number of places where we're going to find ways that AI will make business more effective and potentially make people's lives better.
On the other hand, there are huge, huge problems and concerns. Those range from small-scale issues, such as large language models (generative AI systems like ChatGPT) hallucinating and creating information that's simply wrong, to potentially catastrophic effects of these technologies being used for weapons development and terrorism, and everything in between.
What does regulation for AI look like today? Should businesses be creating their own internal policies?
Definitely businesses should be creating their own internal policies. There are a whole range of different issues … So for example, if you do not have an enterprise license to these tools, then typically any queries that you send to the chatbot get stored and can be used by the company that's providing that service. If you're in a financial services firm and someone is asking a question that reveals very sensitive, confidential information for the firm, you might just think "it's like I'm typing in a search term to Google," but potentially, you're giving up sensitive private information, which could then be used to train future models and [be] accessible to the rest of the world … that's something an organization should think about.
What are some of the ethical considerations businesses should be thinking about when choosing to implement AI tools into their processes?
It's really important for businesses to think about which general ethical principles are important to them. That's something they probably should be doing already with technology. We've had many years of controversies about issues like privacy and security and fairness; those are ethical values that are relevant to technology in general and very relevant to AI … If a health-care firm is using AI to read X-rays effectively, that's something different from a marketing firm that's using AI to generate copy, but there are ethical issues in both contexts. It's really a matter of mapping out the major ethical issues that the firm's concerned with.
Do you think more businesses should be using AI? Is this a good moment for businesses to be trying this out?
Everyone should at least understand what the technology is and what it's capable of. This generation of generative AI systems, the chatbots and so forth, is so novel and powerful in ways that we haven't really experienced before that everyone should at least get a handle on them. It doesn't necessarily mean everyone needs to adopt them or that they're going to change everyone's life overnight, but everyone should understand what they do well and don't do well, how far along we are, and how fast the technology is evolving, because otherwise they're going to be surprised. This is accessible to everyone, so your competitor very well may be experimenting with the technology if you're not.
What are some of the risks of using AI tools, and how can businesses protect themselves or mitigate them?
One risk is accuracy, especially with generative AI. One big challenge is you may not quite understand why the AI produced a certain result, which in some situations might not be important. But in a situation where, let's say, the AI system says hire this person and not that person, and the person who didn't get hired challenges that decision, how do you explain that "the AI told me to"? There's a big set of technical challenges around explaining AI.