In 2024, a Toledo man was pulled over by police, attacked by a police dog and wrongfully arrested in what turned out to be a case of mistaken identity. AI cameras installed and run by Flock Safety misread the man’s license plate, mistaking the number seven for a two, triggering an incorrect report of a stolen vehicle. It’s one of a growing number of cases linking failures by Flock Safety’s AI-powered license plate readers to real-world harm. In another example of the potential perils of AI use, Workday, a cloud-based enterprise software company that provides applications for human resources and finance, is currently defending a class-action lawsuit brought by job candidates who assert that the company’s AI-powered applicant screening software led to racial, age and disability discrimination in hiring processes across several employers.
The AI revolution is here, and boards are being called to provide more oversight, even as the implications for both strategy and risk are still coming into focus. The Flock and Workday cases underscore how ethical and responsible use of AI is now a board-level governance issue, requiring oversight of both internal and third-party AI, including bias auditing, transparency and human accountability, amid rising litigation and regulatory scrutiny.
While how a company thinks about AI use will vary based on industry and business model, there is widespread agreement that every board should be discussing it. “Never in my career have I seen technology that stacks up to AI in terms of its breadth of impact on things like strategy, productivity, the customer experience and cybersecurity in particular, but especially people and talent and how work gets done and the nature of work,” says Dan Hesse, a former technology executive who now serves as chairman of Akamai Technologies, sits on the board of PNC Financial Services (where he chairs the technology committee) and serves on the private board of LAN Party Technologies. “It’s going to have an enormous impact.”
Liat Ben-Zur, who serves on the boards of TalkSpace, Compass Group and Splash Top Inc., and advises companies on AI strategy, agrees. “There is not a single company in the world that shouldn’t be talking about AI. Not because I think that every single company needs to become a tech company and not because I think that every single product is ideal for AI. But because, if you are a company, if you are a going concern, it is very unlikely that you don’t have a finance group, an HR group, a marketing group, a sales group and a customer support group, and I think AI is totally transforming every single one of those functions.”
The board’s role is to ensure thoughtful governance structures are in place, including clear boundaries and policies, defined accountability and control mechanisms, and oversight of risks like bias, privacy breaches and security vulnerabilities. “When people hear the term ‘ethical oversight of AI,’ they tend to use it as a bit of a value statement. To me, it’s a governance job,” says Ben-Zur. “It means that the board is making sure that the company has some clear boundaries, some clear control mechanisms, disciplined accountability for AI systems, that it’s looking out for systems that could potentially harm people, whether that’s because of bias or privacy breaches or manipulative recommendations. It’s looking out for AI that opens up new security risks.” For the board to provide this oversight effectively, it must first level-set how the company and the board itself are thinking about and approaching the responsible use of AI. The following questions are an excellent place to start.
Do we understand how AI is already being used inside the company? Even if AI is not yet core to the business strategy, tools that utilize it are becoming ubiquitous across our homes and workplaces, all but ensuring that AI is already in use inside the business — whether you know it or not. Thus, says Ben-Zur, the board should start by asking how AI is already being used inside the organization. “I would start with a request for leadership to show the board an inventory of AI use cases across the business and across functions, operations and products. That should include vendors that we might be using. Because if we can’t even list where we’re using AI, then we definitely aren’t going to be able to govern it,” says Ben-Zur. “The question then becomes, what is the governance around third-party tools that we are bringing in, just like you would have around cybersecurity vendors and third-party vendors of other critical functions?”
Understanding the AI use cases inside the company also helps the board ensure those uses are being properly categorized by level, enabling more effective oversight. From both a liability and a reputational risk standpoint, a board should probe where a company’s applications of AI fall along a spectrum of low-, medium- and high-risk use cases. High-risk use cases could include hiring and firing, compensation and benefits, lending, health care decisions, pricing, protection and safety considerations, and purchasing decisions. While most use cases will likely be low-risk, Ben-Zur emphasizes that the management team should flag high-risk use cases, and those areas should warrant deeper scrutiny. “In those instances, you want to understand what tests we are going to require before we launch our own AI. And then what monitoring are we going to do after launch to ensure that we’re tracking if there’s drift or bias or problems in production? You want to make sure there’s a human override or appeals path for those high-risk use cases. It’s important to at least document and talk about that as a board.”
Do we understand what data we are using and how? Data is the raw material foundational to every AI system. How well those systems function depends on the quality of the data going in. From a board perspective, this means, at its core, oversight of ethical AI use is a data-governance issue. This requires the board to pressure-test with leadership how data is sourced, validated, updated and monitored over time.
“I’ve seen the word ‘ethical’ thrown out a lot in the context of AI, and I think it’s the wrong word most of the time,” says Hesse. “I don’t think it’s an ethical decision as much as it is a values decision. For example, is protecting customer and employee privacy an important value of your company? Breaking the law regarding privacy is an ethical issue, but some companies, where privacy is a core value, will go beyond what the law requires. It’s legal to lay off employees, but does your company value training its people and making jobs more meaningful and enjoyable? Your AI guardrails, in addition to obeying the law, are your company’s values. I may not want to work for a company that only complies with the minimum required by the law when it comes to privacy and jobs, but I wouldn’t say they’re unethical. I prefer the word ‘responsible’ over ‘ethical’ in most cases.”
Peter Wilson, a former board member of QBE and former CEO, insurance, at AXIS Capital, has spent a lot of time understanding how AI is impacting the insurance space — a place with plenty of high-risk use cases that govern things like who gets coverage, how much they pay for that coverage and how benefits are paid out. Data quality is one of the first things he pressure-tests when evaluating AI use. “When I hear someone getting excited because they cut application processing time by using these tools, my first response is, ‘Great. Where did you get the data, and do you really understand what you’ve built?’” says Wilson. He stresses the need to pressure-test management’s thinking and approach, especially when the focus skews heavily toward cost savings and the prioritization of efficiency and speed for their own sake. The board, in its oversight role, should ask questions that help the leadership team zoom out and consider value creation more holistically. “I want to know, can we build something that will actually help deliver a better, more understandable and transparent product to our customers?” says Wilson. “And then controlling the data, data privacy, regulatory constraints, all those other things that layer on top of that. The issue is, ‘How expansive is the data set? Where are you getting that data set from? Is it a valid data set? Or are you looking at a complete black box where you have no idea what’s going on inside the box?’”
Do we have a culture of shared risk oversight and are we balancing innovation with risk mitigation? Data governance can provide a strong foundation to help the board balance speed with responsible use, a key tension when it comes to AI adoption. “I want people to be pushing to say, how do we become more innovative? How do we do things better, quicker, cheaper and more responsive to customers? I want all that, but we can’t have it run amok. We need to understand that there are both lots of benefits and lots of potential pitfalls associated with this technology, and to make sure that we’ve got the right balance,” says Wilson. “You need to make sure that those people sitting on the front lines, who are actually utilizing this technology day in and day out, have a risk mindset associated with it. I know lots of companies do essentially risk temperature checks to see if people are self-reporting issues within a company to say, ‘Hey, I found this issue.’ As opposed to saying, ‘I hope internal audit doesn’t find this.’ They’re actually coming forward and saying we need to find a way to button this up.”
Wilson also points to the need to balance the needs of all key stakeholders when considering the potential risks, especially through the lens of responsible use of AI. “Ethics in this regard frequently involves confidentiality, conflicts of interest and moral trade-offs. There are instances where driving increased shareholder value through AI adoption might not be good for your customers. For example, there is a trade-off between efficiency and competitive advantage on one hand and the environmental impact of this technology on the other, a trade-off that benefits shareholders but at probable expense to our broader society.”
Ken Daly, the former CEO of the National Association of Corporate Directors, asked new board members learning about oversight of financial controls, “Why do you have brakes on a car?” The answer? “So you can go fast.” This maxim is especially apropos when it comes to AI adoption. As headlines amplify AI’s sweeping impact across industries, organizations risk feeling pulled between paralysis and urgency. But the fundamental tension between risk and speed is not new, says Hesse, and it’s one the board is uniquely positioned to help leadership navigate.
“There is a propensity to want to move fast [with AI]. Your competitors are likely moving quickly. There are plusses and minuses to going ‘all in’ and maximizing AI adoption broadly and quickly versus taking incremental steps to make sure you have it right before proceeding to the next step,” says Hesse. “Just make sure your eyes are open to what the risks are and have systems, gauges or key performance indicators along the way so you know if something’s starting to go off the rails a little bit. But, again, a company’s values and, therefore, its AI guardrails should never change, whether you’re adopting AI rapidly or taking more of a measured approach.”
Do we know who is ultimately accountable for AI use? As with all enterprise risk, assigning accountability for oversight is key. This is true both for leadership and the board itself.
When it comes to accountability within the company, it’s critical that internal audit is sufficiently up to speed to fulfill its role, says Wilson. “I want to know, is the internal audit department sufficiently staffed and capable to be able to understand these models? Is there transparency through the process, where they can see exactly what’s going on, whether the models are drifting at all, whether we can validate the outcomes that are coming out and we’ve eliminated as best we can all bias that would be introduced.”
Ben-Zur suggests that, while it’s critical to identify accountability, it should not come at the expense of empowering the full leadership team to embrace responsibility for both strategy and risk across their individual functions. “As an advisor, I talk about the importance of distributed ownership so that there’s a feeling of innovation, ownership and accountability from an operational perspective. You want the chief marketing officer to own AI in marketing and how it completely redefines marketing. You want the chief revenue officer to own AI in sales and customer support and rethink how it changes sales and customer support completely. Same thing in finance, same thing in operations. You want those leaders to feel like they own AI, not that they’re going to rely on some tech guy to come in and tell them what tool to use, because that’ll never win long-term. From a board perspective, though, it’s important to have clarity on who’s the accountable executive overall for the company on day-to-day risk and what’s the escalation path to the CEO and to the board. Because if everyone owns it, then no one owns it. There’s a nuance there that I think is important.”
The same is true for the board itself. AI oversight is becoming so critical that it should not live solely with any one committee or director. Ultimately, says Hesse, board oversight of AI is the responsibility of the full board. “But if there are going to be many AI issues, you can’t have everything come to the full board because if everything’s a priority, nothing’s a priority. The board chair, lead independent director and the committee chairs should decide where each potential AI issue should reside, and if a committee issue rises in importance, the committee chair should know when to bring it to the full board.”
Hesse continues: “AI is having a profound impact on work and organizational structure — where work gets done, how many people you hire and fire, evolving skill sets and training. These issues, in my view, should sit with the human resources or compensation committee. Other AI issues, like mergers and acquisitions, for example, might sit with the finance committee. The ‘committee of last resort’ ideally is the technology committee, for those boards that have them. Or, because of the large enterprise risk management implications, the committee of last resort could be the risk or audit committee. But I like to see boards spending more time looking at AI’s opportunities than its risks.”
For the nom/gov committee, this may mean updating your board skills matrix and board succession planning to reflect the right technology expertise and ensuring committee charters reflect these new oversight mandates.
In other words, AI (and the opportunities and risks it presents) is so wide-ranging that each committee should be thinking about oversight of AI and other exponential technologies. If your committee charters don’t reflect this, you might have a gap in oversight.
How am I keeping up to speed on what I need to understand to provide effective oversight? Continuing board education is core to effective oversight in every area of ethics and governance, but it is especially critical now as AI becomes a strategic priority that can reshape the business. While deep technical expertise isn’t required, every director needs enough fluency to challenge assumptions, oversee governance frameworks and weigh both the risks and opportunities AI presents across the enterprise. Just as AI is a full-board concern, so, too, is education on the issues it raises. “I think you’ve got to really push on the board because a lot of the board members come from an era where this is not a technology that they’re very comfortable with or understand in terms of its complexity,” says Wilson. “You need to be far more deliberate around it. Bring experts into the boardroom or have board members go through specific training where they’re going through various modules and really understanding this technology. And it’s not one that’s once and done. This is a continuous learning issue.”
One of the most exciting things about AI oversight is that it creates a unique opportunity for the board and leadership team to learn and problem-solve together, says Hesse. “It’s a great opportunity for the board and management to engage in new areas of strategy. Boards can veer from governance into management, which is rarely helpful, when reviewing current company operations and performance, and meetings can devolve into a backward-looking interrogation instead of positive, future-focused creativity. AI provides an opportunity for boards and management to engage in a more constructive and collaborative way.”
As AI continues to evolve, the question is not whether it will reshape the business, but whether governance will evolve quickly enough to shape its impact.