The board’s oversight in the age of AI: ethics, compliance and competitive advantage

Artificial intelligence (AI) is reshaping the global business landscape at an almost dizzying speed. Many organizations, and their directors, are eager to take advantage of its potential to transform business operations and service delivery, and fear being left behind if they don’t. As with any evolving business question, corporate boards across Canada may be questioning their role and responsibilities in guiding their organizations across the AI frontier. Though the specifics will vary across sectors, successful AI strategies rely on the same key components, underpinned by strong governance.

Directors in Canada are already well-versed in balancing innovation against an evolving regulatory environment and emerging best practices. Navigating the rapid pace of change will require corporate boards to keep an eye on the future of AI regulation and on emerging Canadian and global standards, including the NIST AI Risk Management Framework and the proposed federal Artificial Intelligence and Data Act. And while the board of directors will have a role in ensuring the corporation has the procedures it needs to comply with new rules on AI and data use, directors are also in the key position of encouraging and guiding the organization’s AI-driven growth and ensuring that value is realized.

This Update aims to answer some of the key questions that corporate boards may have as they face the challenges posed by new AI technology.

How should the board think holistically about the integration of AI into corporate strategy?

AI priorities must be aligned with business goals. Boards of directors looking for the key places to begin applying an AI strategy should start by taking stock of their existing business strategy and core corporate goals, and then assess how AI can be integrated into the organization’s plans.

The businesses that are most successful in their AI strategies pursue use cases that solve a clear business problem. Initial steps typically involve looking at ways to optimize or automate business processes, and pursuing those opportunities that require lower implementation effort. Calculating the business case for cost savings is usually more straightforward, and early wins can be crucial to achieving stakeholder buy-in for a subsequent and grander AI vision. Experience implementing such projects builds internal expertise and know-how that better prepares the organization to tackle more ambitious opportunities.

Beyond early quick wins, thoughtful operationalization of an AI strategy will often require the organization to look beyond short-term efficiencies and consider longer-term, systems-level approaches that may help it distinguish itself in the marketplace, such as new means of delivering services or offering new services that were not possible before. Corporate directors will need to apply experience and judgement in assessing management’s capabilities and the organization’s readiness to identify innovative business opportunities that apply AI and data, in overseeing project implementation and the integration of AI into core business goals and processes, and in holding management accountable for investment outcomes.

What are the key pillars of an AI governance strategy?

A responsible end-to-end AI governance strategy should be a priority. The following are common foundational pillars for corporate boards to consider:

  • Accountability and risk management. Good corporate governance requires clear lines of accountability regarding the development and use of AI within a risk management framework and compliance program proportionate to the scale of the activities and their impact internally and externally.
  • Ethics. A board of directors should consider the ethical issues arising from implementing AI in the business, including concerns about the gathering of underlying data and the potential uses of the outputs of AI systems, and assess whether the system aligns with the organization’s values.
  • Transparency. As with existing data and privacy frameworks, ensuring transparency regarding both the use of, and the risks associated with, an organization’s AI tools will be fundamental to addressing the risks of AI use.
  • Bias and fairness. AI systems have the potential to adversely impact fairness and equity, such as by perpetuating biases, and a board of directors should assess management’s plans to mitigate such risks.

For Canadian organizations, ISED’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems serves as a valuable reference.

What are the key risks and ethical concerns relating to AI adoption?

Corporate boards should keep abreast of, and regularly assess, management’s plans to proactively mitigate the key risks associated with the use of AI systems. Though risks will be evolving and context dependent, some of the key challenges to be aware of include:

  • Reputation and fairness. A board of directors should develop or update clear guidelines and ethical standards for the development and use of AI, review management’s proposed measures for monitoring the operation of AI systems, and assess management’s plans for addressing any reputational risk that may arise from unmanaged deployment.
  • Confidentiality. A board of directors should consider whether existing policies relating to confidentiality may need to be clarified in the context of the use of AI, such as prohibiting submitting sensitive, confidential or proprietary information into externally provided generative AI services.
  • Intellectual property. AI models, as well as their attributes and architecture, may not be adequately protected by existing intellectual property frameworks. A board of directors needs to carefully consider how the organization approaches protecting the competitive advantages it may realize from AI models.

Boards should require management to report regularly on the systems it will implement, and the measures it will use, to evaluate the operation of AI systems, and should require the organization to periodically review and update its risk management frameworks as its adoption of the technology evolves.

While responsible implementation is important, failing to adapt swiftly to new technological possibilities and to integrate AI into strategy early can also affect an organization’s long-term viability.

What is the responsibility of board members with respect to AI education?

To successfully guide organizations through the adoption of AI, corporate boards must have an understanding of the opportunities and risks associated with AI use, especially as they relate to the organization’s strategy. As both the capabilities of the technology and the pace of regulatory initiatives in this area are rapidly evolving, board members will also need to update their knowledge on a regular basis.

Our clients are conducting periodic director education sessions, involving relevant reading materials and presentations by management, as well as periodic presentations from outside experts (such as technologists, engineers, and ethicists) to help their board members enhance their knowledge.

How should the board allocate responsibility for AI oversight?

As AI becomes integrated into corporate strategy, directors should have some level of understanding of AI and of the organization’s approach to its investment, development, use and deployment. In addition, diversity of backgrounds is critical to the adoption of responsible AI, and collaboration across disciplines is important to making well-rounded decisions on the use of AI that reflect the exercise of informed business judgement.

It can be helpful to assign responsibility to a dedicated technology or AI oversight committee for conducting a deeper dive into the organization’s AI opportunities, risks and plans, for reporting to the board of directors as a whole, and for overseeing the deployment of approved AI initiatives.

Looking forward

A fully committed leadership will be the key to any success in the adoption of AI. Furthermore, as government and regulatory bodies draw more attention to the need for responsible governance and regulation of AI, directors will need to maintain a similar level of awareness and prepare for increased scrutiny. Given AI’s potential for strategic, operational and competitive disruption, the board has a key role to play in overseeing AI adoption and steering the organization across this new frontier.
