How Companies Can Take a Global Approach to AI Ethics

Getting the AI ethics policy right is a high-stakes affair for an organization. Well-publicized instances of gender bias in hiring algorithms or job-search results can diminish a company’s reputation, put it at odds with regulators, and even attract hefty government fines. Sensing such threats, organizations are increasingly creating dedicated structures and processes to embed AI ethics proactively. Some companies have moved further along this road, creating institutional frameworks for AI ethics.

Many efforts, however, miss an important fact: ethics differ from one cultural context to the next. First, ideas about right and wrong in one culture may not translate to a fundamentally different context. Second, even when there is alignment on right and wrong, there may well be important differences in the ethical reasoning at work — cultural norms, religious traditions, and the like — that need to be taken into account. Finally, AI and related data regulations are rarely uniform across geographies, which introduces heterogeneity into the compliance aspects of AI ethics. Failing to account for these differences can do real damage to companies and to their customers.

Right now, emerging global standards around AI ethics are largely built around a Western perspective. For example, the AI Ethics Guidelines Global Inventory (AEGGA), a centralized database of reports, frameworks, and recommendations, collected 173 guidelines by April 2024 and noted that “the overwhelming majority [came] from Europe and the U.S.”  — not as global as one may imagine. Yet many companies simply adopt these standards and apply them globally.

Western perspectives are also implicitly being encoded into AI models. For example, some estimates show that less than 3% of all images on ImageNet represent the Indian and Chinese diaspora, which collectively account for a third of the global population. Broadly, a lack of high-quality data will likely lead to low predictive power and bias against underrepresented groups — or even make it impossible for tools to be developed for certain communities at all. LLMs can’t currently be trained for languages that aren’t heavily represented on the Internet, for instance. A recent survey of IT organizations in India revealed that the lack of high-quality data remains the most dominant impediment to ethical AI practices.

As AI gains ground and dictates business operations, an unchecked lack of variety in ethical considerations may harm companies and their customers.

To address this problem, companies need to develop a contextual global AI ethics model that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams. This is particularly necessary if their operations span several geographies.

Adding Context to AI Ethics

Many companies are in a strong position to build a global, contextual AI ethics process from the ground up. This is primarily because they don’t have one yet — there’s nothing to dismantle or rework, no protocols on which people need to be retrained. This is arguably a better position to be in than that of companies that have already rolled out a single, monolithic AI ethics policy. Organizations that are in the midst of defining their policies are well positioned to get this right.

Based on our work, as well as interviews with relevant stakeholders representing AI users and developers from different geographies, developing and implementing a contextual AI ethics policy for the organization requires three steps.

First, companies need to agree on driving principles that need to be applied across geographies. In most of the cases we looked at, the global team was responsible for this initiative: they worked with teams in different parts of the world to develop, discuss, and refine these guiding principles.

Second, relevant teams need to be set up across the organization’s different regions. A significant presence in a geographical location — or a plan to expand there soon — warrants a cross-functional team. (How exactly one should constitute this team is a subject for a different article.) These teams will be responsible for operationalizing the global team’s AI ethics framework, and should be rewarded accordingly — otherwise organizations risk heaping more responsibility on already marginalized groups. In some cases, organizations may need to bring in outside expertise, particularly in regions where the organization still has little experience.

Third, the global leadership team needs to engage these regional AI ethics teams in a series of conversations during initial development. Feedback from these teams needs to be incorporated into the company’s AI strategy, which is then passed back to the local teams, which are empowered to adapt it to their context. These adaptations need to be communicated to global leadership and refined based on feedback.

For an example of how this works, consider Hewlett Packard Enterprise (HPE). At HPE, the Chief Compliance Officer partnered with the company’s AI research lab to write global AI principles, pulling in representation from every function and product division. With geographical diversity built into the team, the ethical considerations on the table were more likely to be representative of the places where the company operates. HPE’s compliance team, for example, created a globe-spanning matrix of principles and geographically specific regulations and governmental frameworks, ensuring that HPE’s global principles were filtered through a local lens.
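
To make the idea concrete, here is a minimal sketch, in Python, of what such a principles-by-region matrix might look like in code. The structure, principle names, and regional entries are illustrative assumptions, not HPE’s actual framework.

```python
# Minimal sketch of a principles-by-region matrix (illustrative only).
from dataclasses import dataclass


@dataclass
class RegionalLens:
    """Regulations and notes that filter one global principle in one region."""
    regulations: list[str]
    notes: str = ""


# Rows are global principles; columns are operating regions.
# The entries below are assumptions for illustration.
PRINCIPLE_MATRIX: dict[str, dict[str, RegionalLens]] = {
    "privacy-respecting": {
        "EU": RegionalLens(["GDPR"], "Strict data-minimization expectations."),
        "India": RegionalLens(["DPDP Act"], "Consent requirements still evolving."),
    },
    "human oversight": {
        "EU": RegionalLens(["EU AI Act"], "High-risk systems need documented oversight."),
        "Japan": RegionalLens([], "Mostly soft-law guidance; rely on internal policy."),
    },
}


def local_view(principle: str, region: str) -> RegionalLens:
    """Return the regional lens for a principle, or an empty lens if none is recorded."""
    return PRINCIPLE_MATRIX.get(principle, {}).get(region, RegionalLens([]))


if __name__ == "__main__":
    lens = local_view("privacy-respecting", "India")
    print(lens.regulations, lens.notes)
```

The point of such a structure is simply that every global principle gets read through an explicit, queryable local column rather than being applied verbatim everywhere.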

Continuous Interaction When Contextualizing AI Ethics

As a starting point, one must consider the possibility that top management may be insufficiently aware of the local context and, hence, may see a deviation from the global AI ethics directive as a mistake. For instance, a global AI ethics team that is examining an employee pay algorithm might mandate that employee leave not be factored into promotion decisions. The motivation here is to encourage men and women to take parental leave or sick leave if necessary without worrying about the impact on their careers. In countries such as Japan, however, such a policy would likely need to be heavily modified. Groundbreaking work by Hilary Holbrow has shown that in Japanese companies, employees perceive policies such as this as deeply unfair. Culturally, it wouldn’t be appropriate to deploy a policy in this way without devoting significant resources to gaining buy-in for this approach.

This example shows that equality-based goals embedded in AI algorithms, while recognized as a favorable policy change in large parts of the globe, may not elicit the same positive response everywhere. In some cases, local stakeholders are creating data resources that help organizations read the context more accurately. Canadian First Nations communities, for example, have an established framework and training program regarding data collection and usage that is essential for any organization that wants to operate in this area.

Beyond enhancing the awareness of the local context, continuous engagement can also help global leadership strike a delicate balance of deferring to local teams in some cases but overruling them in others. HPE approached this problem by building automated processes. When starting a new initiative or bidding process that involves AI, their compliance software automatically schedules a meeting between team personnel and members of the local AI governance group. The local team provides context for the conversation, while the global team provides more high-level expertise on HPE’s principles and AI governance framework. Over time, this has built up a “case law” within HPE of how to approach different AI ethics issues.

HPE was grappling with a basic challenge: the exceptions and local questions that matter most are, by definition, unknown at the global level. Rather than attempting to create an AI ethics policy that exhaustively lists different scenarios, which would inevitably leave something out, HPE built a general framework and process that allows specific questions to be answered and a track record to accumulate over time. This approach also copes with an inherently dynamic world: even if something is ethical today, that may change in the future. A process in which ethical decisions are tracked and revisited accounts for that dynamism.
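
As a rough illustration of this idea, the sketch below shows one way a “case law” ledger of ethics decisions could be recorded and queried. The record fields, function names, and review cadence are assumptions for illustration, not HPE’s actual system.

```python
# Illustrative sketch of a "case law" ledger for AI ethics decisions.
from dataclasses import dataclass
from datetime import date


@dataclass
class EthicsDecision:
    use_case: str     # e.g., "resume screening for regional sales roles" (hypothetical)
    region: str       # where the initiative will run
    question: str     # the ethics question raised by the project team
    ruling: str       # what the local and global reviewers decided
    rationale: str    # why, in the local context
    decided_on: date
    review_by: date   # every decision gets a revisit date, since norms and laws shift


CASE_LAW: list[EthicsDecision] = []


def find_precedents(use_case_keyword: str, region: str) -> list[EthicsDecision]:
    """Return prior decisions that may already cover a new initiative in a region."""
    return [
        d for d in CASE_LAW
        if use_case_keyword.lower() in d.use_case.lower() and d.region == region
    ]


def needs_review(decision: EthicsDecision, today: date) -> bool:
    """Flag decisions whose revisit date has passed; ethics calls go stale."""
    return today >= decision.review_by
```

Each new initiative would first check for precedents; only genuinely novel questions escalate to a joint local-global conversation, which is then recorded as a new entry.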

Unfortunately, there’s no hard and fast rule for how to build these processes. Even in the example above, it may be that equity is such a core value to the company that leadership feels it necessary to override local objections. In general, this should be rare, assuming that global leadership clearly communicates its goals and strategy, and all such decisions should be reviewed at least annually. Periodic review is essential in AI governance — technologies change, local context changes, governance efficacy data is gathered, and company strategy shifts. These regular discussions keep the effort sustainable.

The Importance of Having an AI Ethics Vision

Beyond the process of operationalizing AI ethics, another important consideration emerged from our interviews: a lack of vision. A respondent from the AI ethics team of a leading association of technology companies in India noted that most organizations were adopting a compliance-based view of ethics. That is, organizations adopt AI ethics policies only when their clients, who are mainly based in the West, or local regulators demand them.

A related finding emerged from a recent panel by MIT and Boston Consulting Group, in which participants agreed that the current focus is on the economic benefits of AI. These observations suggest that the AI gold rush has relegated the operationalization of ethical considerations to a lower priority or reduced it to a pure compliance issue. Not only do such narrow approaches run counter to organizations’ publicly stated positions, they may also reduce AI ethics policies to a mere check mark, further dimming the possibility that organizations will emphasize contextualizing those policies.

HPE originally planned to develop its AI ethics principles over a six-week period. The task grew into an exercise of more than a year to develop a framework that was authentic to HPE and to create processes that enable local adaptation. Much of this time was devoted to resolving thorny issues around seemingly simple statements — “We obey the law” might seem trivial, for example, but as one considers the statement, innumerable questions arise. Which law? How do we weigh local laws against global human rights principles? Which stakeholders need to be consulted on these decisions? Companies that aren’t prepared to engage seriously in these discussions will inevitably under-invest in the initiative and instead create an ineffective, check-box framework that slows time to market, leads to inferior products, and ultimately fails to mitigate the liability issues that many companies are concerned about.

Making AI Ethics Configurable through Technology

Finally, our interviews revealed an interesting and more positive trend — namely, that technology products are rapidly occupying the space between externally developed AI models and an organization’s internal users. In doing so, these products are converting a somewhat abstract notion of AI ethics into digitally configurable parameters. For instance, we discussed with the product manager of a large language model (LLM) security product how their product categorizes ethical use into four buckets: religion, gender, race, and health. The product allows the monitoring of an externally developed LLM’s performance on each of these buckets, which are further broken down into more specific terms at the prompt level. The user organization can thus configure these buckets to define the ethically acceptable use of LLMs as part of its AI ethics policy. Such configuration interfaces are improving in efficacy, extend beyond LLMs, and may allow local AI ethics teams to more readily contextualize the broad ethical frameworks formulated by top management.
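
To illustrate what “digitally configurable parameters” can look like in practice, here is a minimal sketch of bucket-based prompt monitoring. The four bucket names come from the interview above; the term lists, threshold, and function names are assumptions for illustration, not any vendor’s actual product or API.

```python
# Illustrative sketch: configurable ethics "buckets" for monitoring LLM usage.
from collections import Counter

# Each bucket maps to prompt-level terms the organization chooses to watch.
# A local AI ethics team could swap in terms appropriate to its own context.
ETHICS_BUCKETS: dict[str, list[str]] = {
    "religion": ["faith", "worship"],
    "gender": ["pronoun", "maternity"],
    "race": ["ethnicity", "nationality"],
    "health": ["diagnosis", "disability"],
}


def score_prompt(prompt: str, buckets: dict[str, list[str]] = ETHICS_BUCKETS) -> Counter:
    """Count how many configured terms from each bucket appear in a prompt."""
    text = prompt.lower()
    return Counter(
        {bucket: sum(text.count(term) for term in terms) for bucket, terms in buckets.items()}
    )


def flag_for_review(prompt: str, threshold: int = 1) -> list[str]:
    """Return the buckets whose configured terms meet the review threshold."""
    scores = score_prompt(prompt)
    return [bucket for bucket, hits in scores.items() if hits >= threshold]


if __name__ == "__main__":
    print(flag_for_review("Summarize the candidate's diagnosis and ethnicity."))
    # -> ['race', 'health'] with the illustrative term lists above
```

The substance lies less in the keyword matching, which real products handle far more robustly, than in the fact that the buckets and terms are parameters a local team can configure rather than values hard-coded by a distant vendor or headquarters.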

HPE’s approach, while possibly less “cutting edge” on the tech side, nonetheless employs algorithms and automated processes to proactively engage frontline developers and salespeople in asking ethical questions, determining whether their use case is already covered by existing case law, and tracking the results. Companies should emulate this example — focusing on using technology in ways that reinforce their own AI ethics processes rather than bolting on technologies with vague promises of automation.

Conclusion

From the discussion above, contextualizing AI ethics emerges as an important challenge. Although there are several considerations necessitating the contextualization of AI ethics, the response of AI user organizations has been anything but uniform. While organizations such as HPE have moved ahead of the curve, formulating elaborate processes and structures for contextualizing AI ethics, others appear to be fumbling with the fundamental question of how to create AI ethics practices at all, with some merely adopting a regulatory lens. As AI rapidly gains traction, however, every organization will face the question of how to formulate and operationalize contextually sensitive AI ethics policies.

One answer to these questions is forming, and continuously engaging with, local AI ethics teams. To this end, we offer three recommendations. First, a company should engage local employees to frame its AI ethics narrative. HPE’s exhaustive review of different regional approaches to AI ethics and governance, combined with its continued interaction with local teams, is an excellent example. Second, early in the process, the company should negotiate points of conflict where its values stand at odds with the values prevalent in a geography. This is evident in the example above of Japanese cultural values and an algorithmic intervention that excludes leave taken from promotion decisions. Finally, while there may be a company-wide view on AI use, companies should empower local leaders with some autonomy over how AI initiatives are implemented on ethical grounds, including accounting for regulatory variations in AI ethics norms. Of course, how much autonomy can be afforded is a matter of further negotiation.

As companies adopt these recommendations, it is important not to view AI ethics as an objective finish line. On the contrary, much like AI’s technical components, AI ethics is itself highly in flux. Companies are never “done” with AI ethics and governance. It’s a continuous process.