Establishing AI ethics presents a need for HR leadership

Human resource leaders are now at the center of conversations about AI ethics in the workplace. This is not only because they are responsible for shaping and administering employee-related policies but also because HR has been using artificial intelligence for years—whether it’s been acknowledged or not—through vendor partners and key workplace technology platforms.

The Deloitte Technology Trust Ethics team recently released findings from a new survey delving into C-suite perspectives on preparing the workforce for ethical AI. In this report, 100 executives shared their thoughts on establishing AI policies and guidelines for their organizations.

The results paint a clear picture: HR leaders will be needed more than ever.

The C-suite isn’t turning a blind eye to the need for ethics training for their labor force, according to report co-author Beena Ammanath, leader of Deloitte’s Technology Trust Ethics practice. She wrote that strategies such as upskilling, hiring for new roles and acquiring companies that have existing AI capabilities “demonstrate they recognize the immense possibility that only the human element can generate from AI.”


HR leaders will find this familiar territory: upskilling and hiring for capabilities that meet the organization's needs are already in their DNA. The hot-button topic right now may be the effective and ethical implementation of artificial intelligence, but the levers for addressing it are ones HR teams have long pulled.

According to the study, more than half of business leaders plan to bring on talent to fill AI-related roles such as ethics researcher, compliance specialist and technology policy analyst.

In addition, some executives are eyeing chief ethics officer and chief trust officer roles, recruiting efforts that will likely land on the desks of HR leaders.

Policy creation

The world has open access to generative tools, and everyone is talking about artificial intelligence. However, in many organizations, AI strategy discussions so far have been merely theoretical. Now that the European Parliament has marked a threshold by approving the Artificial Intelligence Act, global business leaders are pressed to document policies defining appropriate artificial intelligence use cases and to understand the associated risks.

Executives told Deloitte that publishing clear policies and guidelines is the "most effective method of communicating AI ethics to the workforce." Nearly 90% of surveyed organizations are implementing these procedures now or plan to do so soon. Experts suggest that human resource leaders should have a say in creating these guidelines.


“HR doesn’t want employees held to policies and procedures that haven’t been appropriately structured,” advises Asha Palmer, SVP of compliance solutions at enterprise learning platform Skillsoft.

She points out that HR leaders will likely be involved in the aftermath if employees fail to comply with a policy that hasn’t been properly positioned or communicated from the start.

Establishing employee trust

While the C-level executives Deloitte surveyed said ethical guidelines for emerging technologies such as generative AI were critical to revenue growth, 90% also stated that guidelines are important in maintaining employee trust. Over 80% also affirmed that ethical guardrails are essential in attracting talent. Building employee trust and attracting talent ranked higher than meeting shareholder expectations or compliance with existing regulations.

Report co-author Kwasi Mitchell, chief purpose and DEI officer at Deloitte U.S., wrote that employer organizations are instrumental in the responsible adoption and implementation of AI. “I’m encouraged by the inputs we’re seeing from C-level leaders to prioritize ethical awareness, training and use so we can collectively produce better outcomes for our businesses and people as a result,” he said.

A recent report by PwC found that just 67% of employees say they trust their employers, while 86% of business leaders believe their employees highly trust them. Demonstrating a commitment to ethical AI implementation gives employers an opportunity to narrow that disconnect.

Building women leaders

This month, IBM published a survey of 200 U.S.-based C-suite officers, executives and mid-level managers about AI adoption, including an equal number of women and men. The report suggests that women have a unique opportunity to be pioneers in the ethical implementation of artificial intelligence at work: “They can wield generative AI responsibly, but forcefully—and make sure the organizations they work for take notice,” according to the report’s authors.

IBM researchers found that company policies are the top factor that would encourage women to use generative AI at work. Men, by contrast, are more motivated to use AI to gain a competitive advantage in the job market and to increase their pay. Additionally, more than half of the women surveyed said they use generative AI to bolster their job security.

IBM experts point out that "generative AI can only learn from the data it's trained on—and data tends to reflect existing inequalities." When organizations encourage women to engage with generative AI at work, those employees are positioned to identify biased outputs and, the IBM researchers believe, begin to shrink the gender divide.

These findings present an opportunity for HR leaders and managers to tap into a population of employees who plan to leverage their employment as a stage for building new AI-related skills. “When we think about learning, and the excitement of learning, HR can find individuals who want to reinvent themselves professionally,” says Palmer.



About the Author: Rayne Chancer