As artificial intelligence adoption expands across American businesses, a new survey finds that fewer than half of executives say their organizations have policies in place for employee use of tools like generative AI.
Littler, the employment and labor law practice that represents management, released its 2024 AI C-Suite Survey of more than 330 C-suite executives. While just 44% of respondents cite a company policy on employee use of gen AI, that figure represents a significant increase from last year, when just 10% said the same.
Marko Mrkonich, Littler shareholder and a core member of the firm's AI and Technology Practice Group, says employers have made encouraging progress on workplace generative AI policies, though he adds it's not surprising that there's so much more work to do.
“There are several practical challenges that come with creating an effective policy for such a ubiquitous and evolving technology, including securing alignment and internal buy-in—especially when views about generative AI’s risk level and opportunities can vary widely among stakeholders,” Mrkonich explains.
According to the survey, among employers that have established generative AI policies, 74% require employees to adhere to those policies, while 23% merely offer guidelines for use of the tech in the workplace. Only 3% say their organizations prohibit employees from using generative AI altogether.
As for what those generative AI policies cover, 55% say that employee use is limited to approved tools. About half limit employees to using tools approved by managers and supervisors or a centralized AI decision-making group. A smaller percentage of executives say their organizations limit employees to using gen AI for approved tasks (40%), while about 21% only allow certain groups of employees to use the tech at work.
“The current generative AI policy landscape represents a continuum, with organizations typically starting by vetting particular tools and then looking at specific tasks and how they are used by different groups and departments,” notes Niloy Ray, Littler shareholder and a core member of the firm’s AI and Technology Practice Group.
Given that uses of both generative and predictive AI can vary widely by employee role, it’s important that executives focus on defining who the decision-makers are, ensuring they are knowledgeable about the use of AI across the organization and effectively socializing requirements and guidelines among employees.
When it comes to enforcement of generative AI policies, 67% of executives say their organizations are setting clear expectations for use and relying on employees to meet them. More than half are using access controls that limit AI tools to specific groups, as well as relying on employees to report violations.
A workplace policy is only as good as an organization's ability to get employees to follow it, and successful expectation-setting goes hand in hand with employees actually understanding those expectations, according to Britney Torres, senior counsel in Littler's AI and Technology Practice Group. That's where training and education are critical, she adds, yet fewer than a third of executives (31%) say their organizations currently offer such programs for generative AI.
“To effectively implement a generative AI policy, it’s vital that leaders agree on the organization’s ultimate objective and how they’ll get there,” Torres says. “That includes training both on compliance issues to mitigate risk and technical use to realize the greatest benefits from the technology.”
An uncertain future for AI in HR
While policy development around AI use by employees is still far from universal, when it comes to the use of the tech in HR and talent acquisition processes, C-suite executives seem to clearly see its benefits. Two-thirds of executives (66%) say their organizations are using AI in HR functions, including to create HR-related materials (42%), recruit (30%) and source candidates (24%).
At the same time, with AI-related lawsuits predicted to rise and an ever-growing patchwork of AI regulation coming to the fore, C-suite executives are eyeing the legal risks, Torres says. Nearly 85% of those surveyed are concerned about litigation related to the use of AI in HR functions, and 73% say their organizations are decreasing the use of the tools for such purposes as a result of regulatory uncertainty.
Brad Kelley, a shareholder in Littler’s AI and Technology practice and a member of the firm’s Workplace Policy Institute, explains that while the U.S. currently lacks an AI framework akin to the EU AI Act, there has been a sharp rise in regulatory activity to address AI use in the workplace—and C-suite executives are taking note.
“In the absence of comprehensive U.S. legislation, federal agencies have filled the void with a series of AI guidelines while state and local laws continue to proliferate,” says Kelley, a former senior official at the Equal Employment Opportunity Commission and the Department of Labor.
“As the regulatory risks grow, it becomes increasingly important for executives to evaluate how their teams are using AI tools and to consider the impact of regulatory changes as part of their broader business planning,” Kelley says.