Who Cares About the Ethics of AI? Women Do


Ann Skeet is the senior director of leadership ethics at the Markkula Center for Applied Ethics at Santa Clara University. Views are her own.


As the media reports almost daily, the adoption of generative artificial intelligence (AI) is growing at an incredible rate. However, it may not be growing equally among all demographics.

Studies are emerging on a growing gender gap in the use of AI, suggesting that women are adopting the technology more slowly than men and foreshadowing negative implications for women's workforce advancement in the coming years.

A study from NHH, the Norwegian School of Economics, finds “that female students adopt AI less frequently than their male counterparts, particularly among those with higher academic skills.”[1] Male students are 25% more likely to report high usage of ChatGPT or similar generative AI tools.[2]

A second study, from Harvard Business School (HBS),[3] shows women adopting AI tools at a 25% lower rate than men on average.[4] This second study synthesized data from 18 studies covering more than 140,000 people, including the NHH study.[5] Its subjects included college students, workers, business owners, data analysts, software developers, and executives from several countries, including the United States.

As the NHH study identified, and the HBS study confirmed, women are worried about using AI, particularly if it is perceived as cheating. Rembrandt Koning, an author of the HBS study, comments, "Women face greater penalties in being judged as not having expertise in different fields. They might be worried that someone would think, even though they got the answer right, they 'cheated' by using ChatGPT."[6]

The virtue ethics lens in the Markkula Center's Framework for Ethical Decision Making encourages decision-makers to consider which option allows them to be the person they want to be: to act according to the highest potential of their character and on behalf of values like honesty, integrity, and fairness. Something about using generative AI isn't meeting the virtue standards of some women.

In the NHH study, male students were less likely to view using generative AI as cheating. The study also found that policy can change the way women feel about using AI-powered tools. When policies forbade the use of AI, male students tended to use AI tools anyway, while women did not. When a policy specifically allowed the use of AI, the gap closed: over 80% of both men and women used it, suggesting that policies encouraging AI use can help women use AI more often.

Business executives at a recent roundtable hosted by the Science of Diversity and Inclusion Initiative (SODI), where the NHH study results were shared, confirmed that they were seeing similar trends in their companies: women need explicit permission and encouragement to use AI. There were other findings in the studies, but a common theme across the research and business settings was how much behaving ethically matters to women.

In my own evaluations of when and how to use AI, I definitely question whether it is ethical and wise. I think about the skills I might be losing by relying on AI to do some of my thinking, even as a brainstorming partner. I am aware of its tendency to return inaccurate answers and find that verifying responses sometimes negates the perceived time savings. I also weigh the environmental impact of AI use and ask myself if what I want to do is really worth the hit to the planet. And, yes, I consider how AI use might evolve in a way that reduces our connections to other people.

I use AI selectively, both because I know I will need AI skills in the future workplace and because I have a responsibility to understand it so I can educate the students attending my university about it. Though I know we need to prepare all of our students for a workplace infused with artificial intelligence, I do so somewhat reluctantly, given the number of ethical issues related to AI, not just the possibility that I will be seen as cheating for using it.

A male colleague recently shared that when a former student asked him to recommend podcasts, he consulted AI for a list. While he marveled at how efficient this was and how quickly he was able to draft his email response with the help of an AI email agent, I couldn't help but think that the student wanted his recommendation. After all, the student could get her own recommendations from AI. As I evaluate that situation, I think about the relationship between my colleague and his former student. How would she feel if she knew he just turned to AI? Did he tell her which suggestions were his and which were from AI?

Carol Gilligan and other researchers have proposed that women and men may approach ethical reasoning differently, with women potentially emphasizing care and relationships more in moral decision making. This is represented in the Markkula Center's Framework by the care ethics lens.

Perhaps we are seeing some of these differences in AI adoption rates. I am certainly thinking about my relationships to others when I express concern about losing connections to other people. I am thinking about care and relationships when I consider my responsibility to understand AI so I can educate future generations about it. And I am thinking about the relationship between my colleague and his student when I reflect on his choice to use AI in that instance.

I don't know if more women than men identify themselves as AI ethicists, but it sure feels like it. (There is a preponderance of women serving in AI ethics roles around the world, as the list of 100 Brilliant Women in AI Ethics, published each year since 2018, reveals.) If, as these studies suggest, women have concerns about using AI, be it for virtuous reasons or because they view it as corrosive to human relationships, then they might be experiencing a form of moral injury, an injury to their conscience, just by using it. And this should trouble us all. Dignity speaks to one's sense of being worthy and of self-respect. Asking women, in fact asking anyone, to use AI in ways that compromise their ethics compromises their dignity.

Rather than buy into the sometimes binary discussions about whether AI is good or evil, AI companies and developers need a more nuanced approach. For AI to truly scale, people in AI companies and in the businesses integrating AI into their operations should ask women more about what concerns them about AI and respond to those concerns. It's the only dignified thing to do.

 

[1] Carvajal, Daniel, Catalina Franco, and Siri Isaksson, "Will Artificial Intelligence Get in the Way of Achieving Gender Equality?" NHH Dept. of Economics Discussion Paper No. 03, October 31, 2024. Revised version available at https://openaccess.nhh.no/nhh-xmlui/handle/11250/3122396; also available at SSRN: https://ssrn.com/abstract=4759218 or http://dx.doi.org/10.2139/ssrn.4759218.

[2] Ibid.

[3] Blanding, Michael, “Women are Avoiding AI. Will Their Careers Suffer?” Working Knowledge, Harvard Business School, February 20, 2025.

[4] Otis, Nicholas G., Solène Delecourt, Katelynn Cranney, and Rembrandt Koning, "Global Evidence on Gender Gaps and Generative AI," Harvard Business School Working Paper No. 25-023, October 2024 (revised January 2025).

[5] Ibid.

[6] Blanding, Michael, “Women are Avoiding AI. Will Their Careers Suffer?” Working Knowledge, Harvard Business School, February 20, 2025.


