Like many health care providers, consumers are exploring generative artificial intelligence and are optimistic about its potential to address key challenges such as access and affordability.
In fact, two-thirds of consumers think generative AI could reduce extended wait times for physician appointments, notes a recent Deloitte report. Yet despite that optimism, consumer adoption of generative AI for health reasons has remained essentially flat over the past year, with only 37% of respondents using it.
A prominent and growing reason for the stagnant adoption is distrust in the information these tools produce, the survey data showed. Thirty percent of respondents said they don’t trust the information delivered by generative AI systems, up from 23% who voiced similar concerns in last year’s survey.
Consumer distrust of generative AI-provided information has increased among all age groups, with a sharp increase among millennials and baby boomers.
As it stands, consumers generally engage with generative AI for health and wellness purposes through free, publicly available tools. Because the technology is still evolving, however, these versions can produce inaccurate information that erodes consumer trust.
This presents an opportunity for provider organizations to bolster trust by educating consumers, providing them with generative AI tools designed for health care applications, and addressing privacy concerns.
3 Ways to Improve Generative AI Communications
1 | Engage clinicians as change agents.
Nearly three-quarters of respondents view physicians as their most trusted source of information for treatment options. These clinicians could serve as key influencers, educating consumers about the potential advantages of provider-curated and -monitored gen AI tools, such as facilitating faster and more accurate diagnoses, and delivering personalized care. They also could help present this information in a way that consumers can understand easily, thereby increasing their trust in gen AI.
This is in line with survey findings that showed consumers are comfortable with their doctors using generative AI to share information about new treatments (71%), interpret diagnostic results (65%) and diagnose conditions and illnesses (53%).
2 | Be transparent with consumers.
While consumers may be comfortable with generative AI being used to inform health care decisions, they want clarity. Some 80% said they would like to be informed about how their provider is using the technology to augment health care decisions, identify treatment options and provide support.
The report’s authors urge health care organizations to consider developing transparent processes and designing regulatory and patient-protection programs. This involves providing consumers with clear information about data collection methods, usage and safeguarding, as well as educating them about the limitations of the technology.
For example, a clinical recommendation generated with the assistance of gen AI may require a disclaimer stating that it was system-derived, along with accessible data or an explanation of why that recommendation was made.
3 | Enlist community partners as technology advocates.
Share information about generative AI with credible community organizations and equip them to address questions about the technology, its applications and effectiveness for the communities they represent.
Identify potential partners like community health centers, state and local health agencies, faith-based organizations and others. By aligning their messaging with these organizations, health care companies can enhance consumer understanding and acceptance of gen AI on a wider scale, the report states.