Ambient assistant Nabla joins the Coalition for Health AI to establish guidelines for ethical AI

CHAI is a coalition of health systems, startups, government and patient advocates working on AI

Ethics in artificial intelligence is important no matter the field: it concerns not only vast amounts of data that need to be protected, but also issues of bias, along with what many call “hallucinations,” when AI simply makes things up. That makes AI ethics especially important in healthcare, where the decisions being made can literally be life and death.

The Coalition for Health AI (CHAI) is a private-sector coalition committed to developing industry best practices and frameworks that address the need for independent validation for quality assurance, representation, and ethical practices in health AI. Made up of leaders and experts representing health systems, startups, government, and patient advocates, CHAI has established working groups focusing on the privacy & security, fairness, transparency, usefulness, and safety of AI algorithms.

Now CHAI has a new member: Nabla, an ambient AI assistant for clinicians, has announced that it is joining the organization.

Nabla’s product, Copilot, generates clinical notes in seconds; instead of having to type notes during consultations, clinicians can rely on Nabla Copilot to transcribe the entire encounter and generate an accurate clinical note that is integrated directly into the EHR. This allows providers to focus entirely on patient care and to save an average of two hours per day that would otherwise be spent on documentation. Benefits include reduced cognitive load from clerical burdens, along with less stress and burnout. Copilot also automates the generation of patient instructions and referral letters.

In March, the company partnered with Children’s Hospital Los Angeles, and since then it has expanded to serve more than 85 provider organizations and support more than 45,000 clinicians across the U.S. It also signed new health system contracts with the University of Iowa Health Care and Carle Health, and tripled its adoption rate, increasing from 3 million visits annually at the start of 2024 to 9 million visits per year.

Alex Lebrun, co-founder and CEO of Nabla, spoke to VatorNews about what’s lacking when it comes to ethics in healthcare AI, his vision for what that looks like, and how CHAI will help it get there. 

VatorNews: Ethics in AI is a hot topic, and especially so in healthcare, where data needs to be secured and bias can literally be life and death. What do you see as some of the most important ethical areas in healthcare AI?

Alex Lebrun: I think there are many ethical aspects to consider in healthcare AI, and we’ve heard directly from our health system partners about the importance of addressing these issues. Their feedback has been crucial in shaping our approach at Nabla, especially around key areas like bias, reliability, and transparency—each of which directly impacts patient care and clinician trust.

Bias: Bias in AI can lead to unequal predictions that impact patient outcomes, potentially widening disparities in care. For instance, to reduce language biases, we trained our proprietary speech-to-text models on over 50 different accents, minimizing the impact of voice characteristics on documentation accuracy.

Reliability: Clinicians need reliable AI tools that they can trust to keep patients safe. Our systems are designed to work within clear, defined scopes to reduce risks and maintain consistency. We’ve also implemented a proprietary framework that cross-checks documentation against transcripts and patient context, ensuring every fact is fully supported and verifiable.

Transparency: Transparency is critical for fostering trust. At Nabla, we share our governance practices openly and collaborate with renowned institutions (CHAI, AMIA, CHIME and more) on industry standards to help build trust and confidence in our ambient AI assistant.

VN: Is enough being done so far to ensure that AI in healthcare is being deployed responsibly? If not, what do you believe can and should be done?

AL: While AI in healthcare holds immense potential, there’s still a long way to go to ensure its responsible deployment. Many healthcare organizations are proactively setting their own AI governance standards, but only a small fraction have fully developed strategies addressing critical ethical issues like bias and safety. Without a universal governance framework, many AI tools in healthcare lack comprehensive ethical evaluations. A 2024 McKinsey survey found that, while 70% of healthcare organizations believe they are prepared to integrate AI, only 30% have fully developed responsible AI strategies that address key ethical considerations. To bridge this gap, we believe healthcare organizations, developers, and policymakers must prioritize collaboration to establish clear, transparent, and standardized guidelines that can evolve alongside the technology.

VN: What is your vision for an ethical framework for AI? How do you think we can ensure accuracy and confidentiality? 

AL: At Nabla, our vision for an ethical AI framework centers on transparency and trust with our clinician community. Our approach is grounded in three core pillars—privacy, reliability, and safety—reinforced by a blend of real-time model monitoring, clinician feedback, and safety features embedded directly into our product.

To ensure documentation accuracy, we have developed a proprietary framework through which each note produced is split into atomic facts, which are checked via an LLM query against the transcript and the patient context. Only facts for which we find definitive proof are considered valid. Moreover, each new model version undergoes rigorous review by professional medical human scribes to confirm the documentation is comprehensive and meets industry standards.
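
To make the verification loop Lebrun describes more concrete, here is a minimal sketch, assuming hypothetical helper names and a stubbed-out LLM check (a naive substring match standing in for the real query); none of this reflects Nabla’s actual code:

```python
# Hypothetical sketch of the atomic-fact verification pipeline described
# above: split a draft note into atomic facts, check each against the
# transcript and patient context, and keep only definitively supported
# facts. All names are illustrative; the LLM query is stubbed out.
from dataclasses import dataclass


@dataclass
class Fact:
    text: str              # one atomic claim extracted from the draft note
    supported: bool = False


def split_into_atomic_facts(note: str) -> list[Fact]:
    # Placeholder: a production system would use an LLM or parser to break
    # the note into minimal, independently checkable claims.
    return [Fact(s.strip()) for s in note.split(".") if s.strip()]


def llm_supports(fact: str, transcript: str, patient_context: str) -> bool:
    # Placeholder for the LLM query: "Is this fact definitively supported
    # by the transcript or patient context?" Approximated here by checking
    # that every word of the fact appears in the evidence.
    evidence = (transcript + " " + patient_context).lower()
    return all(word in evidence for word in fact.lower().split())


def verify_note(note: str, transcript: str, patient_context: str) -> list[Fact]:
    facts = split_into_atomic_facts(note)
    for fact in facts:
        fact.supported = llm_supports(fact.text, transcript, patient_context)
    # Only facts with definitive proof are considered valid; unsupported
    # facts would be flagged or dropped before the note reaches the EHR.
    return [f for f in facts if f.supported]


if __name__ == "__main__":
    note = "Headache for two days. Ibuprofen recommended."
    transcript = "I've had a headache for two days, so ibuprofen was recommended."
    for fact in verify_note(note, transcript, patient_context="No known allergies."):
        print("supported:", fact.text)
```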

Additional safeguards include prompts for clinicians to review notes before exporting and an intuitive feedback tool for flagging any issues. We continuously monitor edits and feedback, gaining real-world insights that improve our model’s accuracy and reliability.

Confidentiality is paramount at Nabla. We offer a flexible, customer-first data storage approach, allowing health systems to define their retention policies. The standard retention period is 14 days, customizable to as little as a few seconds or extended further if needed. We never store encounter audio, and customer data isn’t used for model training by default. Feedback is de-identified following HIPAA standards, and health systems have the option to contribute specific data for model improvement, with complete control over their information.
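
As a purely illustrative sketch of what such a customer-defined retention policy might look like in code (the schema and names below are invented for this example, not Nabla’s actual configuration format):

```python
from datetime import timedelta

# Invented schema for illustration: a 14-day default retention window that
# each health system can shorten (down to seconds) or extend as needed.
DEFAULT_RETENTION = timedelta(days=14)


def retention_window(policy: dict) -> timedelta:
    """Return the customer's retention window, falling back to the default."""
    override = policy.get("retention_seconds")
    return timedelta(seconds=override) if override is not None else DEFAULT_RETENTION


# One customer keeps data for only 30 seconds; another uses the default.
print(retention_window({"retention_seconds": 30}))  # 0:00:30
print(retention_window({}))                         # 14 days, 0:00:00
```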

VN: How did you first become involved with CHAI? What is it about the organization that made you want to join?

AL: We got involved with the Coalition for Health AI because we recognized the positive impact they’re making in healthcare AI governance. We were particularly impressed by the valuable work CHAI has already accomplished, such as developing the Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare and releasing the draft Responsible Health AI Framework.

At Nabla, transparency has always been central to our approach. With our background as Machine Learning researchers, we value these intricate questions around governance and felt a responsibility to be part of the conversation. CHAI’s strong commitment to transparency and trust-building, along with the establishment of quality assurance labs, aligns closely with our mission. Joining CHAI lets us actively contribute to these essential discussions and work toward harmonizing AI standards across the industry.

VN: What do you hope to accomplish by being a part of CHAI? What will you see as a win?

AL: At Nabla, we’ve always placed our community of clinicians at the center of our work, building our product based on their feedback and needs. By joining CHAI, we hope to establish another valuable channel to stay connected to our ecosystem—allowing us to listen closely to clinicians’ expectations, respond to their questions, and foster greater transparency and trust in healthcare AI. Right now, healthcare AI governance is fragmented, with many organizations developing their own frameworks. A win for the entire ecosystem would be achieving a more unified set of standards across the industry, making it easier for clinicians to understand, assess, and choose AI solutions that prioritize safety for clinicians and patients alike.

VN: Is there anything else that I should know?

AL: Nabla is moving toward becoming a proactive, real-time AI assistant that helps doctors make decisions on the spot. Thanks to solid partnerships and user trust, we’re ready to take this next big step. We are currently working on Active CDI to give instant feedback to clinicians during consultations, making sure their documentation meets coding standards and reducing claim denials.

(Image source: chai.org)
