Sentara Health, a Virginia-based healthcare system, has taken a concrete step toward defining key principles for implementing artificial intelligence (AI) with safety, privacy, and transparency in mind. According to recent research, most healthcare executives anticipate widespread use of AI in medicine within the next three years, yet only 40% of their organizations have planned for or reviewed AI regulatory guidance.
While some institutions are free to experiment with AI, healthcare organizations cannot afford to overlook its risks. We must acknowledge both the benefits and the dangers of AI, and that recognition led us to establish clear guidelines. Another study argues that embracing AI's full potential could save the healthcare industry up to $360 billion annually. However, the industry and patients alike remain cautious: errors and failures in healthcare AI could erode public trust and negate those potential benefits.
Likewise, risks such as data privacy breaches, algorithmic bias, unsafe AI applications, and potential job redundancies for healthcare workers cannot be ignored. An AI tool operating without human supervision, for instance, may misdiagnose a condition or suggest inappropriate treatment, leading to costly mistakes. Patients' apprehensions must also be addressed: 60% reportedly expressed discomfort with healthcare providers using AI.
To address these challenges, Sentara Health established an AI Oversight Program, led by senior leaders from across the organization and tasked with overseeing the development and use of AI tools within Sentara's integrated delivery network. That network comprises 12 hospitals, five stand-alone emergency departments, more than 1 million health plan members, and a group that conducts over 2.8 million patient visits annually.
The AI Oversight Committee, which I chair with David Torgerson, our Chief Analytics Officer, also includes experts from across the organization, such as the Chief Nursing Officer and Chief Data Officer, along with representatives from our legal and ethics departments. The principles we have set reflect our commitment to innovation, safety, and high-quality patient care, and they ensure that AI solutions are developed within a safe, responsible, and trustworthy framework.
These principles cover human oversight; robustness and safety of AI tools; adherence to privacy and data governance standards; transparency; non-discrimination and fairness; benefit; accountability; and promotion of environmental and societal well-being. Every AI initiative at Sentara Health must meet these guidelines before it proceeds.
One success story from applying these principles is an AI tool that drafts clinical notes. Because physicians spend considerable time documenting patient visits, often at the expense of face-to-face interaction, this tool has reduced administrative burden and improved the quality of interactions between patients and doctors.
In short, strict adherence to our guiding principles ensures that AI tools align with Sentara's focus on enhancing our consumers' overall health and well-being, and that they fully comply with all regulatory, legal, and ethical requirements. That adherence paves the way for further AI initiatives to unfold and live up to AI's promise.