Artificial Intelligence and Patient Safety: Using AI and
Machine Learning to Support the Sacred “First, do no harm”

Given the optimism surrounding the application of artificial intelligence (AI) to improve healthcare, the role of AI and related technologies, such as machine learning, in improving patient safety should be paramount for several reasons:

  • The imperative of every provider and health system is to “First, do no harm”
  • A preponderance of validated research shows that patient harm is pervasive
  • In the era of value-based care, the cost of patient harm has been shown to be material

In “An Electronic Health Record-Based Real-Time Analytics Program For Patient Safety Surveillance and Improvement,” a study just published in the patient safety-dedicated issue of Health Affairs (sponsored by the Gordon and Betty Moore Foundation), Pascal Metrics demonstrates a novel method of predicting all-cause harm using an advanced ensemble machine learning model built with techniques such as boosting, bagging, and random forests.
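
As a rough, hedged illustration of what such an ensemble can look like, the sketch below combines boosting, bagging, and random forest classifiers with soft voting and scores the result with the c-statistic (area under the ROC curve). It uses scikit-learn with synthetic stand-in data and hypothetical variable names; it is not Pascal Metrics’ actual model or pipeline.

    # Minimal sketch of an ensemble harm-prediction model (illustrative only).
    import numpy as np
    from sklearn.ensemble import (
        GradientBoostingClassifier,  # boosting
        BaggingClassifier,           # bagging
        RandomForestClassifier,      # random forest
        VotingClassifier,
    )
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Stand-ins for real inputs: X would be per-encounter EHR-derived features,
    # y the clinically validated harm labels (1 = any adverse event).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = (rng.random(1000) < 0.1).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )

    ensemble = VotingClassifier(
        estimators=[
            ("boost", GradientBoostingClassifier()),
            ("bag", BaggingClassifier()),
            ("rf", RandomForestClassifier()),
        ],
        voting="soft",  # average predicted probabilities into a single risk score
    )
    ensemble.fit(X_train, y_train)

    risk = ensemble.predict_proba(X_test)[:, 1]  # a GSR-style risk score
    print("c-statistic (AUROC):", roc_auc_score(y_test, risk))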

While the accuracy of Pascal’s Global Safety Risk (GSR) Score is high (a c-statistic, or area under the ROC curve, of 0.9), the deeper significance lies in the following findings:

(1) Reconciling scalable clinical validation with advanced machine learning and AI

(2) Using this foundation to extend into the prediction of specific adverse event categories

(3) Demonstrating a path for AI to be applied across a clinical operational environment in a variety of use cases that together will improve the safety and outcomes of care

1. Clinical Validation and Artificial Intelligence: Irreconcilable or Bedfellows?

Some in the research community have viewed clinical validation as a manual process that is too costly and perhaps unnecessary. On this view, patient harm can be measured exclusively with EHR and health IT data in real time, without any clinical review, adjudication, or validation.

Others, rooted in decades of practicing sound epidemiology, hold that clinical validation is fundamental to achieving “ground truth” in clinical practice, that is, to achieving adequate confidence in a purported clinical outcome. Historically, this has required the exercise of clinical judgment, particularly in patient safety, where one of the primary obstacles to progress is the lack of common definitions of what constitutes patient harm.

As the novel method validated by Pascal Metrics shows, clinical validation and advanced AI application are, in fact, complementary.  Using clinically validated EHR-based adverse event outcomes data to train advanced AI machine learning models is economically viable, scientifically essential, and clinically useful.


Indeed, without clinically validated EHR-based adverse event outcomes, AI applied to patient safety will be neither sensitive and specific to true AE outcomes nor clinically credible to the providers and health systems that have the opportunity to improve care. And without taking full advantage of advanced AI, machine learning, and real-time deployment technologies, more patients will be harmed over time than might otherwise be the case.

2. Specific Patient Harm Prediction: When Mortality and Morbidity Data Aren’t Enough

The outcomes referenced above are clinically validated EHR-based adverse event outcomes (“AE Outcomes”). This data set is fine-grained and extends to the adverse event category level, e.g., medication-related bleeding, hypoglycemia, pressure ulcers, and so on. This is a critical distinction, as efforts to date in the peer-reviewed published research have relied on mortality and morbidity outcomes data.

But if the goal is to predict whether, for example, a patient will be harmed by medication-related bleeding, the optimal data for training a predictive model are medication-related bleeding outcomes, not records of whether a patient died or suffered other, less related complications.
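
A hedged sketch of this distinction follows, assuming hypothetical record layouts and field names (e.g., "encounter_id", "category") rather than any actual schema: training a category-specific model means labeling each encounter with the specific AE category of interest instead of with mortality or all-cause harm.

    # Illustrative only: derive category-specific labels from validated AE Outcomes.
    from sklearn.ensemble import RandomForestClassifier

    def build_category_labels(encounters, ae_outcomes, category):
        # Label an encounter 1 if it has a validated AE of the given category, else 0.
        harmed = {o["encounter_id"] for o in ae_outcomes if o["category"] == category}
        return [1 if e["encounter_id"] in harmed else 0 for e in encounters]

    # y_bleed would reflect medication-related bleeding outcomes specifically,
    # not mortality or any-harm labels (feature matrix X assumed built elsewhere).
    # y_bleed = build_category_labels(encounters, ae_outcomes, "medication_related_bleeding")
    # model = RandomForestClassifier().fit(X, y_bleed)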

The problem for efforts to date has been the unavailability of such data, i.e., the AE Outcomes referenced above that Pascal Metrics used to train its all-cause harm predictive model.

Another problem is rarity: some patient harms occur infrequently, and it is difficult to accumulate enough AE Outcomes in any one adverse event category to train a category-specific predictive model.
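
One common way to cope with such rarity, sketched below under the assumption of a scikit-learn-style workflow (not a description of Pascal Metrics’ method), is to up-weight the rare harm class during training and to evaluate with metrics that remain informative when positives are scarce.

    # Minimal, illustrative sketch of handling a rare AE category via class weighting.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # class_weight="balanced" up-weights the rare positive (harm) class so the model
    # is not dominated by the majority of unharmed encounters.
    model = RandomForestClassifier(class_weight="balanced", random_state=0)

    # Average precision (area under the precision-recall curve) is more informative
    # than accuracy when harms are rare; X and y_rare are assumed to be the feature
    # matrix and category-specific labels from the earlier sketches.
    # scores = cross_val_score(model, X, y_rare, cv=5, scoring="average_precision")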

However, Pascal Metrics has been collecting AE Outcomes since 2008 and holds the largest AE Outcomes database worldwide. Pascal Metrics therefore looks forward to extending the all-cause harm model referenced in the Health Affairs paper to predict specific patient harms.

3. AI Applications in Patient Safety: A Multi-faceted Opportunity

Because patient harm, and the desire to avoid preventable death, is paramount in the minds of researchers and practitioners, the tacit assumption is often that the “Holy Grail” of applying machine learning and AI to patient safety is to build the best possible model for predicting patient harm.

However, the world of value-based care is a complex environment with many inputs, processes, decisions, stakeholders, and events that precede the incidence of patient harm. Consequently, the opportunity is not to apply one model to one patient to predict and avoid one harm but, rather, to provide a solution that predicts different harms, collects all types of harm data, and reduces harm through rapid-cycle quality improvement processes across a highly complex healthcare delivery environment.

It is this opportunity to apply AI and related technologies well before a patient is at risk for serious harm that Pascal Metrics is addressing in its pioneering work and ongoing roadmap: using the world’s richest data set of clinically validated, health IT-based adverse event outcomes, along with advanced software and deep, proven operational clinical expertise, to support real-time improvement in the safety and reliability of care.

Read our Blog on this topic for additional perspective.

Contact Us

We welcome the opportunity to answer your questions and to discuss the latest developments in clinically effective patient safety and quality improvement programs.
