
Bridge The Implementation Gap: Make AI Useful in Healthcare

Gaurav Nukala · Published in Product Coalition · 4 min read · Mar 28, 2023


Machine learning is now showing impressive results in analyzing clinical data, sometimes even outperforming human clinicians. This is especially true in image interpretation, like radiology, pathology, and dermatology, thanks to convolutional neural networks and large data sets.

But it’s not just images — diagnostic and predictive algorithms have also been built using other data sources like electronic health records and patient-generated data.

Despite these advancements, there’s a problem:

Not enough of these algorithms are being used in actual healthcare settings. Even the most tech-savvy hospitals aren’t using AI in their daily workflows.

A recent review of deep learning applications using electronic health records identified the need to focus on implementation and automation to have a direct clinical impact.

To close the gap between development and deployment, we need to focus on making models that are actionable, safe, and useful for doctors and patients, rather than just optimizing their performance metrics.

Machine Learning can provide actionable recommendations for clinicians

To be useful in a clinical setting, a machine learning algorithm must be actionable, meaning it should suggest a specific intervention for the clinician or patient to take. Unfortunately, many models are developed with great discriminatory or predictive power, but without clear instructions on what to do with the results.

In contrast, established risk scores like the Wells score for pulmonary embolism or the CHA₂DS₂-VASc score for assessing stroke risk in atrial fibrillation are useful because they provide a clear path for clinical action based on the score value.

Machine learning algorithms can be designed in the same way, with actionable recommendations for clinicians based on the output.
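To make this concrete, here is a minimal sketch of how a traditional risk score pairs a numeric output with a recommended action. The Wells criteria and point values below follow the standard three-tier interpretation; the exact wording of each recommendation is illustrative, not a clinical protocol, and an ML model could expose its output through the same score-to-action pattern.

```python
# Sketch: the Wells score for pulmonary embolism maps clinical findings
# to a score, and the score to a concrete next step. Recommendation
# wording is illustrative only -- not clinical guidance.

# Wells criteria and their point values.
WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "malignancy": 1.0,
}

def wells_score(findings: dict) -> float:
    """Sum the points for each criterion present in the patient."""
    return sum(pts for name, pts in WELLS_CRITERIA.items() if findings.get(name))

def recommendation(score: float) -> str:
    """Map the score to an action (three-tier cut-offs)."""
    if score > 6:
        return "high probability: proceed to imaging"
    if score >= 2:
        return "moderate probability: order D-dimer testing"
    return "low probability: consider rule-out pathway"

patient = {"heart_rate_over_100": True, "previous_dvt_or_pe": True}
score = wells_score(patient)
print(score, "->", recommendation(score))  # 3.0 -> moderate probability: ...
```

The design point is the `recommendation` function: the output is never just a number, it is a number bound to a next step the clinician can act on.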

A recent study applying deep learning to optical coherence tomography scans paired each diagnosis with a simple recommendation, such as urgent referral or observation.

It’s essential to consider user-experience design as a critical part of any health machine learning pipeline, so the algorithm can be seamlessly integrated into the clinical environment.

Effective models with patient safety in mind

Designing models with patient safety in mind is crucial. Unlike medications or medical devices, the safety of algorithms is still a significant concern for clinicians and patients due to issues like interpretability and external validity.

We need empirical evidence to demonstrate the safety and efficacy of algorithms in real-world settings, and ongoing surveillance to ensure their resilience and performance over time.

To achieve widespread use, developers must engage with regulatory bodies and consider additional dimensions of patient safety, such as algorithmic bias and model brittleness. Incorporating appropriate risk mitigation and clinician input will accelerate the translation of algorithms into clinical benefit.

Patient feedback should also be solicited to ensure the algorithm design aligns with patient needs and preferences. By building a comprehensive framework that addresses these issues, we can ensure that algorithms contribute to the overall safety and effectiveness of healthcare delivery.

Cost utility assessments

To evaluate the value of a machine learning project, a cost utility assessment should be conducted. This assessment compares the clinical and financial consequences of working without the algorithm to working with it, including the potential for false positives and false negatives. The goal is to estimate the reduction in morbidity or cost associated with using the algorithm.

For instance, let’s say we’re developing an algorithm to screen electronic health records for undiagnosed cases of a rare disease like familial hypercholesterolemia. A cost utility assessment would consider the savings associated with early detection, balanced against the cost of unnecessary investigations for false-positive cases and the expenses of deploying and maintaining the algorithm.

This assessment should be conducted early on in the project and regularly reviewed as the model is deployed, to ensure that the algorithm’s benefits continue to outweigh the costs. By incorporating cost utility assessments, we can make sure that machine learning projects have a real impact on patient outcomes and are worth the investment.
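The back-of-the-envelope version of such an assessment can be sketched in a few lines. Every figure below is an illustrative assumption (patient volume, algorithm sensitivity and specificity, per-case savings and work-up costs), not real clinical or economic data; only the prevalence of familial hypercholesterolemia, roughly 1 in 250, reflects commonly cited estimates.

```python
# Hedged sketch of a cost-utility calculation for a hypothetical
# screening algorithm (e.g., flagging EHRs for possible familial
# hypercholesterolemia). All figures are illustrative assumptions.

def expected_net_benefit(
    n_patients: int,
    prevalence: float,       # fraction of patients with the disease
    sensitivity: float,      # P(flagged | disease)
    specificity: float,      # P(not flagged | no disease)
    saving_per_tp: float,    # savings from one early detection
    cost_per_fp: float,      # cost of one unnecessary work-up
    deployment_cost: float,  # fixed cost of deploying the algorithm
) -> float:
    diseased = n_patients * prevalence
    healthy = n_patients - diseased
    true_positives = diseased * sensitivity
    false_positives = healthy * (1 - specificity)
    return (true_positives * saving_per_tp
            - false_positives * cost_per_fp
            - deployment_cost)

# Illustrative numbers only.
net = expected_net_benefit(
    n_patients=100_000,
    prevalence=1 / 250,
    sensitivity=0.85,
    specificity=0.99,
    saving_per_tp=5_000.0,
    cost_per_fp=300.0,
    deployment_cost=200_000.0,
)
print(f"Expected net benefit: ${net:,.0f}")
```

Re-running this calculation with observed post-deployment rates, rather than the assumed ones, is exactly the periodic review the paragraph above describes: if specificity drifts down in production, the false-positive cost term can quickly erase the benefit.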

Let’s bridge the implementation gap

Machine learning frameworks have made model training more efficient, making it easier to create clinical algorithms. However, to fully leverage these algorithms in improving healthcare quality, we need to shift our attention to practical implementation issues such as actionability, safety, and utility.

The potential of AI in healthcare is often viewed through the lens of our technological aspirations. To make this potential a reality, we must focus on bridging the implementation gap and safely deploying algorithms in clinical settings.

Thank you for reading! I write about product management, healthcare, decision-making, investing, and startups. Please follow me on Medium, LinkedIn, or Twitter.

