The FDA released an Artificial Intelligence/Machine Learning Action Plan
“The plan outlines a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive. To stay current and address patient safety and improve access to these promising technologies, we anticipate that this action plan will continue to evolve over time.”
Yesterday, the FDA released its first official action plan for monitoring the use of AI/ML-based medical software. I am a huge advocate for using AI in healthcare, so this is great news! I have been writing extensively about how AI has been used in fighting COVID-19, diagnosing prostate cancer, colorectal cancer, and much more.
We have also been seeing huge advancements by Google, such as solving the protein folding problem through AI. There is a lot of potential for applying such a solution across various areas of drug discovery. If you are interested in finding out more about this, check out my article here:
Why is this important?
I remember talking with my dad a few months ago about my dream to start a medical AI startup. His first question was:
How are you going to get people to trust your product? Especially if it’s something like diagnosing cancer, not everyone is willing to put their lives in the hands of software…
I do agree with him to some extent. A huge number of medical AI papers are published annually, and nearly none of them end up being used in the industry. This is mainly because of trust issues stemming from the limited interpretability of those models.
However, this is likely to improve immensely with a major regulator like the FDA reviewing and approving this software. The FDA has long been the de facto authority for approving drugs in the US.
What is the plan?
I want to highlight two of the most important steps of the plan:
1. Developing a framework for continuous evaluation and improvement of ML models
This reminds me of Continuous Integration and Deployment (CI/CD) in web development. It is quite similar to some extent; however, medical AI poses some additional challenges.
Although there are plenty of useful metrics for evaluating ML-based algorithms and networks (such as F1-score, sensitivity, etc.), these are almost always computed in lab conditions. In the lab, the images are super clean and manually pre-processed, and the environment is effectively simulated. However, this isn’t the case in the real world.
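As a quick refresher on those metrics: sensitivity and F1-score can be computed directly from the confusion-matrix counts of a binary classifier. The counts below are made up for illustration.

```python
# Hypothetical confusion-matrix counts from a lab evaluation of a
# binary classifier (values are invented for illustration only).
tp, fp, fn, tn = 40, 5, 10, 45  # true/false positives and negatives

sensitivity = tp / (tp + fn)    # recall: share of real positives caught
precision = tp / (tp + fp)      # share of flagged cases that are real
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity = {sensitivity:.2f}")  # 0.80
print(f"precision   = {precision:.2f}")    # 0.89
print(f"F1-score    = {f1:.2f}")           # 0.84
```

The point of the section above is that these numbers only describe performance on the lab distribution; nothing in them guarantees the same behavior on messy real-world images.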
Existing rules for deploying AI in clinical settings, such as the standards for FDA clearance in the US or a CE mark in Europe, focus primarily on accuracy. There are no explicit requirements that an AI must improve the outcome for patients, largely because such trials have not yet run. But that needs to change, says Emma Beede, a UX researcher at Google Health: “We have to understand how AI tools are going to work for people in context — especially in health care — before they’re widely deployed.”
A few months ago, Google attempted to use a neural network that they developed to detect diabetic retinopathy. They had a plan to deploy this in India since the number of ophthalmologists in certain parts of India is quite low compared to the number of diabetic retinopathy cases.
What makes India particularly susceptible to the extreme effects of this disease is the lack of availability of eye specialists, which in turn leads to nearly 45 per cent patients suffering vision loss before diagnosis. Google, using AI, is hoping to help Indian doctors detect diabetic retinopathy early.
The main issue was that the system sometimes failed to give a result at all: it had mostly been trained on high-quality scans, and real-world images don’t always meet that standard.
it was designed to reject images that fell below a certain threshold of quality
The main aim of this article isn’t to attack the use of AI in healthcare; it’s actually the opposite. I think that with the FDA vetting this software, more companies are likely to enter the field. The FDA can also help companies such as Google perform rigorous training and testing rounds to avoid issues like this in the future.
2. Fostering a patient-centered approach, including device transparency to users 
This is quite an important one. Patients should not need to be experts in machine learning to trust that these neural networks work. Although published ML papers do provide visualizations and graphs, they usually don’t explain why a model arrives at its predictions, nor do they provide evidence that the algorithms will work in real-world scenarios. The former is often referred to as the “black-box” problem.
I also have to note that some of the reported metrics can be almost useless. For instance, suppose you have 100 patients: 98 of them don’t have cancer and 2 of them do. A non-functioning neural network that only ever predicts “no cancer” would achieve 98% accuracy! That sounds amazing, but it really isn’t.
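The accuracy trap above can be sketched in a few lines of Python (the patient numbers are the hypothetical ones from the example):

```python
# 100 patients, only 2 with cancer, and a "model" that never flags cancer.
labels = [1] * 2 + [0] * 98   # 1 = cancer, 0 = no cancer
predictions = [0] * 100        # degenerate model: always predicts no-cancer

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
sensitivity = true_positives / sum(labels)

print(f"accuracy    = {accuracy:.0%}")     # 98%, looks great
print(f"sensitivity = {sensitivity:.0%}")  # 0%, misses every cancer case
```

This is why class-imbalance-aware metrics like sensitivity and F1-score matter far more than raw accuracy in medical settings.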
There is a fair amount of research on visualizing the activations of neural networks to boost their interpretability. One excellent approach, used in a few digital pathology papers, is a framework known as “DeepDream”. It helps visualize activations in CNNs, highlighting the image features that led to, for example, a particular classification.
Applying such a framework to a neural network reveals the most significant features it has captured, which improves interpretability and trust. Other methods also exist, such as class saliency maps.
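To give a feel for the idea behind class saliency maps: the saliency of each input pixel is the magnitude of the gradient of the predicted class score with respect to that pixel. The sketch below uses a toy linear classifier, where that gradient can be written down directly as a row of the weight matrix; real saliency work backpropagates through a CNN instead, and all numbers here are random stand-ins.

```python
import numpy as np

# Toy class saliency map: for a linear model score_c(x) = W[c] @ x,
# the gradient d score_c / d x is simply the row W[c], so the saliency
# of each pixel is |W[c]| at that pixel. Shapes and values are invented.
rng = np.random.default_rng(0)
n_pixels, n_classes = 16, 3            # a tiny 4x4 "image", 3 classes
W = rng.normal(size=(n_classes, n_pixels))
x = rng.normal(size=n_pixels)

scores = W @ x
predicted = int(np.argmax(scores))

saliency = np.abs(W[predicted])        # |d score_c / d x| for the linear model
saliency_map = saliency.reshape(4, 4)  # back to the image layout

print("predicted class:", predicted)
print("most influential pixel:", np.unravel_index(saliency.argmax(), (4, 4)))
```

The resulting 4x4 map shows which pixels most influenced the predicted class score, which is exactly the kind of evidence the transparency step of the plan asks models to surface.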
The plans aren’t finalized yet, and the FDA is still receiving feedback from stakeholders. The FDA has also launched a new Digital Health Center of Excellence, which is likely to speed up the finalization of these plans.
Launched in September of 2020, the CDRH Digital Health Center of Excellence is committed to strategically advancing science and evidence for digital health technologies within the framework of the FDA’s regulatory and oversight role. The goal of the Center is to empower stakeholders to advance health care by fostering responsible and high-quality digital health innovation.
Source: FDA Announcement
I still believe that AI is going to do great things in the medical field. However, one of the best things about this announcement is that it highlights the most crucial issues currently present. I hope that in the long run, developers and companies will start incorporating changes in their papers and products to accommodate and solve these issues. If this happens, it’s likely that we see a lot more of these papers and systems being used in the real world.
- Bridging the interpretability gap on TDS
- FDA Announcement