January 17, 2021


8 Links of Separation

Uber-like and breast cancer AI algorithm is not the same. | by Claudia Morato | Nov, 2020

The generation of medical and scientific information has transformed the healthcare, pharma, device, and technology industries at a pace never seen before. Data that is generated, captured, and analyzed accurately and in real time is the industry’s new currency, and patients now expect it to deliver potential cures or improved outcomes.

The key challenges facing the pharma industry today are complex, driven by a demanding environment:

  1. The pressure to reduce R&D cycles
  2. Shorter time on the market
  3. Declining peak sales
  4. Price and reimbursement pressure
  5. The need to innovate and thrive in digital transformation by developing an entirely new set of capabilities

While pharma struggles to establish the right digital business model, technology giants have ramped up their investment in digital health initiatives. Google, Facebook, Apple, and Amazon have invested heavily in health-monitoring devices and virtual care. In addition to partnerships with health insurers, the tech giants are devoting their expertise to Artificial Intelligence, especially Machine Learning. Radiology is one of AI’s early beneficiaries, with the promise of earlier detection, more accurate assessment of complex images, and less expensive testing for patients across clinical areas.


In January 2020, Google published in Nature a high-potential AI model for breast cancer screening. While the model developed by McKinney et al. was “capable of surpassing human experts in breast cancer prediction,” it also demonstrated some important limitations of AI solutions in healthcare, especially when human lives are at the core of the algorithm. The lack of transparency in the publication alerted the scientific community to the potential “promotion of a closed technology” by a Big Tech company in one of the most important scientific journals.

Google’s expertise in the technology field is beyond doubt. On the other hand, technology companies are not used to managing important aspects of ethics and transparency in research, traditionally handled by pharmaceutical companies and academic health centers. What can the technology industry, pharma/life sciences, and academia learn from this episode?

1. Be prepared to share details about the methodology and the computer code used to train the model, as well as the resulting set of parameters

In drug development, it is imperative to disclose the potential mechanism of action, molecule structure, PK/PD, and the results in preclinical and clinical models. Rigorous ethical standards apply to guarantee transparency, data accuracy, and real benefit to patients. The same principles should apply to deep-learning models, to avoid hiding behind the complexity of a potentially non-reproducible algorithm.

2. A supplementary appendix is not enough to describe the algorithm

Clinical trials can be replicated daily in the real world as well as in additional research. For AI models, computational reproducibility is essential, especially when a model claims to be “superior to breast cancer experts in diagnosing.” The code must be disclosed transparently, since a textual description alone is insufficient for evaluation.
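Computational reproducibility begins with small habits such as seeding every source of randomness, so that a disclosed codebase produces the same model twice. A minimal pure-Python sketch of the idea (the `train_step` function is hypothetical, used here only to stand in for a training run):

```python
import random

def train_step(seed: int) -> list[float]:
    """Simulate a 'training run' whose outcome depends on random initialization."""
    rng = random.Random(seed)  # seed the generator explicitly
    # Draw three "weights" from a standard normal distribution.
    return [rng.gauss(0.0, 1.0) for _ in range(3)]

# With the seed disclosed, anyone can reproduce the exact same "model".
run_a = train_step(seed=42)
run_b = train_step(seed=42)
assert run_a == run_b  # identical results across runs
```

Real deep-learning frameworks have additional sources of nondeterminism (GPU kernels, data-loading order), which is another reason a textual description alone cannot guarantee reproducibility.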

3. The scientific community is alert and will point out important information missing from the research

Haibe-Kains et al. highlighted at least six hyperparameters essential for reproducing the study for each of the models used: learning rate, learning rate schedule, optimizer, momentum, batch size, and number of epochs.
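The six hyperparameters above could be disclosed as a small machine-readable file published alongside the paper. A minimal sketch using Python’s standard library (the values shown are placeholders, not those of the McKinney et al. model):

```python
import json

# Hypothetical hyperparameters -- placeholders, not the published model's values.
hyperparameters = {
    "learning_rate": 1e-4,
    "learning_rate_schedule": "cosine_decay",
    "optimizer": "adam",
    "momentum": 0.9,
    "batch_size": 32,
    "epochs": 50,
}

# Serializing to JSON makes the settings easy to publish and to reload verbatim.
with open("hyperparameters.json", "w") as f:
    json.dump(hyperparameters, f, indent=2)

with open("hyperparameters.json") as f:
    reloaded = json.load(f)

assert reloaded == hyperparameters  # the disclosed settings round-trip exactly
```

A reviewer who receives such a file can plug the values directly into a reimplementation instead of guessing them from prose.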

4. Disclose model prediction and data labels in case your raw data can’t be released

Privacy of patient health information should always come first. Over the last 20 years, raw data availability has grown from 1% to 20%.

5. Use available platforms to share code and deep learning models

GitHub and GitLab are among the available code-hosting platforms. For deep-learning models, TensorFlow Hub, Model Zoo, and PyTorch Hub can be used.

An alternative is to create artificial examples with public datasets to demonstrate how the data can be processed to train the model and generate the predictions.
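Such an artificial example can be a tiny end-to-end script showing how inputs are processed, a model is trained, and predictions are generated. A pure-Python sketch of the idea, with the hyperparameters disclosed up front (logistic regression by gradient descent on made-up data, not the published pipeline):

```python
import math
import random

rng = random.Random(0)  # fixed seed for reproducibility

# Synthetic "dataset": one feature; the label is 1 when the feature is positive.
X = [rng.uniform(-2.0, 2.0) for _ in range(200)]
y = [1 if x > 0 else 0 for x in X]

# Hyperparameters disclosed up front, as items 1 and 3 recommend.
learning_rate, epochs = 0.5, 100

# Logistic regression trained by stochastic gradient descent.
w, b = 0.0, 0.0
for _ in range(epochs):
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w * xi + b)))  # sigmoid prediction
        w -= learning_rate * (p - yi) * xi          # gradient step on the weight
        b -= learning_rate * (p - yi)               # gradient step on the bias

# Generate predictions and measure training accuracy.
preds = [1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0 for x in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
```

Because the data, seed, and hyperparameters are all in the script, anyone can rerun it and obtain the same model, which is exactly the property the critics found missing.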

6. Consider the regulatory implications and uncertainties around medical device classification and the use of AI technology.

As AI enters medical practice, developers, companies, and physicians need to know how regulatory agencies and the law will assign liability for injuries that arise from the interaction between algorithms and practitioners.

7. Collaboration among the medical community, computer scientists, tech companies, and patients is essential from the start.


Ethics in research should also apply to tech development in this field, including an extensive discussion of its implications, to avoid the problems seen in other areas (Microsoft’s chatbot, racially biased algorithms, and self-driving car fatalities).

Artificial intelligence has contributed to addressing a variety of medical conditions, including in the oncology field. However, ethics and transparency in research must be the key imperatives, so that the promotional and commercial interests of big groups do not threaten the core medical principle:

“Primum non nocere” — first, do no harm
