The use of Artificial Intelligence (AI) for healthcare solutions is increasing day by day. AI can empower doctors with additional analysis and decision support, enhance the patient experience, improve population health, reduce costs, and improve the work-life balance of healthcare providers by relieving their burden. However, to serve its purpose, an AI system needs to be trustworthy. Trustworthiness means that healthcare organizations, doctors, and patients can rely on the AI solution to be lawful, ethical, and robust. In this ACM Selects, drawing on our experience from Trustworthy AI assessments of healthcare use cases, we have collected a number of worthwhile resources. The collected articles will help the reader gain a footing in Trustworthy AI for healthcare. This is the second part of the series; the first part is available here.
We value your feedback and look forward to your guidance on how we can continue to improve ACM Selects together. Please send your suggestions and opinions on how we can do better via email to selects-feedback@acm.org.
Concepts to Ensure Trustworthy AI in Healthcare
[Transparency] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
Originally published in BMC Medical Informatics and Decision Making volume 20, Article number: 310, November 30, 2020.
Transparency is a key factor in achieving trust in healthcare. This paper examines the importance of transparency and explainability for AI in healthcare from multiple disciplinary perspectives, in particular how explanations can make an AI system more trustworthy.
[Explainability] The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies
Originally published in Journal of Biomedical Informatics, Volume 113, January 2021, 103655.
Explainable modeling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice, and complementary measures may be needed to create trustworthy AI in healthcare. This survey reviews the terminology, design choices, and evaluation strategies involved in using explainability to build trustworthy AI for health care.
[End-User informedness] The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI
Originally published in Artificial Intelligence in Medicine, Volume 107, July 2020, 101901.
Gaining the trust of patients is as important as gaining the trust of doctors. Patients should be able to contest the diagnoses of AI diagnostic systems. The authors argue that effective contestation of patient-relevant aspects of AI diagnoses requires information about four dimensions: 1) the AI system's use of data, 2) the system's potential biases, 3) the system's performance, and 4) the division of labor between the system and healthcare professionals.
Co-design to Ensure Trustworthy AI in Healthcare
Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier
Originally published in Digital Impacts, a section of the journal Frontiers in Human Dynamics, Vol. 3, July 13, 2021, 688152.
Designing a trustworthy AI system in healthcare calls for a co-design approach in which stakeholders from all sectors impacted by the AI system can participate organically. This article describes such a co-design process for an AI system used to detect skin cancer.
"The human body is a black box": supporting clinical decision-making with deep learning
Originally published in FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, Pages 99–109.
This paper offers another excellent example of co-designing an AI tool, Sepsis Watch, a deep-learning-based clinical decision-support system. The authors report on the challenges they faced and what worked for them in this real-world use case.
Risks
Achieving Trustworthy AI requires mitigating risks. The following articles discuss sources of risk in AI for healthcare.
Prediction models for diagnosis and prognosis of covid-19: a systematic review and critical appraisal
Originally published in The BMJ, doi:10.1136/bmj.m1328, April 7, 2020.
This review finds that almost all published COVID-19 prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic.
Dual use of artificial-intelligence-powered drug discovery
Originally published in Nature Machine Intelligence, Volume 4, pages 189–191, March 7, 2022.
This article discusses how artificial intelligence (AI) technologies for drug discovery could be misused for the de novo design of biochemical weapons with very little effort.
Deskilling of medical professionals: an unintended consequence of AI implementation?
Originally published in Giornale di Filosofia, Volume 2, December 15, 2021.
The deskilling, or outright replacement, of professionals is a major concern with AI systems in every sector. This paper examines how past technological advances changed the skills of healthcare professionals, shows how AI systems could lead to another round of deskilling, and discusses the problems that could arise from it.