Trustworthy AI in Healthcare #02

Published August 2, 2022

The use of Artificial Intelligence (AI) for healthcare solutions is increasing rapidly. AI can empower doctors with additional analysis and decision support, enhance the patient experience, improve population health, reduce costs, and improve the work-life balance of healthcare providers by relieving their burden. However, in order to serve its purpose, an AI system needs to be trustworthy. Trustworthiness means that healthcare organizations, doctors, and patients should be able to rely on the AI solution as being lawful, ethical, and robust. In this ACM Selects article, based on our experience from Trustworthy AI assessments of use cases in healthcare, we have collected a number of good-read resources. The collected articles will help the reader gain a footing in Trustworthy AI for healthcare. This is the second part of the series; the first part is available here.

We value your feedback and look forward to your guidance on how we can continue to improve ACM Selects together. Your suggestions and opinions on how we can do better are welcome via email through selects-feedback@acm.org.

Concepts to Ensure Trustworthy AI in Healthcare

[Transparency] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

Originally published in BMC Medical Informatics and Decision Making volume 20, Article number: 310, November 30, 2020.

Transparency is a key factor in achieving trust in healthcare. The following paper shows the importance of transparency and explainability for AI in healthcare from different perspectives, especially how explanations can make an AI system more trustworthy. 

[Read More]

 

[Explainability] The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies

Originally published in Journal of Biomedical Informatics, Volume 113, January 2021, 103655.

Explainable modeling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice and complementary measures might be needed to create trustworthy AI in healthcare. The following article discusses the role of explainability in creating trustworthy artificial intelligence for health care. 

[Read More]  

[End-User informedness] The four dimensions of contestable AI diagnostics - A patient-centric approach to explainable AI

Originally published in Artificial Intelligence in Medicine, Volume 107, July 2020, 101901.

Gaining the trust of patients is as important as gaining the trust of doctors. Patients should be able to contest the diagnoses of AI diagnostic systems. Effective contestation of patient-relevant aspects of AI diagnoses requires the availability of four types of information: 1) the AI system's use of data, 2) the system's potential biases, 3) the system's performance, and 4) the division of labor between the system and healthcare professionals.

[Read More]

Co-design to Ensure Trustworthy AI in Healthcare

Co-design of a Trustworthy AI System for Skin Lesion Classifier

Originally published in Digital Impacts, a section of the journal Frontiers in Human Dynamics, Vol. 3, July 13, 2021, 688152.

Designing a Trustworthy AI System in healthcare needs a co-design approach where stakeholders from all sectors impacted by the AI system can organically participate. The following article discusses a co-design approach for an AI used to detect skin cancer.

[Read More]

 

"The human body is a black box": supporting clinical decision-making with deep learning

Originally published in FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, Pages 99–109.

This paper presents another excellent example of co-designing AI tools, in this case a clinical decision-support system called Sepsis Watch. The authors report on the challenges they faced and what worked for them in this real-world use case.

[Read More]

 

Risks

Achieving Trustworthy AI requires us to mitigate risks. The following few articles discuss sources of risks in AI for healthcare.

Prediction models for diagnosis and prognosis of covid-19: a systematic review and critical appraisal

First published in The BMJ as 10.1136/bmj.m1328 on April 7, 2020.

This review article indicates that almost all published prediction models are poorly reported and at high risk of bias such that their reported predictive performance is probably optimistic. 

[Read More]

 

Dual use of artificial-intelligence-powered drug discovery

First published in Nature Machine Intelligence, Volume 4, pages 189–191, March 7, 2022.

This article discusses how artificial intelligence (AI) technologies for drug discovery could be misused, with very little effort, for the de novo design of biochemical weapons.

[Read More] 

Deskilling of medical professionals: an unintended consequence of AI implementation?

First published in Giornale Di Filosofia, Volume 2, December 15, 2021.

Replacement and deskilling of professionals is a major concern about AI systems in every sector. The following paper examines how past technological advances changed the skills of healthcare professionals, then shows how AI systems could lead to another round of deskilling and what problems could arise from it.

[Read More]
