The use of Artificial Intelligence (AI) in healthcare solutions is increasing day by day. AI can empower doctors with additional analysis and decision support, enhance the patient experience, improve population health, reduce costs, and improve the work-life balance of healthcare providers by relieving their burden. However, in order to serve its purpose correctly, an AI system needs to be trustworthy. Trustworthiness means that healthcare organizations, doctors, and patients should be able to rely on the AI solution as being lawful, ethical, and robust. In this ACM Selects article, based on our experience from Trustworthy AI assessments of healthcare use cases, we have collected a number of recommended readings. The collected articles will assist the reader in gaining a footing in Trustworthy AI for healthcare. This is the first part of the series; part two will be published soon.
We value your feedback and look forward to your guidance on how we can continue to improve ACM Selects together. Your suggestions and opinions on how we can do better are welcome via email at email@example.com.
Definition of Trustworthy AI in general and in healthcare
It is important to know what exactly is meant by “trust”. The following collection of articles can clarify the reader’s understanding of Trustworthy AI.
From the European Perspective
Ethics guidelines for trustworthy AI
Originally published by the European Commission on April 19, 2019.
The following guideline, prepared by the EU High-Level Expert Group on AI, is recommended initial reading. It summarizes European values into a set of high-level requirements for AI systems.
Trustworthy Artificial Intelligence (AI) in Healthcare
Originally published by MedTech Europe on November 19, 2019.
Industry representatives’ comments on the EU guidelines, with specifications tailored to the healthcare sector.
What we talk about when we talk about trust: Theory of trust for AI in healthcare
Originally published in Intelligence-based Medicine, Vol. 1-2, November 2020.
A short essay written by healthcare ethicists that puts the EU Guidelines into perspective and provides additional ideas.
From the American Healthcare Perspective
Trustworthy Augmented Intelligence in Health Care
Originally published in the Journal of Medical Systems on January 12, 2022.
The following article presents a framework for the development and use of AI through the lens of the patient-physician interaction, promoting an evidence-based, ethical approach that advances health equity and reinforces the core values of medicine.
Assessment of Trustworthy AI in healthcare
Having established what trustworthiness means, it is a good idea to understand how one may go about assessing Trustworthy AI in practice.
Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare AI and Cultural Mistrust
First published at CHI ’21, May 8–13, 2021, Yokohama, Japan.
Cultural mistrust of human systems plays a role in people’s perceptions of algorithmic decisions. This work shows that participants with low mistrust of human systems trusted human decisions more than algorithmic decisions and regarded them as fairer, whereas participants with high mistrust of human systems perceived algorithmic and human decisions to be equally trustworthy and fair. The results also suggest that Black participants with high cultural mistrust find healthcare decisions less fair. This calls for future research that purposely recruits and studies different social groups, and examines the dimensions that account for individual differences in experiences with AI, to understand whether one approach will universally improve people’s trust in AI or whether different approaches are needed.
Z-Inspection® is one such approach that has been used to assess a number of AI systems used in healthcare. Z-Inspection is founded on the principles defined by the EU high-level expert group on AI.
What a Philosopher Learned at an AI Ethics Evaluation
First published in AI Ethics Journal 2020 on December 14, 2020.
A short commentary on Z-Inspection® by Brusseau J, describing how one can evaluate AI ethics from a philosopher’s perspective.
The medical algorithmic audit
First published in The Lancet Digital Health, Vol. 4 No. 5, on April 5, 2022.
Auditing is an important part of the assessment. The following article describes a framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of those errors, and anticipating their potential consequences.