From product recommendations to image recognition to healthcare and criminal justice, algorithmic systems are shaping our lives in both obvious and subtle ways. It is important to understand where and how these systems can reflect, amplify, or introduce bias into our decision-making, particularly in life-altering situations and for vulnerable populations, where these biases can result in harm.
This week's Selects provides a snapshot of the work being done in algorithmic fairness. Our selections were made with the intention of:
- Providing a starting point for understanding the nuances of algorithmic bias;
- Highlighting work and results from research, research-to-practice efforts, and interdisciplinary discussions;
- Offering an example of how fairness can be integrated and iterated upon in products and services;
- Pointing to further ACM resources and technical materials for those interested in learning more.
As always, we kindly encourage you to send feedback and suggestions on how we can do better to email@example.com. We look forward to your guidance on how we can continue to improve ACM Selects together.
TL;DR (Too long; do read)
That’s not fair!
First published in XRDS: Crossroads, the ACM Magazine for Students, Vol. 25, No. 3, April 2019.
In her XRDS: Crossroads article, Deborah Raji shares her thoughts on algorithmic bias in machine learning systems. Raji explains how the inherent characteristics that make machine learning systems appealing can be prone to unfairness, and the misconceptions we have about human understanding and trained machine understanding. This article provides an excellent and nuanced introduction to the challenges surrounding algorithmic fairness as well as what technical and human parameters computer scientists should consider when applying machine learning in the real world.
Algorithmic Bias and Fairness: Crash Course AI #18
First published as Algorithmic Bias and Fairness: Crash Course AI #18 for PBS' Crash Course AI.
Written by ACM Future of Computing Academy alumni Yonatan Bisk, Lara Yarosh, and Tim Weninger, this episode of PBS's YouTube series Crash Course AI presents an introductory dive into five common types of algorithmic bias in artificial intelligence and machine learning.
"Fairness in machine learning requires our attention. This is a decision-making technology of unprecedented impact."
- Deborah Raji,
Founder/Executive Director, Project Include; Fellow, Mozilla
Things to Know
10 things you should know about algorithmic fairness
First published in Interactions, Vol. 26, No. 4, June 2019.
Algorithmic fairness remains one of the biggest challenges in deploying machine learning systems at scale. These systems can reflect, amplify, or introduce biases in areas such as predictive policing, child welfare, and the online housing market, where algorithmically aided decision making can have serious consequences for underrepresented individuals and groups. In her Interactions article, Google Senior Staff User Experience Researcher Allison Woodruff provides her perspective on 10 things that define the problem space, with the goal of clearing up misconceptions about algorithmic fairness and providing practical guidance for real-world practice.
A snapshot of the frontiers of fairness in machine learning
First published in Communications of the ACM, Vol. 63, No. 5, April 2020.
While algorithmic fairness, accountability, and transparency form a rapidly growing subfield of machine learning, researchers are still struggling to find fundamental definitions of fairness that can be applied to systems across different scenarios and settings. This Communications of the ACM article by Alexandra Chouldechova and Aaron Roth captures several key ideas and directions in machine learning fairness as discussed by a group of 50 experts from academia, industry, and policy. While their convening and this article are not intended to comprehensively cover the entire field, the article serves as a good entry point for developing a scientific foundation for understanding algorithmic bias.
Fairness in Machine Learning with Tulsee Doshi
First published as an ACM Tech Talk, February 26, 2020.
In this ACM Tech Talk, Google Product Lead for ML Fairness and Responsible AI Tulsee Doshi discusses several lessons Google has learned through the development of its products and research, as well as some of the approaches its developers have taken to address concerns around fairness. Doshi also touches upon the role of explainability in fairness, and some tools and techniques for those looking to get involved in the field. This tech talk provides perspective on how fairness can be applied in industry, and the variety of ways in which it can be integrated into an existing ecosystem of products and services.
Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms
Published by the Brookings Institution as a report on May 22, 2019.
This paper presents a framework for algorithmic hygiene, which identifies some specific causes of bias and employs best practices to identify and mitigate them. It also presents a set of public policy recommendations that promote the fair and ethical deployment of AI and machine learning technologies. It draws upon the insight of 40 thought leaders from across academic disciplines, industry sectors, and civil society organizations who participated in two roundtable discussions on algorithmic design, accountability, and fairness, as well as the technical and social trade-offs associated with various approaches to bias detection and mitigation. The paper begins with real-world, contemporary examples of algorithmic bias, then discusses its causes, how to detect it, and potential ways to mitigate it. We highly recommend this article to anyone looking for a contemporary picture of the field in a short read.
Perspectives on Fairness
What is the Point of Fairness?
First published in Interactions, Vol. 26, No. 4, April 2020.
As machine learning becomes more ubiquitous, questions of AI and information ethics loom large. Much concern has focused on promoting AI that produces fairer outcomes and does not discriminate against protected classes, such as those marginalized on the basis of gender and race. Yet little of that work has specifically investigated disability.
An interview with Lauren Maffeo: understanding the risks of machine learning bias
First published in Ubiquity, January 2019.
In this interview, research analyst Lauren Maffeo and ACM Ubiquity editor Bushra Anjum discuss Lauren's perspective on machine learning (ML) bias. She describes why bias exists in ML and why bias in black-box ML algorithms may have "far-reaching consequences". Lauren then talks about how she became interested in this topic, her initiatives to spread awareness of ML bias, and some steps to be taken to address this bias for the future of computing.
ACM Code of Ethics and Professional Conduct
The Code is designed to inspire and guide the ethical conduct of all computing professionals and anyone who uses computing technology in an impactful way.
Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good. The ACM Code of Ethics and Professional Conduct ("the Code") expresses the conscience of the profession.
ACM Technology Policy Council
Independent, nonpartisan, and technology-neutral research and resources for policy leaders, stakeholders, and the public on public policy issues, drawn from the deep technical expertise of the computing community.
ACM’s global Technology Policy Council sets the agenda for ACM’s global policy activities and serves as the central convening point for ACM's interactions with government organizations, the computing community, and the public in all matters of public policy related to computing and information technology. The Council’s members are drawn from ACM's global membership. It coordinates the activities of ACM's regional technology policy groups and sets the agenda for global initiatives to address evolving technology policy issues.
Statement on Algorithmic Transparency and Accountability
First published as a statement by the ACM U.S. Public Policy Council and ACM Europe Council Policy Committee in 2017.
Recognizing the ubiquity of algorithms in our daily lives, as well as their far-reaching impact, the ACM US Technology Policy Committee and the ACM Europe Technology Policy Committee have issued a statement and a list of seven principles designed to address potentially harmful bias. The US committee approved the principles earlier in 2017, and the European committee approved them on May 25, 2017.
Bringing together all stakeholders to address the ethical and societal impact of computing.
The ACM Special Interest Group on Computers and Society brings together computer professionals, specialists in other fields, and the public at large to address concerns and raise awareness about the ethical and societal impact of computers. As part of its ongoing efforts to gather and report information, thus stimulating the exchange and dissemination of ideas, SIGCAS publishes an online magazine and co-sponsors national and international conferences such as the International Symposium on Technology and Society, the Computers, Freedom and Privacy Conference, the Computers and Quality of Life Symposium, and the Computer Ethics and Philosophical Enquiry Conference.
ACM FAccT (formerly ACM FAT*)
Next conference to be held in 2021.
A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
[Read the ACM press release on 2020 highlights]
AAAI/ACM Conference on AI, Ethics, and Society
Virtual conference to be held on May 19-21, 2021.
Concerns about the impact of AI on society have continued to grow in the year since AAAI and ACM joined to create the first Conference on AI, Ethics and Society. In the vision of this joint effort, it is only through multidisciplinary engagement and scholarship that we can hope to develop good responses to the challenge of ensuring that AI develops in a way that is safe and beneficial for everyone.
50 Years of Test (Un)fairness: Lessons for Machine Learning
First presented at FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, January 2019.
Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. In this paper, the authors trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work.
Improving Fairness in Machine Learning Systems: What Practitioners Need?
First presented at CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, May 2019.
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, the authors conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. The authors identify areas of alignment and disconnect between the challenges faced by teams in practice and the solutions proposed in the fair ML research literature. Based on these findings, they highlight directions for future ML and HCI research that will better address practitioners' needs.
Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements
First presented at AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, January 2019.
As more researchers have become aware of and passionate about algorithmic fairness, there has been an explosion of papers laying out new metrics, suggesting algorithms to address issues, and calling attention to problems in existing applications of machine learning. In this paper, the authors provide a case study on applying fairness research in machine learning to a production classification system, and offer new insights into how to measure and address algorithmic fairness issues. The paper discusses open questions in implementing equality of opportunity and describes the authors' fairness metric, conditional equality, which takes distributional differences into account. Further, the paper provides a new approach to improving on this metric during model training and demonstrates its efficacy in improving performance for a real-world product.
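To give a concrete feel for the kind of metric the paper builds on, equality of opportunity asks that a classifier's true positive rate be similar across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea (the function names and toy data are ours, not from the paper, and it does not implement the paper's conditional equality metric):

```python
# Sketch: measuring an equality-of-opportunity gap, i.e. the difference
# in true positive rates (TPR) between two demographic groups.
# All names and numbers here are illustrative, not from the paper.

def true_positive_rate(y_true, y_pred):
    # TPR = correctly predicted positives / all actual positives
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for _, p in positives if p == 1) / len(positives)

def opportunity_gap(y_true, y_pred, group):
    # Split examples by group membership and compare per-group TPRs.
    a_true = [t for t, g in zip(y_true, group) if g == "A"]
    a_pred = [p for p, g in zip(y_pred, group) if g == "A"]
    b_true = [t for t, g in zip(y_true, group) if g == "B"]
    b_pred = [p for p, g in zip(y_pred, group) if g == "B"]
    return abs(true_positive_rate(a_true, a_pred) -
               true_positive_rate(b_true, b_pred))

# Toy example: group A's positives are all caught (TPR 2/2 = 1.0),
# but group B's are only half caught (TPR 1/2 = 0.5).
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 0, 1]
group  = ["A", "A", "A", "B", "B", "B"]
print(opportunity_gap(y_true, y_pred, group))  # → 0.5
```

A gap of zero would mean the two groups receive correct positive decisions at the same rate; the paper's conditional equality metric refines this style of comparison by conditioning on distributional differences between groups.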