Moral Crumple Zones: Agency and Accountability in Human-AI Interaction


An EPIC Talk with MADELEINE CLARE ELISH, Data & Society

Video, approx. 50 minutes

Overview

Breathless rhetoric about AI has promised safer, more accurate systems that would take the “human out of the loop.” With more nuanced visions of AI, not to mention some high-profile catastrophes, the prevailing rhetoric now promises that keeping a “human in the loop” at key points will ensure effective oversight. But neither model accurately captures agency—the complex dynamics of cooperation and control in human-AI systems—or who should be held accountable when something goes wrong.

In this talk Madeleine will outline the distributed nature of agency in sociotechnical systems and present the concept of a “moral crumple zone” to describe how responsibility for an action may be misattributed to a human actor who actually had limited control over the behavior of the system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may—accidentally or intentionally—bear the brunt of moral and legal responsibility when the overall system malfunctions. While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.

Madeleine will present case studies of high-profile accidents and invite participants to explore the challenges and opportunities these bring to light for the design and regulation of human-robot systems.

References:

Elish, Madeleine Clare (2019) “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction.” Engaging Science, Technology and Society 5(2019):40–60.

Elish, Madeleine Clare (2018) “The Stakes of Uncertainty: Developing and Integrating Machine Learning in Clinical Care.” 2018 Ethnographic Praxis in Industry Conference Proceedings, pp. 364–380.

Elish, Madeleine Clare and Tim Hwang (2015) “Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation.” Data & Society Working Paper, February 24, 2015.

Mateescu, Alexandra and Madeleine Clare Elish (2019) “AI in Context: The Labor of Integrating New Technologies.” Data & Society report, January 30, 2019.

Moss, Emanuel and Friederike Schüür (2018) “How Modes of Myth-Making Affect the Particulars of DS/ML Adoption in Industry.” 2018 Ethnographic Praxis in Industry Conference Proceedings, pp. 264–280.

Green, Ben (2018) “‘Fair’ Risk Assessments: A Precarious Approach for Criminal Justice Reform.” Presented at the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018), Stockholm, Sweden.

Presenter

Madeleine Clare Elish is a cultural anthropologist whose work examines the social impacts of AI and automation. As Research Lead and co-founder of the AI on the Ground Initiative at Data & Society, she works to inform the ethical design, use, and governance of AI systems through social science research and human-centered ethnographic perspectives. Her recent research has focused on how AI technologies affect understandings of equity, values, and ethical norms, and how professional work lives change in response. She has conducted fieldwork across varied industries and communities, ranging from the Air Force, civilian drone regulation, and commercial aviation to precision agriculture and emergency clinical care. Her research has been published and cited in scholarly journals as well as publications including The New York Times, Slate, The Guardian, Vice, and USA Today. She holds a PhD in Anthropology from Columbia University and an SM in Comparative Media Studies from MIT.