Fairness

Tech Colonialism Today

EPIC2019 Keynote Address, Providence, Rhode Island

SAREETA AMRUTE, Director of Research, Data & Society; Associate Professor of Anthropology, University of Washington

Studies on the social effects of computing have enumerated the harms done by AI, social media, and algorithmic decision-making to underrepresented communities in the United States and around the globe. I argue that the approaches of enumerating harms and arguing for inclusion have to be unsettled. These approaches, while important, frame populations as victims whose existence is dominated by and divided from centers of power. What we lack is a structural analysis of how these harms fit into a larger social and economic pattern. I ask us to consider instead whether all of these harms add up to computing technologies being one of the largest aspects of a colonial relationship today. Using historical evidence, the talk will consider what makes something ‘colonial’ to begin with, and then weigh corporate computing’s relationship with the world to gauge whether...

What’s Fair in a Data-Mediated World?

Chair: ELIZABETH CHURCHILL, Director of User Experience, Google

Panelists: MIRIAM LUECK AVERY, Mozilla; ASTRID COUNTEE, Data for Democracy; NATHAN GOOD, Good Research

This EPIC2018 panel addresses questions of fairness and justice in data-centric systems. The many social problems caused by data-centric systems are well known, but what options are available to us to make things better? Chair Elizabeth Churchill draws the panelists and audience into conversation about making change on many levels, in our daily work as well as in larger-scale collaborations. Elizabeth Churchill is a Director of User Experience at Google. She has built research groups and led research at a number of well-known companies, including as Director of Human Computer Interaction at eBay Research Labs, Principal Research Scientist and Research Manager at Yahoo!, and Senior Scientist at PARC and Fuji Xerox’s research lab. Elizabeth has more than 50 patents granted or pending, and 5 co-edited and 2 co-authored books (Foundations for...

Humans Can Be Cranky and Data Is Naive: Using Subjective Evidence to Drive Automated Decisions at Airbnb

STEPHANIE CARTER, Airbnb
RICHARD DEAR, Airbnb

Case Study: How can we build fairness into automated systems, and what evidence is needed to do so? Recently, Airbnb grappled with this question as it brainstormed ways to re-envision how hosts review the guests who stay with them. Reviews are key to how Airbnb builds trust between strangers. In 2018 we started to think about new ways to leverage host reviews for decision making at scale, such as identifying exceptional guests for a potential loyalty program or notifying guests who need to be warned about poor behavior. The challenge is that the evidence available for automated decisions, the star ratings and reviews left by hosts, is inherently subjective and sensitive to the cross-cultural contexts in which it was created. This case study explores how a collaboration between research and data science revealed that the underlying constraint on Airbnb’s use of subjective evidence is a fundamental difference between ‘public’ and ‘private’ feedback. The outcome of this integrated,...
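To make the subjectivity problem concrete, here is a minimal, hypothetical sketch (not Airbnb's actual system or data) of why raw star-rating averages are shaky evidence for automated decisions: raters in different markets use the scale differently, so the sketch normalizes each rating against its own market's distribution and shrinks small samples toward a neutral prior. The market_stats numbers, the normalized_score function, and the prior_weight parameter are all illustrative assumptions.

```python
# Hypothetical illustration of cross-culturally subjective ratings.
# Raters in different markets use the star scale differently, so a raw
# average is misleading; here each rating is converted to a z-score
# against its market's distribution, then shrunk toward a neutral prior.

from statistics import mean

# Toy data: (market, rating) pairs left by hosts for two guests.
guest_a = [("market_x", 5), ("market_x", 5), ("market_x", 5)]  # raw average: 5.0
guest_b = [("market_y", 4), ("market_y", 4), ("market_y", 5)]  # raw average: 4.33

# Assumed market-level rating distributions (would come from historical data).
market_stats = {
    "market_x": {"mean": 4.8, "stdev": 0.4},  # generous market: most hosts give 5s
    "market_y": {"mean": 3.9, "stdev": 0.5},  # reserved market: a 4 is a strong score
}

def normalized_score(ratings, prior_weight=5):
    """Average of per-market z-scores, shrunk toward 0 (the prior belief
    'an average guest') when only a few observations are available."""
    z_scores = [
        (r - market_stats[m]["mean"]) / market_stats[m]["stdev"]
        for m, r in ratings
    ]
    n = len(z_scores)
    # Shrinkage: small n pulls the score toward the prior mean of 0.
    return (n * mean(z_scores)) / (n + prior_weight)

print(normalized_score(guest_a))  # ~0.19: a 5.0 raw average, but 5s are the norm here
print(normalized_score(guest_b))  # ~0.33: lower raw average, yet a stronger relative signal
```

In this toy example the guest with the lower raw average comes out ahead once market norms are accounted for, which is exactly the kind of reversal that makes subjective evidence risky to automate naively.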