
Ethnography, Ethics & Time

Japanese panel painting
  "Ethnographers flit forwards and backwards all the time as we create research objectives, wonder whether what we learnt yesterday is really the full story, and create and debate theories." Ethnographers are not time travelers, but we may be close. Our frameworks and methodologies develop a nuanced understanding of how relationships, processes, and objects evolve over time. This 'temporal expertise' is key to enacting our ethical responsibility to the past and future, says anthropologist Dr. Oliver Pattenden, who will explore these themes in his upcoming EPIC Talk on April 25, 2023: From Complex Histories to Possible Futures: Ethical Practice Across Time. In this Q&A, he discusses the intersection of ethnography, ethics, and time; how to encourage organizations to treat ethics as a core component of decision-making; and what he's learned from his three-year-old son lately. We hope you enjoy this rich conversation! When people think about ethics, time probably isn’t the first thing that comes to mind. What inspired...

Tutorial: Creating a Responsible AI Practice

Learn models and principles to ensure organizations are creating, using, and deploying AI that coworkers, customers, and society can trust. Instructors: KATHY BAXTER (Principal Architect of Ethical AI Practice, Salesforce) & YOAV SCHLESINGER (Architect of Ethical AI Practice, Salesforce) Overview This video has been edited to protect the privacy of participants in the live tutorial. Our lives are directed, enriched, influenced, and sometimes harmed by AI, in both obvious and not-so-obvious ways. Although many AI regulations have been implemented in the last few years and more regulation is coming, organizations cannot wait until they are compelled by external forces to develop responsible AI practices. For the sake of your business, customers, and society, everyone has a responsibility to ensure that the technology they help build, sell, or deploy is fair, transparent, and ethical. In this tutorial, we will walk participants through the steps of creating a responsible AI practice using a combination of lecture,...

Tutorial: Power Tools for Equity in Research & Design

This tutorial gives you and your teams robust, actionable tools for navigating inequity and shifting power hierarchies, from project planning to implementation. Instructors: CHELSEA MAULDIN, Executive Director, Public Policy Lab & NATALIA RADYWYL, Head of Research & Capability, Today Design Overview This video has been edited to protect the privacy of participants in the live tutorial. To do ethical, equitable work in any domain, we need robust tools for assessing and addressing power. Whether we’re creating products, services, or policies, inequities can create direct and indirect risks for research participants and underserved populations. This tutorial gives you robust, actionable tools for navigating inequity through a project life cycle, including planning, research, design, and implementation. You will: Identify power dynamics in research and design projects Learn frameworks and tools to navigate power dynamics through a project lifecycle Learn power-based assessments to use with individuals...

Designing and Conducting Inclusive Research: How a Global Technology Company and an Online Research Platform Partnered to Explore the Technology Experiences of Users who are Deaf and Hard of Hearing

Presentation slide projected on stage: title, "Benefits of Remote Technology". Text: "My phone I use for basically everything. I use it to have..." On the right is a photo of what appears to be a desk with a computer monitor (unclear)
DANA C. GIERDOWSKI Lenovo KAREN EISENHAUER dscout PEGGY HE Lenovo This case study examines how researchers at Lenovo and dscout partnered to conduct a mobile ethnographic study on the technology experiences of individuals who are d/Deaf and hard of hearing, with the goal of making their products and research practices more accessible and inclusive. The study revealed common frustrations and pain points people experience when using their everyday technology. The researchers also learned valuable research design and operations lessons related to recruiting participants who are d/Deaf and hard of hearing, providing accommodations, and establishing an accessible research environment. This case explores the benefits of mobile-forward research design, and the additional considerations and adaptations necessary for collecting both asynchronous and synchronous data from individuals who have hearing loss and who have different communication modes and preferences, including American Sign Language. The authors discuss how more inclusive...

Cultivating Resiliencies for All: The Necessity of Trauma Responsive Research Practices

Presentation slide: an aerial view of a meandering river. Overlaid text: "A Trauma Responsive Development Model. 1. Aware. 2. Sensitive. 3. Informed. 4. Responsive"
MATTHEW BERNIUS Code for America RACHAEL DIETKUS Social Workers Who Design This paper is an exploration of trauma, how and why it can surface during ethnographic and qualitative research, and the importance of anticipating its potential presence. We present a model to help plan for and mitigate the risks of trauma and demonstrate how it fits into broader methodological discussions of conducting safer and more ethical, responsible, and humane research. We close by discussing one pathway for a journey from being sensitive and aware of trauma to actively responding to it at both the individual and organizational levels across your work. Keywords: Trauma informed care, trauma responsive research and design, design research, ethics, qualitative methods Article citation: 2022 EPIC Proceedings pp 9–34, ISSN 1559-8918, https://www.epicpeople.org/epic INTRODUCTION To say that the past few years have been full of trauma feels like a bit of an understatement. As we write this paper, the world is two-and-a-half...

Exiting the Road to Hell: How We Reclaim Agency & Responsibility in Our Fights for Justice

Keynote Speaker: PANTHEA LEE, Reboot Panthea Lee is a strategist, organizer, designer, and facilitator, and the Executive Director of Reboot. She is passionate about building transformative coalitions between communities, activists, movements, and institutions to tackle structural inequity—and working with artists to realize courageous social change. Panthea is a pioneer in designing and guiding multi-stakeholder processes to address complex social challenges, with experience doing so in 30+ countries with partners including UNDP, MacArthur Foundation, Luminate, CIVICUS, Wikimedia, Women’s Refugee Commission, and governments and civil society groups at the national, state, and local levels. The global co-creation efforts she’s led have launched new efforts to protect human rights defenders, tackle public corruption, strengthen participatory democracy, advance equity in knowledge access, reform international agencies, and drive media innovation. Panthea began her career as a journalist, ethnographer, and cultural...

Immersive Ethics: Anticipating Risks and Harms in Virtual and Augmented Reality

Moderator: JILLIAN POWERS, Cognizant Panelists: JORDAN KRAEMER, Anti-Defamation League; ARWA MICHELLE MBOYA, Magic Leap; JESSICA OUTLAW, The Extended Mind LLC As new technologies, from AI to immersive experiences, are developed at scale, they raise ethical concerns for research and design. Data-driven systems have repeatedly been shown to entrench social biases along lines of race, gender, and class, from racist algorithms in the criminal justice system to misgendering trans and nonbinary people. Immersive technologies, such as virtual reality (VR) and augmented reality (AR), however, raise separate and thorny questions for ethical design. Immersive technologies create novel experiences of embodiment and reality, not to mention new sources of personal data. These facets create distinctive challenges for ethics, equity, and inclusion, intensifying the potential harms of misinformation, harassment, privacy violations, surveillance, or unequal access. How can ethnographic research anticipate emergent ethical questions specific...

Creating Future Imaginaries through Indigenous AI

Keynote Speaker: JASON LEWIS, Concordia University Jason Edward Lewis' multidisciplinary research and creative practice has been central to developing Indigenous media art in North America and worldwide, establishing a vital conversation about the interaction between Indigenous culture and computational technology. His contributions comprise scholarly writing, art making and technology research, as well as his leadership of the Initiative for Indigenous Futures and his creation of the Indigenous Futures Research Centre. A digital media theorist, poet, and software designer, Lewis is currently University Research Chair in Computational Media and the Indigenous Future Imaginary and Professor of Computation Arts at Concordia University. At Concordia he also serves as Special Advisor to the Provost on Indigenous Spaces. Lewis spent a decade working in a range of industrial research settings, including Interval Research, US West's Advanced Technology Group, and the Institute for Research on Learning, and, at the turn of...

Tutorial: Power Tools for Equity in Research & Design

Instructors: CHELSEA MAULDIN (Executive Director, Public Policy Lab) & NATALIA RADYWYL (Research Director, Public Policy Lab) This tutorial gives you robust, actionable tools for navigating inequity through a project life cycle. Overview This tutorial was conducted at EPIC2021. Exercises and discussions have been omitted to protect the privacy of participants. To do ethical, equitable work in any domain, we need robust tools for assessing and addressing power. Whether we’re creating products, services, or policies, inequities can create direct and indirect risks for research participants and underserved populations. This tutorial gives you robust, actionable tools for navigating inequity through a project life cycle. Public Policy Lab developed Power Tools over many years of innovative and effective work with at-risk communities. Across planning, research, design, and implementation, the instructors will teach you how to use Power Tools to check biases, inform theories of change and logic models, identify effective...

Tutorial: Data and Ethnography for Better AI Product Development

OVETTA SAMPSON Microsoft Overview Ovetta Sampson covers when, how, and where to integrate ethnography and data science in the exploratory research process to have better and more ethical AI product development outcomes. With a combination of lecture, case study examples, and exercises, attendees gained a clear understanding of why making data a stakeholder in user research will create a more ethical and human-centered AI product. This tutorial is created especially for researchers who understand the need to mix ethnography and data science but just don’t know quite how to do it. Topics include: Bringing data into research planning to help identify and reduce bias Bringing data science into synthesis to help illuminate system solutions Bringing data science into insight and design principle generation Aligning qualitative data and behavioral data This tutorial was presented in full at EPIC2020. The video includes instructor presentations; discussions and breakout sessions are excluded for the privacy of...

Scaling Dignity: An Antidote to Poverty?

LORENN RUSTER 3A Institute, Australian National University PechaKucha Presentation—A wise woman once shared with me that the opposite of poverty isn't wealth. It's dignity. In a world where scale is about optimising for something bigger, faster, easier, broader and more profitable, we risk decision-making that is at odds with preserving, enabling and enhancing human dignity. What if we changed our focus to instead work out how we scale human dignity? This PechaKucha draws on my career across consulting, social enterprise and academia in geographies from Sydney CBD to rural Uganda and highlights three moments where I experienced dignity that I believe can scale. Through the telling of stories it shows glimmers of how we can choose a definition of scale that preferences dignity. It can look like making space for a chicken gift, enshrining dignity in our organisational values and structures and building question-asking muscles. If we believe that the opposite of poverty is dignity, then scaling dignity is an antidote to poverty....

There’s No Playbook for Praxis: Translating Scholarship into Action to Build a More Ethical Bank

JEFFREY GREGER Varo The US banking industry has a long history of excluding, exploiting, or simply ignoring low-income communities, recent immigrants, and racial minorities. In this paper, I share my experiences creating a community of practice where employees of a rapidly-growing banking startup can identify and confront the ethical challenges facing the financial technology (fintech) industry. This community is informed by insights from four years of activism and anthropological research that I conducted with small teams of service designers and ethnographers developing financial services for and with low- to moderate-income communities around the world. Through this research, I identified three institutional logics—insularity, decontextualization, and technological hubris—which limit efforts to build a more inclusive, equitable banking system. These logics hold the potential to lead well-intentioned organizations, and the practitioners they employ, to harm the marginalized communities they set out to help. This paper concludes...

Where Can We Find an Ethics for Scale?: How to Define an Ethical Infrastructure for the Development of Future Technologies at Global Scale

IAN DULL ReD Associates FANI NTAVELOU BAUM ReD Associates THOMAS HUGHES ReD Associates Despite companies facing real consequences for getting ethics wrong, basic ethical questions in emerging technologies remain unresolved. Companies have begun trying to answer these tough questions, but their techniques are often hindered by the classical approach of moral philosophy and ethics – namely normative philosophy – which prescribes an approach to resolving ethical dilemmas from the outset, based on assumed moral truths. In contrast, we propose that a key foundation for ‘getting ethics right’ is to do the opposite: to discover them, by going out into the world to study how relevant people resolve similar ethical dilemmas in their daily lives – a project we term ‘grounded ethics’. Building from Durkheim's theory of moral facts and more recent developments in the anthropology of morals and ethics, this paper explores the methods and theory useful to such a mission – synthesizing these into a framework to guide future...

Tutorial: Ethics in Data-Driven Industries

EMANUEL MOSS CUNY/Data & Society FRIEDERIKE SCHÜÜR Cityblock Health Overview Technology companies have discovered ethics in the wake of public pressure to consider the consequences of their products. This has been prompted by the finding that machine learning and artificial intelligence (ML/AI) systems, as fundamentally pattern-seeking technologies, can and do exacerbate long-term structural inequalities. Companies and employees also struggle with the challenges posed by the dual-use nature of technology. This tutorial will prepare you to understand and contribute to the more ethical development and deployment of ML/AI systems. It covers: An overview of ethical challenges in ML/AI today An introduction to the development of ML/AI systems, designed to give you insight into the reasoning processes and workflows of technical colleagues and how they generally address issues like accuracy and fairness (no quantitative background required!) An overview of current efforts to design more ethical ML/AI systems,...

Calibrating Agency: Human-Autonomy Teaming and the Future of Work amid Highly Automated Systems

LEE CESAFSKY Alliance Innovation Lab – Silicon Valley ERIK STAYTON Alliance Innovation Lab – Silicon Valley, MIT MELISSA CEFKIN Alliance Innovation Lab – Silicon Valley This paper explores how the design of everyday interactions with artificial intelligence in work systems relates to broader issues of interest to social scientists and ethicists: namely human well-being and social inequality. The paper uses experience designing human interactions with highly automated systems as a lens for looking at the social implications of work design, and argues that what human and automation each do is less important than how human and automation are structured to interact. The Human-Autonomy Teaming (HAT) paradigm, explored in the paper, has been a promising alternative way to think about human interactions with automation in our laboratory's research and development work. We argue that the notion of teaming is particularly useful in that it encourages designers to consider human well-being as central to the operational success...