This 2019 project, conducted in the US and the UK, sought to understand which conspiracy theories are harmful and which are benign, with an eye toward finding ways to combat disinformation and extremism. This case study demonstrates how ethnographic methods led to insights into what "triggered" conspiracy belief, the social and emotional roles conspiracy theories played in believers' lives, and how conspiracy belief often reflected a person's general sense of societal alienation. We discovered that any extreme version of a conspiracy theory could be harmful. The findings of this project changed how the client, and by extension the engineers behind major tech platforms, understood harmful conspiracy-related content, and led to a refinement of the algorithms governing the discoverability of such content. The aim of the project was to scale and amplify the work of individual debunkers through algorithmic interventions.
Keywords: Conspiracy theories,...
Facebook Reality Labs
The not-too-distant future may bring more ubiquitous personal computing technologies seamlessly integrated into people's lives, with the potential to augment reality and support human cognition. For such technology to be truly assistive, it must be context-aware. Human experience of context is complex, so the early development of this technology benefits from a collaborative, interdisciplinary approach to research, what the authors call "hybrid methodology," which combines (and challenges) the frameworks, approaches, and methods of machine learning, cognitive science, and anthropology. Hybrid methodology suggests new value that ethnography can offer, but also new ways ethnographers should adapt their methodologies, deliverables, and modes of collaboration for impact in this space. This paper outlines a few of the data collection and analysis approaches emerging from hybrid methodology, along with learnings about impact and team collaboration,...