Data Science and Ethnography: What’s Our Common Ground, and Why Does It Matter?

As EPIC2018 program co-chairs, we developed the conference theme Evidence to explore how evidence is created, used, and abused. We’ll consider the core types of evidence ethnographers make and use through participant observation, cultural analysis, filmmaking, interviewing, digital and mobile techniques, and other essential methods, as well as new approaches in interdisciplinary and cross-functional teams.1

We’ve also made a special invitation to data scientists to join us in Honolulu to advance the intersection of computational and ethnographic approaches. Why?

One of us is a data scientist (Tye) and the other an ethnographer (Dawn), both working in industry. We regularly see data science and ethnography conceptualized as polar ends of a research spectrum—one as a crunching of colossal data sets, the other as a slow simmer of experiential immersion. Unfortunately, we also see occasional professional stereotyping. A naïve view of “crunching” can make it seem as if all numerical work were brute computational force, as if data scientists never took the time to understand the social context from which data comes. A naïve view of ethnography can make it seem as if ethnography were casual description, “anecdotal” rather than systematic research and analysis grounded in evidence. Neither discipline benefits from these misunderstandings, and in fact there is more common ground than is obvious at first glance.

We also believe that the stakes in expanding this common ground are high. The work of data science is increasingly ubiquitous—computational systems are there “in the wild” when ethnographers go into the field, and have consequences for the human experience that is so central to ethnographic understanding. Data science also offers new opportunities for mixed methods research, for example, to generate a multidimensional understanding of human experience both online and offline.

For data scientists, meanwhile, ethnography can offer a richer understanding of data and its provenance, and of the sociocultural implications of data science work. As Nate Silver has written, “Numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning…. Before we demand more of our data, we need to demand more of ourselves.” There is huge potential when we demand “more of ourselves”, but to realize that potential, people from both fields have to be in the room. It’s not enough to have one side of a conversation.

To spark discussion about these issues, we sat down together to talk about the practices and possibilities we’re seeing in the data science/ethnography landscape. What are some features of data science and ethnography that make collaboration possible in practical terms, not just in theory? What areas of common ground form a basis for creating value, not just for our respective fields but for the organizations we work for and the people and communities we work with?

Our characterizations here gloss complex debates within each field. Exciting innovations are emerging that defy the stereotype that “data scientists do this and ethnographers do that.” Nevertheless, data science/ethnography collaborations are still rare, and part of the reason is that a basic understanding of how “the other” actually works is still hard to come by. So we offer this (edited) conversation between the two of us as a starting point for many more discussions that will move us collectively forward. Please join in!

A Conversation about Data Science and Ethnography

TYE: Research combining quantitative and qualitative methods has been around for a while, of course. There’s a clichéd logic to mixed methods research–“quant” + “qual”, “hard” + “soft”. EPIC people have broken down assumptions about the quant/qual divide and reframed the relationship between ethnography and big data, but the fact is, mixed methods research combining ethnographic and data science approaches is still rare.2 Some examples are Gray et al.’s study of Mechanical Turk workers, Haines’ multidimensional research design, Hill and Mattu’s investigative journalism, and Bob Evans’ work on PACO. There was also our own early work on making sense of telemetry data (written up here for computer scientists and here for ethnographers).

DAWN: Work like this isn’t common yet, but I think it will grow as the debate about fairness in machine learning (ML) heats up. There are now many calls for more interdisciplinary work to make ML systems work better for society. Making technical systems fairer is a problem we all share, so I absolutely agree that there’s a need. But as anyone who has done this kind of research can attest, the real value—and the real work to realize that value—is in the details.

TYE: Some of those details in how we tend to work can be surprisingly similar. Before diving deep, if we look down from 10 miles up, both data science and ethnography have undergone similar expansions: from coherent historical lineages and epistemologies they have expanded into trans-disciplinary practices, with the accompanying tensions and identity issues. Still, there’s a recognizable core to both.

DAWN: That’s true for ethnography, certainly. And it’s created quite a bit of consternation in anthropology, where ethnography originated! Ethnography is now used across anthropology, sociology, marketing, strategy, design, and other fields, but regardless of where it’s used, the core is about understanding people’s beliefs and behaviors and how these change over time. Ethnography is a research skill that makes it possible to see what the world looks like from inside a particular context. If “man [sic] is an animal suspended in webs of significance he himself has spun” (Geertz), this skill involves systematically tracing out the logic of those webs, and examining how those webs structure what people do and think. Depending on the domain of study, these webs can be large scale or small, and in applied work they are often about people’s multidimensional roles as customers, users, employees, or citizens. Ethnographers look at the social world as dynamically evolving, emergent systems. They are emergent systems because people reflexively respond to the present and past, and this response shapes what they do in the future. Years of building ethnography from this core has generated both analytical techniques and a body of knowledge about sociocultural realities.

TYE: Data science, across its variety of forms, is rooted in statistical calculations—involving both the technical knowledge and skill to assess the validity and applicability of these calculations, and the knowledge and skill to implement software or programming functions that execute the calculations. Underpinning the application of statistical calculations are assumptions about systemic structures and their dynamics—e.g., whether or not entities or events operate independently from one another, whether the variability of measurements, relative to an assumed or imputed trend or structure, is “noise” adhering to a separate set of rules (or not), and so on. Historically, these skill sets and conceptions of reality have been most heavily utilized in scientific inquiry, in finance and insurance, and business operations research (e.g., supply chain management and resource allocation). More recently, data science has expanded into a much larger set of domains: marketing, medicine, entertainment, education, law, etc. This expansion has shifted a large portion of data scientists toward data about people—some of that data is directly generated, like emails and web searches, some of it is sensed, like location or physical activity.

DAWN: So has that changed what’s considered “core”?

TYE: No, not really. The core expectation of data scientists is still to apply appropriate statistical calculations, though there has been an increasing emphasis on integrating statistical calculations into wider systems—both human systems and technological.

DAWN: For ethnographers, we still have a core too—it’s still about the ability to appropriately interpret belief systems and behaviors. But now many belief systems and behaviors are changing with data science practices and technical systems. Your core has come into my core! For example, in many parts of the world, datasets from Fitbits or social media metrics are as likely to be found in someone’s home as other cultural artifacts. While both areas have a core set of expectations, they both have to extend beyond their core in order to deal with data about social life—data which has very real social consequences.

TYE: This is all the more true in industry contexts, where we often have to make social decisions, or design decisions, regardless of expertise.

DAWN: One difference is that in many data science scenarios, the available data has already been collected, whereas most ethnographic projects include field research time to gather new data.

TYE: Although this tendency doesn’t hold true all the time, it is a common expectation, and that expectation results in a divergent initial perspective on projects: data scientists often think about working within the available datasets while ethnographers tend to begin their projects by thinking expansively about what dataset could be created, or should be created given the state of the art of the relevant discipline (anthropology, sociology and so forth). This difference in perspectives leads to different attribution models for the results. Data scientists will often describe their results as derived from the data (even if the derivation is complex and practically impossible to trace). Data scientists will readily recognize that they made decisions throughout the project that impacted the results, but will often characterize these decisions as being determined by the data (or by common and proven analyses of the data). You have a totally different way of dealing with that.

DAWN: Yes, for sure. It’s all coming from “the data” but ethnographers themselves are a part of the data. A crucial part. If you were an active part of its creation—if you were there, having conversations with people, looking them in the eye as they try to make sense of your presence—you just can’t see it any other way. It’s unavoidable. You’re also aware of all of the other contingent factors involved in the data you collected in that moment. So we have to be explicitly reflective and critical of how our social position influenced the results. I am a woman from the United States, and when I was studying consumption patterns in Russia for my PhD, this influenced the way I was treated by locals. The two countries already have a ‘web of significance’—including a shared Cold War history—that means that before anyone starts communicating, assumptions are being made. My research questions were also influenced by what questions were conceivable from this position as opposed to some other position. The ethnographic approach to this issue is to treat bias as data about the phenomenon to be explained—not as a corrupting factor to be eliminated. What is it about being a woman, or a person from the US, that elicited some kinds of responses, but not others? What is it about one’s research funding, or intellectual networks, or politics, that leads to this research question and not that one? To use this observation, but ignore that one as “noise”? The ethnographer takes a cold hard look at these issues of context and social position as part of the analysis process.3

TYE: So we have different ways of attributing the results, but the research process is somewhat similar, from what I have experienced. The three main steps in the data science process are:

  1. data sourcing—more than mere access, it’s also about understanding lineage and assessing quality and coverage;
  2. data transformation—from filtering and simple arithmetic transformations to complex abductions like predictions and unsupervised clustering; and
  3. results delivery—both socially and programmatically (i.e., as lines of code).

Of course, this conception of the data science process glosses over some significant details, like the wide range of data transformations involved from cleaning through modeling, and the range of skills and domain knowledge required to deliver results. Also, different data scientists specialize and concentrate their time and energy on different steps.
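To make these three steps concrete for readers, here is a minimal, hypothetical sketch in Python. The file name, column names, and modeling choices are all illustrative assumptions, not drawn from any project we discuss here:

```python
import pandas as pd
from sklearn.cluster import KMeans

# 1. Data sourcing: load the data and check its quality and coverage.
#    "telemetry.csv" and its column names are hypothetical placeholders.
df = pd.read_csv("telemetry.csv")
print(df.isna().mean())  # share of missing values per column

# 2. Data transformation: filter, derive a feature, then cluster.
df = df.dropna(subset=["session_length", "clicks"])
df["clicks_per_minute"] = df["clicks"] / (df["session_length"] / 60)
model = KMeans(n_clusters=3, random_state=0, n_init=10)
df["segment"] = model.fit_predict(df[["session_length", "clicks_per_minute"]])

# 3. Results delivery: programmatically (a file other code can consume)
#    and socially (a summary humans can discuss).
df.to_csv("segmented_telemetry.csv", index=False)
print(df.groupby("segment")[["session_length", "clicks_per_minute"]].mean())
```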

DAWN: Different ethnographers also concentrate their efforts differently. Some revel in field research and description while others are more focused on building concepts and theories from those descriptions. Although not commonly articulated this way, ethnographic work relies on the same basic steps you just outlined: source data, transform it, and share insights.

TYE: One thing I’ve observed about ethnography is that ethnographers often collect metadata while they collect data—e.g., taking notes on why they might have made certain observations instead of others, how the observations align or conflict with their expectations, etc. Provenance is built in. The equivalent metadata about provenance might be recorded post hoc for the data scientist, or she might have to create it by talking to the stakeholders who did the collection.

DAWN: We don’t make hard distinctions between metadata and data, because you don’t know which is which until you do the analysis, but the provenance is definitely still there. And in some ways, we both have a notion of data transformation, even if ethnographers don’t call it that. Both kinds of transformation rely on trained judgment. In data science, trained judgment determines which aggregation function, or classification method, best transforms data into appropriate patterns. Analogously, each ethnographer has to have a way of going from field notes or verbatim speech to a higher-level pattern that says something about the research question. It’s an “aggregation function” in a way, but the aggregation happens through identifying shared qualities in the data. And that’s a matter of trained judgment. You look at what the literature says on the topic, how often this or that theme seems to come up, who talks about it a lot and who talks about it only a little, what their incentives are to talk about it in a certain way or ignore other things, and you triangulate. In many cases, the “write up” is the place where analysis really occurs—where all those transformations become an argument that some phenomenon is happening.
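As a small illustration of how trained judgment enters even the simplest transformation, consider a hypothetical skewed measure, where the choice of aggregation function changes what “the pattern” is (a sketch; the data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical session durations in minutes: most short, a few very long.
durations = rng.lognormal(mean=1.0, sigma=1.0, size=1000)

# Two defensible aggregation functions, two different stories:
print(f"mean:   {durations.mean():.1f} minutes")      # pulled up by the long tail
print(f"median: {np.median(durations):.1f} minutes")  # the 'typical' session
```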

TYE: Trained judgment is certainly paramount in data science as well. The relative value of an experienced data scientist versus a freshly minted one is the experience they can draw on to steer a set of transformations (be it data manipulations or algorithm configurations or sample weighting schemes) toward a better, more robust set of outputs. Where I do see some significant divergence is around the sharing of results: ethnographic findings are rarely delivered programmatically. They are textual manuscripts and/or presentations that explicate the findings, including how the ethnographer’s social position shaped those findings.

DAWN: I have in fact seen one ethnography done through programming—an analysis of GitHub’s count of repositories, and the claims GitHub made about it, that essentially “recounted” the repositories by making API calls, processing the results in R, and then both reporting on that process and analyzing the new numbers in relation to the scholarship on the cultures that form in places like GitHub. But the code was not the ‘deliverable’; the writing was.
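That analysis was done in R; a rough sketch of the same move in Python might query GitHub’s public search API and read off its reported total. The query here is illustrative, not the one used in that study:

```python
import requests

# GitHub's search API reports a total_count alongside each page of results.
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "created:>=2014-01-01"},  # illustrative query
    headers={"Accept": "application/vnd.github+json"},
)
resp.raise_for_status()
print("Repositories matching the query:", resp.json()["total_count"])
# The ethnographic move comes next: relating this count to GitHub's own
# public claims, and to scholarship on how such numbers get made and used.
```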

TYE: Ultimately, though, both processes are driven by people—the data scientist(s), the ethnographer(s). It is often the intuition of the researcher, or exogenous sociocultural pressures, that drives iterations in the research process. Neither is a linear set of steps. But a key difference is that ethnographic work critically assesses the role of the researchers as an explicit, expected part of the research process. If data science projects were truly determined by the data alone (sensor data, click data and so forth), then repeated analyses should yield identical results. They don’t. More light has been shed on this recently, captured in concepts like “p-hacking”. Minimally, it’s clear that data science processes could benefit from more documentation and critical reflection on the effect of the data scientist themselves. The ethnographer’s ability to identify and explicate researcher biases and social pressures could be helpful.
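A small demonstration of the point that results are not determined by the data alone: the same data and the same algorithm, run with different (equally defensible) initializations, yield different outputs. This sketch uses simulated data:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(300, 2))  # hypothetical, unstructured data

# Same data, same algorithm, different random initializations:
for seed in (1, 2, 3):
    km = KMeans(n_clusters=3, random_state=seed, n_init=1).fit(X)
    print(f"seed={seed}  inertia={km.inertia_:.2f}")
# The fit quality and the cluster assignments shift run to run; the analyst's
# choices, not "the data alone", settle which result gets reported.
```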

DAWN: I’m always curious about how data scientists measure the consistency or sensitivity of results from datasets. You have a notion of confidence intervals that communicates in a shorthand way “this is the size of grain of salt you have to take.” Ethnography doesn’t look at the world probabilistically, so we can never say, “9 of 10 times this will be the case.” But there are patterns, and those patterns can be relied upon for some purposes but not others. Even though we have messy complicated debates about how culture “scales” (which isn’t the same thing as reliability of results, but it’s related), we still don’t have clear ways to communicate to clients “this is the size of the grain of salt you need to take.”
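For readers unfamiliar with the mechanics, one common way data scientists quantify that “grain of salt” is a bootstrap confidence interval. A minimal sketch, with simulated data standing in for real measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=5.0, size=200)  # hypothetical measurements

# Bootstrap: resample with replacement and recompute the statistic many times;
# the spread of those recomputations is the "size of the grain of salt".
boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```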

TYE: Another area ripe for collaboration is in research question formulation and data gathering. To the extent that some data scientists do collect their own data, an ethnographic intuition about what is fruitful to collect, what biases are likely to come into play, and what outliers could be signal, can be useful.

DAWN: And vice versa. Some research questions are only conceivable after a data scientist has looked at a particular dataset, and finds patterns that require further explanation. Those questions can be hugely ethnographically interesting, and not ones that ethnographers can find on their own. Pursuing them qualitatively could lead to new data science questions, and so on (see also this). In both cases, explicitly documenting research iterations, and the factors that drive them, is critical to reaching the most robust and viable results.

TYE: We touched on data provenance earlier, but I want to come back to it from the perspective of quantitative data. In particular, I think it is critical to keep in mind that the systems that generate quantitative data are necessarily embedded in socio-technical systems. The technological elements of those systems (electronic sensors, software-based telemetry, etc.) are designed, manufactured, and maintained within sociocultural contexts. So a data scientist who is diligently trying to understand where their data comes from in order to interpret it will, sooner or later, need to understand the sociocultural phenomena that produced the data, even if that understanding is more metadata than data. It would make sense to co-develop rubrics for assessing the quality of data generated by socio-technical systems. Shining a bright light on the deepest lineage of data that impacts business or design decisions is important for everyone involved. Such assessments could lead to more cautious ways of using data, or be used in efforts to improve the explainability of technical systems.
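What might such a rubric look like? One hypothetical, minimal form, with every field name an assumption, meant only to show how provenance questions could travel alongside a dataset:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRubric:
    """Hypothetical checklist pairing technical and sociocultural lineage."""
    source_system: str                  # e.g., sensor model, telemetry version
    collection_context: str             # who is observed, under what consent
    known_gaps: list[str] = field(default_factory=list)       # who/what is missing
    transformations: list[str] = field(default_factory=list)  # cleaning, joins, models
    fit_for: list[str] = field(default_factory=list)          # decisions it can support
    not_fit_for: list[str] = field(default_factory=list)      # decisions it cannot

rubric = ProvenanceRubric(
    source_system="consumer wearable step counts",
    collection_context="opt-in users; skews toward younger, wealthier owners",
    known_gaps=["non-wearers", "activity the sensor cannot register"],
    not_fit_for=["population-level health claims"],
)
```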

DAWN: I agree there’s a lot of potential in collaborating to illuminate the systems that create data. Part of that potential, I think, will be realized by leveraging the different epistemological assumptions behind our respective approaches. For example, there is unquestionable value in using statistical models as a lens to interpret and forecast sociocultural trends—both business value and value to growing knowledge more generally. But that value is entirely dependent on the quality of the alignment between the statistical model and the sociocultural system(s) it is built for. When there are misalignments and blind spots, the door is opened to validity issues and negative social consequences, such as those coming to light in the debates about fairness in machine learning. There are real disconnects between how data-intensive systems currently work, and what benefits societies.

TYE: Definitely! There’s a lot of work to be done in assessing the quality of that alignment. It requires knowledge from both domains to determine what is missing, what is under- or over-emphasized, and what is mischaracterized entirely.

DAWN: There’s lots more to talk about here, and in Hawaii. I’m curious about what others have done, how teams have formed, what worked and what didn’t, and what ethical issues arose in the process. It’s a worthwhile conversation to have in and of itself, and my hope is that it will feed into our broader conversation about what evidence means, too.

NOTES

1. Some people considering whether they should submit have asked us if we are only seeking mixed methods work, or work that deals with data collected electronically. The answer is a resounding no: as with any other EPIC, we think there is value in juxtaposing ethnographic work from a variety of different contexts, using a variety of different methods and analytic stances.

2. A growing number of publications at EPIC (e.g., by Churchill, Fiore-Silfvast & Neff, Nafus, and Norvaisas & Karpfen) and elsewhere (e.g., by Elish & boyd and Kitchin) have also furthered the discussion about ethnography’s relationship to data and ML, though much work has remained conceptual.

3. This is not to say that ethnographers think it is a good idea to use data that badly represents a population under study, or that they are somehow okay with other sources of bias associated with the statistical meaning of the word. Instead, it is a claim about the need for skillful interpretation when these issues are larger than data collection procedure or appropriate statistical technique, like the problem of which research questions get asked and which do not.

REFERENCES

anderson, ken, Dawn Nafus, Tye Rattenbury & Ryan Aipperspach (2009). “Numbers Have Qualities Too: Experiences with Ethno-Mining.” Ethnographic Praxis in Industry Conference Proceedings.

Churchill, Elizabeth (2017). “The Ethnographic Lens: Perspectives and Opportunities for New Data Dialects.” Perspectives, Ethnographic Praxis in Industry Community, September 26, 2017.

Crawford, Kate (2017). “The Trouble with Bias.” NIPS 2017 Keynote.

Elish, M. C., and boyd, d. (2017). “Situating Methods in the Magic of Big Data and Artificial Intelligence.” Communication Monographs.

Evans, Bob (2016). “Paco—Applying Computational Methods to Scale Qualitative Methods.” Ethnographic Praxis in Industry Conference Proceedings.

Fiore-Silfvast, Brittany and Gina Neff (2013). “What We Talk about When We Talk Data: Valences and the Social Performance of Multiple Metrics in Digital Health.” Ethnographic Praxis in Industry Conference Proceedings.

Geertz, Clifford (1973). The Interpretation of Cultures. New York: Basic Books.

Gray, Mary L., et al. (2016). “The Crowd is a Collaborative Network.” Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. ACM.

Haines, Julia Katherine (2017). “Towards Multi-Dimensional Ethnography.” Ethnographic Praxis in Industry Conference Proceedings.

Hill, Kashmir & Surya Mattu (2018). “The House that Spied on Me.” Gizmodo, February 7.

Kitchin, R. (2014). “Big Data, New Epistemologies and Paradigm Shifts.” Big Data & Society 1(1).

Mackenzie, Adrian (forthcoming). “Operative Ethnographies and Large Numbers.” In Knox, H. and D. Nafus (eds.), Ethnography for a Data Saturated World. Manchester: Manchester University Press.

Nafus, Dawn (2016). “The Domestication of Data: Why Embracing Digital Data Means Embracing Bigger Questions.” Ethnographic Praxis in Industry Conference Proceedings, 384–399.

Norvaisas, Julia Marie & Jonathan “Yoni” Karpfen (2014). “Little Data, Big Data and Design at LinkedIn.” Ethnographic Praxis in Industry Conference Proceedings.

Patel, Neal H. (2011). “For a Ruthless Criticism of Everything Existing: Rebellion Against the Quantitative-Qualitative Divide.” Ethnographic Praxis in Industry Conference Proceedings: 43.

Rattenbury, Tye, Dawn Nafus, and ken anderson (2008). “Plastic: A Metaphor for Integrated Technologies.” Proceedings of the 10th International Conference on Ubiquitous Computing, ACM.

Selman, Bill (2014). “Why Do We Conduct Qualitative User Research?” Mozilla UX, October 30.

Image: Overlapping Rhythms by Rosa Say (CC-BY-NC-ND 2.0) via Flickr.


Tye Rattenbury is a Senior Director of Data Science and Machine Learning at Salesforce, supporting the Customer Success organization. He is primarily focused on generating predictions of customer attrition, likelihoods to purchase additional products and services, and forecasts of support volume. Prior to Salesforce, Tye held various data science roles at Intel, R/GA, Facebook, and Trifacta. He holds a PhD in Computer Science from UC Berkeley and a BS in Applied Mathematics from CU Boulder.

Dawn Nafus is a Senior Research Scientist at Intel Corporation, where she conducts anthropological research for new product innovation. Her ethnographic research has been primarily on experiences of time, data literacy, self-tracking and wearables. Most recently, she has been working on instrumentation and data interpretation for community-based environmental health projects. Her work takes place in the US and Europe. She is the editor of Quantified: Biosensing Technologies in Everyday Life and co-author of Self-Tracking. She holds a PhD from University of Cambridge.

