HYBRID METHODOLOGY PROCESS: GUIDELINES FOR THE PROCESS OF INTERDISCIPLINARY TEAM COLLABORATION
Research that is both interdisciplinary and collaborative requires a balancing act between the practices of one discipline and another, such that the team develops new hybrid practices — this in turn means that working together is a process that cannot be taken for granted. The sections above have outlined the key hybrid methodology approaches for research, analysis, and impact. What follows are guidelines, based on our learnings, for effective collaboration within an interdisciplinary team. The points are ordered by when in the project process each is most relevant and useful (from framing to analysis), with the last point addressing the general ethos throughout a project.
Let the Research Question Be the Team’s Home Base
For complex research questions, we need to flip the decision-making process on its head. Rather than using a discipline to define the methodology, we instead let the research question drive the methodology decisions. The major advantage of a highly interdisciplinary team is that it unlocks a large set of tools and methods that can be used to answer a central research question. We found that certain methods came to the fore at distinct stages of our research, and that each discipline had something crucial to contribute at different stages of the design and analysis, so we strove to set aside the mentality of “this is how we conduct research in Discipline X” and instead adopt the thinking, “this is how we best answer Question Y.” The resulting process is more than interdisciplinary; the cross-pollination and switching between methods becomes so frequent and fluid as to create something more like a hybrid — hence hybrid methodology.
Prepare for an Immersion into Each Other’s Fields
Interdisciplinary projects work best when each discipline is given the opportunity to contribute, but also when each discipline understands the others. This means not simply learning about each discipline’s methodologies and problem-solving approaches, but deeply understanding its perspectives and world views. We would advocate for an early immersion, in which each disciplinary expert spends a day shadowing the others, trying to understand how each views the world. This entails listening and observing with openness — what does the workflow of a machine learning engineer actually look like? How does a cognitive scientist run an experiment? How does an ethnographer conduct participant observation in the field? Each disciplinary expert should spend some time in the role of the others prior to fieldwork. When in the field, this spirit of immersion in each other’s perspectives can continue by having researchers with different expertise gather data together. We agreed that two researchers, each from a different discipline, should go into the field together to meet with each participant. This setup gives researchers with a range of knowledge a shared perspective from which to draw — having both been in the field, the pair can discuss how they observed or noted different aspects of the same context.
Build in the Ability to Iterate Extensively
Interdisciplinary projects require constantly developing and improving approaches based on contributions across disciplines and shared learnings as a unit. We advocate for building ways to iterate into the whole process. For example, data collection might be structured in two parts with a break in between to assess and refine approaches and develop early insights. The team can then reconvene at the end of the second part of data collection to review the revised approaches and analyze the data. The discipline experts should regularly review and weigh in on analyses in progress. Time and logistics for this iteration should be built into the project timeline and scope — for instance, ensuring all experts have opportunities to meet and work together in real time at key moments in the research when approaches are being built, assessed, or (if necessary) rebuilt. This may not be unique to hybrid methodology, but it is likely especially critical given the diversity of the research and researchers.
Work with Fuzzy Definitions and Cross-Disciplinary Translations
Language becomes especially important in interdisciplinary projects, as different disciplines might have different definitions of the same term (e.g., “context”) or terms might not yet exist for newly observed phenomena. It is vital to do translation exercises across disciplines, particularly with terms that are common among the disciplines but defined differently in each — for instance, how do machine learning concepts map onto anthropological concepts (e.g. “abstraction” and “pattern”), and how do cognitive science understandings of experience map onto phenomenological and philosophical understandings (e.g. emotion and effort)? In cases where a phenomenon is not well defined by either discipline, new language emerges. We found ourselves working with fuzzy definitions, making a point to talk about what we did not fully know yet, in an effort to define as we went along what these terms meant (for example, the terms we used to break down the components of context), and working toward more concreteness of terms over the course of the project.
Recognize the Value of Different Types of Data
“Data” is one of those terms that is common across disciplines and yet comes in unique forms, from pixels to 0s and 1s to the thick description of a wink (Geertz 1973). Interdisciplinary projects benefit from the full team re-defining “data,” such that each discipline feels that there is both familiar and unfamiliar data being captured. It is important to recognize the value in unfamiliar data and to recognize that data which feels unusable for one discipline is actually incredibly relevant in another. Many disciplines (anthropology, machine learning, cognitive science) value taking a data-driven approach, but that “data” itself may look very different for each discipline.
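One way to picture this shared re-definition of “data” is as a single record that carries both a discipline’s familiar data and another’s unfamiliar data for the same moment in the field. The sketch below is a hypothetical illustration, not the project’s actual data model; all names (`FieldMoment`, `effort_score`, `sensor_summary`, `field_note`) are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: one record holding both the quantitative and the
# qualitative "data" a paired field team might capture for the same moment.
@dataclass
class FieldMoment:
    participant_id: str
    timestamp: str                      # e.g. ISO 8601 "2019-06-03T14:22:00"
    effort_score: Optional[int] = None  # e.g. self-reported mental effort, 1-7
    sensor_summary: dict = field(default_factory=dict)  # e.g. {"hr_bpm": 92}
    field_note: str = ""                # the ethnographer's thick description

    def has_both_kinds(self) -> bool:
        """True when the moment carries quantitative AND qualitative data."""
        quantitative = self.effort_score is not None or bool(self.sensor_summary)
        qualitative = bool(self.field_note.strip())
        return quantitative and qualitative

moment = FieldMoment(
    participant_id="P07",
    timestamp="2019-06-03T14:22:00",
    effort_score=6,
    sensor_summary={"hr_bpm": 92},
    field_note="Caretaking interruption: paused the task to answer her son.",
)
print(moment.has_both_kinds())  # True
```

A record like this makes the point concrete: a heart-rate summary that feels unusable to an ethnographer and a thick description that feels unusable to an engineer sit side by side, and each discipline can query the field the other contributed.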
Find the Highest Helpful Level of Abstraction
In order for an insight or concept (e.g. about human experience or human behavior) to be relevant and actionable across disciplines, it needs a certain level of abstraction from raw data, so that it translates not only across individual data points but across disciplines — yet it cannot be so abstract that it loses the specificity and actionability each discipline needs, rendering it meaningless. In our case, abstractions ideally allow us to develop knowledge that generalizes beyond any one individual’s experience of context, for actionability and relevance beyond our participant pool. For example, it might be too abstract to say that social interactions are one aspect of context that impacts experience, but saying that certain types of social interactions (e.g. caretaking, collaboration) impact the experience of context might be at the “right” level of abstraction to be directive about what value to offer in interventions or how to build for those interventions. In our project we have learned the value of ‘imperfectly useful abstractions’ that helped us generalize enough, given that we were addressing a technology that does not yet exist, yet that required constant re-evaluation and adjustment of the granularity of the abstraction (similar to our points about fuzzy definitions and translations above). Abstractions help us pinpoint relevance. In the words of computer scientist Edsger W. Dijkstra, “[…] the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise” (Dijkstra 1972, 864).
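The idea of dialing abstraction up or down can be sketched as a small hierarchy, where raw observations roll up to mid-level categories and then to a top-level label. This is a hypothetical illustration; the observation strings and category names below are invented for the example and are not the project’s actual ontology.

```python
# Hypothetical abstraction hierarchy: each raw observation maps to a
# (mid-level category, top-level label) pair. Names are illustrative only.
HIERARCHY = {
    "answered child's question":   ("caretaking",    "social interaction"),
    "fed the baby":                ("caretaking",    "social interaction"),
    "debugged code with coworker": ("collaboration", "social interaction"),
    "planned trip with partner":   ("collaboration", "social interaction"),
}

def abstract(observation: str, level: int) -> str:
    """Return the observation at the requested abstraction level.
    level 0 = raw data, 1 = mid-level category, 2 = top-level label."""
    if level == 0:
        return observation
    return HIERARCHY[observation][level - 1]

# Level 2 collapses everything to "social interaction" (too abstract to act
# on); level 1 keeps distinctions like caretaking vs collaboration, which is
# closer to the "highest helpful" level described above.
print(abstract("fed the baby", 1))  # caretaking
print(abstract("fed the baby", 2))  # social interaction
```

The point of the sketch is the choice of `level`: the useful setting is the highest one at which the categories still discriminate between situations a designer or engineer would treat differently.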
Know When and How to Shift Between Description and Interpretation
In our project, we constantly discussed toggling between “bottom-up” and “top-down” analysis — essentially a discussion about when to dwell in description and when to dwell in interpretation. It has been vital for us to have a high degree of granularity in the data (knowing that the data itself takes various forms), and to stay close to the data for perhaps longer than on other applied projects before reaching conclusions. But it has also been vital to move towards interpretation perhaps sooner than would feel comfortable in traditional within-discipline approaches (given the quantity and quality of data captured, and the unfamiliarity of some of it, we could have stayed close to the data for a very long time). Moving to interpretation allows us to build initial ontologies and categories for sorting and making sense of the data, tying it to clear implications for what we are trying to inform. What has been most vital is shifting between description and interpretation and back again — once we have some potential interpretations, going back to the descriptions to re-evaluate and refine.
Know When and How to Shift Between Talking About Approaches for How to Do Work and Using Approaches to Do Work
A consequence of having a process that cannot be taken for granted is that the team must make deliberate decisions and reach consensus on what other teams would intuitively dive straight into doing — and this takes time. For example, once a disciplinary team has its data, that team generally knows how to analyze it; this was not the case with us. We spent a considerable amount of time discussing which analysis approaches we would need to answer our project’s questions, debating the pros and cons of each approach (and in these discussions it can be difficult at first to keep value judgements from coming into play, particularly about what data or results should look like). While these discussions were certainly crucial, we had to learn when to stop talking and start doing (or trying to do), in order to achieve tangible results. In such interdisciplinary situations, deciding on an approach can seem scary and wrong — what if the approach doesn’t work and ends up being a waste of time? But when it felt like the team had spent too much time on a “meta” discussion about what to do, we learned to time-box discussions and invest the time the team would have spent debating into testing one or two approaches (even for just a couple of hours), then regrouping. The fruitfulness of an approach can sometimes only be assessed by giving it a try and looking at the results. Instead of resolving methods debates based on “best practice,” interdisciplinary projects may need to resolve them based on “the shoe that fits.”
Seek Out Methodological Bricolage
In all, we have learned that interdisciplinary projects require some discomfort and compromise. Methodologies and approaches require give-and-take — no methodology is going to work as neatly as it would in its home discipline. The orientation of the group should be towards a methodological bricolage of sorts: melding together traditional approaches in untraditional ways to make something new. Each discipline should be constantly looking to the edges of its field (e.g. how can we ask people to score their mental effort in the moment, in a way that captures the reasons behind the scores? How can we break a moment down phenomenologically into a handful of seconds, in collaboration with participants?). This approach ultimately pushes each discipline further, together.
DISCUSSION: CONTEXT-AWARE ASSISTIVE TECHNOLOGY, HYBRID METHODOLOGY, AND THE IMPACT OF ETHNOGRAPHERS
Context-Aware Assistive Technology
Hybrid methodology has proven useful in beginning to address the complex problem of understanding the individual experience of context for personal computing and assistive technology. For instance, the study’s findings indicate that people’s broader goals and their social context and relationships play a critical role in characterizing high mental effort, even more so than environmental and task-based context (Jonker et al. in review). From a practical standpoint, these findings identify the most worthwhile context factors to pursue in future cognitive science and machine learning research. Moreover, the study has helped create new terms (or abstractions) to define different experiences of context, and different components of context that become relevant to an individual. This has challenged the notion that context — in particular, mental effort in context — is only experienced in terms of highs and lows, more or less, good or bad. It has even challenged the assumption that mental effort is a singular construct — it may in fact be the case that there are several “flavors” of mental effort in the real world (Jonker et al. in review). A deeper understanding of context can help inform some of the success criteria of context-aware assistive technology that does not yet exist — assistive technology that perhaps knows not only what to intervene with, when, and how, but also when not to intervene. There are many unanswered questions about how assistive technology can help, rather than hinder, how people want to act upon their world, but hopefully there is now also the beginning of a collaborative way to talk about those questions.
Hybrid methodology presents an opportunity (and challenge) for disciplines to move beyond comfort zones. For anthropologists, it can mean coming up with a theory for understanding very messy and complicated contexts in a way that yields insights relevant to machine learning and cognitive science. For cognitive scientists it can mean exploring how lab studies and field studies build on or supplement one another, and how isolated variables studied in a lab (such as cognitive load during a puzzle challenge) can be studied systematically in everyday contexts alongside a number of other variables (such as emotion or mind-wandering) to further inform an understanding of cognition. For data and computer scientists and engineers it can mean understanding how qualitative data might provide helpful abstractions that can uncover new value propositions for machine learning and feature engineering. Across disciplines, there is an opportunity and challenge to explore how qualitative and quantitative analyses can work together on a shared data set. We hope that future interdisciplinary teams (particularly teams that bring new disciplines into the mix beyond the ones here) develop new methods at the intersection of existing ones, new ways of analyzing data, and new definitions of what constitutes data. We hope these teams develop new types of outcomes that are relevant and impactful in “home disciplines,” and new processes for collaborating to best bring out what is both at the core and the cutting edge of each discipline.
Next Steps for Ethnographers
Ethnography, in theory, holds promise for complementing the approaches of machine learning and cognitive science, and for addressing the challenges inherent in highly controlled lab settings, because it is embedded in the everyday, complex, “messy” reality of human life. Ethnographers are experts in context, abstracting out from thick descriptions of individuals. An algorithmic model, too, needs to be able to generalize to similar contexts and similar groups of users. Ethnography could have the potential to provide useful abstractions, descriptions, and re-descriptions of the data that can inspire machine learning scientists to engineer new features they had not considered before. It could help engineers determine which data and sensors to prioritize from the end user’s perspective. Ethnography could also have the potential both to augment quantitative metrics on cognition (such as mental effort highs and lows) with qualitative descriptors, and to help record such measurements more seamlessly in naturalistic settings. This contribution is deeply valuable because knowing that metrics like mental effort are high or low does not do enough to inform the device of when and how to intervene, or whether it should intervene at all. The device also needs to know why and how mental effort spikes or drops, as a result of an individual’s experience of context. Ethnography can perform the knowledge discovery to scope out a space for future data collection and machine learning.
But ethnography, in practice, has yet to truly integrate into the early development of how these ubiquitous technologies work — both their ability to parse context and their ability to support human cognition. User research and qualitative data are typically part of defining “what we build” while machine learning and cognitive science are typically part of defining “how we build” — and there is little collaboration. This setup works well enough when the machine learning researchers know which data they will need to use for more constrained problems and use cases, but in the enormous complexity of everyday contexts (i.e. “the real world”), ethnographers can generate data, insights, and deliverables that help to define and scope machine learning work and bring qualitative insights early into the shaping of technologies and capabilities that do not exist yet. This requires that ethnographers roll up their sleeves, understand new emerging spaces, dive deeply and openly into new disciplines, and adaptively build a hybrid methodology around emerging research questions. It requires rethinking ethnographic research and outputs, and making these understandable and relevant to collaborator-disciplines. Although it is a challenge, the applied ethnographers who are willing to take it on may find themselves contributing to the definition of the next wave of ubiquitous computing, and in the process pushing the boundaries of ethnography’s methods and applications.
Maria Cury is a manager at ReD Associates. Currently Maria studies technology in daily life to advise on product development, and is interested in advancing applied ethnographic research methods. Maria received an MSc in Visual, Material, and Museum Anthropology from Oxford, and a BA in Anthropology with Visual Arts certificate from Princeton University.
Eryn Whitworth is a post-doctoral research scientist at Facebook Reality Labs. Currently, Eryn is focused on advancing the practice and discourse in product user experience research through the development of new data representations and data sets depicting mundane activity. She received a PhD in information studies from the University of Texas at Austin.
Sebastian Barfort is a data scientist at ReD Associates. Sebastian works with clients in technology, financial services and healthcare to translate ethnographic insights into algorithms. Sebastian received a PhD in behavioral economics from the University of Copenhagen and double master’s degrees from London School of Economics and NYU.
Séréna Bochereau is a technical program manager at Facebook Reality Labs. Séréna has a PhD in Haptics from Sorbonnes Universities and an MEng in Materials Science from University of Oxford.
Jonathan Browder is a research scientist at Facebook Reality Labs. His research interests include multisensory perception, human behavior in augmented and virtual reality, and non-parametric modeling. He received a Ph.D. and MA from Washington University in St Louis and a BA from Washington and Lee University, all in mathematics.
Tanya Jonker is a research scientist at Facebook Reality Labs. Her work focuses on interaction between futuristic technologies and human cognition. She is currently exploring input and interactions with augmented and mixed reality, and how these devices might enable new types of cognitive offloading. Tanya received a PhD in Cognitive Psychology from the University of Waterloo. Tanya.Jonker@oculus.com
Sophie Kim is a UX research scientist in Facebook Reality Labs at Facebook. Sophie focuses on bringing human-centered and experience-driven approaches to future-facing research and development. She has a special interest in augmented reality interactions and how ethnographic research can help inform it. Sophie received a PhD in Human Factors Engineering from Virginia Tech.
Mikkel Krenchel is the Director of ReD Associates North America. He has spent a decade advising leaders across a wide range of Fortune 500 companies on corporate and product strategy, and led ReD’s emerging practice for integrating social and data science.
Morgan Ramsey-Elliot is a partner at ReD Associates, where he works with technology, financial services, and retail companies. He enjoys working at the intersection of “old” and “new,” advising on product strategy for both well-established companies striving to adapt to the digital economy, and digital-native companies growing into maturity.
Friederike Schüür, PhD is a data and machine learning scientist. She leads machine learning efforts at Cityblock Health, serves on the data advisory board of USA for UNHCR, and she is a long-standing data science for social good volunteer with DataKind. She loves data in all its shapes and sizes.
David Zax is a senior consultant at ReD Associates, where he has focused on conducting ethnographic research for tech companies. Prior to ReD he was a freelance journalist contributing to Fast Company, Technology Review, This American Life, and The New York Times.
Joanna Zhang is a senior researcher at ReD Associates. In past lives, she’s waitressed in NYC, coached high school debate, organized public health campaigns, designed for an architecture studio, supported digital strategy at the White House, and dipped potato chips in chocolate by hand at a candy/hotdog shop.
Citation: 2019 EPIC Proceedings pp 254–281, ISSN 1559-8918, https://www.epicpeople.org/epic