The Conceit of Oracles

Cite this article:

Ethnographic Praxis in Industry Conference 2013. © Ethnographic Praxis in Industry Conference, some rights reserved. https://epicpeople.org/the-conceit-of-oracles/

Good morning, I am really excited to be here for my first EPIC conference. There are just so many amazing people in the audience as I look at you, and so many of you I've been following on blogs and Twitter and especially Natalie Hanson's anthrodesign listserv. I can't wait to talk to you all afterwards. Just as a reminder, I don't know if Simon already said it, but if you're tweeting or instagramming, use the conference hashtag, #EPIC2013. If throughout the talk you have any questions, or if anything resonates with you, this is my Twitter and Instagram handle.

For over twelve centuries in Ancient Greece, consulting oracles—people who could predict the future—was a part of everyday Hellenistic life. People—poor, wealthy, slave and free—asked oracles to answer important life questions: should I get married, or will I come back from war alive? There were questions related to business matters: should I invest in this voyage? And questions related to political affairs: should we advance into this territory? Now, the most famous and powerful oracle was the Pythia, the priestess of the Oracle of Delphi at the Temple of Apollo—Apollo being the god of prophecy. Recent research from geologists and other experts has revealed that when giving prophecies the Pythia inhaled enormous amounts of ethylene gas, because it just so happens that the Temple of Apollo was built over two massive earthquake faults, which created fissures that allowed for the release of petrochemical fumes from the deep earth. Essentially, when the Pythia went into prediction mode, she was, you know, tripping out.

Now, the Pythia passed down her oracular predictions, derived from hallucinations, to priests, who then interpreted her chemically induced babble as official words for kings, dignitaries and philosophers. It is pretty crazy that for several centuries this was how big and small decisions were made, but there was a methodology to this entire process. It was not just random babble. There wasn't just the Pythia; there were many priests at the temple who would listen to and interpret her predictions. The priests consulted with the person soliciting the prediction. They learned the details of their situation, which helped them to present the Pythia's predictions in a more relevant context. This process was super tedious, because the Pythia's words were often indecipherable. Oftentimes, people had to wait days before a prediction was ready.

Now, this was their form of research. None of this actually sounds that crazy, you know, in terms of basic data gathering. The priests would actually talk to the people soliciting predictions, and so it was completely logical to them. But then you go back and realize that the Pythia was actually high the entire time.

The reliance on prophecy is not just a Grecian phenomenon. From the oracle bones of Ancient China to the Mayan calendars, oracles have helped people answer the big question of what happens next.

Why—why has humanity been so determined to get a certain answer to this timeless and difficult question? It is because the future is scary. Making decisions without any assurances of the outcome can be frightening. This is just as true today as it was for the Ancient Greeks. Now, while today we might dismiss the prediction of someone tripping on fumes being released from cracks in the earth, we still believe that prophecy is possible. The Oracle of Delphi is still here. We still have the same question of what comes next.

We may feel that our modern capability to predict the future is far superior to the abilities of the Ancient Greek oracles, because we have the advantage of the breakthroughs of the scientific revolution that created the scientific method—a rationalistic set of approaches to investigate the world. One of the most fascinating notions to emerge out of the scientific method is the idea that processes of investigation have to be based on empirical and measurable evidence. Measurability was important because it allowed scientists to replicate and iterate on each other's experiments. Now, the idea that the world could be operationalized into a set of precise quantifiable measurements—the light, the stars, the human body—and could be mediated as a set of numbers was a new and powerful idea. But when you actually look at the history of the methodology and how it was derived, it wasn't so clear cut, because it was not just based on merit. It was based on the politics of the time.

Now, the history of modern measurement starts with this guy here, the Irish-born mathematical physicist and engineer William Thomson, 1st Baron Kelvin—a.k.a. Lord Kelvin. During the age of measurement and the scientific revolution he said that if you can't measure something, then it does not qualify as knowledge. His exact words were: "when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind."

Now, his idea that anything worthy of knowledge is measurable is commonly referred to as the curse of Kelvin. His pronouncement may seem a bit harsh, but he was living in the age of measurement, at a time when a series of discoveries, from non-Euclidean geometry to the Doppler effect, made new forms of measurement possible. One of the key discoveries of his day—electricity—made it possible to measure things through their electrical fields. At the same time, it provided a whole new field of things to measure—electricity itself. Scientists felt compelled, in a sense, and even saw it as their calling, to quantify the world. How do we measure the quantity of a message being sent through a telegram? How do we measure the speed of a motor? How do we measure the length of a telephone call? But just as scientists had to measure the things that electricity made possible, they also had to measure electricity itself.

Now, it is actually worth noting that in this very room that we are in right now, known as the Faraday Theatre, 157 years ago in 1856, Michael Faraday gave a public lecture demonstrating electromagnetic induction as part of the Royal Institution's Christmas lectures. He recreated electricity outside of a lab setting. If you look at this, it is pretty cool actually, right where Simon is sitting. Can someone take a picture right here? I think this is pretty awesome. We need to do this, because who would have thought that some Chinese-American chick would be standing here 157 years later in the same room? Do we have Simon in the background here? I am going to pause for one more moment—from electricity to ethnography. I am standing in this same place and that is pretty amazing.

At the time, Faraday was able to demonstrate electricity outside of a lab setting. The problem, though, was that no one had yet figured out how to harness it for commercial use. Then, thirty years after Faraday's demonstration, Lord Kelvin, the science philosopher of his time, changed things with his mandate of measurement, and so as electricity spread around the world people developed all these electric measuring devices—the Ayrton and Perry ammeter, the Edison voltmeter, the Edison direct-reading meter, the wattmeter, the Thomson mirror galvanometer, the electrodynamometer, the psychometer, and the Ferranti meter.

The reason why there were so many devices is partly because this was all new, and partly because no one could agree for decades on how to measure electricity. The fights about standards ripped apart friendships and played on nationalism. Nikola Tesla's alternating current or Thomas Edison's direct current? Do we use Britain's ohm unit, or Germany's siemens unit? Do we measure the weight of electricity or the light coming out of the bulbs? These were the questions that they were asking back then. This one thing that seems so standard, that we all rely on now, that is like a known-known, was actually very unknown and mysterious.

Now, when I was going through the correspondence and patents of that time, you could really feel that people not only had different ideas about how to measure, but about what the measurement meant to them. It was deeply personal. People talked about their personal lives when they wrote up these mathematical formulas. The debates about electricity were just as heated as open versus closed source—you know, Web browser apps or native apps, single purchase or subscription software, Mac or PC, Android or Apple, HTML5 or Flash. The interpretation of a measurement was just as unclear as the interpretation of a Facebook like, a Twitter mention, or a Google search. The measurement was not always absolute. It was not a given. It had to be worked out over time. With that said, we now take measurement as truth. The contemporary myth that we are living in, where a measurement is absolute, has become just as unquestioned and sacred in modern times as the oracles were in Ancient Greece; it's just that now our myths look different and our methods for answering them have changed.

These late 19th- and early 20th-century debates reflect a period in Western history when science challenged religion as the authoritative source to answer the age-old question of what comes next, and so chanting priests were out and scientists were in. They held the power to produce a convincing image of what happens next. Their predictions were based on measurement. This is the beginning of the error of conflating measurement with knowledge.

The measurement frenzy that electricity started has not ended, for it continues with the computer, which is just electricity in a constrained form of ones and zeros. Now, it is no coincidence that in 1951 the world's first commercially available general-purpose electronic computer, the Ferranti Mark 1—which was humongous—came from one of the largest electricity companies in the U.K., Ferranti Limited, which made electricity meters from the late 19th century up through the 20th century. The Mark 1, developed at Manchester University, was a big deal. This high-speed electronic computer could add columns of numbers together and store them in memory. It is also interesting to note that ASCII text traces its lineage to the development of the Mark 1, and Alan Turing wrote the Mark 1's first operating manual.

The invention of computers like the Mark 1, and others that came after, introduced a new language dealing with information, speed, and most importantly, predictability. Even though computers were incredibly expensive and only a few institutions in the world could afford them, broadcast television played a pivotal role in introducing the computer into the public arena as a prediction machine. It all started on the night of the 1952 U.S. presidential election race between Adlai Stevenson and Dwight Eisenhower, more popularly known as Ike. For the first time on this night, a television studio used a computer to do a live demo of its predictive powers. On the election night show, CBS anchor Walter Cronkite asked the 16,000-pound Remington Rand UNIVAC for calculations.

As the story goes, UNIVAC predicted a landslide victory for Eisenhower. Initially, people thought that UNIVAC's numbers were totally absurd, because public polls predicted a tight race and not a landslide. However, UNIVAC was right and Ike won the election. With only a one-percent (1%) sample of the voting population, UNIVAC's prediction was correct to within one percent. From that point on, the electronic brain, as it was commonly referred to in the media, garnered superhero status.

UNIVAC ushered computers into public discourse, and in Postwar America a computer that could predict the future was very appealing to a country coming out of a world war, in the midst of the McCarthyism scare, and worried that Communism might destroy Democracy. The country was fighting the Korean War, and several countries were already experimenting with nuclear bombs. The Crusade for Freedom, a U.S. government propaganda campaign against Communism, was already underway. The U.S. government urged people to send freedom-grams, organize rallies and donate truth dollars to fund the crusade, which was really a CIA-backed campaign to fight Communism around the world. For many people, 1952 was a very scary time.

UNIVAC's very existence was born out of that fear. UNIVAC's original purpose was to predict nuclear fallout, but engineers quickly saw that it could be useful for seeing other futures as well, such as the weather. [SHOW VIDEO: UNIVAC. Bigger and faster and more accurate weather predictions than were ever possible before.] From weather to presidential elections, the electronic brain seemed to offer something that no one and nothing could compete with—correct predictions of the future.

UNIVAC told institutions, from businesses to governments, over and over again that this was good for them. [SHOW VIDEO: UNIVAC is saving time and increasing efficiency for science, business, industry and governments. Of course, that is the business of Remington Rand.] These commercials, which played everywhere, helped to popularize the computer as a hero, and then pop culture solidified UNIVAC's hero status. The UNIVAC held a starring role in a 1956 Bugs Bunny cartoon where Wile E. Coyote builds a UNIVAC electronic brain to come up with an answer on how to capture Bugs Bunny. [SHOW VIDEO] UNIVAC even showed up on the front cover of a 1961 issue of Superman, where Lois Lane is presented with the perfect husband chosen by the matchmaker UNIVAC. The fact that UNIVAC had made its way into pop culture reflects a profound public awareness of its abilities.

This is before any personal computers were even on the market; you have to keep in mind that this is the 1950s and '60s, when popular ideas and images of the computer as a prediction and measurement machine were already well formed. In the same way that electricity made new forms of measurement possible and produced new things to measure, the computer did the same thing. All of a sudden people started to think, hey, we could measure everything with computer networks. What we are talking about is a sudden curve in measurability. We went from thousands and thousands of years of not being able to measure much of anything to, within a couple hundred years, being able to measure sound, light, energy, weather, and information. This is the beginning of the age of data. Data, the rogue offspring of Kelvin, is his dream, but it is also his curse.

We agree that we know more about our world than ever before through measurement. But here is the thing: more knowledge does not enable us to predict the future any better than the Ancient Greeks could. I know that is a pretty bold statement. How could this be? We are in this situation because knowledge is relative. The more knowledge we gather, the closer we force the planning horizon, making it even more difficult to predict the future.

Now, according to quantum computing physicist David Deutsch, this approaching planning horizon is the precise conundrum we are living in. Deutsch argues that the biggest issue of our time is not the inability to predict the future, but rather our inability to come to terms with an unpredictable future. The problem is not whether we know how much unearthed oil is left or how long a currency will be stable. The problem is that we believe we know, because the computers have given us the answers—algorithms, forecasts, predictions—with a confidence level that enables us to make decisions and enact policies that shape entire markets and lives. We not only believe that computers can predict the future; even more so, we believe that their predictions suffice as explanatory knowledge, and the most authoritative kind.

Deutsch, coming from the hard sciences, thinks that our extreme belief in data-driven computers to tell us about our future devalues explanatory knowledge and theories—the stuff that scientists produce. Now, the computer is also only useful to the extent that we know what questions to ask it. As Deutsch said, "prediction—even perfect, universal prediction—is simply no substitute for explanation."

Let us just take an example. Let’s say that we had a perfect oracle that told us, “Look, in fifty years gas-powered cars will no longer be on the market.” But we would still need to know the answers to a number of questions. How will we travel without cars? Will cities still be around? What happened to all of that shale gas that is being discovered? Are we using biofuels? Are we using new technology? Will our civilization still be around, and who is supposed to do what? What will we do when it happens? What does this all mean? The prediction of a carless future is just data. We would still need explanations to figure out what to do.

An oracle means "to speak," and right now those who speak the language of computers—binary—speak the language of truth, which in one way or another is a language used to predict the future. The myth that we are living in is one that tells us that computers are oracles. Along the way, from the Ferranti Mark 1 to the UNIVAC to deep data-gathering systems, we have made a mistake in conflating computational data with explanatory knowledge. The metaphor of the computer as a machine that can measure and help us understand the world is wrong. It is good at computing things, but actually pretty bad at explaining things. Many organizations are dealing with closer planning horizons by investing in more data, but not necessarily more knowledge.

Companies like Variant, NORIS, NICE, Palantir and more offer data-gathering and surveillance software to organizations, businesses, and government agencies, promising them that more big data leads to more informed actions—or, according to NICE, "the right action." But it was this kind of data software that led to members of the U.S. Joint Terrorism Task Force (JTTF) showing up at Michele Catalano's home after she and her husband separately Googled the terms "pressure cooker," "bombs," and "backpack" around the time of the Boston bombing. Now, the U.S. NSA did not have the context in which to understand the search terms. Michele wanted to buy a pressure cooker to make food, not bombs. Her husband was looking for a backpack for their son, not for a terrorist plot. The problem was not just in misinterpreting the words they Googled, like "backpack" and "pressure cooker"; the problem was in treating the terms as data sources rather than considering the human context of the search inquiries. From the perspective of the government's big data machine, searches are no longer human questions; they are merely a list of keywords. Now, we can see this story as an example of government misinterpreting human data, but it also reflects a larger trend within institutions—private and public—that are acting upon data without context, leading to results that impact real human life.

Now, the mistake of treating computational data as knowledge has led to several errors within institutions in the public and private sector—and many of us work in these institutions, so we are familiar with them. First, organizations are acting on numbers instead of acting on understanding. Organizations that rely on the numbers alone can lose their vision.

After 131 years of being the dominant player in the photography market, Kodak filed for Chapter 11 bankruptcy because they failed to make the transition from analog to digital photography. Now, their data wasn't necessarily incorrect. It told them that digital was going to be a big thing, and so they invested billions into developing new digital services and products. They actually had some of the first digital cameras on the market. But in their version of the future of photography, everything would be more or less the same, except that the camera would be digital. People would still need photo paper. They would still share things by printing pictures out on paper. They didn't consider that people might not always want to print out a photograph—that if the Internet enables you to share photos over long distances, you might not want to print a photograph at all.

Now, these are things that seem really obvious to us, but back then these are things you would only have been able to glimpse if you took the time to understand everything else that was going on in people's lives, as opposed to just seeing the camera as an isolated piece of equipment within existing production chains. This is the work of ethnography: to give the data context, and to correct a vision that over-relies on numbers.

Now, some organizations believe that numbers are a form of knowledge far superior to stories. This is short-sighted. All numbers need interpretation and analysis, and if you want them to be understandable and actionable, they need stories. Even scientists need stories.

Now, primatologist Frans de Waal argues that a sense of fairness, the groundwork for morality, can be seen in our primate relatives—monkeys. He says that he shows these graphs to scientists all the time. I don't know if you can see some of these numbers, but they don't really understand what the graphs mean until they see this video that I am going to show you. I want you to watch what happens when the monkey on the left sees its partner get a grape as a reward instead of a cucumber. Grapes are like crack in the monkey world. [SHOW VIDEO]

You could feel it—you empathize with the monkey on the left, right? It's really not cool when you see someone else getting something better than you. De Waal's fellow primatologists aren't stupid. They looked at the graphs before and they could see the patterns in the data, but they didn't necessarily understand, or empathize, or really get what the patterns meant until they saw this video. The story of de Waal and the monkeys shows that if you do not experience something directly, you may not find the real meaning in the data; yet there are many organizations that believe they can understand the world of their users and markets without living in it. This hasn't been true in any place or time, so why would it be truer now? It is amazing how many decisions are made in organizations based on data, without any context or understanding of what it actually means for stakeholders.

Now, a similar thing is happening with financial data, which is increasingly being relied upon to provide direction for consumer research. As a result, even qualitative research has started to look a lot more quantitative, relying on focus groups and surveys—two methods that do not produce the kind of data that leads to great stories with valuable, actionable insights. This means that qualitative research in some organizations is compounding the kind of thinking that created the problem in the first place. The perceived inferior status of stories is reinforced through bad qualitative research. Now, there are many firms and individuals out there doing great work—many of you are in this room, and I would say all of you, if you're here—but we are in the trenches. We have to do the harder work of convincing institutions to invest in our services over qualitative firms that promise quick results and quick data at a very, very low cost. Just like any other industry, there is a wide range of quality in qualitative research, and some clients are starting to question the inconsistency of the research that they are getting.

One prominent case that a lot of us are following is Unilever, a consumer goods company that makes everything from toothpaste to perfume. They have openly declared that they are not getting consistent qualitative research, and they are doing something about it. They have created a qualitative researcher accreditation program to accredit researchers as either a research lead or a moderator. Unilever has identified eighty characteristics of what they are looking for in a quality candidate, including how the potential candidate writes up a mock brief and runs a focus group. Unilever has said that they will only work with accredited research leads or moderators. But Unilever's approach is certainly flawed. They are using measurements to solve a problem that was created through measurement. They can't even see that their taxonomy of moderator vs. research lead already reflects just how deeply they have been affected by the curse of Kelvin.

Unilever is not alone. Their frustration with inconsistent and surface-level insight work is shared throughout the industry, but creating an accreditation system for researchers would be similar to the Tate or the Metropolitan Museum of Art creating a Likert scale to select their artists, or Apple saying that they are only going to hire accredited designers. Unilever's positivistic notion of qualitative work has veered so far away from people that they have forgotten that gathering stories—the work of understanding humans as a creative process—cannot be easily measured by watching how a potential candidate conducts a focus group or writes a mock brief.

In the work that ethnographers do, understanding meaning is creative work, and creative work is simply difficult to standardize and to scale. Across the board we have mistaken data for knowledge. Related to this, we have mistaken the way that computers work for the way that the world works. This is a terrible mistake. In a way, the world really does work the way that computers function—not in the computing part, but rather the communicating part. Computers have added something new: the network. The network has, of course, changed how we derive knowledge, how we communicate, and who we are.

Social media and digital devices are reshaping the way that we interact, but not in the way that data predicts. Knowing how many Facebook smileys people use in reference to your brand page tells you nothing about what your brand means to them. Knowing people's Fitbit Flex, Nike+ FuelBand, or Jawbone UP stats doesn't tell you how to build the next health app or how to solve the major health problems that we're facing. Knowing that X number of stakeholders are listening to music on Spotify does not tell you about the future of music, or the material context of the interaction. Where are they sitting? Whom are they sitting with? How are they holding their devices? What does that mean for them? What are they listening to? Whom are they listening with?

Even if you are not working in the tech industry—let's say you are in health, design, or policy, and you work with NGOs or in consumer goods—it is increasingly the case that knowing how your stakeholders use social media platforms is just as important, if not more important, than understanding which country or neighborhood your users live in. Now, a large part of my work is advising companies that do not come from the tech industry on how to understand opportunities in the digital space. I wanted to share with you one way that I frame the importance of deep contextual knowledge for organizations that are trying to build relationships with younger and/or more tech-savvy stakeholders.

Now, one of the things I am seeing is a fundamental shift in how people engage in identity- and community-making with social media. Prior to social media, most youth—most people—experienced their coming-of-age period within personal social circles. There was a clear boundary between strangers and known people. It was very clear who was in your circle and who was out of it. What I am consistently seeing in my research around the world is that youth who have come of age on the Internet are using anonymous identities to express their emotions within massive social networks of unknown people—essentially strangers. They have entire secret lives that they keep separate from the people they know. When I speak to youth like Amanda, she tells me that she spends most of her time on Tumblr, because that is where she doesn't feel weird for being a nerd who likes manga. Youth like [Lilio] tell me they prefer to spend time around people they don't know; he is on a gay Chinese social network, because that is where he feels accepted in a society where being gay is stigmatized. Teens like Ernesto tell me that World of Warcraft (WoW) is a great game, but what he really enjoys are the friendships that have emerged from his guild. Now, I have met hundreds of youth like Amanda, Lilio and Ernesto all around the world who spend their time online with strangers under anonymous conditions.

This everyday form of interaction with strangers is what I call the elastic self. It is the feeling that one's identity is malleable—the act of trying on multiple and different identities that are beyond the realm of a prescribed self. A prescribed self is composed of identities that are dictated by one's existing social structural categories: your gender, your nationality, your family. It is pretty much all the stuff that you did not get to choose. You didn't choose to be born into this skin color or this gender. These are identities that we were born into and structured into. Now, the elastic self framework helps us to understand all of these fascinating and yet foreign behaviors on the Internet—like why Amanda, Lilio and Ernesto have so many anonymous accounts, accounts that you may never find out about, because to tell you or anyone else would reveal an aspect of themselves that they want to keep secret.

It helps to explain why teens have adopted short loops of commonly experienced pop culture as a form of communication. When they need to express feeling overwhelmed, sad, angry, excited, very excited, or just like they want to give up, it is easier for them to post an animated GIF, because you are going to get a much more common reaction from that than from saying, oh, I'm feeling sad. It explains why teens like Amanda love the anonymity and the flexibility of identity inherent in social blogging services like Tumblr. Being able to express herself without the pressures of her proximate communities, she feels closer to people on Tumblr than to the people she knows. It explains why people poke fun at the concept of circles on the Google+ platform, because it doesn't work for how users actually interact with the people that they know.

Understanding these different but related behaviors means that we can no longer lump all social media platforms together, as if they are one homogeneous set of apps. I don’t think we should be saying that social media does this or social media does that, because depending on the context they do very different things.

It is much more accurate to talk about social media as platforms that are dominant in either formal or informal modes of interaction. Formal modes of interaction are when we engage with people that we already know, like on Facebook. Informal modes of interaction are when we engage with people that we don't personally know, like on Twitter or Tumblr. Now, I am not talking about abstract theory. I am talking about real features that can be designed into a social platform to encourage either an informal or a formal mode of interaction.

Now, I usually discuss a whole set of design principles that tilt a platform toward being dominant in the formal or informal mode, but today I am just going to share a quick example with something that we are all very familiar with—usernames. Usernames are the names that we use to sign up for social media accounts. Now, on Facebook we are expected to create one account using our real names and our real identities—real information. Facebook has an entire page that explains what counts as a real name; for example, legitimate names cannot include symbols, numbers, unusual capitalization, multiple languages, nicknames, titles of any kind, and the list goes on. By the way, you have to use the same name as on your credit card or identity card. If your name is rejected, Facebook even has a page that explains why. This page is actually necessary, because Facebook prevents people from signing up for new accounts quite often—leading some people to gripe that Facebook hates their name. Not every social media platform is like Facebook.
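To make the formal-mode design concrete, here is a purely illustrative sketch of the kind of real-name policy described above as a validation function. The specific rules, the function name, and the list of titles are my assumptions for illustration—this is not Facebook's actual implementation.

```python
import re

# Illustrative assumption: a small set of honorifics a "no titles" rule might block.
TITLES = {"dr", "mr", "mrs", "ms", "prof", "rev"}

def looks_like_real_name(name: str) -> bool:
    """Toy check for a Facebook-style real-name policy (assumed rules)."""
    parts = name.split()
    if len(parts) < 2:                              # expect at least first + last name
        return False
    for part in parts:
        if part.lower().rstrip(".") in TITLES:      # no titles of any kind
            return False
        if not re.fullmatch(r"[A-Za-z'-]+", part):  # no symbols or numbers
            return False
        if part[0].islower():                       # no unusual capitalization...
            return False
        if len(part) > 1 and part.isupper():        # ...including ALL CAPS
            return False
    return True
```

The point of the sketch is the design stance, not the rules themselves: a formal-mode platform actively narrows the space of acceptable identities, which is exactly the opposite of what we will see on Tumblr.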

Tumblr, now the most popular blogging platform, operates very differently. On Tumblr users have a greater degree of malleability with their identities. Tumblr does not even care whether people are using their real names or false identities, because all usernames are within the realm of possibility. At any point in time people can change their Tumblr URL: the subdomain, the part that comes right before the tumblr.com domain. This happens all the time. This is just one of my couple hundred Tumblrs, and I could change my primary one to another one if I no longer wanted to be Tricia Wang today. Now, youth like Amanda do this because their ideas about themselves and the world are constantly changing.

Youth identities are not static. They want the flexibility to have that reflected in their profiles, but this can make them very difficult to follow. If you are following them through an RSS feed, the feed changes when they change their username. Now, sometimes youth just don't want to be discoverable, and they want to resist search engine optimization. Often, they want to get away from people that they know, because they want to express emotions that they just aren't ready to share with friends, family, and colleagues.

The malleability in identity makes Tumblr dominant in the informal mode. Now, if you look around the Web, there are many other platforms like Tumblr that are dominant in the informal mode—from Reddit to the millions of message boards and digital games on consoles, computers, and devices. These are the spaces where a wider spectrum of identities emerges. This is where sociality and identity veer towards the exploratory, performative, and even fantastical, because this is where people tend to socialize with people that they don't know. These are the exciting spaces on the Web. This is where we find emergent behavior and very unexpected forms of community-making. Now, the elastic self flourishes in the informal mode, because in the presence of strangers, individuals feel more liberated to try on different identities without the pressure of committing to just one identity.

Now, when we look at platforms that are dominant in the formal mode, like Facebook or even LinkedIn, identities are prescriptive, singular, and stable. This is where social interaction tends to mirror or extend existing interactions, because people are socializing with people that they already know. When people are surrounded by friends, family, and colleagues, whether offline or online, they can feel socially restricted at times from exploring or expressing anything that may counter their prescribed selves—or dominant norms.

When Lilio wanted to explore what being gay meant, he did not want to do this around people he knew, because it didn't feel safe at the time. He needed to be in places where he wouldn't feel ashamed. He chose to experiment with profiles and interactions that gave him social distance from people he knew. So much of what people do with computers and social media in the informal mode is a secret. It is a secret because people have pride, shame, hope, and fear. They have emotions that they want to keep from people that they know. Now, you may not personally relate to any of the examples I have given today, but we all engage in the elastic self. We all have secrets and moments that we just don't want to share with the people we know. Things happen to us that we don't want broadcast. We erase our Web browsing history, or we think twice about posting something to Facebook. And then sometimes it is just kind of nice to talk to a stranger on a plane or in a bar. This has always been the case, even before the Internet. It is just that social media platforms that are dominant in the informal mode have made these anonymous interactions for the elastic self much more flexible and accessible.

Now, this division of platforms between the informal and formal is not a hard and fast thing. It is not universal and it is not static. Users engage in emergent behaviors that subvert platforms all the time. Most Facebook users are like Ernesto: they use it for formal modes of interaction, to talk with people that they know. But then users like Amanda create fake accounts to find strangers on Facebook. Platforms shift into one mode or another depending upon context. That is why people and products misinterpret the meanings of people's social lives, codifying them in ways that force people into static relationships that just do not reflect the fluid nature of actual relationships. For any UX engineer or designer to think that they can design social experiences without deep social understanding is absolutely absurd. It would be like trying to study fairness among monkeys through graphs alone.

Now, if I were looking for any of this elastic self behavior in the data, I would never find it. What I am talking about is resistant to data capture to begin with. If you look at the ways that people actually interact with computers, you realize that the computer is not the tool to see the field. It is the field. The fascinating part about computers is how people use them in ways that are totally mysterious and unpredictable. The only way that we would know is if we see users as humans, and not data points.

So what about us? The work that we do as ethnographers is more needed than ever, because the world has mistaken computers as merely machines for producing measurements, while missing out on the fact that they have also been producing genuine social spaces that can only be understood through experiences and stories. We have a few challenges to address in our field. The first issue is that ethnographic work is largely invisible, like really good and deep ethnographic work. We don't get awards like designers do, because our work is not front-facing. There is no trophy for an insight or a co-creation workshop. It would be weird if there were! This is because we produce analysis, action plans, deliverables, and processes for leaders, programmers, designers, managers, sales and marketing, and R&D to act upon. Much of the time the value of our work resides in what doesn't happen.

We have to find ways to make the invisible visible, and that includes our own work. You can do this in many ways. danah boyd speaks across multiple disciplines to tech organizations about the importance of understanding users as humans. One thing that I have been doing is live fieldnoting, using social media to publish field notes when I am doing independent fieldwork so that my work is visible to the general public in ways that are accessible, searchable, and linkable. The team at Ethnography Matters, of which I am a part, curates open content around innovative uses of ethnography. We make great efforts to showcase the work of people who don't necessarily identify as ethnographers. We feature many great folks, from 2012 EPIC co-chair John Payne to folks who are here today like Joan Braderman and Sam Ladner (I don't know where you are). These are just a few examples of some of the new ethnography that I have been seeing. The combination of all of these efforts addresses the second challenge in our field: reminding others and ourselves that storytelling, a fundamental aspect of our work, matters more now than ever.

In the face of big data, our work is commonly referred to as small data. Now, when I started asking people in our field why people call it small data, nobody could give me an answer. Why is it called small data? Perhaps it was chosen because it's the opposite of the word big. I am not really sure. I think that we need to be thoughtful and strategic about the words that we use to describe the work that we do to people who may not be familiar with it, or who just do not know its value. To me, small does not capture the kind of data that we gather, and so that is why I have been calling it thick data. When I talk to companies, I explain to them that big data only gives data points. It is flat; it is 2D. You still need experts to explain the data, which can require gathering thick ethnographic insights. I am not saying that we don't need big data or computers. On the contrary, ethnographers must work with more integrative and agile research models that bring out the best of both big and thick data. Big data presents an opportunity for ethnographers to showcase the value of our work. Now that companies have massively increased their spending on big data, they actually need thick data, because big data produces new questions. Some can be answered through more big data, but some questions can only be answered through thick data approaches.

The new ethnography is not about doing qualitative work while sitting in a silo, or in a separate department, or in some of the side cubicles. The new ethnography is about sitting at the table with the big data folks, so that we can be right there with all of those numbers advocating for humans. This is how we can help organizations lift the curse of Kelvin, by introducing new models and approaches that restore an image of what the world really looks like and how it really works. Perhaps in 50 to 100 years, data in all its forms will be just as normalized as electricity. Before this happens, we have a chance to shape the way that people think about data. This is absolutely critical, but it is not magic. It takes confidence in the belief that we can learn from the world in multiple ways, from observing massive groups to analyzing microinteractions.

Institutions have lost this sense of confidence, because over the last few decades after UNIVAC came countless computers and companies that have drilled in the same message: computers, algorithms, software, statistics. All of this will make you a more efficient and/or profitable organization. And people believed it. They started thinking that we don't have to talk to our stakeholders. That is a lot of work! We can just look through their data, scrub it, standardize it, normalize it, run it through some computers, and we will know what is going on. Organizations have given more and more power to quantifiable approaches. As they did this, they also lost confidence in human approaches. People are actually scared of talking to humans. They are not just hesitant; they are actually scared, and they don't know what to do. They do not believe that they can actually talk to people and get good stories from them.

Now, fortunately all of us in this room already have this confidence. We just have to continue following in the footsteps of those who have led the way, from Lucy Suchman, Elizabeth Churchill, Jane Fulton Suri, Judy Jordan, Tracey Lovejoy, Ken Anderson, and Nina Wakeford—and many more.

We are gathered here today to continue their efforts in asking the hard questions about where our field is going, and how to move forward strategically. The organizers of EPIC have structured our three days together to tackle these questions. Coming up after the break is Rogerio Abreu de Paula's session on big data. Later this afternoon is Stefana Broadbent's session on how ethnographers are now responsible for taking part in massive organizational transformations that are super complex. Tomorrow Hiroshi Tamura will start our day with a session on new practices in our field, and Martin Ortlieb will end the day with a paper session on how ethnographers interface with multiple stakeholders inside organizations. And then on Wednesday, Martha Cotton's session will reflect on the past, present, and future of the EPIC community. In between all of these sessions we will have inspiring keynotes from David Howes, Daniel Miller, and Genevieve Bell. And to top it off, we have Pecha Kucha, salons, artifacts, the town hall on big data, and workshops that will dig even deeper.

For these three days, we get to do something magical, something that we do not usually get to do. We get to set aside the prescribed selves that we present to clients, to students, to employees, and to employers. We can speak very frankly, ethnographer-to-ethnographer, about where our field is going. No computer can predict where we are headed, and so who better to figure out the future of ethnography in organizations than those whose job is to identify the unknown? Thank you so much for listening. Let's have some fun over these next three days. Thank you. [APPLAUSE]

Q. Hi, Tracey Lovejoy from Microsoft. It is very nice to meet you. I would love to hear your personal opinion on how you think this group potentially should respond to programs that Unilever or maybe others may be putting together. In your call to action, what do you think that we as a body should be working towards?

[Tricia Wang]: Well, interestingly, when I started Googling and looking for responses as to how other ethnographers felt, and whether there were any kind of professional responses, I actually could not find a lot. I found some from qualitative research associations, you know, other kinds of associations that I had never really heard of. I think that we should talk about it, and then we need to write more about it. I don't know how to get more people to collaborate, but as individuals we should be writing about it. We should be talking to our companies or our clients or employers about this. This conversation is not public enough. The scary part is that I think a lot of organizations are looking at Unilever and saying, "Hey, should we be doing that?" This is actually a debate within EPIC, and I know on anthrodesign—is Sam Ladner actually here right now?

[Moderator]: At the back.

[Tricia Wang]: Oh, okay! Sam, I remember on the anthrodesign listserv, this conversation already started happening around, you know, should we professionalize or not. I know that you are a big part of that conversation, which I think we should talk about afterwards. I think that this actually hits on a lot of big concerns of what do we do about our industry where there is such a wide range of skill sets and then how do we even define quality.

The issue is that our work is creative. I have really big concerns about trying to standardize creative work—unless you don't think that our work is creative, in which case it would make sense. I think that what we do is just so unique; it is almost like artistic work. When I see other ethnographers going into the field, I feel like I am watching an artist in action in the way that they get to an insight. That is why it is so rare, and companies should know that. When you get a good ethnographer or a good research firm, don't screw up the relationship. It is actually very hard. It is very hard for people to stay inspired and to want to do good work for the client. I think that we can talk very openly about how we train clients or organizations like Unilever to understand that this may not make sense for them, and that there are a lot of other people who have other ideas.

[Moderator]: Let’s take one more, and Tricia I’m sure will hang in what last year we called the mosh pit and take questions one-to-one.

Q. The world that you have described maps perfectly onto the world in which my 19-year-old daughter seems to operate.

[Tricia Wang]: Ah, perfect!

Q. Actually, she operates so much in that world that I think it has stimulated her move to become a transgender person—so malleable is that world. I was riding on the train yesterday with a woman who was talking about how she was struggling to move into the world of Microsoft Word from the world of WordPerfect. And then I think about the people I work with in rural Uganda who can barely get their cell phones to work. I am interested in your thoughts about a world in which there is a massive and growing divide between the people who inhabit a multiplexed world of various social media—blah, blah, blah—and the people who either are not interested in that world or do not find it accessible to them. How do we operate in a world like that creatively and effectively to tell good stories?

[Tricia Wang]: Am I understanding your question correctly: how do we work where there is a big divide in technology access, and with people who may not have an interest in understanding that, or who are not even engaged? Well, partly I think it is our job to do a better job of getting people engaged. Frankly, as a field, I don't think we ethnographers are front-facing enough.

I think that we need to write more publicly about our work. I think that is another way to engage people with some of the stuff that I was just talking about. We do amazing stuff, but so much of it is hidden. I think that it is difficult, because a lot of our work is under NDA and so we can't talk about it. I actually think that for us to be rejuvenated, it is just like artists, where you cannot always make artwork to be sold. You also have to sometimes just do art for art's sake. I think as ethnographers, we also need to do fieldwork just because it is fun and so we can talk about it. I don't know. I would feel kind of dead if I couldn't just do fieldwork on my own, and so that is one way that I try, and would suggest, to engage with people—to speak in their language.

Also, a lot of the work I do is actually with ICT, in China, in Mexico, and in Africa. A continual issue is the divide between people who have access to technology and people who do not. I think that we have a very important job to do. As companies expand to these markets, they see them just as markets and datasets, and they kind of have these assumptions that what works in the West will also work out there. They do not always consult people and actually think about the cultural meaning that is needed if this is to make sense. How do we take into account local conditions? How do we take into account economic conditions? I think that we have a big job to do there, which is to make sure that companies do not just reproduce Western ways of thinking and implement them in other parts of the world.