It is awfully nice not to have to invent a basic tool over and over again. For ethnographers, coding and categorization are work that has to happen whether you are studying housework or neurosurgery, with novices or experts, in an exotic location or in suburban Ohio (no offense to my friends and family in Ohio). A coding structure is one of the most basic and useful tools you can have.
Devising one that works with your data can be a great deal of work—finding and maintaining the right level of abstraction, setting parameters that make meaningful, consistent distinctions, all while balancing specificity to the immediate data and the purpose of the inquiry (is it deep cleaning or spot cleaning? open surgery or laparoscopic?) against the ability to generalize categories across investigations, to test or refute interpretations in independent engagements. All the sorts of work that support the value of any repeatable methodology. Not something one minds doing in the course of an investigation that might stretch six months or a year or longer, the sorts of timeframes one comes to think in, in academia. Grant durations. But at the generally quicker pace of applied work, starting with a workable, high-level scheme that can be adapted instead of invented is probably a better use of time. I’m very fond of the scheme represented in the time-honored mnemonic AEIOU.
Over the years, I’ve probably read or heard of a couple of dozen variations of categorizations using or based on this mnemonic, and at least as many stories of where it came from. More than a few people know that I had something to do with it, and have asked me to write down how it started, or even sit down for an interview and let them write it. I always think that I will, later. This seems to be a perfect later.
Recounting as best I can, and acknowledging that some details will be fuzzy or wrong, I apologize in advance and ask anyone who knows more or better—as long as you were actually there—to chime in. I’m pretty sure this story takes place in 1994. I know that Doblin Group had recently moved into new offices at 35 E. Wacker in Chicago’s Loop, and that we were at the time dealing with an extremely large amount of data from our studies of McDonald’s restaurants. Mostly videos. Hours and hours and hours of fixed camera shots of people doing lots of what happens in a quick-serve restaurant, on both sides of the counter.
There is an actual moment and an accompanying setting almost perfectly clear in my memory: At least three of us, but I think perhaps as many as five, were sitting around the carts that we made out of metro shelving and those big 6-inch, hard rubber wheels, in the back third of the floor, which we had not yet built out. Ilya Prokopoff, who would soon move to IDEO in San Francisco, had created working spaces out of the cavernous, unfinished space with what he dubbed and the rest of us gleefully called “panty walls”: bolts of white spandex-blend material with big grommets punched into the corners and suspended from random bolts, corners, and rafters with bungee cords. The man is seriously inventive. They worked. And it was fun to say “panty wall.”
Julie Bellanca was, at the time, freshly conscripted straight out of college to help us develop software to wrest some sort of persistent order from the video data. She was writing in that incredibly powerful tool of the day—wait for it—SuperCard. I was responsible for the research program. Which meant I was also in charge of saying, “We don’t know, yet” to an increasingly agitated president of a company who had backed the still-novel approach we were using and was waiting for us to get something out of the fieldwork. I know for a fact that Ilya, Julie, and I were there, and I think, because of timing and a murky memory, that Stefanie Norvaisas and Katie Boyd McGlenn were there that day, too.
We were trying to articulate to Julie what the software needed to keep track of besides time. Time was the given, the rail along which everything else ran. We’d mark an “in point” and an “out point” in the tape’s timecode, which was synchronized to all the other cameras’ timecodes and registered against clock time. Each marked segment constituted what we called a “bite,” borrowing from journalism jargon.
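For readers who like to see the idea concretely, a "bite" can be sketched as a small data record: a span of synchronized timecode plus whatever category labels get applied to it. This is a modern, hypothetical illustration—the class and field names are mine, not the data model of the actual tool we built.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a coded video segment ("bite").
# Names are illustrative, not drawn from the original software.
@dataclass
class Bite:
    camera_id: str
    in_point: float   # timecode in seconds, registered against clock time
    out_point: float  # must come after the in point
    codes: list = field(default_factory=list)  # category labels for the segment

    @property
    def duration(self) -> float:
        # Time is the rail everything runs along; a bite is a span on it.
        return self.out_point - self.in_point

# A marked segment from one of several synchronized cameras:
counter = Bite("cam-3", in_point=120.0, out_point=185.5, codes=["ordering"])
print(counter.duration)  # 65.5
```

Because every camera's timecode was registered to clock time, bites from different cameras could be lined up against one another on the same timeline.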
We’d been trying to list all the things we wanted to include in the coding by working in detail through random half-hours of footage pulled from different cameras we’d installed in the stores, writing lists and grouping and regrouping them on the whiteboard walls that were the back of these very cool illuminated walls that Doblin Group had in that space. I was trying to sort them into useful categories, and was at that time fascinated with making things MECE (mutually exclusive and collectively exhaustive), which I’d learned through a visiting McKinsey guy and which, I thought, sounded very grown up.
Getting from the narrow, close-to-the-data level of immediate observation up to the useful level of clear, persistent phenomena that make sense to someone besides the observer—what Danny Miller wonderfully describes as bringing particularity and universality “back into conversation with and acknowledgement of each other” (Miller, 2010, 7)—has always been the point of doing applied (and a good deal of other) ethnographic work.
The actual getting from one to the other requires language tools that can be marvelously refined and evolved—think of the power of terminology in linguistics or kinship study. But labels can also be little more than jargon, with little shared understanding and a real tendency to drift rather shockingly across usages. We knew we didn’t yet have morphemes and phonemes, or affinity and consanguinity, but those were some of the analogous constructs we aspired to.
Finally, clustering and labeling from the bottom up got us to a short set that we had, in iterations, come to label Activities, Artifacts, Environments, Users, and Interactions. As a categorization scheme for the things we were looking at, it was ok. We looked at the clusters and labels for a minute, and then Ilya said, “If you change ‘Artifacts’ to ‘Objects’ we’d have the vowels.”
That was it. It worked. It stuck. Activities, Environments, Interactions, Objects, and Users. Memorable even if it is only 80% of the way to being universal. I’ve always been pretty convinced of the usefulness of an 80/20 threshold. The rest, as the saying goes, is history. We incorporated it into the tool; we explained it to the folks at McDonald’s (and other Doblin and E-Lab clients); I wrote an article in the now-vanished ACD journal. And it just got used because it was useful, no more. Some folks have run off with it and tried to call it a method, rather than just accept it as a simple categorizing heuristic, but that’s not really worth arguing about. The same is probably true for the battles over adding ‘T’ for time, or taking ‘I’ out because “everything we do is an interaction,” or any of many variations that are out there: adapting and varying, not owning it, is kind of the point.
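If it helps to see the scheme as the simple categorizing heuristic it is, the five categories amount to a small, fixed tagging vocabulary, where a single observed moment can carry several tags at once. The sketch below is my own illustration, not part of any tool described here.

```python
from enum import Enum

# The AEIOU categories as a tagging vocabulary (illustrative sketch).
class AEIOU(Enum):
    ACTIVITY = "Activities"
    ENVIRONMENT = "Environments"
    INTERACTION = "Interactions"
    OBJECT = "Objects"
    USER = "Users"

# One observed moment, coded under several categories at once:
observation = {
    "note": "crew member hands a tray across the counter",
    "tags": {AEIOU.ACTIVITY, AEIOU.INTERACTION, AEIOU.OBJECT, AEIOU.USER},
}
print(sorted(tag.value for tag in observation["tags"]))
```

The point of a heuristic like this is that the vocabulary is small enough to memorize and loose enough to adapt—which is exactly why people keep adding, dropping, and renaming categories.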
It was one of those beautifully simple things that came from some smart people with very different backgrounds working systematically at a knotty problem until they had something that not only worked on that particular problem but, because of taking the time and making the effort to maintain that ‘conversation with universality,’ worked for lots of other problems too. Hundreds more, in fact. Julie built a robust version of the coding tool we called CAVEAT that we used for years, both at Doblin and in later iterations at E-Lab and Sapient.
That, and a brilliant little moment of insight from the man who invented panty walls.
Miller, Daniel. 2010. Stuff. London, Cambridge: Polity Press.