Two years ago, I arrived at IBM Design’s Studio in Austin to work on Watson. I didn’t know how to code, thought mastering the setup of my iPhone was a technical achievement, and had never researched the world of the developer. Yet here I was, venturing into the very technical realm of artificial intelligence (AI).
AI is generally defined by IBM as systems (machines) that can deeply understand a domain, reason towards specific goals, learn continuously from experience and interact naturally with humans. The focus of this definition is on the machine itself. What I have discovered about AI is that while it is certainly about machines, the building of AI is very much about humans. And for my research, it’s about the humans building the machine…the makers. These makers are learning and inventing what it means to actually create a machine that deeply understands a domain or can interact naturally with humans. The definition of these activities is being discovered and reworked every day. As a researcher on Watson, I am focused on better understanding the thoughts and feelings of these makers as they go through the process of building with AI. What I find then informs product design that works to make the process easier and fit with how these makers like to work.
Here are 3 lessons I’ve learned from doing research with people building with AI:
- People are making as a means to learn.
AI is so new that everyone is learning through iteration and pivots. Often, these makers don’t even remember exactly what they’ve done because their work is constantly new and changing. As a researcher, I have had to explore ways to get them to open up and dig deeper into their experiences.
- Building with AI can be stressful.
Best practices evolve and technology changes so quickly that makers are often stressed as they try to build an empathic, more human machine. As a researcher, I have to work with this tension between maker and machine.
- Understanding makers requires more of a social science approach.
Research in technology is typically hard-data driven. However, humans are unpredictable, and when they’re building AI, they are contending with the additional unpredictability of an unknown space. Understanding these makers means relying on social science research skills that explore social dynamics, behavior and all its grey areas.
Making as a Means to Learn
Artificial intelligence and its application is very much at the frontier stage. It feels a lot like the early days of social media, when companies were figuring out how to develop a Facebook page or a Twitter feed that could actually benefit their customers. Eventually, social media departments sprang up and strategies were created from trial and error.
The same is true right now for AI. There’s so much promise and interest, but less concrete understanding about what AI actually is, why and when to use it, and who will create the products that deploy it. The promise of AI centers around machines helping humans do things faster, cheaper and more efficiently, even doing some things better than humans. However, fulfilling this promise starts with humans themselves figuring out how to create machines that can do this. Makers face the extraordinarily complex task of trying to anticipate what a human may ask or need from a machine.
This complexity leads to constant iteration as makers are building and re-building their processes and techniques to find an effective and efficient way to work with AI. From embedding AI in existing applications to building chatbots to developing a smarter search function, I have traveled the journey with many makers and the theme is always…change.
This context provides both exciting opportunities and steep challenges for me as a researcher. On the one hand, everything that is learned is net new in the field. This isn’t the work of finding nuance, this is the work of discovery! On the other hand, unpacking complexity with research participants is never easy and requires patience and persistence.
Most AI teams have a deep-seated knowledge of their processes and experiences that can be hard to explain. Think about those times where you inherently knew something but found it difficult to describe to someone else. This happens often to me in this space. Makers have been through all sorts of things to get where they are. I am able to observe parts of what they are doing when I am with them, but getting the whole picture means digging around much further to open up the story behind the experience.
This digging has helped me discover that teams work very differently when developing a proof-of-concept (PoC) versus an advanced AI system. Teams evolve from developer-centric groups into business-centric groups as detailed content and designing a good end-user experience come into play. Participants could not really articulate this for me when asked directly how they work, as they hadn’t fully processed it. They were already moving quickly into building for many other challenges. But through digging into their processes, I began to see this pattern across the makers.
Building with AI Can Be Stressful
Research in AI can be an unusual, multi-layered approach to understanding human behavior. I am observing and recording both the human creating the machine and the machine itself. In most conversations, I am learning how my participant’s own feelings and emotions are impacted while they are making a machine meant to convey a level of feelings and emotions itself.
Building with AI can be stressful. The category itself is constantly evolving and yet there is much pressure in building things that are instantly useful and impact the bottom line. This creates pressurized environments for the makers as they tackle complex problems.
Much has been written on the impact of working under pressure. Studies have shown that people working under stress will choose simpler strategies for solving problems to alleviate the cognitive load. AI makers are trying to solve extremely complicated problems—for instance, building a multi-branch dialog to handle any conversation that may come up regarding taking out a home loan. These problems don’t have simple, known strategies that lead to solutions. So the pressure to move quickly turns into stress as problems take longer to solve than may have been expected.
Some of my participants have been working with me for a while and feel comfortable sharing their more personal thoughts on building with AI. This has helped me explore more deeply how AI builders are feeling. It puts their decision-making and workflow choices into perspective. It can be difficult to build a more empathetic, human system when the humans building the machine are stressed themselves.
Understanding Makers Requires a Social Science Approach
Understanding complexity and stress is not easily accomplished with fully structured, tactical research. This type of research certainly plays an important role in technology, but when an entire space like AI is emerging, it is important to dig further into what people are doing and why.
Drawing on softer research skills can help in understanding the makers behind the machines: making a real human connection with participants, employing observational techniques from ethnography or contextual inquiry, and using personification exercises or drawing activities to help participants remember and explain their journey.
These techniques are not new to the field of social science but they can bring a new perspective to research involving technology and complex problem solving with developers and business analysts. They can help create some order out of that chaos, but they can also embrace the chaos in a way. There is much to be learned from messy experiences.
How teams are building with AI is not currently linear. Even trying to define a specific workflow is not always quite right. AI makers are drawing from things they already know, learning totally new things on top of that and mixing it all together. Research designed to work with the sometimes messy and unstructured nature of AI can reveal a lot about the work of the makers and, ultimately, the machines that they create.
Image: “Abysmal / Void (TR). Machine perception is the ability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them. Using an artistic approach to interpret the learning mechanism of an AI-based projection-mapping technique, “Void” challenges the dominant perception system of artificial intelligence as practiced today, which is purely objective and reductionist.” Credit: Ars Electronica / Robert Bauernhansl (CC BY-NC-ND 2.0) via Flickr.
Kelly, J. (2015). Computing, cognition and the future of knowing. Whitepaper, IBM Research.
Beilock, S. L., & DeCaro, M. S. (2007). From poor performance to success under stress: Working memory, strategy selection, and mathematical problem solving under pressure. Journal of Experimental Psychology: Learning, Memory, and Cognition.
IBM is an EPIC2017 Sponsor
Ellen Kolstø has been conducting user and consumer research for 18+ years. She started her career journey as a Strategic Planner for agencies such as Young & Rubicam, Mullen and GSD&M. In 2012 she moved to market and product research heading up qualitative research in North America for System 1 Group (formerly BrainJuicer). Since joining IBM Watson, Ellen has focused on understanding the teams developing AI products including conversational systems (e.g. chatbots, virtual agents) and cross-platform AI solutions.