
How Ethnographic Methods Make APIs More Usable

Ethnographic methods that center systems-thinking, how knowledge is constructed, and how knowledge is shared among communities are the best approach for developing collective digital products like APIs.


Application Programming Interfaces, commonly known as APIs, connect the front-end interfaces we see when we navigate the internet (like websites and apps) to the back-end systems, or databases, that store information. APIs enable people to carry out transactions online, like purchasing goods, booking flights, or applying for government benefits. While they are invisible to end-users, APIs are crucially important to developers and to the way many websites, programs, and applications function.
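
To make that invisible layer concrete, here is a minimal sketch, in Python, of the kind of API call that might sit behind a "check my application status" button. The endpoint, fields, and token are hypothetical, chosen only to illustrate the handoff between a front-end interface and a back-end system.

```python
# A minimal, hypothetical sketch of an API call behind a front-end button.
# The base URL, path, token, and response fields are illustrative only.
import requests

API_BASE = "https://api.example.gov"        # hypothetical back-end service
ACCESS_TOKEN = "replace-with-access-token"  # credentials issued to the app

def get_application_status(application_id: str) -> dict:
    """Ask the back-end system for the status of one benefits application."""
    response = requests.get(
        f"{API_BASE}/applications/{application_id}/status",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    # e.g. {"status": "received", "updated_at": "2021-03-01"}
    return response.json()
```

The front end takes the JSON that comes back and renders it as the message the applicant actually sees; the API itself never appears on screen.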

Like codebases and databases, APIs are objects consumed collectively and collaboratively by teams of developers who work together to integrate front-end and back-end systems, run tests, and monitor and troubleshoot integration issues. In the context of APIs, typical UX research methods that focus on individual users don’t always make sense because they don’t address the way most developers learn to use APIs (see the References & Further Reading section for related studies). Rather, ethnographic methods are instrumental for understanding how teams share knowledge, build common understanding and practices, and generally use APIs.

At Ad Hoc, UX researchers and designers work alongside developers and product managers to design and maintain API platforms for different government agencies. We often get asked, why do we invest so heavily in UX for APIs when APIs don’t have interfaces you can sketch and see? The answer is simple but not obvious: APIs need to be usable to enable people to access government services. In this article, we describe how we use these methods in the API work we do with two different government agencies.

Usability and APIs: What Is a “Usable” API?

Before getting into our case studies, we want to outline what we mean by usability in the context of APIs. Unlike an interface, what makes an API usable isn’t its visual design, but its functionality and the documentation that explains how it works. Although an API can be used by an individual developer, an API is usually part of a collection of knowledge that makes an app or website work.

There are two key components to making an API usable: the design and functionality of the API itself, and the usability of the documentation about the API.

First, the design of the API is the way it is built to expose data from one or more sources, or to take in data, so that people can find information and carry out transactions. To make an API usable by others, the team building it has to take into consideration the needs of the people who will ultimately use the digital tools that rely on it. If the API doesn’t answer questions that real people are asking, other developers won’t know what to build with it. The same goes for how that information is presented and structured in the API. If developers cannot make sense of the data that an API will send back, they will have a hard time figuring out how they can use that API to help users complete tasks on their app or website.
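
As a hypothetical illustration of what "presented and structured" means in practice, the sketch below contrasts two possible response shapes for the same record. Neither comes from a real agency API; the field names are invented to show why one shape is far easier for a developer to build with than the other.

```python
# Two hypothetical ways an API might return the same claim record.
# Neither shape is taken from a real agency API.

# Hard to use: internal codes with no hint of what they mean or why they matter.
opaque_response = {
    "fld_07": "3",
    "stat_cd": "R2",
    "dt": "20210301",
}

# Easier to use: names and values that map to questions real people ask,
# such as "what kind of claim is this?" and "what is happening with it now?"
usable_response = {
    "claim_type": "disability_compensation",
    "status": "under_review",
    "status_description": "A reviewer has been assigned to this claim.",
    "last_updated": "2021-03-01",
}
```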

The second key piece of API usability is documentation. Documentation teaches newcomers what the API does, how to build something with it, and how to troubleshoot any problems they might encounter. It’s like the instruction booklet for an appliance: if you had never used a washing machine before and you encountered one without labels or instructions, it would be difficult to understand what it was just by looking at it. If developers can’t understand the documentation, if the documentation is incomplete, or if it is hard to find, the API will ultimately be unusable.

Methods to Design API Functionality

A functional and usable API answers real questions that people have. Our first example is from our work with a government agency that offers benefits to qualified applicants and has built an API platform. We start by researching problems that people encounter when applying for benefits or appealing benefit decisions in order to design APIs that address those problems.

This agency’s APIs are open to external consumers, meaning private companies, that use the APIs to build software through which people and their designated representatives apply for, access, and manage benefits, and appeal benefit decisions. If developers at these private companies do not understand the data that the APIs expose or the types of transactions these APIs enable, the APIs are unusable. We have conducted extensive remote user interviews and contextual inquiries with people who apply for and manage benefits to understand these processes end to end and assess the information users need. Each study we conduct builds on the previous one. We design discovery studies this way so that we can work nimbly alongside product timelines while also gaining deep insight into how the benefits system works.

Recently, we worked on an API that allows people and accredited representatives to appeal benefits decisions online with the agency. Currently, an individual applicant can only appeal a decision by faxing, mailing, or hand-delivering all the paperwork. They can’t check whether the appeal has been received and processed, and they have to wait for the processing office to mail a notification letter. Applicants can submit an appeal digitally if they are working with an accredited representative (which over 90% of applicants do), because these representatives have access to some of the agency’s internal digital systems, including an agency-wide digital mail portal. However, because this portal serves the entire agency and is not dedicated to the office that processes appeals, representatives do not get status updates or any other information until the appeal is routed to and processed by the office in charge of reviewing appeals. This intake process takes 90 days on average.

In research with accredited representatives, we learned that the lack of information on the status of a benefits application is emotionally taxing to both applicants and their representatives. In 2020, we conducted interviews with seven representatives working in different regions of the US. Across the board, we heard that by the time an applicant starts an appeal, they are already tired of “fighting with the government,” as a participant put it. Applicants make appeals because either their representative convinces them they have a good chance of winning, or they have developed a debilitating condition and have no other option. We also heard that although emotionally taxing, the appeals process offers applicants an opportunity to humanize their story—to put a face to the government form—and show a judge how a medical condition impacts their bodies and their lives. Not knowing what’s happening in an appeal process adds stress and frustration to what is already a difficult emotional process.

Starting from this understanding of what it means for a person to appeal benefits at this specific agency motivates us to design an API that centers transparent communication and trust. The main purpose of the API is to enable representatives to submit an appeal on behalf of their clients. However, because we now have good insight into the challenges these representatives face, we know that digital submission alone does not add as much value as knowing the status of the submission. The status lets representatives know what is going on with the appeal and plan and prepare for the next steps. For that reason, we’re also prioritizing an API endpoint that provides status updates as an appeal moves through the agency’s review process and through different information systems. Ethnographic methods that get at how people experience a particular service, in context, have been key to identifying the functionality that accredited representatives will need for the APIs to work for them.
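
A rough sketch of what those two capabilities could look like to an API consumer is below. The endpoint paths, payload, and status values are our own illustrative assumptions for this article, not the agency's actual API design.

```python
# Hypothetical sketch: submit an appeal, then check its status later.
# Paths, headers, payloads, and status values are illustrative assumptions.
import requests

API_BASE = "https://api.example.gov/appeals/v1"         # hypothetical
HEADERS = {"Authorization": "Bearer replace-with-token"}

def submit_appeal(appeal_form: dict) -> str:
    """Send the appeal paperwork digitally; return an ID the representative can track."""
    resp = requests.post(
        f"{API_BASE}/appeals", json=appeal_form, headers=HEADERS, timeout=10
    )
    resp.raise_for_status()
    return resp.json()["id"]

def get_appeal_status(appeal_id: str) -> dict:
    """Check where the appeal is in the review process."""
    resp = requests.get(f"{API_BASE}/appeals/{appeal_id}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    # e.g. {"status": "in_review", "updated_at": "2021-04-12"}
    return resp.json()
```

The second function is the one the research argues for: without it, representatives are back to waiting for a letter in the mail.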

We will now turn our attention to our work with another agency to describe how we use ethnographic methods to make API documentation usable.

Understanding Communities to Make Documentation Usable

Several researchers at Ad Hoc work on a family of APIs at the Centers for Medicare & Medicaid Services (CMS) that use the Fast Healthcare Interoperability Resources (FHIR) standard: the Blue Button 2.0 API, the Beneficiary Claims Data API, the Data at the Point of Care API, and the Claims Data to Part D Sponsors API. Much of our research focuses on communities of developers, because that is where knowledge about the APIs is discussed and shared. We use ethnographic approaches to understand these communities, their needs, their shared interests, and their interactions.
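
For readers who have not worked with FHIR, the sketch below shows the kind of call developers in these communities write: a standard FHIR search for a beneficiary's claims data. The sandbox URL and token handling are simplified assumptions for illustration; the Blue Button 2.0 documentation is the authority on the actual endpoints and the OAuth 2.0 flow required to get an access token.

```python
# Simplified, illustrative sketch of a FHIR search against the Blue Button 2.0
# sandbox. Real use requires registering an app and completing an OAuth 2.0
# flow; consult the official documentation for current base URLs and scopes.
import requests

FHIR_BASE = "https://sandbox.bluebutton.cms.gov/v2/fhir"  # assumed sandbox base
ACCESS_TOKEN = "replace-with-oauth-access-token"

def get_explanation_of_benefit(patient_id: str) -> dict:
    """Search ExplanationOfBenefit resources (claims data) for one patient."""
    resp = requests.get(
        f"{FHIR_BASE}/ExplanationOfBenefit",
        params={"patient": patient_id},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Returns a FHIR Bundle whose entries are ExplanationOfBenefit resources.
    return resp.json()
```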

Participating in the Community

UX researchers are part of each API’s developer community, embedded not just as silent observers but as active members in Google Groups and office hours. For the CMS FHIR APIs, we created these community spaces to consolidate support and gather user feedback in one place.

On our API teams, researchers work closely with developers to share knowledge and learn from each other. Each API has its own Google Group for external developers and other users, which we use to make announcements, recruit participants for research, and answer questions. UX researchers and developers manage the Google Group together, tackling questions about how to implement the API, what data is available, or other ways to use the data.

On the Data at the Point of Care API and the Beneficiary Claims Data API, team researchers and developers hold office hours every few months or after a big change to the API. During office hours, users can ask questions about the API and troubleshoot problems, and developers clarify and explain technical jargon. This helps the user community build trust and gives the entire API team insight into the way developers work, what information helps them troubleshoot, what challenges they face, how they support each other, how knowledge about the API is produced and used, and the kinds of problems users want to solve with the API. This deep understanding of developers as a knowledge-sharing community would not be possible through standard interviews, observations, or tracking of individual developers.

By tracking user questions in Google Groups and office hours, researchers, product managers, and developers can see how the API is being used. They can use these insights to improve the API’s documentation or propose new features to the API based on user needs. If a lot of users ask, “How often is new data refreshed?” in the Google Group, we know that our documentation doesn’t readily give them that information. If we investigate further, perhaps through user interviews, we may find out that the data isn’t being refreshed often enough and we should change the API’s functionality. Since the team is part of the user community, we can directly see how end users work and what they need to succeed, which makes it easier to agree on what needs to change and prioritize new work on the API.

(Note: these methods are great for websites and apps, too!)

Building and Maintaining Knowledge

Our developer community has been even more valuable as we are now required to use a new version of the FHIR standard for our APIs. We decided to use this opportunity to update the documentation for each API and develop new documents about this version. To evaluate our existing API documentation, we conducted user interviews with developers, product managers, and business analysts who use Blue Button 2.0, and learned about how they use our API. Combined with our community research, we’ve identified new ways that we can help our users figure out how to use our API and what they can do with it.

All of the CMS FHIR APIs will need new versions to accommodate the new FHIR standard. To manage this kind of big change across four different APIs, our team turned to strategic UX methods grounded in service design and change management. Each API has its own planned work and product map, which runs the risk of information silos and duplicated effort. By coming together as a group during quarterly planning meetings (Program Increment Planning in an Agile framework) and remote workshops, we looked across all our plans to coordinate efforts and cut down on duplicative work. Along with the UX teams on the other APIs, we created documents that can be used across the APIs (a shared content strategy guide, a communication rollout plan for the new version, an FAQ), and we are working on ways to regularly share UX insights with each other. Anything one team learns through research or community management can improve another API, not just its own. Creating shared, cohesive documentation about our APIs makes that documentation easier for us to maintain and gives our users a deep body of knowledge to draw from when using our APIs.

Conclusion

Like any other digital product, APIs need research with users to iron out potential usability issues. This is especially important in government tech, where the websites that citizens use to apply for benefits and carry out other vital transactions depend on usable, reliable APIs. However, not all UX methods apply in this space. Unlike interfaces that cater to individual users, APIs are collective and collaborative digital objects. They’re built, maintained, and integrated with by teams, and they serve as bridges for information exchange among different parts of a system. As our work on two government API platforms demonstrates, ethnographic methods that center 1) systems-thinking, 2) how knowledge is constructed, and 3) how knowledge is shared among communities are better suited to this work than traditional UX methods.

References & Further Reading

Baek, Knowl et al, Human API as a Research Source in Health Care, 2013 EPIC Proceedings, pp. 266–281

Carelock, Nikki, Anthropology Techniques in Human-Centered Design: The Deep Hang, Ad Hoc, November 22, 2019

Clarke, Steven, Measuring API Usability, Dr Dobb’s, May 1, 2004

Collier Jennings, Jennifer & Rita Denny, Where is Remote Research? Ethnographic Positioning in Shifting Spaces, EPIC Perspectives, May 18, 2020

Earle, Ralph et al, User Preferences of Software Documentation Genres, Proceedings of the 33rd Annual International Conference on the Design of Communication, July 2015, Article No. 46, pp. 1–10, https://doi.org/10.1145/2775441.2775457

Gershman, Greg, Ad Hoc, Fearless, and Ellumen Win Contract to Continue Support of Blue Button 2.0, Ad Hoc, June 9, 2020

Li, Hongwei et al, What Help Do Developers Seek, When and How? 20th Working Conference on Reverse Engineering (WCRE), 2013, pp. 142–151, doi: 10.1109/WCRE.2013.6671289

Meng, Michael et al, How Developers Use API Documentation: An Observation Study, Communication Design Quarterly, January 29, 2019

Piccioni, M. et al, An Empirical Study of API Usability, 2013 ACM / IEEE International Symposium on Empirical Software Engineering and Measurement, 2013, pp. 5–14, doi: 10.1109/ESEM.2013.14

Rauf, Irum, et al, A Systematic Mapping Study of API Usability Evaluation Methods, Computer Science Review, Volume 33, August 2019, pp. 49–68

Saied, Mohamed Aymen et al, An Observational Study on API Usage Constraints and Their Documentation, IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), 2015, pp. 33–42, doi: 10.1109/SANER.2015.7081813.

Vidart-Delgado, Maria, Ethnography is Key for Computer-to-Computer Communication That Enhances Veteran Experiences, Ad Hoc, June 14, 2020

Zhou, Y. et al, Analyzing APIs Documentation and Code to Detect Directive Defects, 2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE), 2017, pp. 27–37, doi: 10.1109/ICSE.2017.11.

Image: 2012_1_22_26_47_86252 by justin lincoln via flickr



Libby Kaufer

Libby Kaufer is a UX Researcher working on global products at Twilio, a communications platform. Before Twilio, she worked at Ad Hoc LLC on healthcare APIs.  She has over five years of experience conducting UX research, content strategy, and design. Her goal is to make digital services and websites easier to use for all people, whether they’re signing up for healthcare or keeping track of invoices at their job. She received an MSc in the Social Science of the Internet at Oxford Internet Institute (University of Oxford), and a Masters from the Pratt Institute School of Library and Information Science.

Maria Vidart-Delgado

Maria Vidart-Delgado, Ad Hoc LLC

Maria Vidart-Delgado is a Sr. UX Researcher with Ad Hoc LLC and has over 10 years of experience conducting ethnographic and qualitative research for civic projects. She currently works with government APIs. In the past, she has worked on climate change advocacy, cultural policy, creative placemaking, and political tech. She holds a PhD in Cultural Anthropology from Rice University.