In this tutorial you’ll learn a framework for evaluating AI research tools, leading strategic conversations about them, and deciding whether, when, and how to use them. We’ll build a “thinking stack” that includes:
✴️ Building Blocks: Understand the technologies, their capabilities and limitations, and the sourcing under the hood of different types of AI tools
✴️ Context: Organizational, cognitive, and cultural contexts all influence whether and how you decide to use an AI tool
✴️ Epistemic Fit: An appropriate AI tool should enhance your research questions, paradigms, methodologies, and goals
✴️ Workflow: Match specific use cases to different types of tools, from scoping a project to activating and storing insights
✴️ Quality: Establish a method for assessing whether an AI tool works and is valid, compliant, and ethical for your use
✴️ Accountability: Establish policies and practices for taking responsibility for the credibility, ethics, and outcomes of AI tools
✴️ Adoption: Prepare for the changes to organizations, work practices, and relationships that AI integration may require or create
Amid market and industry pressures to work at greater speed and scale, the way we use AI must reinforce the core value our work creates. In working through the “thinking stack,” participants will discuss how AI technologies could contribute to the way we frame problems, ask questions, and use participation, reflexivity, and multimodal data. We’ll also discuss the creative agency we have – alongside the significant constraints we experience – within organizations and sociotechnical systems.
Instructor

Lindsey DeWitt Prat is a Director at Bold Insight, where she leads global UX research initiatives to help teams understand how cultural and linguistic dynamics shape technology use. She has been testing and evaluating AI research tools for more than two years. A humanities-trained ethnographer, author, and translator with more than 15 years of research experience bridging academia and industry, she brings cultural insights to projects spanning 25+ countries, with particular depth in Japan and East Asia. Her research focuses on pathways to making AI more inclusive and culturally resonant through deep, contextual understanding of people and society. Most of her published work explores gender exclusion, cultural heritage, and religion in Japan through a combined ethnographic and historical approach. Lindsey holds a PhD in Asian Languages & Cultures from UCLA and an MA in International Studies and Comparative Religion from the University of Washington. She is a 2025–2026 AI for Developing Countries Forum (AIFOD) Senior Fellow.