The collection, curation, and analysis of data have always been as social as they are technical. As the statistical techniques and computational infrastructures of artificial intelligence and data science rapidly develop, we must continue to ground our understandings of data in context, drawing on the lived experiences of the people who give that data meaning. But how do we bring human-centered perspectives and cultural contexts to data-intensive, highly automated algorithmic decision-making? In this talk, I define and discuss two ways of thinking about ethnographic methods in relation to computer, information, and data science, then discuss how my research into various knowledge infrastructures and user-generated content platforms relates to both.
First, the ethnography of computation involves using traditional ethnographic methods (e.g., interviews, observation, participant-observation, case studies, and archival research) to study how people relate to computation and data in various ways. How do people design, develop, deploy, document, debate, maintain, manage, use, not use, learn, or teach computation and data in their everyday lives and work? Second, computational ethnography involves extending ethnography’s traditionally qualitative methodological toolkit with computational methods. How can we conduct mixed-methods scholarship in line with the broader epistemological principles that make ethnography a rich method for holistically investigating cultural phenomena? Both approaches bring key insights and collaborations to many classic and contemporary issues around information systems as socio-technical systems, letting us attend to data, information, and knowledge as they exist in particular organizational, institutional, social, cultural, economic, and political contexts.