New article about algorithmic systems in Wikipedia and going ‘beyond opening up the black box’

13 minute read

I'm excited to share a new article, "Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture" (open access PDF here). It is published in Big Data & Society as part of a special issue on "Algorithms in Culture," edited by Morgan Ames, Jason Oakes, Massimo Mazzotti, Marion Fourcade, and Gretchen Gano. The special issue came out of a fantastic workshop of the same name held last year at UC Berkeley, where we presented and workshopped our papers, all of which took some kind of socio-cultural approach to algorithms (broadly defined). The article began as a chapter of my dissertation, based on my ethnographic research into Wikipedia, and it has gone through many rounds of revision across a few publications as I've tried to connect what I see in Wikipedia to broader conversations about the role of highly-automated, data-driven systems across platforms and domains.

I use the case of Wikipedia's unusually open algorithmic systems to rethink the "black box" metaphor, which has become a standard way to think about ethical, social, and political issues around artificial intelligence, machine learning, expert systems, and other automated, data-driven decision-making processes. Entire conferences are being held on these topics, like Fairness, Accountability, and Transparency in Machine Learning (FATML) and Governing Algorithms. In much current scholarship and policy advocacy, there is often an assumption that we are after some internal logic embedded in the codebase (or "the algorithm") itself, which has been hidden from us for reasons of corporate or state secrecy. Many times this is indeed the right goal, but scholars are increasingly raising broader and more complex issues around algorithmic systems, as in work from Nick Seaver (PDF), Tarleton Gillespie (PDF), Kate Crawford (link), and Jenna Burrell (link), which I build on in the case of Wikipedia. What happens when the kinds of systems that are kept under tight lock and key at Google, Facebook, Uber, the NSA, and so on are not just open sourced in Wikipedia, but also typically designed and developed in an open, public process in which developers have to explain their intentions and respond to questions and criticism?

In the article, I discuss these algorithmic systems as part of Wikipedia's particular organizational culture, focusing on how becoming and being a Wikipedian involves learning not just traditional cultural norms, but also familiarity with the various algorithmic systems that operate across the site. In Wikipedia's unique setting, we see how the questions of algorithmic transparency and accountability subtly shift away from asking whether such systems are open to an abstract, aggregate "public." Based on my experiences in Wikipedia, I instead ask: For whom are these systems open, transparent, understandable, interpretable, negotiable, and contestable? And for whom are they as opaque, inexplicable, rigid, bureaucratic, and even invisible as the jargon, rules, routines, relationships, and ideological principles of any large-scale, complex organization? Like all cultures, Wikipedian culture can be quite opaque, hard to navigate, difficult to fully explain, and constantly changing, and it has implicit biases – even before we consider the role of algorithmic systems. By drawing on approaches to understanding culture from the humanities and the interpretive social sciences, we get a different perspective on what it means for algorithmic systems to be open, transparent, accountable, fair, and explainable.


I should say that I'm a huge fan and advocate of work on "opening the black box" in the more traditional, information-theoretic sense, which tries to audit and/or reverse engineer how Google search results are ranked, how Facebook news feeds are filtered, how Twitter's trending topics are identified, or how similar kinds of systems make (or help make) decisions about who gets bail, who gets a loan, or who is flagged as a potential terrorist threat. So many of these systems that make decisions about the public are opaque to the public, protected as trade secrets or for reasons of state security. There is a huge risk that such systems have deeply problematic biases built in (unintentionally or otherwise), and many people are trying to reverse engineer or otherwise audit such systems, as well as looking at issues like biases in the underlying training data used for machine learning. For more on this topic, definitely look through the proceedings of FATML, read books like Frank Pasquale's The Black Box Society and Cathy O'Neil's Weapons of Math Destruction, and check out the Critical Algorithm Studies reading list.

Yet when I read this kind of work and hear these kinds of conversations, I often feel strangely out of place. I've spent many years investigating the role of highly-automated algorithmic systems in Wikipedia, whose community has strong commitments to openness and transparency. And now I'm in the Berkeley Institute for Data Science, an interdisciplinary academic research institute where open source, open science, and reproducibility are not only core values many people individually hold, but also a major focus area for the institute's work.

So I'm not sure how to make sense of my own position in the "algorithms studies" sub-field when I hear of heroic (and sometimes tragic) efforts to pry open corporations and governmental institutions that are increasingly relying on new forms of data-driven, automated decision-making and classification. If anything, I have the opposite problem: in the spaces I tend to spend time in, there is so much code and data available for me to examine that it is overwhelming to navigate. So many people in academic research and the open source / free culture movements are eager for a fresh pair of eyes on the work they've done, work that often uses many of the same fundamental approaches and technologies that concern us when they are hidden away by corporations and governments.

Wikipedia has received very little attention from those who focus on issues around algorithmic opacity and interpretability (even less than scientific research has, but that's a different topic). Like almost all of the major user-generated content platforms, Wikipedia relies deeply on automated systems for reviewing and moderating the massive number of contributions made to its articles every day. Yet almost all of the code and most of the data keeping Wikipedia running are open sourced, including the state-of-the-art machine learning classifiers trained to distinguish good contributions from bad ones (for different definitions of good and bad).
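To give a concrete sense of that openness, here is a minimal sketch of querying the publicly documented ORES scoring service, which hosts these kinds of edit-quality classifiers, for its "damaging" prediction on a single revision. The endpoint path, model name, and response fields below are based on my recollection of the v3 API and should be checked against the service's current documentation; the revision ID is purely hypothetical.

```python
# Minimal sketch: ask Wikipedia's open ORES scoring service what its "damaging"
# classifier thinks of one (hypothetical) revision. The endpoint and response
# field names are assumptions to verify against the ORES documentation.
import requests

REVISION_ID = 123456789  # hypothetical revision ID, for illustration only
url = (
    "https://ores.wikimedia.org/v3/scores/enwiki/"
    f"?models=damaging&revids={REVISION_ID}"
)

response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()

# Drill down to the classifier's prediction and probability for this revision.
score = data["enwiki"]["scores"][str(REVISION_ID)]["damaging"]["score"]
print("Predicted damaging:", score["prediction"])
print("P(damaging):", score["probability"]["true"])
```

Anyone can run a query like this; the point of the article is to ask what that kind of openness does and doesn't accomplish.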

The design, development, deployment, and discussion of such systems generally take place in public forums, including wikis, mailing lists, chat rooms, code repositories, and issue/bug trackers. And this is not just a one-way mirror into the organization: volunteers can and do participate in these debates and discussions. In fact, the paid Wikimedia Foundation staff tasked with developing and maintaining these systems often recruit volunteers to help, since the Foundation is a non-profit that doesn't have the resources of a large company or even a smaller startup.


From all this, Wikipedia may appear to be the utopia of algorithmic transparency and accountability that many scholars, policymakers, and even some industry practitioners are calling for in other major platforms and institutions. So for those of us who are concerned with black-boxed algorithmic systems, I ask: are open source, open data, and open process the solution to all our problems? Or, more constructively: when the artificial constraints of secrecy are not merely removed by some external fiat, but are something that the people designing, developing, and deploying such systems strongly oppose on ideological grounds, what will our next challenge be?

In trying to work through my understanding of this issue, I argue that we need to take an expanded, micro-sociological view of algorithmic systems as deeply entwined with particular facets of culture. We need to look at algorithmic systems not just in terms of how they make decisions or recommendations by transforming inputs into outputs, but also ask how they transform what it means to participate in a particular socio-technical space. Wikipedia is a great place to study that, and many Wikipedia researchers have focused on related topics. For example, newcomers to Wikipedia must learn that in order to properly participate in the community, they have to directly and indirectly interact with various automated systems, such as by tagging requests with machine-readable codes so that they are properly circulated to others in the community (sketched below). And in terms of newcomer socialization, it probably isn't wise to teach someone how to properly use these machine-readable templates by simply sending them to the code repository for the bot that parses these templates to assist with the task at hand.
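As a toy illustration of this kind of machine-readable convention, consider the {{helpme}} template that newcomers can place on their talk pages to ask for help. The snippet below is not any real bot's code; it is a minimal sketch, assuming a hypothetical bot that scans wikitext for that template and collects the attached questions so a human helper can respond.

```python
# Toy sketch (not any real bot's code): scan a page's wikitext for {{helpme}}
# templates and collect the questions attached to them, so the requests can be
# routed to human helpers.
import re

HELPME_PATTERN = re.compile(
    r"\{\{\s*helpme\s*(?:\|(?P<message>[^}]*))?\}\}",
    re.IGNORECASE,
)

def find_help_requests(wikitext: str) -> list[str]:
    """Return the messages attached to any {{helpme}} templates on a page."""
    return [match.group("message") or "" for match in HELPME_PATTERN.finditer(wikitext)]

page = "I can't figure out citations. {{helpme|How do I cite a book?}}"
print(find_help_requests(page))  # ['How do I cite a book?']
```

The point is not the regex itself, but what it implies for participation: the newcomer has to learn that typing this exact incantation, in the right place, is what summons help.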

It certainly makes sense that newcomers to a place like Wikipedia have to learn its organizational culture to fully participate. I'm not arguing that these barriers to entry are inherently bad and should be dismantled as a matter of principle. Over time, Wikipedians have developed a specific organizational culture through various norms, jargon, rules, processes, standards, communication platforms beyond the wiki, routinized co-located events, as well as bots, semi-automated tools, browser extensions, dashboards, scripted templates, and code directly built into the platform. This is a serious accomplishment and it is a crucial part of the story about how Wikipedia became one of the most widely consulted sources of knowledge today, rather than the frequently-ridiculed curiosity I remember it being in the early 2000s. And it is an even greater accomplishment that virtually all of this work is done in ways that are, in principle, accessible to the general public.


But what does that openness of code and development mean in practice? Who can meaningfully make use of what often feels, even to a long-time Wikipedian like me, like an overwhelming amount of openness? My argument isn't that open source, open code, and open process somehow don't make a difference. They clearly do, in many different ways, but Wikipedia shows us that we should be asking: when, where, and for whom does openness make more or less of a difference? Openness is not equally distributed, because taking advantage of it requires certain kinds of work, expertise, self-efficacy, time, and autonomy, as Nate Tkacz has noted about Wikipedia in general. For example, I reference Eszter Hargittai's work on digital divides, in which she argues that just giving people access to the Internet isn't enough; we also have to teach people how to use and take advantage of the Internet, and these "second-level digital divides" are often where demographic gaps widen even more.

There is also an analogy here with Jo Freeman's famous piece The Tyranny of Structurelessness, in which she argues that documented, formalized rules and structures can be far more inclusive than informal, unwritten rules and structures. Newcomers can more easily learn what is openly documented and formalized, while it is often only possible to learn the informal, unwritten rules and structures by either having a connection to an insider or accidentally breaking them and being sanctioned. But there is also a problem with the other extreme, when the rules and structures grow so large and complex that they become a bureaucratic labyrinth that is just as hard for the newcomer to learn and navigate.

So for veteran Wikipedians, highly-automated workflows like speedy deletion can be a powerful way to navigate and act within Wikipedia at scale, much as Wikipedia's dozens of policies make it easy for veterans to speak volumes just by saying that an article is a CSD#A7, for example. For their intended users, these workflows sink into the background and become second nature, like all good infrastructure does. Veterans can also foreground the infrastructure and participate in complex conversations and collective decisions about how these tools should change based on various ideas about how Wikipedia should change – as Wikipedians frequently do. But for newcomers, the exact same system – which is in principle almost completely open to and contestable by anyone who opens up a ticket on Phabricator – can look and feel quite different. And just knowing "how to code" in the abstract isn't enough, as newcomers must learn how code operates in Wikipedia's unique organizational culture, which differs in many ways from other large-scale open source software projects.


So this article might seem on the surface to be a critique of Wikipedia, but it is more a critique aimed at my wonderful, brilliant, dedicated colleagues who are doing important work to try to open up (or at least look inside) the proprietary algorithmic systems that are playing important roles in major platforms and institutions. Make no mistake: despite my critiques of the information-theoretic metaphor of the black box, their work within this paradigm is crucial, because there can be many serious biases and inequalities that are intentionally or unintentionally embedded in and/or reinforced through such systems.

However, we must also do research in the tradition of the interpretive social sciences to understand the broader cultural dynamics around how people learn, navigate, and interpret algorithmic systems, alongside all of the other cultural phenomena that remain as "black boxed" as the norms, discourses, practices, procedures, and ideological principles present in all cultures. I'm not the first to raise these kinds of concerns, and I also want to highlight work like that of Motahhare Eslami et al. (PDF1, PDF2) on people's various "folk theories" of opaque algorithmic systems on social media sites. The case of Wikipedia shows that when such systems are quite open, it is perhaps even more important to understand how these differences make a difference.