
“The algorithm is the ultimate arbiter”

Google has recently expanded its Fact Check label to searches worldwide. Danny Sullivan has a good writeup about it over at Search Engine Land, providing a general overview of the feature along with some important caveats, the most important of which is that there is little to no transparency into how Google determines whether a fact-checking source is reliable. As Sullivan says, “the algorithm is the ultimate arbiter” as to whether a fact-check publisher is deemed an “authoritative source” for a given search result. He then points out that we Google users have no way of knowing how that algorithm works.

Pointing out this opacity is not necessarily an indictment of the Fact Check feature itself, which appears to be a sincere and positive step by Google in the fight against the misinformation and disinformation epidemics plaguing the online landscape, but it underscores a serious problem that goes beyond fact checking and search. The public discussion of filter bubbles has been going on for several years and has recently reached a much wider audience. The general causes of filter bubbles tend to fall into one of two categories. First there are what might be called the internal causes, where people consciously or unconsciously seek out information and other people that reinforce their beliefs; then there are the external causes, where online services feed you a steady diet of personalized search results, news stories, social media posts, ads and so on.

The internal causes are likely deeply ingrained in each of us, a combination of biological and cultural imprinting. We may want to overcome these biases, and with some effort and persistence we can find ways both to become more aware of them and to work around them to some degree, but most of us don’t have strong enough incentives to do this kind of work consistently. Challenging assumptions and deeply held beliefs may be beneficial in the broader context of a society, but for an individual it often carries a subjective cost that outweighs the subjective benefit. Without some additional external incentive to figure out how or why I might be wrong about something, I’m likely to ask myself “Why try?”, as Will Wilkinson asked on behalf of a hypothetical everyman in a recent tweetstorm.

If we’re realistically going to chip away at filter bubbles and make them less insular, then the bulk of that work is going to come from external forces, with Google’s Fact Check label being just one small but not insignificant example. If a service such as the Fact Check label is actually going to be effective in influencing perceptions, and not just something that people embrace or dismiss based on whether the information being proffered supports or refutes their beliefs, then that service has to be trusted as a reasonably disinterested arbiter of factual claims. Building up that kind of trust among a broad audience will take time, and it’s not clear just how broad or how deep that trust can actually go in our hyper-polarized environment. This is where transparency comes in.

A transparent algorithm for determining authoritative fact-checking sources for a given claim will not magically transcend charges of bias simply by virtue of its transparency, but it will tend to shift the focus away from the bias of persons and toward the bias of an impersonal process, and that can actually lead to more constructive dialogues. The critiques and defenses become less personal and the emotional stakes seem lower. I suspect that even for people who are very experienced in arenas where contentious battles over ideas are commonplace, it’s not always easy to completely separate the idea from the person. For most people it is extremely difficult, and we can see evidence of this just about anywhere online where people with differing opinions can engage with one another. Perhaps that’s because we’re primarily social animals and not ideological ones. Whatever the reason, the tendency for ideological disagreements to become heated and personal is not going away any time soon.
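
To make the idea of an inspectable process a little more concrete, here is a purely hypothetical sketch, in Python, of what a transparent check for an “authoritative” fact-checking source could look like. The criteria and threshold below are my own illustrative assumptions and have nothing to do with how Google actually makes this determination; the point is only that every rule, and every reason for a rejection, sits in plain view for anyone to examine and criticize.

```python
# Purely hypothetical sketch of a transparent source-selection check.
# The criteria and threshold are illustrative assumptions, not anything
# Google has published.

from dataclasses import dataclass


@dataclass
class FactCheckSource:
    name: str
    uses_structured_markup: bool   # publishes machine-readable fact-check markup
    discloses_methodology: bool    # methodology page is public
    has_corrections_policy: bool   # published corrections policy
    claims_checked: int            # rough measure of track record


def is_authoritative(source: FactCheckSource, min_claims: int = 50):
    """Return a decision plus the reasons behind it, so the process
    itself, rather than the people applying it, can be examined."""
    reasons = []
    if not source.uses_structured_markup:
        reasons.append("no machine-readable fact-check markup")
    if not source.discloses_methodology:
        reasons.append("methodology not disclosed")
    if not source.has_corrections_policy:
        reasons.append("no published corrections policy")
    if source.claims_checked < min_claims:
        reasons.append(f"fewer than {min_claims} claims checked")
    return len(reasons) == 0, reasons


if __name__ == "__main__":
    candidate = FactCheckSource("Example Fact Desk", True, True, False, 120)
    accepted, reasons = is_authoritative(candidate)
    print(accepted, reasons)  # False ['no published corrections policy']
```

Whatever the actual rules turn out to be, publishing them in some form like this is what lets a critic argue about the process itself instead of about the motives of the people behind it.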

Of course, even a transparent and completely automated algorithm will still be dismissed by some on the grounds of personal associations. The algorithm was created by one or more persons, and those creators may have an agenda that they’ve managed to cleverly hide in the algorithm’s logic. In Fact Check’s case, the algorithm is merely identifying authoritative fact-checking sources and not doing the actual fact checking, so even if one sees no glaring flaws or biases in the algorithm itself, one might still dismiss the work of the fact checker as biased. It’s even possible, maybe likely, that Fact Check will be regarded as biased by one group of people simply because another group with opposing views seems to cite it more often.

Transparency is not a panacea, but its presence helps to shield against the most summary charges of bias. One comes across as somewhat less than reasonable if one labels an observable process biased without bothering to actually examine that process to any extent. Transparency also helps to defuse some of the emotional charge of debates, as some of the focus shifts from the personal to the procedural. A focus on process can also have the added benefit of prompting us to consider, from time to time, the processes we ourselves use to determine what is true and what is false. Of course, those processes are often far from transparent, most of all to ourselves.