Thoughts on Filter Bubbles

My friends Boaz Miller and Isaac Record have a paper forthcoming in Episteme on the implications of internet filter bubbles for our ability to form knowledge. They argue that, because search engines like Google personalize search results through an unknown algorithm, we cannot base knowledge claims on those search results alone. If we attempt to do so, we are failing to live up to our epistemic responsibilities to avoid bias—our beliefs lack the requisite justification to be counted as knowledge.

Miller and Record’s claim is based on observations such as those from Eli Pariser that search results can differ significantly between individuals with diverging ideologies or interests. Pariser’s paradigmatic example is of two friends he asked to search for “BP” in 2010, after the Deepwater Horizon oil spill. One friend’s top results were investment information, while the other’s were news of the spill. Both friends were “educated, white left-leaning women”, so the difference is meant to suggest that even greater divergences will exist between users who have less in common.

Several potential objections to Pariser, Miller, and Record come to mind.

First, is there more than anecdotal evidence that this effect really exists, or that it is as significant as they suggest? One reason to suspect that it is not is that Pariser is the “board president and former executive director of MoveOn.org”. The Internet filter bubble fits well with the leftish narrative of the Tea Party as existing in a radical echo chamber insulated from reality. Elsewhere Miller has argued that we should infer that a consensus is knowledge-based when knowledge is the best explanation for that consensus. If, for example, common ideology seems to be a good explanation for a scientific consensus, then we have less reason to believe that the consensus is due to scientific knowledge—that the purported facts the scientists agree upon are true. Along similar lines, for an individual we might say that we should infer the truth of his or her claim if the best explanation for that person making that claim is that it is true. For example, if an oil magnate claims that global warming is occurring, we have more reason to believe the claim than if a Greenpeace activist makes it, because the claim probably goes against the oil magnate’s financial interests while it would not go against the activist’s. In this case, since Pariser’s claim seems quite compatible with the worldview of the mainstream left, we have less reason to accept the claim as true. Nevertheless, I am prepared to grant that it is true—that Google’s search personalization does have a significant effect on search results. I don’t have any evidence to the contrary.

Second, does Google’s personalization actually affect belief formation or is it just epiphenomenal? Personalization is based on your past search behaviour—presumably, what links you click in search results. What Google could be doing is merely ordering your search results such that the top results are the results you would have clicked on without any personalization. Google could just be saving you time, by getting you the pages you would have visited anyways, but more efficiently. If this is the case, then the kind of information that you could be missing out on due to personalization is of the sort “there are other pages that match my search—I don’t have any interest in them but I know they exist”. You might also be missing out on the opportunity to visit a site by “accident”—expecting a site that adheres to your worldview but inadvertently being exposed to a viewpoint that does not match your own. While this kind of information is not worthless, I’m not sure it is significant enough to affect knowledge claims in the way Miller and Record suggest.
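
To make this “epiphenomenal” reading concrete, here is a minimal sketch of personalization as mere reordering. It is purely hypothetical: the toy index and the scoring are invented, and nothing here reflects Google’s actual ranking. The point is just that the same set of matching pages comes back either way, and past clicks change only the order.

```python
# Hypothetical sketch: personalization as pure reordering of the same result set.
# The index, scoring, and click model are invented for illustration only.

def base_results(query, index):
    """Return every page in the toy index that matches the query, unpersonalized."""
    return [page for page in index if query.lower() in page["keywords"]]

def personalize(results, click_history):
    """Reorder results so pages resembling past clicks come first.

    Crucially, the set of pages is unchanged; only the order differs.
    """
    def affinity(page):
        return sum(1 for clicked in click_history if clicked in page["keywords"])
    return sorted(results, key=affinity, reverse=True)

index = [
    {"url": "news.example/spill", "keywords": {"bp", "oil", "spill", "gulf"}},
    {"url": "bp.com/investors",   "keywords": {"bp", "stock", "dividend"}},
]

plain = base_results("BP", index)
filtered = personalize(plain, click_history={"stock", "dividend"})

print([p["url"] for p in plain])     # ['news.example/spill', 'bp.com/investors']
print([p["url"] for p in filtered])  # ['bp.com/investors', 'news.example/spill']
assert {p["url"] for p in plain} == {p["url"] for p in filtered}  # same pages, new order
```

On this toy picture, what the filtered user loses is the ordering and the chance of an accidental click on a lower-ranked page, which is roughly the modest loss described above.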

Third, which knowledge claims are actually being undermined by personalization? In Pariser’s example neither friend seems to be exposed to false information. There doesn’t seem to be any reason to suspect that the first friend’s beliefs about BP as a potential investment were likely to be false as a result of her past search behaviour, or any reason to suspect that the second friend’s beliefs about the oil spill would be. They formed different beliefs, not conflicting beliefs. For personalization to be relevant for belief justification, it needs to make the beliefs you form as a result of personalized searches less likely to be true. Where it might have an effect is in relation to Sandy Goldberg’s conception of “(reliably) complete coverage”—the idea that some of our knowledge is based on the inference that if a purported fact were true, we would have heard of it by now. Pariser’s first friend, upon searching for “BP” and finding only investment information, might infer that BP could not have been involved in a major oil spill, because if it had it would have shown up in her search results. However, it seems that this inference would only be problematic if either Google’s search were that person’s only source of information about the domain in question or if Google were part of a larger all-encompassing filter bubble that would shelter the person from such information. The latter is certainly plausible (Facebook, Fox News, Twitter, Conservapedia…), but the case still has to be made.
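
To see how much the coverage inference depends on coverage actually being reliable, here is a toy Bayesian calculation. All of the numbers are invented for illustration; they are not estimates of anything. The idea is just that if an unfiltered search would almost certainly surface a major spill, then the absence of spill results is strong evidence there was none, whereas if personalization makes coverage unreliable, the same absence tells you much less.

```python
# Toy Bayesian model of a Goldberg-style coverage inference (all numbers invented).
# H = "BP was involved in a major spill"; E = "my search results mention no spill".

def posterior_spill(prior, p_no_mention_given_spill, p_no_mention_given_no_spill=0.99):
    """P(spill | no mention of a spill in my results), by Bayes' rule."""
    p_e = (p_no_mention_given_spill * prior
           + p_no_mention_given_no_spill * (1 - prior))
    return p_no_mention_given_spill * prior / p_e

prior = 0.05  # prior credence that a major spill has occurred

# Reliable coverage: an unfiltered search would almost surely surface the spill,
# so seeing nothing about it drives the posterior down to roughly 0.001.
print(posterior_spill(prior, p_no_mention_given_spill=0.02))

# Heavy personalization: a filtered search could easily miss the spill, so the
# absence of spill news barely moves the needle (posterior roughly 0.03).
print(posterior_spill(prior, p_no_mention_given_spill=0.60))
```

Whether real personalization degrades coverage anywhere near this much is, again, the empirical question raised above.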

Finally, there is the possibility that the personalization effect is proportional to the entrenchment of an individual’s already-held beliefs relating to their search. That is, if I have only mild preconceptions about a particular question, then it is likely that I will get search results that are relatively unbiased. It is only where I have strong interests or strongly held beliefs that there will be a significant filtering effect due to personalization. But these are precisely the questions on which being exposed to a wider range of search results would be unlikely to change my views regardless. So Google might only fail to provide unbiased results when providing unbiased results would not change my beliefs anyways. Google’s personalization may only be significant where I have already lost the ability to properly form knowledge. Worrying about how personalization affects the justification of your beliefs might then be like worrying about your cholesterol while on death row.
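
For what it’s worth, the structure of this last worry can be put in a toy model. The assumptions below are entirely mine and very crude: filtering strength scales with entrenchment, and openness to revising one’s beliefs scales inversely with it. Nothing here is claimed by Miller, Record, or Pariser.

```python
# Toy model (my invented assumptions): filtering strength tracks how entrenched my
# beliefs are, and my openness to revising those beliefs tracks the opposite.

def filtering_strength(entrenchment):
    """0 = no preconceptions, 1 = fully entrenched; assume filtering scales with this."""
    return entrenchment

def openness_to_revision(entrenchment):
    """Assume strongly entrenched views are the ones exposure would not shift."""
    return 1 - entrenchment

def epistemic_cost(entrenchment):
    """How much personalization actually changes what I end up believing."""
    return filtering_strength(entrenchment) * openness_to_revision(entrenchment)

for e in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"entrenchment={e:.2f}  cost={epistemic_cost(e):.3f}")

# At the entrenched extreme (entrenchment = 1) the cost is zero: filtering is at its
# strongest exactly where unbiased results would not have changed my beliefs anyway.
```

Obviously both assumptions are doing all the work here; whether filtering and entrenchment actually co-vary like this is another empirical question.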

All this is not to say that Miller and Record are wrong about Google’s filter bubble or its effect on knowledge claims, but I am somewhat skeptical of the magnitude and importance of the effect, at least in isolation. This is certainly an area that warrants further exploration by epistemologists.

Comments

  • Boaz Miller

    Great post, Mike. A few comments. It is indeed an empirical question how much filtering biases users’ beliefs, and I would guess that this depends a lot on the specific technology and the specific domain of knowledge. Part of our point, though, is to stress epistemic responsibility as a constraint on epistemic justification. As we know both from Popper and feminist epistemology, confirmations are too easy to find, so a responsible subject needs, within reasonable limits, to seek disconfirmations of her prior beliefs. And on controversial matters different beliefs tend to be conflicting beliefs as well. Stressing epistemic responsibility is also why Goldberg argues that coverage-based beliefs are inferential rather than another case of extended cognition. He wants their justification still to depend on the subject’s critical awareness of what she can reasonably expect her sources to cover or not to cover. Our point is that, given the way filtering technologies work today, without transparency and with little user control, they transfer a lot of responsibility to the end user, since they leave her mostly in the dark about what she is being filtered from. They don’t necessarily need to be designed this way. They may be designed in a way that relieves the end user of some of that responsibility.
