As if we needed more reasons to worry about Google taking over the world, a new study suggests that it could have an enormous impact on elections merely by manipulating search results. Researchers Robert Epstein and Ronald Robertson of the American Institute for Behavioral Research and Technology found that they could “sway the voting preferences of undecided voters by 15% or more” simply by biasing the search results presented to research subjects. This is interesting—and scary—in its own right, but it also connects to some of my earlier posts about Internet filter bubbles and what I’ve called the “Internet Observer Effect”.
In an earlier post, I expressed skepticism that “filter bubbles”, as described by Eli Pariser and philosophers Boaz Miller and Isaac Record, are a significant worry. Pariser’s thesis is that personalized search results, such as those from Google, and self-selecting streams of news from sites such as Facebook and Twitter, create a biased view of the world. On this account, our ideologies are powerful drivers of the information we are exposed to. The worry is that this effect creates self-reinforcing worldviews that become increasingly detached from reality, and from each other, as ever more personalized information streams filter out any potentially discordant evidence. Miller and Record take this account to imply that, in order to form justified beliefs, we need to step outside our filter bubbles, say by checking news sources we know to conflict with our ideological positions or by searching Google in “private” mode.
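The self-reinforcing loop Pariser describes can be sketched in a few lines. To be clear, this is my own toy illustration, not Google’s actual algorithm; the update rule and the 0.05 step size are arbitrary assumptions made only to show the feedback dynamic:

```python
import random

def simulate_bubble(rounds: int = 50, seed: int = 0) -> float:
    """Toy model of a personalizing ranker: it starts unbiased, shows the
    user content from side 'A' with probability p_show_a, and nudges that
    probability toward whatever the user just clicked. Returns the final
    probability of being shown 'A'-leaning content."""
    rng = random.Random(seed)
    p_show_a = 0.5  # start with no personalization
    for _ in range(rounds):
        shown_a = rng.random() < p_show_a
        # The user clicks what they are shown, and the ranker reinforces
        # the clicked side in future results (step size is an assumption).
        p_show_a += 0.05 if shown_a else -0.05
        p_show_a = min(max(p_show_a, 0.0), 1.0)
    return p_show_a
```

Run across many seeds, the toy ranker tends to drift away from an even mix toward showing one side predominantly—a self-reinforcing worldview in miniature, produced by nothing more than an “uncaring” feedback rule.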
My knee-jerk reaction to such claims is to dismiss them as a kind of technophobia or scaremongering. The idea that the Internet would make us *less* knowledgeable and *less* exposed to alternative points of view seems laughable. You want to talk filter bubbles? How about having all your news delivered by William Randolph Hearst? Or Walter Cronkite? If Chomsky and Herman are correct, the entire corporate media system is devoted to propagandizing us. Surely the internet, with all of its possibilities for independent media and citizen journalism, is a major step forward in eliminating filter bubbles?
Miller and Record’s argument is that knowledge claims rest on fulfilling our epistemic responsibilities. In their account, “subjects may vary in the standards they need to meet to have a belief justified on a given matter because one subject has access to a particular technology that enables her to conduct certain inquiry, which then may be required of her to reach justified belief, while the other subject has no access to such technology” (16). Filter bubbles may not be new, but our ability to step outside of them is. North Koreans have an easier time justifying their beliefs than we do because they have access to far fewer sources of information. Because we can check alternative news sources, we must check them if we want to form justified beliefs. This argument makes sense, but it’s not an argument that should make us scared about the future of our society. The internet is a new technology that increases our epistemic responsibilities, but this is a virtue, not a danger.
However, the more concerning aspect of internet filter bubbles is their invisibility (Miller and Record describe Google’s search as a “secret technology”). What makes filter bubbles worrying is that we do not know that we are within them or how they operate. This invisibility may also exist within the corporate media environment described in *Manufacturing Consent*, but it is nonetheless worrying, especially if we thought the internet would free us from the world described by Chomsky and Herman. Even if we know that Google “personalizes” our search results, we do not know how or how much. The “Internet Observer Effect” is worrying for similar reasons: we do not know how much our search activities and web browsing will affect the epistemic environments of others, and so every web page we link to or even visit carries some risk of causing an unknown person to form a biased belief. Epstein and Robertson’s research takes this worry to an entirely new level.
In Pariser’s discussion, in Miller and Record’s, and in my own, we all took Google and its ilk to be neutral algorithms. Filter bubbles are the result of uncaring algorithms responding to our behaviors. In effect, they give me what I want: if I am a conservative, I will gradually be exposed to more conservative-friendly information; if I am a Marxist, to more Marxist-friendly information. If that is worrying, consider what happens if Google is not neutral but instead actively seeks to manipulate your beliefs for its own purposes. Now the internet becomes the ultimate Chomskyan dystopia, in which Google can effectively decide the results of elections or any other democratic process. Most importantly, it could do so without us ever knowing. It could conceivably be happening right now. Epstein and Robertson estimated that their methodology could sway elections with win margins of up to 2.9% of the popular vote. That exceeds the popular-vote margin in both of George W. Bush’s victories, and is close to Obama’s margin in his 2012 re-election.
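To see how a modest sway among undecided voters could add up to margins of that size, here is a back-of-envelope calculation. The undecided share and the assumption that swayed voters would otherwise have split evenly are illustrative choices of mine, not figures from the study:

```python
def margin_shift(undecided_share: float, sway: float) -> float:
    """Change in the popular-vote margin when a fraction `sway` of
    undecided voters is moved to one candidate. Assumes (illustratively)
    that those voters would otherwise have split evenly between the two
    candidates, so the margin grows by exactly the fraction moved."""
    return undecided_share * sway

# Illustrative numbers: 20% of the electorate undecided, 15% of them
# swayed by biased results -> a 3-point margin shift, more than enough
# to flip a race decided by 2.9% of the popular vote.
print(f"{margin_shift(0.20, 0.15):.1%}")
```

Under less charitable assumptions—say, if the swayed voters would otherwise have voted for the opponent—the same inputs move the margin by twice as much, which is the sense in which a seemingly small effect on undecided voters becomes election-deciding.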
This power is not necessarily limited to elections. On many policy issues, politicians and other public bodies such as the courts take their cues from public opinion. Google could be manipulating public views about global warming, internet legislation, corporate governance, or gay marriage. Google could be manipulating search results about Google! This manipulation could in turn affect the political, legal, and social environment in which Google itself operates.
In retrospect, Epstein and Robertson’s results seem obvious. Of course Google could manipulate search results if it wants to, and of course this manipulation can have significant effects on our beliefs. Even more worrying, there is no clear solution. Nationalizing Google seems extreme, and might just give the government an unprecedented technology for manipulating public opinion. Should all search engines need to be open source? Or subject to some kind of inspection? Those might be wise measures, but it would still be trivial for a powerful company such as Google to manipulate results on a short-term basis without being discovered. Constant monitoring of their algorithms seems infeasible. Perhaps this means Google is a dangerous monopoly that needs to be broken up?
Previously I viewed the filter bubble argument with professional interest and a healthy dose of skepticism. Now I’m really worried; the potential for search engines to subvert democracy is genuinely scary.