Yesterday, Ellie’s Weekly Roundup referenced an editorial in the Washington Post arguing that “‘Scientific’ racism is creeping back into our thinking.” The authors see this trend in the media, in science, and in policy. Some examples:
Consider a recent paper that argues that ethnic conflict throughout history is a result of genetic diversity among communities. The authors argue that genetic diversity is the dominant force behind conflict among groups. It pushes religious communities into battle, causes distrust among neighbors and dictates support for problematic social policies. Such an argument places the history and future of human conflict in genes, as if human interaction and environmental influences cannot match their power.
For instance, in the wake of the 2012 Olympics, nearly one-third of the news articles that evoked race, genetics and athletics posited that African American and West Indian sprinters are fastest because they descend from testosterone-heavy ancestors who survived the brutal conditions of the transatlantic slave trade—a belief that found resonance and widespread acceptance in a BBC-produced documentary entitled “Survival of the Fastest.” But there is no gene or allele for “speed,” and no direct link between testosterone and speed (while sprinters may have high testosterone, not all high-testosterone people can sprint).
A highly praised volume by journalist Nicholas Wade, “A Troublesome Inheritance,” posited that recent genetic and genomic research suggests that Africa’s underdevelopment was a result of the genetic inferiority of the communities on the continent, ignoring the devastating effects of colonialism.
I’m not convinced that there is anything new in any of these examples, or that we are in the midst of a “resurgence” of scientific racism, though if there is such a trend, I’d look to the rise of evolutionary psychology as an obvious source. The potential social damage of such studies, however, seems obvious.
Cases like these, where scientific claims can have detrimental social consequences, are a major interest of social epistemologists. Heather Douglas, for instance, argues that in such cases the evidential threshold for making a scientific claim ought to be higher. With so much at stake, she contends, scientists really need to be sure that they’re right before making claims like this public. Given that the reliability of many psychological studies has recently been questioned, this seems like even more reasonable advice. In “Science, Truth, and Democracy,” Philip Kitcher goes even further, arguing that certain lines of research ought to be (temporarily) banned because their potential consequences are too great for our (current) society to take the risk of pursuing them. This idea is anathema to scientists, but perhaps Kitcher’s extreme solution deserves consideration if scientists aren’t able to police themselves?