In Heather Douglas’s Science, Policy, and the Value-Free Ideal (you can find a video of Douglas speaking about her book here), she claims that there is no practical way to draw a distinction between scientists-as-scientists and scientists-as-advisors. That is, you cannot cleanly separate the descriptive, empirical claims of scientists from their prescriptive advice. Mainstream philosophy of science, she claims, has gone astray since the 1940s in supporting a view of science as value-free, and of scientists as detached and objective. Douglas argues not only that we need to acknowledge the unavoidable value-ladenness of science, but also that values are not necessarily a negative influence on science. Rather, scientists have an ethical obligation to make value judgments in their work.
Here is one of her examples:
Suppose a scientist is examining epidemiological records in conjunction with air quality standards and the scientist notices that a particular pollutant is always conjoined with a spike in respiratory deaths. Suppose that this pollutant is cheap to control or eliminate (a new and simple technology has just been developed). Should the scientist make the empirical claim (or, if on a science advisory panel reviewing this evidence, support the claim) that this pollutant is a public health threat? Certainly, there is uncertainty in the empirical evidence here. Epidemiological records are always fraught with problems of reliability, and indeed, we have only a correlation between the pollutant and the health effect. The scientist, in being honest, should undoubtedly acknowledge these uncertainties. To pretend certainty on such evidence would be dishonest and deceptive. But the scientist can also choose whether or not to emphasize the importance of the uncertainties (81).
Douglas presents this as a slam-dunk case, and she has constructed the situation, by assuming a cheap and easy fix, to be unproblematic. However, I find the implications of this argument deeply troubling.
At least since Kuhn, philosophers of science have acknowledged that all of us, including scientists, are, to some degree, trapped by our prior values and beliefs. Our observation of the world is “theory laden”: what we see is influenced by what we believe. Two observers with different beliefs may observe the same scene but come to different conclusions about what has been observed. However, Douglas takes this commonly accepted view one step further: she advocates that scientists make choices about how to report their results based on their values. The scientist in this situation is not forced by psychology into forming a belief about the threat of this pollutant based on her values, but is making a conscious, unforced decision about how to report that belief to the public and policymakers. There is even a potential double-whammy here, as the scientist’s values first unconsciously affect her interpretation of the data, and then consciously affect how she reports that data.
It seems worth asking what is at stake in this example. What should the scientist do if she had no idea how difficult or expensive the problem would be to solve? Would the scientist be unable to make an empirical claim in that case? It seems unlikely. If the scientist would be able to function in this alternative situation, then why can’t, or why shouldn’t, the scientist make a report that ignores such information and allows the public and policymakers to decide how to act? What authorizes the scientist to make these decisions on the public’s behalf?