Bycroft demonstrates how to respond to #overlyhonestmethods

Mike Bycroft at Double Réfraction has an interesting and valuable post up about the #overlyhonestmethods Twitter trend that I think demonstrates how we should be responding to it and similar phenomena.

These tweets from scientists aren’t really all that surprising to people familiar with STS, and are usually more silly than scandalous. Beckie Port has compiled a list of them on Storify.




Nevertheless, this trend has some speculating about what effect these tweets will have on the public perception of science. For instance:

This consideration leads me to a related question: how will these confessions be received by the public, who, generally speaking, trust that scientific research returns objective results precisely because it adheres to its principles of reliability, replicability and validity? Of course, whilst most non-scientists are unlikely to get many of the in-jokes in these anecdotes, neither are they likely to find funny the fact that scientists are alluding to questionable use of their research funds. Particularly since many of these scientists are likely funded by the public and/or state funds. As one tweet claimed: “Functional magnetic resonance imaging was performed because we had to justify this large grant somehow”. Although one can assume (or hope) the tweeter was joking, in a post-recession climate where resources are strained, many may fail to see the funny side.[1]

Bycroft makes three principal arguments regarding the hubbub over #overlyhonestmethods. First, he argues that scientific articles don’t necessarily reconstruct their procedures for rhetorical effect (to make their claims more credible), but often instead do so for practical reasons, such as keeping journal articles to a reasonable and affordable length. Second, the view of the scientific method these tweets supposedly debunk is one that has not been held by any serious philosopher of science since, perhaps, Francis Bacon. Third, it’s not just scientists who reconstruct in this way; everyone does it, including those in STS.

Bycroft makes some good points here, and I agree with most of them, but I will take issue with the first. The reconstruction of procedures is absolutely a rhetorical device. Yes, Robert Boyle used a different rhetorical strategy, supplying as much detail as possible, but a difference in strategies does not make either less of a strategy. Practical concerns such as reducing the length of articles due to the costs of publication might be a contributing factor, but I cannot believe they are the major driving force behind how scientific articles are written. Rather, there are conventions for writing scientific articles, and deviating from those conventions will cause other scientists to give a publication less credence. The scientists behind these tweets have all been trained to write in a certain style, and they all know that failing to do so will hurt their careers. That is largely the appeal of this phenomenon for them: it’s a chance to let loose on the often comical differences between the way they work and the way they have to pretend to work if they are to be successful.

I do think that Mike is on to something valuable here, though. He is trying to show that, despite what people might initially think about the scientific method upon reading these tweets, it isn’t really that bad. While for those in STS this is all old news, what is new about #overlyhonestmethods is its popularity. The scientific method “debunked” by these tweets might not have been held by any serious scholar of science since Bacon (though I might bring things forward to the logical positivists), but, as the above quote suggests, it might be one held by much of the public. The immediate danger I see is that “#overlyhonestmethods will fuel the anti-science campaigns of creationists and climate-deniers”.[2] The most zealous campaigners from these groups have never hesitated to latch on to any evidence of deviation from the simplistic scientific method to advance their cause.

The correct response isn’t to suppress #overlyhonestmethods or similar accounts of science that undermine public perceptions of the perfect rationality and objectivity of science. But we do need to consider seriously how best to deal with the rhetorical uses to which these accounts will inevitably be put. Although the content of these tweets shouldn’t be surprising to those in STS, perhaps their popularity should be, because in a significant sense it testifies to our failure to get the message out about how science really works. Bycroft’s post is a good first attempt at explaining why the image of science promulgated by these tweets shouldn’t be a cause for concern about the practice of science.

Note: For a good historical overview of science studies perspectives on the discrepancies between science-in-practice and science-in-publication, see Will Thomas’s “Kuhn’s Demon, or: The Iconoclastic Tradition in Science Criticism” at Etherwave Propaganda.


  1. Simon Williams, “#Overly Honest Methods or PhD Madness?” (Jan 11, 2013).
  2. American Science, “Science and its #overlyhonestmethods” (Jan 12, 2013).


  • Michael Bycroft

    Hi Mike! Thanks for this sympathetic reply to my post. I certainly share your conclusion that “the image of science promulgated by these tweets shouldn’t be a cause for concern about the practice of science.”

    1. On the logical positivists, I would distinguish their account of method from Bacon’s, since they did not intend to describe the process of research but only its final products. As I understand them (eg. Reichenbach and Carnap), they were interested in finding out what the ideal justification of a scientific claim would look like. They did not care if the claim or its justification emerged out of “stuff-ups and serendipity”; all that mattered was the final form of the chain of inference from evidence to theory. So I would say that many of the confessions on #overlyhonestmethods are consistent even with the kind of methods advanced by the logical positivists.

    2. I take your point that “a difference of strategies does not make either less of a strategy.”

    3. However I think we may disagree about the amount of rhetoric in scientific articles (either that or I’ve misread what you’ve written).

    If rhetoric means “any device that makes an article more credible,” and if “credible” just means “persuasive or convincing,” then even the most impeccable arguments are rhetorical in this sense.

    I prefer to define “rhetoric” as something more like “that which has a certain psychological appeal but does not stand up to rational scrutiny.” My paradigm case of rhetoric in this sense is Galileo putting his opponent’s views into the mouth of someone called “Simplicio.”

    I would not say that the conventions of scientific articles are obviously rhetorical in this sense. There are good epistemic reasons for, say, separating the “results” section from the “methods” section, and for giving a clear statement of the “aims” of the research. And there is often no good epistemic reason for giving details about an experiment that failed for known reasons, or about the order in which experiments were carried out in real life.

    This is not to deny that scientists might deploy or interpret the conventions in ignorance of the reasons for the conventions. Perhaps some scientists do believe that the only reason to separate the “methods” and “results” sections (say) is to help their careers. Scientists might also abuse the conventions, writing papers with impeccable formats but poor content.

    But the mere facts that the conventions exist, and that there is social pressure on scientists to follow them, do not show that the conventions are rhetorical (in my sense of the latter term).

    Cf. In philosophy there is a convention of including in a paper at least one argument for the main contention of the paper. There is also social pressure on philosophers to follow this convention; to flout it is career suicide. But would you say that “use of arguments” is a “rhetorical” aspect of philosophy?

    • Mike Thicke

      Thanks Mike, your post made me think.

      Logical Positivists: Reichenbach did make a strong distinction between the context of discovery, where these tweets reside, and the context of justification, in which scientific papers are written. For what it’s worth, this is a distinction that many have attempted to erode: the search for a logic of discovery has never really died (see, for example, Harvey Siegel, “Justification, Discovery and the Naturalizing of Epistemology” (1980)). Some Logical Positivists (Carnap?) also attempted to create an inductive logic, which to my mind seems similar to Bacon’s project.

      Rhetoric: I think scientific papers are designed to be maximally persuasive within the bounds of accepted scientific norms. That is, subject to following the rules of proper scientific conduct, I think scientists do everything they can to convince the reader that they are correct. If this doesn’t count as a rhetorical strategy, I’m not sure what does. It might not be mere rhetoric (“sophistry”), but it is rhetoric nonetheless. What scientific papers don’t do is just present the facts in as neutral a way as possible in an attempt to allow the reader to draw their own conclusions.

      • Michael Bycroft

        Logical positivists: yes, many have questioned the j/d distinction. But it matters that many LPs, like Reichenbach, *thought* there was such a distinction. Because they thought so, whether or not they were right, it is “straw-mannish” to burden them with the belief that stuff-ups and serendipity do not occur in scientific research.

        On Carnap I confess that I haven’t read his books on inductive logic. But here’s an extract from an article by the philosopher of science Wesley Salmon:

        “Another fundamental result of Carnap’s formalization [of inductive logic] is the clear distinction between inductive logic proper and the methodology of induction. Previous failure to make this distinction led to all sorts of mischief. There was serious doubt about the very possibility of an exact inductive logic, due in large part to a confusion of inductive logic with methodology…Failure to make this same distinction led to considerable confusion regarding the relations between discovery and justification in induction. Carnap’s formulations made it possible to discuss this distinction [ie. the j/d distinction] in a manner parallel to its very fruitful treatment in deductive logic.”

        “Carnap’s Inductive Logic,” The Journal of Philosophy 64, no. 21 (1967): 725–739, on 727.

        This statement suggests to me that a) Carnap insisted on the j/d distinction, b) he meant his inductive logic to apply only to j, and c) he was aware of the difficulty of giving an “exact” account of d (presumably because of all the stuff-ups and serendipity in d).

        Rhetoric: I agree that “scientists do everything they can to convince the reader that they are correct.” I probably wouldn’t call this “rhetoric,” but that’s just a terminological matter.

    • Mike Thicke

      Another interesting thing about your post that I didn’t talk about is how you’ve inverted the Strong Program’s symmetry principle. Barnes, Bloor, et al. tried to show that the practice of science is much like the practice of everything else. This was taken to denigrate science. You’re saying the practice of science is just like everything else to defend science. That is quite a reversal!

      • Michael Bycroft

        I’m glad you found it interesting.

        “You’re saying the practice of science is just like everything else to *defend* science.” I hadn’t thought of it like that, but yes that’s roughly what I had in mind. That is, I do want to say that the mismatch between rhetoric and reality in science is no greater, and no more problematic, than it is in other domains. Of course this is only a claim about the epistemic standing of scientific practice *relative to* the epistemic pretensions of scientific articles. This claim leaves open the possibility that the epistemic standing of scientific practice is greater–even much greater–than that of non-scientific practice.

        I do think that one of the problems with Barnes, Bloor, et al. is that they gave reflexivity a bad name. It seems to me that many debates about the nature of science would be easily solved, or at least more sensibly discussed, if we asked the reflexive question more often.

        Eg. Any historian of science who thinks that evidence has no causal role in scientists’ beliefs should ask themselves why they do archival research. And any historian who thinks that credibility has nothing to do with personal identity should ask themselves why they favour double-blind review of papers by history of science journals.

        I hope to write a post on this issue at some point…

  • Eleanor Louson

    Given the uproar over the purported misuse of scientific data revealed in “Climategate,” I think science benefits from being more transparent in its actual methods and from a public more aware of actual scientific practices. If that transparency vaults to hilarious popularity, all the better.

    #overlyhonestmethods is so funny precisely because it subverts what outsiders expect from scientific methodology, but it really comes as no surprise to scientists or those in STS, as you rightly point out. I think any negative fallout from these tweets is tantamount to growing pains in the name of transparency.

    I also found that some of Bycroft’s examples of “surprisingly inept method” or those that “look like cases of genuinely bad scientific practice” are slightly naive. For example, claiming that an apparatus held together with bluetack is inferior to one “of bewildering complexity made up of a zoo of exotic materials” neglects much work on tacit knowledge in scientific practice. Sometimes the bluetack is the only thing keeping the complex machinery running smoothly, and the sanitized methodologies in published papers aren’t enough to allow anyone to reproduce those results (as Harry Collins describes in his 1974 TEA set paper).

    • Michael Bycroft

      Thanks for your comment.

      For the sake of argument, I’m happy to concede that blue-tack need not be inferior to a more orthodox component that does the same thing as the blue-tack.

      But my point in the post was an ad hominem one, against those who take it for granted that blue-tack is a surprisingly inept component for a cutting-edge researcher to use. If you think that blue-tack can be just as good as anything else, then you are already on my side!

      When talking about replication in my post, I did not mean to deny that replication usually requires more than just scientist X reading a scientific paper written by some other scientist Y. I meant that, however they do it, scientists do sometimes manage to replicate their results–and sometimes they do so in a surprising number of different circumstances, with a surprising number of different techniques or assumptions. Eg. it’s not just a philosopher’s fairy-tale that the physicist Jean Perrin used some 13 different techniques to derive the value of Avogadro’s number.

      (There is an ongoing debate among philosophers about what can be inferred, from Perrin’s multiple determinations, about the reality of molecules. But all disputants agree, I think, that he did identify these different determinations and that he did not lie about the results they gave.)

      • Eleanor Louson

        And thanks for the response. Ruminating over your post this week, I got the feeling that our positions must be closer than I initially expected.

        Surprising circumstances and techniques sounds absolutely right to me.
