Science and the Media: Upside-Down Pyramid Thinking

This is the second post to appear in our new section called “quick thoughts.” The aim of this section is to raise an issue for comment in more detail than the weekly roundup does, but in a more succinct format than our longer 1,000-word posts. We hope that this section will turn the spotlight onto those who choose to comment, rather than the author of the post.

I’ve been reading Naomi Oreskes’ book Merchants of Doubt, which I will review for Spontaneous Generations and post here on the Bubble Chamber as well. I will save my comments for that review, but the book, along with a recent lunch conversation with philosophers and HPSers, has me thinking a lot about how the media reports on events within the scientific community.

While I was a master’s student, I was the course instructor for “Phil120 – Introduction to Logic,” which, interestingly enough, was a required course for the school of journalism (I have a hot chili on, in case you were wondering). The second- and third-year journalism students, who constituted a majority of my class, did not understand why they needed to take the course, and they were vocal about it. In response to this, and to low marks across the board, I gave an extra-credit assignment: use your journalism skills and interview a professor or administrator responsible for the inclusion of this class in your course requirements. Respond to this interview with your own arguments, either for or against the position presented.

I got many papers, some of high quality, some not so high. One sticks out in my memory. The professor interviewed had said that students were required to take logic because it promoted linear thinking and critical evaluation of the facts. The student’s response was that this was not consistent with what they had learned in their journalism classes: they were taught “upside-down pyramid thinking” and that “facts were facts.” This was an astute observation. After asking my class what upside-down pyramid thinking was, and after joking that “an upside-down pyramid cannot stand” (one student laughed), I realized that the student’s response was dead on: my students were unable to differentiate between how to reason and how to present information in a report. Facts were primitives.

The Inverted Pyramid
A graphical representation of the "upside-down pyramid."

For those of you who don’t know, the inverted pyramid is a method of organizing information taught in journalism schools. There are many variations, but the general idea is to organize the information in the report in order of decreasing importance. The problem is that this model gives you no method to decide what information is important and what is not. As I talked to my students I realized that the facts-as-primitives view put all the information on a par. For them, the “most important” facts became the biggest ones. Sometimes this information was just what happened – the who, what, where, why – but other times it was just the largest statistic, or the contrary viewpoint. Buried down at the end of the story (the thinnest part of the pyramid) was the history of the debate, the “minor” facts, etc. This part of the pyramid is the most likely to get cut, according to my students, when an editor puts everything together.

In Oreskes’ book, she attributes much of the dissemination of misinformation to parties that quoted secondary sources instead of reading the original. Had the original source been read (and its structure and content reviewed seriously), the information could never have been used for the purpose it was. This is the critical evaluation of the facts, the evaluation students resisted learning in my class. In Oreskes’ book, the lack of factual analysis seems to lead to media manipulation – with industries like big tobacco creating “another side of the story” to be reported on. This gave scientifically unfounded views traction in the press.

Other stories have caught my attention, with the NASA arsenic microbe story immediately coming to mind. A Guardian piece posted on our weekend roundup talks about the science and media relations. Another was this piece which made its way around the blogosphere on precognition, where the “dissenting” view got a shout-out in the closing sentences, only after we hear about the prominence of the report’s author. Lastly, I was disheartened to hear that my Junior Senator from Massachusetts used faulty statistics from a special interest group while speaking about a critical banking bill (yes, he presented faulty “recession job loss” figures as if they were accurate “future job loss” figures). It took a while for this mistake (and the origin of the statistics) to come to light.

I meant to do no more here than offer my story as a personal reflection. I wrote this piece in the hope that our readers could help me understand the way journalism and science interact. Does the inverted pyramid style of organization make sense in a world where news is on the Internet, and there is essentially unlimited space for it? How does one decide what pieces of information a reader must have? Should journalists play a more active role in analyzing the content of the news? I would really like to hear from communication specialists and journalists about the current state of science journalism.


  • W. Dean

    Last week you lamented the lack of science training you’re able to get in your HPS program; this week you criticize journalists for not having enough. While I think the second complaint is probably a fair criticism of the average science journalist, the first complaint should make you realize you’ve set the bar fairly high in expecting journalists to play second-guessers to scientists. Journalists are hardly in a position to engage in a “critical evaluation of the facts” of the latest scientific research.

    I don’t expect them to do this—nor, I suspect, do many others—because they’re in no position to. Suppose, for example, a journalist had raised doubts about the (now fraudulent) vaccine-autism link when it was first published in the British Journal of Medicine. No doubt we’d praise him for his skepticism now. But would anyone have taken his views of it seriously over the editors and the reviewers who allowed the phony study to be published in the first place? More importantly, should anyone really accord equal weight to experts and to the opinions of science journalists? I think not, so I fail to see why we should expect them to adopt a role we won’t take them seriously for doing in the first place.

    Indeed, I prefer it when journalists keep their armchair expertise regarding the merits of the latest research (and most other matters) to themselves. Even if they often do a poor job of reporting, at least it’s a job they’re in a position to do well.

    • Boaz Miller

      The post is not about journalists’ not having enough scientific training, but about their standards of ranking facts according to how good a story they make, as opposed to how accurate a story they make. This is a problem in science reporting irrespective of how much scientific training they have.
