Weekly Roundup

Image via the galileowaswrong blog.

The release of the trailer for The Principle, a geocentric film casting doubt on Copernicanism, prompted several people featured in it to distance themselves from the project. Narrator Kate Mulgrew explained on Facebook that she was misled about the nature of the project and that “I am not a geocentrist, nor am I in any way a proponent of geocentrism.” Physicist Lawrence Krauss reasoned that the producers either purchased footage of him from another production company, interviewed him under false pretences, or used public domain footage. In response, producer Rick Delano said in a statement: “I can tell him how he ended up in our film. He signed a release form, and cashed a check.” Robert Sungenis, the film’s executive producer, is a geocentrist who runs the galileowaswrong.com blog.

Old men become grumpy around age 70, but they live longer in nursing homes.

Entrain, a new app, calculates how best to fight jet lag. The app’s methodology is supported by a recent paper in PLOS Computational Biology.

Just in time for Homeopathy Awareness Week, a new draft report by Australia’s National Health and Medical Research Council debunked homeopathy’s effectiveness. Homeopathy proponents were permitted to submit material for the report, but it didn’t meet the council’s scientific standards.

“Language diversity” correlates both with mountainous terrain that isolates human groups and with rivers that bring those groups together.

Jenny McCarthy argues in an op-ed piece for the Chicago Sun-Times that she is not and has never been against vaccines. Phil Plait provides an excellent argument to the contrary, but I’ll add that her backtracking might have been motivated by the resurgence of preventable disease outbreaks, grimly documented in the Jenny McCarthy Body Count website.

Kansas is not planning to black out the science program Cosmos, despite the viral popularity of the satirical news story.

Posted in Weekly Roundup | Leave a comment

Com’on…it wasn’t that bad: Winter 2014

How cold, nasty, and intolerable was the winter of 2014?

I woke up this morning to a dusting of whiteness out my window: there was snow in Toronto. I could hear the moans of Torontonians waking up and looking out their windows only to realize it was again cold, and again, snow would ruin their morning TTC ride. This morning reminded me of April 1, 1997. As a kid in Boston, I woke up to almost 30 inches of snow on the ground – more in that one night than in the rest of that winter. I didn’t have a morning commute. Schools were closed. I liked the snow then. This year, though, no one is happy to see the snow again. For many North Americans, this winter has felt cold, long, and intolerable.

April Fool’s Day storm, 1997, over the northeast USA.

These feelings about the weather matter. Research shows that the way we perceive weather affects the way we respond to problems like climate change. Simply put, the perception that local weather is at odds with claims about the climate (the weather is cold, but the climate is warming) affects the strength of belief in, and the likelihood of acting on, climate issues.

The purpose of this post is twofold: 1) to convince you that, from a certain perspective, this winter wasn’t the long, cold, and intolerable one you might have experienced (OK, maybe it was if you live in Wisconsin), and 2) to buy myself time to put together a proper post on the pop explanation for this winter, the polar vortex.

Where was it bad? Middle-to-Eastern US and Canada

If you lived in the middle of the US or Canada, you felt cold this winter.

The blue regions are colder than average (compared to the 1981–2010 average), and the red regions warmer than average. You can still see lots of red. In fact, global land and ocean records reveal an above-average winter.

For example, Madison, Wisconsin (article here) had its 11th coldest winter on record, with an average temperature of 13 degrees F and (at least) 81 consecutive days with at least 1 inch of snow on the ground (the 4th longest stretch in recorded history). The US as a whole had its 34th coldest winter (of 119 recorded winters). Toronto had a record 101 consecutive days with at least 1 cm of snow on the ground, its average temperature was the coldest in 20 years and the 3rd coldest in 50 years, and 35 extreme temperature warnings were issued. Great Lakes ice coverage was at a near-record high.

But don’t assume that because you were cold, it was a cold winter everywhere.

This winter, from a global perspective, was warm (according to NOAA global analysis). Europe was warm. Denmark reported its fifth warmest winter since records began in 1874, Germany its fourth warmest, and Austria its second.

This image breaks up the anomaly in terms of percentage departure from average. You can think of it as the same plot as above, but scaled to each region’s natural variability. Notice that the regions of red significantly outnumber the blue, as do the regions of dark red. Thanks, Melanie, for pointing these images out to me!

Globally, this winter’s (Dec-Feb) land records indicated it was the 10th warmest (2007 was the warmest) and the 126th coolest (1893 was the coldest). In the northern hemisphere, this winter was the 11th warmest and 125th coolest.

The combined land and ocean surface temperature for this winter was the eighth highest on record, 0.57 degrees C above the 20th-century average. What about sea ice? Arctic sea ice extent – the loss of which is thought to affect climate – was at its fifth lowest on record.

It is easy to forget that everywhere is not like where we are. Please keep in mind that the weather where you live is not an indicator of the global state of the atmosphere. 


Posted in In the Spotlight, Quick Thoughts | 1 Comment

Twice Is Nice! Double counting evidence in climate model confirmation

Greg Lusk

Charlotte Werndl (LSE) is speaking at Western University on Monday (the talk will be live-streamed) on evidence and climate change modeling. Having recently read her paper (with co-author Katie Steele) entitled “Climate Models, Calibration, and Confirmation” (CMCC) I thought I would post about it. The paper focuses on the use of evidence in confirming climate models with particular attention paid to double counting, which in this context means using the same evidence for two purposes (more on this use of the term later). I believe the paper is an important one, as it nicely separates concerns about double counting from other, related, confirmatory issues, and I think successfully shows where a form of double counting is legitimate. Still, despite being a casual fan of Bayesianism, I wonder if it gives us what we want in this particular circumstance. I can’t cover all the threads of argument made in the paper, so here I’ll simply discuss what double counting is, why we should worry about it, and how Steele and Werndl (S+W) argue that it could be legitimate in some circumstances.

What’s the worry about double counting? Climate change models typically go through a process called tuning (also sometimes called calibration). Tuning sets the values of parameters in the model that represent highly uncertain processes for which there are few empirical observations. The parameters are treated as “free parameters” that can take on a wide range of values. The values that result in the best fit with observations during tuning are the values chosen for the model. For example, if scientists are interested in global mean surface temperature (GMST), they would vary the parameter values of some uncertain processes until the model’s output of GMST closely matched GMST observations. The model, with these parameter values, would then be used to make climate projections.

The worry is that one way climate models are evaluated is by comparing their results to observations of some historical period; if scientists want to know if a model is adequate for the purpose of predicting GMST, they compare the model output to historical GMST observations. This agreement is supposed to build confidence in (confirm) the model’s ability to simulate the desired quantity. It is typically believed that to gain any confidence in the model at all, the simulation output must be compared to a different set of observations than the one that was used for tuning. After all, the observations used for tuning wouldn’t provide any confidence, because the model was designed to agree with them!

To deal with double counting, CMCC adopts an explicitly Bayesian view of confirmation. The Bayesian view adopted is necessarily contrastive and incremental: a model is confirmed only relative to other models, and the result of confirmation is greater confidence in the model for some particular purpose (not a claim that the model is a correct representation or the truth). Confirmation of one model relative to another can be tracked with the likelihood ratio: the probability of the evidence conditional on the first model divided by the probability of the evidence conditional on the second model. If the ratio is >1, the first model is confirmed relative to the second.

So here is a simple way in which double counting is legitimate on the Bayesian view presented in CMCC. Imagine tuning some model M whose parameters have not yet been set (S+W call this a base model). In order to tune it, we create several different instances of the base-model, all with different parameter values: M1, M2, and so on. We compare the results of each model instance to observations and select the best fitting instance. This is an example of double counting in the following sense: the same data is used to both confirm and tune the model. This is tuning, because we have selected parameter values by comparing outputs to observations, and it is confirmation, because we have gained greater confidence in one instance over all the other instances in light of the data. S+W call this double-counting 1 and it is fairly uncontroversial.
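Double-counting 1 can be sketched numerically. The following is a toy illustration, not anything from the paper: the observations, candidate parameter values, and Gaussian error model are all invented for the example. The same data picks the best-fitting instance (tuning) and yields likelihood ratios >1 for that instance against the rest (confirmation).

```python
import math

# Hypothetical data and noise level, for illustration only.
observations = [14.1, 13.8, 14.4, 14.0, 14.2]
SIGMA = 0.5  # assumed observational noise (standard deviation)

def likelihood(param, data):
    """P(data | model instance), assuming the instance predicts the
    constant value `param` with independent Gaussian errors."""
    log_like = sum(
        -0.5 * ((y - param) / SIGMA) ** 2
        - math.log(SIGMA * math.sqrt(2 * math.pi))
        for y in data
    )
    return math.exp(log_like)

# Instances M1..M4 of the base model: candidate parameter values.
instances = {f"M{i}": p for i, p in enumerate([13.0, 13.5, 14.0, 14.5], 1)}
likes = {name: likelihood(p, observations) for name, p in instances.items()}

# Tuning: select the best-fitting instance.
best = max(likes, key=likes.get)

# Confirmation: likelihood ratios of the tuned instance against the
# others, computed from the very same data, all come out > 1.
ratios = {name: likes[best] / like
          for name, like in likes.items() if name != best}
print(best, ratios)
```

Here the instance whose parameter sits closest to the data wins the tuning step, and that very comparison is what confirms it relative to its siblings – the two purposes are served by one pass over the evidence.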

Double-counting 2 seeks to confirm two different base-models (M and L, let’s say), but the situation is much the same. The Bayesian apparatus is more complex, and I’ll leave it to my readers to seek out the details in the paper itself. However, the evaluation still deals with likelihood ratios; it is just that the likelihood ratio needs to take into account all the instances of base-models M and L, as well as our prior probabilities regarding them. The likelihood ratio becomes a weighted sum of the probability of the evidence given each model instance for one base-model over the other. Double-counting 2 is legitimate in two situations: 1) the average fit with the data for one base-model’s instances is higher than the other model’s (assuming the priors for each model were equal), and/or 2) the base-models have equivalent fit with the observations, but one model had a higher prior probability (was more plausible). An example of (1) would be that base-model M is tuned to the observations and, on average, its instances are closer to the observations than model L’s. This would result in a greater likelihood for M compared to L, and thus confirm M relative to L. Again, even in this situation tuning “can be regarded as the same process as confirmation in the sense that the evidence is used to do both calibration and confirmation simultaneously” (p. 618).
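The weighted-sum structure of double-counting 2 is easy to see in miniature. All the numbers below are hypothetical: invented per-instance likelihoods for base-models M and L, with equal within-model priors assumed for simplicity.

```python
# Invented likelihoods of the evidence E under each instance.
likes_M = [0.30, 0.20, 0.05]   # P(E | M_i) for instances of base-model M
likes_L = [0.10, 0.10, 0.10]   # P(E | L_j) for instances of base-model L

# Assumed equal priors over each model's instances.
w_M = [1 / 3] * 3
w_L = [1 / 3] * 3

# P(E | base-model) is a prior-weighted sum over its instances.
p_E_given_M = sum(w * like for w, like in zip(w_M, likes_M))
p_E_given_L = sum(w * like for w, like in zip(w_L, likes_L))

# ratio > 1: the evidence confirms base-model M relative to L, even
# though that same evidence was used to tune (select among) M's instances.
ratio = p_E_given_M / p_E_given_L
print(ratio)
```

This corresponds to situation (1): M’s instances fit the data better on average, so the weighted likelihood ratio favors M.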

Quick Comments

S+W do a great job distinguishing the two kinds of double counting and separating them from other concerns about model tuning and climate projections (this work is done in the second half of the paper, not discussed here). They seem right, given the view of confirmation they hold, that confirmation and tuning can be done with the same evidence. After all, double counting in S+W’s sense is a sophisticated way of saying that the model that fits the data best is more likely.

A few issues worth thinking about:

1) Double counting here is a bit of a misnomer. As S+W make clear, it is not that the data is used twice, it is that it is used for two different purposes.
2) Confirmation for S+W, between two different models, is always confirmation of the model family (all the instances of the base model). It is not clear to me that this is always the desired object of confirmation. Sometimes we might want to confirm one instance of a base model against another instance of a base model (the two best-performing instances, let’s say), and as specified in the paper, confirmation via double counting isn’t set up for that.
3) Climate scientists seem to think of confirmation in absolute terms. In S+W’s scheme, this would be confirming a base-model M relative to its complement (all other models that are not M). This is considered on p628. Double counting doesn’t help us here – in order to confirm in this way, we need to know the prior probabilities for all the non-M models. Since we think there are lots of models that we haven’t considered, and we don’t know how many, this is difficult if not impossible to quantify. Double counting, though legitimate in this area, isn’t a remedy for our ills.
4) How applicable is the Bayesian framework in these instances? I haven’t read all his work, but Joel Katzav argues that it isn’t reliable when it comes to this kind of modeling. One reason (at least I gather this is the argument) is that the conditional probabilities we assign in the Bayesian scheme are conditional on the truth of the model, but we know that the models are not true (because they contain gross idealizations/simplifications/parameterizations). Thus we can’t/shouldn’t make those assignments. Perhaps the comparative nature of S+W’s confirmation can sidestep this? If any readers have insight on how this might work, or on whether it is really a problem, please post in the comments.

Posted in What We're Reading | Leave a comment

Weekly Roundup


Here’s a roundup of the best April Fool’s Day hoaxes from around the web, and another one focused on the science/library community. But NPR’s prank is the clear winner.

“You don’t think of the Bible necessarily as a scientifically accurate source of information, so I guess we were quite surprised when we discovered it would work. We’re not proving that it’s true, but the concept would definitely work”: Physics students at the University of Leicester have determined that Noah’s ark would indeed be buoyant.

Don’t tell Mr. Toad: A new study suggests that children retain less information about animals from anthropomorphized accounts. But kids learn more when science is packaged in a music video.

We don’t have stasis fields yet, but in a new clinical trial, gunshot or stabbing victims will be placed in suspended animation (induced hypothermia) while doctors repair damaged organs. [via Marginal Revolution]

Eliminating invasive species is more difficult than we realize, as is even labelling them “native” or “alien.”

Posted in Uncategorized, Weekly Roundup | 1 Comment

Weekly Roundup

Which diet is best? According to new research, none of them.

Cosmos continues to attract controversy as Creationists demand equal time for their theories on the program.

Cancer care in hospitals should not include unproven treatments like reflexology and reiki, argues Brian Palmer at Slate. And nearly half of Americans believe at least one medical conspiracy theory, according to a BMJ survey.

Users of the new Spreadsheets app have gamified their sexual encounters. Here’s a map showing the average duration of intercourse in each American state. And here’s a series of (PG-rated) sketches of animal mating rituals, if they were performed by humans.

As a nice change from contemporary parenting debates, here’s a look into the way parents dealt with teenagers during the Middle Ages.

A paper on climate change deniers’ belief in conspiracy theories has been pulled from Frontiers in Psychology due to the “legal context” created by allegations of defamation.

A postdoc was sabotaged by one of her peers, reports Science (paywall-protected), and claims in a lawsuit that she received inadequate response from the school and her supervisor.

A buzzword-induced fetish for innovation is not the same as a robust technology policy, argues Evgeny Morozov at the New Republic.

Wikipedia founder Jimmy Wales responds to a petition criticizing the representation of holistic medicine: “What we won’t do is pretend that the work of lunatic charlatans is the equivalent of ‘true scientific discourse’. It isn’t.”

Posted in Weekly Roundup | Leave a comment

Could Prediction Markets Help to Find MH370?

Mike Thicke

Like everyone else, I have become obsessed with the disappearance of Malaysia Airlines flight MH370. When I read that the flight lost contact on March 8, I assumed that it would be found crashed into the ocean in a matter of days, if not hours. Nearly two weeks later, people are starting to wonder whether it will ever be found.

There is no shortage of theories about what happened to the flight. Pilot suicide seems to be the most likely answer, but there is scant evidence of motive. Terrorist hijacking is an obvious possibility, perhaps by the Taliban or Uighurs seeking to strike back at China, but no group has claimed responsibility. Piracy is a possibility; the list price of a Boeing 777 is over $200 million. Pilot Chris Goodfellow claims that an electrical fire is the most likely cause, and many find his explanation plausible, but it has difficulty explaining the several course and altitude changes made by the flight. Similarly, Australian pilot Desmond Ross argues that the flight could have depressurized, explaining why the plane first rapidly descended, and that errors induced by the depressurization could explain the plane’s other maneuvers. Any, or none, of these could be true.

As theories have proliferated and the official search area has widened to a significant portion of the Earth’s surface, I have started to wonder whether prediction markets might help to locate the missing flight. Prediction markets are similar to stock markets, but the traded contracts are predictions rather than shares of a corporation. Contracts in prediction markets have a payoff (say $1) if the associated prediction is correct, and no payoff if it is incorrect. Such markets have proven remarkably powerful in predicting the outcomes of certain types of events, such as political elections.
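The contract mechanics described above can be shown with a toy calculation. All the prices and probabilities here are hypothetical; the point is simply that a contract paying $1 if correct is worth, to a risk-neutral trader, exactly the probability they assign to the prediction – so trading pushes the market price toward the crowd's aggregate probability estimate.

```python
PAYOFF = 1.00  # a contract pays $1 if its prediction comes true, $0 otherwise

def expected_value(prob_true):
    """Expected payoff of one contract, given a believed probability."""
    return prob_true * PAYOFF

# Hypothetical example: the market prices a contract at $0.35 (implying
# the crowd thinks the event is 35% likely), but you believe it is 50%.
market_price = 0.35
my_belief = 0.50

# A positive edge means buying is attractive; such trades move the
# price up toward the trader's estimate.
edge = expected_value(my_belief) - market_price
print(edge)
```

Aggregated across many traders with different information, this price-adjustment process is the mechanism behind the predictive power of such markets.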

Thinking about MH370, I was reminded of this passage from James Surowiecki’s The Wisdom of Crowds:

In May 1968, the U.S. submarine Scorpion disappeared on its way back to Newport News after a tour of duty in the North Atlantic. Although the navy knew the sub’s last reported location, it had no idea what happened to the Scorpion, and only the vaguest sense of how far it might have traveled after it had last made radio contact. As a result, the area where the navy began searching for the Scorpion was a circle twenty miles wide and many thousands of feet deep. You could not imagine a more hopeless task(xx).

Continue reading

Posted in In the Spotlight | 4 Comments

Weekly Roundup

Conflicting with previous results, a new study supports the 5-second food rule.

The end may be nigh, according to a NASA-funded study of past complex civilizations employing both natural and social scientists. The paper, which will eventually be published in Ecological Economics, points to resource overexploitation and economic inequalities as the harbingers of doom, but offers a silver lining: structural changes or policy initiatives could stave off societal collapse.

Should parents ban handheld device use in children? Maybe not.

Rebekah Higgitt examines the many reactions from the history of science community, mostly negative, to the portrayal of Giordano Bruno in the rebooted TV science program Cosmos: A Spacetime Odyssey.

Migraines and hangovers may now be curable, for about $300 each.

When high schools roll back their start times, results include better grades and fewer car crashes.

Following the recent removal of Asperger’s as a diagnostic category, a behavioural neurologist believes the ADHD label should be next to disappear.

Say goodbye to dessert: the WHO proposes limiting added sugars to 5% of caloric intake, or around 6 teaspoons of added sugar per day for the average person, halving their previous recommendation. But not to worry; if Virginia Tech professor Y.H. Percival Zhang’s research can scale up, we’ll all be eating starch from wood chips and other currently-inedible plant parts.

Gravitational waves have been discovered in the cosmic background radiation, which would confirm cosmic inflation theory. Here’s Ethan Siegel’s great breakdown, with diagrams.

Posted in Weekly Roundup | 1 Comment

Weekly Roundup

Powerpoint presentations are the bane of higher education and the corporate world, claims this Powerpoint presentation.

A 43% reduction in American childhood obesity has been reported across multiple news outlets, but some question such striking results. Mark Liberman at Language Log has done some digging and suspects both the statistical treatment of reference population growth charts and changes to the sampling method that resulted in a more racially inclusive population.

We eat too much of everything… except yogourt: the FDA has proposed new serving sizes for several types of food to better reflect actual consumption habits.

Here is the first x-ray image of individual living cells, preserved without chemical fixation, from Physical Review Letters. This research illustrates the nanoscale damage to cell structures caused by traditional techniques [via Gizmodo].

What do women want while ovulating? Positional goods that improve their status compared to that of other women, according to a new paper in the Journal of Marketing Research. “Overall, women’s monthly hormonal fluctuations seem to have a substantial effect on consumer behavior by systematically altering their positional concerns, a finding that has important implications for marketers, consumers, and researchers” [via Marginal Revolution].

A new Pew survey of millennials, a demographic who confuse their parents, teachers, therapists, and bosses, shows that they are also pretty confused.

Men who act sexually aggressive in a barroom setting don’t drink more alcohol, but they target women who do, claims a new study in Alcoholism: Clinical and Experimental Research [via Jezebel].

Insert a reference to your thawed-virus horror film of choice: a thirty thousand-year-old giant virus was discovered in the Siberian permafrost. But don’t worry; it only infects amoebas [via io9].

If you’ve got the time to scroll through mostly darkness, check out this scale representation of the solar system where the moon = one pixel.

Posted in Uncategorized, Weekly Roundup | Leave a comment

Weekly Roundup

“Professors, We Need You!” Nicholas Kristof argues in the New York Times that professors need to make themselves relevant in real-world debates. Professors argued back that they already do, and that they might be better off staying in the (shrinking) ivory tower: for one thing, there are no FBI background checks.

Food research is notorious for flip-flopping, but studies suggest that consumers of whole milk and butter are less likely to be obese. NPR explores this “full fat paradox.”

“These gender differences that everyone knows exist, and they know they’ll always exist and they’re biological — when I started pressing on them I found that a lot of those assumptions hadn’t really been tested.” New York Magazine interviews psychologist Terri Conley, whose work debunks evolutionary explanations for men and women’s sex preferences.

Jackie Chan has joined the fight to halt the consumption of endangered animal products for food and traditional remedies.

Lonely people are more likely to die sooner, and lonely cancer patients suffer detrimental lifestyle impacts.

Man’s best friend, indeed: dogs’ brains react to voices and emotional cues much as human brains do.

High school grades predict college success better than SAT scores do.

A new study in JAMA Pediatrics suggests a link between using acetaminophen (paracetamol; found in Tylenol and other medications) during pregnancy and ADHD/hyperkinetic behaviours in children. However, doctors believe that these results do not warrant a change in the drug’s classification as a safe painkiller for pregnant women.

Over 120 research papers residing in Springer and IEEE subscription publications have been removed after Cyril Labbé discovered that they were produced by SCIgen, a program designed by MIT graduate students to generate nonsense computer science papers. If you suspect a given computer science paper is gibberish, you can test it using Labbé’s website.

Posted in Weekly Roundup | 1 Comment

Weekly Roundup

One in four Americans believes that the sun revolves around the Earth, according to the results of an NSF survey presented at the recent AAAS meeting. But according to Time magazine, Europeans fared even worse on that question, with one in three responding incorrectly.

Despite an outpouring of protest and offers of rehousing, Copenhagen Zoo killed its “surplus” male giraffe Marius, and then performed a public autopsy with children in the audience. Meat from the corpse was fed to the zoo’s lions. Marius’ death has sparked discussion on the ethics of zoo conservation, with some blaming the zoo’s actions on Denmark’s pragmatic culture.

Food deserts – areas without access to fresh food, which have prompted healthy eating initiatives – may not be the problem: the health outcomes of people living in them aren’t improved by improving access to fresh food, and some researchers believe the cumulative stress (allostatic load) caused by long-term poverty is responsible for illness instead. Nutrition is a confusing field with a “dysfunctional research establishment.” All we know without a doubt is that Americans really love pizza.

A report commissioned by The Beer Store, Ontario’s beer retailer, claims that beer would become more expensive if customers could purchase it at convenience stores. But the author of a previous study commissioned by the Ontario Convenience Store Association disagrees.

Bill Nye, The Science Guy, debated Ken Ham, young-Earth creationist and head of the Creation Museum, on the question “Is creation a viable model of origins in today’s modern, scientific era?” The entire debate can be viewed on YouTube. While some in the scientific community welcomed the publicity, others claimed that Nye lost simply by showing up. Post-debate, creationists provided answers to evolutionist issues raised at the debate, while Buzzfeed collected questions from creationists, which have since been tackled by quite a few bloggers.

Some sciences are just harder than others: a new study in the Interdisciplinary Journal of Research on Religion claims that social science professors are more religious and politically extreme than their counterparts in the natural sciences, and that the difference is due to the higher intelligence of the natural scientists, thanks to the correlation of both religiosity and political extremism with lower intelligence. [via Marginal Revolution]

Posted in Weekly Roundup | Leave a comment