Infographic: Americans used more energy in 2013 than in 2012

Greg Lusk

The bad news is that Americans used more energy in 2013 than in 2012, and US energy efficiency remains terrible. The good news is that 2013 saw more renewable energy produced!

Each year Lawrence Livermore National Laboratory releases an energy flow chart, a great infographic that displays the origin of US energy, the sectors that use that energy, and the efficiency of each sector. This year’s infographic was recently posted (click on the image to make it larger).

Lawrence Livermore Labs Energy Infographic

Some highlights from the lab’s news release:

  • “Wind energy continued to grow strongly, increasing 18 percent from 1.36 quadrillion BTUs, or quads, in 2012 to 1.6 quads in 2013.”
  • “Natural gas prices rose slightly in 2013, reversing some of the recent shift from coal to gas in the electricity production sector.”
  • “Petroleum use increased in 2013 from the previous year.”
  • “Rejected energy [roughly energy lost to inefficiency] increased to 59 quads in 2013 from 58.1 in 2012, rising in proportion to the total energy consumed.”

What I enjoy about this infographic is that it shows rejected energy explicitly, underscoring the inefficiency of US energy use. Transportation, as you can see, produces a lot of rejected energy (probably due to the inefficiency of the internal combustion engine). If we can’t curb our energy use (and I think we should), then we absolutely need to do a better job of finding efficiencies.
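
As a quick check on the quoted growth figures, here is some back-of-the-envelope arithmetic of my own, using only the numbers in the excerpts above (not anything from the LLNL release itself):

```python
# Percent changes computed from the figures quoted above (quads = quadrillion BTUs).
wind_2012, wind_2013 = 1.36, 1.6
rejected_2012, rejected_2013 = 58.1, 59.0

wind_growth = (wind_2013 - wind_2012) / wind_2012 * 100
rejected_growth = (rejected_2013 - rejected_2012) / rejected_2012 * 100

print(f"Wind energy growth: {wind_growth:.1f}%")          # ~17.6%, i.e. the quoted ~18 percent
print(f"Rejected energy growth: {rejected_growth:.1f}%")  # ~1.5%
```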


Children are NOT the future.

Greg Lusk

Should we motivate concern for climate action through the wellbeing of our descendants? I argue that it is time for a change.

Michael Mann was promoting his new book The Hockey Stick and the Climate Wars last night with a lecture at the University of Wisconsin. I attended and live-tweeted it on my Twitter account, @WxPhilosopher, for any of you who missed it. For the most part, Mann’s talk followed what has become the standard climate-talk format: here’s some science we’re sure of, here’s why models are helpful, this is how the topic was politicized, we’re all doomed unless we act fast. Possibly even more cliché than the format itself is the trope with which such talks, including Mann’s, usually close: consider the legacy of our children, and how climate change could affect them; let’s ensure they are better off, and leave them a world in which they can flourish. I’ll call this the child trope.

I hate the child trope, and I find my own hatred of it somewhat strange. Of course I want to preserve the planet’s ability to support life, and I want humanity to flourish. So why do I have these negative emotions toward the trope? After hearing Mann invoke it, I sat down to rationalize my emotional position. I realized not only that I find the trope unconvincing, but also that it may reinforce what I think are dangerous presuppositions. I’ve listed a few of the reasons below.

Please, by all means, comment on this post. I might be a little pessimistic, and I want to know if this trope actually is effective in demographics other than those in which I reside.

How the trope works:

Think of the children!

I take it that the child trope is one way of personalizing the harm that climate change will cause, even though climate change works on long timescales. Because it isn’t us who will be hurt most by the effects of climate change but our progeny, and because we are the cause of climate change, the child trope is relied on to make currently existing individuals feel responsible for what happens in the future. The trope creates this feeling of responsibility for yet-unactualized people through two social norms: 1) the need to provide for blood relatives, especially children, and 2) the culturally accepted desire of parents for their children to have a better life than they (the parents) had.

Reasons to question the trope:

It fails to address the link between population and consumption. The trope presupposes that the audience is going to have children. However, population growth and consumption are linked, and consumption is one problem that needs to be addressed to mitigate and adapt to climate change. One way to address consumption is to manage population, and this means seriously questioning the social norms supporting unfettered procreation. It is hard to seriously discuss decisions to procreate if what motivates our responsible action on climate is the product of that procreation.

UPDATE 4/19/14 2 PM Eastern: As some commenters have pointed out, the link between population and consumption is a complicated one. I did not mean to imply in the original post that it was a direct relationship (more people = more consumption). Please see my response to Nathan in the comments for a more considered response.

Whose kids? I’ve seen this trope invoked most frequently with majority-white, middle/upper-class, North American college audiences. There is good reason to think that the children of this audience will be fine in the future – they have the advantages of being privileged and of living in developed countries rich enough to take adaptation seriously. They may even find ways to profit from climate change. Children in less privileged countries (especially low-lying coastal ones) are likely to be hurt more seriously, and much sooner (as in, they are already suffering climate-change-related effects). These are the people we should care about. But the child trope doesn’t motivate us to do so, because it is predicated on concern for blood relatives.

Wanting more and better (partially) got us into this mess. For much of the last century, a “better life for our children” meant the acquisition of wealth and goods, and led to a bigger, faster, and cheaper mentality. This drive toward easy consumption helped create the climate problem. I believe that in order to address climate change, we need to learn to be content with only what we need (or at least a lot less), and to create efficiencies in providing those needs. Insofar as the trope relies on an unquestioned desire for a better life for our offspring, it doesn’t steer us toward sustainable living.

The trope doesn’t seem to be effective. Is there any evidence that this trope works at all? It has been part of the climate discussion for as long as I can remember, and action has been slow. Can’t we do better? It was interesting to hear Michael Mann say that we need to make climate change relevant to daily life, and then invoke the child trope. Let’s hire a good marketing firm.

Why are non-actualized future individuals assumed to motivate action better than actually existing individuals? The trope presupposes a kind of selfishness: we are motivated primarily by our own interests, in this case the wellbeing of our future descendants. I think invoking this trope helps to perpetuate this selfishness, especially as the effects of climate change are becoming visible. The most vulnerable humans are already being harmed, and the biosphere is already experiencing negative effects. Why are we still talking about abstract, non-actualized future individuals? If we aren’t willing to go beyond self-interest to help those we have never met who will suffer because of our collective actions, then the effects of climate change will be disastrous. We need to work to develop this kind of global awareness.

There is an economic counterargument. A common retort to proposed action on climate change is that it is too costly. The US and other privileged countries benefitted the most from burning the fossil fuels that largely created the climate change problem. One might think that puts privileged nations on the hook for the cost of cleaning it up. Paying for that cleanup may diminish the economic standing of these countries, and as a consequence, children in those countries might be worse off. What this shows is that the status of our children doesn’t directly determine our moral obligations – those who created the problem have a responsibility to fix it regardless of how our children fare. The trope misses this point.

Future people can’t be better off! Derek Parfit brought philosophical attention to the non-identity problem, which has interesting consequences when applied to climate change. Here’s the quick argument from one of his papers:

1) Identity depends biologically, and very sensitively, on the timing of conception.
2) Energy policy interventions will shift future human behavior, which will in turn change times of conception.
3) Changing the time of conception will result in different persons being born than would have been born without the policy intervention.
4) This means that future individuals can’t be better off, because the very actions that would result in a better environment will bring about different individuals.

Parfit actually thinks this argument doesn’t hold much weight; he says we should continue to talk as if individuals will be better off. However, I’ve always found it compelling, precisely because it brings an important point to the fore of the climate discussion: what we do now has consequences in the future that we don’t even think about. It’s time to start thinking about those consequences.


Weekly Roundup

Image via the galileowaswrong blog.

The release of the trailer for The Principle, a geocentric film casting doubt on Copernicanism, prompted several people featured in the trailer to distance themselves from the project. Narrator Kate Mulgrew explained on Facebook that she was misled about the nature of the project and that “I am not a geocentrist, nor am I in any way a proponent of geocentrism.” Physicist Lawrence Krauss reasoned that the producers either purchased footage of him from another production company, interviewed him under false pretences, or used public-domain footage. In response, producer Rick Delano said in a statement that “I can tell him how he ended up in our film. He signed a release form, and cashed a check.” Robert Sungenis, the film’s executive producer, is a geocentrist who runs the galileowaswrong.com blog.

Old men become grumpy around age 70, but they live longer in nursing homes.

Entrain, a new app, calculates how best to fight jet lag. The app’s methodology is supported by a recent paper in PLOS Computational Biology.

Just in time for Homeopathy Awareness Week, a new draft report by Australia’s National Health and Medical Research Council debunked homeopathy’s effectiveness. Homeopathy proponents were permitted to submit material for the report, but it didn’t meet the council’s scientific standards.

“Language diversity” correlates both with mountainous terrain that isolates human groups and with rivers that bring those groups together.

Jenny McCarthy argues in an op-ed piece for the Chicago Sun-Times that she is not, and has never been, against vaccines. Phil Plait provides an excellent argument to the contrary, but I’ll add that her backtracking might have been motivated by the resurgence of preventable disease outbreaks, grimly documented on the Jenny McCarthy Body Count website.

Kansas is not planning to black out the science program Cosmos, despite the viral popularity of the satirical news story.


C’mon…it wasn’t that bad: Winter 2014

How cold, nasty, and intolerable was the winter of 2014?

April Fool’s Day storm of 1997 over the northeastern USA.

I woke up this morning to a dusting of whiteness out my window: there was snow in Toronto. I could hear the moans of Torontonians waking up and looking out their windows only to realize it was again cold, and again, snow would ruin their morning TTC ride. This morning reminded me of April 1, 1997. As a kid in Boston, I woke up to almost 30 inches of snow on the ground – more in that one night than in the rest of that winter. I didn’t have a morning commute. Schools were closed. I liked the snow then. This year, though, no one is happy to see the snow again. For many North Americans, this winter has felt cold, long, and intolerable.

These feelings about the weather matter. Research shows that the way we perceive weather affects the way we respond to problems like climate change. Simply put, the perception that local weather is at odds with claims about the climate (the weather is cold, but the climate is warming) affects the strength of people’s beliefs and their likelihood of acting on climate issues.

The purpose of this post is twofold: 1) to convince you that, from a certain perspective, this winter wasn’t the long, cold, and intolerable one you might have experienced (OK, maybe it was if you live in Wisconsin), and 2) to buy myself time to put together a proper post on the pop-explanation for this winter, the polar vortex.

Where was it bad? The central and eastern US and Canada

If you lived in the middle of the US or Canada, you felt cold this winter.

The blue regions are colder than average (compared to the 1981–2010 mean), and the red regions warmer than average. You can still see lots of red. In fact, global land and ocean records show an above-average winter.

For example, Madison, Wisconsin (article here) had its 11th-coldest winter on record, with an average temperature of 13 degrees F and (at least) 81 consecutive days of at least 1 inch of snow on the ground (the 4th-longest such stretch in recorded history). The US as a whole had its 34th-coldest winter (out of 119 recorded winters). Toronto had a record 101 consecutive days with at least 1 cm of snow on the ground; its average temperature was the coldest in 20 years and the 3rd-coldest in 50 years; and 35 extreme temperature warnings were issued. Great Lakes ice coverage was near an all-time high.

But don’t think that because you were cold, it was a cold winter everywhere.

This winter, from a global perspective, was warm (according to NOAA global analysis). Europe was warm. Denmark reported its fifth warmest winter since records began in 1874, Germany its fourth warmest, and Austria its second.

This image breaks the anomaly down in terms of percentage departure from average. You can think of it as the same plot as above, but scaled to the natural variability of each region. Notice that the red regions significantly outnumber the blue ones, and the dark red regions outnumber the dark blue ones. Thanks, Melanie, for pointing these images out to me!

Globally, this winter’s (Dec-Feb) land records indicated it was the 10th warmest (2007 was the warmest) and the 126th coolest (1893 was the coldest). In the northern hemisphere, this winter was the 11th warmest and 125th coolest.

Combined land and ocean surface temperatures for this winter were the eighth highest on record, 0.57 degrees C above the 20th-century average. What about sea ice? Arctic sea ice extent – the loss of which is thought to affect climate – was at its fifth-lowest.

It is easy to forget that not everywhere is like where we are. Please keep in mind that the weather where you live is not an indicator of the global state of the atmosphere.

 


Twice Is Nice! Double counting evidence in climate model confirmation

Greg Lusk

Charlotte Werndl (LSE) is speaking at Western University on Monday (the talk will be live-streamed) on evidence and climate change modeling. Having recently read her paper with co-author Katie Steele, “Climate Models, Calibration, and Confirmation” (CMCC), I thought I would post about it. The paper focuses on the use of evidence in confirming climate models, with particular attention paid to double counting, which in this context means using the same evidence for two purposes (more on this use of the term later). I believe the paper is an important one, as it nicely separates concerns about double counting from other, related confirmatory issues, and I think it successfully shows where a form of double counting is legitimate. Still, despite being a casual fan of Bayesianism, I wonder whether it gives us what we want in this particular circumstance. I can’t cover all the threads of argument made in the paper, so here I’ll simply discuss what double counting is, why we should worry about it, and how Steele and Werndl (S+W) argue that it can be legitimate in some circumstances.

What’s the worry about double counting? Climate change models typically go through a process called tuning (also sometimes called calibration). Tuning sets the values of parameters in the model that represent highly uncertain processes for which there are few empirical observations. The parameters are treated as “free parameters” that can take on a wide range of values. The values that result in the best fit with observations during tuning are the values chosen for the model. For example, if scientists are interested in global mean surface temperature (GMST), they would vary the parameter values of some uncertain processes until the model’s output of GMST closely matched GMST observations. The model, with these parameter values, would then be used to make climate projections.
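
To make the tuning step concrete, here is a minimal sketch in Python. Everything in it – the toy model, the “observations,” the grid of candidate values, and the least-squares misfit – is invented for illustration; it is not code from the paper or from any real climate model.

```python
import numpy as np

# Hypothetical observations of global mean surface temperature (GMST) anomaly, in degrees C.
observed_gmst = np.array([0.10, 0.18, 0.22, 0.31, 0.35])

def toy_climate_model(sensitivity, n_years=5):
    """A stand-in for a climate model: a single free parameter ('sensitivity')
    controls how fast the simulated temperature anomaly rises."""
    years = np.arange(1, n_years + 1)
    return sensitivity * 0.1 * years

# Tuning: run the base model over a grid of values for the free parameter and
# keep the value whose output best fits the observations (smallest squared error).
candidate_values = np.linspace(0.1, 2.0, 20)
errors = [np.sum((toy_climate_model(s) - observed_gmst) ** 2) for s in candidate_values]
best = candidate_values[int(np.argmin(errors))]

print(f"Tuned parameter value: {best:.2f}")
```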

The worry is that one way climate models are evaluated is by comparing their results to observations of some historical period; if scientists want to know if a model is adequate for the purpose of predicting GMST, they compare the model output to historical GMST observations. This agreement is supposed to build confidence in (confirm) the model’s ability to simulate the desired quantity. It is typically believed that to gain any confidence in the model at all, the simulation output must be compared to a different set of observations than the one that was used for tuning. After all, the observations used for tuning wouldn’t provide any confidence, because the model was designed to agree with them!

To deal with double counting, CMCC adopts an explicitly Bayesian view of confirmation. The view adopted is necessarily contrastive and incremental: a model is confirmed only relative to other models, and the result of confirmation is greater confidence in the model for some particular purpose (not a claim that the model is a correct representation or the truth). Confirmation of one model relative to another can be tracked with the likelihood ratio: the probability of the evidence conditional on the first model divided by the probability of the evidence conditional on the second model. If the ratio is greater than 1, the evidence confirms the first model relative to the second.
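
In symbols (my notation for the standard contrastive setup, not a formula lifted from CMCC), for evidence E and models M_1 and M_2:

```latex
\mathrm{LR} = \frac{P(E \mid M_1)}{P(E \mid M_2)}, \qquad
\mathrm{LR} > 1 \iff E \text{ favors } M_1 \text{ over } M_2 .
```

Posterior odds are just prior odds multiplied by this ratio, so a ratio above 1 shifts credence toward the first model without saying anything about either model being true.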

So here is a simple way in which double counting is legitimate on the Bayesian view presented in CMCC. Imagine tuning some model M whose parameters have not yet been set (S+W call this a base model). In order to tune it, we create several different instances of the base model, all with different parameter values: M1, M2, and so on. We compare the results of each model instance to observations and select the best-fitting instance. This is an example of double counting in the following sense: the same data are used both to tune and to confirm the model. It is tuning, because we have selected parameter values by comparing outputs to observations, and it is confirmation, because we have gained greater confidence in one instance over all the other instances in light of the data. S+W call this double-counting 1, and it is fairly uncontroversial.
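
To see how this plays out, here is a continuation of the toy sketch above. It reuses toy_climate_model, observed_gmst, candidate_values, and best from that sketch, and it additionally assumes independent Gaussian observation errors – my assumption for illustration, not the paper’s.

```python
# Continuation of the toy tuning sketch above (reuses its definitions).
# Treat each candidate parameter value as a model instance M_i and score every
# instance against the *same* observations that were used for tuning.
sigma = 0.05  # assumed observational error (degrees C), purely illustrative

# Gaussian log-likelihood of each instance, up to an additive constant that is
# the same for every instance.
log_likelihoods = [
    -0.5 * np.sum(((toy_climate_model(s) - observed_gmst) / sigma) ** 2)
    for s in candidate_values
]

# The instance with the highest likelihood is the one the tuning step already
# selected: choosing it (calibration) and gaining confidence in it relative to
# its sibling instances (confirmation) are the same operation on the same data.
best_by_likelihood = candidate_values[int(np.argmax(log_likelihoods))]
assert best_by_likelihood == best
```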

Double-counting 2 seeks to confirm two different base-models (M and L, let’s say), but the situation is much the same. The Bayesian apparatus is more complex, and I’ll leave it to my readers to seek out the details in the paper itself. However, the evaluation still deals with likelihood ratios; it is just that the likelihood ratio needs to take into account all the instances of base-models M and L, as well as our prior probabilities regarding them. The likelihood ratio becomes a ratio of weighted sums of the probability of the evidence given each instance of one base-model over the other. Double-counting 2 is legitimate in two situations: 1) the average fit with the data of one base-model’s instances is higher than the other’s (assuming the priors for each model are equal), and/or 2) the base-models have equivalent fit with the observations, but one model had a higher prior probability (was more plausible). An example of (1) would be that base-model M is tuned to the observations and, on average, its instances are closer to the observations than model L’s. This would result in a greater likelihood for M compared to L, and thus confirm M relative to L. Again, even in this situation, tuning “can be regarded as the same process as confirmation in the sense that the evidence is used to do both calibration and confirmation simultaneously” (p. 618).
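
The ratio just described, written out in my own generic notation (not the paper’s exact formulation), with instances M_i of base-model M, instances L_j of base-model L, and within-family priors P(M_i | M) and P(L_j | L):

```latex
\mathrm{LR} = \frac{P(E \mid M)}{P(E \mid L)}
            = \frac{\sum_i P(M_i \mid M)\, P(E \mid M_i)}
                   {\sum_j P(L_j \mid L)\, P(E \mid L_j)} .
```

Situation (1) above is the case where, with equal priors over the base-models, this weighted-average fit ratio comes out greater than 1.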

Quick Comments

S+W do a great job of distinguishing the two kinds of double counting and separating them from other concerns about model tuning and climate projections (this work is done in the second half of the paper, not discussed here). They seem right, given the view of confirmation they hold, that confirmation and tuning can be done with the same evidence. After all, double counting in S+W’s sense is a sophisticated way of saying that the model that fits the data best is more likely.

A few issues worth thinking about:

1) Double counting here is a bit of a misnomer. As S+W make clear, it is not that the data are used twice; rather, the same data are used for two different purposes at once.
2) Confirmation for S+W, between two different models, is always confirmation of the model family (all the instances of the base model). It is not clear to me that this is always the desired object of confirmation. Sometimes we might want to confirm one instance of a base model against another instance (the two best-performing instances, let’s say), and as specified in the paper, confirmation via double counting isn’t set up for that.
3) Climate scientists seem to think of confirmation in absolute terms. In S+W’s scheme, this would be confirming a base-model M relative to its complement (all other models that are not M). This is considered on p. 628. Double counting doesn’t help us here: to confirm in this way, we need to know the prior probabilities of all the non-M models. Since we think there are lots of models we haven’t considered, and we don’t know how many, this is difficult if not impossible to quantify. Double counting, though legitimate in this area, isn’t a remedy for our ills.
4) How applicable is the Bayesian framework in these instances? I haven’t read all his work, but Joel Katzav argues that it isn’t reliable when it comes to this kind of modeling. One reason (at least as I gather the argument) is that the conditional probabilities we assign in the Bayesian scheme are conditional on the truth of the model, but we know that the models are not true (because they contain gross idealizations, simplifications, and parameterizations). Thus we can’t, or shouldn’t, make those assignments. Perhaps the comparative nature of S+W’s confirmation can sidestep this? If any readers have insight on how this might work, or on whether it is really a problem, please post in the comments.


Weekly Roundup

Image via NPR.

Here’s a roundup of the best April Fool’s Day hoaxes from around the web, and another one focused on the science/library community. But NPR’s prank is the clear winner.

“You don’t think of the Bible necessarily as a scientifically accurate source of information, so I guess we were quite surprised when we discovered it would work. We’re not proving that it’s true, but the concept would definitely work”: Physics students at the University of Leicester have determined that Noah’s ark would indeed be buoyant.

Don’t tell Mr. Toad: A new study suggests that children retain less information about animals from anthropomorphized accounts. But kids learn more when science is packaged in a music video.

We don’t have stasis fields yet, but in a new clinical trial, gunshot or stabbing victims will be placed in suspended animation (induced hypothermia) while doctors repair damaged organs. [via Marginal Revolution]

Eliminating invasive species is more difficult than we realize, as is even labelling them “native” or “alien.”


Weekly Roundup

Which diet is best? According to new research, none of them.

Cosmos continues to attract controversy as Creationists demand equal time for their theories on the program.

Cancer care in hospitals should not include unproven treatments like reflexology and reiki, argues Brian Palmer at Slate. And nearly half of Americans believe at least one medical conspiracy theory, according to a BMJ survey.

Users of the new Spreadsheets app have gamified their sexual encounters. Here’s a map showing the average duration of intercourse in each American state. And here’s a series of (PG-rated) sketches of animal mating rituals, if they were performed by humans.

As a nice change from contemporary parenting debates, here’s a look into the way parents dealt with teenagers during the Middle Ages.

A paper on climate change deniers’ belief in conspiracy theories has been pulled from Frontiers in Psychology due to the “legal context” created by allegations of defamation.

A postdoc was sabotaged by one of her peers, reports Science (paywall-protected), and she claims in a lawsuit that she received an inadequate response from the school and her supervisor.

A buzzword-induced fetish for innovation is not the same as a robust technology policy, argues Evgeny Morozov at the New Republic.

Wikipedia founder Jimmy Wales responds to a petition criticizing the representation of holistic medicine: “What we won’t do is pretend that the work of lunatic charlatans is the equivalent of ‘true scientific discourse’. It isn’t.”


Could Prediction Markets Help to Find MH370?

Mike Thicke

Like everyone else, I have become obsessed with the disappearance of Malaysia Airlines flight MH370. When I read that the flight had lost contact on March 8, I assumed it would be found crashed into the ocean in a matter of days, if not hours. Nearly two weeks later, people are starting to wonder whether it will ever be found.

There is no shortage of theories about what happened to the flight. Pilot suicide seems to be the most likely answer, but there is scant evidence of motive. Terrorist hijacking is an obvious possibility, perhaps by the Taliban or Uighurs seeking to strike back at China, but no group has claimed responsibility. Piracy is a possibility; the list price of a Boeing 777 is over $200 million. Pilot Chris Goodfellow claims that an electrical fire is the most likely cause, and many find his explanation persuasive, but it has difficulty accounting for the several course and altitude changes the flight made. Similarly, Australian pilot Desmond Ross argues that the flight could have depressurized, which would explain why the plane first descended rapidly; he then argues that errors induced by the depressurization could explain the plane’s other maneuvers. Any, or none, of these could be true.

As theories have proliferated and the official search area has widened to a significant portion of the Earth’s surface, I have started to wonder whether prediction markets might help to locate the missing flight. Prediction markets are similar to stock markets, but the traded contracts are predictions rather than shares of a corporation. Contracts in prediction markets have a payoff (say $1) if the associated prediction is correct, and no payoff if it is incorrect. Such markets have proven remarkably powerful in predicting the outcomes of certain types of events, such as political elections.
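
To make the mechanism concrete, here is a minimal sketch of how such a market could be run over candidate search regions, using a logarithmic market scoring rule (Robin Hanson’s LMSR) as an automated market maker. The regions, trades, and liquidity parameter b are all hypothetical; this is an illustration of the general idea, not a description of any existing market.

```python
import math

# Mutually exclusive candidate search regions (hypothetical, for illustration only).
regions = ["South Indian Ocean", "Bay of Bengal", "Central Asia overland"]

b = 100.0                            # liquidity parameter: larger = prices move more slowly
shares = {r: 0.0 for r in regions}   # net contracts sold so far for each outcome

def cost(state):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in state.values()))

def prices():
    """Current prices; each can be read as the market's probability for that outcome."""
    weights = {r: math.exp(q / b) for r, q in shares.items()}
    total = sum(weights.values())
    return {r: round(w / total, 3) for r, w in weights.items()}

def buy(region, amount):
    """Buy `amount` contracts on `region`; returns what the trader pays the market maker."""
    before = cost(shares)
    shares[region] += amount
    return cost(shares) - before

print(prices())                      # starts uniform: each region ~0.333
buy("South Indian Ocean", 80.0)      # a trader confident in the southern arc buys in
print(prices())                      # that region's price (probability estimate) rises
```

Each contract pays $1 if the plane is ultimately found in that region, so traders who think a region is more likely than its current price profit in expectation by buying it, and the prices can be read as the crowd’s aggregated probability estimates.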

Thinking about MH370, I was reminded of this passage from James Surowiecki’s The Wisdom of Crowds:

In May 1968, the U.S. submarine Scorpion disappeared on its way back to Newport News after a tour of duty in the North Atlantic. Although the navy knew the sub’s last reported location, it had no idea what happened to the Scorpion, and only the vaguest sense of how far it might have traveled after it had last made radio contact. As a result, the area where the navy began searching for the Scorpion was a circle twenty miles wide and many thousands of feet deep. You could not imagine a more hopeless task(xx).



Weekly Roundup

Conflicting with previous results, a new study supports the 5-second food rule.

The end may be nigh, according to a NASA-funded study of past complex civilizations employing both natural and social scientists. The paper, which will eventually be published in Ecological Economics, points to resource overexploitation and economic inequalities as the harbingers of doom, but offers a silver lining: structural changes or policy initiatives could stave off societal collapse.

Should parents ban handheld device use in children? Maybe not.

Rebekah Higgitt examines the history of science community’s many reactions, mostly negative, to the rebooted TV science program Cosmos: A Spacetime Odyssey and its portrayal of Giordano Bruno.

Migraines and hangovers may now be curable, for about $300 each.

When high schools roll back their start times, results include better grades and fewer car crashes.

Following the recent removal of Asperger’s as a diagnostic category, a behavioural neurologist believes the ADHD label should be next to disappear.

Say goodbye to dessert: the WHO proposes limiting added sugars to 5% of caloric intake, or around 6 teaspoons of added sugar per day for the average person, halving their previous recommendation. But not to worry; if Virginia Tech professor Y.H. Percival Zhang’s research can scale up, we’ll all be eating starch from wood chips and other currently-inedible plant parts.

Gravitational waves have been discovered in the cosmic background radiation, which would confirm cosmic inflation theory. Here’s Ethan Siegel’s great breakdown, with diagrams.


Weekly Roundup

Powerpoint presentations are the bane of higher education and the corporate world, claims this Powerpoint presentation.

A 43% reduction in American childhood obesity has been reported across multiple news outlets, but some question such striking results. Mark Liberman at Language Log has done some digging and suspects both the statistical treatment of the reference population growth charts and changes to the sampling method that resulted in a more racially inclusive sample.

We eat too much of everything… except yogourt: the FDA has proposed new serving sizes for several types of food to better reflect actual consumption habits.

Here is the first x-ray image of individual living cells, preserved without chemical fixation, from Physical Review Letters. This research illustrates the nanoscale damage to cell structures caused by traditional techniques [via Gizmodo].

What do women want while ovulating? Positional goods that improve their status compared to that of other women, according to a new paper in the Journal of Marketing Research. “Overall, women’s monthly hormonal fluctuations seem to have a substantial effect on consumer behavior by systematically altering their positional concerns, a finding that has important implications for marketers, consumers, and researchers” [via Marginal Revolution].

A new Pew survey of millennials, a demographic who confuse their parents, teachers, therapists, and bosses, shows that they are also pretty confused.

Men who act sexually aggressively in a barroom setting don’t drink more alcohol than other men, but they do target women who drink more, claims a new study in Alcoholism: Clinical and Experimental Research [via Jezebel].

Insert a reference to your thawed-virus horror film of choice: a thirty thousand-year-old giant virus was discovered in the Siberian permafrost. But don’t worry; it only infects amoebas [via io9].

If you’ve got the time to scroll through mostly darkness, check out this scale representation of the solar system where the moon = one pixel.
