In the Spotlight – The Bubble Chamber
A post recently came up in my Facebook feed that is notable for the confluence of three things: (1) it makes a spectacular claim, (2) the claim is wrong, and (3) it’s not a journalist’s fault. The combination of (1) and (2) is quite common, but usually it turns out that the actual science is much less spectacular than the headline suggests, because a journalist or editor has misunderstood the science or amplified the claim unjustifiably in order to garner readers. In this case, though, the paper itself is at fault.
The claim in question is that “it is highly likely (99.999 percent) that the 304 consecutive months of anomalously warm global temperatures to June 2010 is directly attributable to the accumulation of global greenhouse gases in the atmosphere.”1 The Facebook post linked to an article from The Conversation, but that quote comes directly from the paper itself, published this April in Climate Risk Management.
Last week, headlines announced that a computer program, known as Eugene Goostman, had passed the Turing Test at a competition held at the Royal Society of London by University of Reading researchers. One of the competition organizers heralded it as a milestone in artificial intelligence, implying that a computer program had shown some significant degree of intelligence and fooled people into believing it was human after a robust interrogation. Turing originally predicted that by the year 2000 a computer might be able to hold up in a conversational test as well as a human for five minutes in 30 percent of trials, which lent a sense of officialness to the claim that Eugene had passed. Critics quickly appeared, calling into question whether Eugene would really have fooled anyone in a normal conversation.
Eugene managed to fool the judges in the competition about a third of the time. However, this was achieved by presenting the persona of a Ukrainian 13-year-old with imperfect English, and the competition was a speed test, with judges given only five minutes to evaluate multiple potential humans or machines at once via computer-relayed chat (you can see examples here). Critics pointed out that this means the program shows no real intellectual achievement, relying instead on convincing the judges that the agent is confused, and that a longer test would be more informative.
In his 1950 paper “Computing Machinery and Intelligence”, Alan Turing asked the question “Can machines think?” He then declared the terms of the question (machine and think) too vague to admit of a good answer and changed it to ask whether some digital computer could successfully play the imitation game. The imitation game he imagined was one where two participants hid from the view of a third and conversed by passed notes or some other intermediary device; one of the two hidden participants would imitate a woman, the other would in fact be a woman, and the third participant would have to guess which was which after conversing with them for some time. Turing imagined the computer in place of the man. It is ambiguous exactly how the game would be modified by this change, and some have argued that it makes a difference which way we take the game to be played. Since Turing does not precisely define his test, all subsequent uses are in a sense their own version of a Turing Test. Modern versions of the Turing Test tend to assume that judges will converse with multiple participants, some of whom are computers and others humans, and will have to guess which is which. In any case, the point of the redefinition of the question was, as Turing put it, to draw “a fairly sharp line between the physical and the intellectual capacities of a man”. Turing imagined the discussions ranging from physical appearance through mathematics, chess, and poetry writing; every imaginable skill or piece of knowledge might be called upon by the participants. Although put in terms of “thought”, the original question seems to have been meant in the spirit of “can machines possess intelligence” or “can machines engage in intelligent behaviour”. It seems as though Turing was trying to demonstrate to his incredulous audience, by example, what he thought an intelligent machine would look like, more than trying to define thought or intelligence as such.
In some ways Turing anticipated that a machine might succeed at the imitation without showing any intelligence. He notes that “the best strategy for the machine may possibly be something other than imitation of the behaviour of a man”, but he thought this unlikely, deemed it outside the scope of the essay, and stipulated that for its purposes we should assume the best strategy really was to imitate a man’s behaviour. This suggests that Turing was more concerned with illustrating how future machines might earn the appellation of thinking or intelligent than with devising a strict test for success.
Despite these ambiguities, Turing’s paper is widely cited and created interest, among both academic AI researchers and a wider public, in the idea of a computer convincing humans that it is human as a test of its intellectual ability. A Google search finds a first instance of “Turing’s Test” in 1959; by 1962 it is noted that “Turing’s Test” has become standard nomenclature in the computer field, and I find an instance of the shortened name “Turing Test” in 1964. Over the years, the popular imagination has transformed the Turing Test with the idea that a computer that can pass the test is an autonomous intellect on par with a human person. Competitions like the one that crowned Eugene have been going on for some time, such as the Loebner Prize, an annual competition held since 1991.
The diversity of things covered by the name Turing Test is best illustrated by its most ubiquitous example. CAPTCHA stands for Completely Automated Public Turing Test To Tell Computers and Humans Apart, and the term was invented in 2000 by Luis von Ahn, Manuel Blum, Nicholas Hopper and John Langford of Carnegie Mellon University. Here the idea is to find a simple, one-question test, administered by a computer, that distinguishes humans from currently available computers by taking advantage of a specific skill (such as recognizing distorted text) that humans are good at but current computers find impossible. A computer passing this test would not require the variety of abilities Turing imagined, but it serves to deter computer programs that might otherwise spread unwanted advertising in internet forums or do other dubious or nefarious work.
The question of whether machines can think actually has an older pedigree than Turing or the modern computer. An example of this is a 1939 essay in Astounding Science Fiction “Tools for Brains” which begins with the line: “CAN machines think? The question keeps coming up every time a new kind of calculating machine is invented…” However Turing’s imitation game has left an indelible mark on the question.…
People across the political spectrum have long recognized that our democratic system disenfranchises the unborn. Those on the left tend to worry that those alive today are pillaging natural resources from future generations. Those on the right tend to worry that excessive public spending will force our children or grandchildren into economic slavery. Either way, people in the future will be forced to live with the consequences of our present decisions, but they have no say in those decisions (though Greg Lusk has problematized this reasoning).
How to solve this problem? Philosopher Thomas Wells proposes a direct solution: give voting powers to “trustee” organizations “such as charitable foundations, environmentalist advocacy groups or non-partisan think tanks.” These organizations would have a bloc of votes equivalent to something like 10% of the overall electorate. If there are 10 million eligible voters in an election, we would assign 1 million votes to these organizations. Wells’s idea is that these organizations would vote with the best interests of the future in mind. Not only could they affect the results of elections, but Wells predicts they would shape the political conversation as politicians tailor their policies to appeal to this powerful voting bloc.
Alex Tabarrok over at Marginal Revolution finds Wells’s proposal “laughable”. He sees Wells’s proposal for a select group of trustees as merely replicating Wells’s own view of how the future ought to look. Instead, Tabarrok proposes the economist’s universal solution: the market. Specifically, prediction markets. While I share some of Tabarrok’s skepticism of Wells’s proposal, I find Tabarrok’s proposal even less realistic. I shall focus my critique on two problems: an epistemic problem and a relevance problem.…
Some have argued that the emphasis on Sterling’s comments obscured the larger, more harmful actions that he has taken (the Guardian suggests this, as does the link below). In particular, they point to the housing discrimination he was accused of perpetrating as an owner of hundreds of properties in the Los Angeles area. Housing discrimination is terrible and, like all forms of discrimination, should not be tolerated.
Still, I wondered: do sports perpetuate or help fight discrimination? There is obviously no cut-and-dried answer. The question is too broad to be answered directly: there are too many different sports, and too many ways of thinking about discrimination, for it to be tackled wholesale. However, in thinking about the question, I took a look at the academic literature on the sociology of sport. I found it surprising that this literature is not more heavily cited in recent discussions of racism in sports.
Here I’m going to share excerpts from a paper entitled “Professional Football Scouts: An Investigation of Racial Stacking” by J. R. Woodward (2004). The study covered in the article analyzes draft guides that describe the suitability of college athletes for the NFL draft, paying particular attention to the descriptions of the perceived physical and mental capabilities of white and African American players. I quote this paper because the study is interesting, but also because it has a fairly detailed literature review covering some interesting studies. Given that the sports media seems to perpetuate the messages discussed in this study, broadcasting them to millions of people, I would guess the messages we receive about sports and athletes carry more bias than we immediately realize.
“Coakley (1998) notes, there are roughly 20 times more African American physicians and lawyers than top professional athletes; nor have most sports truly integrated to allow for equal participation and rewards between the races. In 1997, the 50th anniversary of Jackie Robinson joining Major League Baseball, his old team the Dodgers had the exact same number of American-born Blacks on the opening day roster as they did in 1947: one.”
“Whites dominate most sports at the collegiate and high school level; football, basketball, track, and baseball—sports where Whites are underrepresented—make up only 4 out of at least 40 sports played competitively.”
“The belief that sport has been a source of upward mobility for African Americans has been rebutted in previous research and is not the object of this project (see Sailes, 1998; and Smith, 1993, 1995). What is of interest, however, is the tenacity of this view. Personal beliefs about race and sport are often solidified when society at large seems to share and reinforce these beliefs, regardless of their veracity.”
“One manifestation of our “race logic” (how we come to understand racial phenomena in society) is the link between race and athletics, principally the belief in African American athletic superiority. Unfortunately, concomitant with this view has been the conviction of mental inferiority; i.e., the “dumb jock” stereotype (Hoberman, 1997; Eitzen, 1999). American history is replete with academic, intellectual, and social discussions of the primitive nature of Blacks, whose supposed strength, power, and sexual aggression made them appear almost animalistic, an assertion strengthened by their perceived lack of innate cognitive abilities (Mead, 1985).”
“Racial ideology, then, was situated in a particular, disparaging view of African Americans as physical, not mental beings. Athletics was just one of many endeavors in which this view was manifested (Coakley, 1998).”
“Racial stacking is the over- or underrepresentation of players of certain races in particular positions in team sports (Coakley, 1998). For example, quarterbacks in football and catchers in baseball have traditionally been White, whereas Black players are more often found playing in the outfield in baseball and as running backs or wide receivers in football.”
“Loy and McElvogue (1970) presented the first study on racial stacking by examining the racial makeup of baseball and football in America. Their findings suggested that White players are more likely to be found in what they termed central positions (i.e., discrimination is most likely to occur at central positions in any social organization, where the most interaction occurs).”
“In this study, an assessment was made to determine whether scouting reports of college quarterbacks, centers, inside linebackers, and tight ends relied on mental descriptors of White players and physical descriptors of African American players. At a basic level, scouts are individuals raised in contemporary U.S. society with all the implied racial beliefs. Because physical and mental abilities relative to football can be extremely subjective, it follows that descriptions of athletes in various positions would differ for Whites and African Americans, based solely on the ascribed characteristic of race.…
The bad news is that Americans used more energy in 2013 than in 2012. Unchanged is the fact that US energy efficiency is still terrible. The good news is that 2013 saw more renewable energy produced!
Each year the Lawrence Livermore Labs releases an energy flow chart, which is a great infographic that displays the origin of US energy, the sectors that use that energy, and the efficiency of each sector. This year’s infographic was recently posted (click on the image to make it larger).
“Wind energy continued to grow strongly, increasing 18 percent from 1.36 quadrillion BTUs, or quads, in 2012 to 1.6 quads in 2013.”
“Natural gas prices rose slightly in 2013, reversing some of the recent shift from coal to gas in the electricity production sector.”
“Petroleum use increased in 2013 from the previous year.”
“Rejected energy [roughly energy lost to inefficiency] increased to 59 quads in 2013 from 58.1 in 2012, rising in proportion to the total energy consumed.”
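The figures quoted above are internally consistent and easy to check: growth from 1.36 to 1.6 quads is indeed about 18 percent. A quick sanity check, using only the numbers from the quotes (no data beyond them is assumed):

```python
# Sanity-checking the quoted wind-energy growth figure.
wind_2012 = 1.36   # quadrillion BTU ("quads"), 2012, from the quote above
wind_2013 = 1.6    # quads, 2013

growth_pct = (wind_2013 - wind_2012) / wind_2012 * 100
print(f"Wind growth 2012 -> 2013: {growth_pct:.0f}%")

# The rejected-energy figures quoted above can be checked the same way.
rejected_2012, rejected_2013 = 58.1, 59.0   # quads
print(f"Rejected energy rose by {rejected_2013 - rejected_2012:.1f} quads")
```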
What I enjoy about this infographic is that it highlights the rejected energy, making plain the inefficiency of US energy use. Transportation, as you can see, produces a lot of rejected energy (probably due to the inefficiency of the combustion engine). If we can’t curb our energy use (which I think we should), then we absolutely need to do a better job of finding efficiencies.…
Should we motivate concern for climate action through the wellbeing of our descendants? I argue that it is time for a change.
Michael Mann was promoting his new book The Hockey Stick and the Climate Wars last night with a lecture at the University of Wisconsin. I attended and live-tweeted it on my Twitter account @WxPhilosopher for any of you who missed it. For the most part Mann’s talk followed what has become the standard climate talk format: here’s some science we’re sure of, here’s why models are helpful, this is how the topic was politicized, we’re all doomed unless we act fast. Possibly even more cliché than the format itself is the trope with which such talks, including Mann’s, usually close: Consider the legacy of our children, and how climate change could affect them. Let’s ensure they are better off, and leave them a world in which they can flourish. I’ll call this the child trope.
I hate the child trope, and I find my own hatred of it somewhat strange. Of course I want to preserve the planet’s ability to support life, and I want humanity to flourish. So why do I have these negative emotions toward it? After hearing Mann invoke the trope, I sat down to rationalize my emotional position. I realized not only that I find the trope unconvincing, but also that it may reinforce what I think are dangerous presuppositions. I’ve listed a few of my reasons below.
Please, by all means, comment on this post. I might be a little pessimistic, and I want to know if this trope actually is effective in demographics other than those in which I reside.
How the trope works:
I take it that the child trope is one way of personalizing the harm that climate change will cause even though climate change works on long timescales. Because it isn’t us who will be hurt most by the effects of climate change, but our progeny, and because we are the cause of climate change, the child trope is relied on to make currently existing individuals feel responsible for what happens in the future. The trope creates this feeling of responsibility for yet un-actualized people through two social norms: 1) the need to provide for blood relatives, especially children, and 2) the culturally accepted desire of parents for their children to have a better life than they (the parents) had.
Reasons to question the trope:
It fails to address the link between population and consumption. The trope presupposes that the audience is going to have children. However, population growth and consumption are linked, and consumption is one problem that needs to be addressed to mitigate and adapt to climate change. One way to address consumption is to manage population, and this means seriously questioning the social norms supporting unfettered procreation. It is hard to seriously discuss decisions to procreate if what motivates our responsible action on climate is the product of that procreation.
UPDATE 4/19/14 2 PM Eastern: As some commenters have pointed out, the link between population and consumption is a complicated one. I did not mean to imply in the original post that it was a direct relationship (more people = more consumption). Please see my response to Nathan in the comments for a somewhat more considered response.
Whose kids? I’ve seen this trope invoked most frequently before a majority-white, middle/upper-class, North American college audience. There is good reason to think that the children of this audience will be fine in the future – they have the advantages of being privileged and of living in developed countries rich enough to take adaptation seriously. They may even find ways to profit from climate change. Children in less privileged countries (especially the seaside ones) are likely to be hurt more seriously, and much sooner (as in, they are already suffering climate change related effects). These are the people we should care about. But the child trope doesn’t motivate us to do so, because it is predicated on concern for blood relatives.
Wanting more and better (partially) got us into this mess. For much of the last century, the “better life for our children” meant the acquisition of wealth and goods, and led to a bigger, faster, and cheaper mentality. This drive towards easy consumption helped create the climate problem. I believe that in order to address climate change, we need to learn to be content with only what we need (or at least a lot less), and create efficiencies in providing those needs. Insofar as this trope relies on an unquestioned desire for a better life for offspring, it doesn’t steer us towards sustainable living.
The trope doesn’t seem to be effective. Is there any evidence that this trope is at all effective? It has been part of the climate discussion for as long as I can remember, and action has been slow. Can’t we do better? It was interesting to hear Michael Mann say that we need to make climate change relevant to daily life, and then invoke the child trope. Let’s hire a good marketing firm.
Why are non-actualized future individuals assumed to motivate action better than actual existing individuals? The trope presupposes a kind of selfishness: we are motivated primarily by our own interest, in this case the wellbeing of our future descendants. I think invoking this trope helps to perpetuate this selfishness, especially as the effects of climate change become visible. The most vulnerable humans are already being harmed, and the biosphere is already experiencing negative effects. Why are we still talking about abstract, non-actualized future individuals? If we aren’t willing to go beyond self-interest to help those we have never met who will suffer because of our collective actions, then the effects of climate change will be disastrous. We need to work to develop this kind of global awareness.
There is an economic counter argument. A common retort to proposed action on climate change is that it is too costly. The US and other privileged countries benefitted the most from burning the fossil fuels that largely created the climate change problem.…
Toronto. I could hear the moans of Torontonians waking up and looking out their windows only to realize it was again cold, and again, snow would ruin their morning TTC ride. This morning reminded me of April 1, 1997, when, as a kid in Boston, I woke up to almost 30 inches of snow on the ground – more in that one night than in the rest of that winter. I didn’t have a morning commute. Schools were closed. I liked the snow then. This year, though, no one is happy to see the snow again. For many North Americans, this winter has felt cold, long, and intolerable.
These feelings about the weather matter. Research shows that the way we perceive weather affects the way we respond to problems like climate change. Simply put, the perception that local weather is at odds with claims regarding the climate (weather is cold but climate is warming), affects the strength of belief or likelihood to act on climate issues.
The purpose of this post is twofold: 1) to convince you that, from a certain perspective, this winter wasn’t the long, cold, and intolerable one you might have experienced (OK, maybe if you live in Wisconsin), and 2) to buy myself time to put together a proper post on the pop-explanation for this winter, the polar vortex.
Where was it bad? Middle-to-Eastern US and Canada
If you lived in the middle of the US or Canada, you felt cold this winter.
For example, Madison, Wisconsin (article here) had its 11th coldest winter on record, with an average temperature of 13 degrees F and (at least) 81 consecutive days with at least 1 inch of snow on the ground (the 4th longest stretch in recorded history). The US as a whole had its 34th coldest winter (of 119 recorded winters). Toronto had a record 101 consecutive days with 1 cm of snow on the ground, its coldest average temperature in 20 years (3rd coldest in 50 years), and 35 extreme temperature warnings. Great Lakes ice coverage was near an all-time high.
But don’t think that because you were cold, it was a cold winter.
This winter, from a global perspective, was warm (according to NOAA global analysis). Europe was warm. Denmark reported its fifth warmest winter since records began in 1874, Germany its fourth warmest, and Austria its second.
Globally, this winter’s (Dec-Feb) land records indicated it was the 10th warmest (2007 was the warmest) and the 126th coolest (1893 was the coldest). In the northern hemisphere, this winter was the 11th warmest and 125th coolest.
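The warm and cool rankings are two readings of a single ordered list: with N winters on record, the n-th warmest is the (N − n + 1)-th coolest. Both quoted pairs imply N = 135 years of records (the exact span of NOAA's series is my inference from the arithmetic, not stated in the post):

```python
# Converting between "n-th warmest" and "m-th coolest" rankings
# over a single record of N winters.
def coolest_rank(warmest_rank: int, n_records: int) -> int:
    """In a record of n_records winters, the k-th warmest is the
    (n_records - k + 1)-th coolest."""
    return n_records - warmest_rank + 1

N = 135  # implied by both quoted pairs: 10 + 126 - 1 = 11 + 125 - 1 = 135
print(coolest_rank(10, N))  # global land: 10th warmest = 126th coolest
print(coolest_rank(11, N))  # northern hemisphere: 11th warmest = 125th coolest
```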
Combined land and ocean surface temperatures for this winter were the eighth highest on record, 0.57 degrees C above the 20th-century average. What about sea ice? Arctic sea ice extent – the loss of which is thought to affect climate – was at its fifth lowest.
It is easy to forget that everywhere is not like where we are. Please keep in mind that the weather where you live is not an indicator of the global state of the atmosphere.
Like everyone else, I have become obsessed with the disappearance of Malaysian Airlines flight MH370. When I read that the flight lost contact on March 8, I assumed that it would be found crashed into the ocean in a matter of days if not hours. Nearly two weeks later, people are starting to wonder whether it will ever be found.
There is no shortage of theories about what happened to the flight. Pilot suicide seems to be the most likely answer, but there is scant evidence of motive. Terrorist hijacking is an obvious possibility, perhaps by the Taliban or Uighurs seeking to strike back at China, but no groups have claimed responsibility. Piracy is a possibility; the list price of a Boeing 777 is over $200 million. Pilot Chris Goodfellow claims that an electrical fire is the most likely cause, and many find this explanation compelling, but it has difficulty explaining the several course and altitude changes made by the flight. Similarly, Australian pilot Desmond Ross argues that the flight could have depressurized, explaining why the plane first rapidly descended; he then argues that errors induced by the depressurization could explain the plane’s other maneuvers. Any, or none, of these could be true.
As theories have proliferated and the official search area has widened to a significant portion of the Earth’s surface, I have started to wonder whether prediction markets might help to locate the missing flight. Prediction markets are similar to stock markets, but the traded contracts are predictions rather than shares of a corporation. Contracts in prediction markets have a payoff (say $1) if the associated prediction is correct, and no payoff if it is incorrect. Such markets have proven remarkably powerful in predicting the outcomes of certain types of events, such as political elections.
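The mechanics can be sketched in a few lines. Because a contract pays $1 if its prediction comes true and $0 otherwise, its trading price can be read as the market's probability for that prediction; contracts on each candidate search zone would yield a probability map for the search. The zone names and prices below are entirely hypothetical, for illustration only:

```python
# Minimal sketch: reading a probability map off prediction-market prices.
# A contract pays $1 if the wreckage is found in its zone, $0 otherwise,
# so its price approximates the market's probability for that zone.
# Zone names and prices here are invented, not real market data.

contract_prices = {           # price in dollars of a $1-payoff contract
    "south_indian_ocean": 0.46,
    "bay_of_bengal": 0.22,
    "andaman_sea": 0.17,
    "central_asia_land": 0.09,
    "other": 0.06,
}

# Prices of an exhaustive, mutually exclusive set of contracts should sum
# to about $1; normalizing corrects for small market frictions.
total = sum(contract_prices.values())
probabilities = {zone: p / total for zone, p in contract_prices.items()}

# Rank candidate zones by the market's implied probability.
ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
for zone, prob in ranked:
    print(f"{zone}: {prob:.1%}")
```

Search planners could then allocate effort in proportion to these implied probabilities, much as Bayesian search theory allocates effort over a prior probability map.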
In May 1968, the U.S. submarine Scorpion disappeared on its way back to Newport News after a tour of duty in the North Atlantic. Although the navy knew the sub’s last reported location, it had no idea what happened to the Scorpion, and only the vaguest sense of how far it might have traveled after it had last made radio contact. As a result, the area where the navy began searching for the Scorpion was a circle twenty miles wide and many thousands of feet deep. You could not imagine a more hopeless task(xx).
The party line of climate change skeptics these days is that global warming has paused, or even reversed, in the last 15 years. According to the Nongovernmental Panel on Climate Change, “Global temperatures stopped rising 15 years ago despite rising levels of carbon dioxide, the invisible gas the IPCC claims is responsible for causing global warming.” Typical denialist refusal to accept the facts, right? Perhaps not! A recent paper in Geophysical Research Letters explains, “Although the Mount Pinatubo eruption in 1991 caused a short-term reduction in TOA radiation, increasing greenhouse gases should have led to increasing warming. However, sea surface temperature (SST) increases stalled in the 2000s and this is also reflected in upper ocean heat content (OHC) for the top 700 m in several analyses.”1 Despite this apparent anomaly, however, climate scientists have not been jumping ship from their consensus position that anthropogenic global warming is occurring.
Skeptics see this lack of response as evidence of liberal bias or even conspiracy, but I think there is a much more compelling explanation: anthropogenic warming is part of the hard core of climate science. That is, the AGW claim is not subject to revision, and so when anomalies occur—when observations fail to meet predictions—other components of climate science must be revised to preserve it.