Do sports perpetuate or help fight discrimination?

I look at an article from sports sociology that suggests descriptions of athletes might perpetuate inaccurate stereotypes.

Like many other people, I was shocked to hear the alleged tape recording of Donald Sterling saying that his girlfriend should not take photographs with black people, or bring them to basketball games (but she can bring them to bed). I don’t follow sports very closely anymore; I didn’t know that Sterling, at least according to the Guardian, made millions as a landlord through racist housing policies. Maybe we should have seen this coming.

Some have argued that the emphasis on Sterling’s comments obscured the larger, more harmful actions he has taken (the Guardian suggests this, as does the link below). In particular, they point to the housing discrimination he was accused of perpetrating as an owner of hundreds of properties in the Los Angeles area. Housing discrimination is terrible and, like all forms of discrimination, should not be tolerated.

However, many of the arguments that the housing discrimination was what was “actually harmful” imply that Sterling’s other actions were largely inconsequential. I thought this position might stem from a belief that discrimination is not significant in the realm of sport. It might even be thought that sports are a way for those frequently discriminated against to get ahead, since, on the face of it, on-field performance would seem to be the dominant driver of athletic success. There are obvious reasons to resist at least the first part of this picture: racism against black soccer players in Spain, for example, is so pervasive that the players can plan their responses in advance.

Still, I wondered: do sports perpetuate or help fight discrimination? There is obviously no cut-and-dried answer. The question is too broad to be answered directly: there are too many different sports, and too many ways of thinking about discrimination. But in mulling it over, I took a look at the academic literature on the sociology of sport. I was surprised that this literature is not more heavily cited in recent discussions of racism in sports.

Here I’m going to share excerpts from a paper entitled “Professional Football Scouts: An Investigation of Racial Stacking” by J. R. Woodward (2004). The study analyzes draft guides that assess the suitability of college athletes for the NFL draft, paying particular attention to descriptions of the perceived physical and mental capabilities of white and African American players. I quote this paper because the study is interesting, but also because it has a fairly detailed literature review covering some interesting related work. Given that the sports media seems to perpetuate the messages discussed in this 2004 study, broadcasting them to millions of people, I would guess the messages we receive about sports and athletes carry more bias than we immediately realize.

Literature Review

“Coakley (1998) notes, there are roughly 20 times more African American physicians and lawyers than top professional athletes; nor have most sports truly integrated to allow for equal participation and rewards between the races. In 1997, the 50th anniversary of Jackie Robinson joining Major League Baseball, his old team the Dodgers had the exact same number of American-born Blacks on the opening day roster as they did in 1947: one.”

“Whites dominate most sports at the collegiate and high school level; football, basketball, track, and baseball—sports where Whites are underrepresented—make up only 4 out of at least 40 sports played competitively.”

“The belief that sport has been a source of upward mobility for African Americans has been rebutted in previous research and is not the object of this project (see Sailes, 1998; and Smith, 1993, 1995). What is of interest, however, is the tenacity of this view. Personal beliefs about race and sport are often solidified when society at large seems to share and reinforce these beliefs, regardless of their veracity.”

“One manifestation of our “race logic” (how we come to understand racial phenomena in society) is the link between race and athletics, principally the belief in African American athletic superiority. Unfortunately, concomitant with this view has been the conviction of mental inferiority; i.e., the “dumb jock” stereotype (Hoberman, 1997; Eitzen, 1999). American history is replete with academic, intellectual, and social discussions of the primitive nature of Blacks, whose supposed strength, power, and sexual aggression made them appear almost animalistic, an assertion strengthened by their perceived lack of innate cognitive abilities (Mead, 1985).”

“Racial ideology, then, was situated in a particular, disparaging view of African Americans as physical, not mental beings. Athletics was just one of many endeavors in which this view was manifested (Coakley, 1998).”

“Racial stacking is the over- or underrepresentation of players of certain races in particular positions in team sports (Coakley, 1998). For example, quarterbacks in football and catchers in baseball have traditionally been White, whereas Black players are more often found playing in the outfield in baseball and as running backs or wide receivers in football.”

“Loy and McElvogue (1970) presented the first study on racial stacking by examining the racial makeup of baseball and football in America. Their findings suggested that White players are more likely to be found in what they termed central positions (i.e., discrimination is most likely to occur at central positions in any social organization, where the most interaction occurs).”

The Study

“In this study, an assessment was made to determine whether scouting reports of college quarterbacks, centers, inside linebackers, and tight ends relied on mental descriptors of White players and physical descriptors of African American players. At a basic level, scouts are individuals raised in contemporary U.S. society with all the implied racial beliefs. Because physical and mental abilities relative to football can be extremely subjective, it follows that descriptions of athletes in various positions would differ for Whites and African Americans, based solely on the ascribed characteristic of race.…

Infographic: Americans use more energy in 2013 than in 2012

The bad news is that Americans used more energy in 2013 than in 2012. Unchanged is the fact that US energy efficiency is still terrible. The good news is that 2013 saw more renewable energy produced!

Each year the Lawrence Livermore Labs releases an energy flow chart, which is a great infographic that displays the origin of US energy, the sectors that use that energy, and the efficiency of each sector. This year’s infographic was recently posted (click on the image to make it larger).

Lawrence Livermore Labs Energy Infographic

Some highlights from the lab’s news release:

  • “Wind energy continued to grow strongly, increasing 18 percent from 1.36 quadrillion BTUs, or quads, in 2012 to 1.6 quads in 2013.”
  • “Natural gas prices rose slightly in 2013, reversing some of the recent shift from coal to gas in the electricity production sector.”
  • “Petroleum use increased in 2013 from the previous year.”
  • “Rejected energy [roughly energy lost to inefficiency] increased to 59 quads in 2013 from 58.1 in 2012, rising in proportion to the total energy consumed.”
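As a sanity check on the first bullet, the quoted wind-energy growth is easy to verify from the quad figures themselves. A quick back-of-the-envelope calculation in Python, using only the numbers from the release above:

```python
# Wind energy figures from the LLNL release (in quads)
wind_2012 = 1.36
wind_2013 = 1.6

growth_pct = (wind_2013 - wind_2012) / wind_2012 * 100
print(round(growth_pct, 1))  # 17.6, which the release rounds up to 18 percent
```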

What I enjoy about this infographic is that it highlights the rejected energy, which makes plain just how inefficient US energy use is. Transportation, as you can see, produces a lot of rejected energy (probably due to the inefficiency of the combustion engine). If we can’t curb our energy use (and I think we should), then we absolutely need to do a better job finding efficiencies.…

Children are NOT the future.

Should we motivate concern for climate action through the wellbeing of our descendants? I argue that it is time for change.

Michael Mann was promoting his new book The Hockey Stick and the Climate Wars last night with a lecture at the University of Wisconsin. I attended and live-tweeted it on my twitter account @WxPhilosopher for any of you who missed it. For the most part Mann’s talk followed what has become the standard climate talk format: here’s some science we’re sure of, here’s why models are helpful, this is how the topic was politicized, we’re all doomed unless we act fast. Possibly even more cliché than the format itself is the trope with which such talks, including Mann’s, usually close: Consider the legacy of our children, and how climate change could affect them. Let’s ensure they are better off, and leave them a world in which they can flourish. I’ll call this the child trope.

I hate the child trope, and I find my own hatred of it somewhat strange. Of course I want to preserve the planet’s ability to support life, and I want humanity to flourish. So why do I have these negative emotions toward the trope? After hearing Mann invoke it, I sat down to rationalize my emotional position. I realized not only that I find the trope uncompelling, but also that it may reinforce what I think are dangerous presuppositions. I’ve listed a few of my reasons below.

Please, by all means, comment on this post. I might be a little pessimistic, and I want to know if this trope actually is effective in demographics other than those in which I reside.

How the trope works:

Think of the children!

I take it that the child trope is one way of personalizing the harm that climate change will cause, even though climate change works on long timescales. Because it isn’t us who will be hurt most by the effects of climate change, but our progeny, and because we are the cause of climate change, the child trope is relied on to make currently existing individuals feel responsible for what happens in the future. The trope creates this feeling of responsibility for as-yet un-actualized people through two social norms: 1) the need to provide for blood relatives, especially children, and 2) the culturally accepted desire of parents that their children have a better life than they (the parents) had.

Reasons to question the trope:

It fails to address the link between population and consumption. The trope presupposes that the audience is going to have children. However, population growth and consumption are linked, and consumption is one problem that needs to be addressed to mitigate and adapt to climate change. One way to address consumption is to manage population, and this means seriously questioning the social norms supporting unfettered procreation. It is hard to seriously discuss decisions to procreate if what motivates our responsible action on climate is the product of that procreation.

UPDATE 4/19/14 2 PM Eastern: As some commenters have pointed out, the link between population and consumption is a complicated one. I did not mean to imply in the original post that it is a direct relationship (more people = more consumption). Please see my response to Nathan in the comments for a more considered response.

Whose kids? I’ve seen this trope invoked most frequently before majority-white, middle/upper-class, North American college audiences. There is good reason to think that the children of such an audience will be fine in the future – they have the advantages of privilege, and they live in developed countries rich enough to take adaptation seriously. They may even find ways to profit from climate change. Children in less privileged countries (especially the sea-side ones) are likely to be hurt more seriously, and much sooner (as in, they are already suffering climate-change-related effects). These are the people we should care about. But the child trope doesn’t motivate us to do so, because it is predicated on concern for blood relatives.

Wanting more and better (partially) got us into this mess. For much of the last century, a “better life for our children” meant the acquisition of wealth and goods, and led to a bigger, faster, and cheaper mentality. This drive towards easy consumption helped create the climate problem. I believe that in order to address climate change, we need to learn to be content with only what we need (or at least a lot less), and to create efficiencies in providing those needs. Insofar as it relies on an unquestioned desire for a better life for our offspring, the child trope doesn’t steer us towards sustainable living.

The trope doesn’t seem to be effective. Is there any evidence that this trope works at all? It has been part of the climate discussion for as long as I can remember, and action has been slow. Can’t we do better? It was interesting to hear Michael Mann say that we need to make climate change relevant to daily life, and then invoke the child trope. Let’s hire a good marketing firm.

Why are non-actualized future individuals assumed to motivate action better than actually existing individuals? The trope presupposes a kind of selfishness: we are motivated primarily by our own interests, in this case the wellbeing of our future descendants. I think invoking this trope helps to perpetuate this selfishness, especially as the effects of climate change become visible. The most vulnerable humans are already being harmed, and the biosphere is already experiencing negative effects. Why are we still talking about abstract, non-actualized future individuals? If we aren’t willing to go beyond self-interest to help those we have never met who will suffer because of our collective actions, then the effects of climate change will be disastrous. We need to work to develop this kind of global awareness.

There is an economic counter argument. A common retort to proposed action on climate change is that it is too costly. The US and other privileged countries benefitted the most from burning the fossil fuels that largely created the climate change problem.…

C’mon…it wasn’t that bad: Winter 2014

How cold, nasty, and intolerable was the winter of 2014?

April Fool’s Day storm 1997 over northeast USA.

I woke up this morning to a dusting of whiteness out my window: there was snow in Toronto. I could hear the moans of Torontonians waking up and looking out their windows only to realize it was again cold, and again, snow would ruin their morning TTC ride. This morning reminded me of April 1, 1997. As a kid in Boston, I woke up that day to almost 30 inches of snow on the ground – more in that one night than in the rest of that winter. I didn’t have a morning commute. Schools were closed. I liked the snow then. This year, though, no one is happy to see the snow again. For many North Americans, this winter has felt cold, long, and intolerable.

These feelings about the weather matter. Research shows that the way we perceive weather affects the way we respond to problems like climate change. Simply put, the perception that local weather is at odds with claims about the climate (the weather is cold but the climate is warming) affects the strength of belief in, or likelihood of acting on, climate issues.

The purpose of this post is twofold: 1) to convince you that, from a certain perspective, this winter wasn’t the long, cold, and intolerable one you might have experienced (OK, maybe it was if you live in Wisconsin), and 2) to buy myself time to put together a proper post on the pop explanation for this winter: the polar vortex.

Where was it bad? Middle-to-Eastern US and Canada

If you lived in the middle of the US or Canada, you felt cold this winter.

The blue regions are colder than average (relative to the 1981-2010 mean), and the red warmer than average. You can still see lots of red. In fact, global land and ocean records reveal an above-average winter.

For example, Madison, Wisconsin (article here) had its 11th coldest winter on record, with an average temperature of 13 degrees F and (at least) 81 consecutive days with at least 1 inch of snow on the ground (the 4th longest stretch in recorded history). The US as a whole had its 34th coldest winter of 119 on record. Toronto had a record 101 consecutive days with 1 cm of snow on the ground, its coldest average temperature in 20 years (3rd coldest in 50 years), and 35 extreme-temperature warnings. Great Lakes ice coverage was near an all-time high.

But don’t think that because you were cold, it was a cold winter.

This winter, from a global perspective, was warm (according to NOAA global analysis). Europe was warm. Denmark reported its fifth warmest winter since records began in 1874, Germany its fourth warmest, and Austria its second.

This image breaks the anomaly up into percentage departures from average. You can think of it as the same plot as above, but scaled to each region’s natural variability. Notice that the red regions significantly outnumber the blue, as do the regions of dark red. Thanks, Melanie, for pointing these images out to me!

Globally, this winter’s (Dec-Feb) land records indicate it was the 10th warmest (2007 was the warmest) and the 126th coolest (1893 was the coldest). In the northern hemisphere, this winter was the 11th warmest and 125th coolest.

Combined land and ocean surface temperatures for this winter were the eighth highest on record, at 0.57 degrees C above the 20th-century average. What about sea ice? Arctic sea ice extent – the loss of which is thought to affect climate – was at its fifth lowest.

It is easy to forget that everywhere is not like where we are. Please keep in mind that the weather where you live is not an indicator of the global state of the atmosphere. 

 …

Twice Is Nice! Double counting evidence in climate model confirmation

Charlotte Werndl (LSE) is speaking at Western University on Monday (the talk will be live-streamed) on evidence and climate change modeling. Having recently read her paper (with co-author Katie Steele) entitled “Climate Models, Calibration, and Confirmation” (CMCC) I thought I would post about it. The paper focuses on the use of evidence in confirming climate models with particular attention paid to double counting, which in this context means using the same evidence for two purposes (more on this use of the term later). I believe the paper is an important one, as it nicely separates concerns about double counting from other, related, confirmatory issues, and I think successfully shows where a form of double counting is legitimate. Still, despite being a casual fan of Bayesianism, I wonder if it gives us what we want in this particular circumstance. I can’t cover all the threads of argument made in the paper, so here I’ll simply discuss what double counting is, why we should worry about it, and how Steele and Werndl (S+W) argue that it could be legitimate in some circumstances.

What’s the worry about double counting? Climate change models typically go through a process called tuning (also sometimes called calibration). Tuning sets the values of parameters in the model that represent highly uncertain processes for which there are few empirical observations. The parameters are treated as “free parameters” that can take on a wide range of values. The values that result in the best fit with observations during tuning are the values chosen for the model. For example, if scientists are interested in global mean surface temperature (GMST), they would vary the parameter values of some uncertain processes until the model’s output of GMST closely matched GMST observations. The model, with these parameter values, would then be used to make climate projections.
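To make the tuning step concrete, here is a toy sketch of it in Python. This is not any real climate model: the “observations” and the single free parameter (a trend) are invented purely for illustration. The point is the procedure: sweep the free parameter and keep the value whose output best matches the observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "observations" standing in for something like a GMST record
t = np.arange(10)
obs = 0.5 * t + rng.normal(0, 0.2, t.size)

def base_model(trend):
    """A toy base model whose one free parameter is an uncertain trend."""
    return trend * t

# Tuning: sweep the free parameter and keep the value that best fits the obs
candidates = np.linspace(0.0, 1.0, 101)
errors = [np.sqrt(np.mean((base_model(c) - obs) ** 2)) for c in candidates]
tuned_trend = candidates[int(np.argmin(errors))]
```

The tuned value lands near the 0.5 that generated the data; a real tuning exercise works the same way, just with far more parameters and far more expensive model runs.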

The worry is that one way climate models are evaluated is by comparing their results to observations of some historical period; if scientists want to know if a model is adequate for the purpose of predicting GMST, they compare the model output to historical GMST observations. This agreement is supposed to build confidence in (confirm) the model’s ability to simulate the desired quantity. It is typically believed that to gain any confidence in the model at all, the simulation output must be compared to a different set of observations than the one that was used for tuning. After all, the observations used for tuning wouldn’t provide any confidence, because the model was designed to agree with them!

To deal with double counting, CMCC adopts an explicitly Bayesian view of confirmation. The Bayesian view adopted is necessarily contrastive and incremental: a model is confirmed only relative to other models, and the result of confirmation is greater confidence in the model for some particular purpose (not a claim that the model is a correct representation or the truth). Confirmation of one model relative to another can be tracked with the likelihood ratio, which is the probability of the evidence conditional on the first model divided by the probability of the evidence conditional on the second model. If the ratio is greater than 1, the first model is confirmed relative to the second.
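A minimal numeric illustration of that ratio, with made-up likelihoods (these numbers are not from the paper):

```python
# Hypothetical likelihoods of the same evidence E under two rival models
p_E_given_M1 = 0.30
p_E_given_M2 = 0.10

likelihood_ratio = p_E_given_M1 / p_E_given_M2  # 3.0
# A ratio above 1 means E confirms M1 relative to M2; below 1, the reverse
```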

So here is a simple way in which double counting is legitimate on the Bayesian view presented in CMCC. Imagine tuning some model M whose parameters have not yet been set (S+W call this a base model). In order to tune it, we create several different instances of the base-model, all with different parameter values: M1, M2, and so on. We compare the results of each model instance to observations and select the best fitting instance. This is an example of double counting in the following sense: the same data is used to both confirm and tune the model. This is tuning, because we have selected parameter values by comparing outputs to observations, and it is confirmation, because we have gained greater confidence in one instance over all the other instances in light of the data. S+W call this double-counting 1 and it is fairly uncontroversial.
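Here is a toy version of double-counting 1, with invented observations and instance predictions (the Gaussian error likelihood is my illustrative choice, not anything from S+W): the same data picks out the best instance (tuning) and raises our confidence in it relative to its rivals (confirmation).

```python
import math

obs = [1.0, 2.1, 2.9]  # invented observations

def likelihood(pred, obs, sigma=0.5):
    """Probability density of the observations given an instance's predictions,
    assuming independent Gaussian errors (an illustrative choice)."""
    ll = 1.0
    for p, o in zip(pred, obs):
        ll *= math.exp(-((p - o) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    return ll

# Instances of base model M, differing only in their free-parameter values
instances = {"M1": [0.5, 1.0, 1.5], "M2": [1.0, 2.0, 3.0], "M3": [2.0, 4.0, 6.0]}
likelihoods = {name: likelihood(pred, obs) for name, pred in instances.items()}
tuned = max(likelihoods, key=likelihoods.get)  # tuning and confirmation at once
```

Selecting the maximum-likelihood instance is exactly the “select the best fitting instance” step described above, and on the contrastive Bayesian view it is simultaneously confirmation of that instance relative to the others.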

Double-counting 2 seeks to confirm two different base-models (M and L, let’s say), but the situation is much the same. The Bayesian apparatus is more complex, and I’ll leave it to my readers to seek out the details in the paper itself. The evaluation still deals with likelihood ratios; it is just that the likelihood ratio needs to take into account all the instances of base-models M and L, as well as our prior probabilities regarding them. The likelihood ratio becomes a weighted sum of the probability of the evidence given each model instance for one base-model over the other. Double-counting 2 is legitimate in two situations: 1) the average fit with the data for one base-model’s instances is higher than the other’s (assuming the priors for each model were equal), and/or 2) the base-models have equivalent fit with the observations, but one had a higher prior probability (was more plausible). An example of (1): base-model M is tuned to the observations, and on average its instances are closer to the observations than model L’s. This results in a greater likelihood for M compared to L, and thus confirms M relative to L. Again, even in this situation tuning “can be regarded as the same process as confirmation in the sense that the evidence is used to do both calibration and confirmation simultaneously” (p. 618).
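Double-counting 2 can be sketched with made-up numbers. Assuming equal priors over the base models and uniform weights over the instances within each (both assumptions mine, for illustration), the comparison reduces to a ratio of weighted-average likelihoods:

```python
# Likelihoods of the evidence under each instance of base models M and L (invented)
lik_M = [0.30, 0.20, 0.10]
lik_L = [0.15, 0.10, 0.05]

# Uniform prior weights over the instances within each base model
w = 1 / 3
avg_M = sum(w * l for l in lik_M)  # 0.20
avg_L = sum(w * l for l in lik_L)  # 0.10

ratio = avg_M / avg_L  # 2.0 > 1: the evidence confirms M relative to L
```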

Quick Comments

S+W do a great job distinguishing the two kinds of double counting and separating them from other concerns about model tuning and climate projections (this work is done in the second half of the paper, not discussed here). They seem right, given the view of confirmation they hold, that confirmation and tuning can be done with the same evidence. After all, double counting in S+W’s sense is a sophisticated way of saying that the model that fits the data best is more likely.

A few issues worth thinking about:

1) Double counting here is a bit of a misnomer.…

Snowquester – A perfect storm for HPS/STS

If you were following the weather recently, you know about the Snowquester. There was very little snow in Washington, DC, and lots of snow in Boston and the Northeast. While this may not sound surprising, it blindsided weather forecasters. Forecasters predicted heavy, wet snow for DC, which caused government services, municipal services, and schools to shut down before the flakes even began to fall. When the storm came, only a few inches appeared. The forecast was a bust, and quite costly to the city. In the Northeast, Boston kept schools open based on a prediction of 6-10 inches of snow, but then received almost 30 inches of the white stuff. Another bust for forecasters. What exactly happened?

The finger pointing began almost immediately and almost everyone and everything that could be blamed was. The result, however, was a perfect storm for those of us that study HPS and STS.…

Big. Bad. Big data?

A few days ago Nassim N. Taleb wrote an opinion piece for Wired warning us to “beware the big errors of big data.” If you haven’t heard of it, “big data” is becoming a buzz term in the media and the sciences, particularly the social sciences, for the strategy of gathering massive amounts of data and then processing them with statistical tools. Taleb paints a picture of big data as extremely manipulable, so much so that scientists cannot resist the urge to employ it uncritically in support of their favorite theories:…

Can you predict the weather?

All of my friends raise an eyebrow when they hear that I’m a member of a competitive weather forecasting team. What is a weather forecasting team? And what possible qualifications could I, a philosopher, have to predict the weather?

This is the first year that the University of Toronto has had a forecasting team. Started by graduate students studying atmospheric physics, the team competes in the WxChallenge, a collegiate forecasting competition against approximately 60 other North American universities. The competition involves predicting the high temperature, low temperature, highest sustained wind speed, and precipitation for a particular observation station 24 hours in advance. The next day, predictions are compared to the observations made at the station; the greater the discrepancy, the more penalty points a forecaster earns. The few thousand forecasters who compete are then ranked according to these points, and the top 64 go into a head-to-head forecasting tournament at the end of the season. To add some variety, the weather station changes every two weeks, continually presenting forecasters with new challenges.…
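The scoring scheme described above can be sketched as error points accumulated per forecast. The four fields are the ones from the competition, but the equal weighting and the numbers below are my simplification, not the actual WxChallenge formula:

```python
# Error-point scoring sketch: the farther a forecast is from the verifying
# observation, the more penalty points it earns; lower season totals rank higher.
def error_points(forecast, observed):
    return sum(abs(forecast[k] - observed[k])
               for k in ("high", "low", "wind", "precip"))

forecast = {"high": 10, "low": 2, "wind": 15, "precip": 0.1}  # hypothetical
observed = {"high": 12, "low": 0, "wind": 18, "precip": 0.0}
points = error_points(forecast, observed)  # 7.1
```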

Science and the Media: Upside-Down Pyramid Thinking

This is the second post to appear in our new section called “quick thoughts.” The aim of this section is to raise an issue for comment in more detail than the weekly roundup does, but in a more succinct format than our longer 1000 word posts. We hope that this section will turn the spotlight onto those that choose to comment, rather than the author of the post.

I’ve been reading Naomi Oreskes’ book Merchants of Doubt, which I will review for Spontaneous Generations and post here on the Bubble Chamber as well. I will save my comments for that review, but the book, and a recent lunch conversation with philosophers and HPSers, has me thinking a lot about how the media reports on events within the scientific community.

While I was a master’s student, I was course instructor for “Phil120 – Introduction to Logic,” which was interestingly enough a required course for the school of journalism (I have a hot chili on ratemyprofessor.com, in case you were wondering). The second and third year journalism students, who constituted a majority of my class, did not understand why they needed to take the course, and they were vocal about it. As a response to this, and to low marks across the board, I gave an extra credit assignment: Use your journalism skills and interview a professor or administrator responsible for the inclusion of this class in your course requirements. Respond to this interview with your own arguments, either for or against the position presented.…

How to pursue science from the humanities?

You may notice that this article appears in a new section called “quick thoughts.” The aim of this section is to raise an issue for comment in more detail than the weekly roundup does, but in a more succinct format than our longer 1000 word posts. We hope that this section will turn the spotlight onto those that choose to comment, rather than the author of the post.

There has been a lot of talk around my department about curriculum changes, and it has me thinking about the ideal HPS curriculum. I surfed around the web a bit looking at various departmental websites. My program, as well as some others, seems to be oriented towards science undergrads who have decided to enter the humanities. The more recent entering classes in my program have not fit this description, as it seems more and more students are coming from the humanities instead of the sciences. Science, no matter what the field, takes an immense amount of time to learn. It seems that there are not as many accommodations made for the humanities student wanting to learn science as there are for the science student wanting to enter the humanities – there is just not a push to train humanities students in the sciences. Where is a humanities graduate student going to get the time to train him or herself in science? This seems to be a problem with the HPS curriculum.

From what I hear this problem is endemic in history and philosophy of science. We all want to know more science and math; yet, we also want to graduate without taking on more debt than is necessary. Maybe I am just blowing the whole thing out of proportion. However, I bet those of us who enter the field from the humanities rather than the sciences feel more constrained within the field.

I hear about this problem in different fields of study as well. At the Canadian Science Policy Conference that I recently attended, many speakers pointed out the need for government representatives to have a knowledge of how science works. At the Canadian Congress of the Humanities and Social Sciences, a researcher’s survey data demonstrated that the most requested resource by public school science teachers in one Canadian province was not money or lab equipment, but rather “knowledge of science.” I am sure this problem also appears for those looking to work at the intersection between science and business, policy, or communications. It feels as if those in HPS need to be full time science students in addition to being full time humanities students. In a way there are obvious answers to this problem for the humanities grad student: either learn the material as you complete your degree, or take time off for intensive study and return to your degree later. But both of these options are easier said than done, especially if one is trying to avoid student debt.

Have any of our readers successfully navigated this problem and have advice? Are there programs that could help a humanities student further embrace his/her love of science and math? Should one just let these topics pass him/her by and concentrate on problems of a non-technical nature? Or should HPS departments be more attuned to this desire?…