Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of evolutionary mechanisms within biology, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led many working scientists to reject Darwinian principles around the turn of the 20th century (see Bowler’s The Eclipse of Darwinism). Educational research studies, such as our own “Understanding randomness and its impact on student learning,” reinforce the point that ideas involving stochastic processes are relevant to evolutionary, as well as cellular and molecular, biology and are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, and that they play a key role in a wide range of molecular- and cellular-level processes, including the origin of various diseases, particularly cancer (see “Cancer is partly caused by bad luck”)(1).

All of which leaves the critical question, at least for educators, of how best to teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals,” on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools.”

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather to develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.

There are a number of educational strategies for teaching evolutionary processes, depending in part on one’s disciplinary perspective. Here I consider just one, based on my background in cell and molecular biology. Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.” It is one of a number of recently developed web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend the range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals© / coreBio course (with an evolving text available here), we originally used the observation that members of our own suborder of primates, the Haplorhini or dry-nosed primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet; without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals© we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral population of the Haplorhini lost the L-gulonolactone oxidase (GULO) gene (see OMIM) needed for vitamin C synthesis. The remnant of the GULO gene found in human and other Haplorhini genomes is mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost to mutation, it is unlikely, although not impossible, that a subsequent mutation will repair it. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are to fix it. As the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nosed) primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
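To get a feel for how drift alone can eliminate a variant that nothing is selecting against, here is a minimal simulation sketch, assuming a haploid Wright-Fisher model; the population size, starting frequency, and replicate count are illustrative, not data:

```python
# A minimal sketch of genetic drift: a neutral allele (think "functional
# GULO") drifting in a small population under a haploid Wright-Fisher
# model. All parameters are illustrative.
import random

def drift_until_fixed(pop_size=50, freq=0.5, seed=None):
    """Resample the allele frequency each generation until the allele
    is lost (0.0) or fixed (1.0); return (generations, final_freq)."""
    rng = random.Random(seed)
    generations = 0
    while 0.0 < freq < 1.0:
        # Each of the pop_size gene copies in the next generation is
        # drawn at random from the current generation's pool.
        carriers = sum(1 for _ in range(pop_size) if rng.random() < freq)
        freq = carriers / pop_size
        generations += 1
    return generations, freq

# Run many replicate populations: with no selection at all, the
# functional allele disappears entirely in roughly half of them.
losses = sum(1 for i in range(1000) if drift_until_fixed(seed=i)[1] == 0.0)
print(f"allele lost in {losses}/1000 small populations")
```

With no selection in the model, a neutral allele’s fate is decided entirely by sampling noise; the smaller the population, the sooner it is decided.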

At this point we can start to use Genomicus to examine the results of evolutionary processes (see a YouTube video on using Genomicus)(3). In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they were the same size (they are not), different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenetic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes in other species) can be found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).
We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, which appear to have diverged from other eukaryotes about 1,500 million years ago (abbreviated Mya). Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are also (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggest that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.
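To make the logic of synteny concrete, here is a toy sketch of the underlying comparison: finding runs of genes that appear consecutively, and in the same order, in two chromosome segments. The gene names and orders below are hypothetical stand-ins; real analyses such as Genomicus’s work from curated ortholog maps.

```python
# Toy synteny detection: report maximal runs of genes that occur
# consecutively, in the same order, in both segments. Gene orders
# below are invented for illustration.
def shared_runs(segment_a, segment_b, min_len=2):
    """Return runs (length >= min_len) common to both gene orders."""
    runs, i = [], 0
    while i < len(segment_a):
        if segment_a[i] in segment_b:
            j = segment_b.index(segment_a[i])
            k = 0
            while (i + k < len(segment_a) and j + k < len(segment_b)
                   and segment_a[i + k] == segment_b[j + k]):
                k += 1
            if k >= min_len:
                runs.append(segment_a[i:i + k])
            i += max(k, 1)
        else:
            i += 1
    return runs

mouse = ["GENE1", "GENE2", "GULO", "GENE3", "GENE4"]  # hypothetical order
human = ["GENE1", "GENE2", "GENE3", "GENE4"]          # GULO absent

print(shared_runs(mouse, human))
# -> [['GENE1', 'GENE2'], ['GENE3', 'GENE4']]: conserved gene order,
#    broken exactly at GULO -- the pattern described below for the
#    Haplorhini
```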

Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – Whoa!!! What gives? The explanation is, it turns out, rather simple: because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they? We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? This region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily related) chromosomes of other Haplorhini primates.

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships among the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share the same syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region. Similarly, we can explain the genomic similarities among the primates most closely related to humans, such as the gorilla, gibbon, orangutan, and chimpanzee, and make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils.

It remains to be seen how best to use these tools in a classroom context, and whether having students use them influences their working understanding, and more generally their acceptance, of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and genetic variations in behavior, anatomy, and disease susceptibility.

Some footnotes:

(1) Interested in a magnetic bumper image? Visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missed (unpredicted and unexplained) by any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (soon to be at UC Berkeley) for bringing it to my attention.

The pernicious effects of disrespecting the constraints of science

By Mike Klymkowsky

Recent political events, the proliferation of “fake news,” and the apparent futility of fact-checking in the public domain have led me to obsess about the role played by the public presentation of science. “Truth” can often trump reality, or perhaps better put, passionately held beliefs can overwhelm a circumspect worldview based on a critical and dispassionate analysis of empirically established facts and theories. Those driven by various apocalyptic visions of the world, whether religious or political, can easily overlook or trivialize evidence that contradicts their assumptions and conclusions. While historically there have been periods during which non-empirical presumptions were called into question, more often than not such periods have been short-lived. Some may claim that the search for absolute truth, truths significant enough to sacrifice the lives of others for, is restricted to the religious; they are sadly mistaken. Political (often explicitly anti-religious) movements are also susceptible, often with horrific consequences; think of Nazism and communist-inspired apocalyptic purges. The history of eugenics and forced sterilization based on flawed genetic premises has similar roots.

Copyright Sidney Harris; http://sciencecartoonspplus.com/ Please note: this is not a CC-BY image; contact the copyright holder above.

Given the seductive nature of belief-based Truth, many have turned to science as a bulwark against wishful and arational thinking. The scientific enterprise is an evolving social and empirical (data-based) process: it begins with guesses as to how the world (or rather some small part of the world) works, follows each guess’s logical implications, tests those implications through experiment or observation, and revises (or abandons) the original guess accordingly, moving it toward a hypothesis and then, as it becomes more explanatory and accurately predictive, and as those predictions are confirmed, toward a theory. So science is a dance between speculation and observation. In contrast to a free-form dance, the dance of science is controlled by a number of rigid, and to some oppressive, constraints [see Feynman].

Perhaps surprisingly, this scientific enterprise has converged on a small set of overarching theories and universal laws that appear to explain much of what is observable; these include the theory of general relativity, quantum and atomic theory, the laws of thermodynamics, and the theory of evolution. With the notable exception of relativity and quantum mechanics, these conceptual frameworks appear to be compatible with one another. As an example, organisms, and behaviors such as consciousness, obey, and are constrained by, well-established and (apparently) universal physical and chemical rules.


https://en.wikipedia.org/wiki/Last_universal_common_ancestor

A central constraint on scientific thinking is that what cannot, even in principle, be known is not a suitable topic for scientific discussion. This leaves outside the scope of science a number of interesting topics, ranging from what came before the “Big Bang” to the exact steps in the origin of life. In the latter case, the apparently inescapable conclusion that all terrestrial organisms share a complex “Last Universal Common Ancestor” (LUCA) places theoretically unconfirmable speculations about pre-LUCA living systems outside of science. While we can generate evidence that the various building blocks of life can be produced abiogenically (a line of work begun with Wöhler’s synthesis of urea), we can only speculate about the systems that preceded LUCA.


Various pressures have led many who claim to speak scientifically (or to speak for science) to ignore the rules of the scientific enterprise – they often act as if there are no constraints, no boundaries to scientific speculation. Consider the implications of establishing “astrobiology” programs based on speculation (rather than observation), presented with various levels of certainty as to the ubiquity of life beyond Earth [the speculations of Francis Crick and Leslie Orgel on “directed panspermia” and the Drake equation come to mind; see Michael Crichton’s famous essay on aliens and global warming]. Yet such public pronouncements ignore (or dismiss) the fact that we know (and can study) only one type of life, the descendants of LUCA. Those who make them appear untroubled by breaking the rules and abandoning the discipline that has made science a powerful, but strictly constrained, human activity.


Whether life is unique to Earth or not requires future explorations and discoveries that may (or, given the technological hurdles involved, may not) occur. Similarly, postulating theoretically unobservable alternative universes or the presence of some form of consciousness in inanimate objects [such unscientific speculation as illustrated here] crosses the dividing line between belief for belief’s sake and the scientific – it distorts and obscures the rules of the game, the rules that make the game worth playing [again, the Crichton article cited above makes this point]. A recent, rather dramatic proposal from some in the physical-philosophical complex has been the claim that the rules of prediction and empirical confirmation (or rejection) are no longer valid – that we can stop requiring scientific ideas to make observable predictions [see Ellis & Silk]. It is as if objective reality were no longer the benchmark against which scientific claims are judged; perhaps mathematical elegance or spiritual comfort are more important – and well they might be (more important), but they are also outside the limited domain of science. At the 2015 “Why Trust a Theory” meeting, the physicist Carlo Rovelli concluded “by pointing out that claiming that a theory is valid even though no experiment has confirmed it destroys the confidence that society has in science, and it also misleads young scientists into embracing sterile research programs.” [quote from Massimo Pigliucci’s Footnotes to Plato blog]


While the examples above are relatively egregious, it is worth noting that various pressures for glory, fame, and funding impact science more pervasively – leading to claims that are less obviously non-scientific, but that bend (and often break) the scientific charter. Take, for example, claims about animal models of human diseases. Often the expediencies associated with research make the use of such animal models necessary and productive, but they remain a scientific compromise. While mice, rats, chimpanzees, and humans are related evolutionarily, they also carry distinct traits associated with each lineage’s evolutionary history and with the adaptive and non-adaptive processes and events of that history. A story from a few years back illustrates how the differences between the immune systems of mice and humans help explain why the search, in mice, for drugs to treat sepsis in humans was so unsuccessful [Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills]. A similar situation occurs when studies in the mouse fail to explicitly acknowledge how genetic background influences experimental phenotypes [Effect of the genetic background on the phenotype of mouse mutations], and how details of experimental design influence human relevance [Can Animal Models of Disease Reliably Inform Human Studies?].


Speculations that go beyond science (while hiding under the mantle of science – see any of a number of articles on quantum consciousness) may seem just plain silly, but by abandoning the rules of science they erode the status of the scientific process. How, exactly, would one distinguish a conscious from an unconscious electron?

In science (again, as pointed out by Crichton) we do not agree through consensus but through data (and respect for critically analyzed empirical observations). The laws of thermodynamics, general relativity, the standard model of particle physics, and evolutionary theory are conceptual frameworks that we are forced (if we are scientifically honest) to accept. Moreover, the implications of these scientific frameworks can be annoying to some: there is no free lunch (no perpetual motion machine), no efficient, intelligently designed evolutionary process (just blind variation and differential reproduction), and no zipping around the galaxy. The apparent limitation of motion to the speed of light means that a “Star Wars” universe is impossible – happily, I would argue, given the number of genocidal events that appear to be associated with that fictional vision.


Whether our models of the behavior of Earth’s climate or of the human brain can ever be completely accurate (deterministic), given the roles of chaotic and stochastic events in these systems, remains to be demonstrated; until it is, there is plenty of room for conflicting interpretations and prescriptions. That atmospheric levels of greenhouse gases are increasing due to human activities is unarguable; what this implies for future climate is less clear, and what to do about it (a social, political, and economic discussion informed, but not determined, by scientific observations) is another question altogether.

Courtesy NASA.

As we discuss science, we must teach (and remind ourselves, even if we are working scientific practitioners) about the limits of the scientific enterprise. As science educators, one of our goals is to help students develop an honest and critical attitude toward observations and conclusions, and a recognition of the limits of scientific pronouncements. We need to explicitly identify, acknowledge, and respect the constraints under which effective science works, and be honest in labeling statements that step outside them, lest we begin to walk down a path of little lies that morph into larger ones. In contrast to politicians and various religious and secular mystics, we should know better than to be seduced into abandoning scientific discipline, and all that that entails.

M.W. Klymkowsky  web site:  http://klymkowskylab.colorado.edu  email: klym@colorado.edu


Why Statistics Should Be A Mandatory Part of High School Education

Back in 2007, the Advertising Standards Authority (ASA) in Britain ruled that the oral-health giant Colgate could no longer claim that “More than 80% Of Dentists recommend Colgate” or that its brand was “used and recommended by most dentists.” The ban was based on the finding that Colgate had used deceptive statistics to derive its numbers.


For instance, when reading the original claim, consumers would likely think that four out of five dentists had recommended Colgate over its competitors. Instead, ASA revealed that dentists in the study were allowed to recommend more than one brand. The numbers were less impressive than Colgate had made them sound.


The ASA explained that “The claim would be understood by readers to mean that 80 per cent of dentists recommend Colgate over and above other brands, and the remaining 20 per cent would recommend different brands. […] Because we understood that another competitor’s brand was recommended almost as much as the Colgate brand by the dentists surveyed, we concluded that the claim misleadingly implied 80 per cent of dentists recommend Colgate toothpaste in preference to all other brands.”
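The arithmetic is worth spelling out: once each dentist may name several brands, the percentages for different brands no longer need to sum to 100, so “80%” carries no implication of preference. A toy sketch with invented survey responses:

```python
# Invented data: each set holds one dentist's recommendations.
# With multiple picks allowed, two brands can both hit 80%.
surveys = [
    {"Colgate", "BrandX"},
    {"Colgate", "BrandX", "BrandY"},
    {"Colgate", "BrandX"},
    {"Colgate"},
    {"BrandX", "BrandY"},
]
n = len(surveys)
for brand in ("Colgate", "BrandX", "BrandY"):
    share = sum(brand in picks for picks in surveys) / n
    print(f"{brand}: recommended by {share:.0%} of dentists")
# Colgate: 80%, BrandX: 80%, BrandY: 40% -- "80% recommend Colgate"
# says nothing about preference over BrandX.
```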


This sort of fact-fudging is concerning because numbers permeate our lives. Sports fans pore over the statistics of their favorite teams and players. Consumers are bombarded with product information on billboards, TV, and the internet. Pundits and politicians rattle off figures to tell voters how much better or worse things have gotten. People tune into the weather channel to see the chance of rain. Some data are truly informative, some are twisted to support a point, and others are outright fabricated. And yet, every day, we must continually process this deluge of numbers.


So how can we make sense of it all?


According to Charles Wheelan, a senior lecturer and policy fellow at Dartmouth College and bestselling author of Naked Economics, one of the best tools that we have to separate the wheat from the chaff is statistics, a system used to gather, organize, and interpret data. In short, statistics helps us to conceptualize information by allowing individuals to understand how data is collected and how it can be interpreted and communicated. Wheelan states, “Statistics is one of those things that people need to understand in order to be an informed citizen, especially the use and abuse of data.”


Given its importance, descriptive statistics ought to ascend from its status as an elective to the pantheon of required high school mathematics, next to the trinity of algebra, geometry, and trigonometry. Statistics is “also more intuitive and applied than other kinds of high school math courses (e.g. calculus or trig),” states Wheelan, “so it certainly strikes me as sensible to make basic statistics an integral part of any high school math curriculum.”


Doing so would better prepare students to make informed decisions as adults on a wide range of subjects. For instance, as consumers, students will learn to question and be skeptical of advertising claims. As voters, they will be able to interpret basic socioeconomic data touted or slammed by candidates, understand how surveys and polls work, and be aware of how data can be skewed – intentionally or unintentionally – through bias.


By incorporating more knowledge of statistics into our everyday lives, we will be able to foster an educated citizenry, helping future generations to make sense of our increasingly data-deluged world.


Check out my new guide aimed at helping college students excel in science, What Every Science Student Should Know (University of Chicago Press)

Book Review: An Astronaut’s Guide to Life on Earth

Commander Chris Hadfield captured the world’s imagination last year when, from 13 March to 13 May 2013, he served as the first Canadian Commander of the International Space Station. While aboard the ISS, Commander Hadfield performed a series of “experiments,” both for scientists and, perhaps most importantly, for youth. These included genuinely interesting questions like “How do you cry in space?” and “How do you cut your nails?” and the always important “How do you go to the bathroom?” His amicable nature and genuinely infectious enthusiasm brought science to the masses and helped inspire thousands of youth.

Recently, Chris Hadfield released his book, “An Astronaut’s Guide to Life on Earth.” My sister waited in line for 3 hours at our local Costco to get me a signed copy for my birthday, and I finally got around to reading it for this review. The book follows Hadfield’s life as he becomes the commander of Expedition 35, detailing his attitude and the path he took to become the first Canadian Commander of the ISS. The book is split into three broad sections, “Pre-Launch,” “Liftoff,” and “Coming Down to Earth,” with several chapters within each.

The book was fascinating to me – Hadfield is a hybrid pilot-engineer-scientist-lab rat. His expertise is in engineering and as a test pilot, but throughout the book he describes how interdisciplinary his work is, requiring a broad understanding of several domains in order to be effective. In addition to his roles as astronaut and Commander, he is also a fully fledged lab rat: people on the ground ask him how he’s feeling, take samples while he’s in space and after he returns, and measure how quickly he readjusts to life back on Earth, all to further our understanding of how life in space affects the human body. Since, at some point, we hope to explore the stars, any data we can get on how astronauts respond to life in space is valuable.

One of my favourite parts of the book was how it didn’t just cover the mundane details, it relished them. He spends pages describing the drills he went through, and how important a strong grasp of the fundamentals was for his success. I found this refreshing – too often in science we glorify the achievements but ignore all the hard work behind them. A breakthrough in the lab might take months or even years of work before things go right, and having someone acknowledge that things often don’t work, and that their not working is not the end of the world, was a refreshing take on the scientific method. It really highlighted the value of “the grind” of slowly perfecting your skills.

Click the book cover for purchasing options!

He also has a certain brand of “folksy wisdom” that is inspiring in its own way. It’s not inspirational in the nauseating sense in which these things are often written, but more practical. He stresses the importance of reading the team dynamic before getting involved, for example, or of really understanding the nuts and bolts of what you’re doing, but at no point does it feel patronizing or “hey, look at me, I’m an astronaut!” For many budding scientists, the idea of trudging through another page of equations, or washing beakers, or just doing the mundane, less exciting parts of science breeds apathy and boredom. Hadfield takes these moments and stresses just how important it is to learn from them, and to know exactly why they matter. I highly recommend the book to anyone interested in STEM careers, and especially those early in their careers.

To purchase, check out Chris Hadfield’s official website.


Featured image: Commander Hadfield performed at the 2013 Canada Day celebrations in Ottawa, ON | Picture courtesy David Johnson, click for more info

Childhood obesity drops 40% in the last decade. Or not really, but who’s checking?

“A lie that is half-truth is the darkest of all lies.”
― Alfred Tennyson

Last week, a new study published in the Journal of the American Medical Association received a lot of media attention. The study, performed by Cynthia Ogden and colleagues at the CDC, aimed to describe the prevalence of obesity in the US and look at changes between 2003 and 2012. The study itself had several interesting findings, not least among them that the prevalence of obesity seems to have stabilized in many segments of the US population. However, they made one observation that caught the media’s attention:

“There was a significant decrease in obesity among 2- to 5-year-old children (from 13.9% to 8.4%; P = .03)”

This is where things get interesting, because the focus was not on the 5.5 percentage point difference. Instead of reporting the absolute difference, i.e. how much the prevalence changed, news outlets focused on the relative difference, i.e. the change as a proportion of the starting value. In that case, it would be (5.5/13.9 =) 40%, which is much more impressive than the 5.5 percentage point change reported in the study. So you can guess what the headlines loudly proclaimed:

Headlines from Bloomberg, the LA Times and the WSJ | Click to enlarge, click links to read the articles
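The difference between the two framings is a one-line calculation; here it is with the study’s published figures:

```python
# The same CDC numbers, reported two ways.
before, after = 13.9, 8.4            # obesity prevalence (%), ages 2-5

absolute_drop = before - after                # in percentage points
relative_drop = (before - after) / before     # as a share of the start

print(f"absolute drop: {absolute_drop:.1f} percentage points")  # 5.5
print(f"relative drop: {relative_drop:.0%}")                    # 40%
```

Same data, very different headline.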

The media latched onto this “40%” statistic and ran with it, even though the researchers were far more circumspect. In fact, the paper’s conclusion states:

Overall, there have been no significant changes in obesity prevalence in youth or adults between 2003-2004 and 2011-2012. Obesity prevalence remains high and thus it is important to continue surveillance. (emphasis mine)

This makes me wonder how many journalists read the article, how many got to the end, and how many just saw what other people had reported and ran with the same headline and narrative.

Here’s the thing – they’re technically correct (the best kind of correct). Yes, childhood obesity dropped 40% in that one age group, and if the drop is real, it is dramatic. But it is one group among many, and even the researchers themselves conclude the finding may be meaningless. It raises the questions of why, and of whether this is an actual association or just an artifact of something else, like the type and number of statistical tests used. But since the narrative had already been written, everyone followed suit, and the next thing you know we’re all slapping high fives and proclaiming a drop in childhood obesity that may not actually be worth celebrating.

Now, had the results been portrayed fairly, two things would have happened. For one, the findings would not have seemed as positive as they do now. The headlines would instead have read “Business as usual: Obesity the same for the last decade” or “Obesity up 20% among elderly women!” (the latter refers to the finding that the prevalence of obesity among women aged 60 and older rose from 31.5% to 38.1%). Secondly, a much more detailed discussion of the study findings would have followed: why has the prevalence stabilized? Have we finally reached saturation? Are all the people who could be obese now obese? Or is something else going on? But these weren’t the findings that were focused on.

The worst outcome of this media exposure won’t be felt right now. It will be felt with the next study. You see, this study in JAMA was reported all over the media, and millions will have heard about how we’ve finally “turned a corner in the childhood obesity epidemic” (to quote the New York Times). Unfortunately, this may not be the case, and if a new study comes out saying the opposite, that will further undermine the public’s confidence in science, even though the researchers in question never made any such claim.

And that, dear readers, is the darkest of all lies.

References
Ogden, C. L., Carroll, M. D., Kit, B. K., & Flegal, K. M. (2014). Prevalence of Childhood and Adult Obesity in the United States, 2011-2012. JAMA, 311(8), 806-814.

Creation vs Evolution: Why science communication is doomed

Last Tuesday night, Bill Nye the Science Guy debated Ken Ham on creationism vs evolution. I watched part of the debate, and I have conflicted feelings about it. I’ll start by saying I think it was a brilliant marketing move. For one, it suddenly brought the Creation Museum to the forefront of public attention at next to no cost. While before only a handful of people had heard of it, it has now risen to national prominence, and I’m sure its visitor numbers will soon reflect that.

As for the substance itself, I don’t think this is a very good topic for a debate. Any time you bring religion into a discussion, it turns into an “us vs them” argument in which neither party is willing to change its view. Even the advertising and marketing billed it as a debate of “creationism vs evolution” – effectively presupposing that one cannot believe in both (a framing I’ll come back to). At best, such an exchange is snarky and offhanded; at worst, antagonistic and ad hominem. I should point out that this goes for both sides – neither is willing to reconcile.

And why should they? Both view their side as being right, and weigh the information they have differently. So all this accomplishes is that both sides become further polarized and further entrenched, and any chance of meaningful dialogue becomes less and less likely with every angry jab back and forth. It turns into a 21st-century war of angry op-eds, vindictive tweets, and increasingly hostile and belligerent Facebook posts shared back and forth. This isn’t limited to religion – many discussions end this way, with people forced to take sides on an issue that is more complicated than simply being black or white. Rather than discuss the details and come to an understanding of what we agree and disagree on, we’re immediately placed into teams at loggerheads with each other.

What is most interesting is what happens to extreme viewpoints when they are criticized. Rather than prompting people to take in new information and evaluate it on its merits, criticism actually consolidates those viewpoints. In lay language, if you have an extreme viewpoint, you dig in your heels, build a trench, and get ready to defend yourself against all attackers. This isn’t entirely surprising – when someone attacks you, and in particular attacks you *personally*, why wouldn’t you get defensive? Studies have looked at this from a political perspective, comparing extreme conservatives to extreme liberals. To quote Psychology Today:

Extreme conservatives believed that their views about three topics were more superior: (1) the need to require voters to show identification when voting; (2) taxes; and (3) affirmative action. Extreme liberals, on the other hand, believed that their views were superior on (1) government aid for the needy; (2) the use of torture on terrorists; and (3) not basing laws on religion.

But wait! Aren’t these just fringe opinions being heard in the media? The good news is yes. The bad news is that the extremes are what get heard. If you imagine everyone on a normal distribution, with extreme opinions at the edges, the vast majority of people exist in the gulf between those extremes, yet it is the extremes that carry. In fact, this is what led Popular Science to shut down its comments, based on findings by Brossard and Scheufele. They asked people to read a study; the article was identical for everyone, but one group was exposed to civil comments and the other to uncivil ones. What they found was striking:

In the civil group, those who initially did or did not support the technology — whom we identified with preliminary survey questions — continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology.

So seeing negative comments not only made people more skeptical of the article, it made them more skeptical of the science itself! That’s a huge concern for us, and for how science is written about and discussed. Seeing negative comments, no matter how poorly written or ill-informed they are, makes people fundamentally view the science as being of lower quality – and that is why Popular Science closed its commenting section.

So to bring it all full circle: the “debate” was a microcosm of science and the public. Scientists sit back, do their work, then turn around and say “Hey! You should do this,” and then wonder why no one listens and why people fight them. We saw this with the New York soda ban, and we’re seeing it in other spheres as well. Unless we change how we approach these hot-button issues, we’ll lose not only the fringes (which we have already lost) but also the moderates (whom we can still reach). I was having this discussion with my friend Steve Mann, who is one of the smartest men I know, and he sums it up best:

“It’s easier to poke fun at people with whom you disagree, particularly if you can imply that they are childish, old-fashioned, religious, or uneducated, than to honestly examine whether there is any merit to what they’re saying, and I think that’s a shame.”

I’m not taking sides – that wasn’t the aim of this piece. The aim of this piece is to tell you to listen with an open mind, discuss issues with others, and at all costs avoid ad hominem and personal attacks. If we want to bring people together, we have to avoid language that drives us apart. If we want to promote science, we have to discourage hate. And if we want to educate others, we first have to start by understanding them.

Reference:
K. Toner, M. R. Leary, M. W. Asher, K. P. Jongman-Sereno. Feeling Superior Is a Bipartisan Issue: Extremity (Not Direction) of Political Views Predicts Perceived Belief Superiority. Psychological Science, 2013; DOI: 10.1177/0956797613494848

Using Math to make Guinness

William Sealy Gosset, statistician and rebel | Picture from Wikimedia Commons

Let me tell you a story about William Sealy Gosset. William was a chemistry and math grad from Oxford University in the class of 1899 (they were partying like it was 1899 back then). After graduating, he took a job with the brewery of Arthur Guinness and Son, where he worked as a mathematician trying to maximize the yield of barley.

But this is where he ran into problems.

One of the most important assumptions in (most) statistical tests is that you have a large enough sample size to draw inferences from your data. You can’t say much with only one data point. Three? Maybe. Five? Possibly. Ideally, we want at least 20-30 observations, if not more. It’s why, when a goalie in hockey or a batter in baseball has a great game, you chalk it up to a fluke rather than to skill: small samples are much more likely to be affected by chance, and thus may not reflect the underlying phenomenon you’re trying to measure. Gosset, however, couldn’t brew 30+ batches of Guinness just to do the statistics on them. He had a much smaller sample size, and thus “normal” statistical methods wouldn’t work.

Gosset wouldn’t take this for an answer. He started writing up his thoughts and examining the error associated with his estimates. However, he ran into problems. His mentor, Karl Pearson, of Pearson product-moment correlation coefficient fame, while supportive, didn’t really appreciate how important the findings were. In addition, Guinness had very strict policies on what its employees could publish, worried that competitors might discover its trade secrets. So Gosset did what any normal mathematician would.

He published under a pseudonym. In a startlingly rebellious gesture, Gosset published his work in Biometrika under the title “The Probable Error of a Mean.” (See, statisticians can be badasses too.) The name he used? Student. His paper for the Guinness company became one of the most important statistical contributions of its day, and Student’s t-distribution is now an essential part of any introductory statistics course.
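The practical content of Gosset’s discovery is easy to demonstrate: at his sample sizes, confidence intervals built from the normal distribution are too narrow, while the t-distribution’s fatter tails widen them honestly. A sketch (requires scipy; the five “batch” measurements are invented for illustration):

```python
# 95% confidence intervals for a small-sample mean: normal vs. t.
import statistics
from scipy import stats

yields = [2.1, 2.4, 1.9, 2.6, 2.2]         # hypothetical batch measurements
n = len(yields)
mean = statistics.mean(yields)
sem = statistics.stdev(yields) / n ** 0.5  # standard error of the mean

z = stats.norm.ppf(0.975)                  # ~1.96, the large-sample value
t = stats.t.ppf(0.975, df=n - 1)           # ~2.78 with 4 degrees of freedom

print(f"normal 95% CI: {mean:.2f} +/- {z * sem:.2f}")
print(f"t      95% CI: {mean:.2f} +/- {t * sem:.2f}  (honestly wider)")
```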

======

So why am I telling you this? Well, I’ve talked before about the importance of storytelling as a way to frame scientific discovery, and I’ve also talked about the importance of mathematical literacy in a modern society. This piece forms the next part of that spiritual trilogy. Math is typically taught in a very dry, very didactic format – I recite Latin to you, you remember it, and I eventually give you a series of questions whose answers dictate your grade in the class. Often, you’re only in the class because it’s a mandatory credit for high school or your degree program. There’s very little “discovery” occurring in the math classroom.

Capturing interest thus becomes of paramount importance to instructors, especially in math, which faces a societal stigma of being “dull,” “boring” and “just for nerds.” A quick search for “I hate math” on Twitter yields a new tweet almost every minute from someone expressing those sentiments, sometimes using more “colourful” language (at least they’re expanding their vocabulary?).

There are lots of examples of these sorts of interesting anecdotes about math. The “Scottish Book” was a notebook named after the Scottish Café in Lwów (now Lviv, Ukraine), where mathematicians would leave potentially unsolvable problems for their colleagues to tackle. Successfully solving one could earn you a prize ranging from a bottle of brandy to, I kid you not, a live goose (thanks Mariana for that story!). The Chudnovsky brothers built a machine in their apartment that calculated pi to two billion decimal places.

Amalie (Emmy) Noether is probably the most famous mathematician you’ve never heard of | Photo courtesy Wikimedia Commons

There’s also the story of Amalie (Emmy) Noether, the architect of Noether’s theorem, which underpins much of modern physics. Dr. Noether came to prominence at a time when women were largely excluded from academic positions, yet she rose to become one of the most influential figures of her era, often considered at the same level of brilliance as Marie Curie. Her contemporaries in mathematics and physics included David Hilbert, Felix Klein, and Albert Einstein, who took up her cause to help her get a permanent position and often sought out her opinions. Indeed, after Einstein stated his theory of general relativity, it was Noether who took it to the next level, linking the symmetry of time to the conservation of energy. But don’t take my word for it – Einstein himself said:

In the judgment of the most competent living mathematicians, Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.

While these stories highlight the importance of the discoveries themselves, they also highlight the diversity that exists within the scientific community. Knowing that the pantheon of science and math heroes includes people who weren’t all anointed “math geniuses” can make math much more engaging and interesting. Finally, telling the stories of the people behind the math can demystify the science and engage youth who may not have considered math as a career path.

Making good use of Hollywood’s bad science

Maybe it was destiny, but when I was a freshman in high school in 1997, the movie Dante’s Peak, starring Pierce Brosnan, was released. Why was it destiny? For those of you unfamiliar with the movie, it is about a seemingly quiet volcano near a small town that begins to become active and subsequently produces a cataclysmic eruption that decimates the town. I went on to get my M.S. in geology, focusing on volcanic hazards.

I was on the edge of my seat the entire time.  Pierce played a volcanologist from the United States Geological Survey, who despite his boss’s insistence that the volcanic activity was nothing more than a stomach rumble, had a gut feeling that the volcano was going to be the next Mt. St. Helens.  And of course, he was right.

Movie poster for Dante's Peak, the movie which might have caused me to pursue volcanology, perhaps under false pretenses.  Image Fair Use through Wiki.
Movie poster for Dante’s Peak, the movie which might have caused me to pursue volcanology, perhaps under false pretenses. Image Fair Use through Wiki.

However correct Pierce was in predicting the eruption of Dante’s Peak, it was years later that I learned the movie was not the most scientifically accurate. I left the theater believing that all volcanoes have hot springs that can boil people alive, erupt basaltic lava flows (think Hawaii) and dacitic pyroclastic flows (a la Mt. St. Helens) at the same time, and can turn a lake so acidic that it dissolves a grandmother’s legs as she pushes her family to safety in a boat. Granted, many of the events in the movie might happen at a given volcano, just not all at the same time.

Just two and a half months later, Volcano was released. In this movie a volcano erupts in Los Angeles along a transform fault (unheard of), not in a convergent or divergent tectonic setting, like the Cascades (home of Dante’s Peak) or the East African Rift, respectively.

Sure, movies have creative license to add drama, but not everyone knows where the line between truth and fiction lies. These are great teaching moments for pointing out inaccuracies, because left unattended they will harden into misconceptions.

Benefiting from Science Misconceptions in Movies

More recently on our radars is probably The Core, released in theaters in 2003 during my junior year in college.  By the time I saw this flick, I had taken a number of geology courses and knew a few things about how the Earth functions.

As geology students, we were obviously curious what this movie was about.  When the movie made it to DVD, we got together and watched it as a group, laughing about giant diamonds and how seismic waves would actually travel through a molten core.  As fun as the movie was to watch in a group, drinking some beers of course, it was a great learning experience.  Each of us would try to be the first to identify something as inaccurate or wrong, and if we were wrong, shame on us.  Where we weren’t exactly sure how accurate something was, we would debate it, using the knowledge we had learned in class.  Here is a little overview of the good and bad science in The Core.

For those of you interested in the astronomical side of science misconceptions, Phil Plait comments on several examples on his webpage, Phil Plait’s Bad Astronomy.

Not a misconception: these giant gypsum crystals were found in Mexico, not on the set of The Core. Photo by Alexander Van Driessche

Over the decades, there have been a number of other science-inspired movies. Having a geology background, it’s a little easier for me to pick out the instances in geology-themed movies where something is wrong or inaccurate. Unfortunately, it’s not the same when I watch a movie like The Day After Tomorrow, Twister, Outbreak, or Contagion. I know there is at least some truth to these movies, but for any non-specialist, identifying where creative liberties were taken, if any, is very difficult; I imagine that for someone with a minimal science background it is even harder. Nonetheless, these are just a few movies in a near-endless list where learning opportunities abound in identifying and correcting misconceptions.

In schools, why not have biology students write a report about the inaccuracies in Anaconda (although just a few years ago Titanoboa was unearthed)? Open the semester of a climate science course with a discussion of the misconceptions in The Day After Tomorrow. Finish a course on paleontology with a competition over who can identify the most dinosaurs in Jurassic Park (1, 2, or 3) that were not from the Jurassic period. (Tyrannosaurus rex, for one, is from the Cretaceous period, after the Jurassic.)

Science and Science Fiction

About two months ago, fellow Sci-Ed blogger Atif Kukaswadia posted a great piece about using science fiction movies as opportunities to learn science. In particular, I enjoyed Atif’s reference to an MIT paper on comic-book superhero physics. Both science-based and science fiction movies have their place in learning environments for identifying the right, the wrong, the plausible, the impossible, and the inaccurate. Science fiction movies invite questions like “Is that possible?” or “What would be required to make that possible?” Science-based movies let us ask “Is that true?” and “How do we know it is correct or incorrect?”

Cristina Russo, also a Sci-Ed blogger, commented on how documentaries and nature films may also create misconceptions or be misleading. Among other points, her piece takes a closer look at films meant to promote conservation that nonetheless use storytelling techniques to make a more compelling movie.

Scientific inaccuracies abound in movies and TV, all for the sake of good entertainment, and that is fine. Although such movies do inspire students to become scientists, there are also unintended consequences. After watching The Core, it’s not likely that anyone is going to try to reach the Earth’s core in a supposedly indestructible vessel, but the movie is likely to leave impressionable minds with the idea that the Earth is something it is not. This is where we need to identify these misconceptions and turn them into learning opportunities.

If you have any favorite scientific inaccuracies in movies, I would love to hear about them.


Mathematical Literacy: A necessary skill for the 21st century

Maths chocolate
Photo by Flickr user s-guilana | CC BY 2.0

My Grade 9 math teacher was a jolly British man who taught me one of the most useful things I ever learnt in high school (or, since I was in the British educational system, grammar school): how to do basic math in my head. Every so often we’d come into math class and find little bits of paper on every desk. This was a harbinger of doom – it meant we were having a 20-question surprise quiz. And not just any quiz: a mental arithmetic quiz. He would read a question out loud twice, and then we’d have to do the math. He’d give us some leeway (you didn’t have to be exact), but man did I ever hate those quizzes. At the time, they seemed impractical and a colossal waste of time. In retrospect, they were incredibly useful.

Now, being on the other side of the divide, I see something that concerns me. I regularly TA undergraduate and graduate students in statistics, and I notice that many of them, while they have all the skills to do the math, are absolutely terrified of it. And as soon as you fear a subject, or don’t want to learn it, you won’t. Your mind shuts down, and every instinct you have works to keep you from engaging with the material. As a result, I spend the first hour of any class I teach talking to the students and determining what it is they don’t understand, so that I can tailor my sessions accordingly. The comments generally involve variations on:

“I just don’t get math.”
“I’ve never been any good at math.”
“I don’t like it.”

Of these, the first two concern me. The third I can’t help – I don’t need my students to love math, but I do want them to understand enough to pass the course and feel comfortable interpreting statistical analyses. There’s a culture among schoolkids of disliking math, and a perception that it’s largely useless. While in chemistry you can see stuff blow up, and in biology you can dissect animals, math is a largely abstract subject. That perception manifests as a lack of interest, which results in poorer performance, which in turn puts people off math. This is further compounded by a phenomenon known as “math anxiety” or “math phobia.” Ashcraft and Kirk discuss this extensively in their 2001 paper, and suggest that much of the anxiety results from the fear of getting the wrong answer on tests. I’m not going to delve into it now, as the whole area of math performance – math anxiety, performance anxiety, and cultural and gender differences in math – warrants a dedicated post. For now, let’s just talk about what constitutes “mathematical literacy.”

The OECD released a report in 2000 in which it defined literacy in three domains; its definition of quantitative (numerical) literacy reads:

Quantitative literacy – the knowledge and skills required to apply arithmetic operations, either alone or sequentially, to numbers embedded in printed materials, such as balancing a chequebook, figuring out a tip, completing an order form or determining the amount of interest on a loan from an advertisement.
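The tasks in that definition are genuinely modest; here they are as a few lines of code, with made-up figures for the bill and the loan:

```python
# Everyday quantitative literacy, per the OECD examples above.
bill = 64.80
tip = bill * 0.15                      # figuring out a 15% tip
per_person = (bill + tip) / 4          # splitting among four diners

principal, rate, years = 5000, 0.07, 3
interest = principal * rate * years    # simple interest on a loan

print(f"tip: ${tip:.2f}; each person pays ${per_person:.2f}")
print(f"interest owed over {years} years: ${interest:.2f}")
```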

The OECD also conducts the Programme for International Student Assessment (PISA), which evaluates the performance of 15-year-olds in math, science, and reading. It defines mathematical literacy as:

Mathematical literacy is an individual’s capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgements and to use and engage with mathematics in ways that meet the needs of that individual’s life as a constructive, concerned and reflective citizen

As you can see, numerical or mathematical literacy, as defined above, isn’t advanced math like calculus or algebraic manipulation. We’re talking about being able to apply the order of operations and to handle everyday activities requiring that level of mathematical understanding. Given that the world is moving towards a knowledge-based economy, the lack of mathematical literacy is a big concern. Now more than ever, the ability to critically evaluate the information presented to us and draw our own conclusions, rather than have someone tell us what it means, is of the utmost importance.

In Canada, this has particular relevance as we (like most of the Western world) are in the midst of an aging population. This comes with its own set of challenges, but one is that as patients age they suffer from illnesses, and if they are unable to interpret medical information, or if doctors are unable to explain it in a way patients understand, then patients cannot make informed decisions about their health.

Maths
Photo by Flickr user Minibe09 | CC BY-NC 2.0

I’m not implying that everyone needs to be able to do advanced math and statistics. Given the advances in technology (see abacus app above), you can now use an app to split the bill or calculate a tip (iLounge reviews 30 (!!) apps here). You don’t need to be able to do hierarchical ordinal regression using bootstrapping, or factor analyses, or structural equation modelling. But given how much data we are presented with on a regular basis – be it interest rates on a bank loan, discounts on sale items, or polling numbers for political parties (the latter discussed by Swans on Tea) – a basic level of numerical literacy is not only important, it’s necessary.

References
Ashcraft, Mark H.; Kirk, Elizabeth P., “The Relationships Among Working Memory, Math Anxiety, and Performance”, Journal of Experimental Psychology: General 2001 pp. 224-237
Ciampa PJ, Osborn CY, Peterson NB, Rothman RL., “Patient numeracy, perceptions of provider communication, and colorectal cancer screening utilization.” J Health Commun. 2010;15 Suppl 3:157-68. Available at: http://www.ncbi.nlm.nih.gov/pubmed/21154091
OECD. “Assessing Scientific, Reading and Mathematical Literacy: A Framework for PISA 2006” 2006. Available online at: http://www.oecd.org/pisa/pisaproducts/pisa2006/37464175.pdf
OECD. “Literacy in the Information Age: Final report of the International Adult Literacy Survey” 2000. Available online at: http://www.oecd.org/education/educationeconomyandsociety/39437980.pdf

Wildlife documentaries or dramatic science?

Update: Lizzie Crouch expands the discussion when addressing fiction.

Jason G. Goldman has posted the Twitter discussion that followed this post to the Scientific American blogs. Brian Switek encourages us to use #scioceans and keep the conversation going.

Behind the scenes with To the Arctic 3D. Photo credit: © Florian Schulz/Visionsofthewild.com via Smithsonian blogs.

I first met Chris Palmer when I attended his lecture about ethics in wildlife film. Palmer is a wildlife filmmaker, and his CV includes IMAX productions like Whales and To the Arctic, and the book Shooting in the Wild: An Insider’s Account of Making Movies in the Animal Kingdom. Also a conservation advocate, Chris believes that filmmakers “have a responsibility of raising viewer awareness of the serious environmental problems facing the world”. We talked further (he graciously agreed to answer a few interview questions), and we both agree that wildlife films are a great opportunity to educate the general public about science and spread a message of conservation. But, as Chris said, “[solely] promoting the beauty of the natural world is not the same as conservation.” How can we use wildlife films to educate?

Pelicans in flight by Etienne-Jules Marey. Photo – US public domain.

Can we learn science from wildlife films?

The use of film to teach science is not new. According to Gregg Mitman in Cinematic Nature, “the motion picture was first developed not for entertainment purposes, but for the analysis of animal motion.” In 1882, French physiologist Etienne-Jules Marey recorded pelicans in flight with the goal of understanding animal movement. A few years earlier, Eadweard Muybridge had photographed a galloping horse, proving Marey’s hypothesis that a horse can have all four hooves off the ground at once. Mitman also points out several other occasions in which film was used to benefit science, such as teaching surgery techniques and anatomy lessons (and also serving as inspiration for the pursuit of science).

The horse in motion by Eadweard Muybridge. Photo source: Library of Congress Prints and Photographs Division. US Public domain.

Since those early recordings for teaching and study, science and nature films have diversified immensely. Films allow for the observation of animal behavior in the natural environment (such as hunting behavior recorded by cameras carried by penguins). We can now watch footage from remote locations that shows exotic animals in their habitat. A broad spectrum of wildlife films is available to the public, from high-budget, state-of-the-art IMAX productions to independent short films on habitat conservation. And somewhere in between, let’s not forget the “reality shows” and the presenter-led TV series (including the ones on the search for a hidden beast, real or otherwise). Besides giving us access to inhospitable ecosystems, films can also raise public interest in science and therefore encourage science education. As Chris Palmer told me, “[wildlife film’s] job is to raise awareness and promote conservation.”

Filming Meerkat Manor, a nature “reality show” that follows a family of meerkats and their social conflicts. Photo via Wikipedia.

Wildlife film has shaped the general public’s understanding of science. How did it reach the status of scientific authority?

Many people rely on wildlife films (and lately, on nature-related reality TV shows such as Meerkat Manor or Dangerous Encounters) as their source of scientific information. Nature films have the potential to educate and to bridge the knowledge gap between the general public and the scientific community. Such a gap is sometimes referred to as the “deficit model,” or the “assumption that differences in understanding between experts and the lay public result from the latter’s ignorance of science,” according to Dingwall and Aldridge in their analysis of TV wildlife programming as a source of science education. (The debate over the deficit model is extensive and was recently discussed at the 2013 Science Online conference. In the words of science educator and PLOS colleague Jean Flanagan, “[the model is] at the very least incomplete; much misunderstanding of science goes further than just not being aware of the facts.”)

The public has come to trust films and to see them as scientific truth. In her study of scientific authority in wildlife film, Rebecca Wexler reports that “viewers regard film sequences as realistic because of cultural tendencies resulting from 19th century understandings of photography and film as mechanically accurate reproductions of the visual world.” This also happens because movies are labeled as scientifically correct and factual.

When I asked Chris Palmer what he thought of scientific accuracy in nature films, he responded, “you can find many scientists who are appalled at how they have been portrayed in documentaries and how their message has been corrupted and messed with by filmmakers for the sake of ratings.” It turns out this status of scientific authority is granted to nature films even in cases of scripted drama: footage that has been twisted to accommodate a sequence of edited scenes closely following a script.

Behind the scenes at March of the Penguins, a film that unintentionally sparked creationist intelligent design beliefs. Photo credit: Jérôme Maison © 2005 Bonne Pioche Productions / Alliance De Production Cinématographique via The Documentary Blog.

Rebecca Wexler focused her analysis of scientific authority in film on March of the Penguins. The film shows beautiful footage of emperor penguins on their journey across Antarctica to breed and raise a chick (a task deemed “the worst journey in the world” by Apsley Cherry-Garrard, who brought back an emperor’s egg in 1911). March’s constant use of anthropomorphism might have brought unwanted attention to the movie. Creationist groups have deemed the film “proof” of intelligent design (ID). The fact that emperor penguins form mating pairs is seen as support for “traditional family values” of monogamy and heterosexuality. The hardship that emperors go through to raise chicks was believed to support ID (even though, for scientists, it suggests the opposite). Even the lack of conservation messages in the film has fueled an anti-global-warming movement (if the penguins are doing fine, why should we be concerned?). Jean explored this topic in an earlier post on cultural cognition: groups (both creationists and scientists) will align with concepts that match their worldview, regardless of facts or accuracy.

The film is a beautiful drama, and it should be seen as such. Anthropomorphism is used with the goal of creating an emotional connection with the audience. For example, narrator Morgan Freeman explains some of the scenes: “[the penguins] are not that different from us, really. They pout, they bellow, they strut, and occasionally they will engage in some contact sports.” The movie’s director and distributors claimed that “the movie is simply a tale about penguins and that any attempt to divine a deeper meaning is misguided.” But when films are portrayed as scientific fact and presented to a credulous non-scientific audience, perhaps the filmmakers have a responsibility to make their intentions clear.

If films like March of the Penguins are seen as scientific authority and are becoming a resource for creationism or ID beliefs (which may influence school curricula in parts of the US), what other non-scientific ideas are films serving as authority for? Should we expect to see nature “documentaries” about the search for Sasquatch or the Loch Ness monster? (Oh wait, those already exist. Finding Bigfoot and Mermaids: The Body Found are just two examples. Pseudoscience and cryptozoology on TV are illustrated by skeptical investigator Benjamin Radford.)

Burden’s Komodo dragon. Screen capture via Slate.

Wildlife drama vs wildlife documentary: films should either be accurate or disclose that they are pure storytelling

In 1926, William Douglas Burden set out to film and capture Komodo dragons for the Bronx Zoo. The resulting film was a hit, and it drove an increase in zoo visitors hoping to see the reptiles up close. However, visitors were disappointed: the lethargic animals looked nothing like the bloodthirsty Komodo dragons pictured in the movie. To create that behavior, the film was heavily edited and staged (watch a clip), the animals were baited with meat (you can even see the strings holding it in place), and Burden was not even present during their capture. As Mitman points out, “nature uncut and unedited is never as dramatic and captivating as nature onscreen.” In our interview, Chris Palmer mentioned that “[mass appeal] affects [science portrayal] in a big way. No one wants to watch a dull, pedantic, tedious scientist on TV, however exact, accurate, and nuanced they are being.”

Wildlife “drama” now employs a storyline and a script. It includes characters, who can be scientists or naturalists but are usually the animals themselves. Animal characters are given a name, a role, and an anthropomorphic personality. They undergo a “hero’s journey,” complete with great adversity and conquest (e.g., a story about a long migration). The journey is put together by editing scenes from footage of different animals obtained in different geographic locations. The animals’ anthropomorphic behavior is accentuated by emotional narration from a celebrity narrator or on-camera host. Characters’ “roles” reinforce human gender and societal roles, such as good guys (prey), bad guys (predators), and nuclear families (the mother takes care of the young while the father hunts; apparently no one has heard of ostrich harems and male caretakers).

I don’t see a problem in using an interesting narrative. At the end of Whales, one of Chris’s IMAX productions, the mother-and-calf whales “Misty” and “Echo” were not the same animals who started the migration at the beginning of the film. (Until we have humpback whale GPS, filmmakers have to improvise by recording different animals.) Still, referring to two other whales by those names strengthens the storyline. I also enjoy editing and narration when they serve to educate. I am fine with staging with captive animals, as long as they are humanely treated (after all, I watch sea lion feedings every week and appreciate the educational opportunity the National Zoo offers its audience), and staging can be especially useful to illustrate behavior otherwise impossible to see. What I am not happy with is excessive anthropomorphism (Lucy Sullivan illustrates how anthropomorphism defeats science) and the lack of any mention of conservation.

Diving with enormous cameras is just one challenge that IMAX filmmakers face. Photo of making of IMAX The Last Reef, via Scripps Institute.

“Environmental films need to do more to encourage conservation because the world’s ecosystems are troubled and in decline,” Chris believes. When I asked how films can adapt to better convey a scientific message of conservation, he stated: “by listening more attentively and respectfully to the best scientists we have. Of course, what puts them into that category is that they are humane scientists who respect the rights of animals not to be harassed or harmed.”

Therefore, if a wildlife documentary is not a documentary but storytelling or wildlife drama instead, we should treat it as such. Because of their mass appeal, films have enormous potential to raise awareness and drive change. I’d love to see them increase scientific literacy and spread a message of conservation. Otherwise, in Chris Palmer’s words, “there would soon be nothing left to film.”

References

  1. Dingwall and Aldridge, Public Understanding of Science 15 (2006): 131–152
  2. De Cheveigné, Public Understanding of Science 5 (1996): 231–253
  3. Kalof and Amthor, Études rurales (2010): 165–180
  4. Kalof et al., Organization & Environment (2011): 1–25
  5. Kilborn, Jump Cut: A Review of Contemporary Media 48 (2006)
  6. Mitman, Isis 84 (1993): 637–661
  7. Sullivan, Philosophical Transactions: Biological Sciences 349 (1995): 215–218
  8. Wexler, Studies in History and Philosophy of Biological and Biomedical Sciences 39 (2008): 273–279