Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and that often upset philosophical or religious views of what gives meaning to existence. In the context of evolutionary mechanisms within biology, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led many working scientists around the turn of the 20th century to reject Darwinian principles (see Bowler’s The Eclipse of Darwinism). Educational research studies, such as our own “Understanding randomness and its impact on student learning“, reinforce the point that ideas involving stochastic processes are relevant to evolutionary, as well as cellular and molecular, biology and are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular- and cellular-level processes, including the origin of various diseases, particularly cancer (see: Cancer is partly caused by bad luck)(1).

All of which leaves the critical question, at least for educators, of how to best teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals”, on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather to develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.

There are a number of educational strategies for teaching evolutionary processes, depending in part upon one’s disciplinary perspective. Here I consider just one, based on my background in cell and molecular biology. Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.” It is one of a number of recently developed web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend the range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals© / coreBio course (with an evolving text available here), we originally used the observation that members of our subfamily of primates, the Haplorhini or dry-nosed primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet; without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals© we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral population of the Haplorhini lost the L-gulonolactone oxidase (GULO) gene (see OMIM) needed for vitamin C synthesis. The remains of the GULO gene found in human and other Haplorhini genomes are mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost due to mutation, it is unlikely, although not impossible, that a subsequent mutation will lead to the repair of the gene. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are ways to repair it. As the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nosed) group of primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
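The role of drift here is easy to simulate. The sketch below is my own illustration (not part of the original course materials): it follows a selectively neutral allele, such as a broken GULO gene in a vitamin C-rich environment, through repeated generations of random sampling in a small population. The population size and starting frequency are illustrative assumptions.

```python
import random

def drift_to_absorption(pop_size, freq, max_generations=10_000):
    """Toy Wright-Fisher model of a selectively neutral allele:
    each generation, the next gene pool is a random (binomial)
    sample of the current one. Returns the final allele frequency."""
    n_alleles = 2 * pop_size  # diploid population
    count = round(freq * n_alleles)
    for _ in range(max_generations):
        if count == 0 or count == n_alleles:
            break  # allele lost or fixed; drift is over
        p = count / n_alleles
        count = sum(random.random() < p for _ in range(n_alleles))
    return count / n_alleles

random.seed(1)
runs = [drift_to_absorption(pop_size=50, freq=0.05) for _ in range(200)]
print("lost :", sum(f == 0.0 for f in runs), "of 200 runs")
print("fixed:", sum(f == 1.0 for f in runs), "of 200 runs")
```

With no selection at all, the broken allele usually disappears, but in a run of luck it can take over the entire population – which is all the scenario above requires.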

At this point we can start to use Genomicus to examine the results of evolutionary processes (a YouTube video on using Genomicus)(3). In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they were the same size (they are not); different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenetic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes) can be found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).
We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, which appears to have diverged from other eukaryotes ~1,500 million years ago (Mya). Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are also (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.
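For readers who like the logic made concrete: finding a syntenic block is, at heart, a search for runs of genes that appear contiguously and in the same order in two genomes. The toy sketch below is my own illustration – the gene names and orders are invented, not the real GULO neighborhood, and real tools such as Genomicus also handle inversions, strands, and gene families.

```python
def longest_shared_block(genome_a, genome_b):
    """Longest run of genes appearing contiguously, in the same
    order, in both genomes -- a toy stand-in for synteny detection."""
    best = []
    for i in range(len(genome_a)):
        for j in range(len(genome_b)):
            k = 0
            while (i + k < len(genome_a) and j + k < len(genome_b)
                   and genome_a[i + k] == genome_b[j + k]):
                k += 1
            if k > len(best):
                best = genome_a[i:i + k]
    return best

# Hypothetical gene orders around GULO (invented neighbor names):
mouse = ["GENE1", "GENE2", "GULO", "GENE3", "GENE4"]
human = ["GENE1", "GENE2", "GENE3", "GENE4"]  # GULO lost by mutation

print(longest_shared_block(mouse, human))
# -> ['GENE1', 'GENE2']: a conserved flanking block (tied with
#    ['GENE3', 'GENE4']); GULO itself is absent from the human list
```

Even in this cartoon version, the conserved flanking block is recovered while GULO is missing from the human list – exactly the pattern explored with the real data below.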

Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – whoa, what gives? The explanation is, it turns out, rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they? We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily related) chromosomes of other Haplorhini primates.

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships among the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share the same syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region. Similarly, we can explain the genomic similarities among humans and their close primate relatives, such as the gorilla, gibbon, orangutan, and chimpanzee, as well as make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils.

It remains to be seen how best to use these tools in a classroom context and whether having students use such tools influences their working understanding, and more generally, their acceptance of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and genetic variations in behavior, anatomy, and disease susceptibility.

Some footnotes:

(1) Interested in a magnetic bumper sticker? Visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missing from (unpredicted and unexplained by) any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (soon to be at UC Berkeley) for bringing it to my attention.

The pernicious effects of disrespecting the constraints of science

By Mike Klymkowsky

Recent political events, the proliferation of “fake news”, and the apparent futility of fact-checking in the public domain have led me to obsess about the role played by the public presentation of science. “Truth” can often trump reality, or perhaps better put, passionately held beliefs can overwhelm a circumspect worldview based on a critical and dispassionate analysis of empirically established facts and theories. Those driven by various apocalyptic visions of the world, whether religious or political, can easily overlook or trivialize evidence that contradicts their assumptions and conclusions. While historically there have been periods during which non-empirical presumptions were called into question, more often than not such periods have been short-lived. Some may claim that the search for absolute truth, truths significant enough to sacrifice the lives of others for, is restricted to the religious; they are sadly mistaken – political (often explicitly anti-religious) movements are also susceptible, often with horrific consequences: think of Nazism and communist-inspired apocalyptic purges. The history of eugenics and forced sterilization based on flawed genetic premises has similar roots.

Copyright Sidney Harris; http://sciencecartoonspplus.com/ Please note: this is not a CC-BY image; contact the copyright holder above for permission.

Given the seductive nature of belief-based Truth, many have turned to science as a bulwark against wishful and arational thinking. The scientific enterprise is an evolving social and empirical (data-based) one: it begins with guesses as to how the world (or rather, some small part of the world) works; the guess’s logical implications are then tested through experiment or observation, leading to the revision (or abandonment) of the original guess, moving it toward a hypothesis and then, as it becomes more explanatory and accurately predictive, and as those predictions are confirmed, toward a theory. So science is a dance between speculation and observation. In contrast to a free-form dance, the dance of science is controlled by a number of rigid, and to some oppressive, constraints [see Feynman].

Perhaps surprisingly, this scientific enterprise has converged onto a small set of overarching theories and universal laws that appear to explain much of what is observable; these include the theory of general relativity, quantum and atomic theory, the laws of thermodynamics, and the theory of evolution. With the notable exception of general relativity and quantum mechanics, which have so far resisted unification with each other, these conceptual frameworks appear to be compatible with one another. As an example, organisms, and behaviors such as consciousness, obey, and are constrained by, well-established and (apparently) universal physical and chemical rules.

[Figure: the Last Universal Common Ancestor (LUCA) – https://en.wikipedia.org/wiki/Last_universal_common_ancestor]

A central constraint on scientific thinking is that what cannot, even in theory, be known is not a suitable topic for scientific discussion. This leaves outside the scope of science a number of interesting topics, ranging from what came before the “Big Bang” to the exact steps in the origin of life. In the latter case, the apparently inescapable conclusion that all terrestrial organisms share a complex “Last Universal Common Ancestor” (LUCA) places theoretically unconfirmable speculations about pre-LUCA living systems outside of science. While we can generate evidence that the various building blocks of life can be produced abiogenically (a line of work begun with Wöhler’s synthesis of urea), we can only speculate as to the systems that preceded LUCA.

Various pressures have led many who claim to speak scientifically (or to speak for science) to ignore the rules of the scientific enterprise – they often act as if there were no constraints, no boundaries to scientific speculation. Consider the implications of establishing “astrobiology” programs based on speculation (rather than observation) presented with various levels of certainty as to the ubiquity of life outside of Earth [the speculations of Francis Crick and Leslie Orgel on “directed panspermia” and the Drake equation come to mind; see Michael Crichton’s famous essay on Aliens and global warming]. Yet such public science pronouncements appear to ignore (or dismiss) the fact that we know (and can study) only one type of life, the descendants of LUCA. They appear untroubled when breaking the rules and abandoning the discipline that has made science a powerful, but strictly constrained, human activity.

Whether life is unique to Earth or not requires future explorations and discoveries that may (or, given the technological hurdles involved, may not) occur. Similarly, postulating theoretically unobservable alternative universes or the presence of some form of consciousness in inanimate objects [such unscientific speculation as illustrated here] crosses a dividing line between belief for belief’s sake and the scientific – it distorts and obscures the rules of the game, the rules that make the game worth playing [again, the Crichton article cited above makes this point]. A recent, rather dramatic proposal from some in the physical-philosophical complex has been the claim that the rules of prediction and empirical confirmation (or rejection) are no longer valid – that we can abandon requiring scientific ideas to make observable predictions [see Ellis & Silk]. It is as if objective reality were no longer the benchmark against which scientific claims are made; perhaps mathematical elegance or spiritual comfort are more important – and well they might be (more important), but they are also outside of the limited domain of science. At the 2015 “Why Trust a Theory” meeting, the physicist Carlo Rovelli concluded “by pointing out that claiming that a theory is valid even though no experiment has confirmed it destroys the confidence that society has in science, and it also misleads young scientists into embracing sterile research programs.” [quote from Massimo Pigliucci’s Footnotes to Plato blog].

While the examples above are relatively egregious, it is worth noting that various pressures for glory, fame, and funding impact science more routinely – leading to claims that are less obviously non-scientific, but that bend (and often break) the scientific charter. Take, for example, claims about animal models of human diseases. Often the expediencies associated with research make the use of such animal models necessary and productive, but they remain a scientific compromise. While mice, rats, chimpanzees, and humans are related evolutionarily, they also carry distinct traits associated with each lineage’s evolutionary history and with the adaptive and non-adaptive processes and events of that history. A story from a few years back illustrates how the differences between the immune systems of mice and humans help explain why the search, in mice, for drugs to treat sepsis in humans was so unsuccessful [Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills]. A similar situation occurs when studies in the mouse fail to explicitly acknowledge how genetic background influences experimental phenotypes [Effect of the genetic background on the phenotype of mouse mutations], and how details of experimental scenarios influence human relevance [Can Animal Models of Disease Reliably Inform Human Studies?].

Speculations that go beyond science (while hiding under the mantle of science – see any of a number of articles on quantum consciousness) may seem just plain silly, but by abandoning the rules of science they erode the status of the scientific process. How, exactly, would one distinguish a conscious from an unconscious electron?

In science (again, as pointed out by Crichton) we do not agree through consensus but through data (and respect for critically analyzed empirical observations). The laws of thermodynamics, general relativity, the standard model of particle physics, and evolutionary theory are conceptual frameworks that we are forced (if we are scientifically honest) to accept. Moreover, the implications of these scientific frameworks can be annoying to some: there is no free lunch (no perpetual motion machine), no efficient, intelligently designed evolutionary process (just blind variation and differential reproduction), and no zipping around the galaxy. The apparent limitation of motion to the speed of light means that a “Star Wars” universe is impossible – happily, I would argue, given the number of genocidal events that appear to be associated with that fictional vision.

Whether our models of the behavior of Earth’s climate or the human brain can ever be completely accurate (deterministic), given the roles of chaotic and stochastic events in these systems, remains to be demonstrated; until they are, there is plenty of room for conflicting interpretations and prescriptions. That atmospheric levels of greenhouse gases are increasing due to human activities is unarguable; what this implies for future climate is less clear, and what to do about it (a social, political, and economic discussion informed, but not determined, by scientific observations) is another question altogether.

Courtesy NASA.

As we discuss science, we must teach (and remind ourselves, even if we are working scientific practitioners) about the limits of the scientific enterprise. As science educators, one of our goals is to help students develop an appreciation of the importance of an honest and critical attitude toward observations and conclusions, and a recognition of the limits of scientific pronouncements. We need to explicitly identify, acknowledge, and respect the constraints under which effective science works, and be honest in labeling when we have stepped outside the domain of scientific statements, lest we begin to walk down the path of little lies that morph into larger ones. In contrast to politicians and to religious and secular mystics of various stripes, we should know better than to be seduced into abandoning scientific discipline, and all that that entails.

M.W. Klymkowsky  web site:  http://klymkowskylab.colorado.edu  email: klym@colorado.edu


Why Statistics Should Be A Mandatory Part of High School Education

Back in 2007, the Advertising Standards Authority (ASA) in Britain ruled that the oral-care giant Colgate could no longer claim that “More than 80% of Dentists recommend Colgate” or that its brand was “used and recommended by most dentists.” These bans were based on the finding that Colgate had used deceptive statistics to derive its numbers.

For instance, when reading the original claim, consumers would likely think that four out of five dentists had recommended Colgate over its competitors. Instead, ASA revealed that dentists in the study were allowed to recommend more than one brand. The numbers were less impressive than Colgate had made them sound.


The ASA explained that “The claim would be understood by readers to mean that 80 per cent of dentists recommend Colgate over and above other brands, and the remaining 20 per cent would recommend different brands. […] Because we understood that another competitor’s brand was recommended almost as much as the Colgate brand by the dentists surveyed, we concluded that the claim misleadingly implied 80 per cent of dentists recommend Colgate toothpaste in preference to all other brands.”
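The trick is easy to see with a toy simulation. In the hypothetical survey below (invented numbers, not the actual ASA data), each dentist names every brand they consider acceptable rather than a single favorite – and suddenly nearly every brand can truthfully claim an “80% recommendation” rate.

```python
import random

random.seed(0)
BRANDS = ["Colgate", "Brand B", "Brand C", "Brand D", "Brand E"]

# Each dentist "recommends" 4 of the 5 brands at random -- i.e.
# respondents may endorse multiple brands, as in the Colgate survey.
dentists = [random.sample(BRANDS, k=4) for _ in range(1000)]

for brand in BRANDS:
    share = sum(brand in picks for picks in dentists) / len(dentists)
    print(f"{share:6.1%} of dentists 'recommend' {brand}")
```

Every brand hovers near 80%, so the headline statistic says essentially nothing about preference – which is precisely the ASA’s objection.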


This sort of fact-fudging is concerning because numbers permeate our lives. Sports fans pore over statistics of their favorite teams and players. Consumers are bombarded with product information on billboards, TV, and the internet. Pundits and politicians rattle off figures to tell voters how better or worse things have gotten. People tune into the weather channel to see the chance of rain. Some data are truly informative, some are twisted to support a point, and others are outright fabricated. And yet, every day, we are inundated with a deluge of numbers we must continually process.


So how can we make sense of it all?


According to Charles Wheelan, a senior lecturer and policy fellow at Dartmouth College and bestselling author of Naked Economics, one of the best tools that we have to separate the wheat from the chaff is statistics, a system used to gather, organize, and interpret data. In short, statistics helps us to conceptualize information by allowing individuals to understand how data is collected and how it can be interpreted and communicated. Wheelan states, “Statistics is one of those things that people need to understand in order to be an informed citizen, especially the use and abuse of data.”


Given its importance, descriptive statistics ought to ascend from its status as an elective to the pantheon of required high school mathematics, next to the trinity of algebra, geometry, and trigonometry. Statistics is “also more intuitive and applied than other kinds of high school math courses (e.g. calculus or trig),” states Wheelan, “so it certainly strikes me as sensible to make basic statistics an integral part of any high school math curriculum.”


In doing so, we will better prepare students to make informed decisions as adults about a wide range of subjects. For instance, as consumers, students will learn to question and be skeptical of advertising claims. As voters, they will be able to interpret basic socioeconomic data touted or slammed by candidates, understand how surveys and polls work, and be aware of how data can be skewed—intentionally or unintentionally—through bias.


By incorporating more knowledge of statistics into our everyday lives, we will be able to foster an educated citizenry, helping future generations to make sense of our increasingly data-deluged world.


Check out my new guide aimed at helping college students excel in science, What Every Science Student Should Know (University of Chicago Press).

Book Review: An Astronaut’s Guide to Life on Earth

Commander Chris Hadfield captured the world’s imagination last year when, from 13 March to 13 May 2013, he served as the first Canadian Commander of the International Space Station. While aboard the ISS, Commander Hadfield did a series of “experiments,” both for scientists and, perhaps most importantly, for youth. These included genuinely interesting questions like “How do you cry in space?” and “How do you cut your nails?” and the always important “How do you go to the bathroom?” His amicable nature and genuinely infectious enthusiasm brought science to the masses, and helped inspire thousands of youth.

Recently, Chris Hadfield released his book – “An Astronaut’s Guide to Life on Earth.” My sister waited in line for 3 hours at our local Costco to get me a signed copy for my birthday, and I finally got around to reading it for this review. The book follows the life of Chris Hadfield as he becomes the commander of Expedition 35, detailing his attitude and the path he took to become the first Canadian Commander of the ISS. The book is split into three broad sections leading up to Expedition 35 titled “Pre-Launch,” “Liftoff” and “Coming Down to Earth,” with several chapters within each section.

The book was fascinating to me – Hadfield is a hybrid pilot-engineer-scientist-lab rat. His expertise is in engineering and as a test pilot, but throughout the book he references how his work is interdisciplinary, and how he has to have a broad understanding of several domains in order to be effective. In addition to his role as an astronaut and Commander, he is also a fully-fledged lab rat: people on the ground ask him questions about how he’s feeling, take samples while he’s in space and after he returns, and measure how quickly he readjusts to life back on Earth, all to further our understanding of how life in space impacts the human body. Since, at some point, we hope to explore the stars, any data we can get on how astronauts respond to life in space is valuable.

One of my favourite parts of the book was how it didn’t just cover the mundane details, it relished them. He spends pages describing the drills he went through, and how important having a strong grasp of the fundamentals was to his success. I found this refreshing – too often in science we glorify the achievements but ignore all the hard work behind them. A breakthrough in the lab might take months or even years of work before things go right, and it helps to have someone acknowledge that not only do things (often) not work, but that their not working is not the end of the world. This was a refreshing take on the scientific method, and really highlighted the value of “the grind” of slowly perfecting your skills.


He also has a certain brand of “folksy wisdom” that is inspiring in its own way. It’s not inspirational in the nauseating sense in which these things are often written, but practical. He stresses, for example, the importance of reading the team dynamic before getting involved, and of really understanding the nuts and bolts of what you’re doing, but at no point does it feel patronizing or “hey, look at me, I’m an astronaut!” For many budding scientists, the idea of trudging through another page of equations, or washing beakers, or just doing the mundane, less exciting parts of science can breed apathy and boredom. Hadfield takes these moments and stresses just how important it is to learn from them, and to understand exactly why they matter. I highly recommend the book to anyone interested in STEM careers, and especially those early in their careers.

To purchase, check out Chris Hadfield’s official website.


Featured image: Commander Hadfield performing at the 2013 Canada Day celebrations in Ottawa, ON | Picture courtesy David Johnson

Childhood obesity drops 40% in the last decade. Or not really, but who’s checking?

“A lie that is half-truth is the darkest of all lies.”
― Alfred Tennyson

Last week, a new study published in the Journal of the American Medical Association received a lot of media attention. The study, performed by Cynthia Ogden and colleagues at the CDC, aimed to describe the prevalence of obesity in the US and look at changes between 2003 and 2012. The study itself had several interesting findings, not least among them that the prevalence of obesity seems to have stabilized in many segments of the US population. However, they made one observation that caught the media’s attention:

“There was a significant decrease in obesity among 2- to 5-year-old children (from 13.9% to 8.4%; P = .03)”

This is where things get interesting, because the focus was not on the 5.5-percentage-point difference. Instead of reporting the absolute difference, i.e. how much something changed, news outlets focused on the relative difference, i.e. the change expressed as a fraction of the starting value. In that case, it would be (5.5/13.9 =) 40%, which sounds much more impressive than the 5.5-percentage-point change reported in the study. So you can guess what the headlines loudly proclaimed:

Headlines from Bloomberg, the LA Times, and the WSJ
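Spelled out, the arithmetic behind both numbers is tiny (the prevalence figures are the ones quoted from the paper above):

```python
before, after = 13.9, 8.4  # % obese among 2- to 5-year-olds (Ogden et al.)

absolute_drop = before - after          # in percentage points
relative_drop = absolute_drop / before  # as a fraction of the baseline

print(f"absolute: {absolute_drop:.1f} percentage points")    # 5.5
print(f"relative: {relative_drop:.0%} of the 2003-04 rate")  # 40%
```

Both numbers are “true”; the headlines simply picked the bigger-sounding one.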

The media latched onto this “40%” statistic and ran with it, despite the researchers clearly stating that this was not their intention. In fact, in the conclusions of the paper itself, they said:

Overall, there have been no significant changes in obesity prevalence in youth or adults between 2003-2004 and 2011-2012. Obesity prevalence remains high and thus it is important to continue surveillance. (emphasis mine)

This makes me wonder how many journalists read the article, how many got to the end, and how many just saw what other people had reported and ran with the same headline and narrative.

Here’s the thing – they’re technically correct (the best kind of correct). Yes, childhood obesity in that age group dropped 40% based on that report, and if that is true, it is a dramatic decrease. However, that is one group among many, and even the researchers themselves conclude that the finding may be meaningless. It raises the question of why – is this an actual association, or just an artifact of something else, like the type and number of statistical tests used? But since the narrative had already been written, everyone followed suit, and the next thing you know we’re all slapping high fives and proclaiming a drop in childhood obesity that may not actually be worth celebrating.
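The “number of statistical tests” worry deserves a number of its own. The study compared many subgroups; the sketch below (the count of 20 comparisons is my assumption for illustration, not taken from the paper) shows how often at least one comparison comes up “significant” at P < .05 even when nothing has really changed:

```python
# Probability that at least one of n independent null comparisons
# reaches P < .05 purely by chance.
alpha, n_tests = 0.05, 20  # n_tests is a hypothetical subgroup count
p_false_alarm = 1 - (1 - alpha) ** n_tests
print(f"{p_false_alarm:.0%}")  # ~64%
```

With twenty comparisons, a lone P = .03 result is not nearly as surprising as a headline makes it sound.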

Now, had the results been portrayed fairly, two things would have happened. First, the findings would not have seemed as positive as they do now. In fact, the headlines would have read “Business as usual: Obesity the same for the last decade” or “Obesity up 20% among elderly women!” (The latter refers to the finding that the prevalence of obesity among women aged 60 years and older went up from 31.5% to 38.1%.) Second, a much more detailed discussion of the study findings would have happened – why has the prevalence stabilized? Have we finally reached saturation? Are all the people who could be obese now obese? Or is something else going on? But these weren’t the findings that were focused on.

The worst outcome of this media exposure won’t be felt right now. It will be felt in the next study. You see, this study in JAMA was reported all over the media, and millions would have heard about how we’ve finally “turned a corner in the childhood obesity epidemic” (to quote the New York Times). Unfortunately, this may not be the case, and if a new study comes out saying the opposite, this further undermines the public’s confidence in science, even though the researchers in question never made any such claim.

And that, dear readers, is the darkest of all lies.

References
Ogden, C. L., Carroll, M. D., Kit, B. K., & Flegal, K. M. (2014). Prevalence of Childhood and Adult Obesity in the United States, 2011-2012. JAMA, 311(8), 806-814.

Creation vs Evolution: Why science communication is doomed

Last Tuesday night, Bill Nye the Science Guy debated Ken Ham on creationism vs evolution. I watched part of the debate, and have conflicted feelings about it. I’m going to start by saying I think it was a brilliant marketing move. For one, it suddenly brought the Creation Museum to the forefront of public attention at next to no cost. While before only a handful of people had heard of it, now it has risen to national prominence, and I’m sure their visitor numbers will reflect that in the near future.

As for the substance itself, I don’t think this is a very good topic for a debate. Any time you bring religion into a discussion, it turns into an “us vs them” argument where neither party is willing to change their view. Even the advertising and marketing billed it as a debate of “creationism vs evolution” – effectively presupposing that one cannot believe in both (which I’ll come back to). At best, the exchange is snarky and offhanded, and at worst, antagonistic and ad hominem. I should point out that this applies to both sides – neither is willing to reconcile.

And why should they? Both view their side as being right, and weigh the information they have differently. So all that this accomplishes is that both sides become further polarized and further entrenched, and any chance of meaningful dialogue becomes less and less likely with every angry jab back and forth. It turns into a 21st-century war of angry op-eds, vindictive tweets, and increasingly hostile and belligerent Facebook posts shared back and forth. This isn’t limited to religion, though – many discussions end this way, with people being forced to take sides on an issue that is more complicated than simply black and white. Rather than discuss the details and come to an understanding of what we agree and disagree on, we’re immediately placed into teams that are at loggerheads with each other.

What is most interesting is what happens to extreme viewpoints when they are criticized. Rather than taking in new information and evaluating it on its merits, people respond to criticism by consolidating those perspectives. In lay language, if you have an extreme viewpoint, you dig in your heels, build a trench, and get ready to defend yourself against all attackers. This isn’t entirely surprising – when someone attacks you, and in particular attacks you *personally*, why wouldn’t you get defensive? Studies have looked at this from a political perspective, comparing extreme conservatives to extreme liberals. To quote Psychology Today:

Extreme conservatives believed that their views about three topics were more superior: (1) the need to require voters to show identification when voting; (2) taxes, and (3) affirmative action. Extreme liberals, on the other hand, believed that their views were superior on (1) government aid for the needy; (2) the use of torture on terrorists, and (3) not basing laws on religion.

But wait! Aren’t these just fringe opinions being heard in the media? The good news is yes. The bad news is that the extremes are what people hear. If you imagine opinion falling on a normal distribution – with extreme views at the edges – the vast majority of people sit in the middle between those extremes, yet it is the tails that get amplified. In fact, this is what led Popular Science to shut down their comments, based on findings by Brossard and Scheufele. What they did was ask people to read a study; while the article remained the same, one group was exposed to civil comments, and the other to uncivil comments. What they found was striking:

In the civil group, those who initially did or did not support the technology — whom we identified with preliminary survey questions — continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology.

So seeing negative comments not only made people more skeptical of the article, it made them more skeptical of the science itself! That’s a huge concern for us, and for how science is written about and discussed. Seeing negative comments, no matter how poorly written or ill-informed they are, makes people fundamentally view the science as being of lower quality. And that is what led Popular Science to close their commenting section.

So to bring it all full circle, the “debate” was a microcosm of science and the public. Scientists sit back, do their work, then turn around and say “Hey! You should do this,” and then wonder why no one listens to them and why people fight them. We saw this with the New York soda ban, we’re seeing it in other spheres as well, and unless we change how we approach these hot-button issues, we’ll lose not only those with fringe opinions (whom we have already lost), but also the moderates (whom we can still reach). I was having this discussion with my friend Steve Mann, who is one of the smartest men I know, and he sums it up best:

“It’s easier to poke fun at people with whom you disagree, particularly if you can imply that they are childish, old-fashioned, religious, or uneducated, than to honestly examine whether there is any merit to what they’re saying, and I think that’s a shame.”

I’m not taking sides – that wasn’t the aim of this piece. The aim of this piece is to tell you to listen with an open mind, discuss issues with others, and at all costs avoid ad hominem and personal attacks. If we want to bring people together, we have to avoid using language that drives us apart. If we want to promote science, we have to discourage hate. And if we want to educate others, we first have to start by understanding others.

Reference:
K. Toner, M. R. Leary, M. W. Asher, K. P. Jongman-Sereno. Feeling Superior Is a Bipartisan Issue: Extremity (Not Direction) of Political Views Predicts Perceived Belief Superiority. Psychological Science, 2013; DOI: 10.1177/0956797613494848

Using Math to make Guinness

William Sealy Gosset, statistician and rebel | Picture from Wikimedia Commons

Let me tell you a story about William Sealy Gosset. William was a Chemistry and Math grad from Oxford University in the class of 1899 (they were partying like it was 1899 back then). After graduating, he took a job with the brewery of Arthur Guinness and Son, where he worked as a mathematician, trying to find the best yields of barley.

But this is where he ran into problems.

One of the most important assumptions in (most) statistical tests is that you have a large enough sample size to make inferences about your data. You can’t say much if you only have 1 data point. 3? Maybe. 5? Possibly. Ideally, we want at least 20-30 observations, if not more. It’s why, when a goalie in hockey or a batter in baseball has a great game, you chalk it up to a fluke rather than to skill. Small samples are much more likely to be affected by chance, and thus may not accurately reflect the underlying phenomenon you’re trying to measure. Gosset, however, couldn’t brew 30+ batches of Guinness just to do the statistics on them. He had a much smaller sample size, and thus “normal” statistical methods wouldn’t work.

Gosset wouldn’t take this for an answer. He started writing up his thoughts and examining the error associated with his estimates. However, he ran into problems. His mentor, Karl Pearson, of Pearson product-moment correlation coefficient fame, while supportive, didn’t really appreciate how important the findings were. In addition, Guinness had very strict policies on what their employees could publish, as they were worried about competitors discovering their trade secrets. So Gosset did what any normal mathematician would.

He published under a pseudonym. In a startlingly rebellious gesture, Gosset published his work in Biometrika under the title “The Probable Error of a Mean.” (See, statisticians can be badasses too.) The name he used? Student. His paper for the Guinness company became one of the most important statistical discoveries of its day, and Student’s t-distribution is now an essential part of any introductory statistics course.
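You can see what Gosset’s correction buys you with a few lines of Python (a minimal sketch using SciPy; the sample sizes are arbitrary). For small samples, the t-distribution’s critical values are much larger than the normal approximation’s, so confidence intervals are honestly wider:

```python
from scipy import stats

# 95% two-sided critical values: Student's t (small samples)
# versus the normal ("large-sample") approximation.
z = stats.norm.ppf(0.975)
for n in (3, 5, 10, 30):
    t = stats.t.ppf(0.975, df=n - 1)
    print(f"n = {n:2d}: t = {t:.2f}  vs  z = {z:.2f}")
```

At n = 3 the t critical value (~4.30) is more than double the normal value (~1.96); by n = 30 the two nearly agree – which is exactly why small-batch brewing needed a new distribution.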

======

So why am I telling you this? Well, I’ve talked before about the importance of storytelling as a way to frame scientific discovery, and I’ve also talked about the importance of mathematical literacy in a modern society. This piece forms the next part of that spiritual trilogy. Math is typically taught in a very dry, very didactic format – I recite Latin to you, you remember it, I eventually give you a series of questions to answer, and that dictates your grade in the class. Often, you’re only actually in the class because it’s a mandatory credit you need for high school or your degree program. There’s very little “discovery” occurring in the math classroom.

Capturing interest thus becomes of paramount importance to instructors, especially in math which faces a societal stigma of being “dull,” “boring” and “just for nerds.” A quick search for “I hate math” on Twitter yields a new tweet almost every minute from someone expressing those sentiments, sometimes using more “colourful” language (at least they’re expanding their vocabulary?).

There are lots of examples of these sorts of interesting anecdotes about math. The “Scottish Book” was a notebook named after the Scottish Café in Lviv, Ukraine, where mathematicians would leave potentially unsolvable problems for their colleagues to tackle. Successfully solving one of these problems could win you a prize ranging from a bottle of brandy to, I kid you not, a live goose (thanks Mariana for that story!). The Chudnovsky brothers built a machine in their apartment that calculated Pi to two billion decimal places. I asked for stories on Twitter and @physicsjackson responded with:

Amalie (Emmy) Noether is probably the most famous mathematician you’ve never heard of | Photo courtesy Wikimedia Commons

There’s also the story of Amalie Noether, the architect of Noether’s theorem, which basically underpins all of modern physics. Dr. Noether came to prominence at a time when women were largely excluded from academic positions, yet rose to become one of the most influential figures of her time, often considered at the same level of brilliance as Marie Curie. Her contemporaries in mathematics and physics included David Hilbert, Felix Klein, and Albert Einstein, who took up her cause to help her get a permanent position and often sought out her opinions. Indeed, after Einstein published his theory of general relativity, it was Noether who took it to the next level, linking the symmetry of time to the conservation of energy.

In the judgment of the most competent living mathematicians, Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.

While stories highlight the importance of these discoveries, they also highlight the diversity that exists within the scientific community. Knowing that the pantheon of science and math heroes extends beyond the stereotypical “math genius” can make math much more engaging and interesting. Finally, telling the stories of the people behind the math can demystify the science and engage youth who may not otherwise consider math as a career path.