Recently, I contributed to a project that turned healthy human tissue into an early stage of pancreatic cancer—a disease that carries a dismal 5-year survival rate of 5 percent.
When I described our project to a friend, she asked, “why in the world would you want to grow cancer in a lab?” I explained that by the time a patient learns that he has pancreatic cancer, the tumor has spread throughout the body. At that point, the patient typically has less than a year to live, and his tumor cells have racked up a number of mutations, making clinical trials and molecular studies of pancreatic cancer evolution downright difficult. For this reason, we made our laboratory model of pancreatic cancer available to scientists who wanted to use it to find the biological buttons that turn healthy cells into deadly cancer. By sharing our discovery, we wanted to enable others to develop drugs to treat cancer and screening tests to diagnose patients early. The complexity of this process demonstrates that science is a team effort, involving lots of time, money, and the brainpower of highly trained individuals working together toward a single goal.
Many of the challenges we face today—from lifestyle diseases, to the growing strains of antibiotic-resistant superbugs in hospitals, to the looming energy crisis—require scientific facts and solutions. And although there’s never a guarantee of success, scientists persist in hopes that our collective discoveries will reverberate into the future. However, as a corollary, hindering scientific progress means a loss of possibilities.
Unfortunately, a deceleration of scientific progress now seems a likely possibility. In March, the White House released a document called “America First: A Budget Blueprint to Make America Great Again,” which describes deep cuts to some of the country’s most important funding agencies for science.
As it stands, the National Institutes of Health is set to lose nearly a fifth of its budget; the Department of Energy’s Office of Science, $900 million; and the Environmental Protection Agency, $2.6 billion (a 31.5 percent cut). Imagine the discoveries that could have saved lives or created jobs, which will instead languish as unsupported hypotheses in the minds of underfunded scientists.
Scientists cannot remain idle on the sidelines; we must be active in making the importance of scientific research known. Last weekend’s March for Science drew tens of thousands of people to more than 600 rallies across the world, but the challenge now lies in harnessing this momentum and energy into sustained efforts to maintain government funding for a wide range of scientific projects.
The next step is to get involved in shaping public opinion and policy. As it stands, Americans on both sides of the political spectrum have expressed ambivalence about the validity of science on matters ranging from climate change to childhood vaccinations. Academics can start tempering the public’s unease toward scientific authority, and increase public support for the sciences, by stepping down from the ivory tower. Many researchers are already engaging with the masses by posting on social media, penning opinion articles, and appearing on platforms aimed at public consumption (YouTube channels, TED talks, etc.). A researcher is her own best spokesperson in explaining the importance of her work and the scientific process; unfortunately, a scientist’s role as an educator in the classroom and community is often crowded out by the all-encompassing imperative to publish or perish. As a profession, we must become more willing to step out of our laboratories to engage with the public and educate the next generation of science-savvy citizens.
In addition, many scientists have expressed interest in running for office, including UC Berkeley’s Michael Eisen (who is also a co-founder of PLOS). When asked by Science why he was considering a run for the Senate, Eisen responded:
“My motivation was simple. I’m worried that the basic and critical role of science in policymaking is under a bigger threat than at any point in my lifetime. We have a new administration and portions of Congress that don’t just reject science in a narrow sense, but they reject the fundamental idea that undergirds science: That we need to make observations about the world and make our decisions based on reality, not on what we want it to be. For years science has been under political threat, but this is the first time that the whole notion that science is important for our politics and our country has been under such an obvious threat.”
If scientists can enter the House and Senate in greater numbers, they will be able to inject scientific sense into the deliberations of legislators whose primary backgrounds are in business and law.
Science is a bipartisan issue that should not be bogged down by the whims of political machinations. We depend on research to address some of the most pressing problems of our time, and America’s greatness rests in part on its leaders’ use of science as an exploration of physical truths and a means of overcoming our present limitations and challenges.
What I want to do here is present some reflections on the relationship between science and politics, in which I include various belief systems (ideologies).
The mystic Giordano Bruno, burnt at the stake as a heretic by the Roman Catholic Church in 1600, is sometimes put forward as a patron saint of science, mistakenly in my view. Bruno was a mystic whose ideas were at best loosely grounded in the observable, and in no way scientific as we understand the term. His type of magical thinking is similar to that of modern anti-vaccinationists who claim that vaccination can cause autism (it does not)(1), or that GMOs are somehow innately “unhealthy” and more dangerous than “natural” organisms (see: The GMO safety debate is over). A better model, particularly in the context of current political controversies, would be the many Soviet geneticists who suffered exile and often death (the famed geneticist N.I. Vavilov starved to death in a Soviet gulag in 1943) as a result of the state/party-driven politicization of science, specifically genetics, carried out by Joseph Stalin (1878–1953) and the Communist party/state of the Soviet Union (see: The tragic story of Soviet genetics shows the folly of political meddling in science). Rejecting the implications of genetic and evolutionary mechanisms, Stalin favored the Lamarckism (inheritance of acquired traits) promoted by Ivan Michurin (1855–1935) and Trofim Lysenko (1898–1976) [see link]. Communist ideology required (or rather demanded) that traits, including human traits, be seen as malleable: that the “nature” of plants and people could be altered permanently with the appropriate manipulations (vernalization for plants, political re-education for people) [see: The consequences of political dictatorship for Russian science]. There was no need to wait for the messy, multi-generational processes associated with conventional plant breeding (and Darwinian evolution). In both cases the unforgiving realities of the natural world intervened, but not before causing intense human suffering and starvation.
It is worth noting explicitly that there are, and likely always will be, pressures to politicize science, due in large measure to science’s success in explaining the natural world and providing the basis for its technology-based manipulation. Giordano Bruno was an early martyr to a highly ideological world view, one also illustrated by the house arrest of Galileo and the suppression of heliocentric models of the solar system (2). Eventually such forms of natural theology were replaced by the apolitical and empirical ideals implicit in Enlightenment science. Ideological (racist) influences can be seen in 19th-century science, most dramatically illustrated by Gould’s analysis of Morton’s ranking of races by cranial capacity and his suggestion that the unconscious manipulation of data may be a scientific norm (see link). How racist policies were initially embraced, and then rejected, by American geneticists over the course of the 20th century is described by Provine (Geneticists and the Biology of Race Crossing).
More recent events remind us of the pressures to politicize science. A number of states (Kentucky in 1976, Mississippi in 2006, Louisiana in 2008, and Tennessee in 2012) have passed bills that allow teachers to present non-scientific ideas to students (think intelligent design creationism and climate change denial), and such bills continue to come up with depressing frequency. Most recently, an admitted creationist was appointed to lead a federal higher-education reform task force in the United States [see link]. Is creationism simply alt-science, a position explicitly or tacitly supported both by the religiously orthodox and by those of a post-modernist persuasion, such as left-leaning college instructors who claim that science is a social construct [see: Is Science ‘Forever Tentative’ and ‘Socially Constructed’?]?
While such recent anti-science/alt-science attitudes have not had quite the draconian effects found in the Soviet Union, Nazi Germany, or eugenicist America, I would argue that they play a role in eroding the public’s faith in the scientific understanding of complex processes, a faith that is largely justified even in the face of the so-called “reproducibility crisis”, which in a sense is no crisis at all, but an expected outcome of the size, complexity, and competing forces acting on scientists and the scientific enterprise. That said, laws and various forms of coercion dictating right-wing/religious or left-wing/politically-correct positions in science threaten the education of a generation of students. Predictions of climate change based on human-driven (anthropogenic) increases in atmospheric CO2 levels, or of the effects of lead in public water systems on human health [link], cannot simply be discarded or discounted based on ideological positions on the role of government in protecting the public interest, a role that neither unfettered capitalism nor fundamentalist communism seems particularly good at addressing. Similarly, the lack of any demonstrable connection between autism and vaccination (see above), the physicochemical impossibility of homeopathic treatments (or various versions of “Christian Science”), and the lack of evidence for the therapeutic claims made for a rather startling array of nutritional supplements all serve to inject a political, ideological, and economic dimension into scientific discourse. In fact, science is constantly under pressure to distort its message. Consider the European response to GMOs in favor of the “organic” (non-GMO): most GMOs have been banned from the EU for what appear to be ideological (non-scientific) reasons, even though the same organisms have been found safe and are grown in the US and most of Asia (see this Economist essay).
It is clear that the rejection of scientific observations is widespread on both the left and the right, arising whenever scientific observations, ideas, or models lead to disturbing or discomforting conclusions or implications (link). Consider the violent response when Charles Murray was invited to speak at Middlebury College (see Andrew Sullivan’s Is intersectionality a religion?). That human populations might (and in fact can be expected to) display genetic differences, the result of their migration history and subsequent evolutionary processes, both adaptive and non-adaptive (see Henn et al., The great human expansion), is labelled racist and by implication beyond the pale of scientific discourse, even though it is tacitly recognized by the scientific community to be well established. No one, I think, gets particularly upset at the suggestion that noses are shaped by evolutionary processes and reflect genetic differences between populations (see Climate shaped the human nose), or that nose shape might play a role in human sexual selection (see Facial Attractiveness and Sexual Selection, and sexual dimorphism). One might even speculate that studies of the role of nose shape in mate selection could form the basis of an interesting research project (see Beauty and the beast: mechanisms of sexual selection in humans).
What often goes undiscussed is whether differences in specific traits (different alleles and allele frequencies) between populations have any meaningful significance in the context of public policy (I would argue that they do not). What is clear is that in a pre-genomic era recognizing such differences could be of practical value, for example in the treatment of diseases (see Ethnic Differences in Cardiovascular Drug Response). That said, the era of genomics-based personalized diagnosis and treatment is rapidly making such population-based considerations obsolete (see: Genetic tests for disease risks and ethical debate on personal genome testing), while at the same time raising serious issues of privacy and discrimination based on the presence of the “wrong” alleles (see: genome sequencing–ethical issues). In a world of facile genomic engineering, the dangers of unfettered technological manipulation move ever more rapidly from science fiction to the boutique (intelligent?) design of people (see: CRISPR gene-editing and human evolution).
So back (about time, you may be thinking) to the original question – if we “march for science”, what exactly are we marching for [link]? Are we marching to defend the apolitical nature of science and the need to maintain economic support (increased public funding levels) for the scientific enterprise, or are we conflating support for science with a range of social and political positions? Are we affirming our commitment to a politically independent (skeptical) community of practitioners who serve to produce, reproduce, critically examine, and extend empirical observations and explanatory (predictive) models?
This is not to ignore the various pressures acting on scientists as they carry out their work. These pressures act to tempt (and sometimes reward) practitioners to exaggerate (if not fabricate) the significance of their observations and ideas in order to capture the resources (funds and people) needed to carry out modern science, as well as the public’s attention. Since resources are limited, extra-scientific forces have an increasing impact on the scientific enterprise – enticing scientists to make exaggerated claims and to put forth extra-scientific arguments and various semi-hysterical scenarios based on their observations and models. In the context of an inherently political event (a march) the apolitical ideals of science can seem too bland to command attention and stir action, not to mention the damage that politicizing science does to the integrity of science.
(1) While there is no doubt that vaccinations, like all drugs and medical interventions, can lead to side effects in certain individuals, there is unambiguous evidence against any link between autism and vaccination.
(2) It is worth noting that as originally proposed the Copernican (Sun-centered) model of the solar system was more complex than the Ptolemaic (Earth-centered) system it was meant to replace. It was Kepler’s elliptical, rather than circular, orbits that made the heliocentric model dramatically simpler, more accurate, and more aesthetically compelling.
Purely scientific discussions are marked by objective, open, logical, and skeptical thought; they can describe and explain natural phenomena or provide insights into broader questions. At the same time, scientific discussions are generally incomplete and tentative (sometimes for well-understood reasons). True advocates of the scientific method appreciate the value of its skeptical and tentative approach, and are willing to revise even long-held positions in response to new, empirically derived evidence or logical contradictions. Over time, science’s scope and conclusions have expanded and evolved dramatically; they provide an increasingly accurate working model of a wide range of processes, from the formation of the universe to the functioning of the human mind. As a result, the ubiquity of science’s impacts on society is clear and growing. However, discussing and debating the details of how science works, and the current consensus view on various phenomena, such as global warming or the causes of cancer or autism, is very different from discussing and debating how a scientific recommendation fits into a societal framework. As described in a recent National Academies Press report on Communicating Science Effectively [link], “the decision to communicate science [outside of academia] always involves an ethical component. Choices about what scientific evidence to communicate and when, how, and to whom, are a reflection of values.”
Over the last ~150 years, the accelerating pace of advances in science and technology has enabled sustainable development, but it has also disrupted traditional social and economic patterns. Closing coal mines in response to climate predictions (and government regulations) may be sensible when viewed broadly, but it is disruptive to those who have, for generations, made a living mining coal. Similarly, a number of prognosticators have speculated on the impact of robotics and artificial intelligence on traditional socioeconomic roles and rules. Whether such impacts are worth the human costs is rarely explicitly considered and discussed in the public forum, or in the classroom. As members of the scientific community, our educational and outreach efforts must go beyond simply promoting an appreciation of, and public support for, science. They must also consider its limitations, as well as its potential ethical and disruptive effects on individuals, communities, and societies. Making policy decisions with large socioeconomic impacts based on often tentative models risks alienating the public upon which modern science largely depends.
Citizens, experts or not, are often invited to contribute to debates and discussions surrounding science and technology at the local and national levels. Yet many people are not provided with the tools to engage fully and effectively in these discussions, which requires critically analyzing the scope, resolution, and stability of scientific conclusions. As a result, the acceptance or rejection of scientific pronouncements is often framed as an instrument of political power, casting a shadow on core scientific principles and processes and framing scientists as partisan players in a political game. The watering down of the role of science and science-based policies in the public sphere, and the broad public complacency associated with (often government-based, regulatory) efforts, is currently being challenged by the international March For Science effort. The core principles and goals of this initiative [link] are well articulated and, to my mind, representative of a democratic society. However, a single march on a single day is not sufficient to promote a deep social transformation, or to foster widespread dispassionate argumentation and critical thinking. Perspectives on how scientific knowledge can help shape current and future events, and on the importance of recognizing both the implications and the limits of science, must be taught early, often, and explicitly. Social and moral decisions are not mutually exclusive of scientific evidence or ideas, but their overlap is constrained by the values that we hold.
In this light, I strongly believe that the sociopolitical nature of science in practice must be taught alongside traditional science content. Understanding the human, social, economic, and broader (ecological) costs of action AND inaction can be used to highlight the importance of framing science in a human context. If the expectation is for members of our society to be able to evaluate and weigh in on scientific debates at all levels, I believe we are morally obligated to supply future generations with the tools required for full participation. This implies that scientists and science educators, together with historians, philosophers, economists, and others, need to go beyond the teaching of simple facts and theories by considering how these facts and theories developed over time, their impact on people’s thinking, and the socioeconomic forces that shape societies. Highlighting the sociopolitical implications of science-based ideas in classrooms can also motivate students to take a greater interest in scientific learning in particular, and in related social and political topics in general. It can help close the gap between what is learned in school and what is required for the critical evaluation of scientific applications in society, and of how scientific ideas can and should be weighed when it comes to social policy or personal beliefs.
A “science in a social context” approach to science teaching may also address the common student question, “When will I ever use this?” All too often, scientific content in schools is presented in ways that are abstract, decontextualized, and seemingly irrelevant to students. Such an approach can leave a student unable or unwilling to engage in meaningful and substantive discussions of the applications and limitations of science in society. The very idea of including cost-benefit analyses when considering the role of science in shaping decisions is often overlooked, as if scientific conclusions were black and white. Furthermore, the current culture of science in classrooms leaves little room for students to assess how scientific information does and does not align with their cultural identities, often framing science as inherently conflicting or alien, forcing a choice between one way of seeing the world and another, when a creative synthesis seems more reasonable. Shifting science education toward a strategy that promotes “education through science” (as opposed to “science through education”) recognizes student needs and motivations as critical to learning, and opens up channels for introducing science as something relevant and enriching to students’ lives. Grounded in the German concept of Allgemeinbildung [link], which describes “the competence for participation in critical dialogue on currently important matters,” this approach has been found effective in motivating students to develop the skills needed to apply empirical evidence when forming arguments and making decisions.
In extending the idea of the perceived value of science in sociopolitical debates, students can build important frameworks for effectively engaging with society in the future. A relevant example is the increasing accessibility of genome editing technology, an area of science poised to deeply impact the future of society. A recent report [link] on the ethics of genome editing, assembled by a panel of clinicians and scientists (experts), recommends that the United States proceed — cautiously — with genome editing studies on human embryos. However, as pointed out [link], this panel failed to include ANY public participation in the decision. The effort fundamentally ignores “a more conscious evaluation of how this impacts social standing, stigma and identity, ethics that scientists often tend to cite pro forma and then swiftly scuttle.” As this discussion increasingly shifts into the mainstream, it will be essential to engage with the public in ways that promote a careful and thoughtful analysis of scientific issues [link], as opposed to hyperbolic fear mongering (as seen in most GMO discussions) [link] or reserving genetic engineering for the hyper-affluent. Another, more timely example involves the extent to which an individual’s genome can be used to predict a future outcome or set of outcomes, and whether this information can be used by employers in any capacity [link]. By incorporating a clear description of how science is practiced (including the factors that influence what is studied, and what is done with the knowledge generated) alongside the transfer of traditional scientific knowledge, we can help provide future citizens with tools for critical evaluation as they navigate these uncharted waters.
It is also worth noting that presenting science in a sociopolitical context can support the learning of more than just science. Current approaches to education tend to compartmentalize academic subjects, framing them as standalone lessons and philosophies. Students go through the motions of the school day, attending English class, then biology, then social studies, then trigonometry, etc., and the natural connections among subject areas are often lost. When scientific topics are framed in the context of sociopolitical discussions and debates, students have more opportunities to explore aspects of society that are, at face value, unrelated to science.
Drawing from lessons commonly taught in American History class, the Manhattan Project [link] offers an excellent opportunity to discuss the fundamentals of nuclear chemistry as well as the sociopolitical implications of a scientific discovery. At face value, harnessing nuclear fission marked a dramatic milestone for science. However, when this technology was pursued by the United States government during World War II — at the urging of the famed physicist Albert Einstein and others — it opened up the possibility of an entirely new category of warfare, impacting individuals and communities at all levels. The reactions set off by the Manhattan Project, and the consequent 1945 bombings of Hiroshima and Nagasaki, are still felt in international power politics, agriculture, medicine, ecology, economics, research ethics, transparency in government, and, of course, the Presidency of the United States. The Manhattan Project thus represents an excellent case study of the relationship between science, technology, and society, and of a discovery’s ongoing influence on those relationships. The double-edged nature of scientific discoveries is an important feature of the scientific enterprise, and should be taught to students accordingly.
A more meaningful approach to science education requires including the social aspects of the scientific enterprise. When considering the heliocentric view of the solar system, it is worth recognizing its social impacts as well as its scientific foundations (particularly before Kepler). If we want people to see science as a human enterprise that can inspire rather than dictate decisions and behaviors, we will need to reshape how science — and scientists — are viewed in the public eye. As written here [link], we need to restore the relationship between scientific knowledge and social goals by specifically recognizing how
science can be used, inappropriately, to drive public opinion. As an example, in the context of CO2-driven global warming, one could (with equal scientific validity) seek to reduce CO2 generation or increase CO2 sequestration. Science does not tell us which is better from a human perspective (although it could tell us which is likely to be easier, technically). While science should inform relevant policy, we must also acknowledge the limits of science and how it fits into many human contexts. There is clearly a need for scientists to increase participation in public discourse, and explicitly consider the uncertainties and risks (social, economic, political) associated with scientific observations. Additionally, scientists need to recognize the limits of their own expertise.
A pertinent example was the call by Paul Ehrlich to limit, in various draconian ways, human reproduction – a political call well beyond his expertise. In fact, recognizing when someone has gone beyond what science can legitimately tell us [link] could help rebuild respect for the value of science-based evidence. Scientists and science educators need to be cognizant of these limits, and genuinely listen to the valid concerns and hesitations held by many in society, rather than dismiss them. The application of science has been, and will always be, a sociopolitical issue, and the more we can do to prepare future decision makers, the better society will be.
Jeanne Garbarino, PhD, Director of Science Outreach, The Rockefeller University, NY, NY
Jeanne earned her Ph.D. in metabolic biology from Columbia University, followed by a postdoc in the Laboratory of Biochemical Genetics and Metabolism at The Rockefeller University, where she now serves as Director of Science Outreach. In this role, she works to provide K-12 communities with equitable access to authentic biomedical research opportunities and resources. You can find Jeanne on social media under the handle @JeanneGarb.
Developing a coherent understanding of a scientific idea is neither trivial nor easy and it is counter-productive to pretend that it is.
For some time now the idea of “active learning” (as if there were any other kind) has become a mantra in the science education community (see Active Learning Day in America: link). Yet the situation is demonstrably more complex, and depends upon what exactly is to be learned, something rarely stated explicitly in published papers on active learning (an exception, with respect to understanding evolutionary mechanisms, can be found here: link). The best of such work generally relies on results from multiple-choice “concept tests” that provide, at best, a limited (low-resolution) characterization of what students know. Moreover, it is clear that, as in other areas, research into the impact of active-learning strategies is rarely reproduced (see: link, link & link).
As is clear from the level of aberrant and nonsensical talk about the implications of “science” currently on display in both the public and private spheres (link; link), the task of effective science education and rigorous, data-based decision making is not a simple one. As many have noted, there is little about modern science that is intuitively obvious; most of it is deeply counterintuitive or actively disconcerting (see link). In the absence of a firm religious or philosophical perspective, scientific conclusions about the size and age of the Universe, the various processes driving evolution, and the often grotesque outcomes they can produce can be deeply troubling; one can easily embrace a solipsistic, ego-centric, and/or fatalistic belief/behavioral system.
There are two videos of Richard Feynman that capture much of what is involved in, and required for, understanding a scientific idea and its implications. The first involves the basic scientific process, in which the path to a scientific understanding of a phenomenon begins with a guess; but these are a special kind of guess, namely guesses that imply unambiguous (and often quantitative) predictions of what future (or retrospective) observations will reveal (video: link). This scientific discipline (link) implies a willingness to accept that scientifically meaningful ideas need to have explicit, definable, and observable implications, while those that do not are non-scientific and need to be discarded. Witness the stubborn adherence to demonstrably untrue ideas (such as where past Presidents were born, or how many people attended an event or voted legally), which marks superstitious and non-scientific worldviews. Embracing a scientific perspective is not easy, nor is letting go of a favorite idea (or prejudice). The difficulty of thinking and acting scientifically needs to be kept in mind by instructors; it is one of the reasons that peer review continues to be important – it reminds us that we are part of a community committed to the rules of scientific inquiry and its empirical foundations, and that we are accountable to that community.
The second Feynman video (video: link) captures his description of what it means to understand a particular phenomenon scientifically – in this case, why magnets attract one another. The take-home message is that many (perhaps most) scientific ideas require a substantial amount of well-understood background information before one can even begin a scientifically meaningful consideration of the topic. Yet all too often such background information is not considered by those who develop (and deliver) courses and curricula. To use an example from my own work (in collaboration with Melanie Cooper @MSU), it is very rare to find course and curricular materials (textbooks and such) that explicitly recognize (or illustrate) the underlying assumptions involved in a scientific explanation. Often the “central dogma” of molecular biology is taught as if it were simply a description of molecular processes, rather than an explicit recognition that information flows from DNA outward (link) (and into DNA through mutation and selection). Similarly, it is rare to see it stated explicitly that random collisions with other molecules supply the energy needed for chemical reactions to proceed or for intermolecular interactions to be broken, or that the energy released upon complex formation is transferred to other molecules in the system (see: link), even though these events control essentially all aspects of the systems active in organisms, from gene expression to consciousness.
The basic conclusion is that achieving a working understanding of a scientific idea is hard, and that, while it requires an engaging and challenging teacher and a supportive and interactive community, it is also critical that students be presented with conceptually coherent content that acknowledges and presents all of the ideas needed to actually understand the concepts and observations upon which a scientific understanding is based (see “now for the hard part”: link). Bottom line: there is no simple or painless path to understanding science – it involves a serious commitment on the part of the course designer as well as the student, the instructor, and the institution (see: link).
This brings us back to the popularity of the “active learning” movement, which all too often ignores course content and the establishment of meaningful learning outcomes. Why then has it attracted such attention? My own guess is that it provides a simple solution that circumvents the need for instructors (and course designers) to significantly modify the materials that they present to students. The current system rarely rewards or provides incentives for faculty to carefully consider the content that they are presenting, asking whether it is relevant or sufficient for students to achieve a working understanding of the subject – an understanding that enables the student to accurately interpret and then generate reasoned, evidence-based (plausible) responses.
Such a reflective reconsideration of a topic will often result in dramatic changes in course (and curricular) emphasis; traditional materials may be omitted or relegated to more specialized courses. Such changes can provoke a negative response from other faculty, based on often inherited (and uncritically accepted) ideas about course “coverage”, as opposed to desired and realistic student learning outcomes. Given the resistance of science faculty (particularly at institutions devoted to scientific research) to investing time in educational projects (often a reasonable strategy, given institutional reward systems), there is a seductive lure to easy fixes. One such fix is to leave the content unaltered and simply to “adopt a pose” in the classroom.
All of which brings me to the main problem – the frequency with which superficial (low-cost, but often ineffectual) strategies can act to inhibit and distract from significant, but difficult, reforms. One cannot help but be reminded of other quick fixes for complex problems, the most recent being the idea, promulgated by Amy Cuddy (Harvard: link) and others, that adopting a “power pose” can overcome various forms of experience- and socioeconomic-based prejudices and injustices, as if overcoming a person's experiences and situation were simply a matter of will. The message is that those who do not succeed have only themselves to blame, because the way to succeed is (basically) so damn simple. So imagine one's surprise (or not) upon discovering that the underlying biological claims associated with “power posing” are not true, or at least cannot be replicated, even by the co-authors of the original work (see Power Poser: When big ideas go bad: link). It seems the lesson that needs to be learned, both in science education and more generally, is that claims that seem too easy or universal are unlikely to be true. It is worth remembering that even the most effective modern (and traditional) medicines all have potentially dangerous side effects. Why? Because they lead to significant changes in the system, and such modifications can discomfort the comfortable. This stands in stark contrast to non-scientific approaches – homeopathic “remedies” come to mind – which rely on placebo effects (which is not to say that taking ineffective remedies does not itself involve risks).
As in the case of effective medical treatments, the development and delivery of engaging and meaningful science education reform often requires challenging current assumptions and strategies – strategies that are often rooted in outdated traditions, and influenced more by the constraints of class size and the logistics of testing than by the importance of achieving demonstrable enhancements of students' working understanding of complex ideas.
The ubiquitous impact of science-based information and technologies in everyday life suggests that misunderstanding how science works can have serious consequences. Yet people’s decisions and strongly held beliefs are often uncoupled from, or at odds with, the conclusions and recommendations of empirical studies and scientific consensus.
In some cases, the implications of misunderstanding or rejecting science are more or less harmless – does it really matter if someone believes the Earth is the center of the Universe? In other cases, they can be critical. Perhaps nowhere is understanding science more important than in the context of the anti-vaccination (anti-VAX) movement. While infant/child vaccination rates increased for the major vaccine-preventable diseases between 2000 and 2014, there are growing pockets of vaccine refusal in multiple locations across the United States. The real-world and serious implication of vaccine refusal is the disruption of local herd immunity, resulting in outbreaks of vaccine-preventable disease. For example, regions of California, including Marin, Napa, and Sonoma counties, have seen significant increases in the number of pertussis and measles cases in recent years, largely related to increased intentional vaccine refusal among parents. Similarly, we have seen a revival of measles and mumps cases in and around Brooklyn, NY.
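The arithmetic behind herd immunity helps explain why even modest pockets of refusal matter. The sketch below is my own illustration, not part of the original post: it uses the standard relation that the fraction of a population that must be immune to block sustained transmission is 1 - 1/R0, where R0 is the basic reproduction number; the R0 values shown are rough textbook ranges, used here only as examples.

```python
# Herd immunity threshold: the share of a population that must be immune
# so that, on average, each case infects fewer than one new person.
# (Illustrative sketch; R0 values are approximate textbook figures.)

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block sustained spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("pertussis", 14.0), ("polio", 6.0)]:
    print(f"{disease}: R0 ~ {r0:.0f} -> ~{herd_immunity_threshold(r0):.0%} immunity needed")
```

For a highly contagious disease like measles, the threshold sits above 90 percent, which is why relatively small clusters of intentional refusal are enough to reopen the door to local outbreaks.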
According to survey data, vaccine refusal usually relates to concerns about vaccine safety and perceived efficacy. Specifically, parents either misunderstand or flat out reject the data-based debunking of a causal relationship between vaccination and autism, based on a seriously flawed, and now retracted study (see PLoS Blog: Public Health Takes on Anti-Vaccine Propaganda). Additionally, parents may not see the point of vaccination since many vaccine-preventable diseases, and their serious health implications, have become much less common in the developed world. We now live in a world free of smallpox and almost free of polio, making it difficult for people to connect to the painful consequences of their reappearance.
Parents who choose to forgo or hesitate in vaccinating their children tend to be white, well-educated, and living in households making $75,000 or more annually. Presumably, such parents have ready access to relevant information, spend a lot of time researching the topic of vaccines, and, according to Seth Mnookin in The Panic Virus “take pride in being intellectually curious, thoughtful, and rational.” These are individuals and communities that are generally assumed to support science and are scientifically literate by various measures (for example, they know that the Earth is not flat and that it travels around the Sun, they might even know that antibiotics do not “kill” viruses, although they may not know exactly why this is the case). But, for some reason, these parents become fully invested in exploring the “debate” relating to vaccine safety and efficacy, and are not able to differentiate scientifically supported conclusions from those that are not.
Maybe this mindset is influenced by the “healthcare consumerism” movement, which encourages patients to be more involved in their healthcare decisions, marking a shift away from the authoritarian, doctor-knows-best default. Furthermore, given the growing strain on the already faulty infrastructure of our healthcare system, this “look it up, dear” trend has become an accepted norm. The ability to conduct “research” on an aspect of healthcare, and to have an appreciation of what “research” truly entails, is a direct application of science literacy. Assuming such “research” is done correctly, the internet can be an excellent source of information for patients, with the potential to fill in healthcare knowledge gaps.
One immediate driver of the anti-VAX movement, however, appears to be that internet search results on vaccines yield a very mixed bag, potentially compromising the attempts at “research.” For instance, when I plugged “are vaccines safe” into Google (December 2016), sites like CDC’s Vaccine Safety and the US Health and Human Service’s Vaccine Safety appeared in the feed. These sites, however, were outnumbered by anti-VAX (aka fake science) sites that can have the look and feel of a legitimate resource. In fact, the internet is the main platform for anti-VAX messaging; parents who exempt children from vaccination are likely to have made their decision based on information they read online. Interestingly, anti-VAX information is often couched in non-science messages demonizing “lying big pharma,” while simultaneously promoting the value of alternative (that is, non-science-based) medicine – as of 2015, an unregulated (and not completely honest) $30B per year industry. One might suspect that anti-VAX sentiments are reinforced by alternative medicine and nutraceutical lobbying, further demonstrating an inability to correctly identify conflicts of interest and a misunderstanding or flat out rejection of evidence-based medicine.
While I value the opportunity for access to scientific and medical information, and believe that patients owe it to themselves to research areas specifically related to their health and illnesses, such information is often presented in a context that assumes a general understanding of processes fundamental to the given explanation. For instance, when delivering information about vaccine science, is it to be assumed that readers are familiar with how the immune system works? Are parents aware of the impact of withholding vaccination on their own and their neighbors' children's well-being, or of the potential immediate and long-term effects of contracting the disease that the vaccine protects against? Do readers have a handle on autism spectrum disorder, and on what we know – and don't know – about related genetic and environmental influences? When explaining a “why,” such as why someone should vaccinate their child, “you have to be in some framework that you allow something to be true, otherwise you are perpetually asking why” (Feynman on BBC, 1983). In presenting materials on vaccines, have we clearly established what needs to be understood and accepted? Does our audience have the foundational knowledge to understand the arguments for, and mechanisms behind, vaccination? These are questions that extend beyond vaccination, and generally go well beyond what is meant by scientific literacy.
Confounding the difficulties associated with being a critical consumer of healthcare information is the human tendency to connect with stories – particularly stories with negative outcomes – regardless of factual content. According to a McKinsey research summary on healthcare consumerism, “there is often a disconnect between what consumers believe matters most and what influences their opinions most strongly.” This also relates to the concept of biased assimilation, in which people unknowingly cherry-pick the arguments that best support the values held by those in their community. In the context of the vaccine discussion, parents can be influenced by the (abundant) stories linking vaccines to the onset of a spectrum of health issues, even if they doubt the validity of the story – and even more so if they are part of a community that supports anti-VAX sentiments. This natural human behavior, reinforced by celebrity-backed, widely publicized campaigns that falsely link vaccines to autism (despite the retraction of the original study AND numerous subsequent studies disproving these claims), likely sustains anti-VAX momentum. Given the incoming presidential administration's suggested openness to anti-VAX arguments, this momentum could be amplified.
Vaccine hesitancy and rejection are not restricted to the United States. In fact, nearly every country experiences pockets of vaccine-preventable disease outbreaks resulting from vaccine hesitancy or refusal, as described in a recent WHO working group report on the topic. Similar to trends seen in the United States, the choice to align with anti-VAX sentiments has complex underpinnings, often relating, in part, to personal/community belief systems and distrust of healthcare providers and institutions. As an extreme example, the Taliban [link] are vehemently opposed to vaccinations, viewing vaccination programs as antithetical to their traditional beliefs and culture; such programs can be seen as nefarious plots to “inject” Western culture into traditional societies. One result has been terrorist attacks targeting polio workers in Pakistan, killing 60 since 2012.
At the risk of oversimplifying the issues related to vaccine hesitancy and rejection, people's decisions for themselves and their children might have less to do with the message itself, and more to do with how – and in what context – the message is delivered. In fact, a meta-analysis of persuasive techniques common to anti-VAX websites suggests that these sites go far beyond simply providing (inaccurate) information on vaccine safety. They make genuine connections to parents' values (e.g., freedom of choice) and lifestyles (e.g., healthy eating), adeptly contextualizing anti-VAX sentiments as part of holistic well-being. Such sites skillfully cultivate feelings of trust and credibility by aiming their message at the more human side of things. These sites get human behavior; pro-objective-evidence sites often do not. When looking at the anti-VAX movement, we see the power of personal stories, and of presenting anti-VAX “science” alongside related messages that promote the values and ideals of the target population.
So how do we apply these lessons to improving the public’s understanding of particular science-based decisions? While it may feel counterintuitive, perhaps we should stop trying to win arguments using the traditional academic approach, with data, error bars, and p-values, as these risk strengthening the emotional appeal of anti-evidence, anti-scientific viewpoints. Instead, we can present data-based conclusions in compelling and effective ways, keeping in mind the connections and disconnections between human emotion and rationality. As the world’s population continues to soar, the importance of humanizing our messages, arguments, and conclusions is paramount.
There is no universal equation governing scientifically literate decision-making, and it is unlikely that one will ever be identified, simply because human behavior is difficult to predict. From a practical perspective, this may mean that “scientific literacy” as an over-arching concept is less useful than fostering a deeper understanding of relevant issues in science and medicine. In recognizing the need to provide people with the specific knowledge to make informed, data-based decisions, it is implied that facts (empirical observations) can be clearly differentiated from non-facts. However, the situation is much more complicated: the interpretation of “facts” is shaped by the context of individual experiences (the Saigon, 1965 episode of the Revisionist History podcast provides an excellent example of such an analysis). The inability to predict how someone will interpret empirical evidence is the giant wrench stuck in the gears of the science literacy machine. With regard to the anti-VAX movement, we are dealing with an emotionally charged phenomenon that is deeply intertwined with human nature (the need to protect ourselves and our children), and the process goes beyond answering general true-false questions. While difficult, the effort to better understand and meaningfully correct points of failure in communicating any controversial science issue is likely to be beneficial, particularly when we consider the consequences of scientific illiteracy.
Jeanne Garbarino, PhD, Director of Science Outreach, The Rockefeller University, NY, NY
Jeanne earned her Ph.D. in metabolic biology from Columbia University, followed by a postdoc in the Laboratory of Biochemical Genetics and Metabolism at The Rockefeller University, where she now serves as Director of Science Outreach. In this role, she works to provide K-12 communities with equitable access to authentic biomedical research opportunities and resources. You can find Jeanne on social media under the handle @JeanneGarb.
As socioeconomic inequality grows, the publicly acknowledged importance of traits such as honesty, loyalty, self-sacrifice, and reciprocity appears to have fallen out of favor with some of our socioeconomic and political elites. How many people condemn a person as dishonest one day and embrace them the next? Dishonesty and selfishness no longer appear to be taboo, or a source of shame that needs to be expiated (perhaps my Roman Catholic upbringing is bubbling to the surface here). A disavowal of shame and guilt, and the lack of serious social censure, appear to be on the rise, particularly among the excessively wealthy and privileged, as if the society from which they extracted their wealth and fame does not deserve their active participation and support [link: Hutton, 2009]. They have embraced a “winner takes all” strategy.
If an understanding of evolutionary mechanisms is weak within the general population [link], the situation is likely to be much worse when it comes to understanding the role and outcomes of social evolutionary mechanisms. Yet the evolutionary origins of social systems, and the mechanisms by which such systems are maintained against the effects of what are known as “social cheaters”, are critical to understanding and defending human social behaviors such as honesty, cooperation, loyalty, self-sacrifice, self-restraint, mutual respect, responsibility, and kindness.
While evolutionary processes are often caricatured as favoring selfish behaviors, the facts tell a more complex, organism-specific story [link: Aktipis 2016]. Cooperation between organisms underlies a wide range of behaviors, from sexual reproduction and the formation of multicellular organisms (animals, plants, and people) to social systems, ranging from microbial films to bee colonies and construction companies [see Bourke, 2011: Principles of Social Evolution] [Wikipedia link].
One of the best-studied social systems involves the cellular slime mold Dictyostelium discoideum [Wikipedia link]. When life is good – that is, when the world is moist and bacteria, the food of these organisms, are plentiful – D. discoideum live and reproduce happily as single-celled, amoeba-like individuals in soil. Given their small size (~5 μm diameter), they cannot travel far, but that does not matter as long as their environment is hospitable. When the environment turns hostile, however, an important survival strategy is to migrate to a new location – but what is a little guy to do? The answer in this species is to cooperate. Individual amoebae begin to secrete a chemical that attracts others; eventually thousands of individuals aggregate to form a multicellular “slug”; slugs migrate around to find a hospitable place and then differentiate into a fruiting body that stands ~1 mm (roughly 200 times the diameter of an individual amoeba) above the ground. To form the stalk that lifts the fruiting body into the air, a subset of cells (once independent individuals) change shape. These stalk cells die, while the rest of the cells form the fruiting body, which consists of spores – cells specialized to survive dehydration. Spores are released into the air, where they float and are dispersed over a wide range. Those spores that land in a happy place (moist and verdant) revert to the amoeboid lifestyle, eat, grow, divide, and generate a new (clonal) population of amoeboid cells: they have escaped from a hostile environment to inhabit a new world, a migration made possible by the sacrifice of the cells that became the stalk (and died in the process). Similar types of behavior occur in a wide range of macroscopic organisms [Scrambling to the top: link]. Normally, who becomes a stalk cell and who becomes a spore is a stochastic process [see previous PLoS blog post on stochastics and biology education].
Cheaters in the slime mold system are individuals who take part in the aggregation process (they respond to the migration signal and become part of the slug), but have altered their behavior to avoid becoming stalk cells – no self-sacrifice for them. Instead they become spores. In the short run, such a strategy can be beneficial to the individual; after all, it has a better chance of survival if it can escape a hostile environment. But imagine a population made up only of cheaters – no self-sacrifice, no stalk, no survival advantage = death [see link: Strassmann & Queller, 2009].
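The short-term advantage and long-term catastrophe of cheating can be captured in a toy model. The sketch below is my own illustration (the stalk fraction, offspring number, and the rule that a slug with no stalk cells disperses no spores are all simplifying assumptions, not measured D. discoideum parameters):

```python
# Toy model of cheaters in the Dictyostelium fruiting-body cycle.
# Assumptions (illustrative only): each cycle a fixed fraction of
# cooperators dies as stalk, every surviving spore leaves the same
# number of clonal offspring, and a slug without stalk cells cannot
# lift a fruiting body, so it disperses no spores at all.

STALK_FRACTION = 0.2   # share of cooperators sacrificed as stalk each cycle
OFFSPRING = 3.0        # clonal offspring per surviving spore

def one_cycle(cooperators: float, cheaters: float) -> tuple[float, float]:
    if cooperators == 0:           # no stalk -> no fruiting body -> no spores
        return 0.0, 0.0
    surviving_cooperators = cooperators * (1 - STALK_FRACTION)
    return surviving_cooperators * OFFSPRING, cheaters * OFFSPRING

# Within a mixed population, cheaters steadily gain ground...
coops, cheats = 99.0, 1.0
for cycle in range(10):
    coops, cheats = one_cycle(coops, cheats)
print(f"cheater frequency after 10 cycles: {cheats / (coops + cheats):.1%}")

# ...but a population made up only of cheaters leaves nothing behind.
print(one_cycle(0.0, 1000.0))
```

The point of the sketch is the one made in the text: never paying the stalk cost is a winning move for an individual cheater inside a cooperative slug, yet it is suicidal once cheaters are all that remain.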
A classic example of social cheating with immediate relevance to the human situation is cancer. Within a sexually reproducing multicellular organism, reproduction is strictly restricted to the cells of the germ line – eggs and sperm. The other cells of the organism, known collectively as somatic cells, have ceded their reproductive rights to the organism as a whole. While somatic cells can divide, they divide in a controlled and strictly regulated (unselfish) way. Somatic cells do not survive the death of the organism – only germ line cells (sperm and eggs) are able to produce a new organism. In the end, cellular cooperation has been a productive strategy; witness the number of different types of multicellular organisms, including humans. If a somatic cell breaks the social contract and cheats – that is, begins to divide (asexually) in an independent manner – it can lead to the formation of a tumor and later, if the cells of the tumor start to migrate within the organism, to metastatic cancer. More rarely (apparently), such cells can migrate between organisms, as in the case of transmissible cancers in dogs, Tasmanian devils, and clams [see links: Murchison 2009 and Ujvari et al 2016]. The growth and evolution of the tumor leads to the death of the organism and the cancer cells' own extinction – another example of the myopic nature of evolutionary processes.
In the case of cancer, the organism's defenses against social cheaters come in two forms: intrinsic defenses within the individual cheater cell, in the form of cell suicide (known by a number of technical terms, including apoptosis, anoikis, and necroptosis) [link: Su et al., 2015], and extrinsic, organismic processes, such as the ability of the organism's immune system to identify and kill cancer cells – a phenomenon with therapeutically relevant implications [link: Ledford, 2014]. We can think of these two processes as guilt + shame (leading to cellular suicide) and policing + punishment (leading to immune-system killing). For a cell to escape growth control and evolve to produce metastatic disease, it needs to inactivate or ignore intrinsic cell-death systems and to evade the immune system.
To consider another example, social systems are based on cooperation, often involving the sharing of resources with those in need. A recent example is the sharing of food (blood) between vampire bats [see link: Carter & Wilkinson, 2013]. The rules, as noted by Aktipis, are simple: 1) ask only when in need, and 2) give when asked and able. In this context, we can identify two types of social cheaters – those who ask when they do not need, and those who fail to give when asked and able. People who refuse to work even when they can, and when jobs are available, fall into the first group; the rich who avoid taxes and fail to donate significant funds to charities fall into the second. It is an interesting question how to characterize those who borrow money and fail to repay it. Bankruptcy laws that protect the wealth of the borrower while leading to losses for the lender might be seen as acting to undermine the social contract (clearly philosophers' and economists' comments here would be relevant).
Given that social systems at all levels are based on potentially costly traits, such as honesty, loyalty, self-sacrifice, and reciprocity, the evolutionary origins of social systems must lie in their ability to increase reproductive success, either directly or through effects on relatives – a phenomenon known as inclusive fitness [Wikipedia link]. Evolutionary processes also render social systems vulnerable to cheating, and so have driven the development of a range of defenses against various forms of social cheaters (see above). But recent political and cultural events appear to be acting to erode and/or ignore society's defenses.
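For readers unfamiliar with the term, inclusive fitness is conventionally formalized by Hamilton's rule – stated here as general background, not as something drawn from the sources linked above. A gene for a costly social (altruistic) behavior can spread when

```latex
r\,b > c
```

where \(c\) is the reproductive cost to the actor, \(b\) is the reproductive benefit to the recipient, and \(r\) is the genetic relatedness between them. This is why self-sacrifice directed at close relatives – or, in the slime mold case, at clone-mates with \(r \approx 1\) – can be favored by selection even though it is fatal to the individual.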
So what to do? Revolution? From a PLoS Science education perspective, one strategy suggests itself: to encourage (require) that students and the broader public be introduced to effective instruction on social evolutionary mechanisms, the traits they can generate (various forms of altruism and cooperation), the reality and pernicious effects of social cheaters, and the importance of defenses against them. In this light, it appears that social evolutionary processes are missing from the Next Generation Science Standards [NGSS link]. Understanding the biology, together with effective courses in civics [see link: Teaching Civics in the Year of The Donald] might serve to bolster the defense of civil society.
December 22, 2016, minor update 23 October 2020 – Mike Klymkowsky
Featured image is used with permission from Matthew Lutz (Princeton University).
Recent political events, the proliferation of “fake news”, and the apparent futility of fact checking in the public domain have led me to obsess about the role played by the public presentation of science. “Truth” can often trump reality – or, perhaps better put, passionately held beliefs can overwhelm a circumspect worldview based on a critical and dispassionate analysis of empirically established facts and theories. Those driven by various apocalyptic visions of the world, whether religious or political, can easily overlook or trivialize evidence that contradicts their assumptions and conclusions. While historically there have been periods during which non-empirical presumptions were called into question, more often than not such periods have been short-lived. Some may claim that the search for absolute truth – truths significant enough to sacrifice the lives of others for – is restricted to the religious; they are sadly mistaken. Political, often explicitly anti-religious, movements are also susceptible, often with horrific consequences – think of Nazism and communist-inspired apocalyptic purges. The history of eugenics and forced sterilization, based on flawed genetic and ideological premises, has similar roots.
Given the seductive nature of belief-based “Truth”, many have turned to science as a bulwark against wishful and arational thinking. The scientific enterprise is an evolving social and empirical (data-based) process: it begins with a guess as to how the world (or rather, some small part of the world) works; the guess's logical implications are then followed out and tested through experiment or observation, leading to the revision (or abandonment) of the original guess, moving it toward a hypothesis and then, as it becomes more explanatory and accurately predictive, and as those predictions are confirmed, into a theory. So science is a dance between speculation and observation. In contrast to a free-form dance, the dance of science is controlled by a number of rigid – and to some, oppressive – constraints [see Feynman & Rothman 2020. How does Science Really work].
Perhaps surprisingly, this scientific enterprise has converged onto a small set of over-arching theories and universal laws that appear to explain much of what is observable; these include the theory of general relativity, quantum and atomic theory, the laws of thermodynamics, and the theory of evolution. With the notable exception of general relativity and quantum mechanics (which have yet to be fully reconciled with each other), these conceptual frameworks appear to be compatible with one another. As an example, organisms, and behaviors such as consciousness, obey, and are constrained by, well-established and (apparently) universal physical and chemical rules.
A central constraint on scientific thinking is that what cannot, even in principle, be known is not a suitable topic for scientific discussion. This leaves outside of the scope of science a number of interesting topics, ranging from what came before the “Big Bang” to the exact steps in the origin of life. In the latter case, the apparently inescapable conclusion that all terrestrial organisms share a complex “Last Universal Common Ancestor” (LUCA) places theoretically unconfirmable speculations about pre-LUCA living systems outside of science. While we can generate evidence that the various building blocks of life can be produced abiogenically (a line of investigation begun with Wohler's synthesis of urea), we can only speculate as to the systems that preceded LUCA.
Various pressures have led many who claim to speak scientifically (or to speak for science) to ignore the rules of the scientific enterprise – they often act as if there are no constraints, no boundaries to scientific speculation. Consider the implications of establishing “astrobiology” programs based on speculation (rather than observation) presented with various levels of certainty as to the ubiquity of life outside of Earth [the speculations of Francis Crick and Leslie Orgel on “directed panspermia” and the pseudoscientific Drake equation come to mind; see Michael Crichton's famous essay on aliens and global warming]. Yet such public science pronouncements appear to ignore (or dismiss) the fact that we know, and can study, only one type of life (at the moment): the descendants of LUCA. They appear untroubled when breaking the rules and abandoning the discipline that has made science a powerful, but strictly constrained, human activity.
Whether life is unique to Earth or not requires future explorations and discoveries that may (or, given the technological hurdles involved, may not) occur. Similarly, postulating theoretically unobservable alternative universes, or the presence of some form of consciousness in inanimate objects [an unscientific speculation illustrated here], crosses a dividing line between belief for belief's sake and the scientific – it distorts and obscures the rules of the game, the rules that make the game worth playing [again, the Crichton article cited above makes this point]. A recent, rather dramatic proposal from some in the physical-philosophical complex has been the claim that the rules of prediction and empirical confirmation (or rejection) are no longer valid – that we can abandon requiring scientific ideas to make observable predictions [see Ellis & Silk]. It is as if objective reality is no longer the benchmark against which scientific claims are judged; perhaps mathematical elegance (see Sabine Hossenfelder's Lost in Math) or spiritual comfort are more important – and well they might be (more important), but they are outside of the limited domain of science. At the 2015 “Why Trust a Theory” meeting, the physicist Carlo Rovelli concluded “by pointing out that claiming that a theory is valid even though no experiment has confirmed it destroys the confidence that society has in science, and it also misleads young scientists into embracing sterile research programs.” [quote from Massimo Pigliucci's Footnotes to Plato blog].
While the examples above are relatively egregious, it is worth noting that pressures for glory, fame, and funding impact science far more frequently – leading to claims that are less obviously non-scientific, but that bend (and often break) the scientific charter. Take, for example, claims about animal models of human diseases. Often the expediencies of research make such models necessary and productive, but they remain a scientific compromise. While mice, rats, chimpanzees, and humans are related evolutionarily, each lineage also carries distinct traits shaped by its own evolutionary history and by the adaptive and non-adaptive processes and events associated with that history. A story from a few years back illustrates how differences between the immune systems of mice and humans help explain why the search, in mice, for drugs to treat sepsis in humans was so unsuccessful [Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills]. A similar situation occurs when studies in the mouse fail to explicitly acknowledge how genetic background influences experimental phenotypes [Effect of the genetic background on the phenotype of mouse mutations], or how the details of experimental scenarios influence human relevance [Can Animal Models of Disease Reliably Inform Human Studies?].
Speculations that go beyond science while hiding under its aegis (see any of a number of articles on quantum consciousness) may seem just plain silly, but by abandoning the rules of science they erode the status of the scientific process. How, exactly, would one distinguish a conscious from an unconscious electron?
In science (again, as pointed out by Crichton) we do not agree through consensus but through data and respect for critically analyzed empirical observations. The laws of thermodynamics, general relativity, the standard model of particle physics, and evolutionary theory are conceptual frameworks that we are forced (if we are scientifically honest) to accept. Moreover, the implications of these frameworks can be annoying to some: there is no possibility of a “zero waste” process involving physical objects, no free lunch (perpetual motion machine), no efficient, intelligently-designed evolutionary process (just blind variation and differential reproduction), and no zipping around the galaxy. The apparent limitation of motion to the speed of light means that a “Star Wars” universe is impossible – happily, I would argue, given the number of genocidal events associated with that fictional vision – just too many storm troopers for my taste.
Whether our models of the behavior of Earth’s climate or the human brain can ever be completely accurate (deterministic), given the roles of chaotic and stochastic events in these systems, remains to be demonstrated; until it is, there is plenty of room for conflicting interpretations and prescriptions. That atmospheric levels of greenhouse gases are increasing due to human activities is unarguable; what that implies for future climate is less clear; and what to do about it (a social, political, and economic discussion informed, but not determined, by scientific observations) is another question entirely.
As we discuss science, we must teach (and continually remind ourselves, even if we are working scientific practitioners) about the limits of the scientific enterprise. As science educators, one of our goals must be to help students develop an appreciation for an honest and critical attitude toward observations and conclusions, and a recognition of the limits of scientific pronouncements. We need to explicitly identify, acknowledge, and respect the constraints under which effective science works, and to be honest in labeling when we have left scientific statements behind, lest we begin to walk down the path of little lies that morph into larger ones. Unlike politicians and various religious and secular mystics, we should know better than to be seduced into abandoning scientific discipline, and all that that entails.
Minor update + figures reintroduced 20 October 2020
Stochastic processes are often presented in terms of random, that is, unpredictable, events. This framing obscures the reality that stochastic processes, while more or less unpredictable at the level of individual events, are well behaved at the population level. It also obscures the role of stochastic processes in a wide range of predictable phenomena. In atomic systems, for example, unknown factors determine the timing of the radioactive decay of a particular unstable atom, yet the rate of radioactive decay is highly predictable in a large enough population. Similarly, in the classical double-slit experiment the passage of a single photon, electron, or C60 molecule is unpredictable, while the behavior of a larger population is perfectly predictable. The macroscopic predictability of Brownian motion (a stochastic process) enabled Einstein to argue for the reality of atoms. Likewise, the dissociation of a molecular complex or the occurrence of a chemical reaction, driven as they are by thermal collisions, are stochastic processes, whereas dissociation constants and reaction rates are predictable. This unpredictability at the individual level combined with predictability at the population level is the hallmark of stochastic, as compared to truly random (that is, unpredictable), behaviors.
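The contrast between individual unpredictability and population-level predictability is easy to demonstrate. Below is a minimal Python sketch of radioactive decay – a toy model, not tied to any particular isotope, with an arbitrary illustrative per-step decay probability: no single atom’s decay time can be predicted, yet the population declines in a thoroughly predictable way.

```python
import random

def simulate_decay(n_atoms, p_decay, n_steps):
    """Toy decay model: each surviving atom decays with fixed
    probability p_decay per time step (individually unpredictable),
    yet the population declines exponentially (predictable)."""
    survivors = []
    remaining = n_atoms
    for _ in range(n_steps):
        # each surviving atom independently "rolls the dice"
        remaining = sum(1 for _ in range(remaining) if random.random() > p_decay)
        survivors.append(remaining)
    return survivors

random.seed(1)
counts = simulate_decay(n_atoms=10_000, p_decay=0.1, n_steps=10)
# after each step roughly 90% survive: the population behaves
# predictably even though no single decay can be predicted
print(counts)
```

Running the sketch with different seeds changes which atoms decay when, but the overall exponential decline barely budges – the population-level regularity the text describes.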
Single-cell and single-molecule studies increasingly provide mechanistic insights into a range of biological processes, from evolutionary to cognitive and pathogenic mechanisms. The effects of stochastic events are complicated by the developing and adaptive nature of biological systems and appear to be influenced by genetic background. In some cases, homeostatic (feedback) mechanisms return the system to its original state. In others, the stochastic expression (or mutation) of a particular gene (or set of genes) leads to a cascade of downstream effects that change the system, such that subsequent events become more or less probable, a process nicely illustrated in recent real-time studies of the evolution of antibiotic resistance in bacteria (←FIG & a seriously cool video). The stochastic (molecular clock) nature of an organism’s intrinsic mutation rate has recently been used, with the ExAC dataset, to visualize the impact of selective and non-adaptive effects on human genes.
Pedagogical studies: The “Framework for K-12 Science Education” ignores stochastic processes altogether, while the “Vision and Change in Undergraduate Biology Education” document contains a single point that calls for “incorporating stochasticity into biological models” (p. 17), but omits details of what this means in practice. People (even scholars) often have a difficult time developing an accurate understanding of stochastic processes (see “Understanding Randomness and its Impact on Student Learning” and “Fooled by Randomness”). The failure to appreciate the ubiquity of stochastic processes in biological systems has been an obstacle to the acceptance of Darwinian evolution. In this light, it seems well past time to rethink the foundational roles of stochastic processes in biological (as well as chemical and physical) systems, and how best to introduce such processes to students through coherent course narratives and supporting materials.
A number of studies indicate that students call upon deterministic models to explain a range of stochastic processes. The fact that students are all too often introduced to the behaviors of cellular- and molecular-level biological systems through depictions that are overtly deterministic does not help the situation. In the majority of instructional videos, for example, molecules appear to know where they are heading and move there with a purpose. Similarly, the folding of polypeptides is often depicted as a deterministic process, although the proliferation of model-based simulations offers a more realistic depiction (see below); that said, the widespread involvement of chaperones is rarely acknowledged. Macromolecules are commonly depicted as rigid rather than dynamic. The thermally driven opening and closing of the DNA double helix (a consequence of the weakness of intermolecular interactions) is rarely illustrated. Molecules recognize one another and (apparently) stay locked in their mutual embrace forever; the role of thermal collisions in driving molecular dissociation (and binding specificity) is rarely considered in most textbooks and, presumably, in the classes that use them. Moreover, the factors involved in intermolecular interactions are often poorly understood, even after the completion of conventional university-level chemistry courses. The energetic factors that determine enzyme specificity and reaction rates, and the binding of transcription factors to their target DNA sequences, as well as the effects of mutations on these and other processes, often go uncommented on. It is not at all clear whether students appreciate that thermal collisions are responsible for reversing molecular interactions or that they supply a reaction’s activation energy.
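One antidote to the “molecules know where they are heading” depiction is a simple random-walk simulation. The Python sketch below (a toy one-dimensional model with made-up parameters) shows that although each “molecule” takes entirely undirected steps, the population spreads in a statistically predictable way: mean displacement near zero, root-mean-square displacement growing as the square root of the number of steps.

```python
import random

def random_walk_displacements(n_molecules, n_steps):
    """Each 'molecule' takes undirected +1/-1 steps (standing in for
    thermal collisions). No molecule is going anywhere in particular,
    yet the population spreads predictably: mean displacement ~ 0,
    RMS displacement ~ sqrt(n_steps)."""
    finals = []
    for _ in range(n_molecules):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        finals.append(x)
    return finals

random.seed(2)
finals = random_walk_displacements(n_molecules=5000, n_steps=100)
mean = sum(finals) / len(finals)
rms = (sum(x * x for x in finals) / len(finals)) ** 0.5
# mean near 0, RMS near sqrt(100) = 10: diffusion without direction
print(round(mean, 2), round(rms, 1))
```

Nothing in the code gives any molecule a goal, yet diffusive spreading emerges anyway – exactly the point a purposeful animation obscures.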
Cells with the same genotype are implicitly expected to behave in identical ways (display the same phenotypes), a situation at odds with direct observation (see “Stochastic Gene Expression in a Single Cell” and “What’s Luck Got to Do with It: Single Cells, Multiple Fates, and Biological Non-determinism”), and with the general processes involved in cellular differentiation and social behaviors. Phenotypic penetrance and expressivity also involve stochastic behaviors, together with genetic background effects. It certainly does not help when instructors introduce a stochastic process, such as genetic drift, in the context of the Hardy-Weinberg model, a situation in which genetic drift does not occur. Such presentations are likely to increase student confusion.
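Genetic drift itself is easy to demonstrate honestly, outside the Hardy-Weinberg idealization, with a Wright-Fisher simulation – a standard textbook model, sketched below in Python with arbitrary illustrative population sizes. Each generation’s 2N allele copies are drawn at random from the previous generation’s allele frequency; frequencies wander strongly in small populations and barely at all in large ones.

```python
import random

def wright_fisher(p0, pop_size, n_generations):
    """Wright-Fisher model of genetic drift: each generation, 2N allele
    copies are sampled (binomially) from the previous generation's
    frequency p. In the infinite-population Hardy-Weinberg idealization
    p never changes; in any finite population it wanders and will
    eventually fix or be lost."""
    p = p0
    freqs = [p]
    for _ in range(n_generations):
        copies = sum(1 for _ in range(2 * pop_size) if random.random() < p)
        p = copies / (2 * pop_size)
        freqs.append(p)
    return freqs

random.seed(3)
small = wright_fisher(p0=0.5, pop_size=20, n_generations=100)
large = wright_fisher(p0=0.5, pop_size=20_000, n_generations=100)
# drift is strong in the small population, nearly invisible in the large one
print(round(small[-1], 2), round(large[-1], 3))
```

Plotting several small-population runs side by side makes the point even more vividly: same starting frequency, same rules, different fates.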
It is our impression that the typical instructional approach is to present molecular-level processes in terms of large populations of molecules that behave in a deterministic manner. Consider the bacterium Escherichia coli’s lac operon, a group of genes that has been a workhorse of modern molecular biology and a common context through which to present the regulation of gene expression. Expression of the lac operon results in the synthesis of two proteins (lactose permease and β-galactosidase) that enable lactose to enter the cell and be converted into the monosaccharides glucose and galactose (which can be metabolized further) and into allolactose, which binds to the lac repressor protein and inhibits its binding to DNA, allowing expression of the lac operon. When the bulk behavior of a bacterial culture is analyzed, expression of the lac operon increases as a smooth function over time (in the absence of other energy sources) (FIG. ↑). The result is that expression of the proteins required for lactose metabolism is restricted to situations in which lactose is present.
The mechanistic quandary, rarely if ever considered explicitly as far as we can tell, is how the lac operon can “turn on” when the entry of lactose into the cell and the inactivation of the lac repressor both depend upon the operon’s expression. The situation becomes clear only when we consider the behavior of individual cells: LacZ expression goes from off to fully on in a stochastic manner (FIG. 2↑). Given that there are ~5 to 10 lac repressor molecules and one to two copies of the lac operon per cell, the operon can be transiently expressed whenever it is free of bound repressor. If such a “noisy” event happens to occur when lactose is present in the medium, expression of the lac operon allows lactose to enter the cell, leading to the conversion of lactose into allolactose, the inactivation of the lac repressor, and stable expression of the operon. The stochastic behavior of the system enables individual cells to sample their environment and respond when useful metabolites are present, while minimizing unnecessary metabolic expense (the synthesis of irrelevant polypeptides) when they are not. A similar logic is involved in quorum sensing, the emission of light (via the luciferase system), the regulation of the DNA uptake system, the generation of persister phenotypes, and programmed cell death (which benefits genetically related neighbors).
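The all-or-none, stochastic character of induction can be conveyed with a deliberately oversimplified simulation. In the Python sketch below the molecular detail is collapsed into a single “repressor transiently off the operator” event per time step, with an invented illustrative probability; the point is that the switch-on time of any one cell is unpredictable, while the average onset time across a population is not.

```python
import random

def time_to_induction(p_unbind, lactose_present=True):
    """Toy model of stochastic lac operon induction. Each time step,
    the handful of repressor molecules briefly free the operator with
    probability p_unbind (an invented illustrative value). If lactose
    is present, that transient 'noisy' burst of expression lets lactose
    in, inactivates the repressor, and latches the operon on."""
    if not lactose_present:
        return None  # a transient burst is not reinforced; no switch
    t = 0
    while True:
        t += 1
        if random.random() < p_unbind:
            return t  # positive feedback locks expression on

random.seed(4)
onset_times = sorted(time_to_induction(p_unbind=0.05) for _ in range(8))
# individual cells switch on at scattered, unpredictable times, while
# the population-average onset (~1/p_unbind = 20 steps) is predictable
print(onset_times)
```

Plotted as single-cell traces, such a model reproduces the off-then-fully-on behavior seen in the figure, rather than the smooth bulk induction curve.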
What is a biology educator to do? The question facing the reflective educational designer and enlightened instructor is how their course should address the multiple roles of stochastic processes within biological systems. I have a short set of recommendations that I think both designers and instructors might want to consider; many have been incorporated into ongoing efforts at course design, which I have only recently (2019) begun to think of as educational engineering. First, it should be explicitly recognized, and conveyed to students, that stochastic processes are difficult to understand, as witnessed by the common beliefs in the Gambler’s fallacy and the “hot hand.” Students need adequate time to work with, and appropriate feedback on, the behavior of stochastic systems. Second, and rather obviously, instructors should illustrate and articulate the role of stochastic processes in a range of biological systems, from phenotypic variation and evolutionary events, including the effects of mutations and various non-adaptive processes (such as genetic drift), to de novo gene formation, gene expression, drug-target interactions, and reaction kinetics. Finally, the stochastic behaviors of molecular (and cellular) level processes should be accurately and explicitly illustrated. Currently available examples include those that illustrate the movement of a water molecule through a membrane, either through an aquaporin molecule or on its own, as well as a PhET applet that illustrates the Elowitz et al. study on stochastic GFP expression in E. coli (and allows for student manipulation of key regulatory parameters). A simulation of the nature of intermolecular interactions and the role of molecular collisions in their formation has been developed for use with the CLUE chemistry curriculum.
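The Gambler’s fallacy mentioned above also lends itself to direct demonstration. The sketch below tallies, over many simulated coin flips, how often heads follows a run of five heads; contrary to the fallacy, the frequency remains close to 0.5, because the flips are independent.

```python
import random

def heads_after_streak(n_flips, streak_len):
    """Test the Gambler's fallacy empirically: after a run of
    streak_len heads, is the next flip any less likely to be heads?
    (It is not; each flip is independent of its predecessors.)"""
    random.seed(5)  # fixed seed so the demonstration is reproducible
    flips = [random.random() < 0.5 for _ in range(n_flips)]
    after_streak = [
        flips[i] for i in range(streak_len, len(flips))
        if all(flips[i - streak_len:i])  # previous streak_len flips all heads
    ]
    return sum(after_streak) / len(after_streak)

# frequency of heads immediately following five heads in a row: still ~0.5
print(round(heads_after_streak(n_flips=200_000, streak_len=5), 3))
```

Letting students vary `streak_len` and watch the frequency stay put is exactly the kind of hands-on work with stochastic systems recommended above.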
This is the first of the blog posts that I prepared for the PLOS science-education blog, a blog that later moved to bioliteracy. I am a PLOS ONE author and Academic Editor. For more about my work click @ ORCID or my lab website.
Scientific literacy – what it is, how to recognize it, and how to help people achieve it through educational efforts – remains a difficult topic. The latest attempt to inform the conversation is a recent National Academies report, “Science Literacy: Concepts, Contexts, and Consequences.” While there is lots of substance to take away from the report, three quotes seem particularly telling to me. The first, from Roberts, points out that scientific literacy has “become an umbrella concept with a sufficiently broad, composite meaning that it meant both everything, and nothing specific, about science education and the competency it sought to describe.” The second, from the report’s authors, is that “In the field of education, at least, the lack of consensus surrounding science literacy has not stopped it from occupying a prominent place in policy discourse” (p. 2.6). And finally, “the data suggested almost no relationship between general science knowledge and attitudes about genetically modified food, a potentially negative relationship between biology-specific knowledge and attitudes about genetically modified food, and a small, but negative relationship between that same general science knowledge measure and attitudes toward environmental science” (p. 5.4).
Recognizing the scientifically illiterate
Perhaps it would be useful to consider the question of scientific literacy from a different perspective, namely: how can we recognize a scientifically illiterate person based on what they write or say? What clues imply illiteracy? To start, let us consider the somewhat simpler situation of standard literacy. Constructing a literate answer implies two distinct abilities: the respondent needs to be able to accurately interpret what the question asks, and they need to recognize what an adequate answer contains. These are not innate skills; students need feedback and practice in both, particularly when the question is a scientific one. In my own experience with teaching, as well as data collected in the context of an introductory course, all too often a student’s answer consists of a single technical term, spoken (or written) as if a word were an argument or explanation. We need a more detailed response in order to accurately judge whether an answer addresses what the question asks (whether it is relevant) and whether it has logical coherence and empirical foundations, information that is traditionally obtained through Socratic interrogation. At the same time, an answer’s relevance and coherence serve as a proxy for whether the respondent understood (accurately interpreted) what was being asked of them.
So what is added when we move to scientific literacy; what is missing from the illiterate response? At the simplest level we are looking for mistakes, irrelevancies, and failures in logic or in recognizing contradictions within the answer, explanation, or critique. The presence of unnecessary language suggests, at the very least, a confused understanding of the situation. A second feature of a scientifically illiterate response is a failure to recognize the limits of scientific knowledge; this includes an explicit recognition of the tentative nature of science, combined with the fact that some things are, theoretically, unknowable scientifically. For example, is “dark matter” real, or might an alternative model of gravity remove its raison d’être? (see “Dark Matter: The Situation has Changed”).
When people speculate about what existed before the “big bang,” or what is happening in various unobservable parts of the multiverse, they have left science for fantasy. Similarly, speculation on the steps to the origin of life on Earth (including what types of organisms, or perhaps better put, living or pre-living systems, existed before the “last universal common ancestor”), on the presence of “consciousness” outside of organisms, or on the probability of life elsewhere in the universe can be seen as transcending what is knowable, or likely to be knowable, without new empirical observations. While this can make scientific pronouncements somewhat less dramatic or engaging, respecting the limits of scientific discourse avoids doing violence to the foundations upon which the scientific enterprise is built. It is worth being explicit: universal truth is beyond the scope of the scientific enterprise.
The limitations of scientific explanations
Acknowledging the limits of scientific explanations is a marker of understanding how science actually works. As an example, while a drug may be designed to treat a particular disease, a scientifically literate person would reject the premise that any such drug could, given the nature of its interactions with other molecular targets and physiological systems, be without side effects – and would recognize that those side effects will vary depending upon the features (genetic, environmental, historical, physiological) of the individual taking the drug. While scientific knowledge reflects a social consensus, it is constrained by rules of evidence and logic (although this might appear anachronistic in the current post-fact and, more recently, alternative-fact age).
Even though certain ideas are well established (the conservation laws, the laws of thermodynamics, and a range of evolutionary mechanisms), it is possible to imagine exceptions (and revisions). Moreover, since scientific inquiry is (outside of some physics departments) about a single common Universe, conclusions from different disciplines cannot contradict one another – such contradictions must inevitably be resolved through modification of one discipline or the other. A classic example is Lord Kelvin’s estimate of the age of the Earth (~20 to 50 million years) versus estimates of the time required for geological and evolutionary processes to produce the observed structure of the Earth and the diversity of life (hundreds of millions to billions of years), a contradiction resolved in favor of an ancient Earth by the discovery of radioactivity.
Scientific illiteracy in the scientific community
There are also suggestions of scientific illiteracy (or, perhaps better put, sloppy and/or self-serving thinking) in much of the current “click-bait” approach to the public dissemination of scientific ideas and observations. All too often, scientific practitioners, whom we might expect to be as scientifically literate as possible, abandon the discipline of science to make claims that are over-arching and often self-serving (this is, after all, why peer review is necessary).
A common example [of scientific illiteracy practiced by scientists and science communicators] is provided by studies of human disease in “model” organisms, ranging from yeasts to non-human primates. While there is no doubt that such studies have been, and continue to be, critical to understanding how organisms work (and are certainly deserving of public and private support), their limitations need to be made explicit. While a mouse that displays behavioral defects (for a mouse) might well provide useful insights into the mechanisms involved in human autism, an autistic mouse may well be a scientific oxymoron.
Discouraging scientific illiteracy within the scientific community is challenging, particularly in the highly competitive, litigious, and high-stakes environment we find ourselves in. How best to help our students, both within and without scientific disciplines, avoid scientific illiteracy remains unclear, but it is likely to involve establishing a culture of Socratic discourse (as opposed to posturing). Understanding what a person is saying, what empirical data and assumptions it is based on, and what it implies and/or predicts are necessary features of literate discourse.
Minor edits 23 October 2020; added SH Dark Matter video 15 June 2021. Twitter @mikeklymkowsky
Roberts, D.A., Scientific literacy/science literacy. In S.K. Abell & N.G. Lederman (Eds.), Handbook of Research on Science Education (pp. 729-780). Mahwah, NJ: Lawrence Erlbaum, 2007.
Klymkowsky, M.W., J.D. Rentsch, E. Begovic, and M.M. Cooper, The design and transformation of Biofundamentals: a non-survey introductory evolutionary and molecular biology course. LSE Cell Biol Edu, 2016, in press.
Lee, H.-S., O.L. Liu, and M.C. Linn, Validating measurement of knowledge integration in science using multiple-choice and explanation items. Applied Measurement in Education, 2011. 24(2): p. 115-136.
Henson, K., M.M. Cooper, and M.W. Klymkowsky, Turning randomness into meaning at the molecular level using Muller’s morphs. Biol Open, 2012. 1: p. 405-10.
Assuming, of course, that what a person says reflects what they actually think, something that is not always the case.
This is one reason why multiple-choice concept tests consistently over-estimate students’ understanding (Lee, Liu, & Linn, 2011).
We have used this kind of analysis to consider the effects of various learning activities.