Balancing research prestige, human decency, and educational outcomes.


Or why do academic institutions shield predators?  Many working scientists, particularly those early in their careers or oblivious to practical realities, maintain an idealistic view of the scientific enterprise. They see science as driven by curious, passionate, and skeptical scholars working to build an increasingly accurate and all-encompassing understanding of the material world and its associated phenomena, ranging from the origins of the universe and the Earth to the development of the brain and the emergence of consciousness and self-consciousness (1).  At the same time, the discipline of science can be difficult to maintain (see PLoS post: The pernicious effects of disrespecting the constraints of science). Scientific research relies on understanding what people have already discovered and established to be true; all too often, exploring the literature associated with a topic reveals that one’s brilliant and totally novel “first of its kind” or “first to show” observation or idea is only a confirmation or a modest extension of someone else’s previous discovery. That is the nature of the scientific enterprise, and a major reason why significant new discoveries are rare and why graduate students’ Ph.D. theses can take years to complete.

Acting to oppose a rigorous scholarly approach are the real-life pressures faced by working scientists: a competitive landscape in which only novel observations get rewarded by research grants and various forms of fame or notoriety in one’s field, including a tenure-track or tenured academic position. Such pressures encourage one to distort the significance or novelty of one’s accomplishments; such exaggerations are tacitly encouraged by the editors of high-profile journals (e.g., Nature, Science) who seek to publish “high impact” claims, such as the claim for “Arsenic-life” (see link).  As a recent and prosaic example, consider a paper that claims in its title that “Dietary Restriction and AMPK Increase Lifespan via Mitochondrial Network and Peroxisome Remodeling” (link), without mentioning (in the title) the rather significant fact that the effect was observed in the nematode C. elegans, whose lifespan is typically between 300 and 500 hours and which displays a trait not found in humans (and other vertebrates), namely the ability to assume a highly specialized “dauer” state that can survive hostile environmental conditions for months. Is the work wrong or insignificant? Certainly not, but it is presented to the unwary (through the Harvard Gazette) under the title “In pursuit of healthy aging: Harvard study shows how intermittent fasting and manipulating mitochondrial networks may increase lifespan,” with the clear implication that people, including Harvard alumni, might want to reconsider the adequacy of their retirement investments.


Such pleas for attention are generally quickly placed in context and their significance evaluated, at least within the scientific community – although many go on to stimulate the economic health of the nutritional supplement industry.  Lower-level claims often go unchallenged, just part of the incessant buzz associated with pleas for attention in our excessively distracted society (see link).  Given the reward structure of the modern scientific enterprise, the proliferation of such claims is not surprising.  Even “staid” academics seek attention well beyond the immediate significance of their (taxpayer-funded) observations. Unfortunately, the explosively expanding size of the scientific enterprise makes policing such transgressions (generally through peer review or replication) difficult or impossible, at least in the short term.

The hype and exaggeration associated with some scientific claims for attention are not the most distressing aspect of the quest for “reputation.”  Rather, there is a growing number of revelations of academic institutions protecting those guilty of abusing their dependent colleagues. These reflect how scientific research teams are organized. Most scientific studies involve groups of people working with one another, generating data, testing ideas, eventually publishing their observations and conclusions, and speculating on their broader implications.

Research groups can vary greatly in size.  In some areas, they involve isolated individuals, whether thinkers (theorists) or naturalists, in the mode of Darwin and Wallace.  In other cases, they are larger, including senior researchers, post-doctoral fellows, graduate students, technicians, undergraduates, and even high school students. Such research groups can range from the small (2 to 3 people) to the significantly larger (~20-50 people); the largest of such groups are associated with mega-projects, such as the human genome project and the Large Hadron Collider-based search for the Higgs boson (see: Physics paper sets record with more than 5,000 authors).  A look at this site [link] describing the human genome project reflects two aspects of such mega-science: 1) while many thousands of people were involved [see Initial sequencing and analysis of the human genome], generally only the “big names” are singled out for valorization (e.g., receiving a Nobel Prize); 2) there would be little or no progress without the general scientific community that evaluates and extends ideas and observations. In this context, “lead investigators” are charged primarily with securing the funds needed to mobilize such groups, convincing funders that the work is significant; it is the members of the group who work out the technical details and enable the project to succeed.

As with many such social groups, there are systems in play that serve to establish the status of the individuals involved – something necessary (apparently) in a system in which individuals compete for jobs, positions, and resources.  Generally, one’s status is established through recommendations from others in the field, often the senior member(s) of one’s research group or the (generally small) group of senior scientists who work in the same or a closely related area. The importance of professional status is particularly critical in academia, where the number of senior positions (e.g., tenured or tenure-track professorships) is limited. The result is a system that is increasingly susceptible to the formation of clubs, membership in which is often determined by who knows whom, rather than who has done what (see Steve McKnight’s “The curse of committees and clubs”). Over time, scientific social status translates into who is considered productive, important, trustworthy, or (using an oft-misused term) brilliant. Achieving status can mean putting up with abusive and unwanted behaviors (particularly sexual). Examples of such behavior have been emerging with increasing frequency, and have been extensively described elsewhere (see Confronting Sexual Harassment in Science; More universities must confront sexual harassment; What’s to be done about the numerous reports of faculty misconduct dating back years and even decades?; Academia needs to confront sexism; and “The Trouble With Girls”: The Enduring Sexism in Science).

So why is abusive behavior tolerated?  One might argue that this reflects humans’ current and historical obsession with “stars,” whether pharaohs, kings, or dictators, as isolated geniuses who make things work. Perhaps the most visible example of such an abused scientist (although there are in fact many others: see History’s Most Overlooked Scientists) is Rosalind Franklin, whose data were essential to solving the structure of double-stranded DNA, yet whose contributions were consistently and systematically minimized, a clear example of sexist marginalization. In this light, many is the technician who got an experiment to “work,” leading to their research supervisor’s being awarded the prizes associated with the breakthrough (2).

Amplifying the star effect is the role of research status at the institutional level; an institution’s academic ranking is often based upon the presence of faculty “stars.” Perhaps surprisingly to those outside of academia, an institution’s research status, as reflected in the number of stars on staff, often trumps its educational effectiveness, particularly with undergraduates, that is, the people who pay the bulk of the institution’s running costs. In this light, it is not surprising that research stars who display various abusive behaviors (often toward women) are shielded by institutions from public censure.

So what is to be done? My own modest proposal (to be described in more detail in a later post) is to increase the emphasis on institutions’ (and departments within institutions’) effectiveness at fostering undergraduate educational success. This would provide a counter-balancing force that could (might?) place research status in a more realistic context.

a few footnotes:

  1.  on the assumption that there is nothing but a material world.
  2. Although I am no star, I would acknowledge Joe Dent, who worked out the whole-mount immunocytochemical methods that we have used extensively in our work over the years.
  3. Thanks to Becky for editorial comments as well as a dramatic reading!

Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically-based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of biological evolutionary mechanisms, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led to the rejection of Darwinian principles by many working scientists around the turn of the 20th century (see Bowler’s The Eclipse of Darwinism + link).  Educational research studies, such as our own “Understanding randomness and its impact on student learning,” reinforce the fact that ideas involving the stochastic processes relevant to evolutionary, as well as cellular and molecular, biology are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular- and cellular-level processes, including the origin of various diseases, particularly cancer [Cancer is partly caused by bad luck] (1).


All of which leaves the critical question, at least for educators, of how best to teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals” on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather that they develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.


There are a number of educational strategies, depending in part upon one’s disciplinary perspective, for teaching evolutionary processes. Here I consider one, based on my background in cell and molecular biology.  Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.”  It is one of a number of web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC/Gnomad Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend the range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals / coreBio course (with an evolving text available here), we originally used the observation that members of our subfamily of primates, the Haplorhini or dry-nosed primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet. Without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral Haplorhini population lost the L-gulonolactone oxidase (GULO) gene (see OMIM) necessary for vitamin C synthesis. The remnant of the GULO gene found in human and other Haplorhini genomes is mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not strongly selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost through mutation, it is unlikely, although not impossible, that a subsequent mutation will repair the gene. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are ways to repair it. As the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nosed) group of primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
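The way drift can fix such a neutral loss-of-function allele in a small population can be illustrated with a minimal Wright-Fisher simulation. This is a sketch for illustration only; the population size and starting frequency are invented numbers, not estimates for the actual Haplorhini ancestor:

```python
import random

def wright_fisher(pop_size, init_freq, max_gens=10_000, rng=None):
    """Neutral Wright-Fisher drift: return the final allele frequency
    (0.0 = allele lost, 1.0 = allele fixed) in a population of
    `pop_size` diploid individuals (2N gene copies)."""
    rng = rng or random.Random()
    freq = init_freq
    for _ in range(max_gens):
        if freq in (0.0, 1.0):
            break  # absorbed: the allele is gone or universal
        # each of the 2N copies in the next generation is sampled
        # independently from the current generation's allele pool
        copies = sum(rng.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

def fixation_rate(pop_size, init_freq, trials=200, seed=1):
    """Fraction of replicate populations in which the allele fixes."""
    rng = random.Random(seed)
    return sum(wright_fisher(pop_size, init_freq, rng=rng) == 1.0
               for _ in range(trials)) / trials

# For a neutral allele, the probability of fixation equals its starting
# frequency, but any single small population can go either way.
print(fixation_rate(pop_size=50, init_freq=0.1))  # close to 0.1
```

The point of the exercise is the unpredictability: across many replicate populations the fixation rate converges on the starting frequency, but in any one lineage, such as the one leading to the Haplorhini, the outcome is a matter of chance.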

At this point we can start to use Genomicus to examine the results of evolutionary processes (see a YouTube video on using Genomicus) (3).  In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they are the same size (they are not); different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes) are found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).
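The representation just described can be mimicked in a few lines of code; here is a toy text rendering in which each gene becomes an arrow pointing in its direction of transcription (the gene names, other than GULO, are invented for illustration):

```python
def render_track(genes):
    """Render a chromosome segment as text: each gene becomes an
    arrow whose direction indicates the direction of RNA synthesis."""
    return "  ".join(f"{name}>" if strand == "+" else f"<{name}"
                     for name, strand in genes)

# A hypothetical segment: two genes transcribed rightward, one leftward.
segment = [("geneA", "+"), ("GULO", "-"), ("geneB", "+")]
print(render_track(segment))  # → geneA>  <GULO  geneB>
```

As in Genomicus itself, gene size is deliberately ignored; only order and orientation are shown, which is exactly the information that matters for the synteny comparisons below.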

We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, that appear to have diverged from other eukaryotes ~1,500,000,000 years ago (1,500 million years ago, abbreviated Mya).  Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.

[figure: GULO phylogeny, mouse]
Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – Whoa!!! What gives?  The explanation is, it turns out, rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they?  We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily related) chromosomes of other Haplorhini primates.

[figure: synteny around the GULO region]
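The logic of this synteny test can be sketched in code. In this toy example (every gene name except GULO is invented; real neighborhoods come from a browser like Genomicus), we look for the longest run of consecutive genes shared between two species’ gene orders, and find that the block survives in the human list even though GULO itself is missing:

```python
def longest_shared_block(order_a, order_b):
    """Longest run of consecutive genes in `order_a` that also occurs
    consecutively in `order_b` (same or inverted orientation) --
    a toy proxy for detecting a conserved syntenic block."""
    def contains(run, order):
        n = len(run)
        return any(order[k:k + n] == run for k in range(len(order) - n + 1))

    best = []
    for i in range(len(order_a)):
        for j in range(i + 1, len(order_a) + 1):
            run = order_a[i:j]
            if len(run) > len(best) and (
                    contains(run, order_b) or contains(run, order_b[::-1])):
                best = run
    return best

# Hypothetical gene orders around GULO (names invented for illustration).
mouse = ["geneA", "geneB", "GULO", "geneC", "geneD", "geneE"]
human = ["geneA", "geneB", "geneC", "geneD", "geneE"]  # GULO lost

print(longest_shared_block(mouse, human))  # → ['geneC', 'geneD', 'geneE']
print("GULO" in human)  # → False: the neighborhood persists without GULO
```

This is the same inference the browser supports visually: conserved flanking genes mark the region as homologous, so the absence of GULO within it reads as a loss in the Haplorhini lineage rather than an unrelated stretch of chromosome.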

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships within the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share a similar syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region.  Similarly, we can explain the genomic similarities between humans and their close primate relatives, such as the gorilla, gibbon, orangutan, and chimpanzee, as well as make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils [link].

It remains to be seen how best to use these tools in a classroom context and whether having students use such tools influences their working understanding and, more generally, their acceptance of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and allelic variations in behavior, anatomy, and disease susceptibility.

Some footnotes (figures reinserted 2 November 2020, with minor edits)

(1) Interested in a magnetic bumper image? visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missed (unpredicted and unexplained) by any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (now at UC Berkeley) for bringing it to my attention.

The trivialization of science education

It’s time for universities to accept their role in scientific illiteracy.  

There is a growing problem with scientific illiteracy and its close relative, scientific over-confidence. While understanding science, by which most people seem to mean technological skills, or even the ability to program a device (1), is purported to be a critical competitive factor in our society, we see a parallel explosion of pseudo-scientific beliefs, often religiously held.  Advocates of a gluten-free paleo-diet battle it out with orthodox vegans for a position on the Mount Rushmore of self-righteousness, while astronomers and astrophysicists rebrand themselves as astrobiologists (a currently imaginary discipline) and a subset of theoretical physicists, and the occasional evolutionary biologist, claim to have rendered ethicists and philosophers obsolete (oh, if it were only so). There are many reasons for this situation, most of which are probably innate to the human condition.  Our roots are in the vitamin C-requiring Haplorhini (dry-nosed) primate family; we did not evolve to think scientifically, and scientific thinking does not come easily to most of us, or to any of us over long periods of time (2). The fact that the sciences are referred to as disciplines reflects this: it requires constant vigilance, self-reflection, and the critical skepticism of knowledgeable colleagues to build coherent, predictive, and empirically validated models of the Universe (and ourselves).  In point of fact, it is amazing that our models of the Universe have become so accurate, particularly as they are counter-intuitive and often seem incredible, in the true meaning of the word.

Many social institutions claim to be in the business of developing and supporting scientific literacy and disciplinary expertise, most obviously colleges and universities.  Unfortunately, there are several reasons to question the general efficacy of their efforts and several factors that have led to this failure. There is the general tendency (although exactly how widespread is unclear; I cannot find appropriate statistics on this question) to require non-science students to take one, two, or more “natural science” courses, often with associated laboratory sections, as a way to “enhance literacy and knowledge of one or more scientific disciplines, and enhance those reasoning and observing skills that are necessary to evaluate issues with scientific content” (source).

That such a requirement will “enable students to understand the current state of knowledge in at least one scientific discipline, with specific reference to important past discoveries and the directions of current development; to gain experience in scientific observation and measurement, in organizing and quantifying results, in drawing conclusions from data, and in understanding the uncertainties and limitations of the results; and to acquire sufficient general scientific vocabulary and methodology to find additional information about scientific issues, to evaluate it critically, and to make informed decisions” (source) suggests a rather serious level of faculty/institutional disdain or apathy for observable learning outcomes, devotional levels of wishful thinking, or simple hubris.  To my knowledge there is no objective evidence to support the premise that such requirements achieve these outcomes – which renders the benefits of such requirements problematic, to say the least (link).

On the other hand, such requirements have clear and measurable costs, going beyond the simple burden of added and potentially ineffective or off-putting course credit hours. The frequent requirement for multi-hour laboratory courses impacts the ability of students to schedule courses.  It would be an interesting study to examine how, independently of benefit, such laboratory course requirements impact students’ retention and time to degree, that is, bluntly put, costs to students and their families.

Now, if there were objective evidence that taking such courses improved students’ understanding of a specific disciplinary science and its application, perhaps the benefit would warrant the cost.  But one can be forgiven for assuming a less charitable driver, that is, science departments’ self-interest in using laboratory and other non-major course requirements as a means to support graduate students.  Clearly there is a need for objective metrics of scientific, that is disciplinary, literacy and learning outcomes.

And this brings up another cause for concern.  Recently, there has been a movement within the science education research community to attempt to quantify learning in terms of what are known as “forced-choice testing instruments,” that is, tests that rely on true/false and multiple-choice questions, an actively anti-Socratic strategy.  In some cases, these tests claim to be research-based.  As one involved in the development of such a testing instrument (the Biology Concepts Instrument, or BCI), it is clear to me that such tests can serve a useful role in helping to identify areas in which student understanding is weak or confused [example], but whether they can provide an accurate or, at the end of the day, meaningful measure of whether students have developed an accurate working understanding of complex concepts and the broader meaning of observations is problematic at best.

Establishing such a level of understanding relies on Socratic, that is, dynamic and adaptive, evaluations: can the learner clearly explain, either to other experts or to other students, the source and implications of their assumptions?  This is the gold standard for monitoring disciplinary understanding. It is being increasingly sidelined by those who rely on forced-choice tests to evaluate learning outcomes and to support their favorite pedagogical strategies (examples available upon request).  In point of fact, it is often difficult to discern, in most science education research studies, what students have come to master: what exactly they know, what they can explain, and what they can do with their knowledge. Rather unfortunately, this is not a problem restricted to non-majors taking science course requirements; majors can also graduate with a fragmented and partially, or totally, incoherent understanding of key ideas and their empirical foundations.

So what are the common features of a functional understanding of a particular scientific discipline, or more accurately, a sub-discipline?  A few ideas seem relevant.  A proficient practitioner needs to be realistic about their own understanding.  We need to teach disciplinary (and general) humility – no one actually understands all aspects of most scientific processes.  This is a point made by Fernbach & Sloman in their recent essay, “Why We Believe Obvious Untruths.”  Humility about our understanding has a number of beneficial aspects.  It helps keep us skeptical when faced with, and asked to accept, sweeping generalizations.

Such skepticism is part of a broader perspective, common among working scientists, namely the ability to distinguish the obvious from the unlikely, the implausible, and the impossible. When considering a scientific claim, the first criterion is whether there is a plausible mechanism that can be called upon to explain it, or whether it violates some well-established “law of nature.” Claims of “zero waste” processes, for example, butt up against the laws of thermodynamics.

Going further, we need to consider how an observation or conclusion fits with other well-established principles, which means that we have to be aware of these principles, as well as acknowledge that we are not universal experts in all aspects of science.  A molecular biologist may recognize that quantum mechanics dictates the geometries of atomic bonding interactions without being able to formally describe the intricacies of a molecule’s wave equation. Similarly, a physicist might think twice before ignoring the evolutionary history of a species, or claiming that quantum mechanics explains consciousness, or that consciousness is a universal property of matter.  Such a level of disciplinary expertise can take extended experience to establish, but it is critical to conveying what disciplinary mastery involves to students; it is the major justification for having disciplinary practitioners (professors) as instructors.

From a more prosaic educational perspective, other key factors need to be acknowledged, namely a realistic appreciation of what people can learn in the time available to them, together with an understanding of at least some of their underlying motivations. Which is to say, the relevance of a particular course to disciplinary goals or desired educational outcomes needs to be made explicit and as engaging as possible, or at least not overtly off-putting, something that can happen when a poor unsuspecting molecular biology major takes a course in macroscopic physics taught by an instructor who believes organisms are deducible from first principles based on the conditions of the big bang.  Respecting the learner requires that we explicitly acknowledge that an unbridled thirst for an empirical, self-critical mastery of a discipline is not a basic human trait, although it is something that can be cultivated and may emerge given proper care.  Understanding the real constraints that act on meaningful learning can help focus courses on what is foundational, and help eliminate the irrelevant or the excessively esoteric.

Unintended consequences arise from “pretending” to teach students, both majors and non-science majors, science. One is an erosion of humility in the face of the complexity of science and our own limited understanding, a point made in a recent National Academy report that linked superficial knowledge with more non-scientific attitudes. The end result is an enhancement of what is known as the Dunning-Kruger effect, the tendency of people to seriously over-estimate their own expertise: “the effect describes the way people who are the least competent at a task often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill”.

A person with a severe case of Dunning-Kruger-itis is likely to lose respect for people who actually know what they are talking about. The importance of true expertise is further eroded and trivialized by the current trend of having photogenic and well-spoken experts in one domain pretend to talk, or rather to pontificate, authoritatively on another (3).  In a world of complex and arcane scientific disciplines, the role of a science guy or gal can promote rather than dispel scientific illiteracy.

We see the effects of the lack of scientific humility when people speak outside their domain of established expertise to make claims of certainty, a common feature of the conspiracy theorist.  An oft-used example is the claim that vaccines cause autism (they don’t), when the actual causes of autism, whether genetic and/or environmental, are currently unknown and the subject of active scientific study.  An honest expert can, in all humility, identify the limits of current knowledge as well as what is known for certain.  Unfortunately, revealing and ameliorating the extent of someone’s Dunning-Kruger-itis involves a civil and constructive Socratic interrogation, something of an endangered species in this day and age, when unseemly certainty and unwarranted posturing have replaced circumspect and critical discourse.  Any useful evaluation of what someone knows demands the time and effort inherent in a Socratic discourse: the willingness to explain how one knows what one thinks one knows, together with a reflective consideration of its implications, and of what other trained observers, people demonstrably proficient in the discipline, have concluded. It cannot be replaced by a multiple-choice test.

Perhaps a new (old) model of encouraging in students, as well as politicians and pundits, an understanding of where science comes from, the habits of mind involved, and the limits of, and constraints on, our current understanding is needed.  At the college level, courses that replace superficial familiarity and unwarranted certainty with humble self-reflection and intellectual modesty might help treat the symptoms of Dunning-Kruger-itis, even though the underlying disease may be incurable, and perhaps genetically linked to other aspects of human neuronal processing.


some footnotes:

  1. After all, why else are rather distinct disciplines lumped together as STEM (science, technology, engineering, and mathematics)?
  2. Given the long history of Homo sapiens before the appearance of science, it seems likely that such patterns of thinking are an unintended consequence of selection for some other trait, and of the subsequent emergence of a (perhaps excessively) complex and self-reflective nervous system.
  3. Another example of Neil Postman’s premise that education is being replaced by edutainment (see “Amusing Ourselves to Death”).

Go ahead and “teach the controversy”: it is the best way to defend science.

as long as teachers understand the science and its historical context

The role of science in modern societies is complex. Science-based observations and innovations drive a range of economically important, as well as socially disruptive, technologies. Many opinion polls indicate that the American public “supports” science, while at the same time rejecting rigorously established scientific conclusions on topics ranging from the safety of genetically modified organisms and vaccines (which do not cause autism) to the effects of burning fossil fuels on the global environment [Pew: Views on science and society]. Given that a foundational principle of science is that the natural world can be explained without calling on supernatural actors, it remains surprising that a substantial majority of people appear to believe that supernatural entities are involved in human evolution [as reported by the Gallup organization], although the theistic percentage has been dropping (a little) of late.

[Figure: Gallup poll results on beliefs about God and human evolution]

This situation highlights the fact that when science intrudes on the personal or the philosophical (within which I include the theological and the ideological), many people are willing to abandon the discipline of science and embrace explanations based on personal beliefs: that there exists a supernatural entity that cares for people, at least enough to create them, and that there are reasons behind life’s tragedies.


Where science appears to conflict with various non-scientific positions, the public has pushed back and rejected the scientific. This is perhaps best represented by the recent spate of “teach the controversy” legislative efforts, primarily centered on evolutionary theory and the realities of anthropogenic climate change [see Nature: Revamped ‘anti-science’ education bills]. We might expect to see, on more politically correct campuses, similar calls for anti-GMO, anti-vaccination, or gender-based curricula. In the face of the disconnect between scientific and non-scientific (philosophical, ideological, theological) personal views, I would suggest that an important part of the problem has didaskalogenic roots; that is, it arises from the way science is taught – all too often expecting students to memorize terms and master various heuristics (tricks) to answer questions rather than developing a self-critical understanding of ideas, their origins, supporting evidence, limitations, and practice in applying them.

Science is a social activity, based on a set of accepted core assumptions; it is not so much concerned with Truth, which could, in fact, be beyond our comprehension, but rather with developing a universal working knowledge, composed of ideas based on empirical observations that expand in their explanatory power over time to allow us to predict and manipulate various phenomena.  Science is a product of society rather than isolated individuals, but only rarely is the interaction between the scientific enterprise and its social context articulated clearly enough so that students and the general public can develop an understanding of how the two interact.  As an example, how many people appreciate the larger implications of the transition from an Earth- to a Sun- or galaxy-centered cosmology?  All too often students are taught about this transition without regard to its empirical drivers and philosophical and sociological implications, as if the opponents at the time were benighted religious dummies. Yet, how many students or their teachers appreciate that, as originally presented, the Sun-centered Copernican system had more hypothetical epicycles and related Rube Goldberg-esque kludges, introduced to make the model accurate, than the competing Earth-centered Ptolemaic system? Do students understand how Kepler’s recognition of elliptical orbits eliminated the need for such artifices and set the stage for Newtonian physics?  And how did the expulsion of humanity from the center to the periphery of things influence people’s views on humanity’s role and importance?


So how can education adapt to better help students and the general public develop a more realistic understanding of how science works?  To my mind, teaching the controversy is a particularly attractive strategy, on the assumption that teachers have a strong grounding in the discipline they are teaching, something that many science degree programs do not achieve, as discussed below. For example, a common attack against evolutionary mechanisms relies on a failure to grasp the power of variation, arising from stochastic processes (mutation), coupled to the power of natural, social, and sexual selection. There is clear evidence that people find stochastic processes difficult to understand and accept [see Garvin-Doxas & Klymkowsky & Fooled by Randomness].  An instructor who is not aware of the educational challenges associated with grasping stochastic processes, including those central to evolutionary change, risks stumbling over the same hurdles that led pre-molecular biologists to reject natural selection and turn to more “directed” processes, such as orthogenesis [see Bowler: The eclipse of Darwinism & Wikipedia]. Presumably students are even more vulnerable to intelligent-design creationist arguments centered around probabilities.
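The interplay of random variation and selection described above can be made concrete with a few lines of code. The following is a minimal, hypothetical sketch (not part of the original post) of a Wright-Fisher-style simulation: each generation is a random draw from the previous one, so even a favored allele can be lost by chance, and a neutral one can drift to fixation.

```python
import random

def wright_fisher(pop_size=100, p0=0.5, fitness=1.0, generations=200, seed=1):
    """Track the frequency of allele A under drift plus selection.

    pop_size: number of (haploid, for simplicity) individuals
    p0: starting frequency of allele A
    fitness: relative fitness of A (1.0 = neutral, >1.0 = favored)
    Returns the list of allele frequencies, one per generation.
    """
    random.seed(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Selection biases the sampling probability toward the fitter allele...
        w = (p * fitness) / (p * fitness + (1 - p))
        # ...but each generation is still a finite random draw (drift).
        count = sum(1 for _ in range(pop_size) if random.random() < w)
        p = count / pop_size
        trajectory.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed; nothing more can change
            break
    return trajectory

# A neutral allele wanders at random; a modestly favored one merely
# shifts the odds of fixation -- it guarantees nothing.
neutral = wright_fisher(fitness=1.0)
favored = wright_fisher(fitness=1.1)
```

Running this repeatedly with different seeds makes the counterintuitive point visible: outcomes differ from run to run, yet the statistical bias of selection emerges over many trials.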

The fact that single cell measurements enable us to visualize biologically meaningful stochastic processes makes designing course materials to explicitly introduce such processes easier [Biology education in the light of single cell/molecule studies].  An interesting example is the recent work on visualizing the evolution of antibiotic resistance macroscopically [see The evolution of bacteria on a “mega-plate” petri dish].

To be in a position to “teach the controversy” effectively, it is critical that students understand how science works, specifically its progressive nature, exemplified through the process of generating and testing, and where necessary, rejecting or revising, clearly formulated and predictive hypotheses – a process antithetical to a Creationist (religious) perspective [a good overview is provided here: Using creationism to teach critical thinking].  At the same time, teachers need a working understanding of the disciplinary foundations of their subject, its core observations, and their implications. Unfortunately, many are called upon to teach subjects with which they may have only a passing familiarity.  Moreover, even majors in a subject may emerge with a weak understanding of foundational concepts and their origins – they may be uncomfortable teaching what they have learned.  While there is an implicit assumption that a college curriculum is well designed and effective, there is often little in the way of objective evidence that this is the case. While many of our dedicated teachers (particularly those I have met as part of the CU Teach program) work diligently to address these issues on their own, it is clear that many have not been exposed to a critical examination of the empirical observations and experimental results upon which their discipline is based [see Biology teachers often dismiss evolution & Teachers’ Knowledge Structure, Acceptance & Teaching of Evolution].  Many is the molecular biology department that does not require formal coursework in basic evolutionary mechanisms, much less a thorough consideration of natural, social, and sexual selection, and non-adaptive mechanisms, such as those associated with founder effects, population bottlenecks, and genetic drift, stochastic processes that play a key role in the evolution of many species, including humankind. 
Similarly, more ecologically- and physiologically-oriented majors are often “afraid” of the molecular foundations of evolutionary processes. As part of an introductory chemistry curriculum redesign project (CLUE), Melanie Cooper and her group at Michigan State University have found that students in conventional courses often fail to grasp key concepts, and that subsequent courses can sometimes fail to remediate the didaskalogenic damage done in earlier courses [see: an Achilles Heel in Chemistry Education].

The importance of a historical perspective: The power of scientific explanations is obvious, but they can become abstract when their historical roots are forgotten, or never articulated. A clear example: the value of vaccination is obvious in the presence of deadly and disfiguring diseases; in their absence, due primarily to the effectiveness of widespread vaccination, the value of vaccination can be called into question, resulting in the avoidable re-emergence of these diseases.  In this context, it is important that students understand the dynamics and molecular complexity of biological systems, so that they can explain why all drugs and treatments have potential side-effects, and how each individual’s genetic background influences these side-effects (although in the case of vaccination, such side effects do not include autism).

Often “controversy” arises when scientific explanations have broader social, political, or philosophical implications. Religious objections to evolutionary theory arise primarily, I believe, from the implication that we (humans) are not the result of a plan, created or evolved, but rather that we are accidents of mindless, meaningless, and often gratuitously cruel processes. The recognition that our species emerged rather recently (that is, a few million years ago) on a minor planet on the edge of an average galaxy, in a universe that popped into existence for no particular reason or purpose ~14 billion years ago, can have disconcerting implications [link]. Moreover, recognizing that a “small” change in the trajectory of an asteroid could have changed whether humanity ever evolved [see: Dinosaur asteroid hit ‘worst possible place’] can be sobering and may well undermine one’s belief in the significance of human existence. How does it impact our social fabric if we are an accident, rather than the intention of a supernatural being or the inevitable product of natural processes?

Yet, as a person who firmly believes in the French motto of liberté, égalité, fraternité and laïcité, I feel fairly certain that no science-based scenario on the origin and evolution of the universe or life, or the implications of sexual dimorphism or “racial” differences, etc., can challenge the importance of our duty to treat others with respect, to defend their freedoms, and to ensure their equality before the law. Which is not to say that conflicts do not inevitably arise between different belief systems – in my own view, patriarchal oppression needs to be called out and actively opposed wherever it occurs, whether in Saudi Arabia or on college campuses (e.g. UC Berkeley or Harvard).

This is not to say that presenting the conflicts between scientific explanations of phenomena and non-scientific, but more important, beliefs, such as equality under the law, is easy. When considering a number of natural cruelties, Charles Darwin wrote that evolutionary theory would see them “as small consequences of one general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die” – note the absence of any reference to morality, or even sympathy for, the “weakest.”  In fact, Darwin would have argued that the apparent, and overt, cruelty that is rampant in the “natural” world is evidence that God was forced by the laws of nature to create the world the way it is [Darwin to Gray], presumably a world that is absurdly old and excessively vast. Such arguments echo the view that God had no choice other than whether to create or not; that for all its flaws, evils, and unnecessary suffering this is, as posited by Gottfried Leibniz (1646-1716) and satirized by Voltaire (1694–1778) in his novel Candide, the best of all possible worlds. Yet, as members of a reasonably liberal, and periodically enlightened, society, we see it as our responsibility to ameliorate such evils, to care for the weak, the sick, and the damaged, and to improve human existence; to address prejudice and political manipulations [thank you Supreme Court for ruling against race-based redistricting].  Whether anchored by philosophical or religious roots, many of us are driven to reject a scientific (biological) quietism (“a theology and practice of inner prayer that emphasizes a state of extreme passivity”) by actively manipulating our social, political, and physical environment and striving to improve the human condition, in part through science and the technologies it makes possible.

At the same time, introducing social-scientific interactions can be fraught with potential controversies, particularly in our excessively politicized and self-righteous society. In my own introductory biology class (biofundamentals), we consider potentially contentious issues that include sexual dimorphism and selection, and social evolutionary processes and their implications.  As an example, social systems (and we are social animals) are susceptible to social cheating, and groups develop defenses against cheaters; how such biological ideas interact with historical, political, and ideological perspectives is complex, and certainly beyond the scope of an introductory biology course, but worth acknowledging [PLoS blog link].

[Figure: Yeats quote]

In a similar manner, we understand the brain as an evolved cellular system influenced by various experiences, including those that occur during development and subsequent maturation.  Family life interacts with genetic factors in complex, and often unpredictable, ways to shape behaviors.  But it seems unlikely that a free and enlightened society can function if it takes seriously the premise that we lack free will and so cannot be held responsible for our actions, an idea of some current popularity [see Free will could all be an illusion].  Given the complexity of biological systems, I for one am willing to embrace the idea of constrained free will, no matter what scientific speculations are currently in vogue [but see a recent video by Sabine Hossenfelder, You don’t have free will, but don’t worry, which has me rethinking].

Recognizing the complexities of biological systems, including the brain, with their various adaptive responses and feedback systems can be challenging. In this light, I am reminded of the contrast between the Doomsday scenario of Paul Ehrlich’s The Population Bomb, and the data-based view of the late Hans Rosling in Don’t Panic – The Facts About Population.

All of which is to say that we need to see science not as authoritarian, telling us who we are or what we should do, but as a tool for doing what we think is best, and for understanding why it might be difficult to achieve. We need to recognize how scientific observations inform, and may constrain, but do not dictate our decisions. We need to embrace the tentative, but strict, nature of the scientific enterprise which, while it cannot arrive at “Truth,” can certainly identify nonsense.

Minor edits and updates and the addition of figures that had gone missing –  20 October 2020

After the March for Science, What Now?

Recently, I contributed to a project that turned healthy human tissue into an early stage of pancreatic cancer, a disease that carries a dismal 5-year survival rate of 5 percent.


When I described our project to a friend, she asked, “why in the world would you want to grow cancer in a lab?” I explained that by the time a patient learns that he has pancreatic cancer, the tumor has spread throughout the body. At that point, the patient typically has less than a year to live, and his tumor cells have racked up a number of mutations, making clinical trials and molecular studies of pancreatic cancer evolution downright difficult. For this reason, we made our laboratory model of pancreatic cancer available to scientists who want to use it to find the biological buttons that turn healthy cells into deadly cancer. By sharing our discovery, we hoped to enable others to develop drugs to treat cancer and screening tests to diagnose patients early. The complexity of this process demonstrates that science is a team effort that involves lots of time, money, and the brainpower of highly trained individuals working together toward a single goal.


Many of the challenges we face today—from lifestyle diseases, to the growing strains of antibiotic-resistant superbugs in hospitals, to the looming energy crisis—require scientific facts and solutions. And although there’s never a guarantee of success, scientists persist in hopes that our collective discoveries will reverberate into the future. However, as a corollary, hindering scientific progress means a loss of possibilities.


Unfortunately, a deceleration of scientific progress seems a likely possibility. In March, the White House released a document called “America First: A Budget Blueprint to Make America Great Again,” which describes deep cuts to some of the country’s most important funding agencies for science.


As it stands, the National Institutes of Health is set to lose nearly a fifth of its budget; the Department of Energy’s Office of Science, $900 million; and the Environmental Protection Agency, a 31.5 percent budget cut worth $2.6 billion. Imagine the discoveries that could have saved our lives or created jobs, which will instead languish solely as unsupported hypotheses in the minds of underfunded scientists.


Scientists cannot remain idle on the sidelines; we must be active in making the importance of scientific research known. Last weekend’s March for Science drew tens of thousands of people across more than 600 rallies around the world, but the challenge now lies in harnessing the present momentum and energy into sustained efforts to maintain government funding for a wide range of scientific projects.


The next step is to get involved in shaping public opinion and policy. As it stands, Americans on both sides of the political spectrum have expressed ambivalence about the validity of science on matters ranging from climate change to childhood vaccinations. Academics can start tempering the public’s unease toward scientific authority and increase public support for the sciences by stepping out of the ivory tower. Many researchers are already engaging with the masses by posting on social media, penning opinion articles, and appearing on platforms aimed at public consumption (YouTube channels, TED talks, etc.). A researcher is her own best spokesperson in explaining the importance of her work and the scientific process; unfortunately, a scientist’s role as an educator in the classroom and community is often crowded out by the all-encompassing imperative to publish or perish. As a profession, we must become more willing to step out of our laboratories to engage with the public and educate the next generation of science-savvy citizens.


In addition, many scientists have expressed interest in running for office, including UC Berkeley’s Michael Eisen (who is also a co-founder of PLOS). When asked by Science why he was considering a run for senate, Eisen responded:


“My motivation was simple. I’m worried that the basic and critical role of science in policymaking is under a bigger threat than at any point in my lifetime. We have a new administration and portions of Congress that don’t just reject science in a narrow sense, but they reject the fundamental idea that undergirds science: That we need to make observations about the world and make our decisions based on reality, not on what we want it to be. For years science has been under political threat, but this is the first time that the whole notion that science is important for our politics and our country has been under such an obvious threat.”


If scientists can enter the House and Senate in greater numbers, they will be able to inject scientific sense into the discussions held by members of the legislature whose primary backgrounds are in business and law.


Science is a bipartisan issue that should not be bogged down by the whims of political machinations. We depend on research to address some of the most pressing problems of our time, and America’s greatness rests in part on its leadership using science as an exploration of physical truths and a means of overcoming our present limitations and challenges.



Check out Yoo Jung’s book aimed at helping college students excel in science, What Every Science Student Should Know (University of Chicago Press)