Determinism versus free will, a false dichotomy 

[Figure: Brownian motion, from Wikipedia]

You might be constrained by, and contingent on, past events, but you are not determined! (That said, you are not exactly free either.)

AI Generated Summary: Arguments for and against determinism and free will in relation to biological systems often overlook the fact that neither is entirely consistent with our understanding of how these systems function. The presence of stochastic, or seemingly random, events is widespread in biological systems and can have significant functional effects. These stochastic processes lead to a range of unpredictable but often useful behaviors. When combined with self-consciousness, such as in humans, these behaviors are not entirely determined but are still influenced by the molecular and cellular nature of living systems. They may feel like free actions, but they are constrained by the inherent biological processes.

Recently two new books have appeared arguing for (1) and against (2) determinism in the context of biological systems. There have also been many posts on the subject (here is the latest one by John Horgan). These works join an almost constant stream of largely unfounded, bordering on anti-scientific, speculation, including suggestions that consciousness has non-biological roots and exists outside of animals. Speaking as a simple molecular/cellular biologist with a more than passing interest in how to teach scientific thinking effectively, it seems necessary to anchor any meaningful discussion of determinism versus free will in clearly defined terms. Just to start, what does it mean to call a system “determined” if we cannot accurately predict its behavior? This brings us to a discussion of what are known as stochastic processes.

The term random is often used to describe noise, unpredictable variations in measurements or in the behavior of a system. The common understanding of the term random implies that noise is without a discernible cause. But the underlying assumption of the sciences, I have been led to believe, is that the Universe is governed exclusively by natural processes; magical or supernatural processes are not necessary and are excluded from scientific explanations. The implication of this naturalistic assumption is that all events have a cause, although the cause(s) may be theoretically or practically unknowable. On the theoretical side, there are the implications of Heisenberg’s uncertainty principle, which limits our ability to measure all aspects of a system. On the practical side, measuring the position and kinetic energy of each molecule (and of the parts of larger molecules) in a biological system is likely to kill the cell. The apparent conclusion is that the measurement accuracy needed to consider a system, particularly a biological system, as “determined” is impossible to achieve. In a strict sense, determinism is an illusion.

The question that remains is how to conceptualize the “random” and noisy aspects of systems. I would argue that the observable reality of stochasticity, particularly in biological systems at all levels of organization, from single cells to nervous systems, largely resolves the scientific paradox of randomness. Simply put, stochastic processes display a strange and counter-intuitive behavior: they are unpredictable at the level of individual events, but the behavior of populations becomes increasingly predictable as population size increases. Perhaps the most widely known examples of stochastic processes are radioactive decay and Brownian motion. Given a large enough population of atoms, it is possible to accurately predict the time it takes for half of the atoms to decay. But knowing the half-life of an isotope does not enable us to predict when any particular atom will decay. In Schrödinger’s famous scenario a living cat is placed in an opaque box containing a radioactive atom; when the atom decays, it activates a process that leads to the death of the cat. At any particular time after the box is closed, it is impossible to predict with certainty whether the cat is alive or dead because radioactive decay is a stochastic process. We can, if we know the half-life of the isotope, estimate the probability that the cat is alive but, rest assured, as a biologist who has a cat, at no time is the cat both alive and dead. Only by opening the box can we know the state of the cat for sure.
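To make the population-versus-individual point concrete, here is a minimal simulation sketch (not part of the original post): each simulated atom is assigned an exponentially distributed decay time, so no individual decay can be anticipated, yet the fraction surviving one half-life converges on 50% as the population grows. The half-life and population sizes are arbitrary illustrative choices; the same statistics describe the dissociation of the molecular complexes discussed below.

```python
import random
import math

def decay_times(n_atoms, half_life, rng):
    """Draw a decay time for each atom: individually unpredictable,
    exponentially distributed with rate ln(2)/half_life."""
    rate = math.log(2) / half_life
    return [rng.expovariate(rate) for _ in range(n_atoms)]

rng = random.Random(42)
half_life = 10.0  # arbitrary time units

for n in (10, 1_000, 100_000):
    times = decay_times(n, half_life, rng)
    # fraction of atoms still intact at t = half_life; approaches 0.5 as n grows
    surviving = sum(t > half_life for t in times) / n
    print(f"N = {n:>7}: fraction surviving one half-life = {surviving:.3f}")

# A single atom (the trigger in the cat scenario) is effectively a coin flip:
one = decay_times(1, half_life, rng)[0]
print("this particular atom decayed at t =", round(one, 2), "- unpredictable in advance")
```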

Something similar is going on with Brownian motion, the jiggling of pollen grains in water first described by Robert Brown in 1827. Einstein reasoned that “if tiny but visible particles were suspended in a liquid, the invisible atoms in the liquid would bombard the suspended particles and cause them to jiggle”. His conclusion was that Brownian motion provided evidence for the atomic and molecular nature of matter. Collisions with neighboring molecules provide the energy that drives diffusion; diffusion moves molecules around so that regulatory interactions can occur, and the same collisions provide the energy needed to disrupt such molecular interactions. The stronger the binding interaction between atoms or molecules, the longer, ON AVERAGE, they will remain associated with one another. We can measure interaction affinities based on the half-life of interactions in a large enough population, but, as with radioactive decay, exactly when any particular complex will dissociate cannot be predicted.

Molecular processes clearly “obey” rules. Energy is moved around through collisions, but we cannot predict when any particular event will occur. Gene expression is controlled by the assembly and disassembly of multicomponent complexes, so we cannot know for sure how long a particular gene will be active or repressed. Such unpredictable assembly/disassembly events lead to what is known as transcriptional bursting: bursts of messenger RNA synthesis from a gene followed by periods of “silence” (3). A similar behavior is associated with the synthesis of polypeptides (4). Both processes can influence cellular and organismic behaviors. Many aspects of biological systems, including embryonic development, immune system regulation, and the organization and activity of neurons and supporting cells involved in behavioral responses to external and internal signals (5), display such noisy behaviors.
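The standard cartoon of this behavior is the two-state (“telegraph”) model: a promoter flips stochastically between OFF and ON, transcripts are made only while it is ON, and each transcript decays independently. Below is a minimal Gillespie-style sketch of that model; the rate constants are invented for illustration and are not taken from references (3) or (4).

```python
import random

# Illustrative (made-up) rate constants, per arbitrary time unit
K_ON, K_OFF = 0.05, 0.5      # promoter switching OFF->ON and ON->OFF
K_TX, K_DEG = 10.0, 0.2      # mRNA synthesis (only while ON) and mRNA decay

def simulate(t_end, rng):
    """Gillespie-style simulation of a single bursty gene."""
    t, on, mrna, trace = 0.0, False, 0, []
    while t < t_end:
        rates = {
            "switch": K_OFF if on else K_ON,   # promoter flips state
            "make":   K_TX if on else 0.0,     # transcription, only when ON
            "decay":  K_DEG * mrna,            # each mRNA decays independently
        }
        total = sum(rates.values())
        t += rng.expovariate(total)            # waiting time to the next event
        r = rng.uniform(0.0, total)            # pick which event occurred
        if r < rates["switch"]:
            on = not on
        elif r < rates["switch"] + rates["make"]:
            mrna += 1
        else:
            mrna -= 1
        trace.append((t, mrna))
    return trace

trace = simulate(200.0, random.Random(1))
print("peak transcripts during a burst:", max(m for _, m in trace))
```

Plotting the mRNA count against time shows long silent stretches punctuated by bursts, the qualitative behavior described above.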

Why are biological systems so influenced by stochastic processes? Two simple reasons: they are composed of small, sometimes very small, numbers of specific molecules, and individual molecular events can have system-wide effects. The obvious and universal extreme is that a cell typically contains only one to two copies of each gene. Remember, a single change in a single gene can produce a lethal effect on the cell or organism that carries it. Whether a gene is “expressed” or not can alter, sometimes dramatically, cellular and system behaviors. The number of regulatory, structural, and catalytic molecules (typically proteins) present in a cell is often small, leading to a situation in which the statistics of large numbers do not apply. Consider a “simple” yeast cell. Using a range of techniques, Ho et al (6) estimated that such cells contain about 42 million protein molecules. A yeast cell has around 5300 genes that encode protein components, with an average of 8400 copies of each protein. For proteins present at low levels, the effects of noise can be functionally significant. While human cells are larger and contain more genes (~25,000), each gene remains at one to two copies per cell. In particular, the number of gene regulatory proteins tends to be on the low side. If you are curious, the B10NUMB3R5 site hosted by Harvard University provides empirically derived estimates of the average number of various molecules in various organisms and cell types.
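The “small numbers” point can be made quantitative. If a protein’s copy number fluctuates around a steady-state mean, the simplest (Poisson) expectation is that the relative noise scales as one over the square root of the mean. A back-of-the-envelope sketch, with the copy numbers below chosen only for illustration (8,400 being the average cited above):

```python
import math

# Illustrative steady-state copy numbers per cell (not measured values)
for n_mean in (10, 100, 8_400):
    cv = 1.0 / math.sqrt(n_mean)   # Poisson: std/mean = 1/sqrt(mean)
    print(f"~{n_mean:>5} copies/cell -> relative fluctuation ~ {cv:.1%}")

# ~10 copies: swings of roughly a third; ~8,400 copies: ~1% swings.
# A regulator present at ten copies is intrinsically noisy; an abundant
# structural protein behaves almost deterministically.
```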

The result is that noisy behaviors in living systems are ubiquitous and their effects unavoidable. Uncontrolled, they could lead to the death of the cell and organism. Given that each biological system appears to have an uninterrupted, billions-of-years-long history going back to the “last universal common ancestor”, it is clear that highly effective feedback systems monitor and adjust the living state, enabling it to respond to molecular and cellular level noise as well as to various internal and external inputs. This “internal model” of the living state is continuously updated to (mostly) constrain stochastic effects (7).

Organisms exploit stochastic noise in various ways. It can be used to produce multiple, unpredictable behaviors from a single genome, and it is one reason that identical twins are not perfectly identical (8). Unicellular organisms take advantage of stochastic processes to probe (ask questions of) their environment, and to respond to opportunities and challenges. In a population of bacteria it is common to find that certain cells withdraw from active cell division, a stochastic decision that renders them resistant to antibiotics that kill rapidly dividing cells. These “persisters” are no different genetically from their antibiotic-sensitive relatives (9). Their presence enables the population to anticipate and survive environmental challenges. Another unicellular, stochastically-regulated system is the bacterium E. coli‘s lac operon, a classic system that appears to have traumatized many a biology student. It enables the cell to ask, “is there lactose in my environment?” How? A repressor molecule, LacI, is present in about 10 copies per cell. When more easily digestible sugars are absent, the cell enters a stress state. In this state, when the LacI protein is knocked off the gene’s regulatory region there is a burst of gene expression. If lactose is present, the proteins encoded by the operon are synthesized and enable lactose to enter the cell and be broken down. One of the breakdown products inhibits the repressor protein, so that the operon remains active. No lactose present? The repressor rebinds and the gene goes silent (10). Such noisy regulatory processes enable cells to periodically check their environment so that genes stay on only when they are useful.
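To see how a handful of repressor molecules lets a cell “ask” its question, it can help to write the switch out as explicit logic: every so often the repressor falls off (a stochastic event), a brief burst of expression follows, and that burst is sustained only if lactose is actually present. The sketch below is a toy model with entirely made-up probabilities, not a description of the measured kinetics in reference (10).

```python
import random

def lac_operon_check(lactose_present, rng, p_unbind=0.05):
    """One short time interval in the life of the operon (toy model)."""
    if rng.random() > p_unbind:
        return "repressed"                 # LacI stays bound; gene silent
    # Repressor fell off: a burst of lac gene expression occurs.
    if lactose_present:
        return "ON (allolactose keeps LacI off the DNA)"
    return "brief burst, then re-repressed"

rng = random.Random(7)
for lactose in (False, True):
    outcomes = [lac_operon_check(lactose, rng) for _ in range(1000)]
    print("lactose" if lactose else "no lactose", "->",
          {o: outcomes.count(o) for o in set(outcomes)})
```

The point of the sketch is that the “question” is asked stochastically and repeatedly; the answer (sustained expression) depends on whether lactose is there to stabilize the ON state.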

As noted by Honegger & de Bivort (11) (see also this post on noise), decades of inbreeding of rodents in shared environments eliminated only 20–30% of the observed variance in a number of phenotypes. Such unpredictability can be beneficial. If an organism always “jumps” in the same direction at the approach of a predator, it won’t take long before predators anticipate its behavior. Recent molecular techniques, particularly the ability to analyze the expression of genes at the single cell level, have revealed the noisiness of gene expression within cells of the same “type”. Surprisingly, for about 10% of human genes, only the maternal or the paternal version of the gene is expressed in a particular cell, leading to regions of the body with effectively different genomes. This process of “monoallelic expression” is distinct from the dosage compensation associated with the random “inactivation” of one or the other X-chromosome in females. Monoallelic expression has been linked to “adaptive signaling processes, and genes linked to age-related diseases such as neurodegeneration and cancer” (12). The end result of noisy gene expression, mutation, and various “downstream” effects is that we are all mosaics, composed of clones of cells that behave differently due to noisy molecular differences.

Consider your brain. On top of the recent identification of over 3000 neural cell types in the human brain (13), there is noisy as well as experience-dependent variation in gene expression, in neuronal morphology and connectedness, and in the rates and patterns of neuronal firing due to differences in synaptic structure, position, strength, and other factors. Together these can be expected to influence how you (your brain) perceive and process the external world, your own internal state, and the effects associated with the interaction between these two “models”. Of course the current state of your brain has been influenced by, constrained by, and contingent upon past inputs and experiences, and the noisy events associated with its development. At the cellular level, the sum of these molecular and cellular interactions can be considered the consciousness of the cell, but this is a consciousness not necessarily aware of itself. In my admittedly naive view, as neural systems, brains, grow in complexity, they become aware of their own workings. As Godfrey-Smith (14) puts it, “brain processes are not causes of thoughts and experiences; they are thoughts and experiences”. Thoughts become inputs into the brain’s model of itself.

What seems plausible is that as nervous systems increase in complexity, processing increasing amounts of information, including information arising from their own internal workings, they may come to produce a meta-model that, for reasons “known” only to itself, needs to make sense of those experiences, feelings, and thoughts. In contrast to the simpler questions asked by bacteria, such as “is there an antibiotic or lactose in my world?”, more complex (neural) systems may ask “who is to blame for the pain and suffering in the world?” I absent-mindedly respond with a smile to a person at a coffeehouse, and then my model reconsiders (updates) itself depending, in part, upon their response, my previous commitments or chores, and whether other thoughts distract or attract “me”. Out of this ferment of updating models emerge self-conscious biological activities – I turn to chat or bury my head back in my book. How I (my model) respond is a complex function of how my model works and how it interprets what is going on, a process influenced by noise, genetics, and past experiences; my history of rewards, traumas, and various emotional and “meaningful” events.

Am I (my model) free to behave independently of these effects? No! But am I (my model) determined by them? Again, no! The effects of biological noise in its various forms, together with past and present events, will be reinforced or suppressed by my internal network and my history of familial, personal, and social experiences. I feel “free” in that there are available choices, because I am both these models and the process of testing and updating them. Tentative models of what is going on (thinking fast) are then updated based on new information or self-reflection (thinking slower). I attempt to discern what is “real” and what seems like an appropriate response. When the system (me) is working non-pathologically, it avoids counter-productive, self-destructive ideations and actions; it can produce sublime metaphysical abstractions and self-sacrificing (altruistic) behaviors. Mostly it acts to maintain itself and adapt, often resorting to and relying upon the stories it tells itself. I am neither determined nor free, just an organism coping, or attempting to cope, with the noisy nature of existence, its own internal systems, and an excessively complex neural network.

Added notes: Today (5 Dec. 23) I was surprised to discover this article (Might There Be No Quantum Gravity After All?) with the following quote: “not all theories need be reversible, they can also be stochastic. In a stochastic theory, the initial state of a physical system evolves according to an equation, but one can only know probabilistically which states might occur in the future—there is no unique state that one can predict.” Makes you think! I also realized that I should have cited Zechner et al (added to REF 11), and now I have to read “Free will without consciousness?” by Mudrik et al., 2022. Trends in Cog. Sciences 26: 555-566.

Literature cited

  1. Sapolsky, R.M. 2023. Determined: A Science of Life Without Free Will. Penguin LLC US
  2. Mitchell, K.J. 2023. Free Agents: How Evolution Gave Us Free Will. Princeton. 
  3. Fukaya, T. (2023). Enhancer dynamics: Unraveling the mechanism of transcriptional bursting. Science Advances, 9(31), eadj3366.
  4. Livingston, N. M., Kwon, J., Valera, O., Saba, J. A., Sinha, N. K., Reddy, P., Nelson, B., Wolfe, C., Ha, T., Green, R., Liu, J., & Wu, B. (2023). Bursting translation on single mRNAs in live cells. Molecular Cell.
  5. Harrison, L. M., David, O., & Friston, K. J. (2005). Stochastic models of neuronal dynamics. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1457), 1075-1091. 
  6. Ho, B., Baryshnikova, A., & Brown, G. W. (2018). Unification of protein abundance datasets yields a quantitative Saccharomyces cerevisiae proteome. Cell systems, 6, 192-205. 
  7. McNamee & Wolpert (2019). Internal models in biological control. Annual review of control, robotics, and autonomous systems, 2, 339-364.
  8. Czyz, W., Morahan, J. M., Ebers, G. C., & Ramagopalan, S. V. (2012). Genetic, environmental and stochastic factors in monozygotic twin discordance with a focus on epigenetic differences. BMC medicine, 10, 1-12.
  9. Manuse, S., Shan, Y., Canas-Duarte, S.J., Bakshi, S., Sun, W.S., Mori, H., Paulsson, J. and Lewis, K., 2021. Bacterial persisters are a stochastically formed subpopulation of low-energy cells. PLoS biology, 19, p.e3001194.
  10. Vilar, J. M., Guet, C. C. and Leibler, S. (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.
  11. Honegger & de Bivort. 2017. Stochasticity, individuality and behavior; and Zechner, C., Nerli, E., & Norden, C. 2020. Stochasticity and determinism in cell fate decisions. Development 147, dev181495.
  12. Cepelewicz 2022. Nature Versus Nurture? Add ‘Noise’ to the Debate
  13. Johansen, N., Somasundaram, S., Travaglini, K.J., Yanny, A.M., Shumyatcher, M., Casper, T., Cobbs, C., Dee, N., Ellenbogen, R., Ferreira, M., Goldy, J., Guzman, J., Gwinn, R., Hirschstein, D., Jorstad, N.L., Keene, C.D., Ko, A., Levi, B.P., Ojemann, J.G., Nadiy, T.P., Shapovalova, N., Silbergeld, D., Sulc, J., Torkelson, A., Tung, H., Smith, K., Lein, E.S., Bakken, T.E., Hodge, R.D., & Miller, J.A. (2023). Interindividual variation in human cortical cell type abundance and expression. Science, 382, eadf2359.
  14. Godfrey-Smith, P. (2020). Metazoa: Animal life and the birth of the mind. Farrar, Straus and Giroux.

Sounds like science, but it ain’t …

We are increasingly assailed with science-related “news” – stories that too often involve hype and attempts to garner attention (and no, half-baked ideas are not theories; they are often non-scientific speculation or unconstrained fantasies).

The other day, as is my addiction, I turned to the “Real Clear Science” website to look for novel science-based stories (distractions from the more horrifying news of the day). I discovered two links that seduced me into clicking: “Atheism is not as rare or as rational as you think” by Will Gervais and Peter Sjöstedt-H’s “Consciousness and higher spatial dimensions“. A few days later I encountered “Consciousness Is the Collapse of the Wave Function” by Stuart Hameroff. On reading them (more below), I faced the realization that science itself, and its distorted popularization by both institutional PR departments and, increasingly, scientists and science writers, may be partially responsible for the absurdification of public discourse on scientific topics [1]. In part the problem arises from the assumption that science is capable of “explaining” much more than is actually the case. This insight is neither new nor novel. Timothy Caulfield’s essay Pseudoscience and COVID-19 — we’ve had enough already focuses on the fact that various, presumably objective, data-based medical institutions have encouraged the public’s thirst for easy cures for serious, and often incurable, diseases. As an example: “If a respected institution, such as the Cleveland Clinic in Ohio, offers reiki — a science-free practice that involves using your hands, without even touching the patient, to balance the “vital life force energy that flows through all living things” — is it any surprise that some people will think that the technique could boost their immune systems and make them less susceptible to the virus?” That public figures and trusted institutions provide platforms for such silliness [see Did Columbia University cut ties with Dr. Oz?] means that there is little to distinguish data-based treatments from faith- and magical-thinking-based placebos. The ideal of disinterested science, already tempered by common human frailties, is further eroded by the lure of profit and/or the hope of enhanced public/professional status and notoriety. As noted by Pennock, “Science never guarantees absolute truth, but it aims to seek better ways to assess empirical claims and to attain higher degrees of certainty and trust in scientific conclusions“. Most importantly, “Science is a set of rules that keep the scientists from lying to each other.” [2]

It should surprise no one that the failure to explicitly recognize the limits, and evolving nature, of scientific knowledge opens the door to self-interested hucksterism at both individual and institutional levels. Just consider the number of complementary/alternative, non-scientific “medical” programs run by prestigious institutions. The proliferation of pundits speaking outside of their areas of established expertise, and often beyond what is scientifically knowable (e.g. historical events such as the origin of life, or the challenges of living in the multiverse, which are, by their very nature, unobservable), speaks to the increasingly unconstrained growth of pathological, bogus, and corrupted science, which, while certainly not new [3], has been facilitated by the proliferation of public, no-barrier, no-critical-feedback platforms [1,4]. Ignoring the real limits of scientific knowledge, and dismissing the expertise of established authorities, rejects the ideals that have led to science that “works”.

Of course, we cannot blame the distortion of science for every wacky idea; crazy, conspiratorial, and magical thinking may well be linked to the cognitive “features” (or are they bugs?) of the human brain. Norman Cohn describes the depressing, and repeated, pattern behind the construction of dehumanizing libels used to justify murderous behaviors towards certain groups [5]. Recent studies indicate that brains, whether complex or simple neural networks, appear to construct emergent models of the world, models they use to coordinate internal perceptions with external realities [6]. My own (out of my area of expertise) guess is that the complexity of the human brain is associated with, and leads to, the emergence of internal “working models” that attempt to make sense of what is happening to us, in part to answer questions such as why the good die young and the wicked go unpunished. It seems likely that our social nature (and our increasing social isolation) influences these models, models that are “checked” or “validated” against our experiences.

It was in this context that Gervais’s essay on atheism caught my attention. He approaches two questions: “how Homo sapiens — and Homo sapiens alone — came to be a religious species” and “how disbelief in gods can exist within an otherwise religious species?” But is Homo sapiens really a religious species, and what exactly is a religion? Is it a tool that binds social groups of organisms together, a way of coping with, and giving meaning to, the (apparent) capriciousness of existence and experience, both, or something else again? And how are we to know what is going on inside other brains, including the brains of chimps, whales, or cephalopods? In this light I was struck by an essay by Sofia Deleniv, “The ‘me’ illusion: How your brain conjures up your sense of self”, that considers the number of species that appear to be able to recognize themselves in a mirror. It turns out this is not nearly as short a list as was previously thought, and it seems likely that self-consciousness, the ability to recognize yourself as you, may be a feature of many such systems. Do other organisms possess emergent “belief systems” that help process incoming and internal signals, including their own neural noise? When the author says, “We then subtly gauge participants’ intuitions” by using “a clever experiment to see how people mentally represent atheists”, one is left to wonder whether there are direct and objective measures of “intuitions” or “mental representations”. Then the shocker: after publishing a paper claiming that “Analytic Thinking Promotes Religious Disbelief“, the authors state that “the experiments in our initial Science paper were fatally flawed, the results no more than false positives.” One is left to wonder whether the questions asked made sense in the first place. While it initially seemed scientific (after all, it was accepted and published in a premiere scientific journal), was it ever really science?

Both “Consciousness and Higher Spatial Dimensions” and “Consciousness Is the Collapse of the Wave Function” sound very scientific. Some physicists (the most sciencey of scientists, right?) have been speculating, via “string theory” and “multiverses”, a series of unverified (and likely unverifiable) speculations, that the universe we inhabit has many more than the three spatial dimensions we experience. But how consciousness, an emergent property of biological (cellular) networks, is related to speculative physics is not clear, no matter what Nobel laureates in physics may say. Should we, the people, take these remarks seriously? After all, these are the same folks who question the reality of time (for no good reason, as far as I can tell, as I watch my new grandchild and myself grow older rather than younger).

Part of the issue involves what has been called “the hard problem of consciousness”, but as far as I can tell, consciousness is not a hard problem but a process that emerges from systems of neural cells interacting with one another and their environment in complex ways, not unlike the underlying processes of embryonic development, in which a new macroscopic organism composed of thousands to billions of cells emerges from a single cell. And if the brain and body are generating signals (thoughts), then it makes sense that these in turn feed back into the system, and as consciousness becomes increasingly complex, these thoughts need to be “understood” by the system that produced them. The system may be forced to make sense of itself (perhaps that is how religions and other explanatory beliefs come into being, settling the brain so that it can cope with the material world), whether it is a nematode worm, an internet pundit, a QAnon wack-o, a religious fanatic, or a simple citizen trying to make sense of things.

Thanks to Melanie Cooper for editorial advice and Steve Pollock for checking my understanding of physics; all remaining errors are mine alone!

  1. Scheufele, D. A. and Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences 116, 7662-7669
  2. Kenneth S. Norris, cited in False Prophet by Alexander Kohn (and cited by John Grant in Corrupted Science).
  3.  See Langmuir, I. (1953, recovered and published in 1989). “Pathological science.” Research-Technology Management 32: 11-17; “Corrupted Science: Fraud, Ideology, and Politics in Science” and “Bogus Science: or, Some people really believe these things” by John Grant (2007 and 2009)
  4.  And while I personally think Sabine Hossenfelder makes great explanatory videos, even she is occasionally tempted to go beyond the scientifically demonstrable: e.g. You don’t have free will, but don’t worry and An update on the status of superdeterminism with some personal notes  
  5.  See Norman Cohn’s (1975) “Europe’s Inner Demons”.
  6. Kaplan, H. S. and Zimmer, M. (2020). Brain-wide representations of ongoing behavior: a universal principle? Current opinion in neurobiology 64, 60-69.

Misinformation in and about science.

originally published as https://facultyopinions.com/article/739916951 – July 2021

There have been many calls for improved “scientific literacy”. Scientific literacy has been defined in a number of, often ambiguous, ways (see National Academies of Sciences, Engineering, and Medicine, 2016 {1}). According to Krajcik & Sutherland (2010) {2} it is “the understanding of science content and scientific practices and the ability to use that knowledge”, which implies “the ability to critique the quality of evidence or validity of conclusions about science in various media, including newspapers, magazines, television, and the Internet”. But what types of critiques are we talking about, and how often is this ability to critique, and the scientific knowledge it rests on, explicitly emphasized in the courses non-science (or science) students take? As an example, highlighted by Sabine Hossenfelder (2020) {3}: are students introduced to the higher-order reasoning and understanding of the scientific enterprise needed to dismiss a belief in a flat (or a ~6000-year-old) Earth?

While the sources of scientific illiteracy are often ascribed to social media, religious beliefs, or economically or politically motivated distortions, West and Bergstrom point out how scientists and the scientific establishment (public relations departments and the occasional science writer) also play a role. They identify the problems arising from the fact that the scientific enterprise (and the people who work within it) act within “an attention economy” and “compete for eyeballs just as journalists do.” The authors provide a review of the factors that contribute to misinformation within the scientific literature and its media ramifications, including the contribution of “predatory publishers”, and call for “better ways of detecting untrustworthy publishers.” At the same time, there are ingrained features of the scientific enterprise that serve to distort the relevance of published studies; these include not explicitly identifying the organism in which the studies were carried out, thereby obscuring the possibility that they might not be relevant to humans (see Kolata, 2013 {4}). There are also systemic biases within the research community. Consider the observation, characterized by Pandey et al. (2014) {5}, that studies of “important” genes expressed in the nervous system are skewed: the “top 5% of genes absorb 70% of the relevant literature” while “approximately 20% of genes have essentially no neuroscience literature”. What appears to be the “major distinguishing characteristic between these sets of genes is date of discovery, early discovery being associated with greater research momentum—a genomic bandwagon effect”, a version of the “Matthew effect” described by Merton (1968) {6}. In the context of the scientific community, various forms of visibility (including pedigree and publicity) are in play in funding decisions and career advancement. Not pointed out explicitly by West and Bergstrom is the impact of disciplinary experts who pontificate outside of their areas of expertise and speculate beyond what can be observed or rejected experimentally; speculations on the existence of non-observable multiverses, the ubiquity of consciousness (Tononi & Koch, 2015 {7}), and the rejection of experimental tests as a necessary criterion of scientific speculation (see Loeb, 2018 {8}) spring to mind.

Many educational institutions demand that non-science students take introductory courses in one or more sciences in the name of cultivating “scientific literacy”. This is a policy that seems to me to be tragically misguided, and perhaps based more on institutional economics than on student learning outcomes. Instead, a course on “how science works and how it can be distorted” would be more likely to move students closer to the ability to “critique the quality of evidence or validity of conclusions about science”. Such a course could well be based on an extended consideration of the West and Bergstrom article, together with their recently published trade book “Calling bullshit: the art of skepticism in a data-driven world” (Bergstrom and West, 2021 {9}), which outlines many of the ways that information can be distorted. Courses that take this approach to developing a skeptical (and realistic) understanding of how the sciences work are mentioned, although the measures of learning outcomes used to assess their efficacy are not described.

literature cited

  1. Science literacy: concepts, contexts, and consequences. Committee on Science Literacy and Public Perception of Science, Board on Science Education, Division of Behavioral and Social Sciences and Education, National Academies of Sciences, Engineering, and Medicine. 2016 Oct 14. PMID: 27854404
  2. Supporting students in developing literacy in science. Krajcik JS, Sutherland LM. Science. 2010 Apr 23; 328(5977):456-459. PMID: 20413490
  3. Flat Earth “Science”: Wrong, but not Stupid. Hossenfelder S. BackRe(Action) blog, 2020, Aug 22 (accessed Jul 29, 2021)
  4. Mice fall short as test subjects for humans’ deadly ills. Kolata G. New York Times, 2013, Feb 11 (accessed Jul 29, 2021)
  5. Functionally enigmatic genes: a case study of the brain ignorome. Pandey AK, Lu L, Wang X, Homayouni R, Williams RW. PLoS ONE. 2014; 9(2):e88889. PMID: 24523945
  6. The Matthew Effect in Science: The reward and communication systems of science are considered. Merton RK. Science. 1968 Jan 5; 159:56-63. PMID: 17737466
  7. Consciousness: here, there and everywhere? Tononi G, Koch C. Philos Trans R Soc Lond B Biol Sci. 2015 May 19; 370(1668). PMID: 25823865
  8. Theoretical Physics Is Pointless without Experimental Tests. Loeb A. Scientific American blog, 2018, Aug 10 [Blog piece] (accessed Jul 29, 2021)
  9. Calling bullshit: the art of skepticism in a data-driven world. Bergstrom CT, West JD. Random House Trade Paperbacks, 2021. ISBN: 978-0141987057

Is it possible to teach evolutionary biology “sensitively”?

Michael Reiss, a professor of science education at University College London and an Anglican priest, suggests that “we need to rethink the way we teach evolution”, largely because conventional approaches can be unduly confrontational and “force religious children to choose between their faith and evolution” or result in students who “refuse to engage with a lesson.” He suggests that a better strategy would be akin to those used to teach a range of “sensitive” subjects “such as sex, pornography, ethnicity, religion, death studies, terrorism, and others” and could “help some students to consider evolution as a possibility who would otherwise not do so.” [link to his original essay and a previous post on teaching evolution: Go ahead and teach the controversy]

There is no doubt that an effective teacher attempts to present materials sensitively; it is the rare person who will listen to someone who “teaches” ideas in a hostile, alienating, or condescending manner. That said, it can be difficult to avoid the disturbing implications of scientific ideas, implications that can be a barrier to their acceptance. The scientific conclusion that males and females are different but basically the same can upset people on various sides of the theo-political spectrum. 

In point of fact an effective teacher, a teacher who encourages students to question their long-held, or perhaps better put, familial or community beliefs, can cause serious social push-back – Trouble with a capital T. It is difficult to imagine a more effective teacher than Socrates (~470-399 BCE). Socrates “was found guilty of ‘impiety’ and ‘corrupting the young’, sentenced to death” in part because he was an effective teacher (see Socrates was guilty as charged). In a religious and political context, challenging accepted Truths (again with a capital T) can be a crime. In Socrates’ case, “Athenians probably genuinely felt that undesirables in their midst had offended Zeus and his fellow deities,” and that “Socrates, an unconventional thinker who questioned the legitimacy and authority of many of the accepted gods, fitted that bill.”

So we need to ask of scientists and science instructors: does the presentation of a scientific, that is, a naturalistic and non-supernatural, perspective in and of itself represent an insensitivity to those with a supernatural belief system? Here it is worth noting a point made by the philosopher John Gray, that such systems extend beyond those based on a belief in god(s); they include those who believe, with apocalyptic certainty, in any of a number of Truths, ranging from the triumph of a master race, the forced sterilization of the unfit, and the dictatorship of the proletariat, to history’s end in a glorious capitalist and technological utopia. Is a science or science instruction that is “sensitive” to, that is, uncritical of and not upsetting to, those who hold such beliefs even possible?

My initial impression is that one’s answer to this question is likely to be determined by whether one considers science a path to Truth, with a purposeful capital T, or rather holds that the goal of scientists is to build a working understanding of the world around and within us. Working scientists, and particularly biologists, who must daily confront the implications of apparently unintelligently designed organisms (due to the ways evolution works), are well aware that absolute certainty is counterproductive. Nevertheless, the proven explanatory and technological power of the scientific enterprise cannot help but reinforce the strong impression that there is some deep link between scientific ideas and the way the world really works. And while some scientists have advocated unscientific speculations (think multiverses and cosmic consciousness), the truth, with a small t, of scientific thinking is all around us.

Photograph of the Milky Way by Tim Carl photography, used by permission 

A science-based appreciation of the unimaginable size and age of the universe, taken together with compelling evidence for the relatively recent appearance of humans (Homo sapiens, from their metazoan, vertebrate, tetrapod, mammalian, and primate ancestors), cannot help but impact our thinking as to our significance in the grand scheme of things (assuming that there is such a, possibly ineffable, plan) (1). The demonstrably random processes of mutation and the generally ruthless logic by which organisms survive, reproduce, and evolve can lead even the most optimistic to question whether existence has any real meaning.

Consider, as an example, the potential implications of the progress being made in terms of computer-based artificial intelligence, together with advances in our understanding of the molecular and cellular connection networks that underlie human consciousness and self-consciousness. It is a small step to conclude, implicitly or explicitly, that humans (and all other organisms with a nervous system) are “just” wet machines that can (and perhaps should) be controlled and manipulated. The premise, the “self-evident truth”, that humans should be valued in and of themselves, and that their rights should be respected (2) is eroded by the ability of machines to perform what were previously thought to be exclusively human behaviors. 

Humans and their societies have, after all, been around for only a few tens of thousands of years. During this time, human social organizations have passed from small wandering bands, influenced by evolutionary kin and group selection processes, to a range of social systems: more or less functional democracies, pseudo-democracies (including our own growing plutocracy), dictatorships (some religion-based), and totalitarian police states. Whether humans have a long-term future (compared to the millions of years that dinosaurs dominated life on Earth) remains to be seen – although we can be reasonably sure that the Earth, and many of its non-human inhabitants, will continue to exist and evolve for millions to billions of years, at least until the dying Sun expands to engulf it.

So how do we teach scientific conclusions and their empirical foundations, which combine to argue that science represents how the world really works, without upsetting the most religiously and politically fanatical among us, those who most vehemently reject scientific thinking because they are the most threatened by its apparently unavoidable implications? The answer is open to debate, but to my mind it involves teaching students (and encouraging the public) to distinguish empirically-based, and so inherently limited, observations and the logical, coherent, and testable scientific models they give rise to from unquestionable TRUTH- and revelation-based belief systems. Perhaps we need to focus explicitly on the value of science rather than its “Truth”; to reinforce what science is ultimately for, what justifies society’s support for it: namely to help reduce human suffering and (where it makes sense) to enhance the human experience, goals anchored in the perhaps logically unjustifiable, but nevertheless essential, acceptance of the inherent value of each person.

  1. Apologies to “Good Omens”
  2. For example, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.” 

Science “awareness” versus “literacy” and why it matters, politically.

“Montaigne concludes, like Socrates, that ignorance aware of itself is the only true knowledge” – from “Forbidden Knowledge” by Roger Shattuck

A month or so ago we were treated to a flurry of media excitement surrounding the release of the latest Pew Research survey on Americans’ scientific knowledge (1). The results of such surveys have been interpreted to mean many things. As an example, the title of Maggie Koerth-Baker’s short essay for the 538 web site was a surprising “Americans are Smart about Science”, a conclusion not universally accepted (see also). Koerth-Baker was taken by the observation that the survey’s results support a conclusion that Americans display “pretty decent scientific literacy”. Other studies (see Drummond & Fischhoff 2017) report that one’s ability to recognize scientifically established statements does not necessarily correlate with the acceptance of science policies – on average, climate change “deniers” scored as well on the survey as “acceptors”. In this light, it is worth noting that science-based policy pronouncements generally involve projections of what the future will bring, rather than what exactly is happening now. Perhaps more surprisingly, greater “science literacy” correlates with more polarized beliefs, which, given the tentative nature of scientific understanding (which is not about truth per se but practical knowledge), suggests that such surveys measure something other than scientific literacy. While I have written on the subject before, it seems worth revisiting, particularly since I have since read Rosling’s Factfulness, thought more about the apocalyptic bases of many secular and religious movements, described in detail by the historian Norman Cohn and the philosopher John Gray, and gained a few, I hope, potentially useful insights on the matter.

First, to understand what the survey reports, we should take a look at the questions asked and decide what the ability to choose correctly implies: scientific literacy, as generally claimed, or something simpler, perhaps familiarity. It is worth recognizing that all such instruments, particularly those that are multiple choice in format, are proxies for a more detailed, time consuming, and costly Socratic interrogation designed to probe the depth of a person’s knowledge and understanding. In the Pew (and most other such) surveys, choosing the correct response implies familiarity with various topics impacted by scientific observations. They do not necessarily reveal whether or not the respondent understands where the ideas come from, why they are the preferred response, or exactly where and when they are relevant (2). So “getting the questions correct” demonstrates a familiarity with the language of science and some basic observations and principles, but not the limits of respondents’ understanding.

Take for example the question on antibiotic resistance. The correct answer, “it can lead to antibiotic-resistant bacteria”, does not reveal whether the respondent understands the evolutionary (selective) basis for this effect, that is, random mutagenesis (or horizontal gene transfer) and antibiotic-resistance-based survival. It is imaginable that a fundamentalist religious creationist could select the correct answer based on plausible, non-evolutionary mechanisms (3). In a different light, the question on oil, natural gas, and coal could be seen as ambiguous – aren’t these all derived from long-dead organisms, so couldn’t they reasonably be termed biofuels?

While there are issues with almost any such multiple choice survey instrument, surely we would agree that choosing the “correct” answers to these 11 questions reflects some awareness of current scientific ideas and terminologies. Certainly knowing (I think) that a base can neutralize an acid leaves unresolved how exactly the two interact, that is, what chemical reaction is going on, not to mention what is going on in the stomach and upper gastrointestinal tract of a human being. In this case, selecting the correct answer is not likely to conflict with one’s view of anthropogenic effects on climate, sex versus gender, or whether one has an up-to-date understanding of the mechanisms of immunity and brain development, or of the social dynamics behind vaccination – specifically the responsibilities that members of a social group have to one another.

But perhaps a more relevant point is our understanding of how science deals with the subject of predictions, because at the end of the day it is these predictions that may directly impact people in personal, political, and economic ways.

We can, I think, usefully divide scientific predictions into two general classes. There are predictions about a system that can be immediately confirmed or dismissed through direct experiment and observation, and those that cannot. The immediate (accessible) type of prediction is the standard model of scientific hypothesis testing, an approach that reveals errors or omissions in one’s understanding of a system or process. Generally these are the empirical drivers of theoretical understanding (although perhaps not in some areas of physics). The second type of prediction is inherently more problematic, as it deals with the currently unobservable future (or the distant past). We use our current understanding of the system, and various assumptions, to build a predictive model of the system’s future behavior (or past events), and then wait to see if the model’s predictions are confirmed. In the case of models about the past, we often have to wait for a fortuitous discovery, for example the discovery of a fossil that might support or disprove our model.

It’s tough to make predictions, especially about the future
– Yogi Berra (apparently)

Anthropogenic effects on climate are an example of the second type of prediction. No matter our level of confidence, we cannot be completely sure our model is accurate until the future arrives. Nevertheless, there is a marked human tendency to take predictions, typically about the end of the world or the future of the stock market, very seriously and to make urgent decisions based upon them. In many cases, these predictions impact only ourselves; they are personal. In the case of climate change, however, they are likely to have disruptive effects that impact many. Part of the concern about such predictions is that responses to them will have immediate impacts: they produce social and economic winners and losers whether or not the predictions are confirmed by events. As Hans Rosling points out in his book Factfulness, there is an urge to take urgent, drastic, and pro-active actions in the face of perceived (predicted) threats. These recurrent and urgent calls to action (not unlike repeated, and unfulfilled, predictions of the apocalypse) can lead to fatigue and the eventual dismissal of important warnings, warnings that should influence, albeit perhaps not dictate, ecological-economic and political policy decisions.

Footnotes and literature cited:
1. As a Pew Biomedical Scholar, I feel some peripheral responsibility for the impact of these reports

2. As pointed out in a forthcoming review, the quality of the distractors, that is the incorrect choices, can dramatically impact the conclusions derived from such instruments. 

3.  I won’t say intelligent design creationist, as that makes no sense. Organisms are clearly not intelligently designed, as anyone familiar with their workings can attest.

Drummond, C. & B. Fischhoff (2017). “Individuals with greater science literacy and education have more polarized beliefs on controversial science topics.” Proceedings of the National Academy of Sciences 114: 9587-9592.


Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically-based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and are often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of biological evolutionary mechanisms, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led to the rejection of Darwinian principles by many working scientists around the turn of the 20th century (see Bowler’s The Eclipse of Darwinism [link]). Educational research studies, such as our own “Understanding randomness and its impact on student learning“, reinforce the fact that ideas involving stochastic processes, relevant to evolutionary as well as cellular and molecular biology, are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular and cellular level processes, including the origin of various diseases, particularly cancer [Cancer is partly caused by bad luck] (1).

[Image: “Teach Evolution”]

All of which leaves the critical question, at least for educators, of how to best teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals”, on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather that they develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.


There are a number of educational strategies, in part depending upon one’s disciplinary perspective, on how to approach teaching evolutionary processes. Here I consider one, based on my background in cell and molecular biology.  Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.”  It is one of a number of web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC/Gnomad Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend the range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals / coreBio course (with an evolving text available here), we originally used the observation that members of our subfamily of primates, the Haplorhini or dry-nosed primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet. Without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral Haplorhini population lost the L-gulonolactone oxidase (GULO) gene (see OMIM) necessary for vitamin C synthesis. The remains of the GULO gene found in human and other Haplorhini genomes are mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result (similar to the ability of bacteria to accurately decode a human insulin gene) that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not strongly selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost due to mutation, it is unlikely, although not impossible, that a subsequent mutation will lead to the repair of the gene. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are ways to repair it. As the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nosed) group of primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.

At this point we can start to use Genomicus to examine the results of evolutionary processes (see a YouTube video on using Genomicus) (3). In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they are the same size (they are not); different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes) are found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).

We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, that appear to have diverged from other eukaryotes about 1,500 million years ago (abbreviated Mya). Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors. [Figure: GULO phylogeny, mouse]
Now a careful examination of this map reveals the absence of humans (Homo sapiens) and other Haplorhini primates – whoa!!! what gives? The explanation is, it turns out, rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they? We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region. What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box in the figure); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily-related) chromosomes of other Haplorhini primates. [Figure: synteny around the GULO region]
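The logic of this synteny test is simple enough to capture in a few lines of code: list the genes flanking GULO in a reference genome, then ask which of those neighbors still appear together in other genomes, even where GULO itself is absent. The gene orders below are hypothetical placeholders, not the actual Genomicus maps:

```python
# Toy gene orders around the GULO locus (hypothetical placeholder names,
# not the actual chromosome maps shown in Genomicus).
neighborhoods = {
    "mouse":        ["geneA", "geneB", "GULO", "geneC", "geneD"],
    "dog":          ["geneA", "geneB", "GULO", "geneC", "geneD"],
    "human (chr8)": ["geneA", "geneB",         "geneC", "geneD"],  # GULO reduced to a pseudogene
}

# Flanking genes in the reference (mouse) neighborhood, GULO itself excluded
reference = [g for g in neighborhoods["mouse"] if g != "GULO"]

for species, genes in neighborhoods.items():
    conserved = sum(g in genes for g in reference)
    print(f"{species:>12}: {conserved}/{len(reference)} flanking genes conserved; "
          f"functional GULO {'present' if 'GULO' in genes else 'absent'}")
```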

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships within the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share a similar syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region. Similarly, we can explain the genomic similarities between humans and closely related primates, such as the gorilla, gibbon, orangutan, and chimpanzee, and make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils [link].
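The synteny-based prediction can also be phrased computationally. Here is a toy sketch (my own, not from the post; the gene names and orders below are invented placeholders, not real genomic data) of the underlying logic: even when the focal gene is missing, a conserved neighborhood of flanking genes identifies the corresponding, homologous region.

```python
def shared_neighborhood(genes_a, genes_b, focal, window=3):
    """Genes lying within `window` positions of `focal` in list A that also
    occur in list B -- a crude stand-in for asking whether a syntenic
    neighborhood is conserved between two genomes."""
    if focal in genes_a:
        i = genes_a.index(focal)
        neighborhood = set(genes_a[max(0, i - window): i + window + 1]) - {focal}
    else:
        neighborhood = set(genes_a)   # focal gene absent (e.g., pseudogenized)
    return sorted(neighborhood & set(genes_b))

# Invented, illustrative gene orders (not real chromosome maps):
mouse_region = ["GeneW", "GeneX", "GULO", "GeneY", "GeneZ"]
human_region = ["GeneW", "GeneX", "GeneY", "GeneZ"]   # neighborhood intact, GULO gone

print(shared_neighborhood(mouse_region, human_region, focal="GULO"))
# ['GeneW', 'GeneX', 'GeneY', 'GeneZ'] -- the neighborhood is conserved even
# though the focal gene itself is missing in one of the two species.
```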

It remains to be seen how best to use these tools in a classroom context and whether having students use them influences their working understanding, and more generally their acceptance, of evolutionary mechanisms. That said, this approach enables students to explore real data and to develop plausible, predictive explanations for a range of genomic discoveries, relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and allelic variations in behavior, anatomy, and disease susceptibility.

Some footnotes (figures reinserted 2 November 2020, with minor edits)

(1) Interested in a magnetic bumper image? Visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missing from (unpredicted and unexplained by) any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (now at UC Berkeley) for bringing it to my attention.

The trivialization of science education

It’s time for universities to accept their role in scientific illiteracy.  

There is a growing problem with scientific illiteracy and its close relative, scientific over-confidence. While understanding science, by which most people seem to mean technological skills, or even the ability to program a device (1), is purported to be a critical competitive factor in our society, we see a parallel explosion of pseudo-scientific beliefs, often religiously held. Advocates of a gluten-free paleo-diet battle it out with orthodox vegans for a position on the Mount Rushmore of self-righteousness, while astronomers and astrophysicists rebrand themselves as astrobiologists (a currently imaginary discipline) and a subset of theoretical physicists, and the occasional evolutionary biologist, claim to have rendered ethicists and philosophers obsolete (oh, if it were only so). There are many reasons for this situation, most of which are probably innate to the human condition. Our roots are in the vitamin C-requiring Haplorhini (dry-nose) primate family; we did not evolve to think scientifically, and scientific thinking does not come easily to most of us, or to any of us over long periods of time (2). The fact that the sciences are referred to as disciplines reflects this: it takes constant vigilance, self-reflection, and the critical skepticism of knowledgeable colleagues to build coherent, predictive, and empirically validated models of the Universe (and ourselves). In point of fact, it is amazing that our models of the Universe have become so accurate, particularly as they are counter-intuitive and often seem incredible, in the original sense of the word.

Many social institutions claim to be in the business of developing and supporting scientific literacy and disciplinary expertise, most obviously colleges and universities. Unfortunately, there are several reasons to question the general efficacy of their efforts, and several factors that have contributed to this failure. There is the general tendency (although exactly how widespread is unclear; I cannot find appropriate statistics on this question) to require non-science students to take one, two, or more “natural science” courses, often with associated laboratory sections, as a way to “enhance literacy and knowledge of one or more scientific disciplines, and enhance those reasoning and observing skills that are necessary to evaluate issues with scientific content” (source).

That such a requirement will “enable students to understand the current state of knowledge in at least one scientific discipline, with specific reference to important past discoveries and the directions of current development; to gain experience in scientific observation and measurement, in organizing and quantifying results, in drawing conclusions from data, and in understanding the uncertainties and limitations of the results; and to acquire sufficient general scientific vocabulary and methodology to find additional information about scientific issues, to evaluate it critically, and to make informed decisions” (source) suggests a rather serious level of faculty/institutional disdain or apathy for observable learning outcomes, devotional levels of wishful thinking, or simple hubris. To my knowledge there is no objective evidence to support the premise that such requirements achieve these outcomes – which renders the benefits of such requirements problematic, to say the least (link).

On the other hand, such requirements have clear and measurable costs that go beyond the simple burden of added, and potentially ineffective or off-putting, course credit hours. The frequent requirement for multi-hour laboratory courses limits students’ ability to schedule other courses. It would be an interesting study to examine how, independently of any benefit, such laboratory course requirements affect students’ retention and time to degree – bluntly put, the costs to students and their families.

Now, if there were objective evidence that taking such courses improved students’ understanding of a specific disciplinary science and its application, perhaps the benefit would warrant the cost. But one can be forgiven for suspecting a less charitable driver, that is, science departments’ self-interest in using laboratory and other non-major course requirements as a means to support graduate students. Clearly there is a need for objective metrics of scientific, that is disciplinary, literacy and learning outcomes.

And this brings up another cause for concern. Recently, there has been a movement within the science education research community to attempt to quantify learning in terms of what are known as “forced choice testing instruments,” that is, tests that rely on true/false and multiple-choice questions – an actively anti-Socratic strategy. In some cases, these tests claim to be research-based. As one involved in the development of such a testing instrument (the Biology Concepts Instrument, or BCI), it is clear to me that such tests can serve a useful role in helping to identify areas in which student understanding is weak or confused [example]; but whether they can provide an accurate or, at the end of the day, meaningful measure of whether students have developed a working understanding of complex concepts and the broader meaning of observations is problematic at best.

Establishing such a level of understanding relies on Socratic, that is, dynamic and adaptive, evaluations: can the learner clearly explain, to other experts or to other students, the source and implications of their assumptions? This is the gold standard for monitoring disciplinary understanding, and it is being increasingly sidelined by those who rely on forced-choice tests to evaluate learning outcomes and to support their favorite pedagogical strategies (examples available upon request). In point of fact, it is often difficult to discern, in most science education research studies, what students have come to master: what exactly they know, what they can explain, and what they can do with their knowledge. Rather unfortunately, this is not a problem restricted to non-majors taking science course requirements; majors can also graduate with a fragmented and partially, or totally, incoherent understanding of key ideas and their empirical foundations.

So what are the common features of a functional understanding of a particular scientific discipline or, more accurately, a sub-discipline? A few ideas seem relevant. A proficient practitioner needs to be realistic about their own understanding. We need to teach disciplinary (and general) humility – no one actually understands all aspects of most scientific processes. This is a point made by Fernbach & Sloman in their recent essay, “Why We Believe Obvious Untruths.” Humility about our understanding has a number of benefits. It helps keep us skeptical when faced with, and asked to accept, sweeping generalizations.

Such skepticism is part of a broader perspective, common among working scientists, namely the ability to distinguish the obvious from the unlikely, the implausible, and the impossible. When considering a scientific claim, the first criterion is whether there is a plausible mechanism that can be called upon to explain it, or whether it violates some well-established “law of nature.” Claims of “zero waste” processes, for example, butt up against the laws of thermodynamics.

Going further, we need to consider how an observation or conclusion fits with other well-established principles, which means that we have to be aware of those principles, while acknowledging that we are not universal experts in all aspects of science. A molecular biologist may recognize that quantum mechanics dictates the geometries of atomic bonding interactions without being able to formally describe the intricacies of a molecule’s wave equation. Similarly, a physicist might think twice before ignoring the evolutionary history of a species and claiming that quantum mechanics explains consciousness, or that consciousness is a universal property of matter. Such a level of disciplinary expertise can take extended experience to establish, but it is critical to conveying to students what disciplinary mastery involves; it is the major justification for having disciplinary practitioners (professors) as instructors.

From a more prosaic educational perspective, other key factors need to be acknowledged, namely a realistic appreciation of what people can learn in the time available to them and an understanding of at least some of their underlying motivations. That is to say, the relevance of a particular course to disciplinary goals or desired educational outcomes needs to be made explicit and as engaging as possible, or at least not overtly off-putting – something that can happen when a poor, unsuspecting molecular biology major takes a course in macroscopic physics taught by an instructor who believes organisms are deducible from first principles based on the conditions of the big bang. Respecting the learner requires that we explicitly acknowledge that an unbridled thirst for an empirical, self-critical mastery of a discipline is not a basic human trait, although it can be cultivated and may emerge given proper care. Understanding the real constraints that act on meaningful learning can help focus courses on what is foundational, and help eliminate the irrelevant or the excessively esoteric.

Unintended consequences arise from “pretending” to teach students, both majors and non-science majors, science. One is an erosion of humility in the face of the complexity of science and our own limited understanding, a point made in a recent National Academy report that linked superficial knowledge with more non-scientific attitudes. The end result is an enhancement of what is known as the Dunning-Kruger effect, the tendency of people to seriously over-estimate their own expertise: “the effect describes the way people who are the least competent at a task often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill.”

A person with a severe case of Dunning-Kruger-itis is likely to lose respect for people who actually know what they are talking about. The importance of true expertise is further eroded and trivialized by the current trend of having photogenic and well-spoken experts in one domain pretend to talk, or rather to pontificate, authoritatively about another (3). In a world of complex and arcane scientific disciplines, the role of a science guy or gal can promote rather than dispel scientific illiteracy.

We see the effects of the lack of scientific humility when people speak outside of their domain of established expertise to make claims of certainty, a common feature of the conspiracy theorist. An oft-used example is the claim that vaccines cause autism (they don’t), when the actual causes of autism, whether genetic and/or environmental, are currently unknown and the subject of active scientific study. An honest expert can, in all humility, identify the limits of current knowledge as well as what is known for certain. Unfortunately, revealing and ameliorating someone’s Dunning-Kruger-itis requires a civil and constructive Socratic interrogation, something of an endangered species in this day and age, when unseemly certainty and unwarranted posturing have replaced circumspect and critical discourse. Any useful evaluation of what someone knows demands the time and effort inherent in a Socratic discourse: the willingness to explain how one knows what one thinks one knows, together with a reflective consideration of its implications and of what other trained observers, people demonstrably proficient in the discipline, have concluded. It cannot be replaced by a multiple-choice test.

Perhaps what is needed is a new (old) model of encouraging in students, as well as in politicians and pundits, an understanding of where science comes from, the habits of mind involved, and the limits of, and constraints on, our current understanding. At the college level, courses that replace superficial familiarity and unwarranted certainty with humble self-reflection and intellectual modesty might help treat the symptoms of Dunning-Kruger-itis, even though the underlying disease may be incurable, and perhaps genetically linked to other aspects of human neuronal processing.


some footnotes:

  1. After all, why are rather distinct disciplines lumped together as STEM (science, technology, engineering, and mathematics)?
  2. Given the long history of Homo sapiens before the appearance of science, it seems likely that such patterns of thinking are an unintended consequence of selection for some other trait, and of the subsequent emergence of a (perhaps excessively) complex and self-reflective nervous system.
  3. Another example of Neil Postman’s premise that education is being replaced by edutainment (see “Amusing Ourselves to Death”).

Go ahead and “teach the controversy”: it is the best way to defend science.

as long as teachers understand the science and its historical context

The role of science in modern societies is complex. Science-based observations and innovations drive a range of economically important, as well as socially disruptive, technologies. Many opinion polls indicate that the American public “supports” science while at the same time rejecting rigorously established scientific conclusions on topics ranging from the safety of genetically modified organisms and the purported role of vaccines in causing autism to the effects of burning fossil fuels on the global environment [Pew: Views on science and society]. Given that a foundational principle of science is that the natural world can be explained without calling on supernatural actors, it remains surprising that a substantial majority of people appear to believe that supernatural entities are involved in human evolution [as reported by the Gallup organization]; although the theistic percentage has been dropping (a little) of late.

[Figure: God + evolution (Gallup poll results)]

This situation highlights the fact that when science intrudes on the personal or the philosophical (within which I include the theological and the ideological), many people are willing to abandon the discipline of science and embrace explanations based on personal beliefs – for example, that a supernatural entity cares for people, at least enough to create them, and that there are reasons behind life’s tragedies.

 

Where science appears to conflict with various non-scientific positions, the public has pushed back and rejected the science. This is perhaps best represented by the recent spate of “teach the controversy” legislative efforts, primarily centered on evolutionary theory and the realities of anthropogenic climate change [see Nature: Revamped ‘anti-science’ education bills]. We might expect to see, on more politically correct campuses, similar calls for anti-GMO, anti-vaccination, or gender-based curricula. In the face of the disconnect between scientific and non-scientific (philosophical, ideological, theological) personal views, I would suggest that an important part of the problem has didaskalogenic roots; that is, it arises from the way science is taught – all too often students are expected to memorize terms and master various heuristics (tricks) to answer questions rather than to develop a self-critical understanding of ideas – their origins, supporting evidence, and limitations – and practice in applying them.

Science is a social activity, based on a set of accepted core assumptions; it is not so much concerned with Truth, which could, in fact, be beyond our comprehension, but rather with developing a universal working knowledge, composed of ideas based on empirical observations that expand in their explanatory power over time and allow us to predict and manipulate various phenomena. Science is a product of society rather than of isolated individuals, but only rarely is the interaction between the scientific enterprise and its social context articulated clearly enough that students and the general public can develop an understanding of how the two interact. As an example, how many people appreciate the larger implications of the transition from an Earth-centered to a Sun- or galaxy-centered cosmology? All too often students are taught about this transition without regard to its empirical drivers and its philosophical and sociological implications, as if the opponents at the time were benighted religious dummies. Yet how many students or their teachers appreciate that, as originally presented, the Copernican Sun-centered system had more hypothetical epicycles and related Rube Goldberg-esque kludges, introduced to make the model accurate, than the competing Ptolemaic Earth-centered system? Do students understand how Kepler’s recognition of elliptical orbits eliminated the need for such artifices and set the stage for Newtonian physics? And how did the expulsion of humanity from the center to the periphery of things influence people’s views on humanity’s role and importance?

 

So how can education adapt to better help students and the general public develop a more realistic understanding of how science works? To my mind, teaching the controversy is a particularly attractive strategy, on the assumption that teachers have a strong grounding in the discipline they are teaching, something that many science degree programs do not achieve, as discussed below. For example, a common attack on evolutionary mechanisms relies on a failure to grasp the power of variation, arising from stochastic processes (mutation), coupled to the power of natural, social, and sexual selection. There is clear evidence that people find stochastic processes difficult to understand and accept [see Garvin-Doxas & Klymkowsky & Fooled by Randomness]. An instructor who is not aware of the educational challenges associated with grasping stochastic processes, including those central to evolutionary change, risks stumbling over the same hurdles that led pre-molecular biologists to reject natural selection and turn to more “directed” processes, such as orthogenesis [see Bowler: The eclipse of Darwinism & Wikipedia]. Presumably students are even more vulnerable to intelligent design / creationist arguments centered on probabilities.
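As a concrete illustration of why probability-based arguments against selection miss the mark, here is a minimal sketch (my own, in the spirit of Dawkins’ “weasel” demonstration, not part of this post): random variation alone essentially never produces a 23-character target phrase, yet random variation filtered by selection reaches it in a few hundred generations. The target phrase, mutation rate, and population size are arbitrary, illustrative choices.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS A WEASEL"     # arbitrary 23-character target

def matches(s):
    """Number of positions at which s agrees with the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, mut_rate=0.05, seed=0):
    """Cumulative selection: each generation, make pop_size mutated copies of
    the current best string and keep whichever string matches the target best."""
    random.seed(seed)
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while best != TARGET:
        generation += 1
        offspring = [
            "".join(random.choice(ALPHABET) if random.random() < mut_rate else c
                    for c in best)
            for _ in range(pop_size)
        ]
        best = max(offspring + [best], key=matches)   # selection step
    return generation

if __name__ == "__main__":
    print("generations needed with selection:", evolve())
    # Pure chance would need on the order of 27**23 (~10**33) random strings.
```

The point is not that evolution has a target (it does not), only that selection acting on random variation accumulates small improvements, so “the odds of assembling the whole thing at once” is the wrong calculation.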

The fact that single-cell measurements enable us to visualize biologically meaningful stochastic processes makes it easier to design course materials that explicitly introduce such processes [Biology education in the light of single cell/molecule studies]. An interesting example is the recent work visualizing the evolution of antibiotic resistance macroscopically [see The evolution of bacteria on a “mega-plate” petri dish].

[Figure: bacterial evolution of antibiotic resistance on a mega-plate]

To be in a position to “teach the controversy” effectively, it is critical that students understand how science works, specifically its progressive nature, exemplified by the process of generating, testing, and, where necessary, rejecting or revising clearly formulated and predictive hypotheses – a process antithetical to a Creationist (religious) perspective [a good overview is provided here: Using creationism to teach critical thinking]. At the same time, teachers need a working understanding of the disciplinary foundations of their subject, its core observations, and their implications. Unfortunately, many are called upon to teach subjects with which they have only a passing familiarity. Moreover, even majors in a subject may emerge with a weak understanding of foundational concepts and their origins – they may be uncomfortable teaching what they have learned. While there is an implicit assumption that a college curriculum is well designed and effective, there is often little objective evidence that this is the case. While many dedicated teachers (particularly those I have met through the CU Teach program) work diligently to address these issues on their own, it is clear that many have not been exposed to a critical examination of the empirical observations and experimental results upon which their discipline is based [see Biology teachers often dismiss evolution & Teachers’ Knowledge Structure, Acceptance & Teaching of Evolution]. Many a molecular biology department does not require formal coursework in basic evolutionary mechanisms, much less a thorough consideration of natural, social, and sexual selection and of non-adaptive mechanisms, such as those associated with founder effects, population bottlenecks, and genetic drift – stochastic processes that play a key role in the evolution of many species, including humankind. Similarly, more ecologically and physiologically oriented majors are often “afraid” of the molecular foundations of evolutionary processes. As part of an introductory chemistry curriculum redesign project (CLUE), Melanie Cooper and her group at Michigan State University have found that students in conventional courses often fail to grasp key concepts, and that subsequent courses can fail to remediate the didaskalogenic damage done in earlier courses [see: an Achilles Heel in Chemistry Education].

The importance of a historical perspective: The power of scientific explanations is obvious, but those explanations can become abstract when their historical roots are forgotten, or were never articulated. A clear example: the value of vaccination is obvious in the presence of deadly and disfiguring diseases; in their absence, due primarily to the effectiveness of widespread vaccination, the value of vaccination can be called into question, resulting in the avoidable re-emergence of these diseases. In this context, it is important that students understand the dynamics and molecular complexity of biological systems, so that they can explain why all drugs and treatments have potential side-effects, and how each individual’s genetic background influences those side-effects (although in the case of vaccination, such side-effects do not include autism).

Often “controversy” arises when scientific explanations have broader social, political, or philosophical implications. Religious objections to evolutionary theory arise primarily, I believe, from the implication that we (humans) are not the result of a plan, created or evolved, but rather accidents of mindless, meaningless, and often gratuitously cruel processes. The idea that our species emerged rather recently (that is, a few million years ago) on a minor planet on the edge of an average galaxy, in a universe that popped into existence for no particular reason or purpose ~14 billion years ago, can have disconcerting implications [link]. Moreover, recognizing that a “small” change in the trajectory of an asteroid could have changed whether humanity ever evolved [see: Dinosaur asteroid hit ‘worst possible place’] can be sobering and may well undermine one’s belief in the significance of human existence. How does it impact our social fabric if we are an accident, rather than the intention of a supernatural being or the inevitable product of natural processes?

Yet, as a person who firmly believes in the French motto of liberté, égalité, fraternité, and in laïcité, I feel fairly certain that no science-based scenario for the origin and evolution of the universe or of life, or for the implications of sexual dimorphism or “racial” differences, etc., can challenge the importance of our duty to treat others with respect, to defend their freedoms, and to ensure their equality before the law. Which is not to say that conflicts do not inevitably arise between different belief systems – in my own view, patriarchal oppression needs to be called out and actively opposed wherever it occurs, whether in Saudi Arabia or on college campuses (e.g., UC Berkeley or Harvard).

This is not to say that presenting the conflicts between scientific explanations of phenomena and non-scientific, but more important, beliefs, such as equality under the law, is easy. When considering a number of natural cruelties, Charles Darwin wrote that evolutionary theory would claim that these are “as small consequences of one general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die” – note the absence of any reference to morality, or even of sympathy for the “weakest.” In fact, Darwin would have argued that the apparent, and overt, cruelty rampant in the “natural” world is evidence that God was forced by the laws of nature to create the world the way it is – presumably a world that is absurdly old and excessively vast. Such arguments echo the view that God had no choice other than whether to create or not; that for all its flaws, evils, and unnecessary suffering this is, as posited by Gottfried Leibniz (1646-1716) and satirized by Voltaire (1694-1778) in his novel Candide, the best of all possible worlds. Yet, as members of a reasonably liberal, and periodically enlightened, society, we see it as our responsibility to ameliorate such evils, to care for the weak, the sick, and the damaged, and to improve human existence; to address prejudice and political manipulation [thank you Supreme Court for ruling against race-based redistricting]. Whether anchored by philosophical or religious roots, many of us are driven to reject a scientific (biological) quietism (“a theology and practice of inner prayer that emphasizes a state of extreme passivity”) by actively manipulating our social, political, and physical environment and striving to improve the human condition, in part through science and the technologies it makes possible.

[Figure: Darwin to Gray]

At the same time, introducing social-scientific interactions can be fraught with potential controversies, particularly in our excessively politicized and self-righteous society. In my own introductory biology class (biofundamentals), we consider potentially contentious issues, including sexual dimorphism, sexual selection, and social evolutionary processes and their implications. As an example, social systems (and we are social animals) are susceptible to social cheating, and groups develop defenses against cheaters; how such biological ideas interact with historical, political, and ideological perspectives is complex, and certainly beyond the scope of an introductory biology course, but worth acknowledging [PLoS blog link].

[Image: Yeats quote]

In a similar manner, we understand the brain as an evolved cellular system influenced by various experiences, including those that occur during development and subsequent maturation. Family life interacts with genetic factors in complex, and often unpredictable, ways to shape behaviors. But it seems unlikely that a free and enlightened society can function if it takes seriously the premise that we lack free will and so cannot be held responsible for our actions, an idea of some current popularity [see Free will could all be an illusion]. Given the complexity of biological systems, I for one am willing to embrace the idea of constrained free will, no matter what scientific speculations are currently in vogue [but see a recent video by Sabine Hossenfelder, You don’t have free will, but don’t worry, which has me rethinking].

Recognizing the complexities of biological systems, including the brain, with their various adaptive responses and feedback systems can be challenging. In this light, I am reminded of the contrast between the Doomsday scenario of Paul Ehrlich’s The Population Bomb, and the data-based view of the late Hans Rosling in Don’t Panic – The Facts About Population.

All of which is to say that we need to see science not as authoritarian, telling us who we are or what we should do, but as a tool for doing what we think is best and for understanding why it might be difficult to achieve. We need to recognize how scientific observations inform and may constrain, but do not dictate, our decisions. We need to embrace the tentative, but strict, nature of the scientific enterprise which, while it cannot arrive at “Truth,” can certainly identify nonsense.

Minor edits and updates, and the addition of figures that had gone missing – 20 October 2020

In an age of rampant narcissism and social cheating – the importance of teaching social evolutionary mechanisms.

As socioeconomic inequality grows, the publicly acknowledged importance of traits such as honesty, loyalty, self-sacrifice, and reciprocity appears to have fallen out of favor with some of our socio-economic and political elites. How many people condemn a person as dishonest one day and embrace them the next? Dishonesty and selfishness no longer appear to be taboo, or a source of shame that needs to be expiated (perhaps my Roman Catholic upbringing is bubbling to the surface here). A disavowal of shame and guilt, and the lack of serious social censure, appear to be on the rise, particularly among the excessively wealthy and privileged, as if the society from which they extracted their wealth and fame does not deserve their active participation and support [link: Hutton, 2009]. They have embraced a “winner takes all” strategy.

[Image: Hutton quote]

[Image: birds in a flock]

If an understanding of evolutionary mechanisms is weak within the general population [link], the situation is likely to be much worse when it comes to understanding the roles and outcomes of social evolutionary mechanisms. Yet the evolutionary origins of social systems, and the mechanisms by which such systems are maintained against the effects of what are known as “social cheaters,” are critical to understanding, and defending, human social behaviors such as honesty, cooperation, loyalty, self-sacrifice, self-restraint, mutual respect, responsibility, and kindness.

While evolutionary processes are often caricatured as favoring selfish behaviors, the facts tell a more complex, organism-specific story [link: Aktipis 2016]. Cooperation between organisms underlies a wide range of behaviors, from sexual reproduction and the formation of multicellular organisms (animals, plants, and people) to social systems, ranging from microbial films to bee colonies and construction companies [see Bourke, 2011: Principles of Social Evolution] [Wikipedia link].

One of the best studied social systems involves the cellular slime mold Dictyostelium discoideum [Wikipedia link]. When life is good, that is, when the world is moist and bacteria, the food of these organisms, are plentiful, D. discoideum live and reproduce happily as single-celled, amoeba-like individuals in the soil. Given their small size (~5 μm diameter), they cannot travel far, but that does not matter as long as their environment is hospitable. When the environment turns hostile, however, an important survival strategy is to migrate to a new location – but what is a little guy to do? The answer, in this species, is to cooperate. Individual amoebae begin to secrete a chemical that attracts others; eventually thousands of individuals aggregate to form a multicellular “slug”; slugs migrate around to find a hospitable place and then differentiate into a fruiting body that stands ~1 mm (about 200 times the size of an individual amoeba) above the ground. To form the stalk that lifts the “fruiting body” into the air, a subset of cells (once independent individuals) change shape. These stalk cells die, while the rest of the cells form the fruiting body, which consists of spores – cells specialized to survive dehydration. Spores are released into the air, where they float and are dispersed over a wide area. Those spores that land in a happy place (moist and verdant) revert to the amoeboid life style, eat, grow, divide, and generate a new (clonal) population of amoeboid cells: they have escaped from a hostile environment to inhabit a new world, a migration made possible by the sacrifice of the cells that became the stalk (and died in the process). Similar types of behavior occur in a wide range of macroscopic organisms [Scrambling to the top: link]. Normally, which individuals become stalk cells and which become spores is a stochastic process [see previous PLoS blog post on stochastics and biology education].

[Figure: Dictyostelium discoideum life cycle (Wikipedia)]

Cheaters in the slime mold system are individuals that take part in the aggregation process (they respond to the aggregation signal and become part of the slug) but have altered their behavior so as to avoid becoming stalk cells – no self-sacrifice for them; instead they become spores. In the short run, such a strategy can benefit the individual; after all, it has a better chance of survival if it can escape a hostile environment. But imagine a population made up only of cheaters – no self-sacrifice, no stalk, no survival advantage = death [see link: Strassmann & Queller, 2009].
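A toy calculation (my own illustration; the group size and stalk fraction are arbitrary assumptions, not taken from the post or from Strassmann & Queller) makes the trade-off explicit: in a mixed aggregate a cheater ends up over-represented among the spores, but an aggregate of nothing but cheaters builds no stalk and leaves no spores at all.

```python
def fruiting_body(group, stalk_fraction=0.2):
    """Spores produced by one aggregate of 'C' (cooperator) and 'D' (cheater) cells.
    A fixed fraction of the aggregate must become stalk and die; only cooperators
    will do so.  Too few cooperators -> no stalk -> no dispersal -> no spores."""
    needed_stalk = max(1, round(stalk_fraction * len(group)))
    if group.count("C") < needed_stalk:
        return []
    spores = list(group)
    for _ in range(needed_stalk):      # stalk cells sacrifice themselves
        spores.remove("C")
    return spores

mixed = ["C"] * 9 + ["D"]              # one cheater among nine cooperators
all_cheaters = ["D"] * 10

print(fruiting_body(mixed))            # 7 x 'C' + 1 x 'D': cheater rises from 1/10 of cells to 1/8 of spores
print(fruiting_body(all_cheaters))     # []: nobody escapes the hostile environment
```

The short-term gain (a higher share of the spores) and the long-term collapse (no spores at all once cooperators are gone) are exactly the dynamic described above.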

A classic example of social cheating, with immediate relevance to the human situation, is cancer. Within a sexually reproducing multicellular organism, reproduction is strictly restricted to the cells of the germ line – eggs and sperm. The other cells of the organism, known collectively as somatic cells, have ceded their reproductive rights to the organism as a whole. While somatic cells can divide, they divide in a controlled and strictly regulated (unselfish) way. Somatic cells do not survive the death of the organism – only germ line cells (sperm and eggs) can produce a new organism. In the end, cellular cooperation has been a productive strategy, as witnessed by the number of different types of multicellular organisms, including humans. If a somatic cell breaks the social contract and cheats, that is, begins to divide (asexually) in an independent manner, it can lead to the formation of a tumor and later, if the cells of the tumor start to migrate within the organism, to metastatic cancer. More rarely (apparently), such cells can migrate between organisms, as in the case of transmissible cancers in dogs, Tasmanian devils, and clams [see links: Murchison 2009 and Ujvari et al 2016]. The growth and evolution of the tumor leads to the death of the organism and to the cancer cells’ own extinction – another example of the myopic nature of evolutionary processes.

In the case of cancer, the organism’s defenses against social cheaters come in two forms: intrinsic defenses within the individual cheater cells, in the form of cell suicide (known by a number of technical terms, including apoptosis, anoikis, and necroptosis) [link: Su et al., 2015], and extrinsic, organismic processes, such as the ability of the immune system to identify and kill cancer cells – a phenomenon with therapeutically relevant implications [link: Ledford, 2014]. We can think of these two processes as guilt + shame (leading to cellular suicide) and policing + punishment (leading to immune-system killing). For a cell to escape growth control and evolve to produce metastatic disease, it needs to inactivate or ignore the intrinsic cell-death systems and to evade the immune system.

To consider another example, social systems are based on cooperation, often involving the sharing of resources with those in need. A recent example is the sharing of food (blood) between vampire bats [see link: Carter & Wilkinson, 2013]. The rules, as noted by Aktipis, are simple: 1) ask only when in need, and 2) give when asked and able. In this context, we can identify two types of social cheaters – those who ask when they are not in need and those who fail to give when asked and able. People who refuse to work even when they can, and when jobs are available, fall into the first group; the rich who avoid taxes and fail to donate significant funds to charity fall into the second. It is an interesting question how to characterize those who borrow money and fail to repay it. Bankruptcy laws that protect the wealth of the borrower while imposing losses on the lender might be seen as undermining the social contract (clearly philosophers’ and economists’ comments would be relevant here).

Given that social systems at all levels are based on potentially costly traits, such as honesty, loyalty, self-sacrifice, and reciprocity, the evolutionary origins of social systems must lie in their ability to increase reproductive success, either directly or through effects on relatives, a phenomenon known as inclusive fitness [Wikipedia link]. Evolutionary processes also render social systems vulnerable to cheating, and so have driven the development of a range of defenses against various forms of social cheaters (see above). But recent political and cultural events appear to be eroding and/or ignoring society’s defenses.

So what to do? Revolution? From a PLoS Science education perspective, one strategy suggests itself: to encourage (require) that students and the broader public receive effective instruction on social evolutionary mechanisms, the traits they can generate (various forms of altruism and cooperation), the reality and pernicious effects of social cheaters, and the importance of defenses against them. In this light, it appears that social evolutionary processes are missing from the Next Generation Science Standards [NGSS link]. Understanding the biology, together with effective courses in civics [see link: Teaching Civics in the Year of The Donald], might serve to bolster the defense of civil society.

December 22, 2016, minor update 23 October 2020 – Mike Klymkowsky

Featured image is used with permission from Matthew Lutz (Princeton University).

Army ants’ ‘living’ bridges span collective intelligence, ‘swarm’ robotics (PNAS)