Is it time to start worrying about conscious human “mini-brains”?

A human iPSC-derived cerebral organoid in which pigmented retinal epithelial cells can be seen (from the work of McClure-Begley et al.). See also “Can lab-grown brains become conscious?” by Sara Reardon, Nature 2020.

The fact that experiments on people are severely constrained is a major obstacle to understanding human development and disease. Some of these constraints are moral and ethical, and are clearly appropriate and necessary given the depressing history of medical atrocities. Others are technical, associated with the slow pace of human development. The combination of moral and technical factors has driven experimental biologists to explore the behavior of a wide range of “model systems”, from bacteria, yeasts, fruit flies, and worms to fish, frogs, birds, rodents, and primates. Justified by the deep evolutionary continuity between these organisms (after all, all organisms appear to be descended from a single common ancestor and share many molecular features), evolution-based studies of model systems have led to many therapeutically valuable insights into human biology – something that I suspect a devotee of intelligent design creationism would be hard pressed to predict or explain (post link).

While humans are closely related to other mammals, there are obvious and important differences – after all, people are instantly distinguishable from members of other closely related species and certainly look and behave differently from mice. For example, the surface layer of our brain is extensively folded (it is known as gyrencephalic), while the brain of a mouse is as smooth as a baby’s bottom (and referred to as lissencephalic). In humans, the failure of the cerebral cortex to fold is known as lissencephaly, a disorder associated with severe neurological defects. With the advent of more and more genomic sequence data, we can identify human-specific molecular (genomic) differences. Many of these sequence differences occur in regions of our DNA that regulate when and where specific genes are expressed. Sholtis & Noonan (1) provide an example: the HACNS1 locus contains an 81-basepair region that is highly conserved in vertebrates from birds to chimpanzees; there are 13 human-specific changes in this sequence that appear to alter its activity, leading to human-specific changes in the expression of nearby genes (↓). At this point ~1000 genetic elements that differ between humans and other vertebrates have been identified, and more are likely to emerge (2). Such human-specific changes can make modeling human-specific behaviors, at the cellular, tissue, organ, and organism level, in non-human model systems difficult (3, 4). It is for this reason that scientists have attempted to generate better human-specific experimental systems.
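The underlying comparison is simple enough to sketch in code: align a conserved element across species and count the positions where one lineage differs. The sequences below are invented stand-ins for illustration, not the actual HACNS1 alignment.

```python
# Sketch: counting lineage-specific substitutions in an aligned conserved
# element. Real analyses (e.g. of HACNS1) use whole-genome alignments
# across many species; the toy sequences here are purely illustrative.

def substitutions(seq_a: str, seq_b: str) -> int:
    """Count positions at which two equal-length aligned sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned (same length)"
    return sum(a != b for a, b in zip(seq_a, seq_b))

chimp = "ACGTACGTACGT"   # stands in for the conserved ancestral sequence
human = "ACGAACGTTCGT"   # carries two invented "human-specific" changes

print(substitutions(chimp, human))  # → 2
```

Applied to a real alignment, the same logic (differences present in humans but absent from all other species) is how human-specific changes within otherwise conserved elements are flagged.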

human sequence divergence

One particularly promising approach is based on what are known as embryonic stem cells (ESCs) or pluripotent stem cells (PSCs). Human embryonic stem cells are generated from the inner cell mass of a human embryo and so involve the destruction of that embryo – which raises a number of ethical and religious concerns as to when “life begins” (5). Human pluripotent stem cells can be isolated from adult tissues, but in most cases this requires invasive harvesting methods that limit their usefulness. Both ESCs and PSCs can be grown in the laboratory and can be induced to differentiate into what are known as gastruloids. Such gastruloids can develop anterior-posterior (head-tail), dorsal-ventral (back-belly), and left-right axes analogous to those found in embryos (6) and adults (top panel ↓). In the case of PSCs, the gastruloid (bottom panel ↓) is essentially a twin of the organism from which the PSCs were derived, a situation that raises difficult questions: is it a distinct individual? Is it the property of the donor or the creation of a technician? The situation will be further complicated if (or rather, when) it becomes possible to generate viable embryos from such gastruloids.

Axes

gastruloid-embryo comparison

The Nobel prize-winning work of Kazutoshi Takahashi and Shinya Yamanaka (7), who devised methods to take differentiated (somatic) human cells and reprogram them into ESC/PSC-like cells, known as induced pluripotent stem cells (iPSCs) (8), represented a technical breakthrough that jump-started this field. While the original methods derived cells from tissue biopsies, it is now possible to reprogram kidney epithelial cells recovered from urine, a non-invasive approach (9, 10). Subsequently, Madeline Lancaster, Jürgen Knoblich, and colleagues devised an approach by which such cells could be induced to form what they termed “cerebral organoids” (although Yoshiki Sasai and colleagues were the first to generate neuronal organoids); they used this method to examine the developmental defects associated with microcephaly (11). The value of the approach was rapidly recognized, and studies of a number of human conditions followed, including lissencephaly (12), Zika virus infection-induced microcephaly (13), and Down syndrome (14); investigators have begun to exploit these methods to study a range of human diseases, and rapid technological progress is being made.

The production of cerebral organoids from reprogrammed human somatic cells has also attracted the attention of the media (15). While “mini-brain” is certainly a catchier name, it is a less accurate description of a cerebral organoid – itself possibly a bit of an overstatement, since it is not clear exactly how “cerebral” such organoids are. For example, the developing brain is patterned by embryonic signals that establish its asymmetries; it forms at the anterior end of the neural tube (the nascent brain and spinal cord) with distinctive anterior-posterior, dorsal-ventral, and left-right asymmetries, something that simple cerebral organoids do not display. Moreover, current methods for generating cerebral organoids involve primarily what are known as neuroectodermal cells – our nervous system (and that of other vertebrates) is a specialized form of the embryo’s surface layer that gets internalized during development. In the embryo, the developing neuroectoderm interacts with cells of the circulatory system (capillaries, veins, and arteries), formed by endothelial cells and the pericytes that surround them. These cells, together with glial cells (astrocytes, a non-neuronal cell type), combine to form the blood-brain barrier. Other glial cells (oligodendrocytes) are also present; in contrast, both types of glia (astrocytes and oligodendrocytes) are rare in the current generation of cerebral organoids. Finally, there are microglial cells, immune system cells that originate from outside the neuroectoderm; they invade and interact with neurons and glia as part of the brain’s dynamic cellular network. The left panel of the figure shows, in highly schematic form, how these cells interact (16). The right panel is a drawing of neural tissue stained by the Golgi method (17), which reveals only ~3-5% of the neurons present.
There are at least as many glial cells present, as well as microglia, none of which are visible in the image. At this point, cerebral organoids typically contain few astrocytes and oligodendrocytes, no vasculature, and no microglia. Moreover, they grow to only about 1 to 3 mm in diameter over the course of 6 to 9 months – dramatically smaller in volume than a fetal or newborn brain. While cerebral organoids can generate structures characteristic of retinal pigment epithelia (top figure) and photo-responsive neurons (18), such as those associated with the retina (itself an extension of the brain), it is not at all clear that there is any significant sensory input into the neuronal networks formed within a cerebral organoid, or any significant output, at least compared to the role that the human brain plays in controlling bodily and mental functions.
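To put that size difference in perspective, a back-of-the-envelope calculation helps; it assumes a roughly spherical organoid and a newborn brain volume of ~400 cm³, an approximate literature figure assumed here rather than a number from this post.

```python
import math

# Back-of-the-envelope scale comparison. The 1-3 mm organoid diameter is
# from the text; the ~400 cm^3 newborn brain volume is an approximate
# literature figure assumed for illustration.
organoid_diameter_mm = 3.0
organoid_volume_mm3 = (4 / 3) * math.pi * (organoid_diameter_mm / 2) ** 3

newborn_brain_volume_mm3 = 400 * 1000  # ~400 cm^3 converted to mm^3

print(round(organoid_volume_mm3, 1))  # ≈ 14.1 mm^3
print(round(newborn_brain_volume_mm3 / organoid_volume_mm3))  # tens of thousands-fold difference
```

Even taking the largest organoids, the volume difference is on the order of tens of thousands-fold, which is worth keeping in mind when the word “brain” appears in a headline.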

The reasonable question, then, is whether a cerebral organoid – a relatively simple system of cells, though complex in its own right – is conscious. The question becomes more pressing as increasingly complex systems are developed, and such work is proceeding apace. Already researchers are manipulating the developing organoid’s environment to facilitate axis formation, and one can anticipate the introduction of vasculature. Indeed, the generation of microglia-like cells from iPSCs has been reported; such cells can be incorporated into cerebral organoids, where they appear to respond to neuronal damage in much the same way that microglia behave in intact neural tissue (19).

We can ask ourselves: what would convince us that a cerebral organoid, living within a laboratory incubator, was conscious? How would such consciousness manifest itself – through some specific pattern of neural activity, perhaps? As a biologist, albeit one primarily interested in molecular and cellular systems, I discount the idea, proposed by some physicists and philosophers, as well as the more mystical, that consciousness is a universal property of matter (20, 21). I take consciousness to be an emergent property of complex neural systems, generated by evolutionary mechanisms, built during embryonic and subsequent development, and influenced by social interactions (BLOG LINK), using information encoded within the human genome (something similar to this: A New Theory Explains How Consciousness Evolved). While a future concern, in a world full of more immediate and pressing issues, it will be interesting to follow the academic, social, and political debate on what to do with mini-brains as they grow in complexity and, perhaps inevitably, toward consciousness.

Footnotes and references

Thanks to Rebecca Klymkowsky, Esq. and Joshua Sanes, Ph.D. for editing and disciplinary support. Minor updates and the reintroduction of figures 22 Oct. 2020.

  1. Gene regulation and the origins of human biological uniqueness
  2.  See also Human-specific loss of regulatory DNA and the evolution of human-specific traits
  3. The mouse trap
  4. Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills
  5. The status of the human embryo in various religions
  6. Interactions between Nodal and Wnt signalling Drive Robust Symmetry Breaking and Axial Organisation in Gastruloids (Embryonic Organoids)
  7.  Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors
  8.  How iPS cells changed the world
  9.  Generation of Induced Pluripotent Stem Cells from Urine
  10. Urine-derived induced pluripotent stem cells as a modeling tool to study rare human diseases
  11. Cerebral organoids model human brain development and microcephaly.
  12. Human iPSC-Derived Cerebral Organoids Model Cellular Features of Lissencephaly and Reveal Prolonged Mitosis of Outer Radial Glia
  13. Using brain organoids to understand Zika virus-induced microcephaly
  14. Probing Down Syndrome with Mini Brains
  15. As an example, see The Beauty of “Mini Brains”
  16. Derived from Central nervous system pericytes in health and disease
  17. Golgi’s method
  18. Cell diversity and network dynamics in photosensitive human brain organoids
  19. Efficient derivation of microglia-like cells from human pluripotent stem cells
  20. The strange link between the human mind and quantum physics – BBC
  21. Can Quantum Physics Explain Consciousness?

Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically-based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of biological evolutionary mechanisms, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led to the rejection of Darwinian principles by many working scientists around the turn of the 20th century (see Bowler’s The Eclipse of Darwinism + link). Educational research studies, such as our own “Understanding randomness and its impact on student learning”, reinforce the fact that ideas involving the stochastic processes relevant to evolutionary, as well as cellular and molecular, biology are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular and cellular processes, including the origin of various diseases, particularly cancer [Cancer is partly caused by bad luck] (1).

Teach Evolution

All of which leaves the critical question, at least for educators, of how to best teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals”, on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather that they develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.

spacer bar

There are a number of educational strategies, in part depending upon one’s disciplinary perspective, on how to approach teaching evolutionary processes. Here I consider one, based on my background in cell and molecular biology.  Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.”  It is one of a number of  web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC/Gnomad Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend the range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals / coreBio course (with an evolving text available here), we originally used the observation that members of our subfamily of primates, the Haplorhini or dry-nose primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet. Without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral Haplorhini population lost the L-gulonolactone oxidase (GULO) gene (see OMIM) necessary for vitamin C synthesis. The remains of the GULO gene found in human and other Haplorhini genomes are mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not strongly selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost due to mutation, it is unlikely, although not impossible, that a subsequent mutation will repair the gene. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are ways to repair it. After the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nose) group of primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C, either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
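The effect of drift on a neutral variant in a small population is easy to illustrate with a minimal Wright-Fisher simulation, a standard population-genetics model; the population size and starting frequency below are arbitrary choices for illustration.

```python
import random

def wright_fisher(pop_size: int, freq: float, rng: random.Random) -> float:
    """Neutral Wright-Fisher drift: iterate generations until the allele
    is lost (0.0) or fixed (1.0); return the final frequency."""
    while 0.0 < freq < 1.0:
        # Each of the pop_size gene copies in the next generation is drawn
        # independently according to the current allele frequency.
        count = sum(rng.random() < freq for _ in range(pop_size))
        freq = count / pop_size
    return freq

rng = random.Random(42)
# A neutral allele (say, a still-functional GULO gene) starting at 50%:
# with no selection at all, drift alone eventually fixes or eliminates it,
# and in a small population it does so quickly.
outcomes = [wright_fisher(pop_size=50, freq=0.5, rng=rng) for _ in range(200)]
print(sum(outcomes) / len(outcomes))  # close to 0.5: lost about half the time
```

Run with larger populations, the same simulation takes far longer to reach loss or fixation, which is the sense in which small population size makes the stochastic loss of a benign variant plausible.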

At this point we can start to use Genomicus to examine the results of evolutionary processes (see a YouTube video on using Genomicus) (3). In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they were the same size (they are not); different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenetic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes) are found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).

We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, that appear to have diverged from other eukaryotes ~1,500,000,000 years ago (1,500 million years ago, abbreviated Mya). Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.

GULO phylogeny (mouse)
Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – whoa, what gives? The explanation turns out to be rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they? We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily related) chromosomes of other Haplorhini primates.

synteny: GULO region
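The logic of this synteny test can be sketched in a few lines of code: compare the genes flanking the GULO region in two genomes and see which are shared and which are missing. The gene lists below are hypothetical stand-ins for illustration, not actual annotations from Genomicus.

```python
# Sketch: testing synteny around a lost gene. The neighboring gene names
# here are invented placeholders, not real annotations; a real exercise
# would pull the gene order from Genomicus or a genome browser.

mouse_region = ["GENE_A", "GENE_B", "GULO", "GENE_C", "GENE_D"]
human_chr8_region = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]  # no functional GULO

shared = [g for g in mouse_region if g in human_chr8_region]
missing = [g for g in mouse_region if g not in human_chr8_region]

print(shared)   # → ['GENE_A', 'GENE_B', 'GENE_C', 'GENE_D']: conserved neighbors
print(missing)  # → ['GULO']: the gene lost in the Haplorhini lineage
```

Conserved flanking genes in the same order, with only GULO missing, is exactly the pattern that lets us locate the “hole” left by the lost gene and confirm the shared ancestry of the region.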

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships between the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share a similar syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region. Similarly, we can explain the genomic similarities between those primates closely related to humans, such as the gorilla, gibbon, orangutan, and chimpanzee, as well as make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils [link].

It remains to be seen how best to use these tools in a classroom context and whether having students use such tools influences their working understanding, and more generally their acceptance, of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and allelic variations in behavior, anatomy, and disease susceptibility.

spacer bar

Some footnotes (figures reinserted 2 November 2020, with minor edits)

(1) Interested in a magnetic bumper image? Visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missing (unpredicted and unexplained) by any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (now at UC Berkeley) for bringing it to my attention.

The trivialization of science education

It’s time for universities to accept their role in scientific illiteracy.  

There is a growing problem with scientific illiteracy and its close relative, scientific over-confidence. While understanding science, by which most people seem to mean technological skills, or even the ability to program a device (1), is purported to be a critical competitive factor in our society, we see a parallel explosion of pseudo-scientific beliefs, often religiously held. Advocates of a gluten-free paleo diet battle it out with orthodox vegans for a position on the Mount Rushmore of self-righteousness, while astronomers and astrophysicists rebrand themselves as astrobiologists (a currently imaginary discipline) and a subset of theoretical physicists, and the occasional evolutionary biologist, claim to have rendered ethicists and philosophers obsolete (oh, if it were only so). There are many reasons for this situation, most of which are probably innate to the human condition. Our roots are in the vitamin C-requiring Haplorhini (dry-nose) primate family; we did not evolve to think scientifically, and scientific thinking does not come easily for most of us, or for any of us over long periods of time (2). That the sciences are referred to as disciplines reflects this fact: it requires constant vigilance, self-reflection, and the critical skepticism of knowledgeable colleagues to build coherent, predictive, and empirically validated models of the Universe (and ourselves). In point of fact, it is amazing that our models of the Universe have become so accurate, particularly as they are counter-intuitive and often seem incredible, using the true meaning of the word.

Many social institutions claim to be in the business of developing and supporting scientific literacy and disciplinary expertise, most obviously colleges and universities. Unfortunately, there are several reasons to question the general efficacy of their efforts and several factors that have led to this failure. There is the general tendency (although exactly how widespread it is remains unclear; I cannot find appropriate statistics on this question) to require non-science students to take one, two, or more “natural science” courses, often with associated laboratory sections, as a way to “enhance literacy and knowledge of one or more scientific disciplines, and enhance those reasoning and observing skills that are necessary to evaluate issues with scientific content” (source).

That such a requirement will “enable students to understand the current state of knowledge in at least one scientific discipline, with specific reference to important past discoveries and the directions of current development; to gain experience in scientific observation and measurement, in organizing and quantifying results, in drawing conclusions from data, and in understanding the uncertainties and limitations of the results; and to acquire sufficient general scientific vocabulary and methodology to find additional information about scientific issues, to evaluate it critically, and to make informed decisions” (source) suggests a rather serious level of faculty/institutional disdain for, or apathy toward, observable learning outcomes, devotional levels of wishful thinking, or simple hubris. To my knowledge there is no objective evidence to support the premise that such requirements achieve these outcomes – which renders the benefits of such requirements problematic, to say the least (link).

On the other hand, such requirements have clear and measurable costs, going beyond the simple burden of added, and potentially ineffective or off-putting, course credit hours. The frequent requirement for multi-hour laboratory courses impacts students’ ability to schedule courses. It would be an interesting study to examine how, independently of benefit, such laboratory course requirements affect students’ retention and time to degree – that is, bluntly put, the costs to students and their families.

Now, if there were objective evidence that taking such courses improved students’ understanding of a specific disciplinary science and its application, perhaps the benefit would warrant the cost.  But one can be forgiven if one assumes a less charitable driver, that is, science departments’ self-interest in using laboratory and other non-major course requirements as means to support graduate students.  Clearly there is a need for objective metrics for scientific, that is disciplinary, literacy and learning outcomes.

And this brings up another cause for concern. Recently, there has been a movement within the science education research community to attempt to quantify learning in terms of what are known as “forced-choice testing instruments,” that is, tests that rely on true/false and multiple-choice questions – an actively anti-Socratic strategy. In some cases, these tests claim to be research-based. As one involved in the development of such a testing instrument (the Biology Concepts Instrument, or BCI), it is clear to me that such tests can serve a useful role in helping to identify areas in which student understanding is weak or confused [example], but whether they can provide an accurate or, at the end of the day, meaningful measure of whether students have developed an accurate working understanding of complex concepts and the broader meaning of observations is problematic at best.

Establishing such a level of understanding relies on Socratic, that is, dynamic and adaptive evaluations: can the learner clearly explain, either to other experts or to other students, the source and implications of their assumptions?  This is the gold standard for monitoring disciplinary understanding. It is being increasingly side-lined by those who rely on forced choice tests to evaluate learning outcomes and to support their favorite pedagogical strategies (examples available upon request).  In point of fact, it is often difficult to discern, in most science education research studies, what students have come to master, what exactly they know, what they can explain and what they can do with their knowledge. Rather unfortunately, this is not a problem restricted to non-majors taking science course requirements; majors can also graduate with a fragmented and partially, or totally, incoherent understanding of key ideas and their empirical foundations.

So what are the common features of a functional understanding of a particular scientific discipline or, more accurately, a sub-discipline? A few ideas seem relevant. A proficient practitioner needs to be realistic about their own understanding. We need to teach disciplinary (and general) humility – no one actually understands all aspects of most scientific processes. This is a point made by Fernbach & Sloman in their recent essay, “Why We Believe Obvious Untruths.” Humility about our understanding has a number of beneficial aspects. It helps keep us skeptical when faced with, and asked to accept, sweeping generalizations.

Such skepticism is part of a broader perspective, common among working scientists, namely the ability to distinguish the obvious from the unlikely, the implausible, and the impossible. When considering a scientific claim, the first criterion is whether there is a plausible mechanism that can be called upon to explain it, or does it violate some well-established “law of nature”. Claims of “zero waste” processes butt up against the laws of thermodynamics.

Going further, we need to consider how an observation or conclusion fits with other well-established principles, which means that we have to be aware of those principles, while acknowledging that we are not universal experts in all aspects of science. A molecular biologist may recognize that quantum mechanics dictates the geometries of atomic bonding interactions without being able to formally describe the intricacies of a molecule’s wave equation. Similarly, a physicist might think twice before ignoring the evolutionary history of a species and claiming that quantum mechanics explains consciousness, or that consciousness is a universal property of matter. Such a level of disciplinary expertise can take extended experience to establish, but it is critical to conveying what disciplinary mastery involves to students; it is the major justification for having disciplinary practitioners (professors) as instructors.

From a more prosaic educational perspective, other key factors need to be acknowledged, namely a realistic appreciation of what people can learn in the time available to them and at least some understanding of their underlying motivations. Which is to say, the relevance of a particular course to disciplinary goals or desired educational outcomes needs to be made explicit and as engaging as possible, or at least not overtly off-putting – something that can happen when a poor unsuspecting molecular biology major takes a course in macroscopic physics taught by an instructor who believes organisms are deducible from first principles based on the conditions of the big bang. Respecting the learner requires that we explicitly acknowledge that an unbridled thirst for an empirical, self-critical mastery of a discipline is not a basic human trait, although it is something that can be cultivated and may emerge given proper care. Understanding the real constraints that act on meaningful learning can help focus courses on what is foundational and help eliminate the irrelevant or the excessively esoteric.

Unintended consequences arise from “pretending” to teach students, both majors and non-science majors, science. One is an erosion of humility in the face of the complexity of science and our own limited understanding, a point made in a recent National Academy report that linked superficial knowledge with more non-scientific attitudes. The end result is an enhancement of what is known as the Dunning-Kruger effect, the tendency of people to seriously over-estimate their own expertise: “the effect describes the way people who are the least competent at a task often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill”.

A person with a severe case of Kruger-Dunning-itis is likely to lose respect for people who actually know what they are talking about. The importance of true expertise is further eroded and trivialized by the current trend of having photogenic and well-spoken experts in one domain pretend to talk, or rather pontificate, authoritatively about another (3).  In a world of complex and arcane scientific disciplines, the role of a science guy or gal can promote rather than dispel scientific illiteracy.

We see the effects of the lack of scientific humility when people speak outside of their domain of established expertise to make claims of certainty, a common feature of the conspiracy theorist.  An oft-used example is the claim that vaccines cause autism (they don’t), when the actual causes of autism, whether genetic and/or environmental, are currently unknown and the subject of active scientific study.  An honest expert can, in all humility, identify the limits of current knowledge as well as what is known for certain.  Unfortunately, revealing and ameliorating the severity of someone’s Kruger-Dunning-itis involves a civil and constructive Socratic interrogation, something of an endangered species in this day and age, when unseemly certainty and unwarranted posturing have replaced circumspect and critical discourse.  Any useful evaluation of what someone knows demands the time and effort inherent in a Socratic discourse: the willingness to explain how one knows what one thinks one knows, together with a reflective consideration of its implications and of what other trained observers, people demonstrably proficient in the discipline, have concluded. It cannot be replaced by a multiple-choice test.

Perhaps what is needed is a new (old) model of encouraging in students, as well as politicians and pundits, an understanding of where science comes from, the habits of mind involved, and the limits of, and constraints on, our current understanding.  At the college level, courses that replace superficial familiarity and unwarranted certainty with humble self-reflection and intellectual modesty might help treat the symptoms of Kruger-Dunning-itis, even though the underlying disease may be incurable, and perhaps genetically linked to other aspects of human neuronal processing.


some footnotes:

  1. after all, why are rather distinct disciplines lumped together as STEM (science, technology, engineering, and mathematics)?
  2. Given the long history of Homo sapiens before the appearance of science, it seems likely that such patterns of thinking are an unintended consequence of selection for some other trait, and of the subsequent emergence of a (perhaps excessively) complex and self-reflective nervous system.
  3. Another example of Neil Postman’s premise that education is being replaced by edutainment (see “Amusing Ourselves to Death”).

Go ahead and “teach the controversy”: it is the best way to defend science.

as long as teachers understand the science and its historical context

The role of science in modern societies is complex. Science-based observations and innovations drive a range of economically important, as well as socially disruptive, technologies. Many opinion polls indicate that the American public “supports” science, while at the same time rejecting rigorously established scientific conclusions on topics ranging from the safety of genetically modified organisms and the absence of a link between vaccines and autism to the effects of burning fossil fuels on the global environment [Pew: Views on science and society]. Given that a foundational principle of science is that the natural world can be explained without calling on supernatural actors, it remains surprising that a substantial majority of people appear to believe that supernatural entities are involved in human evolution [as reported by the Gallup organization], although the theistic percentage has been dropping (a little) of late.

[Figure: Gallup poll results on beliefs about God and human evolution]

This situation highlights the fact that when science intrudes on the personal or the philosophical (within which I include the theological and the ideological), many people are willing to abandon the discipline of science and embrace explanations based on personal beliefs, including the existence of a supernatural entity that cares for people, at least enough to create them, and the conviction that there are reasons behind life’s tragedies.


Where science appears to conflict with various non-scientific positions, the public has pushed back and rejected the scientific. This is perhaps best represented by the recent spate of “teach the controversy” legislative efforts, primarily centered on evolutionary theory and the realities of anthropogenic climate change [see Nature: Revamped ‘anti-science’ education bills]. We might expect to see, on more politically correct campuses, similar calls for anti-GMO, anti-vaccination, or gender-based curricula. In the face of the disconnect between scientific and non-scientific (philosophical, ideological, theological) personal views, I would suggest that an important part of the problem has didaskalogenic roots; that is, it arises from the way science is taught – all too often expecting students to memorize terms and master various heuristics (tricks) to answer questions rather than developing a self-critical understanding of ideas, their origins, supporting evidence, limitations, and practice in applying them.

Science is a social activity, based on a set of accepted core assumptions; it is not so much concerned with Truth, which could, in fact, be beyond our comprehension, but rather with developing a universal working knowledge, composed of ideas based on empirical observations that expand in their explanatory power over time to allow us to predict and manipulate various phenomena.  Science is a product of society rather than isolated individuals, but only rarely is the interaction between the scientific enterprise and its social context articulated clearly enough so that students and the general public can develop an understanding of how the two interact.  As an example, how many people appreciate the larger implications of the transition from an Earth-centered to a Sun- or galaxy-centered cosmology?  All too often students are taught about this transition without regard to its empirical drivers and philosophical and sociological implications, as if the opponents at the time were benighted religious dummies. Yet how many students or their teachers appreciate that, as originally presented, the Copernican Sun-centered system had more hypothetical epicycles and related Rube Goldberg-esque kludges, introduced to make the model accurate, than the competing Earth-centered Ptolemaic system? Do students understand how Kepler’s recognition of elliptical orbits eliminated the need for such artifices and set the stage for Newtonian physics?  And how did the expulsion of humanity from the center to the periphery of things influence people’s views on humanity’s role and importance?


So how can education adapt to better help students and the general public develop a more realistic understanding of how science works?  To my mind, teaching the controversy is a particularly attractive strategy, provided that teachers have a strong grounding in the discipline they are teaching, something that many science degree programs do not achieve, as discussed below. For example, a common attack against evolutionary mechanisms relies on a failure to grasp the power of variation, arising from stochastic processes (mutation), coupled to the power of natural, social, and sexual selection. There is clear evidence that people find stochastic processes difficult to understand and accept [see Garvin-Doxas & Klymkowsky & Fooled by Randomness].  An instructor who is not aware of the educational challenges associated with grasping stochastic processes, including those central to evolutionary change, risks leaving students stuck on the same hurdles that led pre-molecular biologists to reject natural selection and turn to more “directed” processes, such as orthogenesis [see Bowler: The Eclipse of Darwinism & Wikipedia]. Presumably such students are even more vulnerable to intelligent-design creationist arguments centered around probabilities.
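The educational point about stochasticity can be made concrete with a few lines of code. The sketch below is my own minimal Wright-Fisher-style simulation, offered as an illustration rather than something drawn from the sources cited above; the population size, generation count, and fitness values are arbitrary assumptions. Allele frequencies wander, and are eventually lost or fixed, through nothing but random sampling, while even a modest fitness advantage biases, without determining, the outcome.

```python
import random

# Minimal Wright-Fisher-style simulation (illustrative assumptions:
# population size, generations, and fitness values are arbitrary).
def wright_fisher(pop_size, p0, fitness_advantage=0.0, generations=200, seed=42):
    """Track the frequency of allele A, starting at p0, where each
    generation is a finite random sample of the previous one."""
    rng = random.Random(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Weak selection: allele A is sampled with weight (1 + s).
        weighted = p * (1.0 + fitness_advantage)
        prob = weighted / (weighted + (1.0 - p))
        # Drift: binomial sampling in a finite population.
        count = sum(1 for _ in range(pop_size) if rng.random() < prob)
        p = count / pop_size
        trajectory.append(p)
        if p in (0.0, 1.0):  # allele lost or fixed; chance alone can do this
            break
    return trajectory

drift_only = wright_fisher(pop_size=50, p0=0.5)                       # pure chance
with_selection = wright_fisher(pop_size=50, p0=0.5, fitness_advantage=0.1)
print("drift only, final frequency:", drift_only[-1])
print("weak selection, final frequency:", with_selection[-1])
```

Re-running with different seeds makes the essential lesson visible: under drift alone the allele is as likely to be lost as fixed, while selection shifts the odds without guaranteeing the result, exactly the kind of outcome students find counterintuitive.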

The fact that single-cell measurements enable us to visualize biologically meaningful stochastic processes makes it easier to design course materials that explicitly introduce such processes [Biology education in the light of single cell/molecule studies].  An interesting example is the recent work on visualizing the evolution of antibiotic resistance macroscopically [see The evolution of bacteria on a “mega-plate” petri dish].

To be in a position to “teach the controversy” effectively, it is critical that students understand how science works, specifically its progressive nature, exemplified through the process of generating and testing, and where necessary, rejecting or revising, clearly formulated and predictive hypotheses – a process antithetical to a Creationist (religious) perspective [a good overview is provided here: Using creationism to teach critical thinking].  At the same time, teachers need a working understanding of the disciplinary foundations of their subject, its core observations, and their implications. Unfortunately, many are called upon to teach subjects with which they may have only a passing familiarity.  Moreover, even majors in a subject may emerge with a weak understanding of foundational concepts and their origins – they may be uncomfortable teaching what they have learned.  While there is an implicit assumption that a college curriculum is well designed and effective, there is often little in the way of objective evidence that this is the case. While many of our dedicated teachers (particularly those I have met as part of the CU Teach program) work diligently to address these issues on their own, it is clear that many have not been exposed to a critical examination of the empirical observations and experimental results upon which their discipline is based [see Biology teachers often dismiss evolution & Teachers’ Knowledge Structure, Acceptance & Teaching of Evolution].  Many is the molecular biology department that does not require formal coursework in basic evolutionary mechanisms, much less a thorough consideration of natural, social, and sexual selection, and non-adaptive mechanisms, such as those associated with founder effects, population bottlenecks, and genetic drift, stochastic processes that play a key role in the evolution of many species, including humankind. 
Similarly, more ecologically- and physiologically-oriented majors are often “afraid” of the molecular foundations of evolutionary processes. As part of an introductory chemistry curriculum redesign project (CLUE), Melanie Cooper and her group at Michigan State University have found that students in conventional courses often fail to grasp key concepts, and that subsequent courses can sometimes fail to remediate the didaskalogenic damage done in earlier courses [see: an Achilles Heel in Chemistry Education].

The importance of a historical perspective: The power of scientific explanations is obvious, but explanations can become abstract when their historical roots are forgotten, or never articulated. A clear example: the value of vaccination is obvious in the presence of deadly and disfiguring diseases. In their absence, due primarily to the effectiveness of widespread vaccination, that value can be called into question, resulting in the avoidable re-emergence of these diseases.  In this context, it is important that students understand the dynamics and molecular complexity of biological systems, so that they can explain why all drugs and treatments have potential side-effects, and how each individual’s genetic background influences those side-effects (although in the case of vaccination, such side effects do not include autism).
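The re-emergence dynamic can be backed with a worked calculation using the standard herd-immunity relationship: a pathogen fades when the immune fraction exceeds 1 − 1/R0, and returns when coverage slips below that threshold. The sketch below is my own illustration; the R0 of 15 sits within the range commonly quoted for measles, and the coverage figures are assumptions chosen to show both sides of the threshold.

```python
# Herd-immunity arithmetic: a disease with basic reproduction number R0
# dies out when the immune fraction exceeds 1 - 1/R0. Below that
# threshold, each case causes more than one new case and the disease
# returns. (R0 and the coverage values below are illustrative.)

def herd_immunity_threshold(R0):
    """Critical immune fraction for a pathogen with R0 > 1."""
    return 1.0 - 1.0 / R0

def effective_R(R0, immune_fraction):
    """Average number of new cases per case, given partial immunity."""
    return R0 * (1.0 - immune_fraction)

R0 = 15.0  # within the range commonly quoted for measles
print(f"threshold: {herd_immunity_threshold(R0):.0%} of the population immune")
print(f"R at 95% coverage: {effective_R(R0, 0.95):.2f}")  # < 1, outbreaks fizzle
print(f"R at 85% coverage: {effective_R(R0, 0.85):.2f}")  # > 1, outbreaks grow
```

The asymmetry is the pedagogically useful part: a ten-point drop in coverage does not make the disease ten percent worse, it flips outbreaks from self-extinguishing to self-sustaining.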

Often “controversy” arises when scientific explanations have broader social, political, or philosophical implications. Religious objections to evolutionary theory arise primarily, I believe, from the implication that we (humans) are not the result of a plan, created or evolved, but rather accidents of mindless, meaningless, and often gratuitously cruel processes. The idea that our species emerged rather recently (that is, a few million years ago) on a minor planet on the edge of an average galaxy, in a universe that popped into existence for no particular reason or purpose ~14 billion years ago, can have disconcerting implications [link]. Moreover, recognizing that a “small” change in the trajectory of an asteroid could have changed the chance that humanity ever evolved [see: Dinosaur asteroid hit ‘worst possible place’] can be sobering and may well undermine one’s belief in the significance of human existence. How does it impact our social fabric if we are an accident, rather than the intention of a supernatural being or the inevitable product of natural processes?

Yet, as a person who firmly believes in the French motto of liberté, égalité, fraternité and laïcité, I feel fairly certain that no science-based scenario for the origin and evolution of the universe or life, or for the implications of sexual dimorphism or “racial” differences, etc., can challenge the importance of our duty to treat others with respect, to defend their freedoms, and to ensure their equality before the law. Which is not to say that conflicts do not inevitably arise between different belief systems – in my own view, patriarchal oppression needs to be called out and actively opposed wherever it occurs, whether in Saudi Arabia or on college campuses (e.g. UC Berkeley or Harvard).

This is not to say that presenting the conflicts between scientific explanations of phenomena and non-scientific but more important beliefs, such as equality under the law, is easy. When considering a number of natural cruelties, Charles Darwin wrote that evolutionary theory would claim that these are “as small consequences of one general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die.” Note the absence of any reference to morality, or even sympathy for the “weakest”.  In fact, Darwin would have argued that the apparent and overt cruelty rampant in the “natural” world is evidence that God was forced by the laws of nature to create the world the way it is, presumably a world that is absurdly old and excessively vast. Such arguments echo the view that God had no choice other than whether to create or not; that for all its flaws, evils, and unnecessary suffering this is, as posited by Gottfried Leibniz (1646-1716) and satirized by Voltaire (1694–1778) in his novel Candide, the best of all possible worlds. Yet, as members of a reasonably liberal, and periodically enlightened, society, we see it as our responsibility to ameliorate such evils, to care for the weak, the sick, and the damaged and to improve human existence; to address prejudice and political manipulation [thank you Supreme Court for ruling against race-based redistricting].  Whether anchored by philosophical or religious roots, many of us are driven to reject a scientific (biological) quietism (“a theology and practice of inner prayer that emphasizes a state of extreme passivity”) by actively manipulating our social, political, and physical environment and striving to improve the human condition, in part through science and the technologies it makes possible.

At the same time, introducing social-scientific interactions can be fraught with potential controversies, particularly in our excessively politicized and self-righteous society. In my own introductory biology class (biofundamentals), we consider potentially contentious issues, including sexual dimorphism and selection, and social evolutionary processes and their implications.  As an example, social systems (and we are social animals) are susceptible to social cheating, and groups develop defenses against cheaters; how such biological ideas interact with historical, political, and ideological perspectives is complex, and certainly beyond the scope of an introductory biology course, but worth acknowledging [PLoS blog link].

[Image: Yeats quote]

In a similar manner, we understand the brain as an evolved cellular system influenced by various experiences, including those that occur during development and subsequent maturation.  Family life interacts with genetic factors in a complex and often unpredictable way to shape behaviors.  But it seems unlikely that a free and enlightened society can function if it takes seriously the premise that we lack free will and so cannot be held responsible for our actions, an idea of some current popularity [see Free will could all be an illusion]. Given the complexity of biological systems, I for one am willing to embrace the idea of constrained free will, no matter what scientific speculations are currently in vogue [but see a recent video by Sabine Hossenfelder, You don’t have free will, but don’t worry, which has me rethinking].

Recognizing the complexities of biological systems, including the brain, with their various adaptive responses and feedback systems can be challenging. In this light, I am reminded of the contrast between the Doomsday scenario of Paul Ehrlich’s The Population Bomb, and the data-based view of the late Hans Rosling in Don’t Panic – The Facts About Population.

All of which is to say that we need to see science not as authoritarian, telling us who we are or what we should do, but as a tool for doing what we think is best, and for understanding why it might be difficult to achieve. We need to recognize how scientific observations inform, and may constrain, but do not dictate our decisions. We need to embrace the tentative but strict nature of the scientific enterprise which, while it cannot arrive at “Truth”, can certainly identify non-sense.

Minor edits and updates and the addition of figures that had gone missing –  20 October 2020

Power Posing & Science Education

Developing a coherent understanding of a scientific idea is neither trivial nor easy and it is counter-productive to pretend that it is.

For some time now the idea of “active learning” (as if there is any other kind) has become a mantra in the science education community (see Active Learning Day in America: link). Yet the situation is demonstrably more complex and depends upon what exactly is to be learned, something rarely stated explicitly in many published papers on active learning (an exception, with respect to understanding evolutionary mechanisms, can be found here: link).  The best of such work generally relies on results from multiple-choice “concept tests” that provide, at best, a limited (low-resolution) characterization of what students know. Moreover, it is clear that, much as in other areas, research into the impact of active learning strategies is rarely reproduced (see: link, link & link).

As is clear from the level of aberrant and nonsensical talk about the implications of “science” currently on display in both public and private spheres (link : link), the task of effective science education and rigorous scientific (data-based) decision making is not a simple one.  As noted by many, there is little about modern science that is intuitively obvious; most of it is deeply counterintuitive or actively disconcerting (see link).  In the absence of a firm religious or philosophical perspective, scientific conclusions about the size and age of the Universe, the various processes driving evolution, and the often grotesque outcomes they can produce can be deeply troubling; one can easily embrace a solipsistic, ego-centric, and/or fatalistic belief/behavioral system.

There are two videos of Richard Feynman that capture much of what is involved in, and required for, understanding a scientific idea and its implications. The first involves the basic scientific process, where the path to a scientific understanding of a phenomenon begins with a guess, but a special kind of guess, namely one that implies unambiguous (and often quantitative) predictions about what future (or retrospective) observations will reveal (video: link).  This scientific discipline (link) implies the willingness to accept that scientifically meaningful ideas need to have explicit, definable, and observable implications, while those that do not are non-scientific and need to be discarded. Witness the stubborn adherence to demonstrably untrue ideas (such as where past Presidents were born or how many people attended an event or voted legally), which marks superstitious and non-scientific worldviews.  Embracing a scientific perspective is not easy, nor is letting go of a favorite idea (or prejudice).  The difficulty of thinking and acting scientifically needs to be kept in mind by instructors; it is one of the reasons that peer review continues to be important – it reminds us that we are part of a community committed to the rules of scientific inquiry and its empirical foundations, and that we are accountable to that community.

The second Feynman video (video: link) captures his description of what it means to understand a particular phenomenon scientifically, in this particular case, why magnets attract one another.  The take-home message is that many (perhaps most) scientific ideas require a substantial amount of well-understood background information before one can even begin a scientifically meaningful consideration of the topic. Yet all too often such background information is not considered by those who develop (and deliver) courses and curricula. To use an example from my own work (in collaboration with Melanie Cooper @MSU), it is very rare to find course and curricular materials (textbooks and such) that explicitly recognize (or illustrate) the underlying assumptions involved in a scientific explanation.  Often the “central dogma” of molecular biology is taught as if it were simply a description of molecular processes, rather than explicitly recognizing that information flows from DNA outward (link) (and into DNA through mutation and selection).  Similarly, it is rare to see stated explicitly that random collisions with other molecules supply the energy needed for chemical reactions to proceed or to break intermolecular interactions, or that the energy released upon complex formation is transferred to other molecules in the system (see: link), even though these events control essentially all aspects of the systems active in organisms, from gene expression to consciousness.

The basic conclusion is that achieving a working understanding of scientific ideas is hard, and that, while it requires an engaging and challenging teacher and a supportive and interactive community, it is also critical that students be presented with conceptually coherent content that acknowledges and presents all of the ideas needed to actually understand the concepts and observations upon which a scientific understanding is based (see “now for the hard part”: link).  Bottom line: there is no simple or painless path to understanding science – it involves a serious commitment on the part of the course designer as well as the student, the instructor, and the institution (see: link).

This brings us back to the popularity of the “active learning” movement, which all too often ignores course content and the establishment of meaningful learning outcomes.  Why then has it attracted such attention?  My own guess is that it provides a simple solution that circumvents the need for instructors (and course designers) to significantly modify the materials that they present to students.  The current system rarely rewards or provides incentives for faculty to carefully consider the content that they are presenting, asking whether it is relevant or sufficient for students to achieve a working understanding of the subject, an understanding that enables the student to accurately interpret and then generate reasoned and evidence-based (plausible) responses.

Such a reflective reconsideration of a topic will often result in dramatic changes in course (and curricular) emphasis; traditional materials may be omitted or relegated to more specialized courses.  Such changes can provoke a negative response from other faculty, based on often inherited (and uncritically accepted) ideas about course “coverage”, as opposed to desired and realistic student learning outcomes.  Given the resistance of science faculty (particularly at institutions devoted to scientific research) to investing time in educational projects (often a reasonable strategy, given institutional reward systems), there is a seductive lure to easy fixes. One such fix is to leave the content unaltered and simply to “adopt a pose” in the classroom.

All of which brings me to the main problem – the frequency with which superficial (low-cost, but often ineffectual) strategies can inhibit and distract from significant, but difficult, reforms.  One cannot help but be reminded of other quick fixes for complex problems, the most recent being the idea, promulgated by Amy Cuddy (Harvard: link) and others, that adopting a “power pose” can overcome various forms of experience- and socioeconomic-based prejudice and injustice, as if overcoming a person’s experiences and situation were simply a matter of will. The message is that those who do not succeed have only themselves to blame, because the way to succeed is (basically) so damn simple.  So imagine one’s surprise (or not) upon discovering that the underlying biological claims associated with “power posing” are not true, or at least cannot be replicated, even by the co-authors of the original work (see Power Poser: When big ideas go bad: link).  The lesson that needs to be learned, both in science education and more generally, is that claims that seem too easy or universal are unlikely to be true.  It is worth remembering that even the most effective modern (and traditional) medicines all have potentially dangerous side effects. Why? Because they lead to significant changes in the system, and such modifications can discomfort the comfortable. This stands in stark contrast to non-scientific approaches; homeopathic “remedies” come to mind, which rely on placebo effects (which is not to say that taking ineffective remedies does not itself involve risks).

As in the case of effective medical treatments, the development and delivery of engaging and meaningful science education reform often requires challenging current assumptions and strategies, which are frequently based in outdated traditions and influenced more by the constraints of class size and the logistics of testing than by the importance of achieving demonstrable enhancements of students’ working understanding of complex ideas.

The pernicious effects of disrespecting the constraints of science

By Mike Klymkowsky

Recent political events, the proliferation of “fake news”, and the apparent futility of fact checking in the public domain have led me to obsess about the role played by the public presentation of science. “Truth” can often trump reality, or perhaps better put, passionately held beliefs can overwhelm a circumspect worldview based on a critical and dispassionate analysis of empirically established facts and theories. Those driven by various apocalyptic visions of the world, whether religious or political, can easily overlook or trivialize evidence that contradicts their assumptions and conclusions. While historically there have been periods during which non-empirical presumptions were called into question, more often than not such periods have been short-lived. Some may claim that the search for absolute truth, truths significant enough to sacrifice the lives of others for, is restricted to the religious; they are sadly mistaken – political, often explicitly anti-religious movements are also susceptible, often with horrific consequences; think of Nazism and communist-inspired apocalyptic purges. The history of eugenics and forced sterilization, based on flawed genetic and ideological premises, has similar roots.

Given the seductive nature of belief-based “Truth”, many have turned to science as a bulwark against wishful and arational thinking. The scientific enterprise is an evolving social and empirical (data-based) process: it begins with guesses as to how the world (or rather some small part of the world) works, follows each guess’s logical implications, and tests those implications through experiment or observation, leading to the revision (or abandonment) of the original guess, moving it toward hypothesis and then, as it becomes more explanatory and accurately predictive, and as those predictions are confirmed, into a theory.  So science is a dance between speculation and observation. In contrast to a free-form dance, the dance of science is controlled by a number of rigid, and to some oppressive, constraints [see Feynman & Rothman 2020: How Does Science Really Work].

Perhaps surprisingly, this scientific enterprise has converged on a small set of over-arching theories and universal laws that appear to explain much of what is observable; these include the theory of general relativity, quantum and atomic theory, the laws of thermodynamics, and the theory of evolution. With the notable exception of general relativity and quantum mechanics, which have yet to be reconciled with one another, these conceptual frameworks appear to be mutually compatible. As an example, organisms, and behaviors such as consciousness, obey, and are constrained by, well-established and (apparently) universal physical and chemical rules.

A central constraint on scientific thinking is that what cannot, even in theory, be known is not a suitable topic for scientific discussion. This leaves outside the scope of science a number of interesting topics, ranging from what came before the “Big Bang” to the exact steps in the origin of life. In the latter case, the apparently inescapable conclusion that all terrestrial organisms share a complex “Last Universal Common Ancestor” (LUCA) places theoretically unconfirmable speculations about pre-LUCA living systems outside of science.  While we can generate evidence that the various building blocks of life can be produced abiogenically (a process begun with Wöhler’s synthesis of urea), we can only speculate as to the systems that preceded LUCA.

Various pressures have led many who claim to speak scientifically (or to speak for science) to ignore the rules of the scientific enterprise – they often act as if there are no constraints, no boundaries to scientific speculation. Consider the implications of establishing “astrobiology” programs based on speculation (rather than observation) presented with various levels of certainty as to the ubiquity of life outside of Earth [the speculations of Francis Crick and Leslie Orgel on “directed panspermia” and the pseudoscientific Drake equation come to mind; see Michael Crichton’s famous essay on aliens and global warming]. Yet such public science pronouncements appear to ignore (or dismiss) the fact that we know, and can study, only one type of life (at the moment), the descendants of LUCA. Those making them appear untroubled when breaking the rules and abandoning the discipline that has made science a powerful, but strictly constrained, human activity.
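The Drake equation makes the problem concrete: it is a product of seven factors, only the first of which is seriously constrained by observation, so its output simply restates whatever one chooses to feed in. The sketch below is my own illustration; the “optimistic” and “pessimistic” parameter values are arbitrary assumptions, not established estimates.

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L, the expected number
# of detectable civilizations in the galaxy. Only R* (the star-formation
# rate) is observationally constrained; the remaining factors are guesses.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Multiply the seven factors; the result has no more empirical
    content than its least-constrained input."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical parameter sets (assumptions for illustration only):
optimistic = drake(R_star=10.0, f_p=1.0, n_e=1.0, f_l=1.0, f_i=1.0, f_c=0.5, L=1e6)
pessimistic = drake(R_star=1.0, f_p=0.2, n_e=0.1, f_l=1e-3, f_i=1e-3, f_c=0.1, L=100.0)

print(f"optimistic:  N = {optimistic:.1e}")   # millions of civilizations
print(f"pessimistic: N = {pessimistic:.1e}")  # effectively zero
# The two answers differ by roughly thirteen orders of magnitude.
```

Both runs are equally “consistent with the data”, which is precisely why the equation functions as a frame for speculation rather than a prediction.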

Whether life is unique to Earth or not requires future explorations and discoveries that may (or, given the technological hurdles involved, may not) occur. Similarly, postulating theoretically unobservable alternative universes or the presence of some form of consciousness in inanimate objects [an unscientific speculation illustrated here] crosses a dividing line between belief for belief’s sake and the scientific – it distorts and obscures the rules of the game, the rules that make the game worth playing [again, the Crichton article cited above makes this point]. A recent, rather dramatic proposal from some in the physical-philosophical complex has been the claim that the rules of prediction and empirical confirmation (or rejection) are no longer valid – that we can stop requiring scientific ideas to make observable predictions [see Ellis & Silk]. It is as if objective reality is no longer the benchmark against which scientific claims are made; perhaps mathematical elegance (see Sabine Hossenfelder’s Lost in Math) or spiritual comfort is more important – and well they might be (more important), but they are outside the limited domain of science. At the 2015 “Why Trust a Theory” meeting, the physicist Carlo Rovelli concluded “by pointing out that claiming that a theory is valid even though no experiment has confirmed it destroys the confidence that society has in science, and it also misleads young scientists into embracing sterile research programs.” [quote from Massimo Pigliucci’s Footnotes to Plato blog].

While the examples above are relatively egregious, it is worth noting that various pressures for glory, fame, and funding impact science more frequently – leading to claims that are less obviously non-scientific, but that bend (and often break) the scientific charter. Take, for example, claims about animal models of human diseases. Often the expediencies associated with research make the use of such animal models necessary and productive, but they remain a scientific compromise. While mice, rats, chimpanzees, and humans are related evolutionarily, they also carry distinct traits associated with each lineage’s evolutionary history, and with the adaptive and non-adaptive processes and events of that history. A story from a few years back illustrates how the differences between the immune systems of mice and humans help explain why the search, in mice, for drugs to treat sepsis in humans was relatively unsuccessful [Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills]. A similar situation arises when studies in the mouse fail to explicitly acknowledge how genetic background influences experimental phenotypes [Effect of the genetic background on the phenotype of mouse mutations], as well as how details of experimental scenarios influence human relevance [Can Animal Models of Disease Reliably Inform Human Studies?].

Speculations that go beyond science while hiding under its aegis (see any of a number of articles on quantum consciousness) may seem just plain silly, but by abandoning the rules of science they erode the status of the scientific process.  How, exactly, would one distinguish a conscious from an unconscious electron?

In science (again, as pointed out by Crichton) we do not agree through consensus but through data and respect for critically analyzed empirical observations. The laws of thermodynamics, general relativity, the standard model of particle physics, and evolutionary theory are conceptual frameworks that we are forced (if we are scientifically honest) to accept. Moreover, the implications of these scientific frameworks can be annoying to some; there is no possibility of a “zero waste” process that involves physical objects, no free lunch (perpetual motion machine), no efficient, intelligently-designed evolutionary process (just blind variation and differential reproduction), and no zipping around the galaxy. The apparent limitation of motion to the speed of light means that a “Star Wars” universe is impossible – happily, I would argue, given the number of genocidal events that appear to be associated with that fictional vision – just too many storm troopers for my taste.

Whether our models of the behavior of Earth’s climate or the human brain can be completely accurate (deterministic), given the roles of chaotic and stochastic events in these systems, remains to be demonstrated; until they are, there is plenty of room for conflicting interpretations and prescriptions. That atmospheric levels of greenhouse gases are increasing due to human activities is unarguable; what that implies for future climate is less clear, and what to do about it (a social, political, and economic discussion informed, but not determined, by scientific observations) is another matter.

As we discuss science, we must teach (and continually remind ourselves, even as working scientific practitioners) about the limits of the scientific enterprise. As science educators, one of our goals needs to be to help students develop an appreciation of the importance of an honest and critical attitude toward observations and conclusions, and a recognition of the limits of scientific pronouncements. We need to explicitly identify, acknowledge, and respect the constraints under which effective science works, and be honest in labeling when we have left the realm of scientific statements, lest we begin to walk down the path of little lies that morph into larger ones.  In contrast to politicians and various religious and secular mystics, we should know better than to be seduced into abandoning scientific discipline, and all that that entails.

Minor update + figures reintroduced 20 October 2020

M.W. Klymkowsky  web site:  http://klymkowskylab.colorado.edu

Biology education in the light of single cell/molecule studies

By Mike Klymkowsky

from Stochastic Gene Expression in a Single Cell by Michael B. Elowitz, Arnold J. Levine, Eric D. Siggia & Peter S. Swain

Stochastic processes are often presented in terms of random, that is unpredictable, events.  This framing obscures the reality that stochastic processes, while more or less unpredictable at the level of individual events, are well behaved at the population level.  It also obscures the role of stochastic processes in a wide range of predictable phenomena. In atomic systems, for example, unknown factors determine the timing of the radioactive decay of a particular unstable atom, yet the rate of radioactive decay is highly predictable in a large enough population. Similarly, in the classical double-slit experiment the behavior of a single photon, electron, or C60 molecule is unpredictable, while the behavior of a larger population is perfectly predictable. The macroscopic predictability of Brownian motion (a stochastic process) enabled Einstein to argue for the reality of atoms.  Similarly, the dissociation of a molecular complex or the occurrence of a chemical reaction, driven as they are by thermal collisions, are stochastic processes, whereas dissociation constants and reaction rates are predictable. In fact, this combination of unpredictability at the individual level and predictability at the population level is the hallmark of stochastic, as compared to truly random (that is, unpredictable), behaviors.
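This individual-versus-population distinction is easy to demonstrate computationally. The sketch below is a toy model (the half-life value is arbitrary, not data from any real isotope): it samples exponentially distributed decay times, so any single atom’s decay time is unpredictable, yet close to half of a large population remains after one half-life.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
HALF_LIFE = 10.0  # arbitrary time units for a hypothetical isotope

def decay_time():
    """Sample the decay time of one atom; decay is a memoryless process."""
    return random.expovariate(0.693147 / HALF_LIFE)  # rate = ln(2) / half-life

times = [decay_time() for _ in range(100_000)]

# Individual events: decay times scatter over orders of magnitude.
print(f"shortest: {min(times):.5f}, longest: {max(times):.1f}")

# Population: the fraction surviving one half-life is reliably ~0.5.
remaining = sum(t > HALF_LIFE for t in times) / len(times)
print(f"fraction remaining after one half-life: {remaining:.3f}")
```

The same structure – exponential waiting times for individual events, predictable ensemble behavior – also describes molecular dissociation driven by thermal collisions.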


Single cell and single molecule studies increasingly provide mechanistic insights into a range of biological processes, from evolutionary to cognitive and pathogenic mechanisms. The effects of stochastic events are complicated by the developing and adaptive nature of biological systems and appear to be influenced by genetic background. In some cases, homeostatic (feedback) mechanisms return the system to its original state.  In others, the stochastic expression (or mutation) of a particular gene (or set of genes) leads to a cascade of downstream effects that change the system, such that subsequent events become more or less probable, a process nicely illustrated in recent real-time studies of the evolution of antibiotic resistance in bacteria (←FIG & a seriously cool video). The stochastic (molecular clock) nature of an organism’s intrinsic mutation rate has recently been used with the ExAC data set to visualize the impact of selective and non-adaptive effects on human genes.

FIG. 1  A schematic of the mutational origin of antibiotic resistance over time (and space) (adapted from McNally & Brown, 2016).

Pedagogical studies:  The “Framework for K12 Science Education” ignores stochastic processes altogether, while the “Vision and Change in Undergraduate Biology Education” document contains a single point that calls for “incorporating stochasticity into biological models” (p. 17), but omits details of what this means in practice. People (even scholars) often have a difficult time developing an accurate understanding of stochastic processes (see “Understanding Randomness and its Impact on Student Learning” and “Fooled by Randomness“).  The failure to appreciate the ubiquity of stochastic processes in biological systems has been an obstacle to the acceptance of Darwinian evolution. In this light, it seems well past time to rethink the foundational roles of stochastic processes in biological (as well as chemical and physical) systems and how best to introduce such processes to students through coherent course narratives and supporting materials.

A number of studies indicate that students call upon deterministic models to explain a range of stochastic processes. The fact that all too often students are introduced to the behaviors of cellular and molecular level biological systems through depictions that are overtly deterministic does not help the situation. In the majority of instructional videos, for example, molecules appear to know where they are heading and move there with a purpose. Similarly, the folding of polypeptides is often depicted as a deterministic process,[1] although the proliferation of model-based simulations offers a more realistic depiction (see below).  That said, the widespread involvement of chaperones is rarely acknowledged. Macromolecules are commonly depicted as rigid rather than as dynamic. The thermally driven opening and closing of the DNA double helix (a consequence of the weakness of intermolecular interactions) is rarely illustrated.  Molecules recognize one another and (apparently) stay locked in their mutual embrace forever; the role of thermal collisions in driving molecular dissociation (and binding specificity) is rarely considered in most textbooks and, presumably, in the classes that use these books. Moreover, the factors involved in inter-molecular interactions are often poorly understood, even after the completion of conventional university-level chemistry courses. The energetic factors that determine enzyme specificity and reaction rates, the binding of transcription factors to their target DNA sequences, and the effects of mutations on these and other processes often go unremarked upon. It is not at all clear whether students appreciate that thermal collisions are responsible for the reversal of molecular interactions or that they supply a reaction’s activation energy.
Cells with the same genotype are implicitly expected to behave in identical ways (display the same phenotypes), a situation at odds with direct observation (see “Stochastic Gene Expression in a Single Cell” and “What’s Luck Got to Do with It: Single Cells, Multiple Fates, and Biological Non-determinism”) and with the general processes involved in cellular differentiation and social behaviors. Phenotypic penetrance and expressivity also involve stochastic behaviors, together with genetic background effects. It certainly does not help when instructors introduce a stochastic process, such as genetic drift, in the context of the Hardy-Weinberg model, a situation in which genetic drift does not occur. Such presentations are likely to increase student confusion.

FIG. 2 A comparison between the expression of lacZ as measured in a bulk population and in individual cells (adapted from Vilar et al., 2003).

It is our impression that the typical instructional approach is to present molecular level processes in terms of large populations of molecules that behave in a deterministic manner. Consider the bacterium Escherichia coli’s lac operon, a group of genes that has been a workhorse of modern molecular biology and a common context through which to present the regulation of gene expression. Expression of the lac operon results in the synthesis of two proteins, lactose permease and β-galactosidase, which enable lactose to enter the cell and convert it into the monosaccharides glucose and galactose (which can be metabolized further) and into allolactose, which binds to the lac repressor protein and inhibits its binding to DNA, allowing expression of the lac operon. When the bulk behavior of a bacterial culture is analyzed, the expression of the lac operon increases as a smooth function over time (in the absence of other energy sources) (FIG. ↑). The result is that the expression of the proteins required for lactose metabolism is restricted to situations in which lactose is present.

The mechanistic quandary, rarely if ever considered explicitly as far as we can tell, is how the lac operon can “turn on” when the entry of lactose into the cell and the inactivation of the lac repressor both depend upon the operon’s expression?  The situation becomes clear only when we consider the behavior of individual cells; LacZ expression goes from off to fully on in a stochastic manner (FIG. 2↑).[2]  Given that there are ~5 to 10 lac repressor molecules and one to two copies of the lac operon per cell, the lac operon can be expressed whenever the operon is transiently free of bound repressor.  If such a “noisy” event happens to occur when lactose is present in the medium, expression of the lac operon allows lactose to enter the cell, leading to the conversion of lactose into allolactose, the inactivation of the lac repressor, and stable expression of the lac operon. The stochastic behavior of the system enables individual cells to sample their environment and respond when useful metabolites are present, while minimizing unnecessary metabolic expense (the synthesis of irrelevant polypeptides) when they are not. A similar logic is involved in quorum sensing, the emission of light (via the luciferase system), the regulation of the DNA uptake system, the generation of persister phenotypes, and programmed cell death (to benefit genetically related neighbors).
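The contrast between smooth bulk induction and all-or-none single cells can be captured in a deliberately minimal simulation. The sketch below is not a model of real lac kinetics – the per-step switching probability, cell number, and step count are invented for illustration. Each cell is either fully off or fully on; once a noisy de-repression event occurs, positive feedback (lactose import, repressor inactivation) keeps that cell on, yet the population-averaged signal rises as a smooth curve, just as in a bulk assay.

```python
import random

random.seed(2)
N_CELLS, N_STEPS = 1000, 60
P_ON = 0.05  # assumed chance per step that a cell's operon is transiently repressor-free

# Each cell is off (0) or on (1); a cell that switches on stays on (positive feedback).
cells = [0] * N_CELLS
bulk = []  # fraction of cells expressing: what a culture-level measurement reports
for _ in range(N_STEPS):
    cells = [1 if (c or random.random() < P_ON) else 0 for c in cells]
    bulk.append(sum(cells) / N_CELLS)

# Individual cells jump from 0 to 1 at unpredictable times;
# the population average climbs smoothly toward full induction.
print(f"fraction on after step 1: {bulk[0]:.2f}, after step {N_STEPS}: {bulk[-1]:.2f}")
```

Plotting `bulk` against time reproduces the smooth induction curve of FIG. 2, while any single cell’s trajectory is a step function.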


What is a biology educator to do?
The question that faces the reflective educational designer and enlightened instructor is how their course should address the multiple roles of stochastic processes within biological systems.  I have a short set of recommendations that I think both designers and instructors might want to consider; many have been incorporated into ongoing efforts at course design, which I have only recently (2019) begun to think of as educational engineering. First, it should be explicitly recognized, and conveyed to students, that stochastic processes are difficult to understand, as witnessed by the common belief in the gambler’s fallacy and the “hot hand”. Students need to be given adequate time to work with, and appropriate feedback on, the behavior of stochastic systems. Second, and rather obviously, instructors should illustrate and articulate the role of stochastic processes in a range of biological systems, from phenotypic variation and evolutionary events, including the effects of mutations and various non-adaptive processes (such as genetic drift), to de novo gene formation, gene expression, drug-target interactions, and reaction kinetics. Finally, the stochastic behaviors of molecular (and cellular) level processes should be accurately and explicitly illustrated.[3] Among currently available examples are those that illustrate the movement of a water molecule through a membrane, either through an aquaporin molecule or on its own,[4] as well as a PhET applet that illustrates the Elowitz et al. study on stochastic GFP expression in E. coli (and allows for student manipulation of key regulatory parameters).[5]  A simulation of the nature of intermolecular interactions and the role of molecular collisions in their formation has been developed for use with the CLUE chemistry curriculum.[6]
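The gambler’s fallacy itself makes a good first exercise. A seeded sketch like the one below (toy coin flips, nothing biological; the streak length and flip count are arbitrary choices) lets students verify that the flip following a run of five heads is still heads about half the time – the coin has no memory.

```python
import random

random.seed(3)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

# Collect the flip that follows every run of five consecutive heads.
after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                if all(flips[i:i + 5])]

frequency = sum(after_streak) / len(after_streak)
print(f"{len(after_streak)} streaks; heads on the next flip: {frequency:.2f}")
```

Students who hold the fallacy typically predict a frequency well below 0.5 (“tails is due”); running the simulation with different seeds makes the independence of successive events concrete.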

FIG.3 Students are asked to predict the trajectory of a cannon ball (A) or a molecule (B); few students recognize the transition from projectile to Brownian motion (adapted from Klymkowsky et al., 2016).

Students’ understanding of stochastic processes can be revealed to instructors (and their students) through the use of various targeted assessment tools (such as the Biology Concepts Instrument or BCI, the Genetic Drift Instrument, and diagnostic assessments of student thinking about stochastic processes). For example, students can be asked to draw a graph that reflects the movement of a macroscopic projectile (FIG. 3A→) versus a molecular (microscopic) object (FIG. 3B→); such a task can reveal whether students can make the transition from the well-behaved (deterministic) to the stochastic.  Drawing (and explanation) has been used extensively in the analysis of student understanding within the context of the CLUE project (“lost in lewis structures“, “noncovalent interactions“, and “relationships between molecular structure and properties“).  In a similar vein, network dynamics, including the cascade effects driving cell-level divergence and the feedback and regulatory interactions involved in limiting the effects of noise and generating various outcomes (cellular differentiation), can be presented to students (see “Network motifs: simple building blocks of complex networks“, “Using graph-based assessments within socratic tutorials“, and “Noise facilitates transcriptional control under dynamic inputs“).  One can also consider the role of stochastic events within social systems, including responses to various aberrant behaviors (social cheating, cancer), in terms of social feedback mechanisms (apoptosis, positive and negative feedback, lateral inhibition of cell differentiation), and in the context of the decisions involved in stem cell division, differentiation, and cancer formation.  By introducing students to the roles and implications of stochastic processes in biological systems, we can help them develop a coherent understanding of the predictable, but not completely deterministic, nature of such systems.
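The FIG. 3 drawing task can also be turned into a computational exercise. The sketch below is a toy comparison with invented parameters: a deterministic projectile, where repeated runs are identical, against a random walk, where individual paths differ unpredictably yet the mean squared displacement of a population grows in a predictable way (linearly with the number of steps, for unit-variance kicks).

```python
import random

random.seed(4)

def projectile(v, g=9.8, steps=50):
    """Deterministic trajectory: height over one full flight at launch speed v."""
    dt = 2 * v / g / steps
    return [v * (i * dt) - 0.5 * g * (i * dt) ** 2 for i in range(steps + 1)]

def random_walk(steps=50):
    """Brownian-style path: position accumulates random thermal kicks."""
    x, path = 0.0, [0.0]
    for _ in range(steps):
        x += random.gauss(0, 1)
        path.append(x)
    return path

print(projectile(10) == projectile(10))  # identical launches, identical paths
print(random_walk() == random_walk())    # identical starts, different paths

# The population is nonetheless well behaved: mean squared displacement ~ steps.
msd = sum(random_walk()[-1] ** 2 for _ in range(5000)) / 5000
print(f"mean squared displacement over 50 steps: {msd:.1f}")
```

Asking students to predict each printed line before running it probes exactly the projectile-to-Brownian transition the drawing task targets.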

Some minor edits: 5 May 2019 and 2 October 2020 – figures re-inserted 20 June 2020

[1] Scientific animation: protein production and folding

[2] Lac repressor numbers

[3] See this recording of stem cells

[4] Aquaporin and a lipid membrane

[5] PhET gene expression basics  – see panel 3

[6] http://virtuallaboratory.colorado.edu/LDF+binding-interactions/1.2-interactions-0.html

Recognizing scientific illiteracy

This is the first of the blog posts that I prepared for the PLOS science-education blog, a blog that later moved to bioliteracy.  I am a PLOS ONE author and Academic Editor. For more about my work click @ ORCID or my lab website.

Scientific literacy – what it is, how to recognize it, and how to help people achieve it through educational efforts – remains a difficult topic.  The latest attempt to inform the conversation is a recent National Academy report, “Science Literacy: Concepts, Contexts, and Consequences”.  While there is lots of substance to take away from the report, three quotes seem particularly telling to me. The first comes from Roberts [1], who points out that scientific literacy has “become an umbrella concept with a sufficiently broad, composite meaning that it meant both everything, and nothing specific, about science education and the competency it sought to describe.”   The second quote, from the report’s authors, is that “In the field of education, at least, the lack of consensus surrounding science literacy has not stopped it from occupying a prominent place in policy discourse” (p. 2.6).  And finally, “the data suggested almost no relationship between general science knowledge and attitudes about genetically modified food, a potentially negative relationship between biology-specific knowledge and attitudes about genetically modified food, and a small, but negative relationship between that same general science knowledge measure and attitudes toward environmental science” (p. 5.4).


Recognizing the scientifically illiterate

Perhaps it would be useful to consider the question of scientific literacy from a different perspective, namely: how can we recognize a scientifically illiterate person based on what they write or say? What clues imply illiteracy?[1]   To start, let us consider the somewhat simpler situation of standard literacy.  Constructing a literate answer implies two distinct abilities: the respondent needs to be able to accurately interpret what the question asks, and they need to recognize what an adequate answer contains. These are not innate skills; students need feedback and practice in both, particularly when the question is a scientific one. In my own experience with teaching, as well as data collected in the context of an introductory course [2], all too often a student’s answer consists of a single technical term, spoken (or written) as if a word were an argument or explanation. We need a more detailed response in order to accurately judge whether an answer addresses what the question asks (whether it is relevant) and whether it has logical coherence and empirical foundations, information that is traditionally obtained through a Socratic interrogation.[2]  At the same time, an answer’s relevance and coherence serve as a proxy for whether the respondent understood (accurately interpreted) what was being asked of them.

So what is added when we move to scientific literacy; what is missing from the illiterate response?  At the simplest level we are looking for mistakes, irrelevancies, failures in logic, or failures to recognize contradictions within the answer, explanation, or critique. The presence of unnecessary language suggests, at the very least, a confused understanding of the situation.[3]  A second feature of a scientifically illiterate response is a failure to recognize the limits of scientific knowledge; this includes an explicit recognition of the tentative nature of science, combined with the fact that some things are, theoretically, unknowable scientifically.  For example, is “dark matter” real, or might an alternative model of gravity remove its raison d’être?[4]  (see “Dark Matter: The Situation has Changed”)

When people speculate about what existed before the “big bang” or what is happening in various unobservable parts of the multiverse, they have left science for fantasy.  Similarly, speculation on the steps leading to the origin of life on Earth (including what types of organisms, or perhaps better put, living or pre-living systems, existed before the “last universal common ancestor”), the presence of “consciousness” outside of organisms, or the probability of life elsewhere in the universe can be seen as transcending either what is knowable or what is likely to be knowable without new empirical observations.  While this can make scientific pronouncements somewhat less dramatic or engaging, respecting the limits of scientific discourse avoids doing violence to the foundations upon which the scientific enterprise is built.  It is worth being explicit: universal truth is beyond the scope of the scientific enterprise.

The limitations of scientific explanations

Acknowledging the limits of scientific explanations is a marker of understanding how science actually works.  As an example, while a drug may be designed to treat a particular disease, a scientifically literate person would reject the premise that any such drug could, given the nature of its interactions with other molecular targets and physiological systems, be without side effects, and would recognize that these side effects will vary depending upon the features (genetic, environmental, historic, physiological) of the individual taking the drug.  While science knowledge reflects a social consensus, it is constrained by rules of evidence and logic (although this might appear to be anachronistic in the current post-fact and, more recently, alternative-fact age).

Even though certain ideas are well established (Laws of Conservation and Thermodynamics, and a range of evolutionary mechanisms), it is possible to imagine exceptions (and revisions).  Moreover, since scientific inquiry is (outside of some physics departments) about a single common Universe, conclusions from different disciplines cannot contradict one another – such contradictions must inevitably be resolved through modification of one or the other discipline.  A classic example is Lord Kelvin’s estimate of the age of the Earth (~20 to 50 million years) and estimates of the time required for geological and evolutionary processes to produce the observed structure of the Earth and the diversity of life (hundreds of millions to billions of years), a contradiction resolved in favor of an ancient Earth by the discovery of radioactivity.

Scientific illiteracy in the scientific community

There are also suggestions of scientific illiteracy (or perhaps better put, sloppy and/or self-serving thinking) in much of the current “click-bait” approach to the public dissemination of scientific ideas and observations.  All too often, scientific practitioners, who we might expect to be as scientifically literate as possible, abandon the discipline of science to make claims that are over-arching and often self-serving (this is, after all, why peer-review is necessary).

A common example [of scientific illiteracy practiced by scientists and science communicators] is provided by studies of human disease in “model” organisms, ranging from yeasts to non-human primates. While there is no doubt that such studies have been, and continue to be, critical to understanding how organisms work (and are certainly deserving of public and private support), their limitations need to be made explicit. While a mouse that displays behavioral defects (for a mouse) might well provide useful insights into the mechanisms involved in human autism, an autistic mouse may well be a scientific oxymoron.

Discouraging scientific illiteracy within the scientific community is challenging, particularly in the highly competitive, litigious,[5] and high-stakes environment we find ourselves in.[6]  How best to help our students, both within and outside scientific disciplines, avoid scientific illiteracy remains unclear, but the answer is likely to involve establishing a culture of Socratic discourse (as opposed to posturing).  Understanding what a person is saying, what empirical data and assumptions it is based on, and what it implies and/or predicts are necessary features of literate discourse.

Minor edits 23 October 2020; added SH Dark Matter video 15 June 2021. Twitter @mikeklymkowsky

Literature cited:

  1. Roberts, D.A., Scientific literacy/science literacy. I SK Abell & NG Lederman (Eds.). Handbook of research on science education (pp. 729-780). 2007, Mahwah, NJ: Lawrence Erlbaum.
  2. Klymkowsky, M.W., J.D. Rentsch, E. Begovic, and M.M. Cooper, The design and transformation of Biofundamentals: a non-survey introductory evolutionary and molecular biology course. LSE Cell Biol Edu, in press, 2016.
  3. Lee, H.-S., O.L. Liu, and M.C. Linn, Validating measurement of knowledge integration in science using multiple-choice and explanation items. Applied Measurement in Education, 2011. 24(2): p. 115-136.
  4. Henson, K., M.M. Cooper, and M.W. Klymkowsky, Turning randomness into meaning at the molecular level using Muller’s morphs. Biol Open, 2012. 1: p. 405-10.

[1] Assuming, of course, that what a person’s says reflects what they actually think, something that is not always the case.

[2] This is one reason why multiple-choice concept tests consistently over-estimate students’ understanding (see Lee, Liu & Linn, reference 3 above).

[3] We have used this kind of analysis to consider the effect of various learning activities

[4] http://curious.astro.cornell.edu/physics/108-the-universe/cosmology-and-the-big-bang/dark-matter/659-could-a-different-theory-of-gravity-explain-the-dark-matter-mystery-intermediate

[5] http://science.sciencemag.org/content/353/6303/977 and http://www.cjr.org/the_observatory/local_science_fraud_misses_nat.php

[6] See as an example: http://www.nature.com/news/bitter-fight-over-crispr-patent-heats-up-1.17961 and http://www.sciencemag.org/news/2016/03/accusations-errors-and-deception-fly-crispr-patent-fight