When is a gene product a protein and when is it a polypeptide?

On the left is a negatively-stained electron micrograph of a membrane vesicle isolated from the electric ray Torpedo californica, with a single muscle-type nicotinic acetylcholine receptor (AcChR) pointed out. To the right is the structure of the AcChR, determined to NN resolution using cryo-electron microscopy by Rahman, Teng, Worrell, Noviello, Lee, Karlin, Stowell & Hibbs (2020), “Structure of the native muscle-type nicotinic receptor and inhibition by snake venom toxins.”

As a new assistant professor (1), I was called upon to teach my department’s “Cell Biology” course. I found, and still find, the prospect challenging, in part because I am not exactly sure which aspects of cell biology are important for students to know, both in the context of the major and in their lives and subsequent careers.  While it seems possible (at least to me) to lay out a coherent conceptual foundation for biology as a whole [see 1], cell biology can often appear to students as an ununified hodge-podge of terms and disconnected cellular systems, topics too often experienced as a vocabulary lesson rather than as a compelling narrative. As such, I am afraid that the typical cell biology course often reinforces an all too common view of biology as a discipline, a view that, while wrong in most possible ways, was summarized by the 19th/early 20th century physicist Ernest Rutherford as “All science is either physics or stamp collecting.”  A key motivator for the biofundamentals project [2] has been to explore how best to dispel this prejudice, and how to more effectively present to students a coherent narrative and the key foundational observations and ideas by which to scientifically consider living systems, by any measure the most complex systems in the Universe, systems shaped, but not determined by, physicochemical properties and constraints, together with the historical vagaries of evolutionary processes on an ever-changing Earth.

Two types of information:  There is an underlying dichotomy within biological systems: there is the hereditary information encoded in the sequence of nucleotides along double-stranded DNA molecules (genes and chromosomes), and there is the information inherent in the living system itself.  The information in DNA is meaningful only in the context of the living cell, a reaction system that has been running without interruption since the origin of life.  While these two systems are inextricably interconnected, there is a basic difference between them. Cellular systems are fragile; once dead, there is no coming back.  In contrast, the information in DNA can survive death – it can move from cell to cell in the process of horizontal gene transfer.  The Venter group has replaced the DNA of bacterial cells with synthetic genomes in an effort to define the minimal number of genes needed to support life, at least in a laboratory setting [see 3, 4].  In eukaryotes, cloning is carried out by replacing a cell’s DNA with that of another cell (reference).

Conflating protein synthesis and folding with assembly and function: Much of the information stored in a cell’s DNA is used to encode the sequence of various amino acid polymers (polypeptides).  While over-simplified [see 5], students are generally presented with the view that each gene encodes a particular protein through DNA-directed RNA synthesis (transcription) and RNA-directed polypeptide synthesis (translation).  As the newly synthesized polypeptide emerges from the ribosomal tunnel, it begins to fold, and is released into the cytoplasm or inserted into or through a cellular membrane, where it often interacts with one or more other polypeptides to form a protein  [see 6].  The assembled protein is either functional or becomes functional after association with various non-polypeptide co-factors or post-translational modifications.  It is the functional aspect of proteins that is critical, but too often their assembly dynamics are overlooked in the presentation of gene expression/protein synthesis, which is really a combination of distinct processes. 

Students are generally introduced to protein synthesis through the terms primary, secondary, tertiary, and quaternary structure, an approach that can be confusing since many (most) polypeptides are not proteins and many proteins are parts of complex molecular machines [here is the original biofundamentals web page on proteins + a short video][see Teaching without a Textbook]. Consider the nuclear pore complex, a molecular machine that mediates the movement of molecules into and out of the nucleus.  A nuclear pore is “composed of ∼500, mainly evolutionarily conserved, individual protein molecules that are collectively known as nucleoporins (Nups)” [7]. But what is the function of a particular Nup, particularly if it does not exist in significant numbers outside of a nuclear pore?  Is a nuclear pore one protein?  In contrast, the membrane-bound ATP synthase, found in aerobic bacteria and eukaryotic mitochondria, is described as composed “of two functional domains, F1 and Fo. F1 comprises 5 different subunits (three α, three β, and one γ, δ and ε)” while “Fo contains subunits c, a, b, d, F6, OSCP and the accessory subunits e, f, g and A6L” [8].  Are these proteins or subunits? Is the ATP synthase a protein or a protein complex?

Such confusions arise, at least in part, from the primary-quaternary view of protein structure, since the same terms are applied, generally without clarifying distinction, to both polypeptides and proteins. These terms emerged historically. The purification of a protein was based on its activity, which can only be measured for an intact protein. The primary structure of a polypeptide was based on the recognition that DNA-encoded amino acid polymers are unbranched, with a defined sequence of amino acid residues (see Sanger, The chemistry of insulin).  The idea of a polypeptide’s secondary structure was based on the “important constraint that all six atoms of the amide (or peptide) group, which joins each amino acid residue to the next in the protein chain, lie in a single plane” [9], which led Pauling, Corey and Branson [10] to recognize the α-helix and β-sheet as common structural motifs.  When a protein is composed of a single polypeptide, the final folding pattern of the polypeptide is referred to as its tertiary structure; it is apparent in the first protein structure solved, that of myoglobin (↓), by Max Perutz and John Kendrew.

Myoglobin’s role in O2 binding depends upon a non-polypeptide (prosthetic) heme group. So far so good: a gene encodes a polypeptide, and as it folds the polypeptide becomes a protein – nice and simple (2).  Complications arise from the observations that 1) many proteins are composed of multiple polypeptides, encoded by one or more genes, and 2) some polypeptides are parts of different proteins.  Hemoglobin, the second protein whose structure was determined, illustrates the point (←).  Hemoglobin is composed of four polypeptides, two α-globin and two β-globin, encoded by distinct genes.  These polypeptides are related in structure, function, and evolutionary origin to myoglobin, as well as to the cytoglobin and neuroglobin proteins (↓).  In humans, there are a number of distinct α-like and β-like globin genes that are expressed in different hematopoietic tissues during development, so functional hemoglobin proteins can have a number of distinct (albeit similar) subunit compositions and distinct properties, such as their affinities for O2 [see 11].

But the situation often gets more complicated.  Consider centrin-2, a eukaryotic Ca2+-binding polypeptide that plays roles in microtubule organization, cilia assembly, DNA repair, and gene expression [see 12 and references therein].  So, is the centrin-2 polypeptide just a polypeptide, a protein, or a part of a number of other proteins?  As another example, consider the basic helix-loop-helix (bHLH) family of transcription factors; these transcription factor proteins are typically homo- or hetero-dimeric, so are the individual polypeptides proteins in their own right?  The activity of these transcription factors is regulated in part by which binding partners they contain. bHLH polypeptides also interact with the Id polypeptide (or is it a protein?); Id lacks a DNA-binding domain, so when it forms a dimer with a bHLH polypeptide it inhibits DNA binding (↓).  So is a single bHLH polypeptide a protein, or is the protein necessarily a dimer?  More to the point, does the current primary→quaternary view of protein structure help or hinder student understanding of the realities of biological systems?  A potentially interesting bio-education research question.

A recommendation or two:  While under no illusion that the complexities of polypeptide synthesis and protein assembly can be easily resolved, it is surely possible to present them in a more coherent, consistent, and accessible manner.  Here are a few suggestions that might provoke discussion.  First, let us recognize that those genes that encode polypeptides encode polypeptides rather than functional proteins (a reality obscured by the term “quaternary structure”); we might well distinguish a polypeptide from a protein based on the concentration of free monomeric polypeptide (gene product) within the cell.  Second, we need to convey to students the reality that the assembly of a protein is no simple process, particularly within the crowded cytoplasm [13], a complexity hidden by the simple secondary-tertiary structure perspective. While some proteins assemble on their own, many (most?) cannot.


As an example, consider the protein tubulin (↑). As noted by Nithianantham et al. [14], “Five conserved tubulin cofactors/chaperones and the Arl2 GTPase regulate α- and β-tubulin assembly into heterodimers”; the tubulin cofactors TBCD, TBCE, and Arl2 “together assemble a GTP-hydrolyzing tubulin chaperone critical for the biogenesis, maintenance, and degradation of soluble αβ-tubulin.”  Without these various chaperones, the tubulin protein cannot be formed.  Here the distinction between protein and multiprotein complex is clear, since the tubulin protein exists at readily detectable levels within the cell, in contrast to the α- and β-tubulin polypeptides, which are found complexed to the TBCB and TBCA chaperone polypeptides. Of course, the balance between tubulin and tubulin polymers (microtubules) is itself regulated by a number of factors.

The situation is even more complex when we come to the ribosome and other structures, such as the nuclear pore.  Woolford [15] estimates that “more than 350 protein and RNA molecules participate in yeast ribosome assembly, and many more in metazoa”; in addition to the four ribosomal RNAs and ~80 polypeptides (often referred to as ribosomal proteins, synthesized in the cytoplasm and transported into the nucleus in association with various transport factors), these include “assembly factors, including diverse RNA-binding proteins, endo- and exonucleases, RNA helicases, GTPases and ATPases. These assembly factors promote pre-rRNA folding and processing, remodeling of protein–protein and RNA–protein networks, nuclear export and quality control” [16].  While I suspect that some structural components of the ribosome and the nuclear pore may have functions as monomeric polypeptides, and so could be considered proteins, at this point it is best (most accurate) to assume that they are polypeptides, components of proteins and larger molecular machines (past post).

We can, of course, continue to consider the roles of common folding motifs, arising from the chemistry of the peptide bond and the environment within and around the assembling protein, in the context of protein structure [17, 18]. The knottier problem is how to help students recognize how functional entities (proteins and molecular machines, together with the coupled reaction systems that drive them and the molecular interactions that regulate them) function; how mutations, allelic variations, and various environmentally induced perturbations influence the behaviors of cells and organisms; and how these processes generate normal and pathogenic phenotypes. Such a view emphasizes the dynamics of the living state, and the complex flow of information out of DNA into networks of molecular machines and reaction systems.


Acknowledgements: Thanks to Michael Stowell for feedback and suggestions and Jon Van Blerkom for encouragement.  All remaining errors are mine. Post updated to include images in the right places (and to include the cryoEM structure of the AcChR) + minor edits – 16 December 2020.

Footnotes:

  1. Recently emerged from the labs of Martin Raff and Lee Rubin – Martin is one of the founding authors of the transformative “Molecular Biology of the Cell” textbook.
  2. Or rather, quite over-simplistic, as it ignores complexities arising from differential splicing, alternative promoters, and genes encoding RNAs that do not themselves encode polypeptides.

Literature cited (please excuse excessive self-citation – trying to avoid self-plagiarism)

1. Klymkowsky, M.W., Thinking about the conceptual foundations of the biological sciences. CBE Life Science Education, 2010. 9: p. 405-7.

2. Klymkowsky, M.W., J.D. Rentsch, E. Begovic, and M.M. Cooper, The design and transformation of Biofundamentals: a non-survey introductory evolutionary and molecular biology course. LSE Cell Biol Edu, 2016: ar70.

3. Gibson, D.G., J.I. Glass, C. Lartigue, V.N. Noskov, R.-Y. Chuang, M.A. Algire, G.A. Benders, M.G. Montague, L. Ma, and M.M. Moodie, Creation of a bacterial cell controlled by a chemically synthesized genome. Science, 2010. 329(5987): p. 52-56.

4. Hutchison, C.A., R.-Y. Chuang, V.N. Noskov, N. Assad-Garcia, T.J. Deerinck, M.H. Ellisman, J. Gill, K. Kannan, B.J. Karas, and L. Ma, Design and synthesis of a minimal bacterial genome. Science, 2016. 351(6280): p. aad6253.

5. Samandi, S., A.V. Roy, V. Delcourt, J.-F. Lucier, J. Gagnon, M.C. Beaudoin, B. Vanderperre, M.-A. Breton, J. Motard, and J.-F. Jacques, Deep transcriptome annotation enables the discovery and functional characterization of cryptic small proteins. Elife, 2017. 6.

6. Hartl, F.U., A. Bracher, and M. Hayer-Hartl, Molecular chaperones in protein folding and proteostasis. Nature, 2011. 475(7356): p. 324.

7. Kabachinski, G. and T.U. Schwartz, The nuclear pore complex–structure and function at a glance. J Cell Sci, 2015. 128(3): p. 423-429.

8. Jonckheere, A.I., J.A. Smeitink, and R.J. Rodenburg, Mitochondrial ATP synthase: architecture, function and pathology. Journal of inherited metabolic disease, 2012. 35(2): p. 211-225.

9. Eisenberg, D., The discovery of the α-helix and β-sheet, the principal structural features of proteins. Proceedings of the National Academy of Sciences, 2003. 100(20): p. 11207-11210.

10. Pauling, L., R.B. Corey, and H.R. Branson, The structure of proteins: two hydrogen-bonded helical configurations of the polypeptide chain. Proceedings of the National Academy of Sciences, 1951. 37(4): p. 205-211.

11. Hardison, R.C., Evolution of hemoglobin and its genes. Cold Spring Harbor perspectives in medicine, 2012. 2(12): p. a011627.

12. Shi, J., Y. Zhou, T. Vonderfecht, M. Winey, and M.W. Klymkowsky, Centrin-2 (Cetn2) mediated regulation of FGF/FGFR gene expression in Xenopus. Scientific Reports, 2015. 5:10283.

13. Luby-Phelps, K., The physical chemistry of cytoplasm and its influence on cell function: an update. Molecular biology of the cell, 2013. 24(17): p. 2593-2596.

14. Nithianantham, S., S. Le, E. Seto, W. Jia, J. Leary, K.D. Corbett, J.K. Moore, and J. Al-Bassam, Tubulin cofactors and Arl2 are cage-like chaperones that regulate the soluble αβ-tubulin pool for microtubule dynamics. Elife, 2015. 4.

15. Woolford, J., Assembly of ribosomes in eukaryotes. RNA, 2015. 21(4): p. 766-768.

16. Peña, C., E. Hurt, and V.G. Panse, Eukaryotic ribosome assembly, transport and quality control. Nature Structural and Molecular Biology, 2017. 24(9): p. 689.

17. Dobson, C.M., Protein folding and misfolding. Nature, 2003. 426(6968): p. 884.

18. Schaeffer, R.D. and V. Daggett, Protein folds and protein folding. Protein Engineering, Design & Selection, 2010. 24(1-2): p. 11-19.

Molecular machines and the place of physics in the biology curriculum

The other day, through no fault of my own, I found myself looking at the courses required by our molecular biology undergraduate degree program. I discovered a requirement for a 5 credit hour physics course, and a recommendation that this course be taken in the students’ senior year – a point in their studies when most have already completed their required biology courses.  Befuddlement struck me: what was the point of requiring an introductory physics course in the context of a molecular biology major?  Was this an example of time-travel (via wormholes or some other esoteric imagining) in which a physics course in the future impacts a student’s understanding of molecular biology in the past?  I was also struck by the possibility that requiring such a course in the students’ senior year would measurably impact their time to degree.

In a search for clarity and possible enlightenment, I reflected back on my own experiences in an undergraduate biology degree program – as a practicing cell and molecular biologist, I was somewhat confused. I could not put my finger on the purpose of our physics requirement, except perhaps the admirable goal of supporting physics graduate students. But then, after feverish reflections on the responsibilities of faculty in the design of the courses and curricula they prescribe for their students and the more general concepts of instructional (best) practice and malpractice, my mind calmed, perhaps because I was distracted by an article on Oxford Nanopore’s MinION (→), a “portable real-time device for DNA and RNA sequencing” that plugs into the USB port on one’s laptop! Distracted from the potentially quixotic problem of how to achieve effective educational reform at the undergraduate level, I found myself driven on by an insatiable curiosity (or a deep-seated insecurity) to ensure that I actually understood how this latest generation of DNA sequencers worked. This led me to a paper by Meni Wanunu (2012. Nanopores: A journey towards DNA sequencing) [1].  On reading the paper, I found myself returning to my original belief: yes, understanding physics is critical to developing a molecular-level understanding of how biological systems work, BUT it is just not the physics normally inflicted upon (required of) students [2]. Certainly this was no new idea.  Bruce Alberts had written on this topic a number of times, most dramatically in his 1998 paper “The cell as a collection of protein machines” [3].  Rather sadly, and notwithstanding much handwringing about the importance of expanding student interest in, and understanding of, STEM disciplines, not much of substance has occurred in this area. While (some minority of) physics courses may have adopted active engagement pedagogies, in the sense of Hake [4], most insist on teaching macroscopic physics, rather than focusing on, or even considering, the molecular-level physics relevant to biological systems, explicitly the physics of protein machines in a cellular (biological) context. Why sadly? Because conventional, that is non-biologically relevant, introductory physics and chemistry courses all too often serve the role of a hazing ritual, driving many students out of the biological sciences [5], in part, I suspect, because they often seem irrelevant to students’ interests in the workings of biological systems [6].

Nanopore’s sequencer and Wanunu’s article got me thinking again about biological machines, of which there are a great number, ranging from pumps, propellers, and oars to various types of transporters, molecular truckers that move chromosomes, membrane vesicles, and parts of cells with respect to one another, to DNA detanglers, protein unfolders, and molecular recyclers (→).  The Nanopore sequencer works because, as a single strand of DNA (or RNA) moves through a narrow pore, the different bases (A, C, T, G) occlude the pore to different extents, allowing different numbers of ions, that is different amounts of current, to pass through the pore. These current differences can be detected, allowing the nucleotide sequence to be read as the nucleic acid strand moves through the pore. Understanding the process involves understanding how molecules move, that is, the physics of molecular collisions and energy transfer, how proteins and membranes allow and restrict ion movement, and the impact of chemical gradients and electrical fields across a membrane on molecular movements – all physical concepts of widespread significance in biological systems.  Such ideas can be extended to the more general questions of how molecules move within the cell, and the effects of molecular size and inter-molecular interactions within a concentrated solution of proteins, protein polymers, lipid membranes, and nucleic acids, such as described in Oliveira et al., Increased cytoplasm viscosity hampers aggregate polar segregation in Escherichia coli [7].  At the molecular level these processes, while biased by electric fields (potentials) and concentration gradients, are stochastic (noisy). Understanding stochastic processes is difficult for students [8], but critical to developing an appreciation of how such processes can lead to phenotypic differences between cells with the same genotypes (previous post), and how such noisy processes are managed by the cell and within a multicellular organism.
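
To make the stochastic character of such measurements concrete, here is a toy sketch in Python – not Oxford Nanopore’s actual base-calling method, and the current values and noise level are invented purely for illustration. Each base is assigned a mean current, each measurement is jittered by thermal noise, and a naive caller assigns each reading to the nearest mean:

import random

# Toy model: each base occludes the pore differently, giving a different mean
# ionic current (arbitrary, illustrative units -- not real device values).
MEAN_CURRENT = {"A": 60.0, "C": 52.0, "G": 45.0, "T": 38.0}
NOISE_SD = 3.0  # stochastic (thermal) noise on each measurement

def simulate_read(sequence):
    """Return one noisy current measurement per base as it transits the pore."""
    return [random.gauss(MEAN_CURRENT[base], NOISE_SD) for base in sequence]

def call_bases(currents):
    """Naive base caller: pick the base whose mean current is closest."""
    return "".join(
        min(MEAN_CURRENT, key=lambda b: abs(MEAN_CURRENT[b] - i)) for i in currents
    )

true_seq = "GATTACAGATTACA"
called = call_bases(simulate_read(true_seq))
miscalls = sum(a != b for a, b in zip(true_seq, called))
print(called, f"({miscalls} miscalls out of {len(true_seq)})")

Running the sketch a few times makes the point: because each reading is noisy, the same true sequence produces occasional miscalls, which is why real base callers must treat the signal statistically rather than deterministically.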

As path leads on to path, I found myself considering the spear-chucking protein machine (← adapted from Joshi et al., 2017) present in the pathogenic bacterium Vibrio cholerae; this molecular machine is used to inject toxins into neighbors that the bacterium happens to bump into (see Joshi et al., 2017. Rules of Engagement: The Type VI Secretion System in Vibrio cholerae) [9].  The system is complex and acts much like a spring-loaded and rather “inhumane” mouse trap.  It is one of a number of bacterial type VI secretion systems, and “has structural and functional homology to the T4 bacteriophage tail spike and tube” – the molecular machine that injects bacterial cells with the virus’s genetic material, its DNA.

Building the bacterium’s spear-based injection system is controlled by a social (quorum sensing) system (previous post), one of the ways that such organisms determine whether they are alone or living in an environment crowded with other organisms. During the process of assembly, potential energy, derived from various chemically coupled, thermodynamically favorable reactions, is stored in both the type VI “spears” and the contractile (nucleic acid injecting) tails of the bacterial viruses (phage). Understanding the energetics of this process, for example, how coupling thermodynamically favorable chemical reactions, such as ATP hydrolysis, or physicochemical reactions, such as the diffusion of ions down an electrochemical gradient, can be used to set these “mouse traps”, and understanding where the energy goes when the traps are sprung, is central to students’ understanding of these and a wide range of other molecular machines.
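
A minimal numerical sketch of the underlying idea of reaction coupling (the free energy values below are illustrative, textbook-scale numbers, not measurements from any particular molecular machine):

import math

# Reaction coupling in rough numbers (kJ/mol). Forcing a machine into a strained,
# "spring-loaded" state is unfavorable on its own, but coupling that step to ATP
# hydrolysis makes the overall process favorable.
R = 8.314e-3          # gas constant, kJ/(mol*K)
T = 310.0             # roughly physiological temperature, K

dG_loading = +20.0    # hypothetical cost of cocking the "mouse trap"
dG_atp = -50.0        # approximate ATP hydrolysis free energy under cellular conditions

dG_coupled = dG_loading + dG_atp
K_eq = math.exp(-dG_coupled / (R * T))   # equilibrium constant for the coupled process

print(f"Coupled dG = {dG_coupled:+.0f} kJ/mol, K_eq ~ {K_eq:.1e}")
# A net negative dG (K_eq >> 1) means the loaded state is readily reached; the
# ~20 kJ/mol stored in the strained state is what is released when the trap is sprung.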

Energy stored in such molecular machines during their assembly can be used to move the cell. As an example, another bacterial system generates contractile (type IV pili) filaments; the contraction of such a filament can allow “the bacterium to move 10 000 times its own body weight, which results in rapid movement” (see Berry & Pelicic 2015. Exceptionally widespread nanomachines composed of type IV pilins: the prokaryotic Swiss Army knives) [10].  The contraction of such a filament has also been found to be used to import DNA into the cell, the first step in the process of horizontal gene transfer.  In other situations (other molecular machines), such protein filaments harness thermodynamically favorable processes to rotate, acting like a propeller that drives cellular movement.

 

During my biased random walk through the literature, I came across another, but molecularly distinct, machine used to import DNA into Vibrio (see Matthey & Blokesch 2016. The DNA-Uptake Process of Naturally Competent Vibrio cholerae) [11]. This molecular machine enables the bacterium to import DNA from the environment, released, perhaps, from a neighbor killed by its spear.  In this system (adapted from Matthey & Blokesch, 2016 →), the double-stranded DNA molecule is first transported through the bacterium’s outer membrane (“OM”); the DNA’s two strands are then separated, and one strand passes through a channel protein in the inner (plasma) membrane and into the cytoplasm, where it can interact with the bacterium’s genomic DNA.

The value of introducing students to the idea of molecular machines is that it can be used to demystify how biological systems work, how such machines carry out specific functions, whether moving the cell or recognizing and repairing damaged DNA.  If physics matters in the biology curriculum, it matters for this reason: it establishes the core premise of biology that organisms are not driven by “vital” forces, but by prosaic physicochemical ones.  At the same time, the molecular mechanisms behind evolution, such as mutation, gene duplication, and genomic reorganization, provide the means by which new structures emerge from pre-existing ones; yet many is the molecular biology degree program that does not include an introduction to evolutionary mechanisms in its required course sequence – imagine that, requiring physics but not evolution! [see 12].  One final point regarding requiring students to take a biologically relevant physics course early in their degree program: it can be used to reinforce what I think is a critical and often misunderstood point. While biological systems rely on molecular machines, we (and by we I mean all organisms) are NOT machines, no matter what physicists might postulate – see We Are All Machines That Think.  We are something different and distinct. Our behaviors and our feelings, whether ultimately understandable or not, emerge from the interaction of genetically encoded, stochastically driven, non-equilibrium systems, modified through evolutionary, environmental, social, and a range of other unpredictable events occurring in an uninterrupted, and basically undirected, fashion for ~3.5 billion years.  While we are constrained, we are more, in some weird and probably ultimately incomprehensible way.

Footnotes and literature cited

1. Wanunu, M., Nanopores: A journey towards DNA sequencing. Physics of life reviews, 2012. 9: p. 125-158.

2. Klymkowsky, M.W. Physics for (molecular) biology students. 2014  

3. Alberts, B., The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell, 1998. 92: p. 291-294.

4. Hake, R.R., Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. Am. J. Physics, 1998. 66: p. 64-74.

5. Mervis, J., Weed-out courses hamper diversity. Science, 2011. 334: p. 1333-1333.

6. A discussion with Melanie Cooper on what chemistry is relevant to a life science major was a critical driver in our collaboration to develop the chemistry, life, the universe, and everything (CLUE) general chemistry course sequence.  

7. Oliveira, S., R. Neeli‐Venkata, N.S. Goncalves, J.A. Santinha, L. Martins, H. Tran, J. Mäkelä, A. Gupta, M. Barandas, and A. Häkkinen, Increased cytoplasm viscosity hampers aggregate polar segregation in Escherichia coli. Molecular microbiology, 2016. 99: p. 686-699.

8. Garvin-Doxas, K. and M.W. Klymkowsky, Understanding Randomness and its impact on Student Learning: Lessons from the Biology Concept Inventory (BCI). Life Science Education, 2008. 7: p. 227-233.

9. Joshi, A., B. Kostiuk, A. Rogers, J. Teschler, S. Pukatzki, and F.H. Yildiz, Rules of engagement: the type VI secretion system in Vibrio cholerae. Trends in microbiology, 2017. 25: p. 267-279.

10. Berry, J.-L. and V. Pelicic, Exceptionally widespread nanomachines composed of type IV pilins: the prokaryotic Swiss Army knives. FEMS microbiology reviews, 2014. 39: p. 134-154.

11. Matthey, N. and M. Blokesch, The DNA-uptake process of naturally competent Vibrio cholerae. Trends in microbiology, 2016. 24: p. 98-110.

12. Pallen, M.J. and N.J. Matzke, From The Origin of Species to the origin of bacterial flagella. Nat Rev Microbiol, 2006. 4: p. 784-90.

Is a little science a dangerous thing?

Is the popularization of science encouraging a growing disrespect for scientific expertise? 
Do we need to reform science education so that students are better able to detect scientific BS? 

It is common wisdom that popularizing science by exposing the public to scientific ideas is an unalloyed good, bringing benefits both to those exposed and to society at large. Many such efforts are engaging and entertaining, often taking the form of compelling images with quick cuts between excited sound bites from a range of “experts.” A number of science-centered programs, such as PBS’s NOVA series, are particularly adept at, and/or addicted to, this style. Such presentations introduce viewers to natural wonders, and often provide scientific-sounding, albeit often superficial and incomplete, explanations – they appeal to the gee-whiz and inspirational, with “mind-blowing” descriptions of how old, large, and weird the natural world appears to be. But there are darker sides to such efforts. Here I focus on one: the idea that a rigorous, realistic understanding of the scientific enterprise and its conclusions is easy to achieve, a presumption that leads to unrealistic science education standards, an inability to judge when scientific pronouncements are distorted or unsupported, and anti-scientific personal and public policy positions.

That accurate thinking about scientific topics is easy to achieve is an unspoken assumption that informs much of our educational, entertainment, and scientific research system. This idea is captured in the recent NYT best seller “Astrophysics for People in a Hurry” – an oxymoronic presumption. Is it possible for people “in a hurry” to seriously consider the observations and logic behind the conclusions of modern astrophysics? Can they understand the strengths and weaknesses of those conclusions? Is a superficial familiarity with the words used the same as understanding their meaning and possible significance? Is acceptance understanding?  Does such a cavalier attitude toward science encourage unrealistic conclusions about how science works and what is known with certainty versus what remains speculation?  Are the conclusions of modern science actually easy to grasp?
The idea that introducing children to science will lead to an accurate grasp of the underlying concepts involved, their appropriate application, and their limitations is not well supported [1]; often students leave formal education with a fragile and inaccurate understanding – a lesson made explicit in Matt Schneps and Phil Sadler’s Private Universe videos. The feeling that one understands a topic, that science is in some sense easy, undermines respect for those who actually do understand that topic, a situation discussed in detail in Tom Nichols’ “The Death of Expertise.” Underestimating how hard it can be to accurately understand a scientific topic can lead to unrealistic science standards in schools, and often to the trivialization of science education into recognizing words rather than understanding the concepts they are meant to convey.

The fact is, scientific thinking about most topics is difficult to achieve and maintain – that is what editors, reviewers, and other scientists, who attempt to test and extend the observations of others, are for – together they keep science real and honest. Until an observation has been repeated or confirmed by others, it is best regarded as an interesting possibility rather than a scientifically established fact.  Moreover, until a plausible mechanism explaining the observation has been established, it remains a serious possibility that the entire phenomenon will vanish, more or less quietly (think cold fusion). The disappearing physiological effects of “power posing” come to mind. Nevertheless, the incentives to support even disproven results can be formidable, particularly when there is money to be made and egos are on the line.

While power-posing might be helpful to some, even though physiologically useless, there are more dangerous pseudo-scientific scams out there. The gullible may buy into “raw water” (see: Raw water: promises health, delivers diarrhea), but the persistent, and in some groups growing, anti-vaccination movement continues to cause real damage to children (see Thousands of cheerleaders exposed to mumps).  One can ask why professional science groups, such as the American Association for the Advancement of Science (AAAS), have not called for a boycott of NETFLIX, given that NETFLIX continues to distribute the anti-scientific, anti-vaccination program VAXXED [2].  And how do Oprah Winfrey and Donald Trump [link: Oprah Spreads Pseudoscience and Trump and the anti-vaccine movement] avoid universal ridicule for giving credence to ignorant nonsense, and for disparaging the hard-won expertise of the biomedical community?  A failure to accept well-established expertise goes a long way toward explaining the situation. Instead of an appreciation for what we do and do not know about the causes of autism (see: Genetics and Autism Risk & Autism and infection), there are desperate parents who turn to a range of “therapies” promoted by anti-experts. The tragic case of parents trying to cure autism by forcing children to drink bleach (see link) illustrates the seriousness of the situation.

So why do a large percentage of the public ignore the conclusions of disciplinary experts?  I would argue that an important driver is the way that science is taught and popularized [3]. Beyond the obvious fact that a range of politicians and capitalists (in both the West and the East) actively disdain expertise that does not support their ideological or pecuniary positions [4], I would claim that the way we teach science, often focusing on facts rather than processes, largely ignoring the historical progression by which knowledge is established and the various forms of critical analysis to which scientific conclusions are subjected, combines with the way science is popularized to erode respect for disciplinary expertise. Often our education systems fail to convey how difficult it is to attain real disciplinary expertise, in particular the ability to clearly articulate where ideas and conclusions come from and what they do and do not imply. Such expertise is more than a degree; it is a record of rigorous and productive study and useful contributions, and a critical and objective state of mind. Science standards are often heavy on facts, and weak on critical analyses of those ideas and observations that are relevant to a particular process. As Carl Sagan might say, we have failed to train students in how to critically evaluate claims, how to detect baloney (or BS, in less polite terms) [5].

In the area of popularizing scientific ideas, we have allowed hype and over-simplification to capture the flag. To quote from an article by David Berlinski [link: Godzooks], we are continuously bombarded with a range of pronouncements about new scientific observations or conclusions, and there is often a “willingness to believe what some scientists say without wondering whether what they say is true”, or even what it actually means.  No longer is the in-depth, and often difficult and tentative, explanation conveyed; rather, the focus is on the flashy conclusion (independent of its plausibility). Self-proclaimed experts pontificate on topics that are often well beyond their areas of training and demonstrated proficiency – many is the physicist who speaks not only about the completely speculative multiverse, but also about free will and ethical beliefs. Complex and often irreconcilable conflicts between organisms, such as those between mother and fetus (see: War in the womb), male and female (in sexually dimorphic species), and individual liberties and social order, are ignored rather than explicitly recognized and their origins understood. At the same time, there are real pressures acting on scientific researchers (and the institutions they work for) and the purveyors of news to exaggerate the significance and broader implications of their “stories” so as to acquire grants, academic and personal prestige, and clicks.  Such distortions serve to erode respect for scientific expertise (and objectivity).

So where are the scientific referees, the individuals tasked with enforcing the rules of the game: calling a player out of bounds when they leave the playing field (their area of expertise), or calling a foul when rules are broken or bent, as in the fabrication, misreporting, suppression, or over-interpretation of data by the anti-vaccinator Wakefield?  Who is responsible for maintaining the integrity of the game?  For pointing out that many alternative medicine advocates are talking meaningless blather (see: On skepticism & pseudo-profundity)?  Where are the referees who can show these charlatans the “red card” and eject them from the game?

Clearly there are no such referees. Instead, it is necessary to train as large a percentage of the population as possible to be their own science referees – that is, to understand how science works, and to identify baloney when it is slung at them. When a science popularizer, whether for well-meaning or self-serving reasons, steps beyond their expertise, we need to call them out of bounds!  And when scientists run up against the constraints of the scientific process, as appears to occur periodically with theoretical physicists and the occasional neuroscientist (see: Feuding physicists and The Soul of Science), we need to recognize the foul committed.  If our educational system could help develop in students a better understanding of the rules of the scientific game, and why these rules are essential to scientific progress, perhaps we could help re-establish both an appreciation of rigorous scientific expertise and a respect for what it is that scientists struggle to do.



Footnotes and references:

  1. And is it clearly understood that they have nothing to say as to what is right or wrong.
  2.  Similarly, many PBS stations broadcast pseudoscientific infomercials: for example, see Shame on PBS, Brain Scam, and Deepak Chopra’s anti-scientific Brain, Mind, Body, Connection, currently playing on my local PBS station. Holocaust deniers and slavery apologists are confronted much more aggressively.
  3.  As an example, the idea that new neurons are “born” in the adult hippocampus, up to now established orthodoxy, has recently been called into question: see Study Finds No Neurogenesis in Adult Humans’ Hippocampi
  4.  Here is a particularly disturbing example: By rewriting history, Hindu nationalists lay claim to India
  5. Pennycook, G., J. A. Cheyne, N. Barr, D. J. Koehler and J. A. Fugelsang (2015). “On the reception and detection of pseudo-profound bullshit.” Judgment and Decision Making 10(6): 549.

Humanized mice & porcinized people


A new and relevant news link: “US FDA declares genetically modified pork ‘safe to eat‘”

A practical benefit, from a scientific and medical perspective, of the evolutionary unity of life (link) is the set of molecular and cellular similarities between different types of organisms. Even though humans and bacteria diverged more than 2 billion years ago (give or take), the molecular-level conservation of key systems makes it possible for human insulin to be synthesized in and secreted by bacteria, and for pig-derived heart valves to be used to replace defective human heart valves (see link). Similarly, while mice, pigs, and people are clearly different from one another in important ways, they have, essentially, all of the same body parts. Such underlying similarities raise interesting experimental and therapeutic possibilities.

A (now) classic way to study the phenotypic effects of human-specific versions of genes is to introduce these changes into a model organism, such as a mouse (for a review of human brain-specific genes, see link).  An example of such a study involves the gene that encodes the protein foxp2, a protein involved in the regulation of gene expression (a transcription factor). The human foxp2 protein differs from the foxp2 protein in other primates at two positions; these two amino acid changes alter the activity of the human protein, that is, the ensemble of genes that it regulates. That foxp2 has an important role in humans was revealed through studies of individuals in a family that displayed a severe language disorder linked to a mutation that disrupts the function of the foxp2 protein. Individuals carrying this mutant foxp2 allele display speech apraxia, a “severe impairment in the selection and sequencing of fine oral and facial movements, the ability to break up words into their constituent phonemes, and the production and comprehension of word inflections and syntax” (cited in Bae et al., 2015).  Male mice that carry this foxp2 mutation display changes in the “song” that they sing to female mice (1), while mice carrying a humanized form of foxp2 display changes in “dopamine levels, dendrite morphology, gene expression and synaptic plasticity” in a subset of CNS neurons (2).  While there are many differences between mice and humans, such studies suggest that changes in foxp2 played a role in human evolution, and in human speech in particular.

Another way to study the role of human genes using the mouse as a model system is to generate what are known as chimeras, named after the creature in Greek mythology composed of parts of multiple organisms.  A couple of years ago, Goldman and colleagues (3) reported that human glial progenitor cells could, when introduced into immune-compromised mice (to circumvent tissue rejection), displace the mouse’s own glia, replacing them with human glial cells. Glial cells are the major non-neuronal component of the central nervous system. Once thought of as passive “support” cells, the two major types of glia, known as astrocytes and oligodendrocytes, are now known to play a number of important roles in neural functioning [back track post].  In their early studies, they found that the neurological defects associated with the shaker mutation, a mutation that disrupts the normal behavior of oligodendrocytes, could be rescued by the implantation of normal human glial progenitor cells (hGPCs) (4).  Such studies confirmed what was already known, that the shaker mutation disrupts the normal function of myelin, the insulating structure around axons that dramatically speeds the rate at which neuronal signals (action potentials) move down axons and activate the links between neurons (synapses). In the central nervous system, myelin is produced by oligodendrocytes as they ensheath neuronal axons.  Human oligodendrocytes derived from hGPCs displaced the mouse’s mutation-carrying oligodendrocytes and rescued the shaker mouse’s mutation-associated neurological defect.

Subsequently, Goldman and associates used a variant of this approach to introduce hGPCs (derived from human embryonic stem cells) carrying either a normal or a mutant version of the Huntingtin protein, a protein associated with the severe neural disease Huntington’s chorea (OMIM: 143100) (5).  Their studies strongly support a model that traces defects associated with human Huntington’s disease to defects in glia.  This same research group has generated hGPCs from patient-derived, induced pluripotent stem cells (patient-derived HiPSCs). In this case, the patients had been diagnosed with childhood-onset schizophrenia (SCZ) [link] (6).  Skin biopsies were taken from both unaffected children and children diagnosed with SCZ; fibroblasts were isolated and reprogrammed to form human iPSCs. These iPSCs were treated so that they formed hGPCs, which were then injected into mice to generate chimeric (human glial/mouse neuronal) animals. The authors reported systematic differences in the effects of control and SCZ-derived hGPCs; “SCZ glial mice showed reduced prepulse inhibition and abnormal behavior, including excessive anxiety, antisocial traits, and disturbed sleep”, a result that suggests that defects in glial behavior underlie some aspects of the human SCZ phenotype.

The use of human glial chimeric mice provides a powerful research tool for examining the molecular and cellular bases of a subset of human neurological disorders.  Does it raise the question of making mice more human?  Not for me, but perhaps I do not appreciate the more subtle philosophical and ethical issues involved. The mice are still clearly mice: most of their nervous systems are composed of mouse cells, and the overall morphology, size, composition, and organization of their central nervous systems are mouse-derived and mouse-like. The situation becomes rather more complex, and potentially therapeutically useful, when one talks about generating different types of chimeric animals or about using newly developed genetic engineering tools (the CRISPR CAS9 system found in prokaryotes) that greatly simplify and improve the specificity of the targeted manipulation of specific genes (link).  In these studies the animal of choice is not the mouse, but the pig – which, because of its larger size, produces organs for transplantation that are similar in size to the organs of people (see link).  While similar in size, there are two issues that complicate pig-to-human organ transplantation: first, there is the human immune system-mediated rejection of foreign tissue, and second, there is the possibility that transplantation of porcine organs will lead to the infection of the human recipient with porcine retroviruses.

The issue of rejection (pig into human), always a serious problem, is further exacerbated by the presence in pigs of a gene encoding the enzyme α-1,3 galactosyl transferase (GGTA1). GGTA1 catalyzes the addition of the gal-epitope to a number of cell surface proteins. The gal-epitope is “expressed on the tissues of all mammals except humans and subhuman primates, which have antibodies against the epitope” (7). The result is that pig organs provoke an extremely strong immune (rejection) response in humans.  The obvious technical fix to this (and related problems) is to remove the gal-epitope from pig cells by deleting the gene encoding the GGTA1 enzyme (see 8). It is worth noting that “organs from genetically engineered animals have enjoyed markedly improved survivals in non-human primates” (see Sachs & Gall, 2009).

The second obstacle to pig → human transplantation is the presence of retroviruses within the pig genome.  All vertebrate genomes, including those of humans, contain many inserted retroviruses; almost half of the human genome is derived from retroviruses and other mobile genetic elements (an example of unintelligent design if ever there was one). Most of these endogenous retroviruses are “under control” and are normally benign (see 9). The concern, however, is that the retroviruses present in pig cells could be activated when introduced into humans. To remove (or minimize) this possibility, Niu et al. set out to use the CRISPR CAS9 system to delete these porcine endogenous retroviral sequences (PERVs) from the pig genome; they appear to have succeeded, generating a number of genetically modified pigs without PERVs (see 10).  The hope is that organs generated from PERV-free pigs in which antigen-generating genes, such as the gene for α-1,3 galactosyl transferase, have also been removed or inactivated, together with more sophisticated inhibitors of tissue rejection, will lead to an essentially unlimited supply of pig organs that can be used for heart and other organ transplantation (see 11), alleviating delays in transplantation, and so avoiding deaths of sick people and the often brutal and criminal harvesting of organs carried out in some countries.

The final strategy being explored is to use genetically modified hosts and patient-derived iPSCs to generate fully patient-compatible human organs. To date, pilot studies have been carried out, apparently successfully, using rat embryos with mouse stem cells (see 12 and 13), along with much more preliminary studies using pig embryos and human iPSCs (see 14).  The approach involves what are known as chimeric embryos.  In this case, host animals are genetically modified so that they cannot generate the organ of choice. Typically this is done by mutating a key gene that encodes a transcription factor directly involved in formation of the organ; embryos missing the pancreas, kidneys, heart, or eyes can be generated.  In an embryo that cannot make one of these organs, which can be a lethal defect, the introduction of stem cells from an animal that can form the organ can lead to the formation of an organ composed primarily of cells derived from the transplanted (donor) cells.

At this point the strategy appears to work reasonably well for mouse-rat chimeras; mice and rats are much more closely related, evolutionarily, than are humans and pigs. Early studies of pig-human chimeras appear to be dramatically less efficient. At this point, Jun Wu has been reported as saying of human-pig chimeras that “we estimate [each had] about one in 100,000 human cells” (see 15), with the rest being pig cells.  The bottom line appears to be that there are many technical hurdles to overcome before this method of developing patient-compatible human organs becomes feasible.  Closer to reality are PERV-free, gal-antigen-free, pig-derived, human-compatible organs. The reception of such life-saving organs by the general public, not to mention by religious and philosophical groups that reject the consumption of animals in general, or pigs in particular, remains to be seen.

figures reinserted & minor edits 23 October 2020 – new link 17 December 2020.
references cited

  1. A Foxp2 Mutation Implicated in Human Speech Deficits Alters Sequencing of Ultrasonic Vocalizations in Adult Male Mice.
  2. A Humanized Version of Foxp2 Affects Cortico-Basal Ganglia Circuits in Mice
  3. Modeling cognition and disease using human glial chimeric mice.
  4. Human iPSC-derived oligodendrocyte progenitor cells can myelinate and rescue a mouse model of congenital hypomyelination.
  5. Human glia can both induce and rescue aspects of disease phenotype in Huntington disease
  6. Human iPSC Glial Mouse Chimeras Reveal Glial Contributions to Schizophrenia.
  7.  The potential advantages of transplanting organs from pig to man: A transplant Surgeon’s view
  8. see Sachs and Gall. 2009. Genetic manipulation in pigs. and Fisher et al., 2016. Efficient production of multi-modified pigs for xenotransplantation by ‘combineering’, gene stacking and gene editing
  9. Hurst & Magiokins. 2017. Epigenetic Control of Human Endogenous Retrovirus Expression: Focus on Regulation of Long-Terminal Repeats (LTRs)
  10. Niu et al., 2017. Inactivation of porcine endogenous retrovirus in pigs using CRISPR-Cas9
  11. Zhang  2017. Genetically Engineering Pigs to Grow Organs for People
  12. Kobayashi et al., 2010. Generation of rat pancreas in mouse by interspecific blastocyst injection of pluripotent stem cells.
  13. Kobayashi et al., 2015. Targeted organ generation using Mixl1-inducible mouse pluripotent stem cells in blastocyst complementation.
  14. Wu et al., 2017. Interspecies Chimerism with Mammalian Pluripotent Stem Cells
  15. Human-Pig Hybrid Created in the Lab—Here Are the Facts

Reverse Dunning-Kruger effects and science education

The Dunning-Kruger (DK) effect is the well-established phenomenon that people tend to overestimate their understanding of a particular topic or their skill at a particular task, often to a dramatic degree [link][link]. We see examples of the DK effect throughout society; the current administration (unfortunately) and the nutritional supplements / homeopathy section of Whole Foods spring to mind as examples. But there is a less well-recognized “reverse DK” effect, namely the tendency of instructors, and a range of other public communicators, to overestimate what the people they are talking to are prepared to understand, appreciate, and accurately apply. The efforts of science communicators and instructors can be entertaining, but the failure to recognize and address the reverse DK effect results in ineffective educational efforts. These efforts can themselves help generate the illusion of understanding in students and the broader public (discussed here). While a confused understanding of the intricacies of cosmology or particle physics can be relatively harmless in its social and personal implications, similar misunderstandings become personally and publicly significant when topics such as vaccination, alternative medical treatments, and climate change are in play.

There are two synergistic aspects of the reverse DK effect that directly impact science instruction: the need to understand what one’s audience does not understand, together with the need to clearly articulate the conceptual underpinnings needed to understand the subject to be taught. This is in part because modern science has, at its core, become increasingly counter-intuitive over the last 100 years or so, a situation that can cause serious confusions that educators must address directly and explicitly. The first aspect of the reverse DK effect involves the extent to which the instructor (and by implication the course and textbook designer) has an accurate appreciation of what students think or think they know, what ideas they have previously been exposed to, and what they actually understand about the implications of those ideas.  Are they prepared to learn a subject, or does the instructor first have to acknowledge and address conceptual confusions and build or rebuild base concepts?  While the best way to discover what students think is arguably a Socratic discussion, this only rarely occurs, for a range of practical reasons. In its place, a number of concept inventory-type testing instruments have been generated to reveal whether various pre-identified common confusions exist in students’ thinking. Knowing the results of such assessments BEFORE instruction can help customize how the instructor structures the learning environment and the content to be presented, and whether the instructor gives students the space to work with these ideas to develop a more accurate and nuanced understanding of a topic.  Of course, this implies that instructors have the flexibility to adjust the pace and focus of their classroom activities. Do they take the time needed to address student issues, or do they feel pressured to plow through the prescribed course content, come hell, high water, or cascading student befuddlement?

A complementary aspect of the reverse DK effect, well illustrated in the “why magnets attract” interview with the physicist Richard Feynman, is that the instructor, course designer, or textbook author(s) needs to have a deep and accurate appreciation of the underlying core knowledge necessary to understand the topic they are teaching. Such a robust conceptual understanding makes it possible to convey the complexities involved in a particular process, and explicitly values appreciating a topic rather than memorizing it.  It focuses on the general, rather than the idiosyncratic. A classic example from many an introductory biology course is the difference between expecting students to remember the steps in glycolysis or the Krebs cycle and expecting them to grasp the general principles that underlie the non-equilibrium reaction networks involved in all biological functions, networks based on coupled chemical reactions and governed by the behaviors of thermodynamically favorable and unfavorable reactions. Without an explicit discussion of these topics, all too often students are required to memorize names without understanding the underlying rationale driving the processes involved; that is, why the system behaves as it does.  Instructors also give false “rubber band” analogies or heuristics to explain complex phenomena (see Feynman video 6:18 minutes in). A similar situation occurs when considering how molecules come to associate with and dissociate from one another, for example in the process of regulating gene expression or repairing mutations in DNA. Most textbooks simply do not discuss the physicochemical processes involved in binding specificity, association, and dissociation rates, such as the energy changes associated with molecular interactions and thermal collisions (don’t believe me? look for yourself!). But these factors are essential for a student to understand the dynamics of gene expression [link], as well as the specificity of modern methods involved in genetic engineering, such as restriction enzymes, the polymerase chain reaction, and CRISPR CAS9-mediated mutagenesis. By focusing on the underlying processes involved we can avoid their trivialization and enable students to apply basic principles to a broad range of situations. We can understand exactly why CRISPR CAS9-directed mutagenesis can be targeted to a single site within a multibillion-base pair genome.
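
A back-of-the-envelope sketch of that last point (purely illustrative: it ignores PAM requirements, tolerated mismatches, and repetitive sequence, and the genome size is approximate):

genome_size = 3.2e9        # approximate haploid human genome, in base pairs
guide_length = 20          # number of nucleotides specified by a Cas9 guide RNA

possible_guides = 4 ** guide_length                           # ~1.1e12 distinct 20-mers
expected_chance_matches = 2 * genome_size / possible_guides   # search both strands

print(f"{possible_guides:.2e} possible 20-nt sequences")
print(f"~{expected_chance_matches:.4f} perfect matches expected by chance per genome")
# Because 4^20 is hundreds of times larger than the genome, a random 20-nt guide is
# expected to match essentially nowhere else; this combinatorial argument is the basis
# of CRISPR's targeting specificity (real off-target behavior is more forgiving, since
# some mismatches are tolerated).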

Of course, as in the case of recognizing and responding to student misunderstandings and knowledge gaps, a thoughtful consideration of underlying processes takes course time, time that trades broader "coverage" of frequently disconnected facts, the memorization and regurgitation of which has too often been privileged over understanding why those facts are worth knowing, for the development of a working understanding of core processes and principles. If our goal is for students to emerge from a course with an accurate understanding of the basic processes involved rather than a superficial familiarity with a plethora of unrelated facts, however, a Socratic interaction with the topic is essential. What assumptions are being made, where do they come from, how do they constrain the system, and what are their implications?  Do we understand why the system behaves the way it does? In this light, it is a serious educational mystery that many molecular biology / biochemistry curricula fail to introduce students to the range of selective and non-selective evolutionary mechanisms (including social and sexual selection – see link), that is, the processes that have shaped modern organisms.

Both aspects of the reverse DK effect impact educational outcomes. Overcoming the reverse DK effect depends on educational institutions committing to effective and engaging course design, measured in terms of retention, time to degree, and a robust inquiry into actual student learning. Such an institutional dedication to effective course design and delivery is necessary to empower instructors and course designers. These individuals bring a deep understanding of the topics taught, their conceptual foundations, and their historic development to their students AND must have the flexibility and authority to alter the pace (and design) of a course or a curriculum when they discover that their students lack the pre-existing expertise necessary for learning or that the course materials (textbooks) do not present or emphasize necessary ideas. Unfortunately, all too often instructors, particularly in introductory level college science courses, are not the masters of their ships; that is, they are not rewarded for generating more effective course materials. An emphasis on course "coverage" over learning, whether through peer-pressure, institutional apathy, or both, generates unnecessary obstacles to both student engagement and content mastery.  To counter the reverse DK effect, we need to encourage instructors, course designers, and departments to see the presentation of core disciplinary observations and concepts as the intellectually challenging and valuable endeavor that it is. In its absence, there are serious (and growing) pressures to trivialize or obscure the educational experience – leading to the socially- and personally-damaging growth of fake knowledge.

empty image holders removed, new image added – 17 December 2020

Is it time to start worrying about conscious human “mini-brains”?

A human iPSC cerebral organoid in which pigmented retinal epithelial cells can be seen (from the work of McClure-Begley et al.).  Also see "Can lab-grown brains become conscious?" by Sara Reardon, Nature 2020.

The fact that experiments on people are severely constrained is a major obstacle in understanding human development and disease.  Some of these constraints are moral and ethical and clearly appropriate and necessary given the depressing history of medical atrocities.  Others are technical, associated with the slow pace of human development. The combination of moral and technical factors has driven experimental biologists to explore the behavior of a wide range of “model systems” from bacteria, yeasts, fruit flies, and worms to fish, frogs, birds, rodents, and primates.  Justified by the deep evolutionary continuity between these organisms (after all, all organisms appear to be descended from a single common ancestor and share many molecular features), experimental evolution-based studies of model systems have led to many therapeutically valuable insights in humans – something that I suspect a devotee of intelligent design creationism would be hard pressed to predict or explain (post link).

While humans are closely related to other mammals, it is immediately obvious that there are important differences – after all, people are instantly distinguishable from members of other closely related species and certainly look and behave differently from mice. For example, the surface layer of our brains is extensively folded (such brains are known as gyrencephalic), while the brain of a mouse is smooth as a baby's bottom (and referred to as lissencephalic). In humans, the failure of the brain cortex to fold is known as lissencephaly, a disorder associated with severe neurological defects. With the advent of more and more genomic sequence data, we can identify human-specific molecular (genomic) differences. Many of these sequence differences occur in regions of our DNA that regulate when and where specific genes are expressed.  Sholtis & Noonan (1) provide an example: the HACNS1 locus is an 81 base pair region that is highly conserved in various vertebrates from birds to chimpanzees; there are 13 human-specific changes in this sequence that appear to alter its activity, leading to human-specific changes in the expression of nearby genes (↓). At this point ~1000 genetic elements that differ between humans and other vertebrates have been identified, and more are likely to emerge (2).  Such human-specific changes can make modeling human-specific behaviors, at the cellular, tissue, organ, and organism level, in non-human model systems difficult and problematic (3, 4).   It is for this reason that scientists have attempted to generate better human-specific systems.

[Figure: human sequence divergence]

One particularly promising approach is based on what are known as embryonic stem cells (ESCs) or pluripotent stem cells (PSCs). Human embryonic stem cells are generated from the inner cell mass of a human embryo, and so involve the destruction of that embryo – which raises a number of ethical and religious concerns as to when "life begins" (5).  Human pluripotent stem cells are isolated from adult tissues, but in most cases require invasive harvesting methods that limit their usefulness.  Both ESCs and PSCs can be grown in the laboratory and can be induced to differentiate into what are known as gastruloids.  Such gastruloids can develop anterior-posterior (head-tail), dorsal-ventral (back-belly), and left-right axes analogous to those found in embryos (6) and adults (top panel ↓). In the case of PSCs, the gastruloid (bottom panel ↓) is essentially a twin of the organism from which the PSCs were derived, a situation that raises difficult questions: is it a distinct individual? is it the property of the donor or the creation of a technician?  The situation will be further complicated if (or rather, when) it becomes possible to generate viable embryos from such gastruloids.

[Figure: Axes]

[Figure: gastruloid-embryo comparison]

The Nobel prize-winning work of Kazutoshi Takahashi and Shinya Yamanaka (7), who devised methods to take differentiated (somatic) human cells and reprogram them into ESC/PSC-like cells, known as induced pluripotent stem cells (iPSCs) (8), represented a technical breakthrough that jump-started this field. While the original methods derived sample cells from tissue biopsies, it is possible to reprogram kidney epithelial cells recovered from urine, a non-invasive approach (9, 10).  Subsequently, Madeline Lancaster, Jürgen Knoblich, and colleagues devised an approach by which such cells could be induced to form what they termed "cerebral organoids" (although Yoshiki Sasai and colleagues were the first to generate neuronal organoids); they used this method to examine the developmental defects associated with microcephaly (11).  The value of the approach was rapidly recognized, and it has been applied to a number of human conditions, including lissencephaly (12), Zika virus infection-induced microcephaly (13), and Down syndrome (14); investigators have begun to exploit these methods to study a range of human diseases – and rapid technological progress is being made.

The production of cerebral organoids from reprogrammed human somatic cells has also attracted the attention of the media (15).  While "mini-brain" is certainly a catchier name, it is a less accurate description of a cerebral organoid, itself possibly a bit of an overstatement, since it is not clear exactly how "cerebral" such organoids are. For example, the developing brain is patterned by embryonic signals that establish its asymmetries; it forms at the anterior end of the neural tube (the nascent central nervous system and spinal cord), with distinctive anterior-posterior, dorsal-ventral, and left-right asymmetries, something that simple cerebral organoids do not display.  Moreover, current methods for generating cerebral organoids involve primarily what are known as neuroectodermal cells – our nervous system (and that of other vertebrates) is a specialized form of the embryo's surface layer that gets internalized during development. In the embryo, the developing neuroectoderm interacts with cells of the circulatory system (capillaries, veins, and arteries), formed by endothelial cells and the pericytes that surround them. These cells, together with glial cells (astrocytes, a non-neuronal cell type), combine to form the blood-brain barrier.  Other glial cells (oligodendrocytes) are also present; in contrast, both types of glia (astrocytes and oligodendrocytes) are rare in the current generation of cerebral organoids. Finally, there are microglial cells, immune system cells that originate from outside the neuroectoderm; they invade and interact with neurons and glia as part of the brain's dynamic cellular system. [Figure: capillary and neurons] The left panel of the figure shows, in highly schematic form, how these cells interact (16). The right panel is a drawing of neural tissue stained by the Golgi method (17), which reveals ~3-5% of the neurons present. There are at least as many glial cells present, as well as microglia, none of which are visible in the image. At this point, cerebral organoids typically contain few astrocytes and oligodendrocytes, no vasculature, and no microglia. Moreover, they grow to be about 1 to 3 mm in diameter over the course of 6 to 9 months; that is significantly smaller in volume than a fetal or newborn brain. While cerebral organoids can generate structures characteristic of retinal pigment epithelia (top figure) and photo-responsive neurons (18), such as those associated with the retina, an extension of the brain, it is not at all clear that there is any significant sensory input into the neuronal networks that form within a cerebral organoid, or any significant outputs, at least compared to the role that the human brain plays in controlling bodily and mental functions.

The reasonable question, then, must be whether a  cerebral organoid, which is a relatively simple system of cells (although itself complex), is conscious. It becomes more reasonable as increasingly complex systems are developed, and such work is proceeding apace. Already researchers are manipulating the developing organoid’s environment to facilitate axis formation, and one can anticipate the introduction of vasculature. Indeed, the generation of microglia-like cells from iPSCs has been reported; such cells can be incorporated into cerebral organoids where they appear to respond to neuronal damage in much the same way as microglia behave in intact neural tissue (19).

We can ask ourselves, what would convince us that a cerebral organoid, living within a laboratory incubator, was conscious? How would such consciousness manifest itself? Through some specific pattern of neural activity, perhaps?  As a biologist, albeit one primarily interested in molecular and cellular systems, I discount the idea, proposed by some physicists and philosophers, as well as the more mystical, that consciousness is a universal property of matter (20, 21).  I take consciousness to be an emergent property of complex neural systems, generated by evolutionary mechanisms, built during embryonic and subsequent development, and influenced by social interactions (BLOG LINK), using information encoded within the human genome (something similar to this: A New Theory Explains How Consciousness Evolved). While a future concern in a world full of more immediate and pressing issues, it will be interesting to listen to the academic, social, and political debate on what to do with mini-brains as they grow in complexity and, perhaps inevitably, toward consciousness.

Footnotes and references

Thanks to Rebecca Klymkowsky, Esq. and Joshua Sanes, Ph.D. for editing and disciplinary support. Minor updates and the reintroduction of figures 22 Oct. 2020.

  1. Gene regulation and the origins of human biological uniqueness
  2.  See also Human-specific loss of regulatory DNA and the evolution of human-specific traits
  3. The mouse trap
  4. Mice Fall Short as Test Subjects for Some of Humans' Deadly Ills
  5. The status of the human embryo in various religions
  6. Interactions between Nodal and Wnt signalling Drive Robust Symmetry Breaking and Axial Organisation in Gastruloids (Embryonic Organoids)
  7.  Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors
  8.  How iPS cells changed the world
  9.  Generation of Induced Pluripotent Stem Cells from Urine
  10. Urine-derived induced pluripotent stem cells as a modeling tool to study rare human diseases
  11. Cerebral organoids model human brain development and microcephaly.
  12. Human iPSC-Derived Cerebral Organoids Model Cellular Features of Lissencephaly and Reveal Prolonged Mitosis of Outer Radial Glia
  13. Using brain organoids to understand Zika virus-induced microcephaly
  14. Probing Down Syndrome with Mini Brains
  15. As an example, see The Beauty of “Mini Brains”
  16. Derived from Central nervous system pericytes in health and disease
  17. Golgi's method
  18. Cell diversity and network dynamics in photosensitive human brain organoids
  19. Efficient derivation of microglia-like cells from human pluripotent stem cells
  20. The strange link between the human mind and quantum physics – BBC
  21. Can Quantum Physics Explain Consciousness?

Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically-based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of biological evolutionary mechanisms, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led to the rejection of Darwinian principles by many working scientists around the turn of the 20th century (see Bowler's The Eclipse of Darwinism + link).  Educational research studies, such as our own "Understanding randomness and its impact on student learning", reinforce the fact that ideas involving the stochastic processes relevant to evolutionary, as well as cellular and molecular, biology are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular and cellular level processes, including the origin of various diseases, particularly cancer [Cancer is partly caused by bad luck] (1).

[Figure: Teach Evolution]

All of which leaves the critical question, at least for educators, of how to best teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals”, on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather that they develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.


There are a number of educational strategies, in part depending upon one’s disciplinary perspective, on how to approach teaching evolutionary processes. Here I consider one, based on my background in cell and molecular biology.  Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.”  It is one of a number of  web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC/Gnomad Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend the range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals / coreBio course (with an evolving text available here), we originally used the observation that members of our own primate suborder, the Haplorhini or dry-nosed primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet. Without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral Haplorhini population lost the L-gulonolactone oxidase (GULO) gene (see OMIM) necessary for vitamin C synthesis. The remains of the GULO gene found in human and other Haplorhini genomes are mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population's diet, the mutational loss of the GULO gene would be benign, that is, not strongly selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene's function has been lost due to mutation, it is unlikely, although not impossible, that a subsequent mutation will lead to the repair of the gene. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are to repair it. As the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nosed) group of primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
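To give students a feel for how chance alone can eliminate (or fix) a selectively neutral variant in a small population, a short simulation is often more convincing than prose. The sketch below is a minimal, hypothetical illustration (not part of the original text) of Wright-Fisher style drift; the population size, starting frequency, and number of generations are arbitrary choices made only for demonstration.

```python
# Illustrative sketch: neutral genetic drift in a small population. A selectively
# neutral variant (think of a broken GULO allele in a vitamin C-rich environment)
# is lost or fixed purely by chance. All parameter values are hypothetical.
import random

def wright_fisher(pop_size=50, start_freq=0.5, generations=500):
    """Return the final frequency of a neutral allele after random sampling each generation."""
    freq = start_freq
    for _ in range(generations):
        # each of the pop_size gene copies in the next generation is drawn at random
        copies = sum(1 for _ in range(pop_size) if random.random() < freq)
        freq = copies / pop_size
        if freq in (0.0, 1.0):  # lost or fixed; no way back without a new mutation
            break
    return freq

random.seed(1)
outcomes = [wright_fisher() for _ in range(1000)]
fixed = sum(1 for f in outcomes if f == 1.0)
lost = sum(1 for f in outcomes if f == 0.0)
print(f"fixed: {fixed}, lost: {lost}, still segregating: {1000 - fixed - lost}")
```

Running such a simulation a few times makes the key point viscerally: in a small population, every neutral variant eventually drifts to loss or fixation, and which outcome occurs is a matter of chance, not fitness.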

At this point we can start to use Genomicus to examine the results of evolutionary processes (see a YouTube video on using Genomicus) (3).  In Genomicus, a gene is indicated by a pointed box; for simplicity, all genes are drawn as if they were the same size (they are not); different genes are given different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenetic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes) are found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).

We find a version of the GULO gene in single-celled eukaryotes, such as baker's yeast, which appear to have diverged from other eukaryotes about 1,500,000,000 years ago (1500 million years ago, abbreviated Mya).  Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.

[Figure: GULO phylogeny (mouse)]
Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – Whoa!!! what gives?  The explanation is, it turns out, rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren't they?  We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily-related) chromosomes of other Haplorhini primates.

[Figure: synteny around the GULO region]
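The logic of that comparison is simple enough to capture in a few lines of code. The sketch below is purely illustrative (the gene names are hypothetical placeholders, not actual Genomicus output): it asks which genes in the neighborhood of mouse GULO are also found, as a block, on a human chromosome, which is exactly how a conserved neighborhood lets us locate the homologous region even though the human GULO gene itself survives only as a non-functional remnant.

```python
# Toy sketch of the synteny argument (hypothetical gene names, not real data):
# conserved flanking genes identify the homologous region; the missing GULO within
# an otherwise intact block points to gene loss in the Haplorhini lineage.

mouse_chromosome = ["geneA", "geneB", "GULO", "geneC", "geneD"]    # hypothetical gene order
human_chromosome8 = ["geneA", "geneB", "geneC", "geneD", "geneE"]  # GULO absent/broken

def shared_neighborhood(region_a, region_b):
    """Genes present in both regions, listed in the order they occur in region_a."""
    present = set(region_b)
    return [gene for gene in region_a if gene in present]

conserved = shared_neighborhood(mouse_chromosome, human_chromosome8)
print("conserved (syntenic) genes:", conserved)
print("GULO conserved?", "GULO" in conserved)
```

In real genomes the bookkeeping is messier (inversions, insertions, gene families), which is precisely what tools like Genomicus handle for us; the underlying inference, however, is the one captured here.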

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships within the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share a similar syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region.  Similarly, we can explain the genomic similarities between those primates closely related to humans, such as the gorilla, gibbon, orangutan, and chimpanzee, as well as make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils [link].

It remains to be seen how best to use these tools in a classroom context, and whether having students use such tools influences their working understanding and, more generally, their acceptance of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and allelic variations in behavior, anatomy, and disease susceptibility.

Some footnotes (figures reinserted 2 November 2020, with minor edits)

(1) Interested in a magnetic bumper image? visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missed (unpredicted and unexplained) by any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (now at UC Berkeley) for bringing it to my attention.

The trivialization of science education

It’s time for universities to accept their role in scientific illiteracy.  

There is a growing problem with scientific illiteracy, and its close relative, scientific over-confidence. While understanding science, by which most people seem to mean technological skills, or even the ability to program a device (1), is purported to be a critical competitive factor in our society, we see a parallel explosion of pseudo-scientific beliefs, often religiously held.  Advocates of a gluten-free paleo-diet battle it out with orthodox vegans for a position on the Mount Rushmore of self-righteousness, while astronomers and astrophysicists rebrand themselves as astrobiologists (a currently imaginary discipline) and a subset of theoretical physicists, and the occasional evolutionary biologist, claim to have rendered ethicists and philosophers obsolete (oh, if it were only so). There are many reasons for this situation, most of which are probably innate to the human condition.  Our roots are in the vitamin C-requiring Haplorhini (dry-nosed) primates; we did not evolve to think scientifically, and scientific thinking does not come easily to most of us, or to any of us over long periods of time (2). That the sciences are referred to as disciplines reflects this: it requires constant vigilance, self-reflection, and the critical skepticism of knowledgeable colleagues to build coherent, predictive, and empirically validated models of the Universe (and ourselves).  In point of fact, it is amazing that our models of the Universe have become so accurate, particularly as they are counter-intuitive and often seem incredible, in the true meaning of the word.

Many social institutions claim to be in the business of developing and supporting scientific literacy and disciplinary expertise, most obviously colleges and universities.  Unfortunately, there are several reasons to question the general efficacy of their efforts and several factors that have led to this failure. There is the general tendency (although exactly how widespread is unclear; I cannot find appropriate statistics on this question) to require non-science students to take one, two, or more "natural science" courses, often with associated laboratory sections, as a way to "enhance literacy and knowledge of one or more scientific disciplines, and enhance those reasoning and observing skills that are necessary to evaluate issues with scientific content" (source).

That such a requirement will "enable students to understand the current state of knowledge in at least one scientific discipline, with specific reference to important past discoveries and the directions of current development; to gain experience in scientific observation and measurement, in organizing and quantifying results, in drawing conclusions from data, and in understanding the uncertainties and limitations of the results; and to acquire sufficient general scientific vocabulary and methodology to find additional information about scientific issues, to evaluate it critically, and to make informed decisions" (source) suggests a rather serious level of faculty/institutional disdain or apathy for observable learning outcomes, devotional levels of wishful thinking, or simple hubris.  To my knowledge there is no objective evidence to support the premise that such requirements achieve these outcomes – which renders the benefits of such requirements problematic, to say the least (link).

On the other hand, such requirements have clear and measurable costs, going beyond the simple burden of added and potentially ineffective or off-putting course credit hours. The frequent requirement for multi-hour laboratory courses impacts the ability of students to schedule courses.  It would be an interesting study to examine how, independently of benefit, such laboratory course requirements impact students' retention and time to degree; that is, bluntly put, the costs to students and their families.

Now, if there were objective evidence that taking such courses improved students’ understanding of a specific disciplinary science and its application, perhaps the benefit would warrant the cost.  But one can be forgiven if one assumes a less charitable driver, that is, science departments’ self-interest in using laboratory and other non-major course requirements as means to support graduate students.  Clearly there is a need for objective metrics for scientific, that is disciplinary, literacy and learning outcomes.

And this brings up another cause for concern.  Recently, there has been a movement within the science education research community to attempt to quantify learning in terms of what are known as “forced choice testing instruments;” that is, tests that rely on true/false and multiple-choice questions, an actively anti-Socratic strategy.  In some cases, these tests claim to be research based.  As one involved in the development of such a testing instrument (the Biology Concepts Instrument or BCI), it is clear to me that such tests can serve a useful role in helping to identify areas in which student understanding is weak or confused [example], but whether they can provide an accurate or, at the end of the day, meaningful measure of whether students have developed an accurate working understanding of complex concepts and the broader meaning of observations is problematic at best.

Establishing such a level of understanding relies on Socratic, that is, dynamic and adaptive evaluations: can the learner clearly explain, either to other experts or to other students, the source and implications of their assumptions?  This is the gold standard for monitoring disciplinary understanding. It is being increasingly side-lined by those who rely on forced choice tests to evaluate learning outcomes and to support their favorite pedagogical strategies (examples available upon request).  In point of fact, it is often difficult to discern, in most science education research studies, what students have come to master, what exactly they know, what they can explain and what they can do with their knowledge. Rather unfortunately, this is not a problem restricted to non-majors taking science course requirements; majors can also graduate with a fragmented and partially, or totally, incoherent understanding of key ideas and their empirical foundations.

So what are the common features of a functional understanding of a particular scientific discipline, or more accurately, a sub-discipline?  A few ideas seem relevant.  A proficient practitioner needs to be realistic about their own understanding.  We need to teach disciplinary (and general) humility – no one actually understands all aspects of most scientific processes.  This is a point made by Fernbach & Sloman in their recent essay, "Why We Believe Obvious Untruths."  Humility about our understanding has a number of beneficial aspects.  It helps keep us skeptical when faced with, and asked to accept, sweeping generalizations.

Such skepticism is part of a broader perspective, common among working scientists, namely the ability to distinguish the obvious from the unlikely, the implausible, and the impossible. When considering a scientific claim, the first criterion is whether there is a plausible mechanism that can be called upon to explain it, or whether it violates some well-established "law of nature". Claims of "zero waste" processes butt up against the laws of thermodynamics.

Going further, we need to consider how an observation or conclusion fits with other well-established principles, which means that we have to be aware of these principles, as well as acknowledging that we are not universal experts in all aspects of science.  A molecular biologist may recognize that quantum mechanics dictates the geometries of atomic bonding interactions without being able to formally describe the intricacies of the molecule's wave equation. Similarly, a physicist might think twice before ignoring the evolutionary history of a species and claiming that quantum mechanics explains consciousness, or that consciousness is a universal property of matter.  Such a level of disciplinary expertise can take extended experience to establish, but it is critical to conveying to students what disciplinary mastery involves; it is the major justification for having disciplinary practitioners (professors) as instructors.

From a more prosaic educational perspective, other key factors need to be acknowledged, namely a realistic appreciation of what people can learn in the time available to them, together with an understanding of at least some of their underlying motivations. Which is to say that the relevance of a particular course to disciplinary goals or desired educational outcomes needs to be made explicit and as engaging as possible, or at least not overtly off-putting – something that can happen when a poor unsuspecting molecular biology major takes a course in macroscopic physics, taught by an instructor who believes organisms are deducible from first principles based on the conditions of the big bang.  Respecting the learner requires that we explicitly acknowledge that an unbridled thirst for an empirical, self-critical mastery of a discipline is not a basic human trait, although it is something that can be cultivated, and may emerge given proper care.  Understanding the real constraints that act on meaningful learning can help focus courses on what is foundational, and help eliminate the irrelevant or the excessively esoteric.

Unintended consequences arise from “pretending” to teach students, both majors and non-science majors, science. One is an erosion of humility in the face of the complexity of science and our own limited understanding, a point made in a recent National Academy report that linked superficial knowledge with more non-scientific attitudes. The end result is an enhancement of what is known as the Kruger-Dunning effect, the tendency of people to seriously over-estimate their own expertise: “the effect describes the way people who are the least competent at a task often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill”.

A person with a severe case of Kruger-Dunning-itis is likely to lose respect for people who actually know what they are talking about. The importance of true expertise is further eroded and trivialized by the current trend of having photogenic and well-spoken experts in one domain pretend to talk, or rather to pontificate, authoritatively on another (3).  In a world of complex and arcane scientific disciplines, the role of a science guy or gal can promote rather than dispel scientific illiteracy.

We see the effects of the lack of scientific humility when people speak outside of their domain of established expertise to make claims of certainty, a common feature of the conspiracy theorist.  An oft used example is the claim that vaccines cause autism (they don’t), when the actual causes of autism, whether genetic and/or environmental, are currently unknown and the subject of active scientific study.  An honest expert can, in all humility, identify the limits of current knowledge as well as what is known for certain.  Unfortunately, revealing and ameliorating the levels of someone’s Kruger-Dunning-itis involves a civil and constructive Socratic interrogation, something of an endangered species in this day and age, where unseemly certainty and unwarranted posturing have replaced circumspect and critical discourse.  Any useful evaluation of what someone knows demands the time and effort inherent in a Socratic discourse, the willingness to explain how one knows what one thinks one knows, together with a reflective consideration of its implications, and what it is that other trained observers, people demonstrably proficient in the discipline, have concluded. It cannot be replaced by a multiple choice test.

Perhaps what is needed is a new (old) model that encourages in students, as well as politicians and pundits, an understanding of where science comes from, the habits of mind involved, and the limits of, and constraints on, our current understanding.  At the college level, courses that replace superficial familiarity and unwarranted certainty with humble self-reflection and intellectual modesty might help treat the symptoms of Kruger-Dunning-itis, even though the underlying disease may be incurable, and perhaps genetically linked to other aspects of human neuronal processing.


some footnotes:

  1. After all, why else would rather distinct disciplines be lumped together as STEM (science, technology, engineering, and mathematics)?
  2. Given the long history of Homo sapiens before the appearance of science, it seems likely that such patterns of thinking are an unintended consequence of selection for some other trait, and of the subsequent emergence of a (perhaps excessively) complex and self-reflective nervous system.
  3. Another example of Neil Postman's premise that education is being replaced by edutainment (see "Amusing Ourselves to Death").

Go ahead and “teach the controversy:” it is the best way to defend science.

as long as teachers understand the science and its historical context

The role of science in modern societies is complex. Science-based observations and innovations drive a range of economically important, as well as socially disruptive, technologies. Many opinion polls indicate that the American public "supports" science, while at the same time rejecting rigorously established scientific conclusions on topics ranging from the safety of genetically modified organisms and the absence of any link between vaccines and autism to the effects of burning fossil fuels on the global environment [Pew: Views on science and society]. Given that a foundational principle of science is that the natural world can be explained without calling on supernatural actors, it remains surprising that a substantial majority of people appear to believe that supernatural entities are involved in human evolution [as reported by the Gallup organization], although the theistic percentage has been dropping (a little) of late.

[Figure: God+evolution]

This situation highlights the fact that when science intrudes on the personal or the philosophical (within which I include the theological and the ideological), many people are willing to abandon the discipline of science to embrace explanations based on personal beliefs. These include the belief in a supernatural entity that cares for people, at least enough to create them, and the conviction that there are reasons behind life's tragedies.


Where science appears to conflict with various non-scientific positions, the public has pushed back and rejected the scientific. This is perhaps best represented by the recent spate of “teach the controversy” legislative efforts, primarily centered on evolutionary theory and the realities of anthropogenic climate change [see Nature: Revamped ‘anti-science’ education bills]. We might expect to see, on more politically correct campuses, similar calls for anti-GMO, anti-vaccination, or gender-based curricula. In the face of the disconnect between scientific and non-scientific (philosophical, ideological, theological) personal views, I would suggest that an important part of the problem has didaskalogenic roots; that is, it arises from the way science is taught – all too often expecting students to memorize terms and master various heuristics (tricks) to answer questions rather than developing a self-critical understanding of ideas, their origins, supporting evidence, limitations, and practice in applying them.

Science is a social activity, based on a set of accepted core assumptions; it is not so much concerned with Truth, which could, in fact, be beyond our comprehension, but rather with developing a universal working knowledge, composed of ideas based on empirical observations that expand in their explanatory power over time to allow us to predict and manipulate various phenomena.  Science is a product of society rather than isolated individuals, but only rarely is the interaction between the scientific enterprise and its social context articulated clearly enough so that students and the general public can develop an understanding of how the two interact.  As an example, how many people appreciate the larger implications of the transition from an Earth-centered to a Sun- or galaxy-centered cosmology?  All too often students are taught about this transition without regard to its empirical drivers and philosophical and sociological implications, as if the opponents at the time were benighted religious dummies. Yet, how many students or their teachers appreciate that, as originally presented, the Copernican system had more hypothetical epicycles and related Rube Goldberg-esque kludges, introduced to make the model accurate, than the competing Earth-centered Ptolemaic system? Do students understand how Kepler's recognition of elliptical orbits eliminated the need for such artifices and set the stage for Newtonian physics?  And how did the expulsion of humanity from the center to the periphery of things influence people's views on humanity's role and importance?


So how can education adapt to better help students and the general public develop a more realistic understanding of how science works?  To my mind, teaching the controversy is a particularly attractive strategy, on the assumption that teachers have a strong grounding in the discipline they are teaching, something that many science degree programs do not achieve, as discussed below. For example, a common attack against evolutionary mechanisms relies on a failure to grasp the power of variation, arising from stochastic processes (mutation), coupled to the power of natural, social, and sexual selection. There is clear evidence that people find stochastic processes difficult to understand and accept [see Garvin-Doxas & Klymkowsky & Fooled by Randomness].  An instructor who is not aware of the educational challenges associated with grasping stochastic processes, including those central to evolutionary change, risks running into the same hurdles that led pre-molecular biologists to reject natural selection and turn to more "directed" processes, such as orthogenesis [see Bowler: The eclipse of Darwinism & Wikipedia]. Presumably students are even more vulnerable to intelligent-design creationist arguments centered around probabilities.

The fact that single cell measurements enable us to visualize biologically meaningful stochastic processes makes designing course materials to explicitly introduce such processes easier [Biology education in the light of single cell/molecule studies].  An interesting example is the recent work on visualizing the evolution of antibiotic resistance macroscopically [see The evolution of bacteria on a "mega-plate" petri dish].

[Figure: bacterial evolution of antibiotic resistance]

To be in a position to “teach the controversy” effectively, it is critical that students understand how science works, specifically its progressive nature, exemplified through the process of generating and testing, and where necessary, rejecting or revising, clearly formulated and predictive hypotheses – a process antithetical to a Creationist (religious) perspective [a good overview is provided here: Using creationism to teach critical thinking].  At the same time, teachers need a working understanding of the disciplinary foundations of their subject, its core observations, and their implications. Unfortunately, many are called upon to teach subjects with which they may have only a passing familiarity.  Moreover, even majors in a subject may emerge with a weak understanding of foundational concepts and their origins – they may be uncomfortable teaching what they have learned.  While there is an implicit assumption that a college curriculum is well designed and effective, there is often little in the way of objective evidence that this is the case. While many of our dedicated teachers (particularly those I have met as part of the CU Teach program) work diligently to address these issues on their own, it is clear that many have not been exposed to a critical examination of the empirical observations and experimental results upon which their discipline is based [see Biology teachers often dismiss evolution & Teachers’ Knowledge Structure, Acceptance & Teaching of Evolution].  Many is the molecular biology department that does not require formal coursework in basic evolutionary mechanisms, much less a thorough consideration of natural, social, and sexual selection, and non-adaptive mechanisms, such as those associated with founder effects, population bottlenecks, and genetic drift, stochastic processes that play a key role in the evolution of many species, including humankind. Similarly, more ecologically- and physiologically-oriented majors are often “afraid” of the molecular foundations of evolutionary processes. As part of an introductory chemistry curriculum redesign project (CLUE), Melanie Cooper and her group at Michigan State University have found that students in conventional courses often fail to grasp key concepts, and that subsequent courses can sometimes fail to remediate the didaskalogenic damage done in earlier courses [see: an Achilles Heel in Chemistry Education].

The importance of a historical perspective: The power of scientific explanations is obvious, but they can become abstract when their historical roots are forgotten, or never articulated. A clear example: the value of vaccination is obvious in the presence of deadly and disfiguring diseases. In their absence, due primarily to the effectiveness of widespread vaccination, the value of vaccination can be called into question, resulting in the avoidable re-emergence of these diseases.  In this context, it would be important that students understand the dynamics and molecular complexity of biological systems, so that they can explain why all drugs and treatments have potential side-effects, and how each individual's genetic background influences these side-effects (although in the case of vaccination, such side effects do not include autism).

Often "controversy" arises when scientific explanations have broader social, political, or philosophical implications. Religious objections to evolutionary theory arise primarily, I believe, from the implication that we (humans) are not the result of a plan, created or evolved, but rather that we are accidents of mindless, meaningless, and often gratuitously cruel processes. The idea that our species emerged rather recently (that is, a few million years ago) on a minor planet on the edge of an average galaxy, in a universe that popped into existence for no particular reason or purpose ~14 billion years ago, can have disconcerting implications [link]. Moreover, recognizing that a "small" change in the trajectory of an asteroid could have changed the chance that humanity ever evolved [see: Dinosaur asteroid hit 'worst possible place'] can be sobering and may well undermine one's belief in the significance of human existence. How does it impact our social fabric if we are an accident, rather than the intention of a supernatural being or the inevitable product of natural processes?

Yet, as a person who firmly believes in the French motto of liberté, égalité, fraternité and laïcité, I feel fairly certain that no science-based scenario on the origin and evolution of the universe or life, or on the implications of sexual dimorphism or "racial" differences, etc., can challenge the importance of our duty to treat others with respect, to defend their freedoms, and to ensure their equality before the law. Which is not to say that conflicts do not inevitably arise between different belief systems – in my own view, patriarchal oppression needs to be called out and actively opposed wherever it occurs, whether in Saudi Arabia or on college campuses (e.g. UC Berkeley or Harvard).

This is not to say that presenting the conflicts between scientific explanations of phenomena and non-scientific, but more important, beliefs, such as equality under the law, is easy. When considering a number of natural cruelties, Charles Darwin wrote that evolutionary theory would claim that these are "as small consequences of one general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die" – note the absence of any reference to morality, or even sympathy for the "weakest".  In fact, Darwin would have argued that the apparent, and overt, cruelty that is rampant in the "natural" world is evidence that God was forced by the laws of nature to create the world the way it is, presumably a world that is absurdly old and excessively vast. Such arguments echo the view that God had no choice other than whether to create or not; that for all its flaws, evils, and unnecessary suffering this is, as posited by Gottfried Leibniz (1646-1716) and satirized by Voltaire (1694–1778) in his novel Candide, the best of all possible worlds. Yet, as members of a reasonably liberal, and periodically enlightened, society, we see it as our responsibility to ameliorate such evils, to care for the weak, the sick, and the damaged and to improve human existence; to address prejudice and political manipulations [thank you Supreme Court for ruling against race-based redistricting].  Whether anchored by philosophical or religious roots, many of us are driven to reject a scientific (biological) quietism ("a theology and practice of inner prayer that emphasizes a state of extreme passivity") by actively manipulating our social, political, and physical environment and striving to improve the human condition, in part through science and the technologies it makes possible.

[Figure: Darwin to Gray]

At the same time, introducing social-scientific interactions can be fraught with potential controversies, particularly in our excessively politicized and self-righteous society. In my own introductory biology class (biofundamentals), we consider potentially contentious issues that include sexual dimorphism, sexual selection, and social evolutionary processes and their implications.  As an example, social systems (and we are social animals) are susceptible to social cheating, and groups develop defenses against cheaters; how such biological ideas interact with historical, political, and ideological perspectives is complex, and certainly beyond the scope of an introductory biology course, but worth acknowledging [PLoS blog link].

[Figure: Yeats quote]

In a similar manner, we understand the brain as an evolved cellular system influenced by various experiences, including those that occur during development and subsequent maturation.  Family life interacts with genetic factors in complex, and often unpredictable, ways to shape behaviors.  But it seems unlikely that a free and enlightened society can function if it takes seriously the premise that we lack free will and so cannot be held responsible for our actions, an idea of some current popularity [see Free will could all be an illusion].  Given the complexity of biological systems, I for one am willing to embrace the idea of constrained free will, no matter what scientific speculations are currently in vogue [but see a recent video by Sabine Hossenfelder, You don't have free will, but don't worry, which has me rethinking].

Recognizing the complexities of biological systems, including the brain, with their various adaptive responses and feedback systems can be challenging. In this light, I am reminded of the contrast between the Doomsday scenario of Paul Ehrlich’s The Population Bomb, and the data-based view of the late Hans Rosling in Don’t Panic – The Facts About Population.

All of which is to say that we need to see science not as authoritarian, telling us who we are or what we should do, but as a tool for doing what we think is best and for understanding why it might be difficult to achieve. We need to recognize how scientific observations inform, and may constrain, but do not dictate our decisions. We need to embrace the tentative, but strict, nature of the scientific enterprise, which, while it cannot arrive at "Truth", can certainly identify nonsense.

Minor edits and updates and the addition of figures that had gone missing –  20 October 2020

Science, Politics & Marches

Marching is much in the air of late. After the "Women's March", which engaged many millions and was motivated in part by misogynistic statements and proposed policies from various politicians, we find ourselves faced with a range of anti-science behaviors, remarks, and proposed policy changes that have encouraged a similar March for Science.  The March for Science has garnered the support of a wide range of scientific organizations, including the American Association for the Advancement of Science (AAAS) and a range of more specialized professional science organizations, including the Public Library of Science (PLoS).  There have been a number of arguments for and against marching for science, summarized in this PLoS On Science blog post, so I will not repeat them here.  What is clear is that science does not exist independently of humanity, and this implies a complex interaction between scientific observations and ideas, the scientific enterprise, politics, economics, and personal belief systems: it seems evident that not nearly enough effort is spent in our educational systems to help people understand these interactions (see PLoS SciEd post: From the Science March to the Classroom: Recognizing science in politics and politics in science).

What I want to do here is to present some reflections on the relationship between science and politics, by which I include various belief systems (ideologies).

Giordano Bruno, burnt at the stake by the Roman Catholic Church as a heretic in 1600, is sometimes put forward as a patron saint of science, mistakenly in my view.  Bruno was a mystic, whose ideas were at best loosely grounded in the observable and in no way scientific as we understand the term. His type of magical thinking is similar to that of modern anti-vaccinationists who claim vaccination can cause autism (it does not) (1) or that GMOs are somehow innately "unhealthy" and more dangerous than "natural" organisms (see: The GMO safety debate is over).  A better model, particularly in the context of current political controversies, would be the many Soviet geneticists who suffered exile and often death (the famed geneticist N.I. Vavilov starved to death in a Soviet gulag in 1943) as a result of the state/party-driven politicization of science, specifically genetics, carried out by Joseph Stalin (1878-1953) and the Communist party/state of the Soviet Union (see: The tragic story of Soviet genetics shows the folly of political meddling in science). In response to the implications of genetic and evolutionary mechanisms, Stalin favored the Lamarckism (inheritance of acquired traits) posited by Ivan Michurin (1855–1935) and Trofim Lysenko (1898–1976) [see link]. Communist ideology required (or rather demanded) that traits, including human traits, be seen as malleable, that the "nature" of plants and people could be altered permanently with appropriate manipulations (vernalization for plants, political re-education for people) [see: The consequences of political dictatorship for Russian science].  No need to wait for the messy, multi-generational processes associated with conventional plant breeding (and Darwinian evolution).  In both cases, the unforgiving realities of the natural world intervened, but not without intense human suffering and starvation.

[Figure: Russian March for Science]

It is worth noting explicitly that there are, and likely always will be, pressures to politicize science, due in large measure to science's success in explaining the natural world and providing the basis for its technology-based manipulation. Giordano Bruno was an early martyr to a highly ideological world view (also illustrated by the house arrest of Galileo and the suppression of heliocentric models of the solar system) (2). Eventually such forms of natural theology were replaced by the apolitical and empirical ideals implicit in Enlightenment science. Aspects of ideological (racist) influence can be seen in 19th century science, most dramatically illustrated by Gould's analysis of Morton's ranking of races by cranial capacity ("unconscious manipulation of data may be a scientific norm") (see link). How racist policies were initially embraced, and then rejected, by American geneticists during the course of the 20th century is described by Provine (Geneticists and the Biology of Race Crossing).

More recent events remind us of the continuing pressures to politicize science.  A number of states (Kentucky in 1976, Mississippi in 2006, Louisiana in 2008, and Tennessee in 2012) have passed bills that allow teachers to present non-scientific ideas to students (think intelligent design creationism and climate change denial).  Such bills continue to come up with depressing frequency.  Most recently, an admitted creationist has been appointed to lead a federal higher education reform task force in the United States [see link]. Is creationism simply alt-science, a position explicitly or tacitly supported by both the religiously orthodox and those of a post-modernist persuasion, such as left-leaning college instructors who claim that science is a social construct? [see: Is Science ‘Forever Tentative’ and ‘Socially Constructed’?]

While such recent anti-science/alt-science attitudes have not had quite the draconian effects seen in the Soviet Union, Nazi Germany, or eugenicist America, I would argue that they play a role in eroding the public’s faith in the scientific understanding of complex processes, a faith that is largely justified even in the face of the so-called “reproducibility crisis”, which in a sense is no crisis at all but an expected outcome of the size, complexity, and competing forces acting on scientists and the scientific enterprise. That said, laws and various forms of coercion dictating right-wing/religious or left-wing/political correctness in science threaten to impact the education of a generation of students. Predictions of climate change based on human-driven (anthropogenic) increases in atmospheric CO2 levels, or the effects of lead in public water systems on human health [link], cannot simply be discarded or discounted based on ideological positions on the role of government in protecting the public interest, a role that neither unfettered capitalism nor fundamentalist communism seems particularly good at addressing. Similarly, the lack of any demonstrable connection between autism and vaccination (see above), the physicochemical impossibility of homeopathic treatments (or various versions of “Christian Science”), and the lack of evidence for the therapeutic claims made for a rather startling array of nutritional supplements all serve to inject political, ideological, and economic dimensions into scientific discourse.  In fact, science is constantly under pressure to distort its message.  Consider the European response to GMOs in favor of the “organic” (non-GMO): most GMOs have been banned from the EU for what appear to be ideological (non-scientific) reasons, even though the same organisms have been found safe and are grown in the US and much of Asia (see this Economist essay).

It is clear that the rejection of scientific observations is widespread on both the left and the right, arising basically whenever scientific observations, ideas, or models lead to disturbing or discomforting conclusions or implications (link). Consider the violent response when Charles Murray was invited to speak at Middlebury College (see Andrew Sullivan’s Is intersectionality a religion?). That human populations might (and in fact can be expected to) display genetic differences, the result of their migration history and subsequent evolutionary processes, both adaptive and non-adaptive (see Henn et al., The great human expansion), is labelled racist and, by implication, beyond the pale of scientific discourse, even though it is tacitly recognized by the scientific community to be well established. No one, I think, gets particularly upset at the suggestion that noses are shaped by evolutionary processes and reflect genetic differences between populations (see Climate shaped the human nose), or that nose shape might play a role in human sexual selection (see Facial Attractiveness and Sexual Selection; and sexual dimorphism).  One might even speculate that studies of the role of nose shape in mate selection could form the basis of an interesting research project (see Beauty and the beast: mechanisms of sexual selection in humans).

What often goes undiscussed is whether differences in specific traits (different alleles and allele frequencies) between populations have any meaningful significance in the context of public policy (I would argue that they do not).  What is clear is that, in a pre-genomic era, recognizing such differences could be of practical value, for example in the treatment of disease (see Ethnic Differences in Cardiovascular Drug Response). That said, the era of genomics-based personalized diagnosis and treatment is rapidly making such population-based considerations obsolete (see: Genetic tests for disease risks and the ethical debate on personal genome testing), while at the same time raising serious issues of privacy and discrimination based on the presence of the “wrong” alleles (see: genome sequencing–ethical issues). In a world of facile genomic engineering, the dangers of unfettered technological manipulation move ever more rapidly from science fiction to the boutique (intelligent?) design of people (see: CRISPR gene-editing and human evolution).

So back (about time, you may be thinking) to the original question – if we “march for science”, what exactly are we marching for [link]?  Are we marching to defend the apolitical nature of science and the need to maintain economic support (increased public funding levels) for the scientific enterprise, or are we conflating support for science with a range of social and political positions?  Are we affirming our commitment to a politically independent (skeptical) community of practitioners who serve to produce, reproduce, critically examine, and extend empirical observations and explanatory (predictive) models?

This is not to ignore the various pressures acting on scientists as they carry out their work. These pressures tempt (and sometimes reward) practitioners to exaggerate (if not fabricate) the significance of their observations and ideas in order to capture the resources (funds and people) needed to carry out modern science, as well as the public’s attention. Since resources are limited, extra-scientific forces have an increasing impact on the scientific enterprise, enticing scientists to make exaggerated claims and to put forth extra-scientific arguments and various semi-hysterical scenarios based on their observations and models.  In the context of an inherently political event (a march), the apolitical ideals of science can seem too bland to command attention and stir action, not to mention the damage that politicization does to the integrity of science.

At the end of the day, my decision is not to march, because I believe that science must be protected from the political and the partisan (see: The pernicious effects of disrespecting the constraints of science); that the working nature (as opposed to delivered truth) of scientific observations and conclusions must be respected, something rarely seen in any political movement and certainly not on display in the Lysenkoist, climate change, anti-vaccination, or eugenics movements (see this provocative essay: The Disgraceful Episode Of Lysenkoism Brings Us Global Warming Theory).

Thanks and footnotes:

Thanks for help on this post from Glenn Branch @ the National Center for Science Education.  Of course, all opinions are mine alone.

(1) While there is no doubt that vaccinations can, like all drugs and medical interventions, lead to side effects in certain individuals, there is unambiguous evidence against any link between autism and vaccination.

(2) It is worth noting that, as originally proposed, the Copernican (Sun-centered) model of the solar system was more complex than the Ptolemaic (Earth-centered) system it was meant to replace. It was Kepler’s elliptical, rather than circular, orbits that made the heliocentric model dramatically simpler, more accurate, and more aesthetically compelling.