Making sense of noise: introducing students to stochastic processes in order to better understand biological behaviors (and even free will).

Biological systems are characterized by the ubiquitous roles of weak (that is, non-covalent) molecular interactions, by small (often very small) numbers of specific molecules per cell, and by Brownian motion. These factors combine to produce stochastic behaviors at all levels, from the molecular and cellular to the behavioral. That said, students are rarely introduced to the role of stochastic processes in biological systems, or to how such processes produce unpredictable behaviors. Here I present the case that they need to be, and provide some suggestions as to how the topic might be approached.

Background: Three recent events combined to spur this reflection on stochasticity in biological systems, how it is taught, and why it matters. The first was an article describing an approach to introducing students to homeostatic processes in the context of the bacterial lac operon (Booth et al., 2022), an adaptive gene regulatory system controlled in part by stochastic events. The second was a set of in-class student responses to the question of why interacting molecules "come back apart" (dissociate). Finally, there is the increasing attention paid to what are presented as deterministic genetic factors, as illustrated by a talk by Kathryn Harden, author of "The Genetic Lottery: Why DNA matters for social equality" (Harden, 2021). Previous work has suggested that students, and perhaps some instructors, find the ubiquity, functional roles, and implications of stochastic (that is, inherently unpredictable) processes difficult to recognize and apply. Given their practical and philosophical implications, it seems essential to introduce students to stochasticity early in their educational journey.

Added 7 March 2023: I should also have cited You & Leu (2020).

What is stochasticity and why is it important for understanding biological systems? Stochasticity results when intrinsically unpredictable events, e.g. molecular collisions, impact the behavior of a system. There are a number of drivers of stochastic behaviors. Perhaps the most obvious, and certainly the most ubiquitous in biological systems, is thermal motion. The many molecules within a solution (or a cell) are moving; they have kinetic energy, a function of their mass and velocity. The exact momentum of each molecule cannot, however, be accurately and completely characterized without perturbing the system (echoes of Heisenberg). Given the impossibility of completely characterizing the system, we are left uncertain as to the future state of the system's components – who is bound to whom.

Through collisions, energy is exchanged between molecules, and a number of chemical processes are driven by the energy delivered through such collisions. Think about a typical chemical reaction: in its course, atoms are rearranged – bonds are broken (a process that requires energy) and bonds are formed (a process that releases energy). Many (most) of the chemical reactions that occur in biological systems require catalysts to bring their activation energies into the range available within the cell. [1]
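
To make the collision-energy argument concrete for students, here is a minimal Python sketch of the Arrhenius/Boltzmann logic. The activation energies used are illustrative placeholders, not measured values for any particular reaction; the point is only how steeply the fraction of sufficiently energetic collisions depends on the barrier height.

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 310.0  # roughly body temperature, K

def fraction_above_barrier(Ea_kJ_per_mol, temperature=T):
    """Boltzmann estimate of the fraction of collisions with energy >= Ea."""
    return math.exp(-Ea_kJ_per_mol * 1000 / (R * temperature))

# Hypothetical barriers: an uncatalyzed reaction vs. an enzyme-lowered one.
uncatalyzed = fraction_above_barrier(100.0)  # ~100 kJ/mol, illustrative
catalyzed = fraction_above_barrier(50.0)     # catalyst halves the barrier

print(f"fraction above barrier, uncatalyzed: {uncatalyzed:.2e}")
print(f"fraction above barrier, catalyzed:   {catalyzed:.2e}")
print(f"approximate rate enhancement: {catalyzed / uncatalyzed:.1e}-fold")
```

Halving the barrier here changes the fraction of productive collisions by roughly eight orders of magnitude, which is the sense in which catalysts bring reactions "into the range available within the cell."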

What makes the impact of thermal motion even more critical for biological systems is that many (most) regulatory interactions and macromolecular complexes – the molecular machines discussed by Alberts (1998) – are based on relatively weak, non-covalent surface-surface interactions between or within molecules. Such interactions are central to most regulatory processes, from the activation of signaling pathways to the control of gene expression. The specificity and stability of these non-covalent interactions, which include those involved in determining the three-dimensional structure of macromolecules, are directly impacted by thermal motion, and so by temperature – one reason controlling body temperature is important.

So why are these interactions stochastic, and why does it matter? A signature property of a stochastic process is that while it may be predictable when large numbers of atoms, molecules, or interactions are involved, the behaviors of individual atoms, molecules, and interactions are not. A classic example, arising from factors intrinsic to the atom, is the decay of radioactive isotopes. While the half-life of a large enough population of a radioactive isotope is well defined, when any particular atom will decay is, in current theory, unknowable – a concept difficult for students (see Hull and Hopf, 2020). This is the reason we cannot accurately predict whether Schrödinger's cat is alive or dead. The same behavior applies to the binding of a regulatory protein to a specific site on a DNA molecule and its subsequent dissociation: predictable in large populations, not predictable for individual molecules.

The situation is exacerbated by the fact that biological systems are composed of cells, and cells are typically small, and so contain relatively few molecules of each type (Milo and Phillips, 2015). There are typically one or two copies of each gene in a cell, and these may be different from one another (when heterozygous). The expression of any one gene depends upon the binding of specific proteins, transcription factors, that act to activate or repress gene expression. In contrast to a number of other cellular proteins, "as a rule of thumb, the concentrations of such transcription factors are in the nM range, corresponding to only 1-1000 copies per cell in bacteria or 10³-10⁶ in mammalian cells" (Milo and Phillips, 2015). Moreover, while DNA binding proteins bind to specific DNA sequences with high affinity, they also bind to DNA "non-specifically", in a largely sequence-independent manner, with low affinity. Given that there are many more non-specific (non-functional) binding sites in the DNA than functional ones, the effective concentration of a particular transcription factor can be significantly lower than its total cellular concentration would suggest. For example, in the case of the lac repressor of the bacterium Escherichia coli (discussed further below), there are estimated to be ~10 molecules of the tetrameric lac repressor per cell, but "non-specific affinity to the DNA causes >90% of LacI copies to be bound to the DNA at locations that are not the cognate promoter site" (Milo and Phillips, 2015); at most only a few molecules are free in the cytoplasm and available to bind to specific regulatory sites. Such low affinity binding to DNA allows proteins to undergo one-dimensional diffusion, a process that can greatly speed up the time it takes for a DNA binding protein to "find" high affinity binding sites (Stanford et al., 2000; von Hippel and Berg, 1989).

Most transcription factors bind in a functionally significant manner to hundreds to thousands of gene regulatory sites per cell, often with distinct binding affinities. The effective binding affinity can also be influenced by positive and negative interactions with other transcription and accessory factors, chromatin structure, and DNA modifications. Functional complexes can take time to assemble, and once assembled can initiate multiple rounds of polymerase binding and activation, leading to a stochastic phenomenon known as transcriptional bursting. An analogous process occurs with RNA-dependent polypeptide synthesis (translation). The result, particularly for genes expressed at lower levels, is that stochastic (unpredictable) bursts of transcription/translation can lead to functionally significant changes in protein levels (Raj et al., 2010; Raj and van Oudenaarden, 2008).
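
One way to let students see the population/individual distinction for themselves is a simulation. In the minimal sketch below (the per-step dissociation probability is arbitrary, chosen only for illustration), every bound complex has an identical, fixed chance of dissociating in each time step: no individual lifetime can be predicted, yet a reproducible half-life emerges from the population.

```python
import random

random.seed(1)
P_DISSOCIATE = 0.05  # illustrative per-time-step dissociation probability

def lifetime():
    """Count time steps until one complex happens to dissociate."""
    t = 0
    while random.random() > P_DISSOCIATE:
        t += 1
    return t

# Individual complexes: lifetimes vary widely and unpredictably.
print("five individual lifetimes:", [lifetime() for _ in range(5)])

# A large population: a well-defined half-life emerges.
n = 100_000
times = sorted(lifetime() for _ in range(n))
print("observed half-life (steps):", times[n // 2])
# Continuous-time approximation: t_half ~ ln(2)/p (discreteness shifts it slightly).
print("expected half-life (steps): ~", round(0.693 / P_DISSOCIATE))
```

The same code describes radioactive decay, repressor-operator dissociation, or any other first-order stochastic event; only the labels change.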

Figure adapted from Elowitz et al., 2002.

There are many examples of stochastic behaviors in biological systems. As originally noted by Novick and Weiner (1957) in their studies of the lac operon, gene expression can occur in an all-or-none manner. This effect was revealed in a particularly compelling manner by Elowitz et al (2002), who used lac operon promoter elements to drive expression of transgenes encoding cyan and yellow fluorescent proteins (on a single plasmid) in E. coli. The observed behaviors were dramatic: genetically identical cells were found to express, stochastically, one, the other, both, or neither transgene. The stochastic expression of genes, and its downstream effects, appears to be the source of much of the variance found among organisms with the same genotype in the same environmental conditions (Honegger and de Bivort, 2018).
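
A toy Monte Carlo version of this two-reporter logic can make the result less mysterious (the "on" probability below is invented for illustration and is not fit to the Elowitz et al data): give every genetically identical "cell" two identical but independently fluctuating reporters, and the one/other/both/neither mosaic appears automatically.

```python
import random
from collections import Counter

random.seed(2)
P_ON = 0.4  # illustrative probability that a given reporter fires in a cell

def cell():
    cyan = random.random() < P_ON    # CFP reporter on?
    yellow = random.random() < P_ON  # YFP reporter on, decided independently
    return (cyan, yellow)

counts = Counter(cell() for _ in range(10_000))
labels = {(True, False): "cyan only", (False, True): "yellow only",
          (True, True): "both", (False, False): "neither"}
for state, n in counts.items():
    print(f"{labels[state]:11s}: {n / 10_000:.1%} of cells")
```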

Beyond gene expression, the unpredictable effects of stochastic processes can be seen at all levels of biological organization: in the biased random walk behaviors that underlie various forms of chemotaxis (e.g. Spudich and Koshland, 1976) and the search behaviors of C. elegans (Roberts et al., 2016) and other animals (Smouse et al., 2010), in the noisiness of the opening of individual neuronal voltage-gated ion channels (Braun, 2021; Neher and Sakmann, 1976), in various processes within the immune system (Hodgkin et al., 2014), and in variations in the behavior of individual organisms (e.g. the leafhopper example cited by Honegger and de Bivort, 2018). Stochastic events are involved in a range of "social" processes in bacteria (Bassler and Losick, 2006). Their impact can serve as a form of "bet-hedging", generating phenotypic variation within a population in a homogeneous environment (see Symmons and Raj, 2016). Stochastic events can regulate the efficiency of replication-associated error-prone mutation repair (Uphoff et al., 2016), leading to increased variation in a population, particularly in response to environmental stresses. Stochastic "choices" made by cells can be seen as questions asked of the environment; the system's response provides information that informs subsequent regulatory decisions (see Lyon, 2015) and the selective pressures on individuals in a population (Jablonka and Lamb, 2005). Together, stochastic processes introduce a non-deterministic (i.e. unpredictable) element into higher order behaviors (Murakami et al., 2017; Roberts et al., 2016).
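
The biased random walk underlying bacterial chemotaxis is particularly easy to demystify in code. In the sketch below (gradient shape, step size, and tumbling probabilities are all invented for illustration), the walker never computes where the attractant is; it simply tumbles less often when conditions are improving, and that alone produces net movement up the gradient.

```python
import math
import random

random.seed(4)

def concentration(x, y):
    """Attractant concentration increases toward the origin (toy gradient)."""
    return -math.hypot(x, y)

def run_and_tumble(steps=5000):
    x = y = -50.0  # start far from the attractant source
    angle = random.uniform(0, 2 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        x += math.cos(angle)
        y += math.sin(angle)
        c = concentration(x, y)
        # Tumble (pick a random new direction) often when conditions worsen,
        # rarely when they improve -- the only "decision" in the system.
        if random.random() < (0.1 if c > last_c else 0.5):
            angle = random.uniform(0, 2 * math.pi)
        last_c = c
    return math.hypot(x, y)

print(f"distance from source at start: {math.hypot(-50.0, -50.0):.0f}")
print(f"distance after biased random walk: {run_and_tumble():.0f}")
```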

Controlling stochasticity: While stochasticity can be useful, it also needs to be controlled. Not surprisingly, then, there are a number of strategies for "noise-suppression", ranging from altering regulatory factor concentrations and forming covalent disulfide bonds between or within polypeptides, to regulating the activity of the repair systems associated with DNA replication, polypeptide folding, and protein assembly (via molecular chaperones and targeted degradation). For example, the identification of "cellular competition" effects has revealed that "eccentric cells" (sometimes, and perhaps unfortunately, referred to as "losers") can be induced to undergo apoptosis (die) or migration in response to their "normal" neighbors (Akieda et al., 2019; Di Gregorio et al., 2016; Ellis et al., 2019; Hashimoto and Sasaki, 2020; Lima et al., 2021).

Student understanding of stochastic processes: There is ample evidence that students (and perhaps some instructors as well) are confused by, or uncertain about, the role of thermal motion – that is, the transfer of kinetic energy via collisions – and the resulting stochastic behaviors in biological systems. As an example, Champagne-Queloz et al (2016; 2017) found that few students, even after instruction in molecular biology courses, recognize that collisions with other molecules are responsible for the disassembly of molecular complexes. In fact, many adopt a more "deterministic" model for molecular disassembly after instruction (see part A of the accompanying figure). In earlier studies, we found evidence for a similar confusion among instructors (part B of the figure) (Klymkowsky et al., 2010).

Introducing stochasticity to students: Given that understanding stochastic (random) processes can be difficult for many (e.g. Garvin-Doxas and Klymkowsky, 2008; Taleb, 2005), the question facing course designers and instructors is when and how best to help students develop an appreciation for the ubiquity, specific roles, and implications of stochasticity-dependent processes at all levels in biological systems. I would suggest that introducing students to the dynamics of the non-covalent molecular interactions prevalent in biological systems through stochastic collisions (i.e. kinetic theory), rather than through a ∆G-based approach, may be useful. We can use the probability of garnering the energy needed to disrupt an interaction to present concepts of binding specificity (selectivity) and stability. Developing an understanding of the formation and disassembly of molecular interactions builds on the same logic that Albert Einstein and Ludwig Boltzmann used to demonstrate the existence of atoms and molecules and the reversibility of molecular reactions (Bernstein, 2006). Moreover, as noted by Samoilov et al (2006), "stochastic mechanisms open novel classes of regulatory, signaling, and organizational choices that can serve as efficient and effective biological solutions to problems that are more complex, less robust, or otherwise suboptimal to deal with in the context of purely deterministic systems."

The selectivity (specificity) and stability of molecular interactions can be understood from an energetic perspective – comparing the enthalpic and entropic differences between bound and unbound states. What is often missing from such discussions – aside from their inherent complexity, particularly in terms of calculating changes in entropy and saying exactly what is meant by energy (Cooper and Klymkowsky, 2013) – is that many students enter biology classes without a robust understanding of enthalpy, entropy, or free energy (Carson and Watson, 2002). Presenting students with a molecular-collision, kinetic-theory-based mechanism for the dissociation of molecular interactions may help them better understand (and apply) both the dynamics and the specificity of molecular interactions. We can gauge the strength of an interaction (the sum of the forces stabilizing it) by the amount of energy (derived from collisions with other molecules) needed to disrupt it. The implication of student responses to relevant Biology Concepts Instrument (BCI) questions and beSocratic activities (data not shown), as well as of a number of studies in chemistry, is that few students consider the kinetic/vibrational energy delivered through collisions with other molecules (a function of temperature) as key to explaining why interactions break (see Carson and Watson, 2002 and references therein). Although that paper is 20 years old, there is little or no evidence that the situation has improved. Moreover, there is evidence that the conventional focus on mathematics-centered free energy calculations, in the absence of conceptual understanding, may serve as an unnecessary barrier to the inclusion of more socioeconomically diverse and under-served populations of students (Ralph et al., 2022; Stowe and Cooper, 2019).
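
In the same kinetic spirit, one can show students how interaction "strength" maps onto lifetime. The sketch below treats dissociation as thermally activated escape, k_off ≈ A·exp(−E/RT); the attempt frequency and disruption energies are hypothetical, chosen only to show that modest energy differences translate into lifetimes differing by orders of magnitude, and that warming shortens them.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)
A = 1e9       # assumed "attempt frequency" for escape, 1/s (illustrative)

def mean_lifetime(E_disrupt_kJ, T):
    """Average bound lifetime for thermally activated dissociation."""
    k_off = A * math.exp(-E_disrupt_kJ / (R * T))
    return 1.0 / k_off

for E in (20, 40, 60):    # weaker -> stronger non-covalent interactions
    for T in (300, 310):  # room vs. body temperature, K
        print(f"E = {E} kJ/mol, T = {T} K: lifetime ~ {mean_lifetime(E, T):.3g} s")
```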

The lac operon as a context for introducing stochasticity: Studies of the E. coli lac operon hold an iconic place in the history of molecular biology and are often found in introductory courses, although typically presented in a deterministic context. The mutational analysis of the lac operon helped define key elements involved in gene regulation (Jacob and Monod, 1961; Monod et al., 1963). Booth et al (2022) used the lac operon as the context for their "modeling and simulation lesson", Advanced Concepts in Regulation of the Lac Operon. Given its inherently stochastic regulation (Choi et al., 2008; Elowitz et al., 2002; Novick and Weiner, 1957; Vilar et al., 2003), the lac operon is a good place to start introducing students to stochastic processes. In this light, it is worth noting that Booth et al describe the behavior of the lac operon as "leaky", which would seem to imply a low but continuous level of expression, much as a leaky faucet continues to drip. As this is a peer-reviewed lesson, it seems likely to reflect widely held misunderstandings of how stochastic processes are introduced to, and understood by, students and instructors.

E. coli cells respond to the presence of lactose in growth media in a biphasic manner, termed diauxie, due to "the inhibitory action of certain sugars, such as glucose, on adaptive enzymes (meaning an enzyme that appears only in the presence of its substrate)" (Blaiseau and Holmes, 2021). When these (preferred) sugars are depleted from the media, growth slows. If lactose is present, however, growth will resume following a delay associated with the expression of the proteins, encoded by the operon, that enable the cell to import and metabolize lactose. Although the term homeostatic is used repeatedly by Booth et al, the lac operon is part of an adaptive, rather than a homeostatic, system. In the absence of glucose, cyclic AMP (cAMP) levels in the cell rise. cAMP binds to and activates the catabolite activator protein (CAP), encoded by the crp gene. Activation of CAP leads to altered expression of a number of target genes, whose products are involved in adaptation to the stress associated with the absence of common and preferred metabolites. cAMP-activated CAP acts as both a transcriptional repressor and activator, "and has been shown to regulate hundreds of genes in the E. coli genome, earning it the status of 'global' or 'master' regulator" (Frendorf et al., 2019). It is involved in adaptation to environmental factors, rather than in maintaining the cell in a particular state (homeostasis).

The lac operon is a classic polycistronic bacterial gene, containing three genes, lacZ (β-galactosidase), lacY (β-galactoside permease), and lacA (galactoside acetyltransferase), and so encoding three distinct polypeptides. When glucose or other preferred energy sources are present, expression of the lac operon is blocked by the inactivity of CAP. The CAP protein is a homodimer, and its binding to DNA is regulated by the allosteric effector cAMP. cAMP is generated from ATP by the enzyme adenylate cyclase, encoded by the cya gene. In the absence of glucose, the enzyme encoded by the crr gene is phosphorylated and acts to activate adenylate cyclase (Krin et al., 2002). As cAMP levels increase, cAMP binds to the CAP protein, leading to a dramatic change in its structure, such that the protein's DNA binding domain becomes available to interact with promoter sequences (figure from Sharma et al., 2009).

Binding of activated (cAMP-bound) CAP is not, by itself, sufficient to activate expression of the lac operon, because of the presence of the constitutively expressed lac repressor protein, encoded by the lacI gene. The active repressor is a tetramer, present at very low levels (~10 molecules) per cell. The lac operon contains three repressor ("operator") binding sites; the tetrameric repressor can bind two operator sites simultaneously (upper figure from Palanthandalam-Madapusi and Goyal, 2011). In the absence of lactose, but in the presence of cAMP-activated CAP, the operon is expressed in discrete "bursts" (Novick and Weiner, 1957; Vilar et al., 2003). Choi et al (2008) found that these bursts come in two types, short and long, with the size of the burst referring to the number of mRNA molecules synthesized (bottom figure adapted from Choi et al). The difference between burst sizes arises from the length of time that the operon's repressor binding sites are unoccupied by repressor. As noted above, the repressor protein can bind to two operator sites at the same time. When the repressor is released from one site, polymerase binding and initiation produce a small number of mRNA molecules; persistent binding to the second site means that the repressor concentration remains locally high, favoring rapid rebinding to the operator and the cessation of transcription (RNA synthesis). When the repressor releases from both operator sites – a rarer event – it is free to diffuse away and interact (non-specifically, i.e. with low affinity) with other DNA sites in the cell, leaving the lac operator sites unoccupied for a longer period of time. The number of such non-specific binding sites greatly exceeds the number (three) of specific binding sites in the operon. The result is the synthesis of a larger "burst" (number) of mRNA molecules. The average length of time that the operator sites remain unoccupied is a function of the small number of repressor molecules present and the repressor's low but measurable non-sequence-specific binding to DNA.
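
The two burst classes lend themselves to a compact stochastic simulation. The sketch below is not the Choi et al model itself, and all rates are invented for illustration; it simply encodes the logic of the paragraph above: most releases are partial (the second operator stays bound, so rebinding is fast and the transcription window is short), while rare complete dissociations leave the operators free for much longer.

```python
import random

random.seed(3)
K_TX = 0.5          # mRNA initiations/sec while operators are free (illustrative)
REBIND_FAST = 1.0   # rebinding rate after a partial release, 1/s
REBIND_SLOW = 0.01  # rebinding rate after complete dissociation, 1/s
P_COMPLETE = 0.05   # fraction of releases that free the repressor entirely

def burst_size():
    """Number of mRNAs made during one window with the operators unoccupied."""
    rebind = REBIND_SLOW if random.random() < P_COMPLETE else REBIND_FAST
    window = random.expovariate(rebind)     # how long the operators stay free
    mrnas, t = 0, random.expovariate(K_TX)  # exponential waiting times
    while t < window:
        mrnas += 1
        t += random.expovariate(K_TX)
    return mrnas

bursts = [burst_size() for _ in range(2000)]
short = sum(1 for b in bursts if b <= 2)
print(f"short bursts (<= 2 mRNAs): {short / len(bursts):.0%}")
print(f"largest burst observed: {max(bursts)} mRNAs")
```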

The expression of the lac operon leads to the appearance of β-galactosidase and β-galactoside permease. An integral membrane protein, β-galactoside permease enables extracellular lactose to enter the cell, while cytoplasmic β-galactosidase catalyzes its breakdown as well as the generation of allolactose, which binds to the lac repressor protein, inhibiting its binding to operator sites and so relieving the repression of transcription. In the absence of lactose, there are few if any of the proteins (β-galactosidase and β-galactoside permease) needed to activate the expression of the lac operon, so the obvious question is: how, when lactose does appear in the extracellular media, does the lac operon turn on? Booth et al and the Wikipedia entry on the lac operon (accessed 29 June 2022) describe the turn-on of the lac operon as "leaky" (see above). The molecular modeling studies of Vilar et al and Choi et al (which, together with Novick and Weiner, are not cited by Booth et al) indicate that the system displays distinct threshold and maintenance concentrations of lactose needed for stable lac gene expression. The term "threshold" does not occur in the Booth et al article. More importantly, when cultures are examined at the single cell level, what is observed is not a uniform increase in lac expression in all cells, as might be expected in the context of leaky expression, but more sporadic (noisy) behaviors: increasing numbers of cells become "full on" in terms of lac operon expression over time when cultured in lactose concentrations above the operon's activation threshold. This illustrates the distinctly different implications of a leaky versus a stochastic process in terms of their impacts on gene expression. While a leak is a macroscopic metaphor that produces a continuous, dependable, regular flow (drips), the occurrence of "bursts" of gene expression implies a stochastic (unpredictable) process (figure from Vilar et al).
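
The threshold and maintenance behavior can be captured with a deliberately simplified positive-feedback model; this is not the Vilar et al model, and every parameter is arbitrary. Permease activity increases inducer import, inducer inactivates repressor and so increases permease expression; iterating that loop shows an induction threshold and, because maintenance is cheaper than induction, hysteresis.

```python
def steady_state(lactose_ext, y0=0.0, steps=3000):
    """Iterate a toy positive-feedback loop to (near) steady state.

    y: permease level; inducer ~ external lactose * (basal + permease);
    expression responds steeply (cooperatively) to the inducer.
    """
    y = y0
    for _ in range(steps):
        inducer = lactose_ext * (0.2 + y)           # basal + permease-driven import
        expression = inducer**4 / (1 + inducer**4)  # steep repressor inactivation
        y += 0.1 * (expression - y)                 # relax toward new level
    return y

print("starting from uninduced cells (y0 = 0):")
for lac in (1.0, 1.5, 2.0, 3.0):
    print(f"  external lactose {lac:3.1f}: permease -> {steady_state(lac):.2f}")

# Maintenance is cheaper than induction: a previously induced cell (y0 = 1)
# stays "full on" at a lactose level that cannot switch a naive cell on.
print(f"previously induced cell at lactose 1.5: permease -> {steady_state(1.5, y0=1.0):.2f}")
```

In a stochastic version of the same loop, cells near the threshold flip to the "full on" state at unpredictable times, which is exactly the single-cell behavior described above.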

As the ubiquity and functionally significant roles of stochastic processes in biological systems become increasingly apparent, e.g. in the prediction of phenotypes from genotypes (Karavani et al., 2019; Mostafavi et al., 2020), helping students appreciate and understand the unpredictable, that is stochastic, aspects of biological systems becomes increasingly important. As one example, revealed dramatically through single cell RNA sequencing studies, variations in gene expression between cells of the same "type" impact organismic development and a range of behaviors. It is now apparent that in diploid eukaryotic cells, for many genes and in many cells, only one of the two alleles present is expressed; such "monoallelic" expression can impact a range of processes (Gendrel et al., 2014). Given that stochastic processes are often not well conveyed through conventional chemistry courses (Williams et al., 2015), or effectively integrated into, and built upon in, molecular (and other) biology curricula, presenting them explicitly in introductory biology courses seems necessary and appropriate.

Doing so may also help make sense of discussions of whether humans (and other organisms) have "free will". Clearly the situation is complex. From a scientific perspective, we analyze systems without recourse to non-natural processes. At the same time, "Humans typically experience freely selecting between alternative courses of action" (Maoz et al., 2019a; see also Maoz et al., 2019b). It seems possible that recognizing the intrinsically unpredictable nature of many biological processes (including those of the central nervous system) may lead us to conclude that whether or not free will exists is in fact a non-scientific, unanswerable (and perhaps largely meaningless) question.

footnotes

[1] For this discussion I will ignore entropy, a factor that figures in whether a particular reaction is favorable or unfavorable – that is, whether, and to what extent, it occurs.

Acknowledgements: Thanks to Melanie Cooper and Nick Galati for taking a look and Chhavinder Singh for getting it started. Updated 6 January 2023.

literature cited:

Akieda, Y., Ogamino, S., Furuie, H., Ishitani, S., Akiyoshi, R., Nogami, J., Masuda, T., Shimizu, N., Ohkawa, Y. and Ishitani, T. (2019). Cell competition corrects noisy Wnt morphogen gradients to achieve robust patterning in the zebrafish embryo. Nature communications 10, 1-17.

Alberts, B. (1998). The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell 92, 291-294.

Bassler, B. L. and Losick, R. (2006). Bacterially speaking. Cell 125, 237-246.

Bernstein, J. (2006). Einstein and the existence of atoms. American journal of physics 74, 863-872.

Blaiseau, P. L. and Holmes, A. M. (2021). Diauxic inhibition: Jacques Monod’s Ignored Work. Journal of the History of Biology 54, 175-196.

Booth, C. S., Crowther, A., Helikar, R., Luong, T., Howell, M. E., Couch, B. A., Roston, R. L., van Dijk, K. and Helikar, T. (2022). Teaching Advanced Concepts in Regulation of the Lac Operon With Modeling and Simulation. CourseSource.

Braun, H. A. (2021). Stochasticity Versus Determinacy in Neurobiology: From Ion Channels to the Question of the “Free Will”. Frontiers in Systems Neuroscience 15, 39.

Carson, E. M. and Watson, J. R. (2002). Undergraduate students’ understandings of entropy and Gibbs free energy. University Chemistry Education 6, 4-12.

Champagne-Queloz, A. (2016). Biological thinking: insights into the misconceptions in biology maintained by Gymnasium students and undergraduates”. In Institute of Molecular Systems Biology. Zurich, Switzerland: ETH Zürich.

Champagne-Queloz, A., Klymkowsky, M. W., Stern, E., Hafen, E. and Köhler, K. (2017). Diagnostic of students’ misconceptions using the Biological Concepts Instrument (BCI): A method for conducting an educational needs assessment. PloS one 12, e0176906.

Choi, P. J., Cai, L., Frieda, K. and Xie, X. S. (2008). A stochastic single-molecule event triggers phenotype switching of a bacterial cell. Science 322, 442-446.

Coop, G. and Przeworski, M. (2022). Lottery, luck, or legacy. A review of “The Genetic Lottery: Why DNA matters for social equality”. Evolution 76, 846-853.

Cooper, M. M. and Klymkowsky, M. W. (2013). The trouble with chemical energy: why understanding bond energies requires an interdisciplinary systems approach. CBE Life Sci Educ 12, 306-312.

Di Gregorio, A., Bowling, S. and Rodriguez, T. A. (2016). Cell competition and its role in the regulation of cell fitness from development to cancer. Developmental cell 38, 621-634.

Ellis, S. J., Gomez, N. C., Levorse, J., Mertz, A. F., Ge, Y. and Fuchs, E. (2019). Distinct modes of cell competition shape mammalian tissue morphogenesis. Nature 569, 497.

Elowitz, M. B., Levine, A. J., Siggia, E. D. and Swain, P. S. (2002). Stochastic gene expression in a single cell. Science 297, 1183-1186.

Feldman, M. W. and Riskin, J. (2022). Why Biology is not Destiny. In New York Review of Books. NY.

Frendorf, P. O., Lauritsen, I., Sekowska, A., Danchin, A. and Nørholm, M. H. (2019). Mutations in the global transcription factor CRP/CAP: insights from experimental evolution and deep sequencing. Computational and structural biotechnology journal 17, 730-736.

Garvin-Doxas, K. and Klymkowsky, M. W. (2008). Understanding Randomness and its impact on Student Learning: Lessons from the Biology Concept Inventory (BCI). Life Science Education 7, 227-233.

Gendrel, A.-V., Attia, M., Chen, C.-J., Diabangouaya, P., Servant, N., Barillot, E. and Heard, E. (2014). Developmental dynamics and disease potential of random monoallelic gene expression. Developmental cell 28, 366-380.

Harden, K. P. (2021). The genetic lottery: why DNA matters for social equality: Princeton University Press.

Hashimoto, M. and Sasaki, H. (2020). Cell competition controls differentiation in mouse embryos and stem cells. Current Opinion in Cell Biology 67, 1-8.

Hodgkin, P. D., Dowling, M. R. and Duffy, K. R. (2014). Why the immune system takes its chances with randomness. Nature Reviews Immunology 14, 711-711.

Honegger, K. and de Bivort, B. (2018). Stochasticity, individuality and behavior. Current Biology 28, R8-R12.

Hull, M. M. and Hopf, M. (2020). Student understanding of emergent aspects of radioactivity. International Journal of Physics & Chemistry Education 12, 19-33.

Jablonka, E. and Lamb, M. J. (2005). Evolution in four dimensions: genetic, epigenetic, behavioral, and symbolic variation in the history of life. Cambridge: MIT press.

Jacob, F. and Monod, J. (1961). Genetic regulatory mechanisms in the synthesis of proteins. Journal of molecular biology 3, 318-356.

Karavani, E., Zuk, O., Zeevi, D., Barzilai, N., Stefanis, N. C., Hatzimanolis, A., Smyrnis, N., Avramopoulos, D., Kruglyak, L. and Atzmon, G. (2019). Screening human embryos for polygenic traits has limited utility. Cell 179, 1424-1435. e1428.

Klymkowsky, M. W., Kohler, K. and Cooper, M. M. (2016). Diagnostic assessments of student thinking about stochastic processes. In bioRxiv: http://biorxiv.org/content/early/2016/05/20/053991.

Klymkowsky, M. W., Underwood, S. M. and Garvin-Doxas, K. (2010). Biological Concepts Instrument (BCI): A diagnostic tool for revealing student thinking. In arXiv: Cornell University Library.

Krin, E., Sismeiro, O., Danchin, A. and Bertin, P. N. (2002). The regulation of Enzyme IIAGlc expression controls adenylate cyclase activity in Escherichia coli. Microbiology 148, 1553-1559.

Lima, A., Lubatti, G., Burgstaller, J., Hu, D., Green, A., Di Gregorio, A., Zawadzki, T., Pernaute, B., Mahammadov, E. and Montero, S. P. (2021). Cell competition acts as a purifying selection to eliminate cells with mitochondrial defects during early mouse development. bioRxiv, 2020.01.15.900613.

Lyon, P. (2015). The cognitive cell: bacterial behavior reconsidered. Frontiers in microbiology 6, 264.

Maoz, U., Sita, K. R., Van Boxtel, J. J. and Mudrik, L. (2019a). Does it matter whether you or your brain did it? An empirical investigation of the influence of the double subject fallacy on moral responsibility judgments. Frontiers in Psychology 10, 950.

Maoz, U., Yaffe, G., Koch, C. and Mudrik, L. (2019b). Neural precursors of decisions that matter—an ERP study of deliberate and arbitrary choice. Elife 8, e39787.

Milo, R. and Phillips, R. (2015). Cell biology by the numbers: Garland Science.

Monod, J., Changeux, J.-P. and Jacob, F. (1963). Allosteric proteins and cellular control systems. Journal of molecular biology 6, 306-329.

Mostafavi, H., Harpak, A., Agarwal, I., Conley, D., Pritchard, J. K. and Przeworski, M. (2020). Variable prediction accuracy of polygenic scores within an ancestry group. Elife 9, e48376.

Murakami, M., Shteingart, H., Loewenstein, Y. and Mainen, Z. F. (2017). Distinct sources of deterministic and stochastic components of action timing decisions in rodent frontal cortex. Neuron 94, 908-919. e907.

Neher, E. and Sakmann, B. (1976). Single-channel currents recorded from membrane of denervated frog muscle fibres. Nature 260, 799-802.

Novick, A. and Weiner, M. (1957). Enzyme induction as an all-or-none phenomenon. Proceedings of the National Academy of Sciences 43, 553-566.

Palanthandalam-Madapusi, H. J. and Goyal, S. (2011). Robust estimation of nonlinear constitutive law from static equilibrium data for modeling the mechanics of DNA. Automatica 47, 1175-1182.

Raj, A., Rifkin, S. A., Andersen, E. and van Oudenaarden, A. (2010). Variability in gene expression underlies incomplete penetrance. Nature 463, 913-918.

Raj, A. and van Oudenaarden, A. (2008). Nature, nurture, or chance: stochastic gene expression and its consequences. Cell 135, 216-226.

Ralph, V., Scharlott, L. J., Schafer, A., Deshaye, M. Y., Becker, N. M. and Stowe, R. L. (2022). Advancing Equity in STEM: The Impact Assessment Design Has on Who Succeeds in Undergraduate Introductory Chemistry. JACS Au.

Roberts, W. M., Augustine, S. B., Lawton, K. J., Lindsay, T. H., Thiele, T. R., Izquierdo, E. J., Faumont, S., Lindsay, R. A., Britton, M. C. and Pokala, N. (2016). A stochastic neuronal model predicts random search behaviors at multiple spatial scales in C. elegans. Elife 5, e12572.

Samoilov, M. S., Price, G. and Arkin, A. P. (2006). From fluctuations to phenotypes: the physiology of noise. Science’s STKE 2006, re17-re17.

Sharma, H., Yu, S., Kong, J., Wang, J. and Steitz, T. A. (2009). Structure of apo-CAP reveals that large conformational changes are necessary for DNA binding. Proceedings of the National Academy of Sciences 106, 16604-16609.

Smouse, P. E., Focardi, S., Moorcroft, P. R., Kie, J. G., Forester, J. D. and Morales, J. M. (2010). Stochastic modelling of animal movement. Philosophical Transactions of the Royal Society B: Biological Sciences 365, 2201-2211.

Spudich, J. L. and Koshland, D. E., Jr. (1976). Non-genetic individuality: chance in the single cell. Nature 262, 467-471.

Stanford, N. P., Szczelkun, M. D., Marko, J. F. and Halford, S. E. (2000). One-and three-dimensional pathways for proteins to reach specific DNA sites. The EMBO Journal 19, 6546-6557.

Stowe, R. L. and Cooper, M. M. (2019). Assessment in Chemistry Education. Israel Journal of Chemistry.

Symmons, O. and Raj, A. (2016). What’s Luck Got to Do with It: Single Cells, Multiple Fates, and Biological Nondeterminism. Molecular cell 62, 788-802.

Taleb, N. N. (2005). Fooled by Randomness: The hidden role of chance in life and in the markets. (2nd edn). New York: Random House.

Uphoff, S., Lord, N. D., Okumus, B., Potvin-Trottier, L., Sherratt, D. J. and Paulsson, J. (2016). Stochastic activation of a DNA damage response causes cell-to-cell mutation rate variation. Science 351, 1094-1097.

Vilar, J. M., Guet, C. C. and Leibler, S. (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.

von Hippel, P. H. and Berg, O. G. (1989). Facilitated target location in biological systems. Journal of Biological Chemistry 264, 675-678.

Williams, L. C., Underwood, S. M., Klymkowsky, M. W. and Cooper, M. M. (2015). Are Noncovalent Interactions an Achilles Heel in Chemistry Education? A Comparison of Instructional Approaches. Journal of Chemical Education 92, 1979-1987.

You, S.-T. and Leu, J.-Y. (2020). Making sense of noise. In Evolutionary Biology—A Transdisciplinary Approach, pp. 379-391.

 

Misinformation in and about science.

originally published as https://facultyopinions.com/article/739916951 – July 2021

There have been many calls for improved "scientific literacy". Scientific literacy has been defined in a number of, often ambiguous, ways (see National Academies of Sciences, Engineering, and Medicine, 2016 {1}). According to Krajcik & Sutherland (2010) {2}, it is "the understanding of science content and scientific practices and the ability to use that knowledge", which implies "the ability to critique the quality of evidence or validity of conclusions about science in various media, including newspapers, magazines, television, and the Internet". But what types of critiques are we talking about, and how often is this ability to critique, and the scientific knowledge it rests on, explicitly emphasized in the courses non-science (or science) students take? As an example, highlighted by Sabine Hossenfelder (2020) {3}: are students introduced to the higher order reasoning about, and understanding of, the scientific enterprise needed to dismiss a belief in a flat (or a ~6000-year-old) Earth?

While the sources of scientific illiteracy are often ascribed to social media, religious beliefs, or economically or politically motivated distortions, West and Bergstrom point out how scientists and the scientific establishment (public relations departments and the occasional science writer) also play a role. They identify the problems arising from the fact that the scientific enterprise (and the people who work within it) acts within "an attention economy" and must "compete for eyeballs just as journalists do." The authors review the factors that contribute to misinformation within the scientific literature and its media ramifications, including the contribution of "predatory publishers", and call for "better ways of detecting untrustworthy publishers." At the same time, there are ingrained features of the scientific enterprise that serve to distort the relevance of published studies; these include not explicitly identifying the organism in which the studies were carried out, and so obscuring the possibility that they might not be relevant to humans (see Kolata, 2013 {4}). There are also systemic biases within the research community. Consider the observation, characterized by Pandey et al. (2014) {5}, that studies of "important" genes expressed in the nervous system are skewed: the "top 5% of genes absorb 70% of the relevant literature" while "approximately 20% of genes have essentially no neuroscience literature". What appears to be the "major distinguishing characteristic between these sets of genes is date of discovery, early discovery being associated with greater research momentum—a genomic bandwagon effect", a version of the "Matthew effect" described by Merton (1968) {6}. In the context of the scientific community, various forms of visibility (including pedigree and publicity) are in play in funding decisions and career advancement. Not pointed out explicitly by West and Bergstrom is the impact of disciplinary experts who pontificate outside of their areas of expertise and speculate beyond what can be observed or rejected experimentally; speculations on the existence of non-observable multiverses, claims for the ubiquity of consciousness (Tononi & Koch, 2015 {7}), and the rejection of experimental tests as a necessary criterion of scientific speculation (see Loeb, 2018 {8}) spring to mind.

Many educational institutions demand that non-science students take introductory courses in one or more sciences in the name of cultivating "scientific literacy". This policy seems to me tragically misguided, and perhaps based more on institutional economics than on student learning outcomes. Instead, a course on "how science works and how it can be distorted" would be more likely to move students closer to the ability to "critique the quality of evidence or validity of conclusions about science". Such a course could well be based on an extended consideration of the West and Bergstrom article, together with their recently published trade book "Calling bullshit: the art of skepticism in a data-driven world" (Bergstrom and West, 2021 {9}), which outlines many of the ways that information can be distorted. Courses that take this approach to developing a skeptical (and realistic) understanding of how the sciences work are mentioned, although the measures of learning outcomes used to assess their efficacy are not described.

literature cited

  1. Science literacy: concepts, contexts, and consequences. Committee on Science Literacy and Public Perception of Science, Board on Science Education, Division of Behavioral and Social Sciences and Education, National Academies of Sciences, Engineering, and Medicine. 2016 Oct 14. PMID: 27854404
  2. Supporting students in developing literacy in science. Krajcik JS, Sutherland LM. Science. 2010 Apr 23; 328(5977):456-459. PMID: 20413490
  3. Flat Earth "Science": Wrong, but not Stupid. Hossenfelder S. BackRe(Action) blog, 2020, Aug 22 (accessed Jul 29, 2021)
  4. Mice fall short as test subjects for humans' deadly ills. Kolata G. New York Times, 2013, Feb 11 (accessed Jul 29, 2021)
  5. Functionally enigmatic genes: a case study of the brain ignorome. Pandey AK, Lu L, Wang X, Homayouni R, Williams RW. PLoS ONE. 2014; 9(2):e88889. PMID: 24523945
  6. The Matthew Effect in Science: The reward and communication systems of science are considered. Merton RK. Science. 1968 Jan 5; 159:56-63. PMID: 17737466
  7. Consciousness: here, there and everywhere? Tononi G, Koch C. Philos Trans R Soc Lond B Biol Sci. 2015 May 19; 370(1668). PMID: 25823865
  8. Theoretical Physics Is Pointless without Experimental Tests. Loeb A. Scientific American blog, 2018, Aug 10 (accessed Jul 29, 2021)
  9. Calling bullshit: the art of skepticism in a data-driven world. Bergstrom CT, West JD. Random House Trade Paperbacks, 2021. ISBN: 978-0141987057

Anti-Scientific & anti-vax propaganda (1926 and today)

“Montaigne concludes, like Socrates, that ignorance aware of itself is the only true knowledge” – from “Forbidden Knowledge” by Roger Shattuck

A useful review of the history of the anti-vaccination movement: Poland & Jacobson (2011). The Age-Old Struggle against the Antivaccinationists. NEJM.

Science educators, and those who aim to explain the implications of scientific or clinical observations to the public, have their work cut out for them. In large part, this is because helping others, including the diverse population of health care providers and their clients, depends upon more than just critical thinking skills. Equally important is what might be termed "disciplinary literacy": the ability to evaluate whether the methods applied are adequate and appropriate, and so whether a particular observation is relevant to, or able to resolve, a specific question. To illustrate this point, I consider an essay from 1926 by Peter Frandsen and a paper by Ou et al. (2021) on the mechanism of hydroxychloroquine inhibition of SARS-CoV-2 replication in tissue culture cells.

In Frandsen's essay, written well before the proliferation of unfettered web-based social pontification and ideologically motivated distortions, he notes that "pseudo and unscientific cults are springing up and finding it easy to get a hold on the popular mind," and "are making some headway in establishing themselves on an equally recognized basis with scientific medicine," in part due to their ability to lobby politicians to exempt them from any semblance of "truth in advertising." Of particular resonance were the efforts in Minnesota, California, and Montana to oppose mandatory vaccination for smallpox. Given these successful anti-vax efforts, Frandsen asks, "is it any wonder that smallpox is one thousand times more prevalent in Montana than in Massachusetts in proportion to population?" One cannot help but analogize to today's COVID-19 statistics on the dramatically higher rate of hospitalization among the unvaccinated (e.g. Scobie et al., 2021). The comparison is all the more impactful (and disheartening) given the severity of smallpox as a disease; its elimination in 1977, together with the near elimination of other dangerous viral human diseases (poliomyelitis and measles), primarily via vaccination efforts (Hopkins, 2013); and the discouraging number of high profile celebrities (various forms of influencers, in modern parlance), some of whom I for one previously considered admirable figures, who actively promulgate positions that directly contradict objective and reproducible observation and embrace blatantly scientifically untenable beliefs (the vaccine-autism link serves as a prime example).

While much is made of the idea that education-based improvements in critical thinking ability can render its practitioners less susceptible to unwarranted conspiracy theories and beliefs (Lantian et al., 2021), the situation becomes more complex when we consider how it is that presumably highly educated practitioners, e.g. medical doctors, can become conspiracists (ignoring for the moment the more banal, and likely universal, motives of greed and the need to draw attention to oneself). As noted, many is the conspiracist who considers themselves a "critical freethinker" (see Lantian et al). The fact that they fail to recognize the flaws in their own thinking leads us to ask: what are they missing?

A point rarely considered is what we might term "disciplinary literacy." That is, do the members of an audience have the background information necessary to question the foundational presumptions associated with an observation? Here I draw on personal experience. I have (an increasingly historical) interest in the interactions between intermediate filaments and viral infection (Doedens et al., 1994; Murti et al., 1988). In 2020, I found myself superficially involved with studies by colleagues here at the University of Colorado Boulder; they reproduced the ability of hydroxychloroquine to inhibit coronavirus replication in cultured cells. Nevertheless, and in the face of various distortions, it quickly became apparent that hydroxychloroquine was ineffective for treating SARS-CoV-2 infection in humans. So, what disciplinary facts did one need to understand this apparent contradiction (which appears to have fueled unreasonable advocacy of hydroxychloroquine treatment for COVID)? The paper by Ou et al. (2021) provides a plausible mechanistic explanation. The process of in vitro infection of various cultured cells appears to involve endocytosis, followed by proteolytic events leading to the subsequent movement of viral nucleic acid into the cytoplasm, a prerequisite for viral replication. Hydroxychloroquine acts by blocking the acidification of the endosome, which inhibits the capsid cleavage reaction and the subsequent cytoplasmic transport of the virus's nucleic acid genome (see figure 1, Ou et al. 2021). In contrast, in vivo infection involves a cell surface protease, rather than endocytosis, and is therefore independent of endosomal acidification. Without a (disciplinary) understanding of the various mechanisms involved in viral entry, and their relevance in different experimental contexts, it remains a mystery why hydroxychloroquine treatment blocks viral replication in one system (in vitro, cultured cells) and not another (in vivo).

In the context of science education and how it can be made more effective, it appears that helping students understand underlying cellular processes, experimental details, and their often substantial impact on observed outcomes is central – in contrast to the common focus (in many courses) on the memorization of largely irrelevant details. Understanding how one can be led astray by the differences between experimental systems (and by inadequate sample sizes) is essential. One cannot help but think of how mouse studies of diseases such as sepsis (Kolata, 2013) and Alzheimer's (Reardon, 2018) have been haunted by the assumption that systems that differ in physiologically significant details are good models for human disease and for the development of effective treatments. Helping students understand how we come to evaluate observations, and the molecular and physiological mechanisms involved, should be the primary focus of a modern education in the biological sciences, since it builds the disciplinary literacy needed to distinguish reasoned argument from anti-scientific propaganda.

Acknowledgement: Thanks to Qing Yang for bringing the Ou et al paper to my attention.  

Literature cited:
Shattuck, R. (1996). Forbidden knowledge: from Prometheus to pornography. New York: St. Martin’s Press.

Doedens, J., Maynell, L. A., Klymkowsky, M. W. and Kirkegaard, K. (1994). Secretory pathway function, but not cytoskeletal integrity, is required in poliovirus infection. Arch Virol. suppl. 9, 159-172.

Hopkins, D. R. (2013). Disease eradication. New England Journal of Medicine 368, 54-63.

Kolata, G. (2013). Mice fall short as test subjects for some of humans' deadly ills. New York Times, Feb 11, 2013.

Lantian, A., Bagneux, V., Delouvée, S. and Gauvrit, N. (2021). Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability. Applied Cognitive Psychology 35, 674-684.

Murti, K. G., Goorha, R. and Klymkowsky, M. W. (1988). A functional role for intermediate filaments in the formation of frog virus 3 assembly sites. Virology 162, 264-269.
 
Ou, T., Mou, H., Zhang, L., Ojha, A., Choe, H. and Farzan, M. (2021). Hydroxychloroquine-mediated inhibition of SARS-CoV-2 entry is attenuated by TMPRSS2. PLoS pathogens 17, e1009212.

Reardon, S. (2018). Frustrated Alzheimer’s researchers seek better lab mice. Nature 563, 611-613.

Scobie, H. M., Johnson, A. G., Suthar, A. B., Severson, R., Alden, N. B., Balter, S., Bertolino, D., Blythe, D., Brady, S. and Cadwell, B. (2021). Monitoring incidence of covid-19 cases, hospitalizations, and deaths, by vaccination status—13 US jurisdictions, April 4–July 17, 2021. Morbidity and Mortality Weekly Report 70, 1284.

Higher Education Malpractice: curving grades

If there is one thing that university faculty and administrators could do today to demonstrate their commitment to inclusion – not to mention to teaching and learning over sorting and status – it would be to ban curve-based, norm-referenced grading. Many obstacles exist to the effective inclusion and success of students from underrepresented (and underserved) groups in science and related programs. Students and faculty often, and often correctly, perceive large introductory classes as "weed out" courses that preferentially impact underrepresented students. In the life sciences, many of these courses are "out-of-major" requirements, in which students find themselves taught with relatively little regard to the course's relevance to bio-medical careers and interests. Often such out-of-major requirements spring not from a thoughtful decision by faculty as to their necessity, but from the fact that they are prerequisites for post-graduation admission to medical or graduate school. "In-major" instructors may not even explicitly incorporate or depend upon the materials taught in these out-of-major courses – rare is the undergraduate molecular biology degree program that actually calls on students to use calculus or a working knowledge of physics, despite the fact that such skills may be relevant in certain biological contexts (see Magnetofiction – A Reader's Guide). At the same time, those teaching "out-of-major" courses may overlook the fact that many (and sometimes most) of their students are non-chemistry, non-physics, and/or non-math majors. The result is that those teaching such classes fail to offer a doorway into the subject matter to any but those already comfortable with it. But reconsidering the design and relevance of these courses is no simple matter. Banning grading on a curve, on the other hand, can be implemented overnight (and by fiat if necessary).

So why ban grading on a curve? First and foremost, it would put faculty and institutions on record as valuing student learning outcomes (perhaps the best measure of effective teaching) over the sorting of students into easy-to-judge groups. Second, there simply is no pedagogical justification for curved grading, with the possible exception of providing a kludgy fix for poorly designed examinations and courses. There are more than enough opportunities to sort students based on their motivation, talent, ambition, and "grit," and on the opportunities they seek out and successfully embrace (e.g., through volunteerism, internships, and independent study projects).
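
For readers unfamiliar with the mechanics, the difference between criterion-referenced and norm-referenced (curved) grading is easy to state in code. The sketch below is purely illustrative (the cutoffs and quotas are invented): given a class in which everyone has mastered most of the material, criterion grading awards A's and B's, while the curve still forces someone to fail.

```python
def criterion_grades(scores, cutoffs=(90, 80, 70, 60)):
    """Grade against fixed mastery cutoffs: everyone can, in principle, earn an A."""
    letters = "ABCDF"
    return [letters[sum(s < c for c in cutoffs)] for s in scores]

def curved_grades(scores, quotas=(0.1, 0.2, 0.4, 0.2, 0.1)):
    """Norm-referenced: rank the class, then fill fixed A/B/C/D/F quotas."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    grades = [None] * len(scores)
    start = 0
    for letter, q in zip("ABCDF", quotas):
        count = round(q * len(scores))
        for i in order[start:start + count]:
            grades[i] = letter
        start += count
    return grades

# A class in which everyone has, in fact, mastered most of the material.
scores = [95, 93, 91, 90, 89, 88, 87, 86, 85, 84]
print("criterion:", criterion_grades(scores))  # all A's and B's
print("curved:   ", curved_grades(scores))     # the curve still fails someone
```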

The negative impact of curving can be seen in a recent paper by Harris et al. (Reducing achievement gaps in undergraduate general chemistry …), who report a significant difference in overall student inclusion and subsequent success based on a small grade difference: between a C, which allows a student to proceed with their studies (generally as successfully as those with higher grades), and a C-minus, which requires them to retake the course before proceeding (often driving them out of the major). Because Harris et al. analyzed curved courses, a subset of students cannot escape these effects. And poor grades disproportionately impact underrepresented and underserved groups – they say, explicitly, "you do not belong" rather than "how can I help you learn".

Often naysayers disparage efforts to improve course design as "dumbing down" the course, rather than improving it. In many ways this is analogous to blaming patients for getting sick, or for not responding to treatment, rather than conducting an objective analysis of the efficacy of the treatment. If medical practitioners had maintained this attitude, we would still be bleeding patients and accepting that more than a third are fated to die, rather than seeking effective treatments tailored to patients' actual diseases – the basis of evidence-based medicine. We would have failed to develop antibiotics and vaccines – indeed, we would never have sought them out. Curving grades implies that course design and delivery are already optimal, and that the fate of students is predetermined because only a fixed percentage can possibly learn the material. It is, in an important sense, complacent quackery.

Banning grading on a curve, and labelling it for what it is – educational malpractice – would also change the dynamics of the classroom, and might even foster an appreciation that a good teacher is one with the highest percentage of successful students, e.g. those who are retained in a degree program and graduate in a timely manner (hopefully within four years). Of course, such an alternative evaluation of teaching would reflect a department's commitment to construct and deliver the most engaging, relevant, and effective educational program. Institutional resources might even be used to help departments generate more objective, instructor-independent evaluations of learning outcomes, in part to replace the current practice of student opinion surveys, which are often little more than measures of popularity. We might even see a revolution in which departments compete with one another to maximize student inclusion, retention, and outcomes (perhaps even to the extent of applying pressure on the design and delivery of "out-of-major" required courses offered by other departments).

"All a pipe dream," you might say, but the available data demonstrate that resources spent on rethinking course design, including engagement and relevance, can have significant effects on grades, retention, time to degree, and graduation rates. At the risk of being labeled self-promoting, I offer the following to illustrate the possibilities: working with Melanie Cooper at Michigan State University, we have built such courses in general and organic chemistry and documented their impact; see Evaluating the extent of a large-scale transformation in gateway science courses.

Perhaps we should be encouraging students to seek out legal representation to hold institutions (and instructors) accountable for detrimental practices, such as grading on a curve. There might even come a time when professors and departments would find it prudent to purchase malpractice insurance if they insist on retaining, and charging students for, ineffective educational strategies. (1)

Acknowledgements: Thanks to my daughter Rebecca, who provided edits and legal references, and to Melanie Cooper, who inspired the idea. Educate! image from the Dorian De Long Arts & Music Scholarship site.

(1) One cannot help but wonder if such conduct could ever rise to the level of fraud. See, e.g., Bristol Bay Productions, LLC vs. Lampack, 312 P.3d 1155, 1160 (Colo. 2013) (“We have typically stated that a plaintiff seeking to prevail on a fraud claim must establish five elements: (1) that the defendant made a false representation of a material fact; (2) that the one making the representation knew it was false; (3) that the person to whom the representation was made was ignorant of the falsity; (4) that the representation was made with the intention that it be acted upon; and (5) that the reliance resulted in damage to the plaintiff.”).

Going virtual without a net

Is the coronavirus-driven transition from face-to-face to on-line instruction yet another step toward down-grading instructional quality?

It is certainly a strange time in the world of higher education. In response to the current coronavirus pandemic, many institutions have quickly – sometimes within hours, and primarily by fiat – transitioned from face-to-face to distance (web-based) instruction. After a little confusion, it appears that laboratory courses are included as well, which certainly makes sense. While virtual laboratories can be built (see our own virtual laboratories in biology), they typically fail to capture the social setting of a real laboratory. More to the point, I know of no published studies that have measured the efficacy of such on-line experiences in terms of the ideas and skills students master.

Many instructors (including this one) are being called upon to carry out a radical transformation of instructional practice “on the fly.” Advice is being offered from all sides, from university administrators and technical advisors (see as an example Making Online Teaching a Success).  It is worth noting that much (all?) of this advice falls into the category of “personal empiricism” – suggestions based on various experiences but unsupported by objective measures of educational outcomes – outcomes that include the extent of student engagement as well as clear descriptions of i) what students are expected to have mastered, ii) what they are expected to be able to do with their knowledge, and iii) what they can actually do. Again, to my knowledge there have been few if any careful comparative studies of the learning outcomes achieved via face-to-face versus virtual teaching experiences. Part of the issue is that many studies of teaching strategies (including recent work on what has been termed “active learning” approaches) have failed to clearly define what exactly is to be learned, a necessary first step in evaluating their efficacy.  Are we talking memorization and recognition, or the ability to identify and apply core and discipline-specific ideas appropriately in novel and complex situations?

At the same time, instructors have not had practical training in using the available tools (Zoom, in my case) and have had little in the way of effective support. Even more importantly, there are few published and verified studies to inform what works best in terms of student engagement and learning outcomes. Even if there were clear “rules of thumb” in place to guide the instructor or course designer, there has been neither the time nor the resources needed to implement such changes. The situation is not surprising given that the quality of university-level educational programs rarely attracts critical analysis, or the necessary encouragement, support, and recognition needed to make it a departmental priority (see Making education matter in higher education).  It seems to me that the current situation is not unlike attempting to perform a complicated surgery after being told to watch a 3-minute YouTube video. Unsurprisingly, patient (student learning) outcomes may not be pretty.

Much of what is missing from on-line instructional scenarios is the human connection: the ability of an instructor to pay attention to how students respond to the ideas presented. Typically this involves reading students’ facial expressions and body language, and asking challenging (Socratic) questions – questions that address how the information presented can be used to generate plausible explanations or to predict the behavior of a system. These are interactions that are difficult, if not impossible, to capture in an on-line setting.

While there is much to be said for active engagement/active learning strategies (see Hake 1998, Freeman et al 2014 and Theobald et al 2020), one can easily argue that all effective learning scenarios involve an instructor who is aware of and responsive to students’ pre-existing knowledge. It is also important that the instructor has the willingness (and freedom) to entertain students’ questions and confusions, to clarify (by saying it a different way), and to recognize when it may be necessary to revisit important, foundational ideas and skills – a situation that can necessitate discarding planned materials and “coaching up” students on core concepts and their application. The ability of the instructor to customize instruction “on the fly” is one of the justifications for hiring disciplinary experts in instructional positions: they (presumably) understand the conceptual foundations of the materials they are called upon to present. In its best (Socratic) form, the dialog between student and instructor drives students (and instructors) to develop a more sophisticated and metacognitive understanding of the web of ideas involved in most scientific explanations.

In the absence of an explicit appreciation of the importance of the human interactions between instructor and student, interactions already strained in the context of large enrollment courses, we are likely to find an increase in the forces driving instruction to become more and more about rote knowledge, rather than the higher-order skills associated with the ability to juggle ideas, identifying those needed and discarding those irrelevant to a specific situation.  While I have been trying to be less cynical (not a particularly easy task in the modern world), I suspect that the flurry of advice on how to carry out distance learning is more about avoiding the need to refund student fees than about improving students’ educational outcomes (see Colleges Sent Students Home. Now Will They Refund Tuition?).

A short post-script (17 April 2020): Over the last few weeks I have put together the tools to make the on-line MCDB 4650 Developmental Biology course somewhat smoother for me (and hopefully for the students). I use Keynote (rather than PowerPoint) for slides; in the classroom, the iPad connected wirelessly to the projector, which enabled me to wander around the room. The iOS version of Keynote enables me, and students, to draw on slides. Now that I am tethered, I rely more on pre-class beSocratic activities and the Mirroring360 application to connect my iPad to my laptop for Zoom sessions. I am back to being more interactive with the materials presented. I am also starting to pick students at random to answer questions and provide explanations (since they are quiet otherwise) – hopefully that works. Below (↓) is my setup, including a good microphone, laptop, iPad, and the newly arrived volume on Active Learning.

Conceptual simplicity and mechanistic complexity: the implications of un-intelligent design

Using “Thinking about the Conceptual Foundations of the Biological Sciences” as a jumping-off point. “Engineering biology for real?” by Derek Lowe (2018) is also relevant.

Biological systems can be seen as conceptually simple, but mechanistically complex, with hidden features that make “fixing” them difficult.  

Biological systems are evolving, bounded, non-equilibrium reaction systems. Based on their molecular details, it appears that all known organisms, both extinct and extant, are derived from a single last universal common ancestor, known as LUCA.  LUCA lived ~4,000,000,000 years ago (give or take).  While the steps leading to LUCA are hidden, and its precursors are essentially unknowable (much like the universe before the big bang), we can come to some general and unambiguous conclusions about LUCA itself [see Catchpole & Forterre, 2019].  First, LUCA was cellular and complex, probably more complex than some modern organisms, certainly more complex than the simplest obligate intracellular parasite [Martinez-Cano et al., 2014].  Second, LUCA was a cell with a semi-permeable lipid bilayer membrane. Its boundary layer is semi-permeable because such a system needs to import energy and matter and export waste in order to keep from reaching equilibrium, since equilibrium = death, with no possibility of resurrection. Finally, LUCA could produce offspring through some version of a cell division process. The amazing conclusion is that every cell in your body (and every cell in every organism on the planet) has an uninterrupted connection to LUCA.

So what are the non-equilibrium reactions within LUCA and other organisms doing?  Building up (synthesizing) and degrading various molecules – proteins, nucleic acids, lipids, carbohydrates, and such – the components needed to maintain the membrane barrier while importing materials so that the cell can adapt, move, grow, and divide. This non-equilibrium reaction network has been passed from parent to offspring cells, going back to LUCA. A new cell does not “start up” these reactions; they run continuously throughout the processes of growth and cell division. While fragile, these reaction systems have been running uninterruptedly for billions of years.

There is a second system, more or less fully formed, present in and inherited from LUCA: the DNA-based genetic information storage and retrieval system. The cell’s DNA (its genotype) encodes the “operating system” of the cell. The genotype interacts with and shapes the cell’s reaction systems to produce phenotypes – what the organism looks like and how it behaves, that is, how it reacts to and interacts with the rest of the world.  Because DNA is thermodynamically unstable, the information it contains, encoded in its sequence of nucleotides and read out by the reaction systems, can be altered – it can change (mutate) in response to environmental chemicals, radiation, and other processes, such as errors that occur when DNA is replicated. Once mutated, the change is stable; it becomes part of the genotype.

The mutability of DNA could be seen as a design flaw; you would not want the information in a computer file to be randomly altered over time or when copied. In living systems, however, the mutability of DNA is a feature: together with the effects of mutations on a cell’s reproductive success, it leads to evolutionary change.  Over time, selection converts the noise of mutation into evolutionary adaptations and the diversification of life.

Organisms rarely exist in isolation. Our conceptual picture of LUCA is not complete until we include social interactions (background: aggregative and clonal metazoans). Cells (organisms) interact with one another in complex ways, whether as individuals within a microbial community, as cells within a multicellular organism, or in the context of predator-prey, host-pathogen, and symbiotic interactions. These social processes drive a range of biological behaviors, including ones that, at the individual cell level, can be seen as cooperative and self-sacrificing. The result is the production of even more complex biological structures, from microbial biofilms to pangolins and human beings, and complex societies. The breakdown of such interactions, whether in response to pathogens, environmental insults, mutations, politicians’ narcissistic behaviors, or the madness of crowds, underlies a wide range of aberrant and pathogenic outcomes – after all, cancer is based on the anti-social behavior of tumor cells.

The devil is in the details – from the conceptual to the practical: What a biologist/bioengineer rapidly discovers when called upon to fix the effects of a mutation, defeat a pathogen, or repair a damaged organ is that biological systems are mechanistically more complex than originally thought, and by no means intelligently designed. There are a number of sources of this biological complexity. First, and most obviously, modern cells (as well as LUCA) are not intelligently designed systems – they are the products of evolutionary processes, through which noise is captured in useful forms. These systems emerge, rather than being imposed (as is the case with humanly designed objects). Second, within the cell there is a high concentration of molecules that interact with one another, often in unexpected ways.  As examples of molecular interactions that my lab has worked on: the protein β-catenin, originally identified as playing a role in cell adhesion and cytoskeletal organization, has a second role as a regulator of gene expression (link); the protein Chibby, a component of the basal body of cilia (a propeller-like molecular machine involved in moving fluids), has a second role as an inhibitor of β-catenin’s gene regulatory activity (link); while centrin-2, another basal body component, plays a role in the regulation of DNA repair and gene expression (link).  These are interactions that emerged during the process of evolution – they work, so they are retained.

More evidence of the complexity of biological systems comes from studies that examined the molecular targets of specific anti-cancer drugs (see Lowe 2019. Your Cancer Targets May Not Be Real).  The authors of these studies used the CRISPR-Cas9 system to knock out the gene encoding a drug’s purported target; they found that the drug continued to function (see Lin et al., 2019).  At the same time, a related study raises a note of caution.  Smits et al (2019) examined the effects of what were expected to be CRISPR-Cas9-induced “loss of function” mutations. They found expression of the (mutated) targeted gene, either from alternative promoters (RNA synthesis start sites) or from alternative translation start sites. The results were mutant polypeptides that retained some degree of wild-type activity.  Finally, a phenomenon bearing some resemblance to these CRISPR effects was found with mutations that induce what is known as nonsense-mediated decay.  A protection against the synthesis of aberrant (toxic) mutant polypeptides, one effect of nonsense-mediated decay is the degradation of the mutant RNA.  As described by Wilkinson (2019. Genetic paradox explained by nonsense), the resulting RNA fragments can be transported back into the nucleus, where they interact with proteins involved in the regulation of gene expression, leading to the expression of genes related to the originally mutated gene. The expression of these related genes can modify the phenotype of the original mutation.

Biological systems are further complicated by the fact that the folding of polypeptides and the assembly of proteins (background: polypeptides and proteins) are mediated by a network of chaperone proteins that act to facilitate correct, and suppress incorrect, folding, interactions, and assembly. This chaperone network helps explain the ability of cells to tolerate a range of genetic variations; it renders cells more adaptive and “non-fragile”. Some chaperones are constitutively expressed and inherited when cells divide; the synthesis of others is induced in response to environmental stresses, such as increased temperatures (heat shock). The result is that, in some cases, the phenotypic effects of a mutation on a target protein may not be primarily due to the absence of the mutated protein, but rather to secondary effects – effects that can be significantly ameliorated by the expression of molecular chaperones (discussed in Klymkowsky. 2019 Filaments and phenotypes).

The expression of chaperones, along with other genetic factors, complicates our understanding of what a particular gene product does, or of how variations (polymorphisms) in a gene can influence human health.  This is one reason why genetic background effects are important when drawing conclusions as to the health (or phenotypic) effects of inheriting a particular allele (Schrodi et al., 2014. Genetic-based prediction of disease traits: prediction is very difficult, especially about the future).

As one more, but certainly not the last, complexity, there is the phenomenon by which “normal” cells interact with cells that are discordant with respect to some behavior (Di Gregorio et al 2016).1  These cells, termed “fit and unfit” or “winners and losers” (clearly socially inappropriate and unfortunate terms), interact in unexpected ways. The eccentricity of these cells can be due to various stochastic processes, including monoallelic expression (Chess, 2016), that lead to clones that behave differently (background: Biology education in the light of single cell/molecule studies).  Akieda et al (2019) describe the presence of cells that respond inappropriately to a morphogen gradient during embryonic development. These eccentric cells, “out of step” with their neighbors, are induced to die.  Experimentally blocking their execution leads to defects in subsequent development.  Similar competitive effects are described by Ellis et al (2019. Distinct modes of cell competition shape mammalian tissue morphogenesis). That said, not all eccentric behaviors lead to cell death.  In some cases the effect is more like ostracism: cells responding inappropriately migrate to a more hospitable region (Xiong et al., 2013).

All of which is to emphasize that while conceptually simple, biological systems, and their responses to mutations and other pathogenic insults, are remarkably complex and unpredictable – a byproduct of the unintelligent evolutionary processes that produced them.

  1. Adapted from an F1000 review recommendation.

Remembering the past and recognizing the limits of science …

A recent article in the Guardian reports on a debate at University College London (1) on whether to rename buildings because the people honored harbored odious ideological and political positions. Similar debates and decisions, in some cases involving unacceptable and abusive behaviors rather than ideological positions, have occurred at a number of institutions (see Calhoun at Yale, Sackler in NYC, James Watson at Cold Spring Harbor, Tim Hunt at the MRC, and sexual predators within the National Academy of Sciences). These debates raise important and sometimes troubling issues.

When a building is named after a scientist, it is generally in order to honor that person’s scientific contributions. The scientist’s ideological opinions are rarely considered explicitly, although they may influence the decision at the time.  In general, scientific contributions are timeless in that they represent important steps in the evolution of a discipline, often by establishing a key observation, idea, or conceptual framework upon which subsequent progress is based – they are historically important.  In this sense, whether a scientific contribution was correct (as we currently understand the natural world) is less critical than what that contribution led to. The contribution marks a milestone or a turning point in a discipline, understanding that the efforts of many underlie disciplinary progress and that those contributors made it possible for others to “see further.” (2)

Since science is not about recognizing or establishing a single unchanging capital-T-Truth, but rather about developing an increasingly accurate model for how the world works, it is constantly evolving and open to revision.  Working scientists are not particularly upset when new observations lead to revisions to or the abandonment of ideas or the addition of new terms to equations.(3)

Compare that to the situation in the ideological, political, or religious realms.  A new translation or interpretation of a sacred text can provoke schism and remarkably violent responses between respective groups of believers; the closer the groups are to one another, the more horrific the violence that often emerges.  In contrast, over the long term, scientific schools of thought resolve, often merging with one another to form unified disciplines. From my own perspective, and notwithstanding the temptation to generate new sub-disciplines (in part in response to funding factors), all of the life sciences have collapsed into a unified evolutionary/molecular framework.  All scientific disciplines tend to become, over time, consistent with, although not necessarily deducible from, one another, particularly when the discipline respects and retains connections to the real (observable) world.(4)  How different from the political and ideological.

The historical progression of scientific ideas is dramatically different from that of political, religious, or social mores.  No matter what some might claim, the modern quantum mechanical view of the atom bears little meaningful similarity to the ideas of the cohort that included Leucippus and Democritus.  There is progress in science.  In contrast, various belief systems rarely abandon their basic premises.  A politically right- or left-wing ideologue might well find kindred spirits in the ancient world.  There were genocidal racists, theists, and nationalists in the past and there are genocidal racists, theists, and nationalists now.  There were (limited) democracies then, as there are (limited) democracies now; monarchical, oligarchical, and dictatorial political systems then and now; theistic religions then and now. Absolutist ideals of innate human rights, then as now, are routinely sacrificed for a range of mostly self-serving or politically expedient reasons.  Advocates of rule by the people repeatedly install repressive dictatorships. The authors of the United States Constitution declared the sacredness of human rights and then legitimized slavery. “The Bible … posits universal brotherhood, then tells Israel to kill all the Amorites.” (Phil Christman). The eugenics movement is a good example: for the promise of a genetically perfect future, existing people were treated inhumanely – just another version of apocalyptic (ends justify the means) thinking.

Ignoring the simpler case of not honoring criminals (sexual and otherwise), most calls for removing names from buildings are based on the odious ideological positions espoused by the honored – typically some version of racist, nationalistic, or sexist ideologies.  The complication comes from the fact that people are complex, shaped by the context within which they grow up, their personal histories and the dominant ideological milieu they experienced, as well as their reactions to it.  But these ideological positions are not scientific, although a person’s scientific worldview and their ideological positions may be intertwined. The honoree may claim that science “says” something unambiguous and unarguable, often in an attempt to force others to acquiesce to their perspective.  A modern example would be arguments about whether climate is changing due to anthropogenic factors, a scientific topic, and what to do about it, an economic, political, and perhaps ideological question.(5)

So what to do?  To me, the answer seems reasonably obvious – assuming that the person’s contribution was significant enough, we should leave the name in place and use the controversy to consider why they held their objectionable beliefs and more explicitly why they were wrong to claim scientific justification for their ideological (racist / nationalist / sexist / socially prejudiced) positions.(6)  Consider explicitly why an archeologist (Flinders Petrie), a naturalist (Francis Galton), a statistician (Karl Pearson), and an advocate for women’s reproductive rights (Marie Stopes) might all support the non-scientific ideology of eugenics and forced sterilization.  We can use such situations as a framework within which to delineate the boundaries between the scientific and the ideological. 

Understanding this distinction is critical and is one of the primary justifications for requiring people not necessarily interested in science or science-based careers to take science courses.  Yet all too often these courses fail to address the constraints of science, the difference between scientific conclusions and political or ideological opinions, and the implications of scientific models.  I would argue that unless students (and citizens) come to understand what constitutes a scientific idea or conclusion, and what reflects a political or ideological position couched in scientific or pseudo-scientific terms, they are not learning what they need to know about science or its place in society.  That science is used as a proxy for Truth writ large is deeply misguided. It is much more important to understand how science works than to remember the number of phyla or the names of the amino acids, to be able to calculate the pH of a solution, or to understand the processes going on at the center of a galaxy or the details of a black hole’s behavior.  While sometimes harmless, misunderstanding science and how it is used socially can have traumatic consequences, such as drawing harmful conclusions about individuals from statistical generalizations about populations, avoidable deaths from measles, and the forced “eugenic” sterilization of people deemed defective.  We should seek out and embrace opportunities to teach about these issues, even if it means we name buildings after imperfect people.

footnotes:

  1. The location of some of my post-doc work.
  2. In the words of Isaac Newton, “If I have seen further than others, it is by standing upon the shoulders of giants.”
  3.  Unless, of course, the ideas and equations being revised or abandoned are one’s own. 
  4.  Perhaps the most striking exception occurs in physics on the subjects of quantum mechanics and relativity, but as I am not a physicist, I am not sure about that. 
  5.  Perhaps people are “meant” to go extinct. 
  6.  The situation is rather different outside of science, because the reality of progress is more problematic and past battles continue to be refought.  Given the history of Reconstruction and the Confederate “Lost Cause” movement [see PBS’s Reconstruction] following the American Civil War, monuments to defenders of slavery, no matter how admirable they may have been in terms of personal bravery and such, reek of implied violence, subjugation, and repression, particularly when the person honored went on to found an institution dedicated to racial hatred and violent intimidation [link]. There would seem little doubt that a monument in honor of a Nazi needs to be eliminated and replaced by one to their victims or to those who defeated them.

Is it possible to teach evolutionary biology “sensitively”?

Michael Reiss, a professor of science education at University College London and an Anglican priest, suggests that “we need to rethink the way we teach evolution,” largely because conventional approaches can be unduly confrontational and “force religious children to choose between their faith and evolution” or result in students who “refuse to engage with a lesson.” He suggests that a better strategy would be akin to those used to teach a range of “sensitive” subjects “such as sex, pornography, ethnicity, religion, death studies, terrorism, and others,” and could “help some students to consider evolution as a possibility who would otherwise not do so.” [link to his original essay and a previous post on teaching evolution: Go ahead and teach the controversy].

There is no doubt that an effective teacher attempts to present materials sensitively; it is the rare person who will listen to someone who “teaches” ideas in a hostile, alienating, or condescending manner. That said, it can be difficult to avoid the disturbing implications of scientific ideas, implications that can be a barrier to their acceptance. The scientific conclusion that males and females are different but basically the same can upset people on various sides of the theo-political spectrum. 

In point of fact, an effective teacher – a teacher who encourages students to question their long-held, or perhaps better put, familial or community beliefs – can cause serious social push-back: Trouble with a capital T.  It is difficult to imagine a more effective teacher than Socrates (~470-399 BCE). Socrates “was found guilty of ‘impiety’ and ‘corrupting the young’, sentenced to death” in part because he was an effective teacher (see Socrates was guilty as charged).  In a religious and political context, challenging accepted Truths (again with a capital T) can be a crime.  In Socrates’ case, “Athenians probably genuinely felt that undesirables in their midst had offended Zeus and his fellow deities,” and that, “Socrates, an unconventional thinker who questioned the legitimacy and authority of many of the accepted gods, fitted that bill.”

So we need to ask of scientists and science instructors: does the presentation of a scientific, that is, a naturalistic and non-supernatural, perspective in and of itself represent an insensitivity to those with a supernatural belief system? Here it is worth noting a point made by the philosopher John Gray: such systems extend beyond those based on a belief in god(s); they include those who believe, with apocalyptic certainty, in any of a number of Truths, ranging from the triumph of a master race and the forced sterilization of the unfit to the dictatorship of the proletariat and history’s end in a glorious capitalist and technological utopia. Is a science or science instruction that is “sensitive” to, that is, uncritical of and not upsetting to, those who hold such beliefs even possible?

My original impression is that one’s answer to this question is likely to be determined by whether one considers science a path to Truth, with a purposeful capital T, or holds that the goal of scientists is to build a working understanding of the world around and within us.  Working scientists, and particularly biologists, who must daily confront the implications of apparently unintelligently designed organisms (a consequence of the ways evolution works), are well aware that absolute certainty is counterproductive. Nevertheless, the proven explanatory and technological power of the scientific enterprise cannot help but reinforce the strong impression that there is some deep link between scientific ideas and the way the world really works.  And while some scientists have advocated unscientific speculations (think multiverses and cosmic consciousness), the truth, with a small t, of scientific thinking is all around us.

Photograph of the Milky Way by Tim Carl photography, used by permission 

 A science-based appreciation of the unimaginable size and age of the universe, taken together with compelling evidence for the relatively recent appearance of humans (Homo sapiens from their metazoan, vertebrate, tetrapod, mammalian, and primate ancestors) cannot help but impact our thinking as to our significance in the grand scheme of things (assuming that there is such a, possibly ineffable, plan)(1). The demonstrably random processes of mutation and the generally ruthless logic by which organisms survive, reproduce, and evolve, can lead even the most optimistic to question whether existence has any real meaning.  

Consider, as an example, the potential implications of the progress being made in terms of computer-based artificial intelligence, together with advances in our understanding of the molecular and cellular connection networks that underlie human consciousness and self-consciousness. It is a small step to conclude, implicitly or explicitly, that humans (and all other organisms with a nervous system) are “just” wet machines that can (and perhaps should) be controlled and manipulated. The premise, the “self-evident truth”, that humans should be valued in and of themselves, and that their rights should be respected (2) is eroded by the ability of machines to perform what were previously thought to be exclusively human behaviors. 

Humans and their societies have, after all, been around for only a few tens of thousands of years.  During this time, human social organizations have developed from small wandering bands, shaped by evolutionary kin and group selection processes, into various social systems, ranging from more or less functional democracies and pseudo-democracies (including our own growing plutocracy) to dictatorships, some religion-based, and totalitarian police states.  Whether humans have a long-term future (compared to the millions of years that dinosaurs dominated life on Earth) remains to be seen – although we can be reasonably sure that the Earth, and many of its non-human inhabitants, will continue to exist and evolve for millions to billions of years, at least until the death of the Sun.

So how do we teach scientific conclusions and their empirical foundations, which combine to argue that science represents how the world really works, without upsetting the most religiously and politically fanatical among us – those who most vehemently reject scientific thinking because they are the most threatened by its apparently unavoidable implications? The answer is open to debate, but to my mind it involves teaching students (and encouraging the public) to distinguish empirically-based, and so inherently limited, observations, and the logical, coherent, and testable scientific models they give rise to, from unquestionable TRUTH- and revelation-based belief systems. Perhaps we need to focus explicitly on the value of science rather than its “Truth”: to reinforce what science is ultimately for, what justifies society’s support for it – namely, to help reduce human suffering and (where it makes sense) to enhance the human experience, goals anchored in the perhaps logically unjustifiable, but nevertheless essential, acceptance of the inherent value of each person.

  1. Apologies to “Good Omens”
  2. For example, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.” 

Gradients & Molecular Switches: a biofundamentalist perspective

Embryogenesis is based on a framework of social (cell-cell) interactions, initial and early asymmetries, and cascading cell-cell signaling and gene regulatory networks (DEVO posts one, two, & three). The result is the generation of embryonic axes, germ layers (ectoderm, mesoderm, endoderm), various organs and tissues (brains, limbs, kidneys, hearts, and such) and their characteristic cell types, their patterning, and their coordination into a functioning organism. It is well established that all animals share a common ancestor (hundreds of millions of years ago) and that a number of molecular modules were already present in that common ancestor.

At the same time, evolutionary processes are, and need to be, flexible enough to generate the great diversity of organisms, with their various adaptations to particular life-styles. The extent of both conservation and flexibility (new genes, new mechanisms) in developmental systems is, however, surprising. Perhaps the most striking evidence for the depth of this conservation was supplied by the discovery of the organization of the Hox gene cluster in the fruit fly Drosophila and in the mouse (and other vertebrates). In both, the Hox genes are arranged in a common genomic order and expressed in a common pattern. But as noted by Denis Duboule (2007), Hox gene organization is often presented in textbooks in a distorted manner (↓).

[figure: Hox gene cluster variation]

The Hox gene clusters of vertebrates are compact, but are split, disorganized, and even “atomized” in other types of organisms. Similarly, a process that might appear foundational, the role of the Bicoid gradient in the early fruit fly embryo (a standard topic in developmental biology textbooks), is in fact restricted to a small subset of flies (Stauber et al., 1999). New genes can be generated through well-defined processes, such as gene duplication and divergence, or they can arise de novo out of sequence noise (Carvunis et al., 2012; Zhao et al., 2014 – see Van Oss & Carvunis 2019. De novo gene birth). Comparative genomic analyses can reveal the origins of specific adaptations (see Stauber et al., 1999).  The result is that organisms as closely related to each other as the great apes (including humans) have significant species-specific genetic differences (see Florio et al., 2018; McLean et al., 2011; Sassa, 2013 and references therein) as well as common molecular and cellular mechanisms.

A universal (?) feature of developing systems – gradients and non-linear responses: There is a predilection to find (and even more to teach) simple mechanisms that attempt to explain everything (witness the distortion of the Hox cluster, above) – a form of physics “theory of everything” envy.  But the historic nature, evolutionary plasticity, and need for regulatory robustness generally lead to complex and idiosyncratic responses in biological systems.  Biological systems are not “intelligently designed” but rather cobbled together over time through noise (mutation) and selection (Jacob, 1977)(see blog post). 
That said, a common (universal?) developmental process appears to be the transformation of asymmetries into unambiguous cell fate decisions. Such responses are based on threshold events controlled by a range of molecular behaviors, leading to discrete gene expression states. We can approach the question of how such decisions are made from both an abstract and a concrete perspective. Here I outline my initial approach – I plan to introduce organism-specific details as needed.  I start with the response to a signaling gradient, such as that found in many developmental systems, including the vertebrate spinal cord (top image: Briscoe and Small, 2015) and the early Drosophila embryo (Lipshitz, 2009)(↓).
[figure: gradients and cell fate decisions (Briscoe and Small, 2015)]

[figure: Bicoid gradient (Lipshitz, 2009)]

We begin with a gradient in the concentration of a “regulatory molecule” (the regulator).  The shape of the gradient depends upon the sites and rates of synthesis, transport away from these sites, and turnover (degradation and/or inactivation). We assume, for simplicity’s sake, that the regulator directly controls the expression of target gene(s). Such a molecule binds in a sequence-specific manner to regulatory sites (there could be a few or hundreds) and leads to the activation (or inhibition) of the DNA-dependent RNA polymerase (polymerase), which generates RNA molecules complementary to one strand of the DNA. Both the binding of the regulator and of the polymerase are stochastic processes, driven by diffusion, molecular collisions, and binding interactions.(1)
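For the simplest case – a localized source, diffusion away from it, and uniform turnover – this dependence can be made explicit. At steady state, such a “synthesis-diffusion-degradation” gradient takes an exponential form (a standard result; the symbols here are generic placeholders, not values for any particular system):

$$\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}} - kC \;\;\Longrightarrow\;\; C_{ss}(x) = C_{0}\, e^{-x/\lambda}, \qquad \lambda = \sqrt{D/k}$$

where C is the regulator concentration, D its diffusion coefficient, k its turnover rate constant, and λ the decay length over which the gradient falls by a factor of e; faster turnover (larger k) yields a steeper gradient.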

Let us now consider the response of target gene(s) as a function of cell position within the gradient.  We might (naively) expect the rate of target gene expression to be a simple function of regulator concentration: for an activator, where the gradient is high, target gene expression would be high; where it is low, expression would be low; in between, expression would be proportional to regulator concentration.  But generally we find something different: the expression of target genes is non-uniform, that is, there are thresholds in the gradient. On one side of the threshold concentration the target gene is completely off (not expressed), while on the other side it is fully on (maximally expressed).  The target gene responds as if controlled by an on-off switch. How do we understand the molecular basis of this behavior?
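To see how switch-like behavior can arise from cooperative (multi-site) binding, it helps to play with a toy model. The Python sketch below uses a Hill function, a standard phenomenological description of cooperativity; all concentrations and parameter values are invented for illustration, and n = 8 simply stands in for “strongly cooperative”:

```python
import numpy as np

# Toy model: target-gene expression as a Hill function of regulator
# concentration. n = 1 gives a graded response; large n gives a switch.
# All numbers are illustrative, not measurements from any real system.

def expression(regulator, K=1.0, n=1):
    """Fractional expression; K is the threshold (half-maximal) concentration."""
    return regulator**n / (K**n + regulator**n)

# A hypothetical exponential gradient across a field of 10 cells.
positions = np.arange(10)
gradient = 3.0 * np.exp(-positions / 3.0)   # regulator concentration per cell

for n in (1, 8):
    response = expression(gradient, K=1.0, n=n)
    print(f"n={n}: " + " ".join(f"{r:.2f}" for r in response))
# With n = 1, expression declines gradually with position; with n = 8,
# cells above the threshold are essentially "on" and those below are "off".
```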

Distinct mechanisms are used in different systems, but we will consider a system from the gastrointestinal bacterium E. coli that students may already be familiar with: the genes that enable E. coli to digest the mammalian milk sugar lactose.  They encode a protein needed to import lactose into the bacterial cell and an enzyme needed to break lactose down so that it can be metabolized.  Given the energetic cost of synthesizing these proteins, it is in the bacterium’s adaptive self-interest to synthesize them only when lactose is present at sufficient concentrations in the environment.  The response is functionally similar to that associated with quorum sensing, which is also governed by threshold effects. Similarly, cells respond to the concentration of regulator molecules (in a gradient) by turning on specific genes in specific domains, rather than uniformly.

Now let us look in a little more detail at the behavior of the lactose utilization system in E. coli, following an analysis by Vilar et al (2003)(2).  At an extracellular lactose concentration below the threshold, the system is off.  If we increase the extracellular lactose concentration above threshold, the system turns on: the lactose permease and β-galactosidase proteins are made, and lactose can enter the cell and be broken down to produce metabolizable sugars.  By looking at individual cells, we find that they transition, apparently stochastically, from off to on (→), but whether they stay on depends upon the extracellular lactose concentration. We can define a concentration, the maintenance concentration, below the threshold, at which “on” cells remain on, while “off” cells remain off.

The circuitry of the lactose system is well defined (Jacob and Monod, 1961; Lewis, 2013; Monod et al., 1963)(↓).  The lacI gene encodes the lactose operon repressor protein; it is expressed constitutively at a low level, binds to sequences in the lac operon, and inhibits transcription.  The lac operon itself contains three genes whose expression is driven by a constitutively active promoter.  lacY encodes the permease, while lacZ encodes β-galactosidase.  β-galactosidase has two functions: it catalyzes the reaction that transforms lactose into allolactose, and it cleaves lactose into the metabolically useful sugars glucose and galactose. Allolactose is an allosteric modulator of the lac repressor protein; if allolactose is present, it binds to lac repressor proteins and inactivates them, allowing lac operon expression.

The cell normally contains only ~10 lactose repressor proteins. Periodically (stochastically), even in the absence of lactose (and so of its derivative allolactose), the lac operon promoter region is free of repressor proteins, and the lactose operon is briefly expressed – a few LacY and LacZ polypeptides are synthesized (↓).  This noisy leakiness in the regulation of the lac operon allows the cell to respond if lactose happens to be present: some lactose molecules enter the cell through the permease and are converted to allolactose by β-galactosidase.  Allolactose is an allosteric effector of the lac repressor; when present, it binds to and inactivates the lac repressor protein so that it no longer binds to its target sequences (the operator or “O” sites).  In the absence of repressor binding, the lac operon is expressed.  If lactose is not present, the lac operon is inhibited and the LacY and LacZ proteins disappear from the cell through turnover or growth-associated dilution.
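The loop just described – leaky expression → permease → inducer → repressor inactivation → more expression – is a positive feedback loop, and positive feedback is what generates the threshold and maintenance behaviors. Here is a deliberately crude Python cartoon of that logic (not the Vilar et al model itself; every rate constant is invented): at low external lactose both histories end “off”, at high lactose both end “on”, and in between a cell’s state depends on where it started:

```python
import random

# Cartoon of the lac positive-feedback loop (not the Vilar et al. model;
# all rate constants invented). The single state variable y is the number
# of permease (LacY) molecules: more permease -> more internal inducer ->
# fewer active repressors -> more permease synthesis.

def simulate(lactose_out, y0, steps=20000, dt=0.01, seed=0):
    rng = random.Random(seed)
    y = y0
    for _ in range(steps):
        inducer = lactose_out * y                          # import scales with permease
        active_repressor = 1.0 / (1.0 + (inducer / 40.0) ** 2)
        synthesis = 0.05 + 5.0 * (1.0 - active_repressor)  # leaky + induced
        degradation = 0.05 * y                             # turnover / dilution
        # crude stochastic birth-death update (a stand-in for a full
        # Gillespie simulation)
        if rng.random() < synthesis * dt:
            y += 1
        if rng.random() < degradation * dt:
            y = max(y - 1, 0)
    return y

for lactose in (0.2, 1.0, 5.0):
    off_start = simulate(lactose, y0=0, seed=1)
    on_start = simulate(lactose, y0=100, seed=2)
    print(f"lactose={lactose}: started off -> {off_start}, started on -> {on_start}")
```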

Setting the threshold concentration for various signal-regulated decisions often involves homeostatic processes that oppose the signaling response. The binding and activation of regulators can involve cooperative interactions between molecular components, as well as both positive and negative feedback effects.

In the case of patterning a tissue, in terms of regional responses to a signaling gradient, there can be multiple regulatory thresholds for different genes, as well as indirect effects, in which the initiation of expression of one set of target genes impacts the expression of subsequent sets of genes.  One widely noted mechanism, known as reaction-diffusion, was suggested by the English mathematician Alan Turing (see Kondo and Miura, 2010); it postulates a two-component system. One component is an activator of gene expression which, in addition to its various targets, positively regulates its own expression. The second component is a repressor of the first.  Both regulatory molecules are released by the signaling cell or cells; the repressor diffuses away from the source faster than the activator does.  The result can be a domain of target gene expression (where the concentration of activator is sufficient to escape repression), surrounded by a zone in which expression is inhibited (where the repressor concentration is sufficient to inhibit the activator).  Depending upon the geometry of the system, this can result in discrete regions (dots or stripes) of primary target gene expression (see Sheth et al., 2012).  In real systems there are often multiple gradients present; their relative orientations can produce a range of patterns.
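Turing’s idea is easy to caricature in code. The sketch below (Python/NumPy, using Gierer-Meinhardt-style kinetics as a stand-in for Turing’s original equations; all parameters are invented) follows a self-activating activator and a faster-diffusing inhibitor on a ring of 200 cells – small random fluctuations grow into a stable, periodic pattern:

```python
import numpy as np

# 1-D caricature of a Turing reaction-diffusion system (Gierer-Meinhardt-
# style kinetics, invented parameters): a self-activating activator (a)
# and a faster-diffusing inhibitor (h) that the activator induces.

n, dt, dx = 200, 0.01, 1.0
Da, Dh = 0.5, 10.0                          # inhibitor diffuses 20x faster
rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(n)     # small noise seeds the pattern
h = np.ones(n)

def laplacian(f):
    # discrete Laplacian on a ring (periodic boundary conditions)
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for _ in range(100_000):
    da = a * a / h - a + Da * laplacian(a)  # self-activation, decay, diffusion
    dh = a * a - h + Dh * laplacian(h)      # induced by a, decays, diffuses
    a += dt * da
    h += dt * dh
    h = np.maximum(h, 1e-6)                 # keep the denominator positive

# Cells where the activator ends up high form discrete, regularly spaced
# peaks - local activation plus long-range inhibition yields a pattern.
print("".join("#" if x > a.mean() else "." for x in a))
```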

The point of all of this is that when we approach a particular system, we need to consider the specific mechanisms involved.  Typically these have been selected to produce particular phenotypes, but also to be robust, in the sense that they must produce the same patterns even when the system in which they operate is subject to perturbations, such as differences in embryo/tissue size (due to differences in cell division/growth rates), temperature, and other environmental variables.

note: figures returned – updated 13 November 2020.  

Footnotes:

  1. While stochastic (random), these processes can still be predictable.  A classic example involves the decay of an unstable isotope (atom), which is predictable at the population level but unpredictable at the level of an individual atom – see the short simulation following these footnotes.  Similarly, in biological systems, the binding and unbinding of molecules to one another, such as a protein transcription regulator to its target DNA sequence, is stochastic but can be predictable in a large enough population.
  2. and presented in biofundamentals ( pages 216-218). 
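To make footnote 1 concrete, here is a minimal Python simulation of the isotope example (the per-step decay probability is arbitrary): no individual “atom” is predictable, yet the population’s half-life is:

```python
import random

# Each "atom" decays independently with probability p per time step.
# Individual decay times are unpredictable, but the population's
# half-life converges on ln(2)/p.
random.seed(42)
p, n0 = 0.05, 10_000

survivors, t = n0, 0
while survivors > n0 // 2:
    survivors = sum(1 for _ in range(survivors) if random.random() > p)
    t += 1

print(f"observed half-life ~ {t} steps; predicted ln(2)/p = {0.693 / p:.1f} steps")
```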

literature cited: 

Briscoe & Small (2015). Morphogen rules: design principles of gradient-mediated embryo patterning. Development 142, 3996-4009.

Carvunis et al  (2012). Proto-genes and de novo gene birth. Nature 487, 370.

Duboule (2007). The rise and fall of Hox gene clusters. Development 134, 2549-2560.

Florio et al (2018). Evolution and cell-type specificity of human-specific genes preferentially expressed in progenitors of fetal neocortex. eLife 7.

Jacob  (1977). Evolution and tinkering. Science 196, 1161-1166.

Jacob & Monod (1961). Genetic regulatory mechanisms in the synthesis of proteins. Journal of Molecular Biology 3, 318-356.

Kondo & Miura (2010). Reaction-diffusion model as a framework for understanding biological pattern formation. Science 329, 1616-1620.

Lewis (2013). Allostery and the lac Operon. Journal of Molecular Biology 425, 2309-2316.

Lipshitz (2009). Follow the mRNA: a new model for Bicoid gradient formation. Nature Reviews Molecular Cell Biology 10, 509.

McLean et al  (2011). Human-specific loss of regulatory DNA and the evolution of human-specific traits. Nature 471, 216-219.

Monod, Changeux & Jacob (1963). Allosteric proteins and cellular control systems. Journal of Molecular Biology 6, 306-329.

Sassa (2013). The role of human-specific gene duplications during brain development and evolution. Journal of Neurogenetics 27, 86-96.

Sheth et al (2012). Hox genes regulate digit patterning by controlling the wavelength of a Turing-type mechanism. Science 338, 1476-1480.

Stauber et al (1999). The anterior determinant bicoid of Drosophila is a derived Hox class 3 gene. Proceedings of the National Academy of Sciences 96, 3786-3789.

Vilar et al (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.

Zhao et al (2014). Origin and spread of de novo genes in Drosophila melanogaster populations. Science 343, 769-772.

Establishing Cellular Asymmetries: a biofundamentalist perspective

[21st Century DEVO-3]  Embryonic development is the process by which a fertilized egg becomes an independent organism, an organism capable of producing functional gametes, and so a new generation. In an animal, this process generally involves substantial growth and multiple rounds of mitotic cell division; the resulting organism, a clone of the single-celled zygote, contains hundreds, thousands, millions, billions, or trillions of cells [link]. These dividing, migrating, differentiating, and sometimes dying cells interact to form the adult and its various tissues and organ systems. The various cell types generated can be characterized by the genes that they express, the shapes they assume, the behaviors that they display, and how they interact with neighboring and distant cells (1).  Based on first principles, one could imagine (at least) two general mechanisms that could lead to differences in gene expression between cells. The first would be that different cells contain different genes; the other is that, while all cells contain all genes, which genes are expressed in a particular cell varies, regulated by molecular processes that determine when, where, and to what levels particular genes are expressed (2).  It turns out that there are examples of both processes among the animals, although the latter is much more common.

The process of discarding genomic DNA in somatic cells is known as chromatin diminution. During the development of the soma, but not the germ line, regions of the genome are lost; in the germ line, for hopefully obvious reasons, the full genome is retained. The end result is that somatic cells contain different subsets of genes and non-coding DNA compared to the full genome. The classic case of chromatin diminution was described in the parasitic nematode of horses, now named Parascaris univalens (originally Ascaris megalocephala), by Theodore Boveri in 1887 (reviewed in Streit and Davis, 2016)[pdf link]. Based on its occurrence in a range of distinct animal lineages, chromatin diminution appears to be an emergent (independently evolved) trait rather than an ancestral trait, that is, one present in the common ancestor of the animals.

As expected for an emergent trait, the particular mechanism of chromatin diminution appears to vary between organisms; the best characterized example occurs in Parascaris. In the somatic cell lineages in which chromatin diminution occurs, double-stranded breaks are made in chromosomal DNA molecules, and telomeric sequences are added to the ends of the resulting DNA fragments (↓).

You may have learned that chromosomes interact with spindle microtubules through a localized region on each chromosome, known as the centromere. Centromeres are identified through their association with proteins that form the kinetochore, a structure that mediates interactions between condensed chromosomes and mitotic (and meiotic) spindle microtubules. While many organisms have a discrete, spot-like (localized) centromere, in many nematodes centromere-binding proteins are found distributed along the length of the chromosomes, a situation known as a holocentric centromere.  At higher resolution, it appears that centromere components are preferentially associated with euchromatic, that is, molecularly accessible, chromosomal regions, which are (typically) the regions where most expressed genes are located.  Centromere components are largely excluded from heterochromatic (condensed and molecularly inaccessible) chromosomal regions. After chromosome fragmentation, those DNA fragments associated with centromere components can interact with the spindle microtubules and are accurately segregated to daughter cells during mitosis, while the primarily heterochromatic fragments (without associated centromeric components) are degraded and lost. In contrast, the integrity of the genome is maintained in those cells that come to form the germ line, the cells that can undergo meiosis to produce gametes.  Looking forward to the reprogramming of somatic cells (the process of producing what are known as induced pluripotent stem cells – iPSCs), one prediction is that it should not be possible to reprogram a somatic cell that has undergone chromatin diminution to form a functional germ line cell – you should be able to explain why, or what would have to be the case for such reprogramming to be successful.

The origins of cellular asymmetries: Clearly, there must be differences between the cells that undergo chromatin diminution and those that do not; at the very least, the nuclease(s) that cut the DNA during chromatin diminution must be active in somatic cells and inactive in germ line cells – or simply absent, because the genes that encode them are not expressed in germ line cells. We can presume that similar cytoplasmic differences play a role in the differential regulation of gene expression in different cell types during the development of organisms in which the genome remains intact in somatic cells. So how might such asymmetries arise?  There are three potential, but certainly not mutually exclusive, mechanisms that can lead to cellular/cytoplasmic asymmetries: they can be inherited, based on pre-existing asymmetries in the parental cell; they can emerge from asymmetries in the signaling environments occupied by the two daughter cells; or they can arise from stochastic fluctuations in gene expression (see Chen et al., 2016; Neumüller and Knoblich, 2009).

One example of how an asymmetry can be established occurs in the free-living nematode Caenorhabditis elegans, where the site of sperm fusion with the egg leads to the recruitment and assembly of proteins around the site of sperm entry, the future posterior side of the embryo.  After the male and female pronuclei fuse, mitosis begins and cytokinesis divides the zygote into two cells; the asymmetry initiated by sperm entry leads to an asymmetric division (↑): the anterior AB blastomere is larger than, and molecularly distinct from, the posterior P1 blastomere.  These differences set off a regulatory cascade, in which the genes expressed at one stage influence those expressed subsequently, and so influence subsequent cell divisions and cell fate decisions.

Other organisms use different mechanisms to generate cellular asymmetries. In organisms with external fertilization, such as the clawed frog Xenopus, development proceeds rapidly once fertilization occurs. The egg is large, since it contains all of the materials necessary for development up until the time that the embryo can feed itself. The early embryo is immotile and vulnerable to predation, so early development in such species tends to be rapid, and based on materials supplied by the mother (leading to maternal effects on subsequent development).  In such cases, the initial asymmetry is built into the organization of the oocyte.

Formed through a mitotic division, the primary oocyte enters meiotic prophase I, during which it undergoes a period of growth. Maternal and paternal chromosomes align (synapsis) and undergo crossing-over (recombination).  The oocyte contains a single centrosome, a cytoplasmic structure that surrounds the centrioles of the oocyte’s inherited mitotic spindle pole. Cytoplasmic components become organized around the pole and then move from the pole toward the cell cortex (↓ image from Gard and Klymkowsky, 1998); this movement defines the “animal-vegetal” axis of the oocyte, which upon fertilization will play a role in generating the head-tail (anterior-posterior) and back-belly (dorsal-ventral) axes of the embryo and adult.

The primary oocyte remains in prophase I throughout oogenesis. The asymmetry of the oocyte becomes visible through the development of a pigmented animal hemisphere, a largely non-pigmented vegetal hemisphere, and a large (~300 µm diameter), off-center nucleus (known as the germinal vesicle or GV)(3).  Messenger RNA molecules, encoding different polypeptides, are differentially localized to the animal and vegetal regions of the late stage oocyte. The translation of these mRNAs is regulated by factors activated by subsequent developmental events, leading to molecular asymmetries between embryonic cells derived from the animal and vegetal regions of the oocyte.  In preparation for fertilization, the oocyte resumes active meiosis, leading to polar body formation and the secondary oocyte, the egg.  Fertilization occurs within the pigmented animal hemisphere; the site of sperm entry (↓) provides a second driver of asymmetry, in addition to the animal-vegetal axis, albeit through a mechanism distinct from that used in C. elegans (De Domenico et al., 2015).

Asymmetries in oocytes and eggs, and sperm entry points, are not always the primary drivers of subsequent embryonic differentiation.  In the mouse and other placental mammals, including humans, embryonic development occurs within, and is supported by and dependent upon, the mother.  The mouse (mammalian) egg appears grossly symmetric, and sperm entry itself does not appear to impose an asymmetry.  Rather, as the zygote divides, the first cells formed appear similar to one another. As cell divisions continue, however, some cells find themselves on the surface, while others are located within the interior of the forming ball of cells, or morula (↓).

These two cell populations are exposed to different environments, environments that influence patterns of gene expression. The cells on the surface differentiate to form the trophectoderm, which in turn differentiates into extra-embryonic placental tissues, the interface between mother and developing embryo.  The internal cells become the inner cell mass, which differentiates to form the embryo proper, the future mouse (or human). Early on, inner cell mass cells appear similar to one another, but they too come to experience different environments, leading to emerging asymmetries associated with the activation of different signaling systems, the expression of different sets of genes, and differences in behavior – they begin the process of differentiating into distinct cell lineages and types, forming, as embryogenesis continues, different tissues and organs.

The response of a particular cell to a particular environment will depend upon the signaling molecules present (typically expressed by neighboring cells), the signaling molecule receptors expressed by the cell itself, and how the binding of signaling molecules to receptors alters receptor activity or stability. For example, an activated receptor can activate (or inhibit) a transcription factor protein that influences the expression of a subset of genes. These genes may themselves encode regulators of transcription, signals, signal receptors, or modifiers of the localization, stability, activity, or interactions of other molecules. While some effects of signal-receptor interactions can be transient, leading to reversible changes in cell state (and gene expression), during embryonic development activating and responding to a signal generally starts a cascade of effects that leads to irreversible changes and the formation of altered, differentiated states.
A cell’s response to a signal can be variable, influenced by the totality of the signals it receives and by its past history.  For example, a signal could lead to a decrease in the level of a receptor, or an increase in an inhibitory protein, making the cell unresponsive to the signal (a negative feedback effect); it could make the cell more sensitive (a positive feedback effect); or it could change the nature of the response, with different genes regulated as time goes by following the signal.  Such emerging patterns of gene expression, based on signaling inputs, are the primary drivers of embryonic development.
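To make the feedback logic concrete, here is a minimal Python sketch (all parameters invented) of the negative feedback case: a receptor is activated by a constant signal while slowly inducing its own inhibitor, so the response is a transient that adapts, rather than a simple on switch:

```python
# Sketch of adaptation via negative feedback (all parameters invented):
# R is the fraction of active receptor; I is an inhibitor whose synthesis
# R induces and which, in turn, deactivates R.

dt, steps = 0.01, 5000
signal = 1.0
R, I = 0.0, 0.0
peak = 0.0
for _ in range(steps):
    dR = signal * (1.0 - R) - 2.0 * I * R   # activation by signal, shutdown by I
    dI = 0.5 * R - 0.1 * I                  # R induces I; I turns over slowly
    R += dt * dR
    I += dt * dI
    peak = max(peak, R)

# R rises quickly, then adapts downward as the inhibitor accumulates: the
# same constant signal produces different responses at different times.
print(f"peak R = {peak:.2f}, final (adapted) R = {R:.2f}")
```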

footnotes:

  1. Not all genes are differentially expressed, however – some genes, known as housekeeping genes, are expressed in essentially all cells.
  2. Hopefully it is clear what the term “expressed” means – namely, that part of the gene is used to direct the synthesis of RNA through the process of transcription (DNA-dependent RNA polymerization).  Some such RNAs (messenger or mRNAs) are used to direct the synthesis of a polypeptide through the process of translation (RNA-directed amino acid polymerization); others do not encode polypeptides. Such non-coding RNAs (ncRNAs) can play roles in a number of processes, from catalysis to the regulation of transcription, RNA stability, and translation.
  3. Eggs are laid in water and are exposed to the sun; the pigmentation of the animal hemisphere is thought to protect the oocyte/zygote/early embryo’s DNA from photo-damage.

Literature cited

Chen et al. (2016). The ins(ide) and outs(ide) of asymmetric stem cell division. Current Opinion in Cell Biology 43, 1-6.

De Domenico et al. (2015). Molecular asymmetry in the 8-cell stage Xenopus tropicalis embryo described by single blastomere transcript sequencing. Developmental Biology 408, 252-268.

Gard & Klymkowsky. (1998). Intermediate filament organization during oogenesis and early development in the clawed frog, Xenopus laevis. In Intermediate filaments (ed. H. Herrmann & J. R. Harris), pp. 35-69. New York: Plenum.

Neumüller & Knoblich (2009). Dividing cellular asymmetry: asymmetric cell division and its implications for stem cells and cancer. Genes & Development 23, 2675-2699.

Streit & Davis. (2016). Chromatin Diminution. In eLS: John Wiley & Sons Ltd, Chichester.