Making sense of noise: introducing students to stochastic processes in order to better understand biological behaviors (and even free will).

Biological systems are characterized by the ubiquitous roles of weak (that is, non-covalent) molecular interactions, small (often very small) numbers of specific molecules per cell, and Brownian motion. These factors combine to produce stochastic behaviors at all levels, from the molecular and cellular to the behavioral. That said, students are rarely introduced to the ubiquitous role of stochastic processes in biological systems, or to how such processes produce unpredictable behaviors. Here I present the case that they need to be, and provide some suggestions as to how the topic might be approached.

Background: Three recent events combined to spur this reflection on stochasticity in biological systems, how it is taught, and why it matters. The first was an article describing an approach to introducing students to homeostatic processes in the context of the bacterial lac operon (Booth et al., 2022), an adaptive gene regulatory system controlled in part by stochastic events. The second was a set of in-class student responses to the question of why interacting molecules "come back apart" (dissociate). The third was the increasing attention paid to what are presented as deterministic genetic factors, as illustrated by a talk by Kathryn Paige Harden, author of "The Genetic Lottery: Why DNA matters for social equality" (Harden, 2021). Previous work has suggested that students, and perhaps some instructors, find the ubiquity, functional roles, and implications of stochastic (that is, inherently unpredictable) processes difficult to recognize and apply. Given their practical and philosophical implications, it seems essential to introduce students to stochasticity early in their educational journey.

Added 7 March 2023: should have cited You & Leu (2020).

What is stochasticity and why is it important for understanding biological systems? Stochasticity results when intrinsically unpredictable events, e.g. molecular collisions, impact the behavior of a system. There are a number of drivers of stochastic behaviors. Perhaps the most obvious, and certainly the most ubiquitous in biological systems, is thermal motion. The many molecules within a solution (or a cell) are in constant motion; they have kinetic energy, the energy of motion (a function of mass and velocity). The exact momentum of each molecule cannot, however, be accurately and completely characterized without perturbing the system (echoes of Heisenberg). Given the impossibility of completely characterizing the system, we are left uncertain as to the state of the system's components – who is bound to whom – going forward.
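
One can make this point concrete for students with a few lines of code. The sketch below is a minimal, hypothetical illustration (not drawn from any of the cited studies): each molecular complex is given a fixed per-time-step probability of being knocked apart, a stand-in for the chance that a sufficiently energetic collision occurs. No individual complex's fate is predictable, yet the population behaves lawfully.

```python
import random

# Toy model: each complex has probability p of dissociating (being hit
# by a sufficiently energetic collision) during each time step.
# 'p' and 'n_complexes' are illustrative values, not measured ones.
p = 0.05
n_complexes = 10_000
intact = n_complexes

for step in range(1, 61):
    # Each intact complex independently "tests its luck" this step.
    intact = sum(1 for _ in range(intact) if random.random() > p)
    if step % 10 == 0:
        print(f"step {step:3d}: {intact:5d} complexes remain intact")

# The population decays predictably (half-life ~ ln(2)/p ~ 14 steps for
# small p), but *which* complexes dissociated, and when, differs each run.
```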

Energy is exchanged between molecules through collisions, and a number of chemical processes are driven by the energy delivered in this way. Think about a typical chemical reaction: in its course, atoms are rearranged – bonds are broken (a process that requires energy) and bonds are formed (a process that releases energy). Many (most) of the chemical reactions that occur in biological systems require catalysts to bring their activation energies into the range available within the cell. [1]
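
The link between collisions, temperature, and reaction rate can be made quantitative with the Boltzmann factor exp(-Ea/RT), the fraction of collisions energetic enough to surmount an activation barrier Ea. A minimal sketch (the barrier heights are illustrative placeholders, chosen only to show why catalysts matter):

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 310.0  # roughly body temperature, K

# Illustrative activation energies (J/mol): an uncatalyzed reaction
# versus the same reaction with its barrier lowered by a catalyst.
for label, Ea in [("uncatalyzed, Ea = 100 kJ/mol", 100e3),
                  ("catalyzed,   Ea =  60 kJ/mol", 60e3)]:
    # Boltzmann factor: fraction of collisions with energy >= Ea
    fraction = math.exp(-Ea / (R * T))
    print(f"{label}: fraction of sufficiently energetic collisions ~ {fraction:.2e}")

# Lowering the barrier by 40 kJ/mol increases the fraction (and hence
# the rate) by a factor of exp(40e3/(R*T)) ~ 5 x 10^6 at 310 K.
```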

What makes the impact of thermal motion even more critical for biological systems is that many (most) regulatory interactions and macromolecular complexes – the molecular machines discussed by Alberts (1998) – are based on relatively weak, non-covalent surface-surface interactions between or within molecules. Such interactions are central to most regulatory processes, from the activation of signaling pathways to the control of gene expression. The specificity and stability of these non-covalent interactions, which include those involved in determining the three-dimensional structure of macromolecules, are directly impacted by thermal motion, and so by temperature – one reason controlling body temperature is important.

So why are these interactions stochastic, and why does it matter? A signature property of a stochastic process is that while it may be predictable when large numbers of atoms, molecules, or interactions are involved, the behaviors of individual atoms, molecules, and interactions are not. A classic example, arising from factors intrinsic to the atom, is the decay of radioactive isotopes. While the half-life of a large enough population of a radioactive isotope is well defined, when any particular atom will decay is, in current theory, unknowable – a concept difficult for students (see Hull and Hopf, 2020). This is the reason we cannot accurately predict whether Schrödinger's cat is alive or dead. The same behavior applies to the binding of a regulatory protein to a specific site on a DNA molecule and its subsequent dissociation: predictable in large populations, not predictable for individual molecules.

The situation is exacerbated by the fact that biological systems are composed of cells, and cells are typically small, and so contain relatively few molecules of each type (Milo and Phillips, 2015). There are typically one or two copies of each gene in a cell, and these may be different from one another (when heterozygous). The expression of any one gene depends upon the binding of specific proteins, transcription factors, that act to activate or repress gene expression. In contrast to a number of other cellular proteins, "as a rule of thumb, the concentrations of such transcription factors are in the nM range, corresponding to only 1-1000 copies per cell in bacteria or 10³-10⁶ in mammalian cells" (Milo and Phillips, 2015). Moreover, while DNA binding proteins bind to specific DNA sequences with high affinity, they also bind to DNA "non-specifically", in a largely sequence-independent manner, with low affinity. Given that there are many more non-specific (non-functional) binding sites in the DNA than functional ones, the effective concentration of a particular transcription factor can be significantly lower than its total cellular concentration would suggest. For example, in the case of the lac repressor of the bacterium Escherichia coli (discussed further below), there are estimated to be ~10 molecules of the tetrameric lac repressor per cell, but "non-specific affinity to the DNA causes >90% of LacI copies to be bound to the DNA at locations that are not the cognate promoter site" (Milo and Phillips, 2015); at most only a few molecules are free in the cytoplasm and available to bind to specific regulatory sites. Such low affinity binding to DNA allows proteins to undergo one-dimensional diffusion, a process that can greatly speed up the time it takes for a DNA binding protein to "find" high affinity binding sites (Stanford et al., 2000; von Hippel and Berg, 1989).

Most transcription factors bind in a functionally significant manner to hundreds to thousands of gene regulatory sites per cell, often with distinct binding affinities. The effective binding affinity can also be influenced by positive and negative interactions with other transcription and accessory factors, chromatin structure, and DNA modifications. Functional complexes can take time to assemble, and once assembled can initiate multiple rounds of polymerase binding and activation, leading to a stochastic phenomenon known as transcriptional bursting. An analogous process occurs with RNA-dependent polypeptide synthesis (translation).
The result, particularly for genes expressed at lower levels, is that stochastic (unpredictable) bursts of transcription/translation can lead to functionally significant changes in protein levels (Raj et al., 2010; Raj and van Oudenaarden, 2008).
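
The logic of transcriptional bursting can be captured with a standard Gillespie (stochastic simulation) algorithm applied to a two-state "telegraph" model of a promoter: the gene flips stochastically between OFF and ON, and mRNAs are made only while it is ON. The rate constants below are invented for illustration; the point is the output – irregular bursts, not a steady drip.

```python
import random

# Two-state ("telegraph") model of a promoter: OFF <-> ON, with mRNA
# synthesized only while ON. All rate constants are illustrative
# (arbitrary time units), chosen so that ON periods are rare and short.
k_on, k_off = 0.05, 0.5   # promoter activation / inactivation rates
k_tx, k_deg = 10.0, 0.1   # transcription rate while ON; mRNA decay rate

t, t_end = 0.0, 500.0
on, mrna = False, 0

while t < t_end:
    # Propensity (probability per unit time) of each possible event.
    rates = {"flip": k_off if on else k_on,
             "make": k_tx if on else 0.0,
             "degrade": k_deg * mrna}
    total = sum(rates.values())
    t += random.expovariate(total)  # exponentially distributed waiting time
    event = random.choices(list(rates), weights=list(rates.values()))[0]
    if event == "flip":
        on = not on
        print(f"t={t:7.2f}  promoter {'ON ' if on else 'OFF'}  mRNA={mrna}")
    elif event == "make":
        mrna += 1
    else:
        mrna -= 1
# Output: irregular, well-separated bursts of mRNA accumulation rather
# than a steady "leak" -- and rerunning gives a different burst pattern.
```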

[Figure adapted from Elowitz et al., 2002.]

There are many examples of stochastic behaviors in biological systems. As originally noted by Novick and Weiner (1957) in their studies of the lac operon, gene expression can occur in an all-or-none manner. This effect was revealed in a particularly compelling manner by Elowitz et al (2002), who used lac operon promoter elements to drive expression of transgenes encoding cyan and yellow fluorescent proteins (on a single plasmid) in E. coli. The observed behaviors were dramatic: genetically identical cells were found to express, stochastically, one, the other, both, or neither transgene. The stochastic expression of genes, and its downstream effects, appears to be the source of much of the variance found among organisms with the same genotype in the same environmental conditions (Honegger and de Bivort, 2018).

Beyond gene expression, the unpredictable effects of stochastic processes can be seen at all levels of biological organization: the biased random walk behaviors that underlie various forms of chemotaxis (e.g. Spudich and Koshland, 1976), the search behaviors of C. elegans (Roberts et al., 2016) and other animals (Smouse et al., 2010), the noisiness in the opening of individual neuronal voltage-gated ion channels (Braun, 2021; Neher and Sakmann, 1976), various processes within the immune system (Hodgkin et al., 2014), and variations in the behavior of individual organisms (e.g. the leafhopper example cited by Honegger and de Bivort, 2018). Stochastic events are involved in a range of "social" processes in bacteria (Bassler and Losick, 2006). Their impact can serve as a form of "bet-hedging", generating phenotypic variation within a population in a homogeneous environment (see Symmons and Raj, 2016). Stochastic events can regulate the efficiency of replication-associated error-prone mutation repair (Uphoff et al., 2016), leading to increased variation in a population, particularly in response to environmental stresses. Stochastic "choices" made by cells can be seen as questions asked of the environment; the system's response provides information that informs subsequent regulatory decisions (see Lyon, 2015) and the selective pressures on individuals in a population (Jablonka and Lamb, 2005). Together, stochastic processes introduce a non-deterministic (i.e. unpredictable) element into higher order behaviors (Murakami et al., 2017; Roberts et al., 2016).
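
The biased random walk underlying bacterial chemotaxis is itself easy to demonstrate: a cell "runs" in a straight line and "tumbles" to a random new direction, tumbling less often when conditions are improving (attractant concentration increasing). Every individual step is random, yet the population drifts up the gradient. A minimal sketch with made-up parameters:

```python
import random, math

def chemotax(steps=2000):
    """One cell doing a run-and-tumble walk in a linear attractant
    gradient; attractant increases with x, so 'improvement' means dx > 0."""
    x = y = 0.0
    angle = random.uniform(0, 2 * math.pi)
    for _ in range(steps):
        dx = math.cos(angle)
        x += dx
        y += math.sin(angle)
        # Tumble (pick a new random direction) with lower probability
        # when the last step moved up-gradient -- the only bias in the model.
        p_tumble = 0.1 if dx > 0 else 0.5
        if random.random() < p_tumble:
            angle = random.uniform(0, 2 * math.pi)
    return x

# Individual trajectories are unpredictable; the average drifts up-gradient.
finals = [chemotax() for _ in range(200)]
print("mean final x:", sum(finals) / len(finals))
print("example individuals:", [round(v) for v in finals[:5]])
```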

Controlling stochasticity: While stochasticity can be useful, it also needs to be controlled. Not surprisingly, then, there are a number of strategies for "noise-suppression", ranging from altering regulatory factor concentrations and forming covalent disulfide bonds between or within polypeptides, to regulating the activity of the repair systems associated with DNA replication, polypeptide folding, and protein assembly (via molecular chaperones and targeted degradation). For example, the identification of "cellular competition" effects has revealed that "eccentric cells" (sometimes, and perhaps unfortunately, referred to as "losers") can be induced to undergo apoptosis (die) or to migrate in response to their "normal" neighbors (Akieda et al., 2019; Di Gregorio et al., 2016; Ellis et al., 2019; Hashimoto and Sasaki, 2020; Lima et al., 2021).

Student understanding of stochastic processes: There is ample evidence that students (and perhaps some instructors as well) are confused by, or uncertain about, the role of thermal motion – that is, the transfer of kinetic energy via collisions – and the resulting stochastic behaviors in biological systems. As an example, Champagne-Queloz et al (2016; 2017) found that few students, even after instruction in molecular biology courses, recognize that collisions with other molecules are responsible for the disassembly of molecular complexes. In fact, many adopt a more "deterministic" model of molecular disassembly after instruction (see panel A of the figure on the next page). In earlier studies, we found evidence for a similar confusion among instructors (panel B of the figure on the next page) (Klymkowsky et al., 2010).

Introducing stochasticity to students: Given that understanding stochastic (random) processes can be difficult for many (e.g. Garvin-Doxas and Klymkowsky, 2008; Taleb, 2005), the question facing course designers and instructors is when and how best to help students develop an appreciation for the ubiquity, specific roles, and implications of stochasticity-dependent processes at all levels in biological systems. I would suggest that introducing students to the dynamics of the non-covalent molecular interactions prevalent in biological systems in the context of stochastic collisions (i.e. kinetic theory), rather than through a ∆G-based approach, may be useful. We can use the probability of garnering the energy needed to disrupt an interaction to present concepts of binding specificity (selectivity) and stability. Developing an understanding of the formation and disassembly of molecular interactions builds on the same logic that Albert Einstein and Ludwig Boltzmann used to demonstrate the existence of atoms and molecules and the reversibility of molecular reactions (Bernstein, 2006). Moreover, as noted by Samoilov et al (2006), "stochastic mechanisms open novel classes of regulatory, signaling, and organizational choices that can serve as efficient and effective biological solutions to problems that are more complex, less robust, or otherwise suboptimal to deal with in the context of purely deterministic systems."

The selectivity (specificity) and stability of molecular interactions can be understood from an energetic perspective – by comparing the enthalpic and entropic differences between bound and unbound states. What is often missing from such discussions – aside from their inherent complexity, particularly in terms of calculating changes in entropy and specifying exactly what is meant by energy (Cooper and Klymkowsky, 2013) – is that many students enter biology classes without a robust understanding of enthalpy, entropy, or free energy (Carson and Watson, 2002). Presenting students with a molecular-collision, kinetic-theory-based mechanism for the dissociation of molecular interactions may help them better understand (and apply) both the dynamics and the specificity of molecular interactions. We can gauge the strength of an interaction (the sum of the forces stabilizing it) by the amount of energy (derived from collisions with other molecules) needed to disrupt it. The implication of student responses to relevant Biology Concepts Instrument (BCI) questions and beSocratic activities (data not shown), as well as a number of studies in chemistry, is that few students consider the kinetic/vibrational energy delivered through collisions with other molecules (a function of temperature) as key to explaining why interactions break (see Carson and Watson, 2002 and references therein). Although that paper is 20 years old, there is little or no evidence that the situation has improved. Moreover, there is evidence that the conventional focus on mathematics-centered free energy calculations, in the absence of conceptual understanding, may serve as an unnecessary barrier to the inclusion of more socioeconomically diverse and under-served populations of students (Ralph et al., 2022; Stowe and Cooper, 2019).
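
One way to make this collision-centered view concrete is to note that the rate of dissociation – and hence the average lifetime of a complex – depends exponentially on the energy needed to disrupt it, so small differences in interaction energy translate into large differences in stability. A hedged back-of-the-envelope sketch (the attempt frequency and energies below are illustrative placeholders, not measured values):

```python
import math

kB_T = 2.6          # thermal energy scale at ~310 K, in kJ/mol (approx. RT)
attempt_rate = 1e9  # per second; illustrative "collision/attempt" frequency

# Compare a non-specific (weaker) and a specific (stronger) interaction.
for label, e_disrupt in [("non-specific site, 15 kJ/mol", 15.0),
                         ("specific site,     40 kJ/mol", 40.0)]:
    # Dissociation rate: attempts per second times the Boltzmann
    # probability that an attempt carries enough energy.
    k_off = attempt_rate * math.exp(-e_disrupt / kB_T)
    print(f"{label}: mean bound lifetime ~ {1 / k_off:.2e} s")

# A ~25 kJ/mol difference (a few hydrogen bonds' worth) changes the
# average lifetime by a factor of ~exp(25/2.6) ~ 1.5 x 10^4.
```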

The lac operon as a context for introducing stochasticity: Studies of the E. coli lac operon hold an iconic place in the history of molecular biology and are often found in introductory courses, although typically presented in a deterministic context. The mutational analysis of the lac operon helped define key elements involved in gene regulation (Jacob and Monod, 1961; Monod et al., 1963). Booth et al (2022) used the lac operon as the context for their "modeling and simulation lesson", Advanced Concepts in Regulation of the Lac Operon. Given its inherently stochastic regulation (Choi et al., 2008; Elowitz et al., 2002; Novick and Weiner, 1957; Vilar et al., 2003), the lac operon is a good place to start introducing students to stochastic processes. In this light, it is worth noting that Booth et al describe the behavior of the lac operon as "leaky", which would seem to imply a low but continuous level of expression, much as a leaky faucet continues to drip. As this is a peer-reviewed lesson, it seems likely that this description reflects widely held misunderstandings of how stochastic processes are introduced to, and understood by, students and instructors.

E. coli cells respond to the presence of lactose in growth media in a biphasic manner, termed diauxie, due to "the inhibitory action of certain sugars, such as glucose, on adaptive enzymes (meaning an enzyme that appears only in the presence of its substrate)" (Blaiseau and Holmes, 2021). When these (preferred) sugars are depleted from the media, growth slows. If lactose is present, however, growth will resume following a delay associated with the expression of the proteins, encoded by the operon, that enable the cell to import and metabolize lactose. Although the term homeostatic is used repeatedly by Booth et al, the lac operon is part of an adaptive, rather than a homeostatic, system. In the absence of glucose, cyclic AMP (cAMP) levels in the cell rise. cAMP binds to and activates the catabolite activator protein (CAP), encoded by the crp gene. Activation of CAP leads to the altered expression of a number of target genes, whose products are involved in adaptation to the stress associated with the absence of common and preferred metabolites. cAMP-activated CAP acts as both a transcriptional repressor and activator, "and has been shown to regulate hundreds of genes in the E. coli genome, earning it the status of "global" or "master" regulator" (Frendorf et al., 2019). It is involved in adaptation to environmental factors, rather than in maintaining the cell in a particular state (homeostasis).

The lac operon is a classic polycistronic bacterial gene, encoding three distinct polypeptides: lacZ (β-galactosidase), lacY (β-galactoside permease), and lacA (galactoside acetyltransferase). When glucose or other preferred energy sources are present, expression of the lac operon is blocked by the inactivity of CAP. The CAP protein is a homodimer, and its binding to DNA is regulated by the binding of the allosteric effector cAMP. cAMP is generated from ATP by the enzyme adenylate cyclase, encoded by the cya gene. In the absence of glucose, the enzyme encoded by the crr gene is phosphorylated and acts to activate adenylate cyclase (Krin et al., 2002). As cAMP levels increase, cAMP binds to the CAP protein, leading to a dramatic change in its structure, such that the protein's DNA binding domain becomes available to interact with promoter sequences (figure from Sharma et al., 2009).

Binding of activated (cAMP-bound) CAP is not, by itself, sufficient to activate expression of the lac operon, because of the presence of the constitutively expressed lac repressor protein, encoded by the lacI gene. The active repressor is a tetramer, present at very low levels (~10 molecules per cell). The lac operon contains three repressor ("operator") binding sites; the tetrameric repressor can bind two operator sites simultaneously (figure from Palanthandalam-Madapusi and Goyal, 2011). In the absence of lactose, but in the presence of cAMP-activated CAP, the operon is expressed in discrete "bursts" (Novick and Weiner, 1957; Vilar et al., 2003). Choi et al (2008) found that these bursts come in two types, short and long, with the size of the burst referring to the number of mRNA molecules synthesized (figure adapted from Choi et al., 2008). The difference between burst sizes arises from the length of time that the operon's repressor binding sites are unoccupied by repressor. As noted above, the tetrameric repressor protein can bind to two operator sites at the same time. When the repressor releases from one site (while remaining bound to the other), RNA polymerase binding and initiation produce a small number of mRNA molecules. Persistent binding to the second site means that the repressor concentration remains locally high, favoring rapid rebinding to the operator and the cessation of transcription (RNA synthesis). When the repressor releases from both operator sites, a rarer event, it is free to diffuse away and interact (non-specifically, i.e. with low affinity) with other DNA sites in the cell, leaving the lac operator sites unoccupied for a longer period of time; the number of such non-specific binding sites greatly exceeds the number (three) of specific binding sites in the operon. The result is the synthesis of a larger "burst" (number) of mRNA molecules. The average length of time that the operator sites remain unoccupied is a function of the small number of repressor molecules present and the repressor's low, but measurable, non-sequence-specific binding to DNA.
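
The two burst classes reported by Choi et al can be caricatured in a few lines: on most repressor-release events only one operator lets go (fast rebinding, short repressor-free window, small burst), while rarely the repressor dissociates completely (long window, large burst). The probabilities and rates below are invented solely to show how a bimodal burst-size distribution arises; this is a toy model, not the published one.

```python
import random

def burst_size():
    """mRNAs made during one repressor-free window (toy model, invented rates)."""
    if random.random() < 0.9:               # common: one operator released,
        window = random.expovariate(5.0)    # so the repressor rebinds quickly
    else:                                   # rare: complete dissociation,
        window = random.expovariate(0.2)    # so the operator stays free longer
    tx_rate = 20.0  # mRNAs made per unit time while the operator is free
    return round(tx_rate * window)

sizes = [burst_size() for _ in range(10_000)]
small = sum(1 for s in sizes if s <= 10)
print(f"bursts of <=10 mRNAs: {small}, larger bursts: {len(sizes) - small}")
print("largest burst observed:", max(sizes))
```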

The expression of the lac operon leads to the appearance of β-galactosidase and β-galactoside permease. An integral membrane protein, β-galactoside permease enables extracellular lactose to enter the cell, while cytoplasmic β-galactosidase catalyzes its breakdown as well as the generation of allolactose, which binds to the lac repressor protein, inhibiting its binding to operator sites and so relieving repression of transcription. In the absence of lactose, there are few if any molecules of the proteins (β-galactosidase and β-galactoside permease) needed to import lactose and generate the operon's inducer, so the obvious question is: when lactose does appear in the extracellular media, how does the lac operon turn on? Booth et al and the Wikipedia entry on the lac operon (accessed 29 June 2022) describe the turn-on of the lac operon as "leaky" (see above). The molecular modeling studies of Vilar et al and Choi et al (which, together with Novick and Weiner, are not cited by Booth et al) indicate that the system displays distinct threshold and maintenance concentrations of lactose needed for stable lac gene expression. The term "threshold" does not occur in the Booth et al article. More importantly, when cultures are examined at the single cell level, what is observed is not a uniform increase in lac expression in all cells, as might be expected in the context of leaky expression, but more sporadic (noisy) behaviors: when cultured in lactose concentrations above the operon's activation threshold, increasing numbers of cells turn "full on" for lac operon expression over time. This illustrates the distinctly different implications of a leaky versus a stochastic process for gene expression. While a leak is a macroscopic metaphor that produces a continuous, dependable, regular flow (drips), the occurrence of "bursts" of gene expression implies a stochastic (unpredictable) process (figure from Vilar et al.).
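
The threshold behavior itself can be shown directly: because permease and β-galactosidase expression increases the cell's ability to import lactose and inactivate repressor, the system contains a positive feedback loop, and positive feedback plus cooperativity yields a threshold (bistability). The sketch below is a generic toy model of such feedback – not the Vilar et al or Choi et al model – with invented parameters:

```python
# Toy positive-feedback model of lac induction: x is the level of lac
# gene products; production rises steeply (cooperatively) with x because
# permease boosts lactose import, which inactivates repressor.
# All parameters are invented for illustration.
def simulate(x0, basal, steps=5000, dt=0.01):
    x = x0
    for _ in range(steps):
        production = basal + 4.0 * x**2 / (1.0 + x**2)  # saturating feedback
        x += dt * (production - 1.0 * x)                # minus degradation
    return x

for basal in (0.05, 0.30):
    lo = simulate(0.0, basal)   # starting from the uninduced state
    hi = simulate(5.0, basal)   # starting from the induced state
    print(f"basal={basal:.2f}: uninduced -> {lo:.2f}, induced -> {hi:.2f}")

# Below threshold (basal=0.05) the two starting states settle at different
# levels (bistability); above it (basal=0.30) both converge "full on".
```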

As the ubiquity and functionally significant roles of stochastic processes in biological systems become increasingly apparent, e.g. in the prediction of phenotypes from genotypes (Karavani et al., 2019; Mostafavi et al., 2020), helping students appreciate and understand the unpredictable, that is stochastic, aspects of biological systems becomes increasingly important. As an example, revealed dramatically through the application of single cell RNA sequencing studies, variations in gene expression between cells of the same "type" impact organismic development and a range of behaviors. It is now apparent that in many diploid eukaryotic cells, and for many genes, only one of the two alleles present is expressed; such "monoallelic" expression can impact a range of processes (Gendrel et al., 2014). Given that stochastic processes are often not well conveyed through conventional chemistry courses (Williams et al., 2015), or effectively integrated into, and built upon in, molecular (and other) biology curricula, presenting them explicitly in introductory biology courses seems necessary and appropriate.

It may also help make sense of discussions of whether humans (and other organisms) have "free will". Clearly the situation is complex. From a scientific perspective, we analyze systems without recourse to non-natural processes. At the same time, "Humans typically experience freely selecting between alternative courses of action" (Maoz et al., 2019a; see also Maoz et al., 2019b). It seems possible that recognizing the intrinsically unpredictable nature of many biological processes (including those of the central nervous system) may lead us to conclude that whether or not free will exists is in fact a non-scientific, unanswerable (and perhaps largely meaningless) question.

footnotes:

[1] For this discussion I will ignore entropy, a factor that figures in whether a particular reaction is favorable or unfavorable, that is, whether, and the extent to which, it occurs.

Acknowledgements: Thanks to Melanie Cooper and Nick Galati for taking a look and Chhavinder Singh for getting it started. Updated 6 January 2023.

literature cited:

Akieda, Y., Ogamino, S., Furuie, H., Ishitani, S., Akiyoshi, R., Nogami, J., Masuda, T., Shimizu, N., Ohkawa, Y. and Ishitani, T. (2019). Cell competition corrects noisy Wnt morphogen gradients to achieve robust patterning in the zebrafish embryo. Nature communications 10, 1-17.

Alberts, B. (1998). The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell 92, 291-294.

Bassler, B. L. and Losick, R. (2006). Bacterially speaking. Cell 125, 237-246.

Bernstein, J. (2006). Einstein and the existence of atoms. American journal of physics 74, 863-872.

Blaiseau, P. L. and Holmes, A. M. (2021). Diauxic inhibition: Jacques Monod’s Ignored Work. Journal of the History of Biology 54, 175-196.

Booth, C. S., Crowther, A., Helikar, R., Luong, T., Howell, M. E., Couch, B. A., Roston, R. L., van Dijk, K. and Helikar, T. (2022). Teaching Advanced Concepts in Regulation of the Lac Operon With Modeling and Simulation. CourseSource.

Braun, H. A. (2021). Stochasticity Versus Determinacy in Neurobiology: From Ion Channels to the Question of the “Free Will”. Frontiers in Systems Neuroscience 15, 39.

Carson, E. M. and Watson, J. R. (2002). Undergraduate students’ understandings of entropy and Gibbs free energy. University Chemistry Education 6, 4-12.

Champagne-Queloz, A. (2016). Biological thinking: insights into the misconceptions in biology maintained by Gymnasium students and undergraduates. In Institute of Molecular Systems Biology. Zurich, Switzerland: ETH Zürich.

Champagne-Queloz, A., Klymkowsky, M. W., Stern, E., Hafen, E. and Köhler, K. (2017). Diagnostic of students’ misconceptions using the Biological Concepts Instrument (BCI): A method for conducting an educational needs assessment. PloS one 12, e0176906.

Choi, P. J., Cai, L., Frieda, K. and Xie, X. S. (2008). A stochastic single-molecule event triggers phenotype switching of a bacterial cell. Science 322, 442-446.

Coop, G. and Przeworski, M. (2022). Lottery, luck, or legacy. A review of “The Genetic Lottery: Why DNA matters for social equality”. Evolution 76, 846-853.

Cooper, M. M. and Klymkowsky, M. W. (2013). The trouble with chemical energy: why understanding bond energies requires an interdisciplinary systems approach. CBE Life Sci Educ 12, 306-312.

Di Gregorio, A., Bowling, S. and Rodriguez, T. A. (2016). Cell competition and its role in the regulation of cell fitness from development to cancer. Developmental cell 38, 621-634.

Ellis, S. J., Gomez, N. C., Levorse, J., Mertz, A. F., Ge, Y. and Fuchs, E. (2019). Distinct modes of cell competition shape mammalian tissue morphogenesis. Nature 569, 497.

Elowitz, M. B., Levine, A. J., Siggia, E. D. and Swain, P. S. (2002). Stochastic gene expression in a single cell. Science 297, 1183-1186.

Feldman, M. W. and Riskin, J. (2022). Why Biology is not Destiny. In New York Review of Books. NY.

Frendorf, P. O., Lauritsen, I., Sekowska, A., Danchin, A. and Nørholm, M. H. (2019). Mutations in the global transcription factor CRP/CAP: insights from experimental evolution and deep sequencing. Computational and structural biotechnology journal 17, 730-736.

Garvin-Doxas, K. and Klymkowsky, M. W. (2008). Understanding Randomness and its impact on Student Learning: Lessons from the Biology Concept Inventory (BCI). Life Science Education 7, 227-233.

Gendrel, A.-V., Attia, M., Chen, C.-J., Diabangouaya, P., Servant, N., Barillot, E. and Heard, E. (2014). Developmental dynamics and disease potential of random monoallelic gene expression. Developmental cell 28, 366-380.

Harden, K. P. (2021). The genetic lottery: why DNA matters for social equality: Princeton University Press.

Hashimoto, M. and Sasaki, H. (2020). Cell competition controls differentiation in mouse embryos and stem cells. Current Opinion in Cell Biology 67, 1-8.

Hodgkin, P. D., Dowling, M. R. and Duffy, K. R. (2014). Why the immune system takes its chances with randomness. Nature Reviews Immunology 14, 711-711.

Honegger, K. and de Bivort, B. (2018). Stochasticity, individuality and behavior. Current Biology 28, R8-R12.

Hull, M. M. and Hopf, M. (2020). Student understanding of emergent aspects of radioactivity. International Journal of Physics & Chemistry Education 12, 19-33.

Jablonka, E. and Lamb, M. J. (2005). Evolution in four dimensions: genetic, epigenetic, behavioral, and symbolic variation in the history of life. Cambridge: MIT press.

Jacob, F. and Monod, J. (1961). Genetic regulatory mechanisms in the synthesis of proteins. Journal of molecular biology 3, 318-356.

Karavani, E., Zuk, O., Zeevi, D., Barzilai, N., Stefanis, N. C., Hatzimanolis, A., Smyrnis, N., Avramopoulos, D., Kruglyak, L. and Atzmon, G. (2019). Screening human embryos for polygenic traits has limited utility. Cell 179, 1424-1435. e1428.

Klymkowsky, M. W., Kohler, K. and Cooper, M. M. (2016). Diagnostic assessments of student thinking about stochastic processes. In bioRxiv: http://biorxiv.org/content/early/2016/05/20/053991.

Klymkowsky, M. W., Underwood, S. M. and Garvin-Doxas, K. (2010). Biological Concepts Instrument (BCI): A diagnostic tool for revealing student thinking. In arXiv: Cornell University Library.

Krin, E., Sismeiro, O., Danchin, A. and Bertin, P. N. (2002). The regulation of Enzyme IIAGlc expression controls adenylate cyclase activity in Escherichia coli. Microbiology 148, 1553-1559.

Lima, A., Lubatti, G., Burgstaller, J., Hu, D., Green, A., Di Gregorio, A., Zawadzki, T., Pernaute, B., Mahammadov, E. and Montero, S. P. (2021). Cell competition acts as a purifying selection to eliminate cells with mitochondrial defects during early mouse development. bioRxiv, 2020.2001. 2015.900613.

Lyon, P. (2015). The cognitive cell: bacterial behavior reconsidered. Frontiers in microbiology 6, 264.

Maoz, U., Sita, K. R., Van Boxtel, J. J. and Mudrik, L. (2019a). Does it matter whether you or your brain did it? An empirical investigation of the influence of the double subject fallacy on moral responsibility judgments. Frontiers in Psychology 10, 950.

Maoz, U., Yaffe, G., Koch, C. and Mudrik, L. (2019b). Neural precursors of decisions that matter—an ERP study of deliberate and arbitrary choice. Elife 8, e39787.

Milo, R. and Phillips, R. (2015). Cell biology by the numbers: Garland Science.

Monod, J., Changeux, J.-P. and Jacob, F. (1963). Allosteric proteins and cellular control systems. Journal of molecular biology 6, 306-329.

Mostafavi, H., Harpak, A., Agarwal, I., Conley, D., Pritchard, J. K. and Przeworski, M. (2020). Variable prediction accuracy of polygenic scores within an ancestry group. Elife 9, e48376.

Murakami, M., Shteingart, H., Loewenstein, Y. and Mainen, Z. F. (2017). Distinct sources of deterministic and stochastic components of action timing decisions in rodent frontal cortex. Neuron 94, 908-919. e907.

Neher, E. and Sakmann, B. (1976). Single-channel currents recorded from membrane of denervated frog muscle fibres. Nature 260, 799-802.

Novick, A. and Weiner, M. (1957). Enzyme induction as an all-or-none phenomenon. Proceedings of the National Academy of Sciences 43, 553-566.

Palanthandalam-Madapusi, H. J. and Goyal, S. (2011). Robust estimation of nonlinear constitutive law from static equilibrium data for modeling the mechanics of DNA. Automatica 47, 1175-1182.

Raj, A., Rifkin, S. A., Andersen, E. and van Oudenaarden, A. (2010). Variability in gene expression underlies incomplete penetrance. Nature 463, 913-918.

Raj, A. and van Oudenaarden, A. (2008). Nature, nurture, or chance: stochastic gene expression and its consequences. Cell 135, 216-226.

Ralph, V., Scharlott, L. J., Schafer, A., Deshaye, M. Y., Becker, N. M. and Stowe, R. L. (2022). Advancing Equity in STEM: The Impact Assessment Design Has on Who Succeeds in Undergraduate Introductory Chemistry. JACS Au.

Roberts, W. M., Augustine, S. B., Lawton, K. J., Lindsay, T. H., Thiele, T. R., Izquierdo, E. J., Faumont, S., Lindsay, R. A., Britton, M. C. and Pokala, N. (2016). A stochastic neuronal model predicts random search behaviors at multiple spatial scales in C. elegans. Elife 5, e12572.

Samoilov, M. S., Price, G. and Arkin, A. P. (2006). From fluctuations to phenotypes: the physiology of noise. Science's STKE 2006, re17.

Sharma, H., Yu, S., Kong, J., Wang, J. and Steitz, T. A. (2009). Structure of apo-CAP reveals that large conformational changes are necessary for DNA binding. Proceedings of the National Academy of Sciences 106, 16604-16609.

Smouse, P. E., Focardi, S., Moorcroft, P. R., Kie, J. G., Forester, J. D. and Morales, J. M. (2010). Stochastic modelling of animal movement. Philosophical Transactions of the Royal Society B: Biological Sciences 365, 2201-2211.

Spudich, J. L. and Koshland, D. E., Jr. (1976). Non-genetic individuality: chance in the single cell. Nature 262, 467-471.

Stanford, N. P., Szczelkun, M. D., Marko, J. F. and Halford, S. E. (2000). One-and three-dimensional pathways for proteins to reach specific DNA sites. The EMBO Journal 19, 6546-6557.

Stowe, R. L. and Cooper, M. M. (2019). Assessment in Chemistry Education. Israel Journal of Chemistry.

Symmons, O. and Raj, A. (2016). What’s Luck Got to Do with It: Single Cells, Multiple Fates, and Biological Nondeterminism. Molecular cell 62, 788-802.

Taleb, N. N. (2005). Fooled by Randomness: The hidden role of chance in life and in the markets. (2nd edn). New York: Random House.

Uphoff, S., Lord, N. D., Okumus, B., Potvin-Trottier, L., Sherratt, D. J. and Paulsson, J. (2016). Stochastic activation of a DNA damage response causes cell-to-cell mutation rate variation. Science 351, 1094-1097.

You, S.-T. and Leu, J.-Y. (2020). Making sense of noise. In Evolutionary Biology—A Transdisciplinary Approach, pp. 379-391.

Vilar, J. M., Guet, C. C. and Leibler, S. (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.

von Hippel, P. H. and Berg, O. G. (1989). Facilitated target location in biological systems. Journal of Biological Chemistry 264, 675-678.

Williams, L. C., Underwood, S. M., Klymkowsky, M. W. and Cooper, M. M. (2015). Are Noncovalent Interactions an Achilles Heel in Chemistry Education? A Comparison of Instructional Approaches. Journal of Chemical Education 92, 1979–1987.

 

Sounds like science, but it ain’t …

We are increasingly assailed with science-related "news" – stories that too often involve hype and attempts to garner attention (and no, half-baked ideas are not theories; they are often non-scientific speculation or unconstrained fantasies).

The other day, as is my addiction, I turned to the "Real Clear Science" website to look for novel science-based stories (distractions from the more horrifying news of the day). I discovered two links that seduced me into clicking: "Atheism is not as rare or as rational as you think" by Will Gervais and Peter Sjöstedt-H's "Consciousness and higher spatial dimensions". A few days later I encountered "Consciousness Is the Collapse of the Wave Function" by Stuart Hameroff. On reading them (more below), I faced the realization that science itself, and its distorted popularization by institutional PR departments and, increasingly, by scientists and science writers, may be partially responsible for the absurdification of public discourse on scientific topics [1]. In part the problem arises from the assumption that science is capable of "explaining" much more than is actually the case. This insight is neither new nor novel. Timothy Caulfield's essay "Pseudoscience and COVID-19 — we've had enough already" focuses on the fact that various, presumably objective and data-based, medical institutions have encouraged the public's thirst for easy cures for serious, and often incurable, diseases. As an example, "If a respected institution, such as the Cleveland Clinic in Ohio, offers reiki — a science-free practice that involves using your hands, without even touching the patient, to balance the "vital life force energy that flows through all living things" — is it any surprise that some people will think that the technique could boost their immune systems and make them less susceptible to the virus?" That public figures and trusted institutions provide platforms for such silliness [see Did Columbia University cut ties with Dr. Oz?] means that there is little to distinguish data-based treatments from faith- and magical-thinking-based placebos. The ideal of disinterested science, already tempered by common human frailties, is further eroded by the lure of profit and/or the hope of enhanced public/professional status and notoriety. As noted by Pennock, "Science never guarantees absolute truth, but it aims to seek better ways to assess empirical claims and to attain higher degrees of certainty and trust in scientific conclusions". Most importantly, "Science is a set of rules that keep the scientists from lying to each other." [2]

It should surprise no one that the failure to explicitly recognize the limits, and evolving nature, of scientific knowledge opens the door to self-interested hucksterism at both individual and institutional levels. Just consider the number of complementary/alternative, non-scientific "medical" programs run by prestigious institutions. The proliferation of pundits speaking outside of their areas of established expertise, and often beyond what is scientifically knowable (e.g. historical events such as the origin of life, or the challenges of living in the multiverse, which are by their very nature unobservable), speaks to the increasingly unconstrained growth of pathological, bogus, and corrupted science which, while certainly not new [3], has been facilitated by the proliferation of public, no-barrier, no-critical-feedback platforms [1,4]. Ignoring the real limits of scientific knowledge, and rejecting or ignoring the expertise of established authorities, abandons the ideals that have led to science that "works".

Of course, we cannot blame the distortion of science for every wacky idea; crazy, conspiratorial, and magical thinking may well be linked to the cognitive "features" (or are they bugs?) of the human brain. Norman Cohn describes the depressing, and repeated, pattern behind the construction of dehumanizing libels used to justify murderous behaviors towards certain groups [5]. Recent studies indicate that brains, whether complex or simple neural networks, appear to construct emergent models of the world, models they use to coordinate internal perceptions with external realities [6]. My own (out of my area of expertise) guess is that the complexity of the human brain is associated with, and leads to, the emergence of internal "working models" that attempt to make sense of what is happening to us, in part to answer questions such as why the good die young and the wicked go unpunished. It seems likely that our social nature (and our increasing social isolation) influences these models, models that are "checked" or "validated" against our experiences.

It was in this context that Gervais's essay on atheism caught my attention. He approaches two questions: "how Homo sapiens — and Homo sapiens alone — came to be a religious species" and "how disbelief in gods can exist within an otherwise religious species?" But is Homo sapiens really a religious species, and what exactly is a religion? Is it a tool that binds social groups of organisms together, a way of coping with, and giving meaning to, the (apparent) capriciousness of existence and experience, both, or something else again? And how are we to know what is going on inside other brains, including the brains of chimps, whales, or cephalopods? In this light I was struck by an essay by Sofia Deleniv, "The 'me' illusion: How your brain conjures up your sense of self", that considers the number of species that appear to be able to recognize themselves in a mirror. It turns out this list is not nearly as short as was previously thought, and it seems likely that self-consciousness, the ability to recognize yourself as you, may be a feature of many such systems. Do other organisms possess emergent "belief systems" that help process incoming and internal signals, including their own neural noise? When the author says, "We then subtly gauge participants' intuitions" by using "a clever experiment to see how people mentally represent atheists", one is left to wonder whether there are direct and objective measures of "intuitions" or "mental representations". Then the shocker: after publishing a paper claiming that "Analytic Thinking Promotes Religious Disbelief", the authors state that "the experiments in our initial Science paper were fatally flawed, the results no more than false positives." One is left to wonder whether the questions asked made sense in the first place. While the work initially seemed scientific (after all, it was accepted and published in a premier scientific journal), was it ever really science?

Both "Consciousness and Higher Spatial Dimensions" and "Consciousness Is the Collapse of the Wave Function" sound very scientific. Some physicists (the most sciencey of scientists, right?) have been speculating, via "string theory" and "multiverses" – a series of unverified (and likely unverifiable) speculations – that the universe we inhabit has many, many more than the three spatial dimensions we experience. But how consciousness, an emergent property of biological (cellular) networks, is related to speculative physics is not clear, no matter what Nobel laureates in physics may say. Should we, the people, take these remarks seriously? After all, these are the same folks who question the reality of time (for no good reason, as far as I can tell, as I watch my new grandchild and myself grow older rather than younger).

Part of the issue involves what has been called "the hard problem of consciousness", but as far as I can tell, consciousness is not a hard problem so much as a process that emerges from systems of neural cells interacting with one another and their environment in complex ways, not unlike the underlying processes of embryonic development, in which a new macroscopic organism composed of thousands to billions of cells emerges from a single cell. And if the brain and body are generating signals (thoughts), then it makes sense that these in turn feed back into the system; as consciousness becomes increasingly complex, these thoughts need to be "understood" by the system that produced them. The system may be forced to make sense of itself (perhaps that is how religions and other explanatory beliefs come into being, settling the brain so that it can cope with the material world), whether in a nematode worm, an internet pundit, a QAnon wacko, a religious fanatic, or a simple citizen trying to make sense of things.

Thanks to Melanie Cooper for editorial advice and Steve Pollock for checking my understanding of physics; all remaining errors are mine alone!

  1. Scheufele, D. A. and Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences 116, 7662-7669
  2. Kenneth S. Norris, cited in False Prophet by Alexander Kohn (and cited by John Grant in Corrupted Science).
  3.  See Langmuir, I. (1953, recovered and published in 1989). “Pathological science.” Research-Technology Management 32: 11-17; “Corrupted Science: Fraud, Ideology, and Politics in Science” and “Bogus Science: or, Some people really believe these things” by John Grant (2007 and 2009)
  4.  And while I personally think Sabine Hossenfelder makes great explanatory videos, even she is occasionally tempted to go beyond the scientifically demonstrable: e.g. You don’t have free will, but don’t worry and An update on the status of superdeterminism with some personal notes  
  5.  See Norman Cohn's (1975) "Europe's Inner Demons".
  6. Kaplan, H. S. and Zimmer, M. (2020). Brain-wide representations of ongoing behavior: a universal principle? Current opinion in neurobiology 64, 60-69.

Anti-Scientific & anti-vax propaganda (1926 and today)

“Montaigne concludes, like Socrates, that ignorance aware of itself is the only true knowledge” – from “Forbidden Knowledge” by Roger Shattuck

A useful review of the history of the anti-vaccination movement: Poland & Jacobson (2011). The age-old struggle against the antivaccinationists. New England Journal of Medicine.

Science educators, and those who aim to explain the implications of scientific or clinical observations to the public, have their work cut out for them. In large part, this is because helping others, including the diverse population of health care providers and their clients, depends upon more than just critical thinking skills. Equally important is what might be termed "disciplinary literacy": the ability to evaluate whether the methods applied are adequate and appropriate, and so whether a particular observation is relevant to, or able to resolve, a specific question. To illustrate this point, I consider an essay from 1926 by Peter Frandsen and a paper by Ou et al. (2021) on the mechanism of hydroxychloroquine inhibition of SARS-CoV-2 replication in tissue culture cells.

In Frandsen's essay, written well before the proliferation of unfettered web-based social pontification and ideologically-motivated distortions, he notes that "pseudo and unscientific cults are springing up and finding it easy to get a hold on the popular mind," and "are making some headway in establishing themselves on an equally recognized basis with scientific medicine," in part due to their ability to lobby politicians to exempt them from any semblance of "truth in advertising." Of particular resonance were the efforts in Minnesota, California, and Montana to oppose mandatory vaccination for smallpox. Given these successful anti-vax efforts, Frandsen asks, "is it any wonder that smallpox is one thousand times more prevalent in Montana than in Massachusetts in proportion to population?" One cannot help but analogize to today's COVID-19 statistics on the dramatically higher rate of hospitalization among the unvaccinated (e.g. Scobie et al., 2021). The comparison is all the more impactful (and disheartening) given the severity of smallpox as a disease and its elimination, in 1977, together with the near elimination of other dangerous viral human diseases (poliomyelitis and measles), primarily via vaccination efforts (Hopkins, 2013). Equally discouraging is the number of high-profile celebrities (various forms of influencers, in modern parlance), some of whom I for one previously considered admirable figures, who actively promulgate positions that directly contradict objective and reproducible observations and embrace blatantly untenable beliefs (the claimed vaccine-autism link serves as a prime example).

While much is made of the idea that education-based improvements in critical thinking ability can render its practitioners less susceptible to unwarranted conspiracy theories and beliefs (Lantian et al., 2021), the situation becomes more complex when we consider how it is that presumably highly educated practitioners, e.g. medical doctors, can become conspiracists (ignoring for the moment the more banal, and likely universal, reasons associated with greed and the need to draw attention to themselves).  As noted, many is the conspiracist who considers themselves to be a “critical freethinker” (see Lantian et al). The fact that they fail to recognize the flaws in their own thinking leads us to ask, what are they missing?            

A point rarely considered is what we might term "disciplinary literacy." That is, do the members of an audience have the background information necessary to question the foundational presumptions associated with an observation? Here I draw on personal experience. I have an (increasingly historical) interest in the interactions between intermediate filaments and viral infection (Doedens et al., 1994; Murti et al., 1988). In 2020, I found myself involved, quite superficially, with studies by colleagues here at the University of Colorado Boulder; they reproduced the ability of hydroxychloroquine to inhibit coronavirus replication in cultured cells. Nevertheless, and in the face of various distortions, it quickly became apparent that hydroxychloroquine was ineffective for treating SARS-CoV-2 infection in humans. So, what disciplinary facts did one need to understand this apparent contradiction (which appears to have fueled unreasonable advocacy of hydroxychloroquine treatment for COVID)? The paper by Ou et al. (2021) provides a plausible mechanistic explanation. The process of in vitro infection of various cultured cells appears to involve endocytosis, followed by proteolytic events leading to the subsequent movement of viral nucleic acid into the cytoplasm, a prerequisite for viral replication. Hydroxychloroquine treatment acts by blocking the acidification of the endosome, which inhibits the capsid cleavage reaction and the subsequent cytoplasmic transport of the virus's nucleic acid genome (see figure 1, Ou et al. 2021). In contrast, in vivo infection involves a surface protease, rather than endocytosis, and is therefore independent of endosomal acidification. Without a (disciplinary) understanding of the various mechanisms involved in viral entry, and their relevance in various experimental contexts, it remains a mystery why hydroxychloroquine treatment blocks viral replication in one system (in vitro cultured cells) and not another (in vivo).

In the context of science education and how it can be made more effective, it appears central to help students understand underlying cellular processes, experimental details, and their often substantial impact on observed outcomes. This is in contrast to the common focus (in many courses) on the memorization of largely irrelevant details. Understanding how one can be led astray by the differences between experimental systems (and by inadequate sample sizes) is essential. One cannot help but think of how mouse studies of diseases such as sepsis (Kolata, 2013) and Alzheimer's (Reardon, 2018) have been haunted by the assumption that systems that differ in physiologically significant details are good models for human disease and for the development of effective treatments. Helping students understand how we come to evaluate observations, and the molecular and physiological mechanisms involved, should be the primary focus of a modern education in the biological sciences, since it builds the disciplinary literacy needed to distinguish reasoned argument from anti-scientific propaganda.

Acknowledgement: Thanks to Qing Yang for bringing the Ou et al paper to my attention.  

Literature cited:
Shattuck, R. (1996). Forbidden knowledge: from Prometheus to pornography. New York: St. Martin’s Press.

Doedens, J., Maynell, L. A., Klymkowsky, M. W. and Kirkegaard, K. (1994). Secretory pathway function, but not cytoskeletal integrity, is required in poliovirus infection. Arch Virol. suppl. 9, 159-172.

Hopkins, D. R. (2013). Disease eradication. New England Journal of Medicine 368, 54-63.

Kolata, G. (2013). Mice fall short as test subjects for some of humans’ deadly ills. New York Times 11, 467-477.

Lantian, A., Bagneux, V., Delouvée, S. and Gauvrit, N. (2021). Maybe a free thinker but not a critical one: High conspiracy belief is associated with low critical thinking ability. Applied Cognitive Psychology 35, 674-684.

Murti, K. G., Goorha, R. and Klymkowsky, M. W. (1988). A functional role for intermediate filaments in the formation of frog virus 3 assembly sites. Virology 162, 264-269.
 
Ou, T., Mou, H., Zhang, L., Ojha, A., Choe, H. and Farzan, M. (2021). Hydroxychloroquine-mediated inhibition of SARS-CoV-2 entry is attenuated by TMPRSS2. PLoS pathogens 17, e1009212.

Reardon, S. (2018). Frustrated Alzheimer’s researchers seek better lab mice. Nature 563, 611-613.

Scobie, H. M., Johnson, A. G., Suthar, A. B., Severson, R., Alden, N. B., Balter, S., Bertolino, D., Blythe, D., Brady, S. and Cadwell, B. (2021). Monitoring incidence of covid-19 cases, hospitalizations, and deaths, by vaccination status—13 US jurisdictions, April 4–July 17, 2021. Morbidity and Mortality Weekly Report 70, 1284.

Higher Education Malpractice: curving grades

If there is one thing that university faculty and administrators could do today to demonstrate their commitment to inclusion, not to mention to teaching and learning over sorting and status, it would be to ban curve-based, norm-referenced grading. Many obstacles exist to the effective inclusion and success of students from underrepresented (and underserved) groups in science and related programs. Students and faculty often, and often correctly, perceive large introductory classes as "weed out" courses that preferentially impact underrepresented students. In the life sciences, many of these courses are "out-of-major" requirements, in which students find themselves taught with relatively little regard to the course's relevance to bio-medical careers and interests. Often such out-of-major requirements spring not from a thoughtful decision by faculty as to their necessity, but from the fact that they are prerequisites for post-graduation admission to medical or graduate school. "In-major" instructors may not even explicitly incorporate or depend upon the materials taught in these out-of-major courses – rare is the undergraduate molecular biology degree program that actually calls on students to use calculus or a working knowledge of physics, despite the fact that such skills may be relevant in certain biological contexts – see Magnetofiction – A Reader's Guide. At the same time, those teaching "out-of-major" courses may overlook the fact that many (and sometimes most) of their students are non-chemistry, non-physics, and/or non-math majors. The result is that those teaching such classes fail to offer a doorway into the subject matter to any but those already comfortable with it. But reconsidering the design and relevance of these courses is no simple matter. Banning grading on a curve, on the other hand, can be implemented overnight (and by fiat, if necessary).

So why ban grading on a curve? First and foremost, it would put faculty and institutions on record as valuing student learning outcomes (perhaps the best measure of effective teaching) over the sorting of students into easy-to-judge groups. Second, there simply is no pedagogical justification for curved grading, with the possible exception of providing a kludgy fix for poorly designed examinations and courses. There are more than enough opportunities to sort students based on their motivation, talent, ambition, and "grit," and on the opportunities they seek out and successfully embrace (e.g., through volunteerism, internships, and independent study projects).

The negative impact of curving can be seen in a recent paper by Harris et al. ("Reducing achievement gaps in undergraduate general chemistry…"), who report a significant difference in overall student inclusion and subsequent success based on a small grade difference: between a C, which allows a student to proceed with their studies (generally as successfully as those with higher grades), and a C-minus, which requires them to retake the course before proceeding (often driving them out of the major). Because Harris et al. analyzed curved courses, a subset of students cannot escape these effects. And poor grades disproportionately impact underrepresented and underserved groups – they say, explicitly, "you do not belong" rather than "how can I help you learn".

Often naysayers disparage efforts to improve course design as “dumbing down” the course, rather than improving it.  In many ways this is a situation analogous to blaming patients for getting sick or not responding to treatment, rather than conducting an objective analysis of the efficacy of the treatment.  If medical practitioners had maintained this attitude, we would still be bleeding patients and accepting that more than a third are fated to die, rather than seeking effective treatments tailored to patients’ actual diseases – the basis of evidence-based medicine.  We would have failed to develop antibiotics and vaccines – indeed, we would never have sought them out. Curving grades implies that course design and delivery are already optimal, and the fate of students is predetermined because only a percentage can possibly learn the material.  It is, in an important sense, complacent quackery.

Banning grading on a curve, and labelling it for what it is – educational malpractice – would also change the dynamics of the classroom and might even foster an appreciation that a good teacher is one with the highest percentage of successful students, i.e., those who are retained in a degree program and graduate in a timely manner (hopefully within four years). Of course, such an alternative evaluation of teaching would reflect a department’s commitment to construct and deliver the most engaging, relevant, and effective educational program. Institutional resources might even be used to help departments generate more objective, instructor-independent evaluations of learning outcomes, in part to replace the current practice of student opinion surveys, which are often little more than measures of popularity. We might even see a revolution in which departments compete with one another to maximize student inclusion, retention, and outcomes (perhaps even to the extent of applying pressure on the design and delivery of “out of major” required courses offered by other departments).

“All a pipe dream” you might say, but the available data demonstrate that resources spent on rethinking course design, including its engagement and relevance, can have significant effects on grades, retention, time to degree, and graduation rates. At the risk of being labeled self-promoting, I offer the following to illustrate the possibilities: working with Melanie Cooper at Michigan State University, we have built such courses in general and organic chemistry and documented their impact; see Evaluating the extent of a large-scale transformation in gateway science courses.

Perhaps we should be encouraging students to seek out legal representation to hold institutions (and instructors) accountable for detrimental practices, such as grading on a curve. There might even come a time when professors and departments would find it prudent to purchase malpractice insurance if they insist on retaining, and charging students for, ineffective educational strategies (1).

Acknowledgements: Thanks to my daughter Rebecca, who provided edits and legal references, and to Melanie Cooper, who inspired the idea. Educate! image from the Dorian De Long Arts & Music Scholarship site.

(1) One cannot help but wonder if such conduct could ever rise to the level of fraud. See, e.g., Bristol Bay Productions, LLC vs. Lampack, 312 P.3d 1155, 1160 (Colo. 2013) (“We have typically stated that a plaintiff seeking to prevail on a fraud claim must establish five elements: (1) that the defendant made a false representation of a material fact; (2) that the one making the representation knew it was false; (3) that the person to whom the representation was made was ignorant of the falsity; (4) that the representation was made with the intention that it be acted upon; and (5) that the reliance resulted in damage to the plaintiff.”).

Remembering the past and recognizing the limits of science …

A recent article in the Guardian reports on a debate at University College London (1) on whether to rename buildings because the people honored harbored odious ideological and political positions. Similar debates and decisions, in some cases involving unacceptable and abusive behaviors rather than ideological positions, have occurred at a number of institutions (see Calhoun at Yale, Sackler in NYC, James Watson at Cold Spring Harbor, Tim Hunt at the MRC, and sexual predators within the National Academy of Sciences). These debates raise important and sometimes troubling issues.

When a building is named after a scientist, it is generally in order to honor that person’s scientific contributions; the scientist’s ideological opinions are rarely considered explicitly, although they may have influenced the decision at the time. In general, scientific contributions are timeless in that they represent important steps in the evolution of a discipline, often by establishing a key observation, idea, or conceptual framework upon which subsequent progress is based – they are historically important. In this sense, whether a scientific contribution was correct (as we currently understand the natural world) is less critical than what that contribution led to. The contribution marks a milestone or a turning point in a discipline, with the understanding that the efforts of many underlie disciplinary progress and that those contributors made it possible for others to “see further.” (2)

Since science is not about recognizing or establishing a single unchanging capital-T-Truth, but rather about developing an increasingly accurate model for how the world works, it is constantly evolving and open to revision.  Working scientists are not particularly upset when new observations lead to revisions to or the abandonment of ideas or the addition of new terms to equations.(3)

Compare that to the situation in the ideological, political, or religious realms. A new translation or interpretation of a sacred text can provoke schism and remarkably violent responses between respective groups of believers; often, the closer the groups are to one another, the more horrific the violence that emerges. In contrast, over the long term, scientific schools of thought resolve, often merging with one another to form unified disciplines. From my own perspective, and notwithstanding the temptation to generate new sub-disciplines (in part in response to funding factors), all of the life sciences have collapsed into a unified evolutionary/molecular framework. All scientific disciplines tend to become, over time, consistent with, although not necessarily deducible from, one another, particularly when a discipline respects and retains connections to the real (observable) world.(4) How different from the political and ideological.

The historical progression of scientific ideas is dramatically different from that of political, religious, or social mores. No matter what some might claim, the modern quantum mechanical view of the atom bears little meaningful similarity to the ideas of the cohort that included Leucippus and Democritus. There is progress in science. In contrast, various belief systems rarely abandon their basic premises. A politically right- or left-wing ideologue might well find kindred spirits in the ancient world. There were genocidal racists, theists, and nationalists in the past, and there are genocidal racists, theists, and nationalists now. There were (limited) democracies then, as there are (limited) democracies now; monarchical, oligarchical, and dictatorial political systems then and now; theistic religions then and now. Absolutist ideals of innate human rights, then as now, are routinely sacrificed for a range of mostly self-serving or politically expedient reasons. Advocates of rule by the people repeatedly install repressive dictatorships. The authors of the United States Constitution declared the sacredness of human rights and then legitimized slavery. “The Bible … posits universal brotherhood, then tells Israel to kill all the Amorites.” (Phil Christman). The eugenics movement is a good example: for the promise of a genetically perfect future, existing people are treated inhumanely – just another version of apocalyptic (ends-justify-the-means) thinking.

Ignoring the simpler case of not honoring criminals (sexual and otherwise), most calls for removing names from buildings are based on the odious ideological positions espoused by the honored – typically some version of racist, nationalistic, or sexist ideologies.  The complication comes from the fact that people are complex, shaped by the context within which they grow up, their personal histories and the dominant ideological milieu they experienced, as well as their reactions to it.  But these ideological positions are not scientific, although a person’s scientific worldview and their ideological positions may be intertwined. The honoree may claim that science “says” something unambiguous and unarguable, often in an attempt to force others to acquiesce to their perspective.  A modern example would be arguments about whether climate is changing due to anthropogenic factors, a scientific topic, and what to do about it, an economic, political, and perhaps ideological question.(5)

So what to do? To me, the answer seems reasonably obvious – assuming that the person’s contribution was significant enough, we should leave the name in place and use the controversy to consider why they held their objectionable beliefs and, more explicitly, why they were wrong to claim scientific justification for their ideological (racist / nationalist / sexist / socially prejudiced) positions.(6) Consider explicitly why an archeologist (Flinders Petrie), a naturalist (Francis Galton), a statistician (Karl Pearson), and an advocate for women’s reproductive rights (Marie Stopes) might all support the non-scientific ideology of eugenics and forced sterilization. We can use such situations as a framework within which to delineate the boundaries between the scientific and the ideological.

Understanding this distinction is critical, and it is one of the primary justifications for requiring people not necessarily interested in science or science-based careers to take science courses. Yet all too often these courses fail to address the constraints of science, the difference between scientific conclusions and political or ideological opinions, and the implications of scientific models. I would argue that unless students (and citizens) come to understand what constitutes a scientific idea or conclusion, and what is instead a political or ideological position couched in scientific or pseudo-scientific terms, they are not learning what they need to know about science or its place in society. Treating science as a proxy for Truth writ large is deeply misguided. It is much more important to understand how science works than to remember the number of phyla or the names of the amino acids, to be able to calculate the pH of a solution, or to understand processes at the center of a galaxy or the details of a black hole’s behavior. While sometimes harmless, misunderstanding science and how it is used socially can have traumatic consequences, such as drawing harmful conclusions about individuals from statistical generalizations about populations, avoidable deaths from measles, and the forced “eugenic” sterilization of people deemed defective. We should seek out and embrace opportunities to teach about these issues, even if it means naming buildings after imperfect people.

Footnotes:

  1. The location of some of my post-doc work.
  2. In the words of Isaac Newton, “If I have seen further than others, it is by standing upon the shoulders of giants.”
  3.  Unless, of course, the ideas and equations being revised or abandoned are one’s own. 
  4.  Perhaps the most striking exception occurs in physics on the subjects of quantum mechanics and relativity, but as I am not a physicist, I am not sure about that. 
  5.  Perhaps people are “meant” to go extinct. 
  6.  The situation is rather different outside of science, because the reality of progress is more problematic and past battles continue to be refought.  Given the history of Reconstruction and the Confederate “Lost Cause” movement [see PBS’s Reconstruction] following the American Civil War, monuments to defenders of slavery, no matter how admirable they may have been in terms of personal bravery and such, reek of implied violence, subjugation, and repression, particularly when the person honored went on to found an institution dedicated to racial hatred and violent intimidation [link]. There would seem little doubt that a monument in honor of a Nazi needs to be eliminated and replaced by one to their victims or to those who defeated them.

On teaching genetics, social evolution and understanding the origins of racism

Links between genetics and race crop up periodically in the popular press (link; link), but the real, substantive question, and the topic of a number of recent essays (see Saletan. 2018a. Stop Talking About Race and IQ), is whether the idea of “race,” as commonly understood and as used by governments to categorize people (link), makes scientific sense. More to the point, do biology educators have an unmet responsibility to modify and extend their materials and pedagogical approaches to address the non-scientific, often racist, implications of racial characterizations? Such questions are complicated by a second factor, independent of whether the term race has any useful scientific purpose, namely the need to help students understand the biological (evolutionary) origins of racism itself, together with the stressors that lead to its periodic re-emergence as a socio-political factor. In times of social stress, reactions to strangers (others) identified by variations in skin color or by overt religious or cultural signs (dress) can provoke hostility against those perceived to be members of a different social group. As far as I can tell, few in the biology education community (which includes those who generate textbooks, organize courses and curricula, and design, deliver, and fund public science programs, including PBS’s NOVA, the science education efforts of HHMI and other private foundations, and programs such as Science Friday on public radio) directly address the roots of racism. These roots are associated with biological processes, such as the origins and maintenance of multicellularity and other forms of social organization among organisms, processes involved in coordinating organismal activities and establishing defenses against social cheaters (consider cancer in an organismic context) (1). If not recognized and understood, these established defense mechanisms can morph into reflexive and unjustified intolerance toward, hostility toward, and persecution of various “distinguishable others.” I will consider both questions, albeit briefly, here.


Two factors have influenced my thinking about these questions. The first involves the design of the biofundamentals text/course and its extension to include topics in genetics (2). This involved thinking about what is commonly taught in genetics, what is critical for students to know going forward (and, by implication, what is not), and where materials on genetic processes best fit into a molecular biology curriculum (3). While engaged in such navel gazing, there came an email from Malcolm Campbell describing student responses to the introduction of a chapter section on race and racism in his textbook Integrating Concepts in Biology. The various ideas of race, the origins of racism, and the periodic appearance of anti-immigrant, anti-religious, and racist groups raise an important question: how best to distinguish what is an undeniable observation, that different, isolated sub-populations of a species can be distinguished from one another (see the quote from Ernst Mayr’s 1994 “Typological versus Population Thinking”), from the deeper biological reality, that at the level of the individual these differences are meaningless. In what I think is an interesting way, the idea that people can be meaningfully categorized as instances of various platonic ideals (for example, as members of one race or another) based on anatomical or linguistic differences between once-distinct sub-populations of humans is similar to the dichotomy between common wisdom (which has shaped, for example, people’s working understanding of the motion of objects) and the counter-intuitive nature of empirically established scientific ideas (e.g., Newton’s laws and the implications of Einstein’s theory of general relativity). What appears on the surface to be true is in fact not. In this specific case, there is a pressure toward what Mayr terms “typological” thinking, in which we class people into idealized (platonic) types or races.

As pointed out most dramatically, and repeatedly, by Mayr (1985; 1994; 2000), and as supported by the underlying commonality of molecular biological mechanisms and the continuity of life stretching back to the last universal common ancestor, there are only individuals, members of various populations that have experienced various degrees of separation from one another. In many cases these populations have diverged and, through geographic, behavioral, and structural adaptations driven by natural, social, and sexual selection, together with the effects of various non-adaptive events, such as bottlenecks, founder effects, and genetic drift, may eventually become reproductively isolated from one another, forming new species. An understanding of evolutionary principles and molecular mechanisms transforms biology from a study of non-existent types into a study of populations with common origins, sharing a single root – the last universal common ancestor (LUCA). Over the last ~200,000 years, the movement of humans, first within Africa and then across the planet, has been impressive. These movements have been accompanied by the fragmentation of human populations: Campbell and Tishkoff (2008) identified 13 distinct ancestral African populations, while Busby et al. (2016) recognized 48 sub-Saharan population groups. The fragmentation of the human population is now being reversed (or rather, rendered increasingly less informative) by the effects of migration and extensive intermingling.
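Genetic drift, one of the non-adaptive processes mentioned above, is purely stochastic, and a few lines of code can make the point that isolated populations diverge by chance alone. Here is a minimal Wright-Fisher-style sketch, with illustrative (not empirical) population sizes and time scales:

```python
import random

def drift(freq=0.5, pop_size=100, generations=200, seed=None):
    """Track one allele's frequency under pure random sampling (no selection)."""
    rng = random.Random(seed)
    for _ in range(generations):
        # each of the 2N allele copies in the next generation is drawn at random
        copies = sum(rng.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

# five isolated subpopulations founded from the same ancestral population
print([round(drift(seed=s), 2) for s in range(5)])
# the frequencies scatter, and some alleles may fix (0.0 or 1.0), by chance alone
```

No “types” are involved anywhere in this process: the subpopulations end up distinguishable in allele frequency simply because sampling is random and they are temporarily isolated.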

Ideas such as race (and, in a sense, species) try to make sense of the diversity of the many different types of organisms we observe. They are based on a form of essentialist or typological thinking – thinking that different species and populations are completely different “kinds” of objects, rather than individuals in populations connected historically to all other living things. Race is a more pernicious version of this illusion, a pseudo-scientific, political, and ideological idea that postulates that humans come in distinct, non-overlapping types (quote, again, from Mayr). Such a weird idea underlies the various illogical and often contradictory legal “rules” by which a person’s “race” is determined.

Given the reality of the individual and the unreality of race, racial profiling (see Satel, 2002) can lead to serious medical mistakes, as made clear in the essays by Acquaviva & Mintz (2010) “Are We Teaching Racial Profiling?”, Yudell et al. (2016) “Taking Race out of Human Genetics”, and Donovan (2014) “The impact of the hidden curriculum”.

The idea of race as a type fails to recognize the dynamics of the genome over time. If it were possible (sadly, it is not), a comparative analysis of the genomes of a “living fossil,” such as modern-day coelacanths, and of their ancestors (living more than 80 million years ago) would likely reveal dramatic changes in genomic DNA sequence. In this light, the fact that between 100 and 200 new mutations are introduced into the human genome per generation (see Dolgin 2009 Human mutation rate revealed) seems like a useful number for students, not to mention the general public, to appreciate; a quick arithmetic check appears below. Similarly, the genomic/genetic differences between humans, our primate relatives, and other mammals, and the mechanisms behind them (Levchenko et al., 2017)(blog link), seem worth considering and explicitly incorporating into curricula on genetics and human evolution.
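As a sanity check on that figure (using a rounded genome size; the 100–200 count is the one quoted above), one can ask what per-base mutation rate the quoted counts imply:

```python
genome_bp = 3.2e9                 # approximate human genome size, in base pairs
for n_new in (100, 200):          # the range of new mutations quoted in the text
    rate = n_new / genome_bp
    print(f"{n_new} new mutations -> ~{rate:.1e} per base per generation"
          f" (about one per {genome_bp / n_new / 1e6:.0f} million bases)")
```

This works out to roughly one mutation per 16 to 32 million bases per generation, a small per-site rate that nonetheless guarantees that every genome, in every generation, is new.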

While race may be meaningless, racism is not. How are we to understand racism? Is it some kind of political artifact, or does it arise from biological factors? Here, I believe, we find an important omission in many biology courses, textbooks, and curricula – namely, an introduction to, and meaningful discussion of, social evolutionary mechanisms. Many is the molecular/cell biology curriculum that completely ignores such evolutionary processes. Yet the organisms that are the primary focus of biological research (and that pay for such research, e.g., humans) are social organisms at two levels. In multicellular organisms, somatic cells, which specialize to form muscular, neural, circulatory, and immune systems, bone, and connective tissues, sacrifice their own inter-generational reproductive future to assist their germ-line relatives (the sperm and/or eggs), the cells that give rise to the next generation of organisms, a form of inclusive fitness (Dugatkin, 2007). Moreover, humans are social organisms, often sacrificing themselves, sharing their resources, and showing kindness to other members of their group. This social cooperation is threatened by cheaters of various types (POST LINK). Unless social cheaters are suppressed, by a range of mechanisms and through processes of kin/group selection, multicellular organisms die and dysfunctional social populations are likely to die out; a toy model of this dynamic follows. Without the willingness to cooperate and, when necessary, to self-sacrifice, social organization is impossible – no bee hives, no civilizations. Imagine a human population composed solely of people who behave in a completely selfish manner, honoring neither their promises nor their social obligations.
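Here is a minimal sketch of that dynamic, a toy public-goods model (the benefit and cost values are illustrative assumptions, not measurements): cooperators pay a cost to produce a shared benefit, cheaters enjoy the benefit without paying, and, absent any suppression mechanism, cheaters take over and the shared benefit collapses.

```python
def next_generation(coop_frac, benefit=3.0, cost=1.0):
    """One round of fitness-proportional reproduction in a well-mixed population."""
    public_good = benefit * coop_frac      # everyone shares what cooperators produce
    w_coop = 1.0 + public_good - cost      # cooperators pay the cost of contributing
    w_cheat = 1.0 + public_good            # cheaters enjoy the benefit for free
    mean_w = coop_frac * w_coop + (1 - coop_frac) * w_cheat
    return coop_frac * w_coop / mean_w

f = 0.99                                   # start with almost all cooperators
for _ in range(500):
    f = next_generation(f)
print(f"cooperator fraction after 500 generations: {f:.4f}")  # collapses toward zero
```

Because a cheater’s fitness always exceeds a cooperator’s by the unpaid cost, cooperation is doomed in this naive model; it is exactly the cheater-suppression mechanisms (kin/group selection, policing, immune surveillance against cancerous cells) that rescue it in real biological systems.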

A key to social interactions involves recognizing those who are, and who are not, part of your social group. A range of traits can serve as markers for social inclusion. A plausible hypothesis is that the explicit importance of group membership and defined social interactions becomes more critical when a society, or a part of society, is under stress. Within the context of social stratification, those in less privileged groups may feel that the social contract has been broken or made a mockery of. The feeling (apparent reality) that members of “elite” or excessively privileged sub-groups are unwilling to make sacrifices for others serves as evidence that social bonds are being broken (4). Times of economic and social disruption (migrations and conquests) can lead to increased explicit recognition of both group and non-group identification. The idea that outsiders (non-group members) threaten the group can feed racism, providing a justification for why non-group members should be treated differently from group members. From this position it is a small (conceptual) jump to the conclusion that non-group members are somehow less worthy, less smart, less trustworthy, less human – different in type from members of the group. Many of these same points are made in an op-ed by Judis (2018. What the Left Misses About Nationalism).

That economic or climatic stresses can foster the growth of racist ideas is not a new idea. Consider that the unequal effects of disruptions associated with the spread of automation (quote from George Will) and the impact of climate change on migrations of groups within and between countries (see Saletan 2018b: Why Immigration Opponents Should Worry About Climate Change) are likely to spur various forms of social unrest, whether revolution or racism, or both – responses that could be difficult to avoid or control.

So, back to the question of biology education: in this context, the ingrained responses of social creatures, responses associated with maintaining social cohesion and integrity, need to be explicitly presented. Similarly, variants of such mechanisms occur within multicellular organisms, and how they work is critical to understanding how diseases such as cancer, one of the clearest forms of a cheater phenotype, are suppressed. Social evolutionary mechanisms provide the basis for understanding a range of phenomena, and the ingrained effects of social selection may be seen as one of the roots of racism, or at the very least a contributing factor worth acknowledging explicitly.

Thanks to Melanie Cooper and Paul Strode for comments. Minor edits 4 May 2019.

Footnotes:

  1. It is interesting to consider whether the 1%, or rather the super 0.1%, represent their own unique form of social parasite, leading periodically to various revolutions – although, sadly, new social parasites appear to re-emerge quite quickly.
  2. A part of the CoreBIO-biofundamentals project 
  3. At this point it is worth noting that biofundamentals itself includes sections on social evolution, kin/group and sexual selection (see Klymkowsky et al., 2016; LibreText link). 
  4. One might be forgiven for thinking that rich and privileged folk who escape paying what is seen as their fair share of taxes might be cast as social cheaters (parasites) who, rather than encouraging racism, might provoke revolutionary thoughts and actions.

Literature cited: 

Acquaviva & Mintz. (2010). Perspective: Are we teaching racial profiling? The dangers of subjective determinations of race and ethnicity in case presentations. Academic Medicine 85, 702-705.

Busby et al. (2016). Admixture into and within sub-Saharan Africa. Elife 5, e15266.

Campbell & Tishkoff. (2008). African genetic diversity: implications for human demographic history, modern human origins, and complex disease mapping. Annu. Rev. Genomics Hum. Genet. 9, 403-433.

Donovan, B.M. (2014). Playing with fire? The impact of the hidden curriculum in school genetics on essentialist conceptions of race. Journal of Research in Science Teaching 51: 462-496.

Dugatkin, L. A. (2007). Inclusive fitness theory from Darwin to Hamilton. Genetics 176, 1375-1380.

Klymkowsky et al. (2016). The design and transformation of Biofundamentals: a non-survey introductory evolutionary and molecular biology course. LSE Cell Biol Edu pii: ar70.

Levchenko et al., (2017). Human accelerated regions and other human-specific sequence variations in the context of evolution and their relevance for brain development. Genome biology and evolution 10, 166-188.

Mayr, E. (1985). The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Cambridge, MA: Belknap Press of Harvard University Press.

Mayr, E. (1994). Typological versus population thinking. Conceptual issues in evolutionary biology, 157-160.

Mayr, E. (2000). Darwin’s influence on modern thought. Scientific American 283, 78-83.

Satel, S. (2002). I am a racially profiling doctor. New York Times 5, 56-58.

Yudell et al., (2016). Taking race out of human genetics. Science 351, 564-565.

Can we talk scientifically about free will?

(edited and updated – 3 May 2019)

For some, the scientific way of thinking is both challenging and attractive. Thinking scientifically leads to an introduction to, and sometimes membership in, a unique community, whose members at their best are curious, critical, creative, and receptive to new and mind-boggling ideas, anchored in objective (reproducible) observations whose implications can be rigorously considered (1).

What I particularly love about science is its communal aspect, within which the novice can point to a new observation or logical limitation and force the Nobel laureate (assuming that they remain cognitively nimble, ego-flexible, and interested in listening) to rethink and revise their positions. Add to that the amazing phenomena that the scientific enterprise has revealed to us: the apparent age and size of the universe, the underlying unity and remarkable diversity of life, the mind-bending behavior of matter-energy at the quantum level, and the apparent bending of space-time. Yet, notwithstanding the power of the scientific approach, there are many essential topics that simply cannot be studied scientifically, and even more for which a range of practical constraints seriously limits our ability to come to meaningful conclusions.

Perhaps nowhere is acknowledging the limits of science more important than in the scientific study of consciousness and self-consciousness. While we can confidently dismiss various speculations (often from disillusioned and displaced physicists) that all matter is “conscious” (2), as well as mystical speculations on the roles of supernatural forces (spirits and such), we need to recognize explicitly why studying consciousness and self-consciousness remains an extremely difficult and problematic area of research. One reason is that various scientific-sounding pronouncements on the impossibility or illusory nature of free will have far-ranging and largely pernicious, if not downright toxic, social and personal implications. Denying the possibility of free will implies that people are not responsible for their actions, and so cannot reasonably be held accountable. In a broader sense, such a view can be seen as justifying treating people as disposable machines, to be sacrificed for some ideological or religious faith (3). It directly contradicts the founding presumptions and aspirations behind the enterprise that is the United States of America, as articulated by Thomas Jefferson, a fragile bulwark against sacrificing individuals on the altar of often pseudoscientific or half-baked ideas.

So the critical question is: is there a compelling reason to take seriously pronouncements such as those that deny the reality of free will? I think not. I would assume that all “normal” human beings come to feel that there is someone (them) listening to various aspects of neural activity, and that they (the listener) can in turn decide, or at the very least influence, what happens next: how they behave, what they think, and how they feel. All of which is to say that there is an undeniable (self-evident) reality associated with self-consciousness, as well as with the feeling of (at least partial) control.

This is not to imply that humans (and other animals) are totally in control of their thoughts and actions, completely “free” – obviously not. First, one’s life history and the details of a situation can dramatically impact thoughts and behaviors, and much of that is based on luck: a range of hereditary factors and our experiences (both long and short term) combine to influence our response to a particular situation – recognition of which is critical for developing empathy for ourselves and others (see The radical moral implications of luck in human life). At the same time, how we (our brain) experience and interpret what our brain (also us) is “saying” to itself is based on genetically and developmentally shaped neural circuitry and signaling systems that influence the activities of complex ensembles of interconnected cellular systems. This is not a matter of neurons firing in deterministic patterns: at the cellular level there are multiple stochastic processes that influence the behaviors of neural networks. There is noise (spontaneous activity) that impacts patterns of neuronal signaling, as well as stochastic processes such as the timing of synaptic vesicle fusion events, the cellular impacts of diffusing molecules, the monoallelic expression of genes (Deng et al., 2014; Zakharova et al., 2009), and various feedback networks, all of which can lead to subtle and likely functional differences between apparently identical cells of what appear to be the “same” type (for the implications of stochastic, single-cell processes see: Biology education in the light of single cell/molecule studies).
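To make one of these stochastic processes concrete, here is a minimal sketch of probabilistic synaptic vesicle release (the release probability and number of release sites are illustrative assumptions, not measured values): the “same” presynaptic spike train produces a different pattern of vesicle fusion on every trial.

```python
import random

def vesicle_releases(n_spikes=20, n_sites=10, p_release=0.3, seed=None):
    """For each spike, count how many of the release sites actually fuse a vesicle."""
    rng = random.Random(seed)
    return [sum(rng.random() < p_release for _ in range(n_sites))
            for _ in range(n_spikes)]

# two trials of the "same" spike train arriving at the "same" synapse
print(vesicle_releases(seed=1))
print(vesicle_releases(seed=2))
# the spike-by-spike release counts differ between trials;
# this noise propagates into the behavior of downstream networks
```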

So let us consider what it would take to make a fully deterministic model of the brain, setting aside for the moment the challenges associated with incorporating the effects of molecular- and cellular-level noise. First there is the inherent difficulty (practical impossibility) of fully characterizing the properties of the living human brain, with its ~100,000,000,000 neurons, making ~7,000,000,000,000,000 synapses with one another and interacting in various ways with ~100,000,000,000 glia, which include non-neuronal astrocytes, oligodendrocytes, and immune-system microglia (von Bartheld et al., 2016). And these considerations ignore the recently discovered effects of the rest of the body (and its microbiome) on the brain (see Mayer et al., 2014; Smith, 2015).
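A toy estimate conveys the scale of the problem. Assuming, purely for illustration, that the state of a single synapse could be captured in eight bytes:

```python
synapses = 7e15            # ~7 quadrillion synapses, as noted above
bytes_per_synapse = 8      # an assumption for illustration; real state needs far more
snapshot_bytes = synapses * bytes_per_synapse
print(f"~{snapshot_bytes / 1e15:.0f} petabytes for one frozen snapshot")
# -> ~56 petabytes, for a single instant, ignoring neuron and glial state,
#    molecular noise, the body, the microbiome, and all dynamics
```

And a static snapshot is the easy part; a deterministic model would also have to capture how every one of those states changes over time.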

Then there is the fact that measuring a system changes the system. In a manner analogous to the Heisenberg uncertainty principle, measuring aspects of neuronal function (or glial-neural interactions) necessarily involves perturbing the examined cells. Recent studies have used a range of light-emitting reporters to follow various aspects of neuronal activity (see Lin and Schnitzer, 2016), but these reporters also perturb the system, if only through the heating effects associated with absorbing and emitting light; and if they, for example, report the levels of intracellular calcium ions, which are involved in a range of cellular behaviors, they will necessarily influence calcium ion concentrations, and so on. Such high-resolution analyses, orders of magnitude higher than functional MRI (fMRI) studies, would likely kill or cripple the person measured. The more accurate the measurement, the more perturbed the system, the more altered its future behaviors can be expected to be, and the less accurate our model of the functioning brain will be.

There is, however, another, more practical question to consider, namely whether current neurobiological methods are adequate for revealing how the brain works. This point has been made in a particularly interesting way by Jonas & Kording (2017) in their paper “Could a neuroscientist understand a microprocessor?”; their analysis indicates that the answer is “probably not,” even though such a processor represents a completely deterministic system.

If it is not possible to predict the system’s behavior, then any discussion of free will versus determinism is moot – unknowable and, in an important scientific sense, uninteresting. In a Popperian sense, only the ability to make testable (falsifiable) predictions renders something, at the end of the day, scientifically useful.

I have little intelligent to say about artificial intelligence, since free will and intelligence are rather different things. While it is clearly possible to build a computer system (hardware and software) that can beat people at complex games such as chess (Kasparov, 2010; see AlphaZero) and Go (Silver et al., 2016), it remains unclear whether a computer can “want” to play chess or Go in the way a human being does. We can even consider the value of evolving free will as a way to confuse our enemies and seduce love interests or non-sexual social contacts; Brembs (2010) presents an interesting paper on the evolutionary value of free will in lower organisms (invertebrates).

What seems clear to me (and considered before: The pernicious effects of disrespecting the constraints of science) is that the social, emotional, and political damage associated with claiming to have reached a “scientifically established” conclusion on topics that are demonstrably beyond the scope of scientific resolution (a completely knowable and strictly deterministic universe being impossible to attain) should be explained to the general public and stressed by the scientific and educational community. Such claims can be seen as a form of scientific malpractice and should, quite rightly, be dismissed out of hand. Rather than becoming the focus of academic or public debate, they are best ignored, and those who promulgate them, often out of careerist motivations (or just arrogance), should be pitied rather than promoted as public intellectuals to be taken seriously.

A note on images: Parts of the header image are modified from images created by Tom Edwards (of WallyWare fame) and used by permission. The “Becky O” Bad Mom card by Roz Chast is used by permission. Thanks to Michael Stowell for pointing out the work of Jonas and Kording. Also, it turns out that physicist Sabine Hossenfelder has recently had something to say on the subject. Minor updates and the re-insertion of figures – 26 October 2020.

Footnotes 

1. We won’t consider them at their worst; suffice it to say, they can embrace all that is wrong with humanity, leading to a range of atrocities.

2. The universe may be conscious, say prominent scientists

3. A common topic of the philosopher John Gray: Believing in Reason is Childish

Literature cited:

Brembs, B. (2010). Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society of London B: Biological Sciences, rspb20102325.

Deng, Q., Ramsköld, D., Reinius, B. and Sandberg, R. (2014). Single-cell RNA-seq reveals dynamic, random monoallelic gene expression in mammalian cells. Science 343, 193-196.

Kasparov, G. (2010). The chess master and the computer. The New York Review of Books 57, 16-19.

Lin, M. Z. and Schnitzer, M. J. (2016). Genetically encoded indicators of neuronal activity. Nature Neuroscience 19, 1142.

Jonas, E. and Kording, K. P. (2017). Could a neuroscientist understand a microprocessor? PLoS Computational Biology 13, e1005268.

Mayer, E. A., Knight, R., Mazmanian, S. K., Cryan, J. F., & Tillisch, K. (2014). Gut microbes and the brain: paradigm shift in neuroscience. Journal of Neuroscience, 34, 15490-15496.

Silver et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature 529, 484.

Smith, P. A. (2015). The tantalizing links between gut microbes and the brain. Nature News, 526, 312.

von Bartheld, C. S., Bahney, J. and Herculano‐Houzel, S. (2016). The search for true numbers of neurons and glial cells in the human brain: a review of 150 years of cell counting. Journal of Comparative Neurology 524, 3865-3895.

Zakharova, I. S., Shevchenko, A. I. and Zakian, S. M. (2009). Monoallelic gene expression in mammals. Chromosoma 118, 279-290.