Remembering the past and recognizing the limits of science …

A recent article in the Guardian reports on a debate at University College London (1) on whether to rename buildings because the people honored harbored odious ideological and political positions. Similar debates and decisions, in some cases involving unacceptable and abusive behaviors rather than ideological positions, have occurred at a number of institutions (see Calhoun at Yale, Sackler in NYC, James Watson at Cold Spring Harbor, Tim Hunt at the MRC, and sexual predators within the National Academy of Sciences). These debates raise important and sometimes troubling issues.

When a building is named after a scientist, it is generally in order to honor that person’s scientific contributions. The scientist’s ideological opinions are rarely considered explicitly, although they may influence the decision at the time.  In general, scientific contributions are timeless in that they represent important steps in the evolution of a discipline, often by establishing a key observation, idea, or conceptual framework upon which subsequent progress is based – they are historically important.  In this sense, whether a scientific contribution was correct (as we currently understand the natural world) is less critical than what that contribution led to. The contribution marks a milestone or a turning point in a discipline, understanding that the efforts of many underlie disciplinary progress and that those contributors made it possible for others to “see further.” (2)

Since science is not about recognizing or establishing a single unchanging capital-T-Truth, but rather about developing an increasingly accurate model for how the world works, it is constantly evolving and open to revision.  Working scientists are not particularly upset when new observations lead to revisions to or the abandonment of ideas or the addition of new terms to equations.(3)

Compare that to the situation in the ideological, political, or religious realms.  A new translation or interpretation of a sacred text can provoke schism and remarkably violent responses between respective groups of believers, and often the closer the groups are to one another, the more horrific the violence that emerges.  In contrast, over the long term, scientific schools of thought resolve, often merging with one another to form unified disciplines. From my own perspective, and notwithstanding the temptation to generate new sub-disciplines (in part in response to funding factors), all of the life sciences have collapsed into a unified evolutionary/molecular framework.  All scientific disciplines tend to become, over time, consistent with, although not necessarily deducible from, one another, particularly when the discipline respects and retains connections to the real (observable) world.(4)  How different from the political and ideological.

The historical progression of scientific ideas is dramatically different from that of political, religious, or social mores.  No matter what some might claim, the modern quantum mechanical view of the atom bears little meaningful similarity to the ideas of the cohort that included Leucippus and Democritus.  There is progress in science.  In contrast, various belief systems rarely abandon their basic premises.  A politically right- or left-wing ideologue might well find kindred spirits in the ancient world.  There were genocidal racists, theists, and nationalists in the past, and there are genocidal racists, theists, and nationalists now.  There were (limited) democracies then, as there are (limited) democracies now; monarchical, oligarchical, and dictatorial political systems then and now; theistic religions then and now. Absolutist ideals of innate human rights, then as now, are routinely sacrificed for a range of mostly self-serving or politically expedient reasons.  Advocates of rule by the people repeatedly install repressive dictatorships. The authors of the United States Constitution declared the sacredness of human rights and then legitimized slavery. “The Bible … posits universal brotherhood, then tells Israel to kill all the Amorites.” (Phil Christman). The eugenics movement is a good example; for the promise of a genetically perfect future, existing people are treated inhumanely – just another version of apocalyptic (ends-justify-the-means) thinking. 

Ignoring the simpler case of not honoring criminals (sexual and otherwise), most calls for removing names from buildings are based on the odious ideological positions espoused by the honoree – typically some version of racist, nationalistic, or sexist ideology.  The complication comes from the fact that people are complex, shaped by the context within which they grew up, their personal histories, the dominant ideological milieu they experienced, and their reactions to it.  But these ideological positions are not scientific, although a person’s scientific worldview and their ideological positions may be intertwined. The honoree may claim that science “says” something unambiguous and unarguable, often in an attempt to force others to acquiesce to their perspective.  A modern example would be arguments about whether the climate is changing due to anthropogenic factors (a scientific topic) and what to do about it (an economic, political, and perhaps ideological question).(5)

So what to do?  To me, the answer seems reasonably obvious – assuming that the person’s contribution was significant enough, we should leave the name in place and use the controversy to consider why they held their objectionable beliefs and more explicitly why they were wrong to claim scientific justification for their ideological (racist / nationalist / sexist / socially prejudiced) positions.(6)  Consider explicitly why an archeologist (Flinders Petrie), a naturalist (Francis Galton), a statistician (Karl Pearson), and an advocate for women’s reproductive rights (Marie Stopes) might all support the non-scientific ideology of eugenics and forced sterilization.  We can use such situations as a framework within which to delineate the boundaries between the scientific and the ideological. 

Understanding this distinction is critical and is one of the primary justifications for requiring people not necessarily interested in science or science-based careers to take science courses.  Yet all too often these courses fail to address the constraints of science, the difference between scientific conclusions and political or ideological opinions, and the implications of scientific models.  I would argue that unless students (and citizens) come to understand what constitutes a scientific idea or conclusion and what reflects a political or ideological position couched in scientific or pseudo-scientific terms, they are not learning what they need to know about science or its place in society.  That science is used as a proxy for Truth writ large is deeply misguided. It is much more important to understand how science works than to remember the number of phyla or the names of the amino acids, to be able to calculate the pH of a solution, or to understand the processes going on at the center of a galaxy or the details of a black hole’s behavior.  While sometimes harmless, misunderstanding science and how it is used socially can have traumatic consequences, such as drawing harmful conclusions about individuals from statistical generalizations about populations, avoidable deaths from measles, and the forced “eugenic” sterilization of people deemed defective.  We should seek out and embrace opportunities to teach about these issues, even if it means we name buildings after imperfect people.  

footnotes:

  1. The location of some of my post-doc work.
  2. In the words of Isaac Newton, “If I have seen further than others, it is by standing upon the shoulders of giants.”
  3.  Unless, of course, the ideas and equations being revised or abandoned are one’s own. 
  4.  Perhaps the most striking exception occurs in physics on the subjects of quantum mechanics and relativity, but as I am not a physicist, I am not sure about that. 
  5.  Perhaps people are “meant” to go extinct. 
  6.  The situation is rather different outside of science, because the reality of progress is more problematic and past battles continue to be refought.  Given the history of Reconstruction and the Confederate “Lost Cause” movement [see PBS’s Reconstruction] following the American Civil War, monuments to defenders of slavery, no matter how admirable they may have been in terms of personal bravery and such, reek of implied violence, subjugation, and repression, particularly when the person honored went on to found an institution dedicated to racial hatred and violent intimidation [link]. There would seem little doubt that a monument in honor of a Nazi needs to be eliminated and replaced by one to their victims or to those who defeated them.


Please note, given the move from PLoS some of the links in the posts may be broken; some minor editing in process.  All by Mike Klymkowsky unless otherwise noted

On teaching genetics, social evolution and understanding the origins of racism

Links between genetics and race crop up periodically in the popular press (link; link), but the real, substantive question, and the topic of a number of recent essays (see Saletan. 2018a. Stop Talking About Race and IQ), is whether the idea of “race” as commonly understood, and used by governments to categorize people (link), makes scientific sense.  More to the point, do biology educators have an unmet responsibility to modify and extend their materials and pedagogical approaches to address the non-scientific, often racist, implications of racial characterizations?  Such questions are complicated by a second factor, independent of whether the term race has any useful scientific purpose, namely the need to help students understand the biological (evolutionary) origins of racism itself, together with the stressors that lead to its periodic re-emergence as a socio-political factor. In times of social stress, reactions to strangers (others) identified by variations in skin color or overt religious or cultural signs (dress) can provoke hostility against those perceived to be members of a different social group.  As far as I can tell, few in the biology education community – which includes those involved in generating textbooks, organizing courses and curricula, and designing, delivering, and funding public science programs, including PBS’s NOVA, the science education efforts of HHMI and other private foundations, and Science Friday on public radio – directly address the roots of racism. Those roots are associated with biological processes, such as the origins and maintenance of multicellularity and other forms of social organization among organisms, that coordinate organisms’ activities and establish defenses against social cheaters and against processes such as cancer (1).  These established defense mechanisms can, if not recognized and understood, morph into reflexive and unjustified intolerance of, hostility toward, and persecution of various “distinguishable others.”  I will consider both questions, albeit briefly, here. 


Two factors have influenced my thinking about these questions.  The first involves the design of the biofundamentals text/course and its extension to include topics in genetics (2).  This involved thinking about what is commonly taught in genetics, what is critical for students to know going forward (and by implication what is not), and where materials on genetic processes best fit into a molecular biology curriculum (3).  While engaged in such navel gazing there came an email from Malcolm Campbell describing student responses to the introduction of a chapter section on race and racism in his textbook Integrating Concepts in Biology.  The various ideas of race, the origins of racism, and the periodic appearance of anti-immigrant, anti-religious, and racist groups raise an important question – how best to distinguish an undeniable observation, that different, isolated sub-populations of a species can be told apart from one another (see the quote from Ernst Mayr’s 1994 “Typological versus Population thinking”), from the deeper biological reality, that at the level of the individual these differences are meaningless. In what I think is an interesting way, the idea that people can be meaningfully categorized as instances of various platonic ideals (for example, as members of one race or another) based on anatomical / linguistic differences between once-distinct sub-populations of humans is similar to the dichotomy between common wisdom (e.g. what has shaped people’s working understanding of the motion of objects) and the counter-intuitive nature of empirically established scientific ideas (e.g. Newton’s laws and the implications of Einstein’s theory of general relativity) – what appears on the surface to be true but in fact is not.  In this specific case, there is a pressure toward what Mayr terms “typological” thinking, in which we class people into idealized (platonic) types or races.   

As pointed out most dramatically, and repeatedly, by Mayr (1985; 1994; 2000), and supported by the underlying commonality of molecular biological mechanisms and the continuity of life, stretching back to the last universal common ancestor, there are only individuals who are members of various populations that have experienced various degrees of separation from one another.  In many cases, these populations have diverged and, through geographic, behavioral, and structural adaptations driven by natural, social, and sexual selection, together with the effects of various non-adaptive events, such as bottlenecks, founder effects, and genetic drift, may eventually become reproductively isolated from one another, forming new species.  An understanding of evolutionary principles and molecular mechanisms transforms biology from a study of non-existent types into a study of populations with their origins in common, sharing a single root – the last universal common ancestor (LUCA).   Over the last ~200,000 years the movement of humans, first within Africa and then across the planet, has been impressive.  These movements have been accompanied by the fragmentation of human populations. Campbell and Tishkoff (2008) identified 13 distinct ancestral African populations, while Busby et al. (2016) recognized 48 sub-Saharan population groups.  The fragmentation of the human population is now being reversed (or rather rendered increasingly less informative) by the effects of migration and extensive intermingling.   
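The role of drift and bottlenecks in driving divergence between isolated populations can be made concrete with a toy simulation. The sketch below is purely illustrative – the population sizes, generation counts, and seed are arbitrary choices, not empirical values – and implements the standard Wright-Fisher model of neutral drift:

```python
import random

def wright_fisher(pop_size, p0, generations, rng):
    """Track one neutral allele's frequency under pure genetic drift.

    Each generation, all 2N gene copies are resampled from the previous
    generation's frequency - the classic Wright-Fisher model.
    """
    p = p0
    for _ in range(generations):
        copies = 2 * pop_size
        count = sum(1 for _ in range(copies) if rng.random() < p)
        p = count / copies
    return p

rng = random.Random(42)
# Ten replicate populations of each size, all starting at frequency 0.5.
small = [wright_fisher(pop_size=50, p0=0.5, generations=100, rng=rng) for _ in range(10)]
large = [wright_fisher(pop_size=1000, p0=0.5, generations=100, rng=rng) for _ in range(10)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Drift scatters the small (bottlenecked) populations far more widely.
print(variance(small), variance(large))
```

The point is qualitative: isolated populations starting from identical allele frequencies wander apart through sampling alone, and small populations wander much faster – no "types" are needed to explain between-population differences.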

Ideas such as race (and in a sense species) try to make sense of the diversity of the many different types of organisms we observe. They are based on a form of essentialist or typological thinking – thinking that different species and populations are completely different “kinds” of objects, rather than individuals in populations connected historically to all other living things. Race is a more pernicious version of this illusion, a pseudo-scientific, political, and ideological idea that postulates that humans come in distinct, non-overlapping types (again, from Mayr).  Such a weird idea underlies the various illogical and often contradictory legal “rules” by which a person’s “race” is determined.  

Given the reality of the individual and the unreality of race, racial profiling (see Satel, 2002) can lead to serious medical mistakes, as made clear in the essays by Acquaviva & Mintz (2010) “Are We Teaching Racial Profiling?”, Yudell et al. (2016) “Taking Race out of Human Genetics”, and Donovan (2014) “The impact of the hidden curriculum”. 

The idea of race as a type fails to recognize the dynamics of the genome over time.  Were it possible (sadly it is not), a comparative analysis of the genomes of a “living fossil”, such as modern-day coelacanths, and their ancestors (living more than 80 million years ago) would likely reveal dramatic changes in genomic DNA sequence.  In this light, the fact that between 100 and 200 new mutations are introduced into the human genome per generation (see Dolgin 2009 Human mutation rate revealed) seems like a useful number for students, not to mention the general public, to appreciate. Similarly, the genomic/genetic differences between humans, our primate relatives, and other mammals, and the mechanisms behind them (Levchenko et al., 2017)(blog link), seem worth considering and explicitly incorporating into curricula on genetics and human evolution.  
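To give students a feel for what that mutation rate implies, a back-of-envelope calculation helps. Aside from the 100-200 range quoted above, the numbers below are assumed, illustrative values (a ~25-year generation time, a ~200,000-year span):

```python
# Back-of-envelope: new mutations accumulating along a single line of descent.
new_mutations_per_generation = 150  # midpoint of the quoted 100-200 range
generation_time_years = 25          # assumed human generation time
span_years = 200_000                # rough age of anatomically modern humans

generations = span_years // generation_time_years
total_new_mutations = generations * new_mutations_per_generation
print(generations, total_new_mutations)  # 8000 1200000
```

Along any single line of descent, on the order of a million novel mutations separate a modern human from an ancestor 200,000 years back – the genome is anything but static.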

While race may be meaningless, racism is not.  How to understand racism?  Is it some kind of political artifact, or does it arise from biological factors?  Here, I believe, we find an important omission in many biology courses, textbooks, and curricula – namely an introduction to, and meaningful discussion of, social evolutionary mechanisms. Many is the molecular/cell biology curriculum that completely ignores such evolutionary processes. Yet the organisms that are the primary focus of biological research (and who pay for such research, e.g. humans) are social organisms at two levels.  In multicellular organisms, somatic cells – which specialize to form muscular, neural, circulatory, and immune systems, bone, and connective tissues – sacrifice their own inter-generational reproductive future to assist their germ-line (sperm and/or egg) relatives, the cells that give rise to the next generation of organisms, a form of inclusive fitness (Dugatkin, 2007).  Moreover, humans are social organisms, often sacrificing themselves, sharing their resources, and showing kindness to other members of their group. This social cooperation is threatened by cheaters of various types (POST LINK).  Unless these social cheaters are suppressed, through a range of mechanisms and through processes of kin/group selection, multicellular organisms die and dysfunctional social populations are likely to die out.  Without the willingness to cooperate and, when necessary, to self-sacrifice, social organization is impossible – no bee hives, no civilizations.  Imagine a human population composed solely of people who behave in a completely selfish manner, not honoring their promises or social obligations.  
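The logic of cheater suppression can be illustrated with a minimal evolutionary game. The sketch below is a standard replicator-dynamics treatment of a public-goods interaction; all payoff values are arbitrary, chosen only for illustration. When cheating carries no cost, cheaters sweep the population; an imposed suppression cost that outweighs the cost of cooperating keeps them rare:

```python
def final_cheater_fraction(steps, punishment, benefit=3.0, cost=1.0, x0=0.1, dt=0.01):
    """Replicator dynamics for a public-goods game (illustrative sketch).

    Everyone receives benefit * (fraction of cooperators); cooperators also
    pay `cost`, while cheaters instead pay an externally imposed
    `punishment` - the stand-in for cheater-suppression mechanisms.
    """
    x = x0  # fraction of cheaters
    for _ in range(steps):
        coop = 1.0 - x
        payoff_cheat = benefit * coop - punishment
        payoff_coop = benefit * coop - cost
        mean_payoff = x * payoff_cheat + coop * payoff_coop
        x += dt * x * (payoff_cheat - mean_payoff)  # replicator equation (Euler step)
        x = min(max(x, 0.0), 1.0)  # keep the fraction in [0, 1]
    return x

# With no suppression, cheaters sweep toward fixation...
unchecked = final_cheater_fraction(steps=20_000, punishment=0.0)
# ...while a suppression cost exceeding the cost of cooperating keeps them rare.
policed = final_cheater_fraction(steps=20_000, punishment=2.0)
print(unchecked, policed)
```

The design choice worth noting: the only difference between the two runs is the `punishment` term, which is exactly the point of the paragraph above – cooperation persists only when some mechanism makes cheating costly.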

A key to social interactions involves recognizing those who are, and who are not part of your social group.  A range of traits can serve as markers for social inclusion.  A plausible hypothesis is that the explicit importance of group membership and defined social interactions becomes more critical when a society, or a part of society, is under stress.  Within the context of social stratification, those in the less privileged groups may feel that the social contract has been broken or made a mockery of.  The feeling (apparent reality) that members of “elite” or excessively privileged sub-groups are not willing to make sacrifices for others serves as evidence that social bonds are being broken (4). Times of economic and social disruption (migrations and conquests) can lead to increased explicit recognition of both group and non-group identification.  The idea that outsiders (non-group members) threaten the group can feed racism, a justification for why non-group members should be treated differently from group members.  From this position it is a small (conceptual) jump to the conclusion that non-group members are somehow less worthy, less smart, less trustworthy, less human – different in type from members of the group – many of these same points are made in an op-ed piece by Judis. 2018. What the Left Misses About Nationalism.

That economic or climatic stresses can foster the growth of racist ideas is not a new idea; the unequal effects of the disruptions likely to accompany the spread of automation (quote from George Will) and the migrations within and between countries driven by climate change (see Saletan 2018b: Why Immigration Opponents Should Worry About Climate Change) are likely to spur various forms of social unrest, whether revolution or racism, or both – responses that could be difficult to avoid or control.   

So back to the question of biology education – in this context, the ingrained responses of social creatures, responses associated with maintaining social cohesion and integrity, need to be explicitly presented.  Similarly, variants of such mechanisms operate within multicellular organisms, and how they work is critical to understanding how diseases such as cancer, one of the clearest forms of a cheater phenotype, are suppressed.  Social evolutionary mechanisms provide the basis for understanding a range of phenomena, and the ingrained effects of social selection may be seen as one of the roots of racism, or at the very least a contributing factor worth acknowledging explicitly.  

Thanks to Melanie Cooper and Paul Strode for comments. Minor edits 4 May 2019.

Footnotes:

  1. It is interesting to consider whether the 1%, or rather the super 0.1%, represent their own unique form of social parasite, periodically provoking various revolutions – although sadly, new social parasites appear to re-emerge quite quickly.
  2. A part of the CoreBIO-biofundamentals project 
  3. At this point it is worth noting that biofundamentals itself includes sections on social evolution, kin/group and sexual selection (see Klymkowsky et al., 2016; LibreText link). 
  4. One might be forgiven for thinking that rich and privileged folk who escape paying what is seen as their fair share of taxes, might be cast as social cheaters (parasites) who, rather than encouraging racism might lead to revolutionary thoughts and actions. 

Literature cited: 

Acquaviva & Mintz. (2010). Perspective: Are we teaching racial profiling? The dangers of subjective determinations of race and ethnicity in case presentations. Academic Medicine 85, 702-705.

Busby et al. (2016). Admixture into and within sub-Saharan Africa. Elife 5, e15266.

Campbell & Tishkoff. (2008). African genetic diversity: implications for human demographic history, modern human origins, and complex disease mapping. Annu. Rev. Genomics Hum. Genet. 9, 403-433.

Donovan, B.M. (2014). Playing with fire? The impact of the hidden curriculum in school genetics on essentialist conceptions of race. Journal of Research in Science Teaching 51: 462-496.

Dugatkin, L. A. (2007). Inclusive fitness theory from Darwin to Hamilton. Genetics 176, 1375-1380.

Klymkowsky et al. (2016). The design and transformation of Biofundamentals: a non-survey introductory evolutionary and molecular biology course. LSE Cell Biol Edu pii: ar70.

Levchenko et al., (2017). Human accelerated regions and other human-specific sequence variations in the context of evolution and their relevance for brain development. Genome biology and evolution 10, 166-188.

Mayr, E. (1985). The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Cambridge, MA: Belknap Press of Harvard University Press.

Mayr, E. (1994). Typological versus population thinking. Conceptual issues in evolutionary biology, 157-160.

—- (2000). Darwin’s influence on modern thought. Scientific American 283, 78-83.

Satel, S. (2002). I am a racially profiling doctor. New York Times 5, 56-58.

Yudell et al., (2016). Taking race out of human genetics. Science 351, 564-565.

Can we talk scientifically about free will?

(edited and updated – 3 May 2019)

For some, the scientific way of thinking is both challenging and attractive.  Thinking scientifically leads to an introduction to, and sometimes membership in, a unique community, whose members at their best are curious, critical, creative, and receptive to new and mind-boggling ideas, anchored in objective (reproducible) observations whose implications can be rigorously considered (1).  

What I particularly love about science is its communal aspect, within which a novice can point to a new observation or logical limitation and force a Nobel laureate (assuming that they remain cognitively nimble, ego-flexible, and interested in listening) to rethink and revise their positions. Add to that the amazing phenomena that the scientific enterprise has revealed to us: the apparent age and size of the universe, the underlying unity and remarkable diversity of life, the mind-bending behavior of matter-energy at the quantum level, and the apparent bending of space-time.  Yet, notwithstanding the power of the scientific approach, there are many essential topics that simply cannot be studied scientifically, and even more in which a range of practical constraints seriously limits our ability to come to meaningful conclusions.  

Perhaps acknowledging the limits of science is nowhere more important than in the scientific study of consciousness and self-consciousness.  While we can confidently dismiss various speculations (often from disillusioned and displaced physicists) that all matter is “conscious” (2), or mystical speculations on the roles of supernatural forces (spirits and such), we need to recognize explicitly why studying consciousness and self-consciousness remains an extremely difficult and problematic area of research.  One aspect is that various scientific-sounding pronouncements on the impossibility or illusory nature of free will have far-ranging and largely pernicious, if not downright toxic, social and personal implications. Denying the possibility of free will implies that people are not responsible for their actions – and so cannot reasonably be held accountable.  In a broader sense, such a view can be seen as justifying treating people as disposable machines, to be sacrificed for some ideological or religious faith (3).  It directly contradicts the founding presumptions and aspirations behind the enterprise that is the United States of America, as articulated by Thomas Jefferson, a fragile bulwark against sacrificing individuals on the altar of often pseudoscientific or half-baked ideas. 

So the critical question is: is there a compelling reason to take pronouncements such as those that deny the reality of free will seriously?   I think not.  I would assume that all “normal” human beings come to feel that there is someone (them) listening to various aspects of neural activity and that they (the listener) can in turn decide (or at the very least influence) what happens next – how they behave, what they think, and how they feel.  All of which is to say that there is an undeniable (self-evident) reality associated with self-consciousness, as well as with the feeling of (at least partial) control. 

This is not to imply that humans (and other animals) are totally in control of their thoughts and actions, completely “free” – obviously not.  First, one’s life history and the immediate situation can dramatically impact thoughts and behaviors, and much of that is based on luck and our responses to it – recognition of which is critical for developing empathy for ourselves and others (see The radical moral implications of luck in human life).  At the same time how we (our brain) experiences and interprets what our brain (also us) is “saying” to itself is based on genetically and developmentally shaped neural circuitry and signaling systems that influence the activities of complex ensembles of interconnected cellular systems – it is not neurons firing in deterministic patterns, since at the cellular level there are multiple stochastic processes that influence the behaviors of neural networks. There is noise (spontaneous activity) that impacts patterns of neuronal signaling, as well as stochastic processes, such as the timing of synaptic vesicle fusion events, the cellular impacts of diffusing molecules, and the monoallelic expression of genes (Deng et al., 2014; Zakharova et al., 2009) that can lead to subtle and likely functional differences between apparently identical cells of what appear to be the “same” type (for the implications of stochastic, single cell processes see: Biology education in the light of single cell/molecule studies).
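The consequence of stochastic synaptic events can be illustrated with a deliberately crude model. In the sketch below, every parameter is invented for illustration (real vesicle-release probabilities and synapse counts vary widely): a toy threshold neuron receives the identical stimulus on every trial, yet probabilistic transmitter release makes the spike/no-spike outcome vary from trial to trial:

```python
import random

def one_trial(n_synapses=100, release_prob=0.5, threshold=55, rng=random):
    """One presentation of an identical stimulus to a toy threshold neuron.

    Every synapse is driven on every trial, but each releases its
    neurotransmitter only with probability `release_prob` - a crude
    stand-in for stochastic synaptic vesicle fusion.
    """
    released = sum(1 for _ in range(n_synapses) if rng.random() < release_prob)
    return released >= threshold  # True = the neuron spikes

rng = random.Random(0)
outcomes = [one_trial(rng=rng) for _ in range(1000)]
spike_rate = sum(outcomes) / len(outcomes)
# Same stimulus every time, yet the response is probabilistic, not fixed.
print(spike_rate)
```

Even this cartoon makes the point: identical inputs do not yield identical outputs once cellular-level noise is in the loop, which is why "neurons firing in deterministic patterns" is the wrong picture.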

So let us consider what it would take to make a fully deterministic model of the brain, without considering for the moment the challenges associated with incorporating the effects of molecular and cellular level noise. First there is the inherent difficulty (practical impossibility) of fully characterizing the properties of the living human brain, with its ~100,000,000,000 neurons, making ~7,000,000,000,000,000 synapses with one another, and interacting in various ways with ~100,000,000,000 glia that include non-neuronal astrocytes, oligodendrocytes, and immune system microglia (von Bartheld et al., 2016). These considerations ignore the recently discovered effects of the rest of the body (and its microbiome) on the brain (see Mayer et al., 2014; Smith, 2015).
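Even ignoring dynamics, a quick calculation using the counts above suggests the scale of the problem. The 4 bytes per synapse below is an assumed, illustrative figure (one 32-bit "weight" per synapse, ignoring all other state):

```python
# Scale of a static, synapse-level snapshot of one human brain,
# using the counts quoted above.
neurons = 100_000_000_000          # ~1e11 neurons
synapses = 7_000_000_000_000_000   # ~7e15 synapses
bytes_per_synapse = 4              # assumption: one 32-bit weight, nothing else

petabytes = synapses * bytes_per_synapse / 1e15
print(petabytes)  # 28.0
```

And that is for a static snapshot; a deterministic model would also need the time-varying state of every neuron, glial cell, and synapse.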

Then there is the fact that measuring a system changes the system. In a manner analogous to the Heisenberg uncertainty principle, measuring aspects of neuronal function (or glial-neural interactions) necessarily perturbs the examined cell. Recent studies have used a range of light-emitting reporters to follow various aspects of neuronal activity (see Lin and Schnitzer, 2016), but these reporters perturb the system, if only through the heating effects associated with absorbing and emitting light; and if they, for example, report the levels of intracellular calcium ions, which are involved in a range of cellular behaviors, they will necessarily influence calcium ion concentrations. Such a high-resolution analysis, orders of magnitude finer than functional MRI (fMRI) studies (illustrated in the heading picture), would likely kill or cripple the person measured. The more accurate the measurement, the more perturbed the system, the more altered its future behaviors can be expected to be, and the less accurate our model of the functioning brain will be.

There is, however, another more practical question to consider, namely are current neurobiological methods adequate for revealing how the brain works.  This point has been made in a particularly interesting way by Jonas & Kording (2017) in their paper “Could a neuroscientist understand a microprocessor?” – their analysis indicates the answer is “probably not”, even though such a processor represents a completely deterministic system.
 

If it is not possible to predict the system’s behavior, then any discussion of free will versus determinism is moot – unknowable and, in an important scientific sense, uninteresting: in a Popperian way, it is only the ability to make and falsify predictions that, at the end of the day, makes something scientific.  

I have little intelligent to say about artificial intelligence, since free will and intelligence are rather different things. While it is clearly possible to build a computer system (hardware and software) that can beat people at complex games such as chess (Kasparov, 2010; see AlphaZero) and Go (Silver et al., 2016), it remains completely unclear whether a computer can “want” to play chess or Go in the way a human being does.  We can even consider the value of evolving free will, as a way to confuse our enemies and seduce love interests or non-sexual social contacts. Brembs (2010) presents an interesting paper on the evolutionary value of free will in lower organisms (invertebrates).

What seems clear to me (and considered before: The pernicious effects of disrespecting the constraints of science) is that the damage – social, emotional, and political – associated with claiming a “scientifically established” conclusion on topics that are demonstrably beyond the scope of scientific resolution (a completely knowable and strictly deterministic universe being impossible to attain) should be clearly explained to the general public and stressed on and by the scientific and educational communities.  Such claims could be seen as a form of scientific malpractice and should, quite rightly, be dismissed out of hand. Rather than become the focus of academic or public debate, they are best ignored, and those who promulgate them, often out of careerist motivations (or just arrogance), should be pitied rather than promoted as public intellectuals to be taken seriously.

A note on the header image: parts of the header image are modified from images created by Tom Edwards (of WallyWare fame) and used by permission. The “Becky O” Bad Mom card by Roz Chast is used by permission.  Thanks to Michael Stowell for pointing out the work of Jonas and Kording.  Also, it turns out that physicist Sabine Hossenfelder has recently had something to say on the subject.

Footnotes 

1. We won’t consider them at their worst; suffice it to say that they can embrace all that is wrong with humanity, leading to a range of atrocities.

3. The universe may be conscious, say prominent scientists

4. A common topic of the philosopher John Gray: such as Believing in Reason is Childish

Literature cited:

Brembs, B. (2010). Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society of London B: Biological Sciences, rspb20102325.

Deng, Q., Ramsköld, D., Reinius, B. and Sandberg, R. (2014). Single-cell RNA-seq reveals dynamic, random monoallelic gene expression in mammalian cells. Science 343, 193-196.

Kasparov, G. (2010). The chess master and the computer. The New York Review of Books 57, 16-19.

Lin, M. Z. and Schnitzer, M. J. (2016). Genetically encoded indicators of neuronal activity. Nature neuroscience 19, 1142.

Jonas, E., & Kording, K. P. (2017). Could a neuroscientist understand a microprocessor? PLoS Computational Biology, 13, e1005268.

Mayer, E. A., Knight, R., Mazmanian, S. K., Cryan, J. F., & Tillisch, K. (2014). Gut microbes and the brain: paradigm shift in neuroscience. Journal of Neuroscience, 34, 15490-15496.

Silver et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature 529, 484.

Smith, P. A. (2015). The tantalizing links between gut microbes and the brain. Nature News, 526, 312.

von Bartheld, C. S., Bahney, J. and Herculano‐Houzel, S. (2016). The search for true numbers of neurons and glial cells in the human brain: a review of 150 years of cell counting. Journal of Comparative Neurology 524, 3865-3895.

Zakharova, I. S., Shevchenko, A. I. and Zakian, S. M. (2009). Monoallelic gene expression in mammals. Chromosoma 118, 279-290.

Genes – way weirder than you thought

Pretty much everyone, at least in societies with access to public education or exposure to media in its various forms, has been introduced to the idea of the gene, but “exposure does not equate to understanding” (see Lanie et al., 2004).  Here I will argue that part of the problem is that instruction in genetics (or in more modern terms, the molecular biology of the gene and its role in biological processes) has not kept up with the advances in our understanding of the molecular mechanisms underlying biological processes (Gayon, 2016).  

Let us reflect (for a moment) on the development of the concept of a gene. Over the course of human history, those who have been paying attention to such things have noticed that organisms appear to come in “types”, what biologists refer to as species. At the same time, individual organisms of the same type are not identical to one another; they vary in a range of ways. Moreover, these differences can be passed from generation to generation, and by controlling which organisms were bred together, people found that some of the resulting offspring displayed more extreme versions of the “selected” traits. By strictly controlling which individuals were bred together over a number of generations, people were able to select for the specific traits they desired. As an interesting aside, as people domesticated animals such as cows and goats, the availability of associated resources (e.g. milk) led to reciprocal effects – resulting in traits such as adult lactose tolerance (see Evolution of (adult) lactose tolerance & Gerbault et al., 2011). Overall, the process of plant and animal breeding is generally rather harsh (something that the fanciers of strange breeds who object to GMOs might reflect upon), in that individuals that did not display the desired trait(s) were generally destroyed or, at best, not allowed to breed.

Charles Darwin took inspiration from this process, substituting “natural” for artificial (human-determined) selection as the force shaping populations, eventually generating new species (Darwin, 1859). Underlying such evolutionary processes was the presumption that traits, and their variation, were “encoded” in some type of “factors”, eventually known as genes and their variants, alleles. Genes influenced the organism’s molecular, cellular, and developmental systems, but the nature of these inheritable factors, and of the molecular trait-building machinery active in living systems, remained more or less completely obscure.

Through his studies on peas, Gregor Mendel was the first to clearly identify some of the rules for the behavior of these inheritable factors, using highly stereotyped and essentially discontinuous traits – a pea was either yellow or green, wrinkled or smooth. Such traits, while they exist in other organisms, are in fact rare – an example of how the scientific exploration of exceptional situations can help reveal general processes. The downside, however, is the promulgation of the idea that genes and traits are somehow discontinuous – that a trait is yes/no, displayed by an organism or not – in contrast to the reality that the link between the two is complex, a reality rarely directly addressed (apparently) in most introductory genetics courses. Understanding such processes is critical to appreciating the fact that genetics is often not destiny, but rather an alteration of probabilities (see Cooper et al., 2013). Without such a more nuanced and realistic understanding, it can be difficult to make sense of genetic information.
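The idea that genotypes shift probabilities rather than dictate outcomes can be made concrete with a toy simulation of incomplete penetrance. The penetrance values used below (60% of carriers display the trait, versus 5% of non-carriers) are invented purely for illustration, not measured values:

```python
import random

def simulate_penetrance(n=100_000, p_carrier=0.60, p_noncarrier=0.05, seed=0):
    """Toy model of incomplete penetrance: carrying a risk allele raises
    the *probability* of displaying a trait, but does not determine it.
    All parameter values are illustrative, not real measurements."""
    random.seed(seed)
    carriers_with_trait = sum(random.random() < p_carrier for _ in range(n))
    noncarriers_with_trait = sum(random.random() < p_noncarrier for _ in range(n))
    return carriers_with_trait / n, noncarriers_with_trait / n

f_carrier, f_noncarrier = simulate_penetrance()
# Many carriers never display the trait, and some non-carriers do:
# the genotype shifts the odds, it does not dictate the phenotype.
print(f"trait frequency: carriers {f_carrier:.2f}, non-carriers {f_noncarrier:.2f}")
```

Even in this deliberately simple model, knowing an individual’s genotype tells you only which probability distribution they were drawn from, not which outcome they will display.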

A gene is part of a molecular machine: A number of observations transformed the abstraction of Darwin’s and Mendel’s hereditary factors into physical entities and molecular mechanisms (1). In 1928 Fred Griffith demonstrated that a genetic trait could be transferred from dead to living organisms – implying a degree of physical/chemical stability; subsequent observations implied that the genetic information transferred involved DNA molecules. The determination of the structure of double-stranded DNA immediately suggested how information could be stored in DNA (in the variation of bases along the length of the molecule) and how this information could be duplicated (based on the specificity of base pairing). Mutations could be understood as changes in the sequence of bases along a DNA molecule (introduced by chemicals, radiation, mistakes during replication, or molecular reorganizations associated with DNA repair mechanisms and selfish genetic elements).

But on their own, DNA molecules are inert – they have functions only within the context of a living organism (or highly artificial, that is man-made, experimental systems). The next critical step was to understand how a gene works within a biological system, that is, within an organism. This involved appreciating the molecular mechanisms (primarily proteins) involved in identifying which stretches of a particular DNA molecule were used as templates for the synthesis of RNA molecules, which in turn could be used to direct the synthesis of polypeptides (see previous post on polypeptides and proteins). In the context of the introductory biology courses I am familiar with (please let me know if I am wrong), these processes are presented in a rather deterministic context: a gene is either on or off in a particular cell type, leading to the presence or absence of a trait. Such a deterministic presentation ignores the stochastic nature of molecular-level processes (see past post: Biology education in the light of single cell/molecule studies) and the dynamic interaction networks that underlie cellular behaviors.
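The contrast between the deterministic on/off picture and the stochastic reality can be illustrated with a toy “telegraph” model of gene expression, in which a promoter flips randomly between active and inactive states and mRNAs are made only while it is active. All of the rate values below are invented for illustration, and this is a rough discrete-time sketch rather than a faithful stochastic (Gillespie) simulation:

```python
import random

def simulate_telegraph(k_on=0.05, k_off=0.2, k_make=2.0, k_decay=0.1,
                       steps=10000, dt=0.1, seed=1):
    """Discrete-time sketch of the two-state ('telegraph') model of gene
    expression: the promoter flips stochastically between OFF and ON,
    mRNAs are synthesized only while it is ON, and each mRNA decays
    independently at a constant rate. Rates are illustrative only."""
    random.seed(seed)
    on, mrna, trace = False, 0, []
    for _ in range(steps):
        if on:
            if random.random() < k_off * dt:
                on = False              # promoter switches off
            elif random.random() < k_make * dt:
                mrna += 1               # make one mRNA while on
        else:
            if random.random() < k_on * dt:
                on = True               # promoter switches on
        # each existing mRNA decays independently this time step
        mrna -= sum(1 for _ in range(mrna) if random.random() < k_decay * dt)
        trace.append(mrna)
    return trace

trace = simulate_telegraph()
# Even with fixed rate constants, the mRNA count fluctuates (bursts)
# rather than sitting at a single deterministic value.
print(min(trace), max(trace))
```

Running two “identical” cells (different seeds) through this model yields different expression histories – the point being that identical genotypes and regulatory states do not guarantee identical molecular behavior.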

But our level of resolution is changing rapidly (2). For a number of practical reasons, when the human genome was first sequenced, the identification of polypeptide-encoding genes was based on recognizing “open reading frames” (ORFs) encoding polypeptides of > 100 amino acids in length (> 300 base long coding sequences). The increasing sensitivity of mass spectrometry-based proteomic studies reveals that smaller ORFs (smORFs) are present and can lead to the synthesis of short (< 50 amino acid long) polypeptides (Chugunova et al., 2017; Couso, 2015). Typically an ORF was considered a single entity – basically one gene, one ORF, one polypeptide (3). A recent, rather surprising discovery is what are known as “alternative ORFs” or altORFs: regions of an RNA molecule that use alternative reading frames to encode small polypeptides. Such altORFs can be located upstream, downstream, or within the previously identified conventional ORF (see Samandi et al., 2017). The implication, particularly for the analysis of how variations in genes link to traits, is that a change – a mutation, or even the experimental deletion of a gene, a common approach in a range of experimental studies – can do much more than previously presumed: not only is the targeted ORF affected, but various altORFs can also be modified.
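The notion that a single stretch of DNA can harbor distinct, overlapping reading frames is easy to demonstrate computationally. Below is a deliberately simplified six-frame ORF scan in Python; the toy sequence is invented for illustration, and real gene-annotation pipelines are far more elaborate:

```python
# Simplified sketch: scan a DNA sequence for open reading frames (ORFs)
# in all six reading frames (three per strand). This only illustrates why
# one stretch of DNA can harbor overlapping ORFs in different frames --
# the logic behind altORFs -- and is not a real annotation tool.

CODON_START, CODON_STOPS = "ATG", {"TAA", "TAG", "TGA"}

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq[::-1].translate(str.maketrans("ACGT", "TGCA"))

def find_orfs(seq, min_codons=2):
    """Return (strand, frame, start, end) for each ATG...stop ORF of at
    least min_codons coding codons (ATG included; end includes the stop)."""
    orfs = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for frame in range(3):
            i = frame
            while i + 3 <= len(s):
                if s[i:i+3] == CODON_START:
                    j = i + 3
                    while j + 3 <= len(s) and s[j:j+3] not in CODON_STOPS:
                        j += 3
                    # a stop was actually found, and the ORF is long enough
                    if j + 3 <= len(s) and (j - i) // 3 >= min_codons:
                        orfs.append((strand, frame, i, j + 3))
                        i = j  # skip past this ORF in this frame
                i += 3
    return orfs

# A toy sequence carrying two overlapping ORFs in different frames:
seq = "ATGCATGGCTAATTGA"
for orf in find_orfs(seq):
    print(orf)
# → ('+', 0, 0, 12)
#   ('+', 1, 4, 16)
```

The two reported ORFs overlap on the same strand but in different frames, so a single base change within the overlap could alter both encoded polypeptides at once – the altORF point made computationally.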

The situation is further complicated when the established rules for using RNAs to direct polypeptide synthesis, via the process of translation, are violated, as occurs in what is known as “repeat-associated non-ATG (RAN)” polypeptide synthesis (see Cleary and Ranum, 2017). In this situation, the normal signal for the start of RNA-directed polypeptide synthesis, an AUG codon, is subverted – other translation start sites are used, leading to the expression of underlying or embedded reading frames. This process has been found associated with a class of human genetic diseases, such as amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD), characterized by the expansion of simple (repeated) DNA sequences (see Pattamatta et al., 2018). Once they exceed a certain length, such “repeat” regions have been found to be associated with the (apparently) inappropriate transcription of RNA in both directions, that is, using both DNA strands as templates. These abnormal repeat-region RNAs are translated via the RAN process to generate six different types of toxic polypeptides.

So what are the molecular factors that control the various types of altORF transcription and translation? In the case of ALS and FTD, it appears that other genes, and the polypeptides and proteins they encode, are involved in regulating the expression of repeat-associated RNAs (Kramer et al., 2016; Cheng et al., 2018). Similar or distinct mechanisms may be involved in other neurodegenerative diseases (Cavallieri et al., 2017).

So how should all of these molecular details (and it is likely that there are more to be discovered) influence how genes are presented to students? I would argue that DNA should be presented as a substrate upon which various molecular mechanisms act; these include transcription in its various forms (directed and noisy), as well as DNA synthesis, modification, and repair. Genes are not static objects, but key parts of dynamic systems. This may be one reason that classical genetics, that is, genes presented within a simple Mendelian (gene-to-trait) framework, should be moved deeper into the curriculum, where students have the background in molecular mechanisms needed to appreciate its complexities – complexities that arise from the multiple molecular machines acting to access, modify, and use the information captured in DNA (through evolutionary processes) – thereby placing the gene in a more realistic cellular perspective (4).

Footnotes:

1. Described in greater detail in biofundamentals™

2. For this discussion, I am completely ignoring the roles of genes that encode RNAs that, as far as is currently known, do not encode polypeptides. That said, as we go on, you will see that it is possible that some such non-coding RNAs may encode small polypeptides.

3. I am ignoring the complexities associated with alternative promoter elements, introns, and the alternative and often cell-type specific regulated splicing of RNAs, to create multiple ORFs from a single gene.  

4. With apologies to Norm Pace – in case I have the handedness of the DNA molecules wrong or have exchanged Z for A or B.

Literature cited:

  • Cavallieri et al., 2017. C9ORF72 and parkinsonism: Weak link, innocent bystander, or central player in neurodegeneration? Journal of the Neurological Sciences 378, 49.
  • Cheng et al., 2018. C9ORF72 GGGGCC repeat-associated non-AUG translation is upregulated by stress through eIF2α phosphorylation. Nature Communications 9, 51.
  • Chugunova et al., 2017. Mining for small translated ORFs. Journal of Proteome Research 17, 1-11.
  • Cleary & Ranum, 2017. New developments in RAN translation: insights from multiple diseases. Current Opinion in Genetics & Development 44, 125-134.
  • Cooper et al., 2013. Where genotype is not predictive of phenotype: towards an understanding of the molecular basis of reduced penetrance in human inherited disease. Human Genetics 132, 1077-1130.
  • Couso, 2015. Finding smORFs: getting closer. Genome Biology 16, 189.
  • Darwin, 1859. On the Origin of Species. London: John Murray.
  • Gayon, 2016. From Mendel to epigenetics: History of genetics. Comptes Rendus Biologies 339, 225-230.
  • Gerbault et al., 2011. Evolution of lactase persistence: an example of human niche construction. Philosophical Transactions of the Royal Society of London B: Biological Sciences 366, 863-877.
  • Kramer et al., 2016. Spt4 selectively regulates the expression of C9orf72 sense and antisense mutant transcripts. Science 353, 708-712.
  • Lanie et al., 2004. Exploring the public understanding of basic genetic concepts. Journal of Genetic Counseling 13, 305-320.
  • Pattamatta et al., 2018. All in the Family: Repeats and ALS/FTD. Trends in Neurosciences 41, 247-250.
  • Samandi et al., 2017. Deep transcriptome annotation enables the discovery and functional characterization of cryptic small proteins. eLife 6.

Is a little science a dangerous thing?

Is the popularization of science encouraging a growing disrespect for scientific expertise? 
Do we need to reform science education so that students are better able to detect scientific BS? 

It is common wisdom that popularizing science by exposing the public to scientific ideas is an unalloyed good, bringing benefits to both those exposed and to society at large. Many such efforts are engaging and entertaining, often taking the form of compelling images with quick cuts between excited sound bites from a range of “experts.” A number of science-centered programs, such as PBS’s NOVA series, are particularly adept at and/or addicted to this style. Such presentations introduce viewers to natural wonders and often provide scientific-sounding, albeit often superficial and incomplete, explanations – they appeal to the gee-whiz and inspirational, with “mind-blowing” descriptions of how old, large, and weird the natural world appears to be. But there are darker sides to such efforts. Here I focus on one: the idea that a rigorous, realistic understanding of the scientific enterprise and its conclusions is easy to achieve – a presumption that leads to unrealistic science education standards, an inability to judge when scientific pronouncements are distorted or unsupported, and anti-scientific personal and public policy positions.

That accurate thinking about scientific topics is easy to achieve is an unspoken assumption that informs much of our educational, entertainment, and scientific research system. This idea is captured in the recent NYT best seller “Astrophysics for People in a Hurry” – an oxymoronic presumption. Is it possible for people “in a hurry” to seriously consider the observations and logic behind the conclusions of modern astrophysics? Can they understand the strengths and weaknesses of those conclusions? Is a superficial familiarity with the words used the same as understanding their meaning and possible significance? Is acceptance understanding? Does such a cavalier attitude toward science encourage unrealistic conclusions about how science works and what is known with certainty versus what remains speculation?
Are the conclusions of modern science actually easy to grasp?
The idea that introducing children to science will lead to an accurate grasp of the underlying concepts involved, their appropriate application, and their limitations is not well supported [1]; often students leave formal education with a fragile and inaccurate understanding – a lesson made explicit in Matt Schneps and Phil Sadler’s Private Universe videos. The feeling that one understands a topic, that science is in some sense easy, undermines respect for those who actually do understand it, a situation discussed in detail in Tom Nichols’ “The Death of Expertise.” Underestimating how hard it can be to accurately understand a scientific topic can lead to unrealistic science standards in schools, and often to the trivialization of science education into recognizing words rather than understanding the concepts they are meant to convey.

The fact is, scientific thinking about most topics is difficult to achieve and maintain – that is what editors, reviewers, and other scientists, who attempt to test and extend the observations of others, are for; together they keep science real and honest. Until an observation has been repeated or confirmed by others, it is best regarded as an interesting possibility rather than a scientifically established fact. Moreover, until a plausible mechanism explaining the observation has been established, it remains a serious possibility that the entire phenomenon will vanish, more or less quietly (think cold fusion). The disappearing physiological effects of “power posing” come to mind. Nevertheless, the incentives to support even disproven results can be formidable, particularly when there is money to be made and egos are on the line.

While power-posing might be helpful to some, even if physiologically useless, there are more dangerous pseudo-scientific scams out there. The gullible may buy into “raw water” (see: Raw water: promises health, delivers diarrhea), but the persistent, and in some groups growing, anti-vaccination movement continues to cause real damage to children (see Thousands of cheerleaders exposed to mumps). One can ask why professional science groups, such as the American Association for the Advancement of Science (AAAS), have not called for a boycott of NETFLIX, given that NETFLIX continues to distribute the anti-scientific, anti-vaccination program VAXXED [2]. And how do Oprah Winfrey and Donald Trump [link: Oprah Spreads Pseudoscience and Trump and the anti-vaccine movement] avoid universal ridicule for giving credence to ignorant nonsense, and for disparaging the hard-fought expertise of the biomedical community? A failure to accept well-established expertise goes a long way toward explaining the situation. Instead of an appreciation for what we do and do not know about the causes of autism (see: Genetics and Autism Risk & Autism and infection), there are desperate parents who turn to a range of “therapies” promoted by anti-experts. The tragic case of parents trying to cure autism by forcing children to drink bleach (see link) illustrates the seriousness of the situation.

So why do a large percentage of the public ignore the conclusions of disciplinary experts? I would argue that an important driver is the way that science is taught and popularized [3]. Beyond the obvious fact that a range of politicians and capitalists (in both the West and the East) actively disdain expertise that does not support their ideological or pecuniary positions [4], I would claim that the way we teach science – often focusing on facts rather than processes, largely ignoring the historical progression by which knowledge is established and the various forms of critical analysis to which scientific conclusions are subjected – combines with the way science is popularized to erode respect for disciplinary expertise. Often our education systems fail to convey how difficult it is to attain real disciplinary expertise, in particular the ability to clearly articulate where ideas and conclusions come from and what they do and do not imply. Such expertise is more than a degree; it is a record of rigorous and productive study and useful contributions, and a critical and objective state of mind. Science standards are often heavy on facts and weak on critical analyses of the ideas and observations relevant to a particular process. As Carl Sagan might say, we have failed to train students in how to critically evaluate claims, how to detect baloney (or BS, in less polite terms) [5].

In the area of popularizing scientific ideas, we have allowed hype and over-simplification to capture the flag. To quote from an article by David Berlinski [link: Godzooks], we are continuously bombarded with a range of pronouncements about new scientific observations or conclusions, and there is often a “willingness to believe what some scientists say without wondering whether what they say is true”, or even what it actually means. No longer is the in-depth, and often difficult and tentative, explanation conveyed; rather, the focus is on the flashy conclusion (independent of its plausibility). Self-proclaimed experts pontificate on topics that are often well beyond their areas of training and demonstrated proficiency – many is the physicist who speaks not only about the completely speculative multiverse, but also on free will and ethical beliefs. Complex and often irreconcilable conflicts between organisms, such as those between mother and fetus (see: War in the womb), male and female (in sexually dimorphic species), and individual liberties and social order, are ignored rather than explicitly recognized and their origins understood. At the same time, there are real pressures acting on scientific researchers (and the institutions they work for) and on the purveyors of news to exaggerate the significance and broader implications of their “stories” so as to acquire grants, academic and personal prestige, and clicks. Such distortions serve to erode respect for scientific expertise (and objectivity).

So where are the scientific referees, the individuals tasked with enforcing the rules of the game – calling a player out of bounds when they leave the playing field (their area of expertise), or calling a foul when rules are broken or bent, as in the fabrication, misreporting, suppression, or over-interpretation of data by the anti-vaccinator Wakefield? Who is responsible for maintaining the integrity of the game? For pointing out that many alternative medicine advocates are talking meaningless blather (see: On skepticism & pseudo-profundity)? Where are the referees who can show these charlatans the “red card” and eject them from the game?

Clearly there are no such referees. Instead, it is necessary to train as large a percentage of the population as possible to be their own science referees – that is, to understand how science works and to identify baloney when it is slung at them. When a science popularizer, whether for well-meaning or self-serving reasons, steps beyond their expertise, we need to call them out of bounds! And when scientists run up against the constraints of the scientific process, as appears to occur periodically with theoretical physicists and the occasional neuroscientist (see: Feuding physicists and The Soul of Science), we need to recognize the foul committed. If our educational system could help develop in students a better understanding of the rules of the scientific game, and why these rules are essential to scientific progress, perhaps we could help re-establish both an appreciation of rigorous scientific expertise and a respect for what it is that scientists struggle to do.



Footnotes and references:

  1. And is it clearly understood that they have nothing to say as to what is right or wrong?
  2. Similarly, many PBS stations broadcast pseudoscientific infomercials: for example, see Shame on PBS, Brain Scam, and Deepak Chopra’s anti-scientific Brain, Mind, Body, Connection, currently playing on my local PBS station. Holocaust deniers and slavery apologists are confronted much more aggressively.
  3.  As an example, the idea that new neurons are “born” in the adult hippocampus, up to now established orthodoxy, has recently been called into question: see Study Finds No Neurogenesis in Adult Humans’ Hippocampi
  4. Here is a particularly disturbing example: By rewriting history, Hindu nationalists lay claim to India
  5. Pennycook, G., J. A. Cheyne, N. Barr, D. J. Koehler and J. A. Fugelsang (2015). “On the reception and detection of pseudo-profound bullshit.” Judgment and Decision Making 10(6): 549.

Reverse Dunning-Kruger effects and science education

The Dunning-Kruger (DK) effect is the well-established phenomenon that people tend to overestimate their understanding of a particular topic or their skill at a particular task, often to a dramatic degree [link][link]. We see examples of the DK effect throughout society; the current administration (unfortunately) and the nutritional supplements / homeopathy section of Whole Foods spring to mind. But there is a less well-recognized “reverse DK” effect, namely the tendency of instructors, and a range of other public communicators, to overestimate what the people they are talking to are prepared to understand, appreciate, and accurately apply. The efforts of science communicators and instructors can be entertaining, but the failure to recognize and address the reverse DK effect results in ineffective educational efforts. These efforts can themselves help generate the illusion of understanding in students and the broader public (discussed here). While a confused understanding of the intricacies of cosmology or particle physics can be relatively harmless in its social and personal implications, similar misunderstandings become personally and publicly significant when topics such as vaccination, alternative medical treatments, and climate change are in play.

There are two synergistic aspects to the reverse DK effect that directly impact science instruction: the need to understand what one’s audience does not understand, together with the need to clearly articulate the conceptual underpinnings of the subject to be taught. This is in part because modern science has, at its core, become increasingly counter-intuitive over the last century or so, a situation that can cause serious confusions that educators must address directly and explicitly. The first reverse DK effect involves the extent to which the instructor (and by implication the course and textbook designer) has an accurate appreciation of what students think or think they know, what ideas they have previously been exposed to, and what they actually understand about the implications of those ideas. Are they prepared to learn a subject, or does the instructor first have to acknowledge and address conceptual confusions and build or rebuild base concepts? While the best way to discover what students think is arguably a Socratic discussion, this only rarely occurs, for a range of practical reasons. In its place, a number of concept inventory-type testing instruments have been generated to reveal whether various pre-identified common confusions exist in students’ thinking. Knowing the results of such assessments BEFORE instruction can help the instructor customize the learning environment and the content to be presented, and decide whether to give students the space to work with these ideas to develop a more accurate and nuanced understanding of a topic. Of course, this implies that instructors have the flexibility to adjust the pace and focus of their classroom activities. Do they take the time needed to address student issues, or do they feel pressured to plow through the prescribed course content, come hell, high water, or cascading student befuddlement?

A complementary aspect of the reverse DK effect, well illustrated in the “why magnets attract” interview with the physicist Richard Feynman, is that the instructor, course designer, or textbook author(s) needs to have a deep and accurate appreciation of the underlying core knowledge necessary to understand the topic they are teaching. Such a robust conceptual understanding makes it possible to convey the complexities involved in a particular process, and explicitly values appreciating a topic over memorizing it. It focuses on the general, rather than the idiosyncratic. A classic example from many an introductory biology course is the difference between expecting students to remember the steps in glycolysis or the Krebs cycle reaction system, as opposed to the general principles that underlie the non-equilibrium reaction networks involved in all biological functions – networks based on coupled chemical reactions and governed by the behaviors of thermodynamically favorable and unfavorable reactions. Without an explicit discussion of these topics, all too often students are required to memorize names without understanding the underlying rationale driving the processes involved, that is, why the system behaves as it does. Instructors also offer false “rubber band” analogies or heuristics to explain complex phenomena (see Feynman video, 6:18 minutes in). A similar situation occurs when considering how molecules come to associate with and dissociate from one another, for example in the process of regulating gene expression or repairing mutations in DNA. Most textbooks simply do not discuss the physicochemical processes involved in binding specificity and in association and dissociation rates, such as the energy changes associated with molecular interactions and thermal collisions (don’t believe me? look for yourself!).
But these factors are essential for a student to understand the dynamics of gene expression [link], as well as the specificity of modern methods of genetic engineering, such as restriction enzymes, the polymerase chain reaction, and CRISPR-Cas9-mediated mutagenesis. By focusing on the underlying processes involved, we can avoid their trivialization and enable students to apply basic principles to a broad range of situations. We can understand exactly why CRISPR-Cas9-directed mutagenesis can be targeted to a single site within a multibillion-base-pair genome.
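That last claim can be backed by a simple back-of-the-envelope calculation. Assuming, as a deliberate idealization, that the four bases occur randomly and equiprobably across the genome (real genomes are not random, so this is an estimate rather than a design tool):

```python
# Rough sketch: why a 20-nucleotide CRISPR guide sequence is statistically
# expected to be unique in a human-sized genome. Assumes random, equiprobable
# base composition -- an idealization, not a guide-design calculation.

guide_length = 20                      # nucleotides in a Cas9 guide sequence
genome_size = 3.2e9                    # haploid human genome, base pairs
strands = 2                            # a match could occur on either strand

p_match_per_site = (1 / 4) ** guide_length   # chance a given site matches
expected_matches = strands * genome_size * p_match_per_site

print(f"P(match at one site) = {p_match_per_site:.3g}")
print(f"Expected chance matches genome-wide = {expected_matches:.3g}")
# The expected number of spurious perfect matches is well below 1, which is
# why a single ~20-base target sequence can specify one site in ~3 billion.
```

The same logic, run in reverse, shows why a 6-base restriction enzyme site recurs roughly every 4^6 ≈ 4,096 base pairs: specificity scales exponentially with recognition-sequence length.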

Of course, as in the case of recognizing and responding to student misunderstandings and knowledge gaps, a thoughtful consideration of underlying processes takes course time, time that trades the development of a working understanding of core processes and principles for broader “coverage” of frequently disconnected facts, the memorization and regurgitation of which has been privileged over understanding why those facts are worth knowing. If our goal is for students to emerge from a course with an accurate understanding of the basic processes involved rather than a superficial familiarity with a plethora of unrelated facts, however, a Socratic interaction with the topic is essential. What assumptions are being made, where do they come from, how do they constrain the system, and what are their implications?  Do we understand why the system behaves the way it does? In this light, it is a serious educational mystery that many molecular biology / biochemistry curricula fail to introduce students to the range of selective and non-selective evolutionary mechanisms (including social and sexual selection – see link), that is, the processes that have shaped modern organisms.

Both aspects of the reverse DK effect impact educational outcomes. Overcoming the reverse DK effect depends on educational institutions committing to effective and engaging course design, measured in terms of retention, time to degree, and a robust inquiry into actual student learning. Such an institutional dedication to effective course design and delivery is necessary to empower instructors and course designers. These individuals bring a deep understanding of the topics taught and their conceptual foundations and historic development to their students AND must have the flexibility and authority to alter the pace (and design) of a course or a curriculum when they discover that their students lack the pre-existing expertise necessary for learning or that the course materials (textbooks) do not present or emphasize necessary ideas. Unfortunately, all too often instructors, particularly in introductory level college science courses, are not the masters of their ships; that is, they are not rewarded for generating more effective course materials. An emphasis on course “coverage” over learning, whether through peer-pressure, institutional apathy, or both, generates unnecessary obstacles to both student engagement and content mastery.  To reverse the effects of the reverse DK effect, we need to encourage instructors, course designers, and departments to see the presentation of core disciplinary observations and concepts as the intellectually challenging and valuable endeavor that it is. In its absence, there are serious (and growing) pressures to trivialize or obscure the educational experience – leading to the socially- and personally-damaging growth of fake knowledge.