Avoiding unrecognized racist implications arising from teaching genetics

It is common to think of teaching as socially and politically beneficial, or at least benign, but Donovan et al. (2019, "Toward a more humane genetics education," Science Education 103: 529-560) (1) raise the interesting possibility, supported by various forms of analysis and a thorough review of the literature, that conventional approaches to teaching genetics can exacerbate students' racialist ideas. A focus on genetic diseases associated with particular population groups, for example Tay-Sachs disease within Eastern European Jewish populations or sickle cell anemia within African populations, can result in more racialist and racist perspectives among students.

What is meant by racialist? Basically, it is an essentialist perspective: a person is an exemplar of the essence of a group, and all members of a particular group "carry" that essence, an essence that defines them as different and distinct from members of other groups. Such an essence may reflect a culture or, in our more genetic age, a genome, that is, the versions of the genes a person possesses. In a sense, the essence is treated as more real than the individual, an idea that contradicts the core reality of biological systems, as outlined in works by Mayr (2,3), a mistake he termed typological thinking.

Donovan et al. go on to present evidence that exposing students to lessons that stress the genomic similarities between humans can help. That "any two humans share 99.9% of their DNA, which means that 0.1% of human DNA varies between individuals. Studies find that, on average, 4.3% of genetic variability in humans (4.3% of the 0.1% of the variable portion of human DNA) occurs between the continental populations commonly associated with US census racial groups (i.e., Africa, Asia, Pacific Islands, the Americas, and Europe). In contrast, 95.7% of human genetic variation (95.7% of the 0.1% of variable portion of human DNA) occurs between individuals within those same groups" (italics added). And that "there is more variability in skull shape, facial structure, and blood types within racially defined populations … than there is between them." Lessons that emphasized the genomic similarities between people and the dissimilarities among individuals within groups appeared effective in reducing racialist ideation; they can help dispel racist beliefs while presenting the most scientifically accurate information available.
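
For readers who want to see how these nested percentages combine, here is a minimal back-of-the-envelope calculation; the numbers are simply those quoted above from Donovan et al., and the script is only illustrative arithmetic, not part of their analysis:

```python
# Back-of-the-envelope arithmetic using the figures quoted above (Donovan et al., 2019).
variable_fraction = 0.001    # ~0.1% of the genome varies between any two individuals
between_group_share = 0.043  # ~4.3% of that variation lies between continental groups
within_group_share = 0.957   # ~95.7% lies between individuals within those same groups

between_groups = variable_fraction * between_group_share
within_groups = variable_fraction * within_group_share

print(f"fraction of the genome differing between groups: {between_groups:.6f} ({between_groups:.4%})")
print(f"fraction of the genome differing within groups:  {within_groups:.6f} ({within_groups:.4%})")
```

That is, the average between-group component amounts to roughly 4 parts in 100,000 of the genome, more than twenty-fold smaller than the variation found among individuals within any one group.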

This is of particular importance given the dangers of genetic essentialism, that is, the idea that we are our genomes and that our genomes determine who (and what) we are, a pernicious ideology that even the co-discoverer of DNA's structure, James Watson, has fallen prey to. One troubling consequence of such thinking is illustrated in John Warner's critique of a recent genomic analysis of educational attainment and cognitive performance (4).

An interesting aspect of this work is that it raises the question of where, within a curriculum, genetics should go. Which aspects of the complex molecular-level interaction networks that connect genotype with phenotype need to be included in order to flesh out the overly simplified Mendelian view (pure dominant and recessive alleles, monogenic traits, and unlinked genes) so often presented? The question is particularly relevant given the growing complexity of what genes are and how they act (5,6). Perhaps serious consideration of genetic systems would be better left for later in a curriculum. At the very least, this work points out the molecular and genomic contexts that should be included so as to minimize inadvertent support for racialist predilections and predispositions.

modified from F1000 post

References

  1. Donovan, B. M., R. Semmens, P. Keck, E. Brimhall, K. Busch, M. Weindling, A. Duncan, M. Stuhlsatz, Z. B. Bracey and M. Bloom (2019). “Toward a more humane genetics education: Learning about the social and quantitative complexities of human genetic variation research could reduce racial bias in adolescent and adult populations.” Science Education 103(3): 529-560.
  2. Mayr, E. (1985). The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Belknap Press of Harvard University Press. ISBN 9780674364462.
  3. Mayr, E. (1994). "Typological versus population thinking." In E. Sober (ed.), Conceptual Issues in Evolutionary Biology, pp. 157-160. MIT Press, Bradford Books.
  4. Warner, J. (2018). "Why we shouldn't embrace the genetics of education." Inside Higher Ed blog, July 26, 2018. Available online (accessed Aug 22, 2019).
  5. "Genes – way weirder than you thought." Bioliteracy blog, July 9, 2018.
  6. Portin, P. & A. Wilkins (2017). "The evolving definition of the term 'gene'." Genetics 205: 1353-1364.

Remembering the past and recognizing the limits of science …

A recent article in the Guardian reports on a debate at University College London (1) on whether to rename buildings because the people honored harbored odious ideological and political positions. Similar debates and decisions, in some cases involving unacceptable and abusive behaviors rather than ideological positions, have occurred at a number of institutions (see Calhoun at Yale, Sackler in NYC, James Watson at Cold Spring Harbor, Tim Hunt at the MRC, and sexual predators within the National Academy of Sciences). These debates raise important and sometimes troubling issues.

When a building is named after a scientist, it is generally in order to honor that person’s scientific contributions. The scientist’s ideological opinions are rarely considered explicitly, although they may influence the decision at the time.  In general, scientific contributions are timeless in that they represent important steps in the evolution of a discipline, often by establishing a key observation, idea, or conceptual framework upon which subsequent progress is based – they are historically important.  In this sense, whether a scientific contribution was correct (as we currently understand the natural world) is less critical than what that contribution led to. The contribution marks a milestone or a turning point in a discipline, understanding that the efforts of many underlie disciplinary progress and that those contributors made it possible for others to “see further.” (2)

Since science is not about recognizing or establishing a single unchanging capital-T-Truth, but rather about developing an increasingly accurate model for how the world works, it is constantly evolving and open to revision.  Working scientists are not particularly upset when new observations lead to revisions to or the abandonment of ideas or the addition of new terms to equations.(3)

Compare that to the situation in the ideological, political, or religious realms. A new translation or interpretation of a sacred text can provoke schism and remarkably violent responses between respective groups of believers; often, the closer the groups are to one another, the more horrific the violence that emerges. In contrast, over the long term, scientific schools of thought resolve, often merging with one another to form unified disciplines. From my own perspective, and notwithstanding the temptation to generate new sub-disciplines (in part in response to funding factors), all of the life sciences have collapsed into a unified evolutionary/molecular framework. All scientific disciplines tend to become, over time, consistent with, although not necessarily deducible from, one another, particularly when the discipline respects and retains connections to the real (observable) world.(4) How different from the political and ideological.

The historical progression of scientific ideas is dramatically different from that of political, religious, or social mores. No matter what some might claim, the modern quantum mechanical view of the atom bears little meaningful similarity to the ideas of the cohort that included Leucippus and Democritus. There is progress in science. In contrast, various belief systems rarely abandon their basic premises. A politically right- or left-wing ideologue might well find kindred spirits in the ancient world. There were genocidal racists, theists, and nationalists in the past and there are genocidal racists, theists, and nationalists now. There were (limited) democracies then, as there are (limited) democracies now; monarchical, oligarchical, and dictatorial political systems then and now; theistic religions then and now. Absolutist ideals of innate human rights, then as now, are routinely sacrificed for a range of mostly self-serving or politically expedient reasons. Advocates of rule by the people repeatedly install repressive dictatorships. The authors of the United States Constitution declared the sacredness of human rights and then legitimized slavery. "The Bible … posits universal brotherhood, then tells Israel to kill all the Amorites" (Phil Christman). The eugenics movement is a good example: for the promise of a genetically perfect future, existing people are treated inhumanely, just another version of apocalyptic (ends-justify-the-means) thinking.

Ignoring the simpler case of not honoring criminals (sexual and otherwise), most calls for removing names from buildings are based on the odious ideological positions espoused by the honoree, typically some version of racist, nationalist, or sexist ideology. The complication comes from the fact that people are complex, shaped by the context within which they grew up, their personal histories, the dominant ideological milieu they experienced, and their reactions to it. But these ideological positions are not scientific, although a person's scientific worldview and their ideological positions may be intertwined. The honoree may claim that science "says" something unambiguous and unarguable, often in an attempt to force others to acquiesce to their perspective. A modern example would be arguments about whether climate is changing due to anthropogenic factors, a scientific topic, and what to do about it, an economic, political, and perhaps ideological question.(5)

So what to do?  To me, the answer seems reasonably obvious – assuming that the person’s contribution was significant enough, we should leave the name in place and use the controversy to consider why they held their objectionable beliefs and more explicitly why they were wrong to claim scientific justification for their ideological (racist / nationalist / sexist / socially prejudiced) positions.(6)  Consider explicitly why an archeologist (Flinders Petrie), a naturalist (Francis Galton), a statistician (Karl Pearson), and an advocate for women’s reproductive rights (Marie Stopes) might all support the non-scientific ideology of eugenics and forced sterilization.  We can use such situations as a framework within which to delineate the boundaries between the scientific and the ideological. 

Understanding this distinction is critical and is one of the primary justifications for why people not necessarily interested in science or science-based careers are often required to take science courses. Yet all too often these courses fail to address the limits of science, the difference between scientific conclusions and political or ideological opinions, and the implications of scientific models. I would argue that unless students (and citizens) come to understand what constitutes a scientific idea or conclusion and what reflects a political or ideological position couched in scientific or pseudo-scientific terms, they are not learning what they need to know about science or its place in society. Treating science as a proxy for Truth writ large is deeply misguided. It is much more important to understand how science works than it is to remember the number of phyla or the names of the amino acids, to be able to calculate the pH of a solution, or to understand the processes going on at the center of a galaxy or the details of a black hole's behavior. While sometimes harmless, misunderstanding science and how it is used socially can have traumatic consequences, such as drawing harmful conclusions about individuals from statistical generalizations about populations, avoidable deaths from measles, and the forced "eugenic" sterilization of people deemed defective. We should seek out and embrace opportunities to teach about these issues, even if it means we name buildings after imperfect people.

footnotes:

  1. The location of some of my post-doc work.
  2. In the words of Isaac Newton, “If I have seen further than others, it is by standing upon the shoulders of giants.”
  3.  Unless, of course, the ideas and equations being revised or abandoned are one’s own. 
  4.  Perhaps the most striking exception occurs in physics on the subjects of quantum mechanics and relativity, but as I am not a physicist, I am not sure about that. 
  5.  Perhaps people are “meant” to go extinct. 
  6. The situation is rather different outside of science, because the reality of progress is more problematic and past battles continue to be refought. Given the history of Reconstruction and the Confederate "Lost Cause" movement [see PBS's Reconstruction] following the American Civil War, monuments to defenders of slavery, no matter how admirable they may have been in terms of personal bravery and such, reek of implied violence, subjugation, and repression, particularly when the person honored went on to found an institution dedicated to racial hatred and violent intimidation. There would seem to be little doubt that a monument in honor of a Nazi needs to be eliminated and replaced by one to their victims or to those who defeated them.

Is it possible to teach evolutionary biology “sensitively”?

Michael Reiss, a professor of science education at University College London and an Anglican priest, suggests that "we need to rethink the way we teach evolution," largely because conventional approaches can be unduly confrontational and "force religious children to choose between their faith and evolution" or result in students who "refuse to engage with a lesson." He suggests that a better strategy would be akin to those used to teach a range of "sensitive" subjects "such as sex, pornography, ethnicity, religion, death studies, terrorism, and others" and could "help some students to consider evolution as a possibility who would otherwise not do so." [link to his original essay and a previous post on teaching evolution: Go ahead and teach the controversy]

There is no doubt that an effective teacher attempts to present materials sensitively; it is the rare person who will listen to someone who “teaches” ideas in a hostile, alienating, or condescending manner. That said, it can be difficult to avoid the disturbing implications of scientific ideas, implications that can be a barrier to their acceptance. The scientific conclusion that males and females are different but basically the same can upset people on various sides of the theo-political spectrum. 

In point of fact, an effective teacher, a teacher who encourages students to question their long-held, or perhaps better put, familial or community beliefs, can cause serious social push-back: Trouble with a capital T. It is difficult to imagine a more effective teacher than Socrates (~470-399 BCE). Socrates "was found guilty of 'impiety' and 'corrupting the young', sentenced to death" in part because he was an effective teacher (see Socrates was guilty as charged). In a religious and political context, challenging accepted Truths (again with a capital T) can be a crime. In Socrates' case, "Athenians probably genuinely felt that undesirables in their midst had offended Zeus and his fellow deities," and that "Socrates, an unconventional thinker who questioned the legitimacy and authority of many of the accepted gods, fitted that bill."

So we need to ask of scientists and science instructors: does the presentation of a scientific, that is, a naturalistic and non-supernatural, perspective in and of itself represent an insensitivity to those with a supernatural belief system? Here it is worth noting a point made by the philosopher John Gray: such systems extend beyond those based on a belief in god(s); they include those of people who believe, with apocalyptic certainty, in any of a number of Truths, ranging from the triumph of a master race and the forced sterilization of the unfit to the dictatorship of the proletariat and history's end in a glorious capitalist and technological utopia. Is a science or science instruction that is "sensitive" to, that is, uncritical of and not upsetting to, those who hold such beliefs even possible?

My initial impression is that one's answer to this question is likely to be determined by whether one considers science a path to Truth, with a purposeful capital T, or holds that the goal of scientists is to build a working understanding of the world around and within us. Working scientists, and particularly biologists, who must daily confront the implications of apparently unintelligently designed organisms (a consequence of the way evolution works), are well aware that absolute certainty is counterproductive. Nevertheless, the proven explanatory and technological power of the scientific enterprise cannot help but reinforce the strong impression that there is some deep link between scientific ideas and the way the world really works. And while some scientists have advocated unscientific speculations (think multiverses and cosmic consciousness), the truth, with a small t, of scientific thinking is all around us.

Photograph of the Milky Way by Tim Carl photography, used by permission 

A science-based appreciation of the unimaginable size and age of the universe, taken together with compelling evidence for the relatively recent appearance of humans (Homo sapiens, from their metazoan, vertebrate, tetrapod, mammalian, and primate ancestors), cannot help but impact our thinking as to our significance in the grand scheme of things (assuming that there is such a, possibly ineffable, scheme)(1). The demonstrably random processes of mutation and the generally ruthless logic by which organisms survive, reproduce, and evolve can lead even the most optimistic to question whether existence has any real meaning.

Consider, as an example, the potential implications of the progress being made in terms of computer-based artificial intelligence, together with advances in our understanding of the molecular and cellular connection networks that underlie human consciousness and self-consciousness. It is a small step to conclude, implicitly or explicitly, that humans (and all other organisms with a nervous system) are “just” wet machines that can (and perhaps should) be controlled and manipulated. The premise, the “self-evident truth”, that humans should be valued in and of themselves, and that their rights should be respected (2) is eroded by the ability of machines to perform what were previously thought to be exclusively human behaviors. 

Humans and their societies have, after all, been around for only a few tens of thousands of years. During this time, human social organization has passed from small wandering bands, shaped by evolutionary kin and group selection processes, to a range of social systems: more or less functional democracies, pseudo-democracies (including our own growing plutocracy), dictatorships (some religion-based), and totalitarian police states. Whether humans have a long-term future (compared to the millions of years that dinosaurs dominated life on Earth) remains to be seen, although we can be reasonably sure that the Earth, and many of its non-human inhabitants, will continue to exist and evolve for millions to billions of years, at least until the Sun reaches the end of its life.

So how do we teach scientific conclusions and their empirical foundations, which combine to argue that science represents how the world really works, without upsetting the most religiously and politically fanatical among us, those who most vehemently reject scientific thinking because they are the most threatened by its apparently unavoidable implications? The answer is open to debate, but to my mind it involves teaching students (and encouraging the public) to distinguish empirically based, and so inherently limited, observations, and the logical, coherent, and testable scientific models they give rise to, from unquestionable TRUTH- and revelation-based belief systems. Perhaps we need to focus explicitly on the value of science rather than its "Truth"; to reinforce what science is ultimately for and what justifies society's support for it, namely to help reduce human suffering and (where it makes sense) to enhance the human experience, goals anchored in the perhaps logically unjustifiable, but nevertheless essential, acceptance of the inherent value of each person.

  1. Apologies to “Good Omens”
  2. For example, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.” 

Science “awareness” versus “literacy” and why it matters, politically.

"Montaigne concludes, like Socrates, that ignorance aware of itself is the only true knowledge" – from Forbidden Knowledge by Roger Shattuck

A month or so ago we were treated to a flurry of media excitement surrounding the release of the latest Pew Research survey on Americans' scientific knowledge. The results of such surveys have been interpreted to mean many things. As an example, the title of Maggie Koerth-Baker's short essay for the 538 web site was a surprising "Americans are Smart about Science," a conclusion not universally accepted (see also). Koerth-Baker was taken by the observation that the survey's results support a conclusion that Americans display "pretty decent scientific literacy." Other studies (see Drummond & Fischhoff, 2017) report that one's ability to recognize scientifically established statements does not necessarily correlate with the acceptance of science policies; on average, climate change "deniers" scored as well on the survey as "acceptors." In this light, it is worth noting that science-based policy pronouncements generally involve projections of what the future will bring, rather than what exactly is happening now. Perhaps more surprisingly, greater "science literacy" correlates with more polarized beliefs, which, given the tentative nature of scientific understanding (which is about practical knowledge rather than truth per se), suggests that the surveys measure something other than scientific literacy. While I have written on the subject before, it seems worth revisiting, particularly since I have since read Rosling's Factfulness, thought more about the apocalyptic bases of many secular and religious movements, described in detail by the historian Norman Cohn and the philosopher John Gray, and gained a few, I hope, potentially useful insights on the matter.

First, to understand what the survey reports, we should take a look at the questions asked and decide what the ability to choose correctly implies: scientific literacy, as generally claimed, or something simpler, perhaps familiarity. It is worth recognizing that all such instruments, particularly those that are multiple choice in format, are proxies for a more detailed, time-consuming, and costly Socratic interrogation designed to probe the depth of a person's knowledge and understanding. In the Pew (and most other such) surveys, choosing the correct response implies familiarity with various topics impacted by scientific observations. It does not necessarily reveal whether the respondent understands where the ideas come from, why they are the preferred response, or exactly where and when they are relevant (2). So "getting the questions correct" demonstrates a familiarity with the language of science and with some basic observations and principles, but not the limits of the respondent's understanding.

Take, for example, the question on antibiotic resistance (→). The correct answer, "it can lead to antibiotic-resistant bacteria," does not reveal whether the respondent understands the evolutionary (selective) basis for this effect, that is, random mutagenesis (or horizontal gene transfer) together with antibiotic-resistance-based survival. It is imaginable that a fundamentalist religious creationist could select the correct answer based on plausible, non-evolutionary mechanisms (3). In a different light, the question on oil, natural gas, and coal (↓) could be seen as ambiguous: aren't these all derived from long-dead organisms, and so couldn't they reasonably be termed biofuels?

While there are issues with almost any such multiple-choice survey instrument, surely we would agree that choosing the "correct" answers to these 11 questions reflects some awareness of current scientific ideas and terminologies. Certainly knowing (I think) that a base can neutralize an acid leaves unresolved how exactly the two interact, that is, what chemical reaction is going on, not to mention what is going on in the stomach and upper gastrointestinal tract of a human being. In this case, selecting the correct answer is not likely to conflict with one's view of anthropogenic effects on climate or sex versus gender, or to reveal whether one has an up-to-date understanding of the mechanisms of immunity and brain development, or of the social dynamics behind vaccination, specifically the responsibilities that members of a social group have to one another.

But perhaps a more relevant point concerns our understanding of how science deals with predictions, because at the end of the day it is these predictions that may directly impact people in personal, political, and economic ways.

We can, I think, usefully divide scientific predictions into two general classes: predictions about a system that can be immediately confirmed or dismissed through direct experiment and observation, and those that cannot. The immediate (accessible) type of prediction is the standard model of scientific hypothesis testing, an approach that reveals errors or omissions in one's understanding of a system or process. Generally these are the empirical drivers of theoretical understanding (although perhaps not in some areas of physics). The second type of prediction is inherently more problematic, as it deals with the currently unobservable future (or the distant past). We use our current understanding of the system, and various assumptions, to build a predictive model of the system's future behavior (or past events), and then wait to see whether the predictions are confirmed. In the case of models about the past, we often have to wait for a fortuitous discovery, for example a fossil that supports or disproves our model.

It’s tough to make predictions, especially about the future
– Yogi Berra (apparently)

Anthropogenic effects on climate are an example of the second type of prediction. No matter our level of confidence, we cannot be completely sure our model is accurate until the future arrives. Nevertheless, there is a marked human tendency to take predictions, typically about the end of the world or the future of the stock market, very seriously and to make urgent decisions based upon them. In many cases, these predictions impact only ourselves; they are personal. In the case of climate change, however, they are likely to have disruptive effects that impact many. Part of the concern about such predictions is that responses to them have immediate impacts; they produce social and economic winners and losers whether or not the predictions are confirmed by events. As Hans Rosling points out in his book Factfulness, there is an urge to take urgent, drastic, and proactive actions in the face of perceived (predicted) threats. These recurrent and urgent calls to action (not unlike repeated, and unfulfilled, predictions of the apocalypse) can lead to fatigue and the eventual dismissal of important warnings, warnings that should influence, albeit perhaps not dictate, ecological, economic, and political policy decisions.

Footnotes and literature cited:
1. As a Pew Biomedical Scholar, I feel some peripheral responsibility for the impact of these reports

2. As pointed out in a forthcoming review, the quality of the distractors, that is the incorrect choices, can dramatically impact the conclusions derived from such instruments. 

3.  I won’t say intelligent design creationist, as that makes no sense. Organisms are clearly not intelligently designed, as anyone familiar with their workings can attest

Drummond, C. & B. Fischhoff (2017). “Individuals with greater science literacy and education have more polarized beliefs on controversial science topics.” Proceedings of the National Academy of Sciences 114: 9587-9592.


Please note: given the move from PLoS, some of the links in the posts may be broken; some minor editing is in process. All posts are by Mike Klymkowsky unless otherwise noted.

Gradients and Molecular Switches (a biofundamentalist perspective)

Embryogenesis is based on a framework of social (cell-cell) interactions, initial and early asymmetries, and cascades of cell-cell signaling and gene regulatory networks (DEVO posts one, two, & three). The result is the generation of embryonic axes, germ layers (ectoderm, mesoderm, endoderm), various organs and tissues (brains, limbs, kidneys, hearts, and such), their patterning, and their coordination into a functioning organism. It is well established that all animals share a common ancestor (hundreds of millions of years ago) and that a number of molecular  modules were already present in this common ancestor.  

At the same time, evolutionary processes are, and need to be, flexible enough to generate the great diversity of organisms, with their various adaptations to particular life-styles. The extent of both conservation and flexibility (new genes, new mechanisms) in developmental systems is, however, surprising. Perhaps the most striking evidence for the depth of this conservation was supplied by the discovery of the organization of the Hox gene cluster in the fruit fly Drosophila and in the mouse (and other vertebrates); in both, the genes are arranged in the genome, and expressed, in a common pattern. But as noted by Denis Duboule (2007), Hox gene organization is often presented in textbooks in a distorted manner (→). The Hox clusters of vertebrates are compact, but they are split, disorganized, and even "atomized" in other types of organisms. Similarly, processes that might appear foundational, such as the role of the Bicoid gradient in the early fruit fly embryo (a standard topic in developmental biology textbooks), are in fact restricted to a small subset of flies (Stauber et al., 1999). New genes can be generated through well-defined processes, such as gene duplication and divergence, or they can arise de novo out of sequence noise (Carvunis et al., 2012; Zhao et al., 2014). Comparative genomic analyses can reveal the origins of specific adaptations (see Stauber et al., 1999). The result is that organisms as closely related to each other as the great apes (including humans) have significant species-specific genetic differences (see Florio et al., 2018; McLean et al., 2011; Sassa, 2013 and references therein) as well as common molecular and cellular mechanisms.

A universal (?) feature of developing systems – gradients and non-linear responses: There is a predilection to find (and even more to teach) simple mechanisms that attempt to explain everything (witness the distortion of the Hox cluster, above) – a form of physics “theory of everything” envy.  But the historic nature, evolutionary plasticity, and need for regulatory robustness generally lead to complex and idiosyncratic responses in biological systems.  Biological systems are not “intelligently designed” but rather cobbled together over time through noise (mutation) and selection (Jacob, 1977). 
That said, a common (universal?) developmental process appears to be the transformation of asymmetries into unambiguous cell fate decisions. Such responses are based on threshold events controlled by a range of molecular behaviors, leading to discrete gene expression states. We can approach the question of how such decisions are made from both an abstract and a concrete perspective. Here I outline my initial approach; I plan to introduce organism-specific details as needed. I start with the response to a signaling gradient, such as that found in many developmental systems, including the vertebrate spinal cord (top image, Briscoe and Small, 2015) and the early Drosophila embryo (Lipshitz, 2009)(→).

We begin with a gradient in the concentration of a "regulatory molecule" (the regulator). The shape of the gradient depends upon the sites and rates of synthesis, transport away from these sites, and turnover (degradation and/or inactivation). We assume, for simplicity's sake, that the regulator directly controls the expression of its target gene(s). Such a molecule binds in a sequence-specific manner to regulatory sites (there could be a few or hundreds) and leads to the activation (or inhibition) of DNA-dependent RNA polymerase (the polymerase), which generates RNA molecules complementary to one strand of the DNA. The binding of both the regulator and the polymerase are stochastic processes, driven by diffusion, molecular collisions, and binding interactions.(1)
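
To make this concrete, here is a minimal sketch of the textbook synthesis-diffusion-degradation picture of a gradient: production at a localized source, diffusion away from it, and uniform turnover give a steady-state exponential profile whose decay length is set by the ratio of diffusion to turnover. All parameter values below are purely illustrative and not taken from any particular system:

```python
import numpy as np

# Steady-state synthesis-diffusion-degradation sketch of a regulator gradient.
# Production at x = 0, diffusion (D) and uniform turnover (k) give
# C(x) = C0 * exp(-x / L), with decay length L = sqrt(D / k).
D = 1.0     # diffusion coefficient (um^2 / s), illustrative
k = 0.01    # turnover rate (1 / s), illustrative
C0 = 100.0  # concentration at the source, arbitrary units

decay_length = np.sqrt(D / k)        # ~10 um with these numbers
x = np.linspace(0.0, 50.0, 11)       # positions across the tissue (um)
C = C0 * np.exp(-x / decay_length)   # steady-state concentration profile

for xi, ci in zip(x, C):
    print(f"x = {xi:5.1f} um   [regulator] = {ci:6.2f}")
```

Faster turnover or slower diffusion steepens the gradient (shorter decay length); slower turnover or faster diffusion flattens it.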

Let us now consider the response of the target gene(s) as a function of cell (nuclear) position within the gradient. We might (naively) expect the rate of target gene expression to be a simple function of regulator concentration: for an activator, where the gradient is high, target gene expression would be high; where the gradient concentration is low, target gene expression would be low; and in between, target gene expression would be proportional to regulator concentration. But generally we find something different: the expression of target genes is non-uniform, that is, there are thresholds in the gradient. On one side of the threshold concentration the target gene is completely off (not expressed), while on the other side it is fully on (maximally expressed). The target gene responds as if it is controlled by an on-off switch. How do we understand the molecular basis for this behavior?
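
One common way to represent such a threshold is a Hill function, in which cooperative binding of the regulator (a Hill coefficient n greater than 1) converts a graded input into a nearly all-or-none output. The sketch below, with illustrative parameters of my own choosing, contrasts a proportional (n = 1) response with a switch-like (n = 8) one:

```python
def hill_response(regulator, K=1.0, n=8):
    """Fraction of maximal target-gene expression for a given regulator concentration.
    K is the threshold (half-maximal) concentration; n is the Hill coefficient.
    n = 1 gives a graded, nearly proportional response; large n gives an on/off switch."""
    return regulator**n / (K**n + regulator**n)

for c in (0.25, 0.5, 0.8, 1.0, 1.25, 2.0, 4.0):
    print(f"[regulator] = {c:4.2f}   graded (n=1): {hill_response(c, n=1):4.2f}"
          f"   switch-like (n=8): {hill_response(c, n=8):4.2f}")
```

Below the threshold the switch-like gene is essentially silent; just above it, the gene is nearly fully on.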

Distinct mechanisms are used in different systems, but we will consider a system from the gastrointestinal bacterium E. coli that students may already be familiar with: the genes that enable E. coli to digest the mammalian milk sugar lactose. They encode a protein needed to import lactose into the bacterial cell and an enzyme needed to break lactose down so that it can be metabolized. Given the energetic cost of synthesizing these proteins, it is in the bacterium's adaptive self-interest to synthesize them only when lactose is present at sufficient concentrations in its environment. The response is functionally similar to that associated with quorum sensing, which is also governed by threshold effects. Similarly, cells in a gradient respond to the concentration of regulator molecules by turning on specific genes in specific domains, rather than uniformly.

Now let us look in a little more detail at the behavior of the lactose utilization system in E. coli, following an analysis by Vilar et al. (2003)(2). At an extracellular lactose concentration below the threshold, the system is off. If we increase the extracellular lactose concentration above the threshold, the system turns on: the lactose permease and β-galactosidase proteins are made, and lactose can enter the cell and be broken down to produce metabolizable sugars. By looking at individual cells, we find that they transition, apparently stochastically, from off to on (→), but whether they stay on depends upon the extracellular lactose concentration. We can define a concentration, the maintenance concentration, below the threshold, at which "on" cells will remain on while "off" cells will remain off.

The circuitry of the lactose system is well defined (Jacob and Monod, 1961; Lewis, 2013; Monod et al., 1963)(↓). The lacI gene encodes the lactose operon repressor protein; it is expressed constitutively at a low level, and the repressor binds to sequences in the lac operon and inhibits transcription. The lac operon itself contains three genes whose expression is driven by a constitutively active promoter. lacY encodes the permease, while lacZ encodes β-galactosidase. β-galactosidase has two functions: it catalyzes the reaction that transforms lactose into allolactose, and it cleaves lactose into the metabolically useful sugars glucose and galactose. Allolactose is an allosteric modulator of the Lac repressor protein; if allolactose is present, it binds to Lac repressor proteins and inactivates them, allowing lac operon expression.

The cell normally contains only ~10 Lac repressor proteins. Periodically (stochastically), even in the absence of lactose, and so of its derivative allolactose, the lac operon promoter region is free of repressor proteins, and the lac operon is briefly expressed; a few LacY and LacZ polypeptides are synthesized (↓). This noisy leakiness in the regulation of the lac operon allows the cell to respond if lactose happens to be present: some lactose molecules enter the cell through the permease and are converted to allolactose by β-galactosidase. When present, allolactose binds to and inactivates the Lac repressor protein so that it no longer binds to its target sequences (the operator or "O" sites). In the absence of repressor binding, the lac operon is expressed. If lactose is not present, the lac operon is inhibited, and LacY and LacZ disappear from the cell through turnover or growth-associated dilution.
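
The permease creates a positive feedback loop: more permease means more internal inducer, which means less repression and so more permease. A deliberately cartoonish simulation, loosely inspired by, but much simpler than, the Vilar et al. (2003) model, and with made-up parameters, shows how such feedback produces the on/off, history-dependent behavior described above:

```python
def simulate(lactose_ext, y0, basal=0.05, dt=0.01, t_max=50.0):
    """Cartoon of lac-operon positive feedback. y = permease / beta-galactosidase level
    (arbitrary units); internal inducer ~ y * external lactose; expression = basal leak
    plus a cooperative (Hill, n = 2) response to the inducer; linear turnover of y."""
    y = y0
    for _ in range(int(t_max / dt)):
        inducer = y * lactose_ext
        dydt = basal + inducer**2 / (1.0 + inducer**2) - y
        y += dt * dydt
    return y

for lactose in (0.5, 2.0, 4.0):
    off_history = simulate(lactose, y0=0.0)  # cell that starts with no permease ("off")
    on_history = simulate(lactose, y0=1.0)   # previously induced cell ("on")
    print(f"external lactose = {lactose:3.1f}   "
          f"off cell ends at {off_history:.2f}   on cell ends at {on_history:.2f}")
# At low lactose both cells end up off and at high lactose both end up on; at the intermediate
# concentration the outcome depends on the cell's history (the maintenance-concentration effect).
```

In this toy model the position of the switch is set by the chosen parameters; real cells, as discussed next, set their thresholds in system-specific ways.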

How the threshold concentration at which a genetic switch flips is set, whether for quorum sensing or for simpler regulated gene expression, is a complex question and, as we will see, different systems have different solutions, although often the exact mechanism remains to be resolved. The binding and activation of regulators can involve cooperative interactions between regulatory proteins, as well as other positive and negative feedback interactions.

In the case of patterning a tissue in terms of regional responses to a signaling gradient, there can be multiple regulatory thresholds for different genes, as well as indirect effects, where the initiation of expression of one set of target genes influences the subsequent expression of other sets of genes. One widely noted mechanism, known as reaction-diffusion, was suggested by the English mathematician Alan Turing (see Kondo and Miura, 2010). It postulates a two-component system, regulated by either a primary regulatory gradient or the stochastic activation of a master regulator. One component is an activator of gene expression which, in addition to its various other targets, positively regulates its own expression as well as that of a second gene. This second gene encodes a repressor of the first. Both of these regulator molecules are released by the signaling cell or cells; the repressor diffuses away from the source faster than the activator does. The result can be a domain of target gene expression (where the concentration of activator is sufficient to escape repression) surrounded by a zone in which expression is inhibited (where the repressor concentration is sufficient to inhibit the activator). Depending upon the geometry of the system, this can result in discrete regions (dots) of 1º target gene expression or stripes of 1º gene expression (see Sheth et al., 2012). In real systems there are often multiple gradients involved, and their relative orientations can produce a range of patterns.
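
A minimal numerical sketch of the two-component idea, using Gierer-Meinhardt-style activator-inhibitor equations with purely illustrative parameters (not a model of any particular tissue), shows how a nearly uniform field seeded with a little noise can resolve into discrete domains of "on" and "off" cells:

```python
import numpy as np

# 1-D Gierer-Meinhardt-style activator-inhibitor (Turing-type) sketch; illustrative parameters.
rng = np.random.default_rng(0)
n_cells, dx, dt, steps = 100, 1.0, 0.005, 40000
D_a, D_h = 2.0, 50.0    # the inhibitor diffuses much faster than the activator
mu_a, mu_h = 1.0, 2.0   # turnover rates (the inhibitor turns over faster)

# start near the uniform steady state (a = h = mu_h) plus a little molecular noise
a = mu_h + 0.01 * rng.standard_normal(n_cells)
h = np.full(n_cells, mu_h)

def laplacian(u):
    """Discrete Laplacian with periodic boundaries."""
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    da = a * a / (h + 1e-9) - mu_a * a + D_a * laplacian(a)  # self-activation, restrained by h
    dh = a * a - mu_h * h + D_h * laplacian(h)               # the activator drives its own inhibitor
    a += dt * da
    h += dt * dh

# cells where the activator ends up above its mean form discrete "expressing" domains
mean_a = a.mean()
print("".join("#" if value > mean_a else "." for value in a))
```

With different parameter choices (diffusion rates, turnover rates, domain size and geometry), the same equations yield spots, stripes, or no pattern at all, which is one reason the mechanistic details matter.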

The point of all of this is that when we approach a particular system, we need to consider the specific mechanisms involved. Typically these mechanisms have been selected not only to produce particular phenotypes but also to be robust, in the sense that they need to produce the same pattern even when the system is subject to perturbations, such as differences in embryo or tissue size (due to differences in cell division and growth rates), temperature, and other environmental variables.

 

N.b., clearly there will be value in some serious editing and reorganization of this and other posts.

Footnotes:

  1. While stochastic (random), these processes can still be predictable. A classic example involves the decay of an unstable isotope (atom), which is predictable at the population level but unpredictable at the level of an individual atom. Similarly, in biological systems, the binding and unbinding of molecules to one another, such as a protein transcription regulator to its target DNA sequence, is stochastic but can be predictable in a large enough population.
  2. and presented in biofundamentals (pages 216-218).

literature cited: 

Briscoe & Small (2015). Morphogen rules: design principles of gradient-mediated embryo patterning. Development 142, 3996-4009.

Carvunis et al. (2012). Proto-genes and de novo gene birth. Nature 487, 370.

Duboule (2007). The rise and fall of Hox gene clusters. Development 134, 2549-2560.

Florio et al. (2018). Evolution and cell-type specificity of human-specific genes preferentially expressed in progenitors of fetal neocortex. eLife 7.

Jacob (1977). Evolution and tinkering. Science 196, 1161-1166.

Jacob & Monod (1961). Genetic regulatory mechanisms in the synthesis of proteins. Journal of Molecular Biology 3, 318-356.

Kondo & Miura (2010). Reaction-diffusion model as a framework for understanding biological pattern formation. Science 329, 1616-1620.

Lewis (2013). Allostery and the lac Operon. Journal of Molecular Biology 425, 2309-2316.

Lipshitz (2009). Follow the mRNA: a new model for Bicoid gradient formation. Nature Reviews Molecular Cell Biology 10, 509.

McLean et al. (2011). Human-specific loss of regulatory DNA and the evolution of human-specific traits. Nature 471, 216-219.

Monod, Changeux & Jacob (1963). Allosteric proteins and cellular control systems. Journal of Molecular Biology 6, 306-329.

Sassa (2013). The role of human-specific gene duplications during brain development and evolution. Journal of Neurogenetics 27, 86-96.

Sheth et al. (2012). Hox genes regulate digit patterning by controlling the wavelength of a Turing-type mechanism. Science 338, 1476-1480.

Stauber et al. (1999). The anterior determinant bicoid of Drosophila is a derived Hox class 3 gene. Proceedings of the National Academy of Sciences 96, 3786-3789.

Vilar et al. (2003). Modeling network dynamics: the lac operon, a case study. J Cell Biol 161, 471-476.

Zhao et al. (2014). Origin and spread of de novo genes in Drosophila melanogaster populations. Science.

Establishing Cellular Asymmetries (a biofundamentalist perspective)

[21st Century DEVO-3]  Embryonic development is the process by which a fertilized egg becomes an independent organism, an organism capable of producing functional gametes, and so a new generation. In an animal, this process generally involves substantial growth and multiple rounds of mitotic cell division; the resulting organism, a clone of the single-celled zygote, contains hundreds, thousands, millions, billions, or trillions of cells. As cells form, they begin the process of differentiation, forming a range of cell types; these differentiating (and sometimes migrating) cells interact to form the adult and its various tissues and organ systems. These various cell types can be characterized by the genes that they express, the shapes they assume, the behaviors that they display, and how they interact with neighboring and distant cells (1). Based on first principles, one could imagine (at least) two general mechanisms that could lead to differences in gene expression between cells: different cells might contain different genes, or all cells might contain all genes, with the genes expressed in a particular cell regulated by molecular processes that determine when, where, and to what levels particular genes are expressed (2). It turns out that there are examples of both processes among the animals, although the latter is much more common.

The process of discarding genomic DNA from somatic cells is known as chromatin diminution: during the development of the soma, but not the germ line, regions of the genome are lost. In the germ line, for hopefully obvious reasons, the full genome is retained. The end result is that somatic cells contain only a subset of the genes and non-coding DNA present in the full genome. The classic case of chromatin diminution was described in the parasitic nematode of horses, now named Parascaris univalens (originally Ascaris megalocephala), by Theodor Boveri in 1887 (reviewed in Streit and Davis, 2016). Based on its occurrence in a range of distinct animal lineages, chromatin diminution appears to be an emergent trait rather than an ancestral one, that is, one present in the common ancestor of the animals.

As expected for an emergent trait, the particular mechanism of chromatin diminution appears to vary between organisms; the best-characterized example occurs in Parascaris. In the somatic cell lineages in which chromatin diminution occurs, double-stranded breaks are made in chromosomal DNA molecules, and telomeric sequences are added to the ends of the resulting DNA fragments (↓). You may have learned that chromosomes interact with spindle microtubules through localized regions on the chromosomes, known as centromeres. Centromeres are identified through their association with proteins that form the kinetochore, a structure that mediates interactions between condensed chromosomes and mitotic (and meiotic) spindle microtubules. While many organisms have a discrete, spot-like (localized) centromere, in many nematodes centromere-binding proteins are found distributed along the length of the chromosomes, a situation known as a holocentric centromere. At higher resolution, it appears that centromere components are preferentially associated with euchromatic, that is, molecularly accessible, chromosomal regions, which are (typically) the regions where most expressed genes are located. Centromere components are largely excluded from heterochromatic (condensed and molecularly inaccessible) chromosomal regions. After chromosome fragmentation, those DNA fragments associated with centromere components can interact with the spindle microtubules and are accurately segregated to daughter cells during mitosis, while the primarily heterochromatic fragments (without associated centromere components) are degraded and lost. In contrast, the integrity of the genome is maintained in the cells that come to form the germ line, the cells that can undergo meiosis to produce gametes. Looking forward to the reprogramming of somatic cells (the process of producing what are known as induced pluripotent stem cells, or iPSCs), one prediction is that it should not be possible to reprogram a somatic cell that has undergone chromatin diminution to form a functional germ line cell; you should be able to explain why, or what would have to be the case for such reprogramming to be successful.

The origins of cellular asymmetries: Clearly, there must be differences between the cells that undergo chromatin diminution and those that do not; at the very least, the nuclease(s) that cut the DNA during chromatin diminution will need to be active in somatic cells and inactive in germ line cells, or simply absent from them because the genes that encode the nuclease(s) are not expressed there. We can presume that similar cytoplasmic differences play a role in the differential regulation of gene expression in different cell types during the development of organisms in which the genome remains intact in somatic cells. So how might such asymmetries arise? There are three potential, but certainly not mutually exclusive, mechanisms that can lead to cellular/cytoplasmic asymmetries: they can be inherited, based on pre-existing asymmetries in the parental cell; they can emerge from asymmetries in the signaling environments occupied by the two daughters; or they can arise from stochastic fluctuations in gene expression (see Chen et al., 2016; Neumüller and Knoblich, 2009).

One example of how an asymmetry can be established occurs in the free-living nematode Caenorhabditis elegans, where sperm fusion with the egg leads to the recruitment and assembly of proteins around the site of sperm entry, the future posterior side of the embryo. After the male and female pronuclei fuse, mitosis begins and cytokinesis divides the zygote into two cells; the asymmetry initiated by sperm entry leads to an asymmetric division (←) in which the anterior AB blastomere is larger than, and molecularly distinct from, the smaller posterior P1 blastomere. These differences set off a regulatory cascade, in which the genes expressed at one stage influence those expressed subsequently, and so influence subsequent cell divisions and cell fate decisions.

Other organisms use different mechanisms to generate cellular asymmetries. In organisms with external fertilization, such as the clawed frog Xenopus, development proceeds rapidly once fertilization occurs. The egg is large, since it contains all of the materials necessary for development up until the time that the embryo can feed itself. The early embryo is immotile and vulnerable to predation, so early development in such species tends to be rapid and based on materials supplied by the mother (leading to maternal effects on subsequent development). In such cases, the initial asymmetry is built into the organization of the oocyte.

Formed through a mitotic division, the primary oocyte enters meiotic prophase I, during which it undergoes a period of growth. Maternal and paternal chromosomes align (synapsis) and undergo crossing-over (recombination). The oocyte contains a single centrosome, a cytoplasmic structure that surrounds the centrioles of the oocyte's inherited mitotic spindle pole. Cytoplasmic components become organized around the pole and then move from the pole toward the cell cortex (↓ image from Gard and Klymkowsky, 1998); this movement defines an "animal-vegetal" axis of the oocyte, which upon fertilization will play a role in generating the head-tail (anterior-posterior) and back-belly (dorsal-ventral) axes of the embryo and adult. The primary oocyte remains in prophase I throughout oogenesis. The asymmetry of the oocyte becomes visible through the development of a pigmented animal hemisphere, a largely non-pigmented vegetal hemisphere, and a large (~300 µm diameter), off-center nucleus (known as the germinal vesicle or GV)(3). Messenger RNA molecules, encoding different polypeptides, are differentially localized to the animal and vegetal regions of the late-stage oocyte. The translation of these mRNAs is regulated by factors activated by subsequent developmental events, leading to molecular asymmetries between embryonic cells derived from the animal and vegetal regions of the oocyte. In preparation for fertilization, the oocyte resumes active meiosis, leading to the formation of polar bodies and the secondary oocyte, the egg. Fertilization occurs within the pigmented animal hemisphere; the site of sperm entry (↓) provides a second driver of asymmetry, in addition to the animal-vegetal axis, albeit through a mechanism distinct from that used in C. elegans (De Domenico et al., 2015).

Asymmetries in oocytes and eggs, and sperm entry points, are not always the primary drivers of subsequent embryonic differentiation. In the mouse and other placental mammals, including humans, embryonic development occurs within, and is supported by and dependent upon, the mother. The mouse (mammalian) egg appears grossly symmetric, and sperm entry itself does not appear to impose an asymmetry. Rather, as the zygote divides, the first cells formed appear to be similar to one another. As cell divisions continue, however, some cells find themselves on the surface while others are located within the interior of the forming ball of cells, or morula (↓). These two cell populations are exposed to different environments, environments that influence patterns of gene expression. The cells on the surface differentiate to form the trophectoderm, which in turn differentiates into extra-embryonic placental tissues, the interface between mother and developing embryo. The internal cells become the inner cell mass, which differentiates to form the embryo proper, the future mouse (or human). Early on, inner cell mass cells appear similar to one another, but they too come to experience different environments, leading to emerging asymmetries associated with the activation of different signaling systems, the expression of different sets of genes, and differences in behavior; they begin the process of differentiating into distinct cell lineages and types, forming, as embryogenesis continues, different tissues and organs.

The response of a particular cell to a particular environment will depend upon the signaling molecules present (typically produced by neighboring cells), the signaling-molecule receptors expressed by the cell itself, and how the binding of signaling molecules to receptors alters receptor activity or stability. For example, an activated receptor can activate (or inhibit) a transcription factor protein that in turn influences the expression of a subset of genes. These genes may themselves encode regulators of transcription, signals, signal receptors, or proteins that modify the cellular localization, stability, activity, or interactions of other molecules. While some effects of signal-receptor interactions can be transient, leading to reversible changes in cell state (and gene expression), during embryonic development activating and responding to a signal generally starts a cascade of effects that leads to irreversible changes and the formation of altered, differentiated states.
A cell's response to a signal can be variable, influenced by the totality of the signals it receives and by its past history. For example, a signal could lead to a decrease in the level of its receptor or an increase in an inhibitory protein, making the cell less responsive to the signal (a negative feedback effect); to an increase in receptor level or activity, making the cell more sensitive (a positive feedback effect); or to a change over time in which genes respond to the signal. Such emerging patterns of gene expression, based on signaling inputs, are the primary driver of embryonic development.
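
As a toy illustration of these two feedback modes (the equations and parameters below are of my own devising, not a model of any specific pathway), the sketch applies a transient pulse of signal to a cell in which the signal down-regulates its own receptor (negative feedback) and, separately, pushes a self-activating target gene past its threshold (positive feedback):

```python
# Toy comparison of negative and positive feedback in a signaling response (Euler integration).
dt, steps = 0.01, 4000      # 40 time units
receptor = 1.0              # receptor level; the signal accelerates its removal (negative feedback)
memory = 0.068              # self-activating target gene, starting near its "off" state

for step in range(steps + 1):
    t = step * dt
    signal = 1.0 if t < 10.0 else 0.0                 # a transient pulse of signaling molecule
    if step % 500 == 0:                               # report every 5 time units
        print(f"t = {t:4.1f}  signal = {signal:.0f}  "
              f"receptor-limited response = {signal * receptor:4.2f}  "
              f"self-reinforcing gene = {memory:4.2f}")
    # receptor: synthesis - turnover - signal-induced loss (negative feedback)
    receptor += dt * (1.0 - receptor - 2.0 * signal * receptor)
    # memory gene: basal leak + cooperative self-activation + signal input - turnover (positive feedback)
    memory += dt * (0.05 + memory**2 / (0.25 + memory**2) + signal - memory)
```

In this cartoon, the receptor-limited response fades even while the signal is still present and disappears once it is withdrawn, whereas the self-activating gene, once pushed past its threshold, remains on after the signal is gone, a caricature of an irreversible developmental decision.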

footnotes:

  1. Not all genes are differentially expressed, however; some genes, known as housekeeping genes, are expressed in essentially all cells.
  2. Hopefully it is clear what the term "expressed" means, namely that part of the gene is used to direct the synthesis of RNA through the process of transcription (DNA-dependent RNA polymerization). Some such RNAs (messenger RNAs, or mRNAs) are used to direct the synthesis of a polypeptide through the process of translation (RNA-directed amino acid polymerization); others do not encode polypeptides. Such non-coding RNAs (ncRNAs) can play roles in a number of processes, from catalysis to the regulation of transcription, RNA stability, and translation.
  3. Eggs are laid in water and are exposed to the sun; the pigmentation of the animal hemisphere is thought to protect the oocyte/zygote/early embryo’s DNA from photo-damage.

Literature cited

Chen et al. (2016). The ins(ide) and outs(ide) of asymmetric stem cell division. Current Opinion in Cell Biology 43, 1-6.

De Domenico et al. (2015). Molecular asymmetry in the 8-cell stage Xenopus tropicalis embryo described by single blastomere transcript sequencing. Developmental Biology 408, 252-268.

Gard & Klymkowsky. (1998). Intermediate filament organization during oogenesis and early development in the clawed frog, Xenopus laevis. In Intermediate filaments (ed. H. Herrmann & J. R. Harris), pp. 35-69. New York: Plenum.

Neumüller & Knoblich. (2009). Dividing cellular asymmetry: asymmetric cell division and its implications for stem cells and cancer. Genes & development 23, 2675-2699.

Streit & Davis. (2016). Chromatin Diminution. In eLS: John Wiley & Sons Ltd, Chichester.