Molecular machines and the place of physics in the biology curriculum

The other day, through no fault of my own, I found myself looking at the courses required by our molecular biology undergraduate degree program. I discovered a requirement for a 5 credit hour physics course, together with a recommendation that it be taken in the students' senior year, a point in their studies when most have already completed their required biology courses. Befuddlement struck me: what was the point of requiring an introductory physics course in the context of a molecular biology major? Was this an example of time travel (via wormholes or some other esoteric imagining), in which a physics course taken in the future impacts a student's understanding of molecular biology in the past? I was also struck by the possibility that requiring such a course in the senior year would measurably lengthen students' time to degree.

In a search for clarity and possible enlightenment, I reflected back on my own experiences in an undergraduate biophysics degree program. As a practicing cell and molecular biologist, I was somewhat confused: I could not put my finger on the purpose of our physics requirement, except perhaps the admirable goal of supporting physics graduate students. But then, after feverish reflection on the responsibilities of faculty in the design of the courses and curricula they prescribe for their students, and on the more general concepts of instructional (best) practice and malpractice, my mind calmed, perhaps because I was distracted by an article on Oxford Nanopore's MinION, a "portable real-time device for DNA and RNA sequencing," a device that plugs into the USB port on one's laptop!

Distracted from the potentially quixotic problem of how to achieve effective educational reform at the undergraduate level, I found myself driven on by an insatiable curiosity (or a deep-seated insecurity) to ensure that I actually understood how this latest generation of DNA sequencers works. This led me to a paper by Meni Wanunu (2012, Nanopores: a journey towards DNA sequencing) [1]. On reading the paper, I found myself returning to my original belief: yes, understanding physics is critical to developing a molecular-level understanding of how biological systems work, BUT it is just not the physics normally inflicted upon (required of) students [2]. Certainly this was no new idea. Bruce Alberts had written on this topic a number of times, most dramatically in his 1998 paper "The cell as a collection of protein machines" [3]. Rather sadly, and notwithstanding much handwringing about the importance of expanding student interest in, and understanding of, STEM disciplines, not much of substance has occurred in this area. While (some minority of) physics courses may have adopted active-engagement pedagogies (in the sense of Hake [4]), most insist on teaching macroscopic physics rather than focusing on, or even considering, the molecular-level physics relevant to biological systems, specifically the physics of protein machines in a cellular (biological) context. Why sadly? Because conventional, that is, non-biologically relevant, introductory physics and chemistry courses all too often serve the role of a hazing ritual, driving many students out of biology-based careers [5], in part, I suspect, because they often seem irrelevant to students' interests in the workings of biological systems. (footnote 1)

Nanopore's sequencer and Wanunu's article (footnote 2) got me thinking again about biological machines, of which there are a great number, ranging from pumps, propellers, and oars to various types of transporters, molecular truckers that move chromosomes, membrane vesicles, and parts of cells with respect to one another, to DNA detanglers, protein unfolders, and molecular recyclers.

Nanopore's sequencer works based on the fact that as a single strand of DNA (or RNA) moves through a narrow pore, the different bases (A, C, T, G) occlude the pore to different extents, allowing different numbers of ions, that is, different amounts of current, to pass through the pore. These current differences can be detected, allowing the nucleotide sequence to be "read" as the nucleic acid strand moves through the pore. Understanding the process involves understanding how molecules move (the physics of molecular collisions and energy transfer), how proteins and membranes allow and restrict ion movement, and how chemical gradients and electrical fields across a membrane affect molecular movements: all physical concepts of widespread significance in biological systems (and an example of where a better understanding of physics could be useful to biologists). Such ideas can be extended to the more general questions of how molecules move within the cell, and to the effects of molecular size and intermolecular interactions within a concentrated solution of proteins, protein polymers, lipid membranes, and nucleic acids, as described in Oliveira et al. (2016, Increased cytoplasm viscosity hampers aggregate polar segregation in Escherichia coli) [6]. At the molecular level these processes, while biased by electric fields (potentials) and concentration gradients, are stochastic (noisy). Understanding stochastic processes is difficult for students [7], but critical to developing an appreciation of how such processes can lead to phenotypic differences between cells with the same genotypes (previous post), and of how such noisy processes are managed by the cell and within a multicellular organism.
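The readout logic described above can be sketched in a few lines of code. This is a toy model, not Oxford Nanopore's actual base-calling algorithm: the current levels and noise magnitude are invented for illustration, and real devices read several bases at once and use far more sophisticated statistical models. The sketch does capture the core idea, that each base produces a characteristic (but noisy) current, and the sequence is recovered by classifying each reading against the known levels.

```python
import random

# Hypothetical mean pore currents (picoamps) for each base; the actual
# values depend on the pore and are NOT taken from the article.
MEAN_CURRENT_PA = {"A": 50.0, "C": 44.0, "G": 38.0, "T": 32.0}

def measure(base, noise_pa=1.0, rng=random):
    """Simulate one noisy current reading as a base occludes the pore.

    The Gaussian noise stands in for the stochastic (thermal) fluctuations
    discussed in the text."""
    return rng.gauss(MEAN_CURRENT_PA[base], noise_pa)

def call_base(current_pa):
    """Classify a reading by the nearest characteristic current level."""
    return min(MEAN_CURRENT_PA, key=lambda b: abs(MEAN_CURRENT_PA[b] - current_pa))

def read_strand(strand, noise_pa=1.0, rng=random):
    """'Read' a strand one base at a time from its simulated current trace."""
    return "".join(call_base(measure(b, noise_pa, rng)) for b in strand)
```

With low noise the simulated reads recover the sequence; as the noise grows relative to the spacing between current levels, miscalls appear, which is why real base-callers must treat the signal statistically rather than deterministically.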

As path leads on to path, I found myself considering the spear-chucking protein machine present in the pathogenic bacterium Vibrio cholerae; this molecular machine is used to inject toxins into neighbors that the bacterium happens to bump into (see Joshi et al., 2017, Rules of engagement: the type VI secretion system in Vibrio cholerae) [8]. The system is complex and acts much like a spring-loaded, and rather "inhumane," mouse trap. It is one of a number of bacterial type VI secretion systems, and "has structural and functional homology to the T4 bacteriophage tail spike and tube," the molecular machine that injects bacterial cells with the virus's genetic material, its DNA.

Building the bacterium's spear-based injection system is controlled by a social (quorum-sensing) system, a way that unicellular organisms can monitor whether they are alone or living in an environment crowded with other organisms. During assembly, potential energy, derived from various chemically coupled, thermodynamically favorable reactions, is stored in both the type VI "spears" and the contractile (nucleic acid injecting) tails of bacterial viruses (phage). Understanding the energetics of this process, exactly how thermodynamically favorable chemical reactions, such as ATP hydrolysis, or physico-chemical processes, such as the diffusion of ions down an electrochemical gradient, can be coupled to "setting" these mouse traps, and where the energy goes when the traps are sprung, is central to students' understanding of these and a wide range of other molecular machines.
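The bookkeeping behind this coupling argument can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative placeholders, not measured values for the type VI system: the point is only that an unfavorable "spring-loading" step (positive ΔG) proceeds when enough coupled hydrolysis events make the net free-energy change negative.

```python
# Approximate free energy of ATP hydrolysis under cellular conditions,
# in kJ/mol (a commonly quoted ballpark figure, not specific to this system).
DELTA_G_ATP = -50.0

def coupled_delta_g(dg_loading_step, n_atp):
    """Net free-energy change when n_atp hydrolysis events are coupled
    to one unfavorable conformational/assembly ('spring-loading') step."""
    return dg_loading_step + n_atp * DELTA_G_ATP

def is_spontaneous(dg):
    """A process proceeds (is thermodynamically favorable) when dG < 0."""
    return dg < 0
```

If loading the spear cost, say, +80 kJ/mol (again, an invented number), coupling one ATP would leave the step unfavorable (+30 kJ/mol) while coupling two would drive it (-20 kJ/mol); the "missing" energy is what sits in the loaded spring, to be released as the trap snaps shut.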

Energy stored in such molecular machines during their assembly can be used to move the cell. As an example, another bacterial system generates contractile filaments (type IV pili); the contraction of such a filament can allow "the bacterium to move 10,000 times its own body weight, which results in rapid movement" (see Berry & Pelicic 2014, Exceptionally widespread nanomachines composed of type IV pilins: the prokaryotic Swiss Army knives) [9]. Filament contraction is also used to import DNA into the cell, an early step in the process of horizontal gene transfer. In other situations (other molecular machines), such protein filaments exploit thermodynamically favorable processes to rotate, acting like a propeller and driving cellular movement.

During my biased random walk through the literature, I came across another, molecularly distinct, machine used to import DNA into Vibrio (see Matthey & Blokesch 2016, The DNA-uptake process of naturally competent Vibrio cholerae) [10].

This molecular machine enables the bacterium to import DNA from the environment, released, perhaps, from a neighbor killed by its spear. In this system, the double-stranded DNA molecule is first transported through the bacterium's outer membrane; the DNA's two strands are then separated, and one strand passes through a channel protein in the inner (plasma) membrane and into the cytoplasm, where it can interact with the bacterium's genomic DNA.

The value of introducing students to the idea of molecular machines is that it helps to demystify how biological systems work, how such machines carry out specific functions, whether moving the cell or recognizing and repairing damaged DNA. If physics matters in the biology curriculum, it matters for this reason: it establishes a core premise of biology, namely that organisms are not driven by "vital" forces, but by prosaic physicochemical ones. At the same time, the molecular mechanisms behind evolution, such as mutation, gene duplication, and genomic reorganization, provide the means by which new structures emerge from pre-existing ones. Yet many a molecular biology degree program does not include an introduction to evolutionary mechanisms in its required course sequence. Imagine that: requiring physics but not evolution (see [11]).

One final point regarding requiring students to take a biologically relevant physics course early in their degree program: it can be used to reinforce what I think is a critical and often misunderstood point. While biological systems rely on molecular machines, we (and by "we" I mean all organisms) are NOT machines, no matter what physicists might postulate (see We Are All Machines That Think). We are something different and distinct. Our behaviors and our feelings, whether ultimately understandable or not, emerge from the interaction of genetically encoded, stochastically driven, non-equilibrium systems, modified through evolutionary, environmental, social, and a range of unpredictable events occurring in an uninterrupted, and basically undirected, fashion for ~3.5 billion years. While we are constrained, we are more, in some weird and probably ultimately incomprehensible way.

Footnotes:

(1) A discussion with Melanie Cooper on what chemistry is relevant to a life science major was a critical driver in our collaboration to develop the chemistry, life, the universe, and everything (CLUE) chemistry curriculum.

(2) Together with my own efforts in designing the biofundamentals introductory biology curriculum.

literature cited

1. Wanunu, M., Nanopores: A journey towards DNA sequencing. Physics of life reviews, 2012. 9(2): p. 125-158.

2. Klymkowsky, M.W., Physics for (molecular) biology students. 2014 [cited 2014]; Available from: http://www.aps.org/units/fed/newsletters/fall2014/molecular.cfm.

3. Alberts, B., The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell, 1998. 92(3): p. 291-294.

4. Hake, R.R., Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. Am. J. Physics, 1998. 66: p. 64-74.

5. Mervis, J., Weed-out courses hamper diversity. Science, 2011. 334(6061): p. 1333-1333.

6. Oliveira, S., R. Neeli‐Venkata, N.S. Goncalves, J.A. Santinha, L. Martins, H. Tran, J. Mäkelä, A. Gupta, M. Barandas, and A. Häkkinen, Increased cytoplasm viscosity hampers aggregate polar segregation in Escherichia coli. Molecular microbiology, 2016. 99(4): p. 686-699.

7. Garvin-Doxas, K. and M.W. Klymkowsky, Understanding Randomness and its impact on Student Learning: Lessons from the Biology Concept Inventory (BCI). Life Science Education, 2008. 7: p. 227-233.

8. Joshi, A., B. Kostiuk, A. Rogers, J. Teschler, S. Pukatzki, and F.H. Yildiz, Rules of engagement: the type VI secretion system in Vibrio cholerae. Trends in microbiology, 2017. 25(4): p. 267-279.

9. Berry, J.-L. and V. Pelicic, Exceptionally widespread nanomachines composed of type IV pilins: the prokaryotic Swiss Army knives. FEMS microbiology reviews, 2014. 39(1): p. 134-154.

10. Matthey, N. and M. Blokesch, The DNA-uptake process of naturally competent Vibrio cholerae. Trends in microbiology, 2016. 24(2): p. 98-110.

11. Pallen, M.J. and N.J. Matzke, From The Origin of Species to the origin of bacterial flagella. Nat Rev Microbiol, 2006. 4(10): p. 784-90.

Is a little science a dangerous thing?

Is the popularization of science encouraging a growing disrespect for scientific expertise? 
Do we need to reform science education so that students are better able to detect scientific BS? 

It is common wisdom that popularizing science by exposing the public to scientific ideas is an unalloyed good, bringing benefits both to those exposed and to society at large. Many such efforts are engaging and entertaining, often taking the form of compelling images with quick cuts between excited sound bites from a range of "experts." A number of science-centered programs, such as PBS's NOVA series, are particularly adept at and/or addicted to this style. Such presentations introduce viewers to natural wonders, and often provide scientific-sounding, albeit often superficial and incomplete, explanations; they appeal to the gee-whiz and inspirational, with "mind-blowing" descriptions of how old, large, and weird the natural world appears to be. But there are darker sides to such efforts. Here I focus on one: the idea that a rigorous, realistic understanding of the scientific enterprise and its conclusions is easy to achieve, a presumption that leads to unrealistic science education standards, to an inability to judge when scientific pronouncements are distorted or unsupported, and to anti-scientific personal and public policy positions.

That accurate thinking about scientific topics is easy to achieve is an unspoken assumption that informs much of our educational, entertainment, and scientific research system. This idea is captured in the recent NYT best seller "Astrophysics for People in a Hurry," an oxymoronic presumption. Is it possible for people "in a hurry" to seriously consider the observations and logic behind the conclusions of modern astrophysics? Can they understand the strengths and weaknesses of those conclusions? Is a superficial familiarity with the words used the same as understanding their meaning and possible significance? Is acceptance understanding? Does such a cavalier attitude toward science encourage unrealistic conclusions about how science works and about what is known with certainty versus what remains speculation?
Are the conclusions of modern science actually easy to grasp?
The idea that introducing children to science will lead to an accurate grasp of the underlying concepts involved, their appropriate application, and their limitations is not well supported [1]; often students leave formal education with a fragile and inaccurate understanding, a lesson made explicit in Matt Schneps and Phil Sadler's Private Universe videos. The feeling that one understands a topic, that science is in some sense easy, undermines respect for those who actually do understand it, a situation discussed in detail in Tom Nichols' "The Death of Expertise." Underestimating how hard it can be to accurately understand a scientific topic can lead to unrealistic science standards in schools, and often to the trivialization of science education into recognizing words rather than understanding the concepts they are meant to convey.

The fact is, scientific thinking about most topics is difficult to achieve and maintain; that is what editors, reviewers, and other scientists, who attempt to test and extend the observations of others, are for. Together they keep science real and honest. Until an observation has been repeated or confirmed by others, it is best regarded as an interesting possibility rather than a scientifically established fact. Moreover, until a plausible mechanism explaining the observation has been established, it remains a serious possibility that the entire phenomenon will vanish, more or less quietly (think cold fusion). The disappearing physiological effects of "power posing" come to mind. Nevertheless, the incentives to support even disproven results can be formidable, particularly when there is money to be made and egos are on the line.

While power posing might be helpful to some, even if physiologically useless, there are more dangerous pseudo-scientific scams out there. The gullible may buy into "raw water" (see: Raw water: promises health, delivers diarrhea), but the persistent, and in some groups growing, anti-vaccination movement continues to cause real damage to children (see Thousands of cheerleaders exposed to mumps). One can ask why professional science groups, such as the American Association for the Advancement of Science (AAAS), have not called for a boycott of NETFLIX, given that NETFLIX continues to distribute the anti-scientific, anti-vaccination program VAXXED [2]. And how do Oprah Winfrey and Donald Trump [link: Oprah Spreads Pseudoscience and Trump and the anti-vaccine movement] avoid universal ridicule for giving credence to ignorant nonsense, and for disparaging the hard-fought expertise of the biomedical community? A failure to accept well-established expertise goes a long way toward explaining the situation. Instead of an appreciation for what we do and do not know about the causes of autism (see: Genetics and Autism Risk & Autism and infection), there are desperate parents who turn to a range of "therapies" promoted by anti-experts. The tragic case of parents trying to cure autism by forcing children to drink bleach (see link) illustrates the seriousness of the situation.

So why do a large percentage of the public ignore the conclusions of disciplinary experts? I would argue that an important driver is the way science is taught and popularized [3]. Beyond the obvious fact that a range of politicians and capitalists (in both the West and the East) actively disdain expertise that does not support their ideological or pecuniary positions [4], I would claim that the way we teach science, often focusing on facts rather than processes, largely ignoring the historical progression by which knowledge is established and the various forms of critical analysis to which scientific conclusions are subjected, combines with the way science is popularized to erode respect for disciplinary expertise. Often our education systems fail to convey how difficult it is to attain real disciplinary expertise, in particular the ability to clearly articulate where ideas and conclusions come from and what they do and do not imply. Such expertise is more than a degree; it is a record of rigorous and productive study and useful contributions, and a critical and objective state of mind. Science standards are often heavy on facts, and weak on critical analyses of the ideas and observations relevant to a particular process. As Carl Sagan might say, we have failed to train students in how to critically evaluate claims, how to detect baloney (or BS, in less polite terms) [5].

In the area of popularizing scientific ideas, we have allowed hype and over-simplification to capture the flag. To quote from an article by David Berlinski [link: Godzooks], we are continuously bombarded with a range of pronouncements about new scientific observations or conclusions, and there is often a "willingness to believe what some scientists say without wondering whether what they say is true," or even what it actually means. No longer is the in-depth, and often difficult and tentative, explanation conveyed; rather, the focus is on the flashy conclusion (independent of its plausibility). Self-proclaimed experts pontificate on topics that are often well beyond their areas of training and demonstrated proficiency; many is the physicist who speaks not only about the completely speculative multiverse, but on free will and ethical beliefs. Complex and often irreconcilable conflicts between organisms, such as those between mother and fetus (see: War in the womb), male and female (in sexually dimorphic species), and individual liberties and social order, are ignored rather than explicitly recognized and their origins understood. At the same time, there are real pressures acting on scientific researchers (and the institutions they work for) and on the purveyors of news to exaggerate the significance and broader implications of their "stories" so as to acquire grants, academic and personal prestige, and clicks. Such distortions serve to erode respect for scientific expertise (and objectivity).

So where are the scientific referees, the individuals tasked with enforcing the rules of the game, calling a player out of bounds when they leave the playing field (their area of expertise), or calling a foul when rules are broken or bent, as in the fabrication, misreporting, suppression, or over-interpretation of data by the anti-vaccinator Wakefield? Who is responsible for maintaining the integrity of the game? For pointing out that many alternative medicine advocates are talking meaningless blather (see: On skepticism & pseudo-profundity)? Where are the referees who can show these charlatans the "red card" and eject them from the game?

Clearly there are no such referees. Instead it is necessary to train as large a percentage of the population as possible to be their own science referees: that is, to understand how science works, and to identify baloney when it is slung at them. When a science popularizer, whether for well-meaning or self-serving reasons, steps beyond their expertise, we need to call them out of bounds! And when scientists run up against the constraints of the scientific process, as appears to occur periodically with theoretical physicists and the occasional neuroscientist (see: Feuding physicists and The Soul of Science), we need to recognize the foul committed. If our educational system could help students develop a better understanding of the rules of the scientific game, and of why these rules are essential to scientific progress, perhaps we could help re-establish both an appreciation of rigorous scientific expertise and a respect for what it is that scientists struggle to do.



Footnotes and references:

  1. And is it clearly understood that they have nothing to say as to what is right or wrong?
  2. Similarly, many PBS stations broadcast pseudoscientific infomercials: for example, see Shame on PBS, Brain Scam, and Deepak Chopra's anti-scientific Brain, Mind, Body, Connection, currently playing on my local PBS station. Holocaust deniers and slavery apologists are confronted much more aggressively.
  3. As an example, the idea that new neurons are "born" in the adult hippocampus, until now established orthodoxy, has recently been called into question: see Study Finds No Neurogenesis in Adult Humans' Hippocampi.
  4. Here is a particularly disturbing example: By rewriting history, Hindu nationalists lay claim to India
  5. Pennycook, G., J. A. Cheyne, N. Barr, D. J. Koehler and J. A. Fugelsang (2015). “On the reception and detection of pseudo-profound bullshit.” Judgment and Decision Making 10(6): 549.

Making education matter in higher education


It may seem self-evident that providing an effective education, the type of educational experience that leads to a useful bachelor's degree and serves as the foundation for life-long learning and growth, should be a prime aspirational driver of colleges and universities (1). We might even expect that various academic departments would compete with one another to excel in the quality and effectiveness of their educational outcomes; they certainly compete to enhance their research reputations, a competition that is, at least in part, responsible for the retention of faculty, even those who stray from an ethical path. Institutions compete to lure research stars away from one another, often offering substantial pay raises and research support ("Recruiting or academic poaching?"). Yet, in my own experience, a department's performance in undergraduate educational outcomes never figures when departments compete for institutional resources, such as support for students, new faculty positions, or necessary technical resources (2).

I know of no example (and would be glad to hear of any) of a university hiring a professor based primarily on their effectiveness as an instructor (3).

In my last post, I suggested that increasing the emphasis on measures of departments' educational effectiveness could help rebalance the importance of educational and research reputations, and perhaps incentivize institutions to be more consistent in enforcing ethical rules involving research malpractice and the abuse of students, both sexual and professional. Imagine if administrators (deans and provosts and such) were to withhold resources from departments performing below acceptable and competitive norms in terms of undergraduate educational outcomes.

Outsourced teaching: motives, means and impacts

Sadly, as it is, and particularly in many science departments, undergraduate educational outcomes have little if any impact on the perceived status of a department, as articulated by campus administrators. The result is that faculty are not incentivized to, and so rarely do, seriously consider the effectiveness of their department's course requirements, a discussion that would of necessity include evaluating whether a course's learning goals are coherent and realistic, whether the course is delivered effectively, whether it engages students (or is deemed irrelevant), and whether students achieve the desired learning outcomes, in terms of knowledge and skills, including the ability to apply that knowledge effectively to new situations. Departments, particularly research-focused (research-dependent) departments, often have faculty with low teaching loads, a situation that incentivizes the "outsourcing" of key aspects of their educational responsibilities. Such outsourcing comes in two distinct forms. The first is requiring majors to take courses offered by other departments, even if such courses are not well designed, well delivered, or (in the worst cases) relevant to the major. A classic example is requiring molecular biology students to take macroscopic physics or conventional calculus courses, without regard to whether the material presented in these courses is ever used within the major or the discipline. Expecting a student majoring in the life sciences to embrace a course that (often rightly) seems irrelevant to their discipline can alienate the student, and poses an unnecessary obstacle to success rather than providing needed knowledge and skills. Generally, the incentives necessary to generate a relevant course, for example a molecular-level physics course that would engage molecular biology students, are simply not there.
A version of this situation is requiring courses that are poorly designed or delivered (general chemistry is often used as the poster child). These are courses with high failure rates, sometimes justified in terms of "necessary rigor," when in fact better course design could (and has) resulted in lower failure rates and improved learning outcomes. In addition, there are perverse incentives associated with requiring "weed-out" courses offered by other departments: they reduce the number of courses a department's faculty needs to teach, and can lead to fewer students proceeding into upper-division courses.

The second type of outsourcing involves excusing tenure-track faculty from teaching introductory courses and replacing them with lower-paid instructors or lecturers. Independent of whether instructors, lecturers, or tenure-track professors make for better teaching, replacing faculty with instructors sends an implicit message to students. At the same time, the freedom of instructors/lecturers to adopt an effective (Socratic) approach to teaching is often severely constrained; common exams can force classes to move in lockstep, whether or not that pace is optimal for student engagement and learning. Generally, instructors/lecturers do not have the freedom to adjust what they teach, to modify the emphasis and time they spend on specific topics in response to their students' needs. Instruction suffers when teachers cannot customize their interactions with students in response to where the students are intellectually. This is particularly detrimental in the case of underrepresented or underprepared students. Generally, a flexible and adaptive approach to instruction (including ancillary classes on how to cope with college; see An alternative to remedial college classes gets results) can address many issues and bring the majority of students to a level of competence, whereas tracking students into remedial classes can succeed in driving them out of a major or out of college (see Colleges Reinvent Classes to Keep More Students in Science, Redesigning a Large-Enrollment Introductory Biology Course, and Does Remediation Work for All Students?).

How can we address this imbalance and reset the pecking order so that effective educational efforts actually matter to a department?

My (modest) suggestion is to base departmental rewards on objective measures of educational effectiveness. And by rewards I mean both rewards at the level of individuals (salary and status) and support for graduate students, faculty positions, start-up funds, etc. What if, for example, faculty in departments that excel at educating their students received a teaching bonus, or if the number of graduate students within a department supported by the institution were determined not by the number of classes those graduate students taught (courses that might not be particularly effective or engaging) but rather by the department's undergraduate educational effectiveness, as measured by retention, time to degree, and learning outcomes (see below)? The result could well be a drive within departments to improve course and curricular effectiveness so as to maximize education-linked rewards. Given that laboratory courses, the courses most often taught by science graduate students, are multi-hour, schedule-disrupting events of limited demonstrable educational effectiveness that complicate student course scheduling, removing requirements for lab courses deemed unnecessary (or generating more effective versions) would be actively rewarded (of course, sanctions for continuing to offer ineffective courses would also be useful, but politically more problematic).

A similar situation applies when a biology department requires its majors to take 5 credit hour physics or chemistry courses. Currently it is "easy" for a department to require its students to take such courses without critically evaluating whether they are "worth it," educationally. Imagine how a department's choices of required courses would change if high failure rates (which I would argue are a proxy for poorly designed and delivered courses) directly impacted the rewards reaped by the department. There would be an incentive to look critically at such courses, to determine whether they are necessary and, if so, whether they are well designed and delivered. Departments would serve their own interests by investing in the development of courses that better served their disciplinary goals, courses likely to engage their students' interests.

So how do we measure a department’s educational efficacy?

There are three obvious metrics: i) retention of students as majors (or, in the case of "service courses" for non-majors, whether students master what the course claims to teach); ii) time to degree (by which I mean the percentage of students who graduate in 4 years, rather than the 6-year time point reported in response to federal regulations (six year graduation rate | background on graduation rates)); and iii) objective measures of the learning outcomes and skills students attain. The first two are easy: universities already know these numbers. Moreover, they are directly influenced by degree requirements; requiring students to take boring and/or apparently irrelevant courses drives a subset of students out of a major. By making courses relevant and engaging, more students can be retained in a degree program. At the same time, thoughtful course design can help students pass through even the most rigorous (difficult) of such courses. The third, learning outcomes, is significantly more challenging to measure, since universal metrics are (largely) missing or superficial. A few disciplines, such as chemistry, support standardized assessments, although one could argue about what such assessments measure. Nevertheless, meaningful outcomes measures are necessary, in much the same way that law and medical boards and the Fundamentals of Engineering exam help ensure (although they do not guarantee) the competence of practitioners. One could imagine using parts of standardized exams, such as discipline-specific GRE exams, to generate outcomes metrics, although more informative assessment instruments would clearly be preferable.
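The first two metrics really are straightforward bookkeeping, which a short sketch makes clear. The cohort records and field names below are invented for illustration; any registrar's database would hold the equivalent information.

```python
# Minimal sketch of the first two metrics: retention in the major and the
# four-year (not the federally reported six-year) graduation rate.

def retention_rate(cohort):
    """Fraction of an entering cohort still enrolled in the major."""
    stayed = sum(1 for s in cohort if s["still_in_major"])
    return stayed / len(cohort)

def four_year_rate(cohort):
    """Fraction who graduated in four years or fewer; students who left
    without a degree (years_to_degree is None) count against the rate."""
    on_time = sum(1 for s in cohort
                  if s["years_to_degree"] is not None
                  and s["years_to_degree"] <= 4)
    return on_time / len(cohort)

# A hypothetical four-student cohort:
cohort = [
    {"still_in_major": True,  "years_to_degree": 4},
    {"still_in_major": True,  "years_to_degree": 5},
    {"still_in_major": False, "years_to_degree": None},
    {"still_in_major": True,  "years_to_degree": 4},
]
```

For this toy cohort the retention rate is 0.75 and the four-year rate is 0.5; the substantive work, of course, lies not in computing such numbers but in deciding how (and whether) to reward departments for improving them.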
The initiative in this area could be taken by professional societies, college consortia (such as the AAU), and research foundations. It could serve as a critical driver of education reform, increased effectiveness, and improved cost-benefit outcomes – something that could help address the growing income inequality in our country and make success in higher education an important factor in an institution’s reputation.

 

A footnote or two…
 
1. My comments are primarily focused on research universities, since that is where my experience lies; these include, of course, most of the largest universities (in terms of student population).
 
2. Although my experience is limited, having spent my professorial career at a single institution, conversations with others lead me to conclude that my institution is not unique.
 
3. The one obvious exception would be the hiring of coaches of sports teams, since their success in teaching (coaching) is more directly discernible, and more directly impacts institutional finances and reputation.
 
minor edits – 16 March 2020

Balancing research prestige, human decency, and educational outcomes.


Or why do academic institutions shield predators?  Many working scientists, particularly those early in their careers or those oblivious to practical realities, maintain an idealistic view of the scientific enterprise. They see science as driven by curious, passionate, and skeptical scholars, working to build an increasingly accurate and all-encompassing understanding of the material world and the various phenomena associated with it, ranging from the origins of the universe and the Earth to the development of the brain and the emergence of consciousness and self-consciousness (1).  At the same time, the discipline of science can be difficult to maintain (see PLoS post: The pernicious effects of disrespecting the constraints of science). Scientific research relies on understanding what people have already discovered and established to be true; all too often, exploring the literature associated with a topic reveals that one’s brilliant and totally novel “first of its kind” or “first to show” observation or idea is only a confirmation, or a modest extension, of someone else’s previous discovery. That is the nature of the scientific enterprise, and a major reason why significant new discoveries are rare and why graduate students’ Ph.D. theses can take years to complete.

Acting to oppose a rigorous scholarly approach are the real-life pressures faced by working scientists: a competitive landscape in which only novel observations get rewarded with research grants and various forms of fame or notoriety in one’s field, including a tenure-track or tenured academic position. Such pressures encourage one to distort the significance or novelty of one’s accomplishments; such exaggerations are tacitly encouraged by the editors of high profile journals (e.g. Nature, Science) who seek to publish “high impact” claims, such as the claim for “Arsenic-life” (see link).  As a recent and prosaic example, consider a paper that claims in its title that “Dietary Restriction and AMPK Increase Lifespan via Mitochondrial Network and Peroxisome Remodeling” (link), without mentioning (in the title) the rather significant fact that the effect was observed in the nematode C. elegans, whose lifespan is typically between 300 and 500 hours and which displays a trait not found in humans (or other vertebrates): the ability to assume a highly specialized “dauer” state that can survive hostile environmental conditions for months. Is the work wrong or insignificant? Certainly not, but it was presented to the unwary (through the Harvard Gazette) under the title “In pursuit of healthy aging: Harvard study shows how intermittent fasting and manipulating mitochondrial networks may increase lifespan,” with the clear implication that people, including Harvard alumni, might want to reconsider the adequacy of their retirement investments.


Such pleas for attention are generally quickly placed in context and their significance evaluated, at least within the scientific community – although many go on to stimulate the economic health of the nutritional supplement industry.  Lower-level claims often go unchallenged, just part of the incessant buzz associated with pleas for attention in our excessively distracted society (see link).  Given the reward structure of the modern scientific enterprise, the proliferation of such claims is not surprising.  Even “staid” academics seek attention well beyond the immediate significance of their (taxpayer-funded) observations. Unfortunately, the explosively expanding size of the scientific enterprise makes policing such transgressions (generally through peer review or replication) difficult or impossible, at least in the short term.

The hype and exaggeration associated with some scientific claims for attention are not the most distressing aspect of the quest for “reputation.”  Rather, there is a growing number of revelations of academic institutions protecting those guilty of abusing their dependent colleagues. These revelations reflect how scientific research teams are organized. Most scientific studies involve groups of people working with one another, generating data, testing ideas, and eventually publishing their observations and conclusions, and speculating on their broader implications.

Research groups can vary greatly in size.  In some areas, they involve isolated individuals, whether thinkers (theorists) or naturalists, in the mode of Darwin and Wallace.  In other cases, the groups are larger and include senior researchers, post-doctoral fellows, graduate students, technicians, undergraduates, and even high school students. Such research groups can range from the small (2 to 3 people) to the significantly larger (~20-50 people); the largest of such groups are associated with mega-projects, such as the human genome project and the Large Hadron Collider-based search for the Higgs boson (see: Physics paper sets record with more than 5,000 authors).  A look at this site [link] describing the human genome project reveals two aspects of such mega-science: 1) while many thousands of people were involved [see Initial sequencing and analysis of the human genome], generally only the “big names” are singled out for valorization (e.g., receiving a Nobel Prize); and 2) there would be little or no progress without the general scientific community that evaluates and extends ideas and observations. In this context, “lead investigators” are charged primarily with securing the funds needed to mobilize such groups, convincing funders that the work is significant; it is the members of the group who work out the technical details and enable the project to succeed.

As with many such social groups, there are systems in play that serve to establish the status of the individuals involved – something necessary (apparently) in a system in which individuals compete for jobs, positions, and resources.  Generally, one’s status is established through recommendations from others in the field, often the senior member(s) of one’s research group or the (generally small) group of senior scientists who work in the same or a closely related area. Professional status is particularly critical in academia, where the number of senior positions (e.g. tenured or tenure-track professorships) is limited. The result is a system that is increasingly susceptible to the formation of clubs, membership in which is often determined by who knows whom, rather than who has done what (see Steve McKnight’s “The curse of committees and clubs”). Over time, scientific social status translates into who is considered productive, important, trustworthy, or (using an oft-misused term) brilliant. Achieving status can mean putting up with abusive and unwanted behaviors (particularly sexual ones). Examples of such behavior have recently been emerging with increasing frequency, and have been extensively described elsewhere (see Confronting Sexual Harassment in Science; More universities must confront sexual harassment; What’s to be done about the numerous reports of faculty misconduct dating back years and even decades?; Academia needs to confront sexism; and ‘The Trouble With Girls’: The Enduring Sexism in Science).

So why is abusive behavior tolerated?  One might argue that this reflects humans’ current and historical obsession with “stars” – pharaohs, kings, and dictators – as isolated geniuses who make things work. Perhaps the most visible example of such abused scientists (although there are in fact many others: see History’s Most Overlooked Scientists) is Rosalind Franklin, whose data were essential to solving the structure of double-stranded DNA, yet whose contributions were consistently and systematically minimized – a clear example of sexist marginalization. In this light, many is the technician who got an experiment to “work,” only for their research supervisor to be awarded the prizes associated with the breakthrough (2).

Amplifying the star effect is the role of research status at the institutional level; an institution’s academic ranking is often based upon the presence of faculty “stars.” Perhaps surprisingly to those outside of academia, an institution’s research status, as reflected in the number of stars on staff, often trumps its educational effectiveness, particularly with undergraduates – that is, the people who pay the bulk of the institution’s running costs. In this light, it is not surprising that research stars who display various abusive behaviors (often toward women) are shielded by their institutions from public censure.

So what is to be done? My own modest proposal (to be described in more detail in a later post) is to increase the emphasis on institutions’ (and their departments’) effectiveness at fostering undergraduate educational success. This would provide a counter-balancing force that could (might?) place research status in a more realistic context.

a footnote or two:

  1.  on the assumption that there is nothing but a material world.
  2. Although I am no star, I would acknowledge Joe Dent, who worked out the whole-mount immunocytochemical methods that we have used extensively in our work over the years.
  3. Thanks to Becky for editorial comments as well as a dramatic reading!

Humanized mice & porcinized people


Updates:  12 January 2022

7 December 2020: US FDA declares genetically modified pork ‘safe to eat’

A practical benefit, from a scientific and medical perspective, of the evolutionary unity of life (link) is the molecular and cellular similarity between different types of organisms. Even though humans and bacteria diverged more than 2 billion years ago (give or take), the molecular-level conservation of key systems makes it possible for human insulin to be synthesized in and secreted by bacteria, and for pig-derived heart valves to be used to replace defective human heart valves (see link). Similarly, while mice, pigs, and people clearly differ from one another in important ways, they have, essentially, all of the same body parts. Such underlying similarities raise interesting experimental and therapeutic possibilities.

A (now) classic way to study the phenotypic effects of human-specific versions of genes is to introduce these changes into a model organism, such as a mouse (for a review of human brain-specific genes, see link).  An example of such a study involves the gene that encodes the protein foxp2, a protein involved in the regulation of gene expression (a transcription factor). The human foxp2 protein differs from the foxp2 protein of other primates at two positions; these two amino acid changes alter the activity of the human protein, that is, the ensemble of genes that it regulates. That foxp2 plays an important role in humans was revealed through studies of individuals in a family that displayed a severe language disorder linked to a mutation that disrupts the function of the foxp2 protein. Individuals carrying this mutant foxp2 allele display speech apraxia, a “severe impairment in the selection and sequencing of fine oral and facial movements, the ability to break up words into their constituent phonemes, and the production and comprehension of word inflections and syntax” (cited in Bae et al, 2015).  Male mice that carry this foxp2 mutation display changes in the “song” that they sing to female mice (1), while mice carrying a humanized form of foxp2 display changes in “dopamine levels, dendrite morphology, gene expression and synaptic plasticity” in a subset of CNS neurons (2).  While there are many differences between mice and humans, such studies suggest that changes in foxp2 played a role in human evolution, and in human speech in particular.

Another way to study the role of human genes using the mouse as a model system is to generate what are known as chimeras, named after the creature in Greek mythology composed of parts of multiple organisms.  A couple of years ago, Goldman and colleagues (3) reported that human glial progenitor cells (hGPCs), when introduced into immune-compromised mice (to circumvent tissue rejection), could displace the mouse’s own glia, replacing them with human glial cells. Glial cells are the major non-neuronal component of the central nervous system. Once thought of as passive “support” cells, it is now clear that the two major types of glia, known as astrocytes and oligodendrocytes, play a number of important roles in neural functioning [back track post].  In their early studies, Goldman and colleagues found that the neurological defects associated with the shiverer mutation, a mutation that disrupts the normal behavior of oligodendrocytes, could be rescued by the implantation of normal hGPCs (4).  Such studies confirmed what was already known: that the shiverer mutation disrupts the normal function of myelin, the insulating structure around axons that dramatically speeds the rate at which neuronal signals (action potentials) move down axons and activate the links between neurons (synapses). In the central nervous system, myelin is produced by oligodendrocytes as they ensheath neuronal axons.  Human oligodendrocytes derived from hGPCs displaced the mouse’s mutation-carrying oligodendrocytes and rescued the shiverer mouse’s mutation-associated neurological defects.

Subsequently, Goldman and associates used a variant of this approach to introduce hGPCs (derived from human embryonic stem cells) carrying either a normal or a mutant version of the Huntingtin protein, a protein associated with the severe neural disease Huntington’s chorea (OMIM: 143100) (5).  Their studies strongly support a model that traces the defects associated with human Huntington’s disease to defects in glia.  This same research group has also generated hGPCs from patient-derived, induced pluripotent stem cells (patient-derived HiPSCs); in this case, the patients had been diagnosed with childhood-onset schizophrenia (SCZ) [link] (6).  Skin biopsies were taken from both normal children and children diagnosed with SCZ; fibroblasts were isolated and reprogrammed to form human iPSCs. These iPSCs were treated so that they formed hGPCs, which were then injected into mice to generate chimeric (human glial/mouse neuronal) animals. The authors reported systematic differences in the effects of control and SCZ-derived hGPCs: “SCZ glial mice showed reduced prepulse inhibition and abnormal behavior, including excessive anxiety, antisocial traits, and disturbed sleep”, a result that suggests that defects in glial behavior underlie some aspects of the human SCZ phenotype.

The use of human glial chimeric mice provides a powerful research tool for examining the molecular and cellular bases of a subset of human neurological disorders.  Does it raise the question of making mice more human?  Not for me, but perhaps I do not appreciate the more subtle philosophical and ethical issues involved. The mice are still clearly mice: most of their nervous systems are composed of mouse cells, and the overall morphology, size, composition, and organization of their central nervous systems are mouse-derived and mouse-like. The situation becomes rather more complex, and potentially more therapeutically useful, when one talks about generating different types of chimeric animals, or about using newly developed genetic engineering tools (the CRISPR-Cas9 system, derived from prokaryotes) that greatly simplify and improve the specificity of the targeted manipulation of specific genes (link).  In these studies the animal of choice is not the mouse but the pig, which, because of its larger size, produces organs for transplantation that are similar in size to the organs of people (see link).  While similar in size, there are two issues that complicate pig-to-human organ transplantation: first, the human immune system-mediated rejection of foreign tissue, and second, the possibility that transplantation of porcine organs will lead to the infection of the human recipient with porcine retroviruses.

The issue of rejection (pig into human), always a serious problem, is further exacerbated by the presence in pigs of a gene encoding the enzyme α-1,3 galactosyl transferase (GGTA1). GGTA1 catalyzes the addition of the gal-epitope to a number of cell surface proteins. The gal-epitope is “expressed on the tissues of all mammals except humans and subhuman primates, which have antibodies against the epitope” (7). The result is that pig organs provoke an extremely strong immune (rejection) response in humans.  The obvious technical fix to this (and related problems) is to remove the gal-epitope from pig cells by deleting the gene encoding the GGTA1 enzyme (see 8). It is worth noting that “organs from genetically engineered animals have enjoyed markedly improved survivals in non-human primates” (see Sachs & Gall, 2009).

The second obstacle to pig → human transplantation is the presence of retroviruses within the pig genome.  All vertebrate genomes, including those of humans, contain many inserted retroviruses; almost 50% of the human genome is retrovirus-derived sequence (an example of unintelligent design if ever there was one). Most of these endogenous retroviruses are “under control” and normally benign (see 9). The concern, however, is that the retroviruses present in pig cells could be activated when introduced into humans. To remove (or minimize) this possibility, Niu et al set out to use the CRISPR-Cas9 system to delete these porcine endogenous retroviral sequences (PERVs) from the pig genome; they appear to have succeeded, generating a number of genetically modified pigs without PERVs (see 10).  The hope is that organs from PERV-free pigs in which antigen-generating genes, such as the one encoding α-1,3 galactosyl transferase, have also been removed or inactivated, used together with more sophisticated inhibitors of tissue rejection, will provide an essentially unlimited supply of pig organs for heart and other organ transplantation (see 11). Such a supply would alleviate the delays in transplantation, avoiding both deaths among sick people and the often brutal and criminal harvesting of organs carried out in some countries.

The final strategy being explored is to use genetically modified hosts and patient-derived iPSCs to generate fully patient-compatible human organs. To date, pilot studies have been carried out, apparently successfully, using rat embryos with mouse stem cells (see 12 and 13), with much more preliminary studies using pig embryos and human iPSCs (see 14).  The approach involves what are known as chimeric embryos.  In this case, host animals are genetically modified so that they cannot generate the organ of choice. Typically this is done by mutating a key gene that encodes a transcription factor directly involved in the formation of the organ; embryos missing the pancreas, kidneys, heart, or eyes can be generated.  In an embryo that cannot make one of these organs (which can be a lethal defect), the introduction of stem cells from an animal that can form the organ can lead to the formation of an organ composed primarily of cells derived from the transplanted (human) cells.

At this point the strategy appears to work reasonably well for mouse-rat chimeras; mice and rats are much more closely related, evolutionarily, than are humans and pigs. Early studies on pig-human chimeras appear to be dramatically less efficient: Jun Wu has been reported as saying of human-pig chimeric embryos that “we estimate [each had] about one in 100,000 human cells” (see 15), with the rest being pig cells.  The bottom line appears to be that there are many technical hurdles to overcome before this method of developing patient-compatible human organs becomes feasible.  Closer to reality are PERV-free, gal-antigen-free, pig-derived, human-compatible organs. How such life-saving organs will be received by the general public – not to mention by religious and philosophical groups that reject the consumption of animals in general, or of pigs in particular – remains to be seen.

figures reinserted & minor edits 23 October 2020 – new link 17 December 2020.
references cited

  1. A Foxp2 Mutation Implicated in Human Speech Deficits Alters Sequencing of Ultrasonic Vocalizations in Adult Male Mice.
  2. A Humanized Version of Foxp2 Affects Cortico-Basal Ganglia Circuits in Mice
  3. Modeling cognition and disease using human glial chimeric mice.
  4. Human iPSC-derived oligodendrocyte progenitor cells can myelinate and rescue a mouse model of congenital hypomyelination.
  5. Human glia can both induce and rescue aspects of disease phenotype in Huntington disease
  6. Human iPSC Glial Mouse Chimeras Reveal Glial Contributions to Schizophrenia.
  7. The potential advantages of transplanting organs from pig to man: A transplant Surgeon’s view
  8. see Sachs and Gall. 2009. Genetic manipulation in pigs. and Fisher et al., 2016. Efficient production of multi-modified pigs for xenotransplantation by ‘combineering’, gene stacking and gene editing
  9. Hurst & Magiorkinis. 2017. Epigenetic Control of Human Endogenous Retrovirus Expression: Focus on Regulation of Long-Terminal Repeats (LTRs)
  10. Niu et al., 2017. Inactivation of porcine endogenous retrovirus in pigs using CRISPR-Cas9
  11. Zhang  2017. Genetically Engineering Pigs to Grow Organs for People
  12. Kobayashi et al., 2010. Generation of rat pancreas in mouse by interspecific blastocyst injection of pluripotent stem cells.
  13. Kobayashi et al., 2015. Targeted organ generation using Mixl1-inducible mouse pluripotent stem cells in blastocyst complementation.
  14. Wu et al., 2017. Interspecies Chimerism with Mammalian Pluripotent Stem Cells
  15. Human-Pig Hybrid Created in the Lab—Here Are the Facts