Making education matter in higher education


It may seem self-evident that providing an effective education, the type of educational experience that leads to a useful bachelor's degree and serves as the foundation for life-long learning and growth, should be a prime aspirational driver of colleges and universities (1).  We might even expect academic departments to compete with one another to excel in the quality and effectiveness of their educational outcomes; they certainly compete to enhance their research reputations, a competition that is, at least in part, responsible for the retention of faculty, even those who stray from an ethical path. Institutions compete to lure research stars away from one another, often offering substantial pay raises and research support (“Recruiting or academic poaching?”).  Yet, in my own experience, a department’s performance in undergraduate educational outcomes never figures when departments compete for institutional resources, such as support for students, new faculty positions, or necessary technical resources (2).

I know of no example (and would be glad to hear of any) of a university hiring a professor based primarily on their effectiveness as an instructor (3).

In my last post, I suggested that increasing the emphasis on measures of departments’ educational effectiveness could help rebalance the importance of educational and research reputations, and perhaps incentivize institutions to be more consistent in enforcing ethical rules involving research malpractice and the abuse of students, both sexual and professional. Imagine if administrators (deans, provosts, and the like) were to withhold resources from departments performing below acceptable and competitive norms in terms of undergraduate educational outcomes.

Outsourced teaching: motives, means and impacts

Sadly, as things stand, and particularly in many science departments, undergraduate educational outcomes have little if any impact on the perceived status of a department, as articulated by campus administrators. The result is that faculty are not incentivized to, and so rarely do, seriously consider the effectiveness of their department’s course requirements, a discussion that would of necessity include evaluating whether a course’s learning goals are coherent and realistic, whether the course is delivered effectively, whether it engages students (or is deemed irrelevant), and whether students achieve the desired learning outcomes, in terms of knowledge and skills, including the ability to apply that knowledge effectively to new situations.  Departments, particularly research-focused (and research-dependent) departments, often have faculty with low teaching loads, a situation that incentivizes the “outsourcing” of key aspects of their educational responsibilities.  Such outsourcing comes in two distinct forms. The first is requiring majors to take courses offered by other departments, even if those courses are not well designed, well delivered, or (in the worst cases) relevant to the major.  A classic example is requiring molecular biology students to take macroscopic physics or conventional calculus courses, without regard to whether the materials presented in those courses are ever used within the major or the discipline.  Expecting a student majoring in the life sciences to embrace a course that (often rightly) seems irrelevant to their discipline can alienate the student, and poses an unnecessary obstacle to success rather than providing needed knowledge and skills.  Generally, the incentives necessary to generate a relevant course, for example a molecular-level physics course that would engage molecular biology students, are simply not there.
A variation on this situation is requiring courses that are poorly designed or delivered (general chemistry is often used as the poster child). These are courses with high failure rates, sometimes justified in terms of “necessary rigor”, when in fact better course design could (and has) resulted in lower failure rates and improved learning outcomes.  In addition, there are perverse incentives associated with requiring “weed out” courses offered by other departments: they reduce the number of courses a department’s faculty needs to teach, and can lead to fewer students proceeding into upper-division courses.

The second type of outsourcing involves excusing tenure-track faculty from teaching introductory courses and replacing them with lower-paid instructors or lecturers.  Regardless of whether instructors, lecturers, or tenure-track professors are better teachers, replacing faculty with instructors sends an implicit message to students.  At the same time, the freedom of instructors/lecturers to adopt an effective (Socratic) approach to teaching is often severely constrained; common exams can force classes to move in lockstep, whether or not that pace is optimal for student engagement and learning. Generally, instructors/lecturers do not have the freedom to adjust what they teach, that is, to modify the emphasis and time they spend on specific topics in response to their students’ needs. Instruction suffers when teachers cannot customize their interactions with students in response to where those students are intellectually.  This is particularly detrimental for underrepresented or underprepared students. Generally, a flexible and adaptive approach to instruction (including ancillary classes on how to cope with college: see An alternative to remedial college classes gets results) can address many issues and bring the majority of students to a level of competence, whereas tracking students into remedial classes can succeed in driving them out of a major or out of college (see Colleges Reinvent Classes to Keep More Students in Science, Redesigning a Large-Enrollment Introductory Biology Course, and Does Remediation Work for All Students?).

How can we address this imbalance and reset the pecking order so that effective educational efforts actually matter to a department?

My (modest) suggestion is to base departmental rewards on objective measures of educational effectiveness.  By rewards I mean both rewards at the level of individuals (salary and status) and support for graduate students, faculty positions, start-up funds, etc.  What if, for example, faculty in departments that excel at educating their students received a teaching bonus? Or if the number of institutionally supported graduate students within a department was determined not by the number of classes those graduate students taught (courses that might not be particularly effective or engaging) but by the department’s undergraduate educational effectiveness, as measured by retention, time to degree, and learning outcomes (see below)?  The result could well be a drive within departments to improve course and curricular effectiveness so as to maximize education-linked rewards.  Laboratory courses, the courses most often taught by science graduate students, are multi-hour, schedule-disrupting events of limited demonstrable educational effectiveness. Removing requirements for lab courses deemed unnecessary, or generating more effective versions, would be actively rewarded (of course, sanctions for continuing to offer ineffective courses would also be useful, but politically more problematic).

A similar situation applies when a biology department requires its majors to take 5-credit-hour physics or chemistry courses.  Currently it is “easy” for a department to require such courses without critically evaluating whether they are “worth it” educationally.  Imagine how a department’s choice of required courses would change if high failure rates (which I would argue are a proxy for poorly designed and delivered courses) directly reduced the rewards reaped by the department. There would be an incentive to look critically at such courses, to determine whether they are necessary and, if so, whether they are well designed and delivered. Departments would serve their own interests by investing in the development of courses that better serve their disciplinary goals, courses likely to engage their students’ interests.

So how do we measure a department’s educational efficacy?

There are three obvious metrics: i) retention of students as majors (or, in the case of “service courses” for non-majors, whether students master what the course claims to teach); ii) time to degree (by which I mean the percentage of students who graduate in 4 years, rather than the 6-year time point reported in response to federal regulations (six year graduation rate | background on graduation rates)); and iii) objective measures of the learning outcomes and skills students attain. The first two are easy: universities already know these numbers.  Moreover, they are directly influenced by degree requirements; requiring students to take boring and/or apparently irrelevant courses drives a subset of students out of a major.  By making courses relevant and engaging, more students can be retained in a degree program, and thoughtful course design can help students pass through even the most rigorous (difficult) of such courses. The third metric, learning outcomes, is significantly more challenging to measure, since universal metrics are (largely) missing or superficial.  A few disciplines, such as chemistry, support standardized assessments, although one could argue about what such assessments measure.  Nevertheless, meaningful outcome measures are necessary, in much the same way that Law and Medical boards and the Fundamentals of Engineering exam help ensure (although they do not guarantee) the competence of practitioners. One could imagine using parts of standardized exams, such as discipline-specific GRE exams, to generate outcome metrics, although more informative assessment instruments would clearly be preferable.
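The first two metrics are straightforward to compute from data universities already collect. As a rough sketch (the record format and field names below are invented for illustration, not drawn from any real student-information system):

```python
# Sketch of the first two proposed metrics, computed from hypothetical
# registrar records. Each record notes whether a student declared the
# major, whether they graduated in it, and how many years that took.

def retention_rate(students):
    """Fraction of declared majors who go on to complete the major."""
    declared = [s for s in students if s["declared"]]
    if not declared:
        return 0.0
    return sum(s["graduated_in_major"] for s in declared) / len(declared)

def four_year_rate(students):
    """Fraction of graduates finishing within 4 years
    (rather than the federally reported 6-year window)."""
    grads = [s for s in students if s["graduated_in_major"]]
    if not grads:
        return 0.0
    return sum(s["years_to_degree"] <= 4 for s in grads) / len(grads)

cohort = [
    {"declared": True, "graduated_in_major": True,  "years_to_degree": 4},
    {"declared": True, "graduated_in_major": True,  "years_to_degree": 5},
    {"declared": True, "graduated_in_major": False, "years_to_degree": None},
    {"declared": True, "graduated_in_major": True,  "years_to_degree": 4},
]

print(retention_rate(cohort))  # 0.75
print(four_year_rate(cohort))
```

One design choice worth making explicit: the 4-year rate here is computed over graduates; computing it over all declared majors instead would fold retention into the same number, which may or may not be what an administrator wants.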
The initiative here could be taken by professional societies, college consortia (such as the AAU), and research foundations. It could serve as a critical driver for education reform, increased effectiveness, and improved cost-benefit outcomes, something that could help address the growing income inequality in our country and make success in higher education an important factor contributing to an institution’s reputation.

 

A footnote or two…
 
1. My comments are primarily focused on research universities, since that is where my experience lies; these are, of course, the majority of the largest universities (in a student population sense).
 
2. Although my experience is limited, having spent my professorial career at a single institution, conversations with others lead me to conclude that it is not unique.
 
3. The one obvious exception would be the hiring of coaches of sports teams, since their success in teaching (coaching) is more directly discernible and impactful on institutional finances and reputation.
 
minor edits – 16 March 2020

Reverse Dunning-Kruger effects and science education

The Dunning-Kruger (DK) effect is the well-established phenomenon that people tend to overestimate their understanding of a particular topic or their skill at a particular task, often to a dramatic degree [link][link]. We see examples of the DK effect throughout society; the current administration (unfortunately) and the nutritional supplements / homeopathy section of Whole Foods spring to mind. But there is a less well-recognized “reverse DK” effect, namely the tendency of instructors, and a range of other public communicators, to overestimate what the people they are talking to are prepared to understand, appreciate, and accurately apply. The efforts of science communicators and instructors can be entertaining, but the failure to recognize and address the reverse DK effect results in ineffective educational efforts, efforts that can themselves help generate the illusion of understanding in students and the broader public (discussed here). While a confused understanding of the intricacies of cosmology or particle physics can be relatively harmless in its social and personal implications, similar misunderstandings become personally and publicly significant when topics such as vaccination, alternative medical treatments, and climate change are in play.

There are two synergistic aspects to the reverse DK effect that directly impact science instruction: the need to understand what one’s audience does not understand, together with the need to clearly articulate the conceptual underpinnings of the subject to be taught. This matters in part because modern science has, over the last century or so, become increasingly counter-intuitive at its core, a situation that can cause serious confusions that educators must address directly and explicitly. The first reverse DK effect involves the extent to which the instructor (and by implication the course and textbook designer) has an accurate appreciation of what students think, or think they know: what ideas they have previously been exposed to, and what they actually understand about the implications of those ideas.  Are they prepared to learn the subject, or does the instructor first have to acknowledge and address conceptual confusions and build or rebuild base concepts?  While the best way to discover what students think is arguably a Socratic discussion, this only rarely occurs, for a range of practical reasons. In its place, a number of concept-inventory-type testing instruments have been generated to reveal whether various pre-identified common confusions exist in students’ thinking. Knowing the results of such assessments BEFORE instruction can help the instructor customize the learning environment and the content presented, and decide whether to give students the space to work with these ideas and develop a more accurate and nuanced understanding of the topic.  Of course, this implies that instructors have the flexibility to adjust the pace and focus of their classroom activities. Do they take the time needed to address student issues, or do they feel pressured to plow through the prescribed course content, come hell, high water, or cascading student befuddlement?

A complementary aspect of the reverse DK effect, well illustrated in the “why magnets attract” interview with the physicist Richard Feynman, is that the instructor, course designer, or textbook author needs a deep and accurate appreciation of the core knowledge underlying the topic they are teaching. Such a robust conceptual understanding makes it possible to convey the complexities involved in a particular process; it explicitly values appreciating a topic over memorizing it, and focuses on the general rather than the idiosyncratic. A classic example from many an introductory biology course is the difference between expecting students to remember the steps in glycolysis or the Krebs cycle, as opposed to the general principles underlying the non-equilibrium reaction networks involved in all biological functions: networks based on coupled chemical reactions and governed by the behaviors of thermodynamically favorable and unfavorable reactions. Without an explicit discussion of these topics, all too often students are required to memorize names without understanding the underlying rationale driving the processes involved; that is, why the system behaves as it does.  Instructors may also offer false “rubber band” analogies or heuristics to explain complex phenomena (see the Feynman video, 6:18 minutes in). A similar situation occurs when considering how molecules come to associate with and dissociate from one another, for example in the process of regulating gene expression or repairing mutations in DNA. Most textbooks simply do not discuss the physicochemical processes underlying binding specificity and association and dissociation rates, such as the energy changes associated with molecular interactions and thermal collisions (don’t believe me? look for yourself!).
But these factors are essential if a student is to understand the dynamics of gene expression [link], as well as the specificity of modern methods of genetic engineering, such as restriction enzymes, the polymerase chain reaction, and CRISPR-Cas9-mediated mutagenesis. By focusing on the underlying processes we can avoid their trivialization and enable students to apply basic principles to a broad range of situations, understanding, for example, exactly why CRISPR-Cas9-directed mutagenesis can be targeted to a single site within a multibillion-base-pair genome.

Of course, as in the case of recognizing and responding to student misunderstandings and knowledge gaps, a thoughtful consideration of underlying processes takes course time, time that trades the development of a working understanding of core processes and principles for broader “coverage” of frequently disconnected facts, the memorization and regurgitation of which has been privileged over understanding why those facts are worth knowing. If our goal is for students to emerge from a course with an accurate understanding of the basic processes involved rather than a superficial familiarity with a plethora of unrelated facts, however, a Socratic interaction with the topic is essential. What assumptions are being made, where do they come from, how do they constrain the system, and what are their implications?  Do we understand why the system behaves the way it does? In this light, it is a serious educational mystery that many molecular biology / biochemistry curricula fail to introduce students to the range of selective and non-selective evolutionary mechanisms (including social and sexual selection – see link), that is, the processes that have shaped modern organisms.

Both aspects of the reverse DK effect impact educational outcomes. Overcoming the reverse DK effect depends on educational institutions committing to effective and engaging course design, measured in terms of retention, time to degree, and a robust inquiry into actual student learning. Such an institutional dedication to effective course design and delivery is necessary to empower instructors and course designers: people who bring a deep understanding of the topics taught, their conceptual foundations, and their historical development to their students, and who must have the flexibility and authority to alter the pace (and design) of a course or a curriculum when they discover that their students lack the pre-existing expertise necessary for learning, or that the course materials (textbooks) do not present or emphasize necessary ideas. Unfortunately, all too often instructors, particularly in introductory-level college science courses, are not the masters of their ships; that is, they are not rewarded for generating more effective course materials. An emphasis on course “coverage” over learning, whether driven by peer pressure, institutional apathy, or both, generates unnecessary obstacles to both student engagement and content mastery.  To reverse the effects of the reverse DK effect, we need to encourage instructors, course designers, and departments to see the presentation of core disciplinary observations and concepts as the intellectually challenging and valuable endeavor that it is. In its absence, there are serious (and growing) pressures to trivialize or obscure the educational experience, leading to the socially and personally damaging growth of fake knowledge.

Is it time to start worrying about conscious human “mini-brains”?

A human iPSC cerebral organoid in which pigmented retinal epithelial cells can be seen (from the work of McClure-Begley, Mike Klymkowsky, and William Old.)

The fact that experiments on people are severely constrained is a major obstacle in understanding human development and disease.  Some of these constraints are moral and ethical, and are clearly appropriate and necessary given the depressing history of medical atrocities.  Others are technical, associated with the slow pace of human development. The combination of moral and technical factors has driven experimental biologists to explore the behavior of a wide range of “model systems”, from bacteria, yeasts, fruit flies, and worms to fish, frogs, birds, rodents, and primates.  Justified by the deep evolutionary continuity between these organisms (after all, all organisms appear to be descended from a single common ancestor and share many molecular features), evolution-based studies of model systems have led to many therapeutically valuable insights into humans, something that I suspect a devotee of intelligent design creationism would be hard pressed to predict or explain (post link).

While humans are closely related to other mammals, it is immediately obvious that there are important differences: after all, people are instantly recognizable from members of other closely related species, and certainly look and behave differently from mice. For example, the surface layer of our brain is extensively folded (such brains are known as gyrencephalic), while the brain of a mouse is smooth as a baby’s bottom (and referred to as lissencephalic). In humans, the failure of the brain cortex to fold is known as lissencephaly, a disorder associated with several severe neurological defects. With the advent of more and more genomic sequence data, we can identify human-specific molecular (genomic) differences. Many of these sequence differences occur in regions of our DNA that regulate when and where specific genes are expressed.  Sholtis & Noonan (1) provide an example: the HACNS1 locus is an 81-base-pair region that is highly conserved in various vertebrates from birds to chimpanzees; 13 human-specific changes in this sequence appear to alter its activity, leading to human-specific changes in the expression of nearby genes (↓). At this point, ~1000 genetic elements that differ between humans and other vertebrates have been identified, and more are likely to emerge (2).  Such human-specific changes can make modeling human-specific behaviors, at the cellular, tissue, organ, and organism levels, in non-human model systems difficult and problematic (3, 4).  It is for this reason that scientists have attempted to generate better human-specific systems.

One particularly promising approach is based on what are known as embryonic stem cells (ESCs) or pluripotent stem cells (PSCs). Human embryonic stem cells are generated from the inner cell mass of a human embryo, and so involve the destruction of that embryo, which raises a number of ethical and religious concerns as to when “life begins” (5) (more on that in a future post).  Human pluripotent stem cells are isolated from adult tissues, but in most cases require invasive harvesting methods that limit their usefulness.  Both ESCs and PSCs can be grown in the laboratory and can be induced to differentiate into what are known as gastruloids.  Such gastruloids can develop anterior-posterior (head-tail), dorsal-ventral (back-belly), and left-right axes analogous to those found in embryos (6) and adults (top panel ↓). In the case of PSCs, the gastruloid (bottom panel ↓) is essentially a twin of the organism from which the PSCs were derived, a situation that raises difficult questions: is it a distinct individual? is it the property of the donor or the creation of a technician?  The situation will be further complicated if (or rather, when) it becomes possible to generate viable embryos from such gastruloids.

 

The Nobel prize-winning work of Kazutoshi Takahashi and Shinya Yamanaka (7), who devised methods to take differentiated (somatic) human cells and reprogram them into ESC/PSC-like cells, known as induced pluripotent stem cells (iPSCs) (8), represented a technical breakthrough that jump-started this field. While the original methods derived sample cells from tissue biopsies, it is now possible to reprogram kidney epithelial cells recovered from urine, a non-invasive approach (9, 10).  Subsequently, Madeline Lancaster, Jürgen Knoblich, and colleagues devised an approach by which such cells could be induced to form what they termed “cerebral organoids” (although Yoshiki Sasai and colleagues were the first to generate neuronal organoids); they used this method to examine the developmental defects associated with microcephaly (11).  The value of the approach was rapidly recognized and applied to a number of human conditions, including lissencephaly (12), Zika-virus-induced microcephaly (13), and Down syndrome (14); investigators have begun to exploit these methods to study a range of human diseases.

The production of cerebral organoids from reprogrammed human somatic cells has also attracted the attention of the media (15).  While “mini-brain” is certainly a catchier name, it is a less accurate description of a cerebral organoid (itself possibly a bit of an overstatement, since it is not clear exactly how “cerebral” such organoids are). For example, the developing brain is patterned by embryonic signals that establish its asymmetries; it forms at the anterior end of the neural tube (the nascent central nervous system and spinal cord) with distinctive anterior-posterior, dorsal-ventral, and left-right asymmetries, something that simple cerebral organoids do not display.  Moreover, current methods for generating cerebral organoids involve primarily what are known as neuroectodermal cells; our nervous system (and that of other vertebrates) is a specialized form of the embryo’s surface layer that gets internalized during development. In the embryo, the developing neuroectoderm interacts with cells of the circulatory system (capillaries, veins, and arteries), formed by endothelial cells and the pericytes that surround them. These cells, together with glial cells (astrocytes, a non-neuronal cell type), combine to form the blood-brain barrier.  Other glial cells (oligodendrocytes) are also present; in contrast, both types of glia (astrocytes and oligodendrocytes) are rare in the current generation of cerebral organoids. Finally, there are microglial cells, immune system cells that originate outside the neuroectoderm; they invade and interact with neurons and glia as part of the brain’s dynamic neural system. The left panel of the figure shows, in highly schematic form, how these cells interact (16). The right panel is a drawing of neural tissue stained by the Golgi method (17), which reveals only ~3-5% of the neurons present.
There are at least as many glial cells present, as well as microglia, none of which are visible in the image. At this point, cerebral organoids typically contain few astrocytes and oligodendrocytes, no vasculature, and no microglia. Moreover, they grow to only about 1 to 3 mm in diameter over the course of 6 to 9 months, significantly smaller in volume than a fetal or newborn brain. While cerebral organoids can generate structures characteristic of retinal pigment epithelia (top figure) and photo-responsive neurons (18), such as those associated with the retina (an extension of the brain), it is not at all clear that there is any significant sensory input into the neuronal networks formed within a cerebral organoid, or any significant output, at least compared to the role the human brain plays in controlling bodily and mental functions.

The reasonable question, then, is whether a cerebral organoid, a relatively simple (though still complex) system of cells, is conscious. The question becomes more reasonable as increasingly complex systems are developed, and such work is proceeding apace. Already researchers are manipulating the developing organoid’s environment to facilitate axis formation, and one can anticipate the introduction of vasculature. Indeed, the generation of microglia-like cells from iPSCs has been reported; such cells can be incorporated into cerebral organoids, where they appear to respond to neuronal damage in much the same way as microglia in intact neural tissue (19).

We can ask ourselves: what would convince us that a cerebral organoid, living within a laboratory incubator, was conscious? How would such consciousness manifest itself? Through some specific pattern of neural activity, perhaps?  As a biologist, albeit one primarily interested in molecular and cellular systems, I discount the idea, proposed by some physicists and philosophers, as well as the more mystical, that consciousness is a universal property of matter (20, 21).  I take consciousness to be an emergent property of complex neural systems, generated by evolutionary mechanisms, built during embryonic and subsequent development, and influenced by social interactions (BLOG LINK), using information encoded within the human genome (something similar to this: A New Theory Explains How Consciousness Evolved). While this is a future concern, in a world full of more immediate and pressing issues, it will be interesting to listen to the academic, social, and political debate on what to do with mini-brains as they grow in complexity and, perhaps inevitably, toward consciousness.

 

Footnotes and references

Thanks to Rebecca Klymkowsky, Esq. and Joshua Sanes, Ph.D. for editing and disciplinary support.

  1. Gene regulation and the origins of human biological uniqueness
  2.  See also Human-specific loss of regulatory DNA and the evolution of human-specific traits
  3. The mouse trap
  4. Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills
  5. The status of the human embryo in various religions
  6. Interactions between Nodal and Wnt signalling Drive Robust Symmetry Breaking and Axial Organisation in Gastruloids (Embryonic Organoids)
  7.  Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors
  8.  How iPS cells changed the world
  9.  Generation of Induced Pluripotent Stem Cells from Urine
  10. Urine-derived induced pluripotent stem cells as a modeling tool to study rare human diseases
  11. Cerebral organoids model human brain development and microcephaly.
  12. Human iPSC-Derived Cerebral Organoids Model Cellular Features of Lissencephaly and Reveal Prolonged Mitosis of Outer Radial Glia
  13. Using brain organoids to understand Zika virus-induced microcephaly
  14. Probing Down Syndrome with Mini Brains
  15. As an example, see The Beauty of “Mini Brains”
  16. Derived from Central nervous system pericytes in health and disease
  17. Golgi’s method
  18. Cell diversity and network dynamics in photosensitive human brain organoids
  19. Efficient derivation of microglia-like cells from human pluripotent stem cells
  20. The strange link between the human mind and quantum physics – BBC
  21. Can Quantum Physics Explain Consciousness?

Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically-based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of evolutionary mechanisms within biology, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led to the rejection of Darwinian principles by many working scientists around the turn of the 20th century (see Bowler’s The Eclipse of Darwinism + link).  Educational research studies, such as our own “Understanding randomness and its impact on student learning“, reinforce the fact that ideas involving stochastic processes are relevant to evolutionary, as well as cellular and molecular, biology and are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular and cellular level processes, including the origin of various diseases, particularly cancer [Cancer is partly caused by bad luck](1).

All of which leaves the critical question, at least for educators, of how to best teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals”, on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather that they develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.

There are a number of educational strategies, in part depending upon one’s disciplinary perspective, on how to approach teaching evolutionary processes. Here I consider just one, based on my background in cell and molecular biology.  Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.”  It is one of a number of recently developed web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend a range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals© / coreBio course (with an evolving text available here), we originally used the observation that members of our suborder of primates, the Haplorhini or dry nose primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet; without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals© we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral population of the Haplorhini lost the L-gulonolactone oxidase (GULO) gene (see OMIM) needed for vitamin C synthesis. The remains of the GULO gene found in human and other Haplorhini genomes are mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost due to mutation, it is unlikely, although not impossible, that a subsequent mutation will lead to the repair of the gene. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are to repair it. As the ancestor of the Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nose) group of primates, an event estimated to have occurred around 65 million years ago, its descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
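The role of drift described above can be sketched with a toy Wright-Fisher simulation (all parameters are illustrative, not drawn from real primate data): for a selectively neutral allele, such as a broken GULO gene in a vitamin C-rich environment, the probability of eventual fixation equals its starting frequency, so in a small population even a rare broken variant can, by chance alone, take over entirely.

```python
import random

def wright_fisher_fixation(pop_size, initial_freq, seed=None):
    """Simulate neutral drift of an allele (think: a broken GULO copy)
    in a Wright-Fisher population until it is lost or fixed.
    Returns the final allele frequency (0.0 or 1.0)."""
    rng = random.Random(seed)
    count = int(pop_size * initial_freq)
    while 0 < count < pop_size:
        freq = count / pop_size
        # each of the next generation's pop_size gene copies is drawn
        # at random from the current generation's allele pool
        count = sum(1 for _ in range(pop_size) if rng.random() < freq)
    return count / pop_size

# For a neutral allele, fixation probability equals starting frequency,
# so a variant present in 10% of a small population fixes ~10% of the time.
runs = 2000
fixed = sum(wright_fisher_fixation(50, 0.1, seed=i) for i in range(runs))
print(f"fixation rate ≈ {fixed / runs:.2f} (theory: 0.10)")
```

Running many replicates makes the point viscerally: in each individual population the outcome is unpredictable, yet the ensemble behaves lawfully, which is exactly the aspect of stochastic processes students find hard to accept.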

At this point we can start to use Genomicus to examine the results of evolutionary processes (a YouTube video on using Genomicus)(3).  In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they are the same size (they are not); different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes) can be found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).
We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, that appear to have diverged from other eukaryotes about 1,500,000,000 years ago (1,500 million years ago, abbreviated Mya).  Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are also (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.
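The logic of the synteny argument can be sketched in a few lines of code. The gene names below are invented placeholders, not actual Genomicus output: the point is that if the genes flanking GULO match between two genomes, the regions are syntenic even when GULO itself is missing from one of them.

```python
def shared_neighbors(region_a, region_b):
    """Return the genes present in both chromosomal regions (order ignored),
    a crude proxy for synteny between two species."""
    return set(region_a) & set(region_b)

# Illustrative gene neighborhoods around GULO; these symbols are made up
# for the sketch, not taken from Genomicus or NCBI.
mouse = ["Kif13b", "Hmbox1", "Gulo", "Clu", "Scara3"]
human = ["KIF13B", "HMBOX1", "CLU", "SCARA3"]   # no functional GULO

# normalize case so mouse/human ortholog symbols can be compared
overlap = shared_neighbors([g.upper() for g in mouse],
                           [g.upper() for g in human])
print(sorted(overlap))    # ['CLU', 'HMBOX1', 'KIF13B', 'SCARA3']
print("GULO" in overlap)  # False: conserved neighborhood, missing gene
```

The conserved flanking genes identify the homologous region; the hole where GULO should be is the molecular fossil of its loss in the Haplorhini ancestor.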

Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – Whoa!!! what gives?  The explanation is, it turns out, rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they?  We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily-related) chromosomes of other Haplorhini primates.

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships among the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share the same syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region.  Similarly, we can explain the genomic similarities between humans and their close primate relatives, such as the gorilla, gibbon, orangutan, and chimpanzee, as well as make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils [link].

It remains to be seen how best to use these tools in a classroom context, and whether having students use such tools influences their working understanding, and more generally their acceptance, of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and genetic variations in behavior, anatomy, and disease susceptibility.

Some footnotes:

(1) Interested in a magnetic bumper image? Visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missing (unpredicted and unexplained) by any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (soon to be at UC Berkeley) for bringing it to my attention.

The trivialization of science education

It’s time for universities to accept their role in scientific illiteracy.  

There is a growing problem with scientific illiteracy, and its close relative, scientific over-confidence. While understanding science, by which most people seem to mean technological skills, or even the ability to program a device (1), is purported to be a critical competitive factor in our society, we see a parallel explosion of pseudo-scientific beliefs, often religiously held.  Advocates of a gluten-free paleo-diet battle it out with orthodox vegans for a position on the Mount Rushmore of self-righteousness, while astronomers and astrophysicists rebrand themselves as astrobiologists (a currently imaginary discipline) and a subset of theoretical physicists, and the occasional evolutionary biologist, claim to have rendered ethicists and philosophers obsolete (oh, if it were only so). There are many reasons for this situation, most of which are probably innate to the human condition.  Our roots are in the vitamin C-requiring Haplorhini (dry nose) primate family; we did not evolve to think scientifically, and scientific thinking does not come easily to most of us, or to any of us over long periods of time (2). That the sciences are referred to as disciplines reflects this: it takes constant vigilance, self-reflection, and the critical skepticism of knowledgeable colleagues to build coherent, predictive, and empirically validated models of the Universe (and ourselves).  In point of fact, it is amazing that our models of the Universe have become so accurate, particularly as they are counter-intuitive and often seem incredible, using the true meaning of the word.

Many social institutions claim to be in the business of developing and supporting scientific literacy and disciplinary expertise, most obviously colleges and universities.  Unfortunately, there are several reasons to question the general efficacy of their efforts and several factors that have led to this failure. There is the general tendency (although exactly how widespread is unclear; I cannot find appropriate statistics on this question) of requiring non-science students to take one, two, or more “natural science” courses, often with associated laboratory sections, as a way to “enhance literacy and knowledge of one or more scientific disciplines, and enhance those reasoning and observing skills that are necessary to evaluate issues with scientific content” (source).

That such a requirement will “enable students to understand the current state of knowledge in at least one scientific discipline, with specific reference to important past discoveries and the directions of current development; to gain experience in scientific observation and measurement, in organizing and quantifying results, in drawing conclusions from data, and in understanding the uncertainties and limitations of the results; and to acquire sufficient general scientific vocabulary and methodology to find additional information about scientific issues, to evaluate it critically, and to make informed decisions” (source) suggests a rather serious level of faculty/institutional disdain or apathy for observable learning outcomes, devotional levels of wishful thinking, or simple hubris.  To my knowledge there is no objective evidence to support the premise that such requirements achieve these outcomes – which renders the benefits of such requirements problematic, to say the least (link).

On the other hand, such requirements have clear and measurable costs that go beyond the simple burden of added, and potentially ineffective or off-putting, course credit hours. The frequent requirement for multi-hour laboratory courses limits students’ ability to schedule their other courses.  It would be an interesting study to examine how, independently of benefit, such laboratory course requirements impact students’ retention and time to degree – that is, bluntly put, costs to students and their families.

Now, if there were objective evidence that taking such courses improved students’ understanding of a specific disciplinary science and its application, perhaps the benefit would warrant the cost.  But one can be forgiven for assuming a less charitable driver, namely science departments’ self-interest in using laboratory and other non-major course requirements as a means to support graduate students.  Clearly there is a need for objective metrics of scientific, that is disciplinary, literacy and learning outcomes.

And this brings up another cause for concern.  Recently, there has been a movement within the science education research community to attempt to quantify learning in terms of what are known as “forced choice testing instruments,” that is, tests that rely on true/false and multiple-choice questions, an actively anti-Socratic strategy.  In some cases, these tests claim to be research-based.  As one involved in the development of such a testing instrument (the Biology Concepts Instrument or BCI), it is clear to me that such tests can serve a useful role in helping to identify areas in which student understanding is weak or confused [example], but whether they can provide an accurate or, at the end of the day, meaningful measure of whether students have developed an accurate working understanding of complex concepts and the broader meaning of observations is problematic at best.

Establishing such a level of understanding relies on Socratic, that is, dynamic and adaptive evaluations: can the learner clearly explain, either to other experts or to other students, the source and implications of their assumptions?  This is the gold standard for monitoring disciplinary understanding. It is being increasingly side-lined by those who rely on forced choice tests to evaluate learning outcomes and to support their favorite pedagogical strategies (examples available upon request).  In point of fact, it is often difficult to discern, in most science education research studies, what students have come to master, what exactly they know, what they can explain and what they can do with their knowledge. Rather unfortunately, this is not a problem restricted to non-majors taking science course requirements; majors can also graduate with a fragmented and partially, or totally, incoherent understanding of key ideas and their empirical foundations.

So what are the common features of a functional understanding of a particular scientific discipline, or more accurately, a sub-discipline?  A few ideas seem relevant.  A proficient practitioner needs to be realistic about their own understanding.  We need to teach disciplinary (and general) humility – no one actually understands all aspects of most scientific processes.  This is a point made by Fernbach & Sloman in their recent essay, “Why We Believe Obvious Untruths.”  Humility about our understanding has a number of beneficial aspects.  It helps keep us skeptical when faced with, and asked to accept, sweeping generalizations.

Such skepticism is part of a broader perspective, common among working scientists, namely the ability to distinguish the obvious from the unlikely, the implausible, and the impossible. When considering a scientific claim, the first criterion is whether there is a plausible mechanism that can be called upon to explain it, or whether it violates some well-established “law of nature”. Claims of “zero waste” processes butt up against the laws of thermodynamics.

Going further, we need to consider how an observation or conclusion fits with other well-established principles, which means that we have to be aware of those principles, while acknowledging that we are not universal experts in all aspects of science.  A molecular biologist may recognize that quantum mechanics dictates the geometries of atomic bonding interactions without being able to formally describe the intricacies of a molecule’s wave equation. Similarly, a physicist might think twice before ignoring the evolutionary history of a species and claiming that quantum mechanics explains consciousness, or that consciousness is a universal property of matter.  Such a level of disciplinary expertise can take extended experience to establish, but it is critical to conveying to students what disciplinary mastery involves; it is the major justification for having disciplinary practitioners (professors) as instructors.

From a more prosaic educational perspective, other key factors need to be acknowledged, namely a realistic appreciation of what people can learn in the time available to them, together with at least some understanding of their underlying motivations. That is to say, the relevance of a particular course to disciplinary goals or desired educational outcomes needs to be made explicit and as engaging as possible, or at least not overtly off-putting, something that can happen when a poor unsuspecting molecular biology major takes a course in macroscopic physics taught by an instructor who believes organisms are deducible from first principles based on the conditions of the big bang.  Respecting the learner requires that we explicitly acknowledge that an unbridled thirst for an empirical, self-critical mastery of a discipline is not a basic human trait, although it can be cultivated and may emerge given proper care.  Understanding the real constraints that act on meaningful learning can help focus courses on what is foundational, and help eliminate the irrelevant or the excessively esoteric.

Unintended consequences arise from “pretending” to teach students, both majors and non-science majors, science. One is an erosion of humility in the face of the complexity of science and our own limited understanding, a point made in a recent National Academy report that linked superficial knowledge with more non-scientific attitudes. The end result is an enhancement of what is known as the Kruger-Dunning effect, the tendency of people to seriously over-estimate their own expertise: “the effect describes the way people who are the least competent at a task often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill”.

A person with a severe case of Kruger-Dunning-itis is likely to lose respect for people who actually know what they are talking about. The importance of true expertise is further eroded and trivialized by the current trend of having photogenic and well-spoken experts in one domain pretend to talk, or rather to pontificate, authoritatively on another (3).  In a world of complex and arcane scientific disciplines, the role of a science guy or gal can promote rather than dispel scientific illiteracy.

We see the effects of the lack of scientific humility when people speak outside their domain of established expertise to make claims of certainty, a common feature of the conspiracy theorist.  An oft-used example is the claim that vaccines cause autism (they don’t), when the actual causes of autism, whether genetic and/or environmental, are currently unknown and the subject of active scientific study.  An honest expert can, in all humility, identify the limits of current knowledge as well as what is known for certain.  Unfortunately, revealing and ameliorating the severity of someone’s Kruger-Dunning-itis involves a civil and constructive Socratic interrogation, something of an endangered species in this day and age, when unseemly certainty and unwarranted posturing have replaced circumspect and critical discourse.  Any useful evaluation of what someone knows demands the time and effort inherent in a Socratic discourse: the willingness to explain how one knows what one thinks one knows, together with a reflective consideration of its implications, and of what other trained observers, people demonstrably proficient in the discipline, have concluded. It cannot be replaced by a multiple-choice test.

Perhaps what is needed is a new (old) model of encouraging in students, as well as in politicians and pundits, an understanding of where science comes from, the habits of mind involved, and the limits of, and constraints on, our current understanding.  At the college level, courses that replace superficial familiarity and unwarranted certainty with humble self-reflection and intellectual modesty might help treat the symptoms of Kruger-Dunning-itis, even though the underlying disease may be incurable, and perhaps genetically linked to other aspects of human neuronal processing.


Some footnotes:

  1. After all, why are rather distinct disciplines lumped together as STEM (science, technology, engineering, and mathematics)?
  2. Given the long history of Homo sapiens before the appearance of science, it seems likely that such patterns of thinking are an unintended consequence of selection for some other trait, and of the subsequent emergence of a (perhaps excessively) complex and self-reflective nervous system.
  3. Another example of Neil Postman’s premise that education is being replaced by edutainment (see “Amusing Ourselves to Death”).

Go ahead and “teach the controversy”: it is the best way to defend science.

as long as teachers understand the science and its historical context

The role of science in modern societies is complex. Science-based observations and innovations drive a range of economically important, as well as socially disruptive, technologies. A range of opinion polls indicate that the American public “supports” science, while at the same time rejecting rigorously established scientific conclusions on topics ranging from the safety of genetically modified organisms and vaccines (including the claim that vaccines cause autism) to the effects of burning fossil fuels on the global environment [Pew: Views on science and society]. Given that a foundational principle of science is that the natural world can be explained without calling on supernatural actors, it remains surprising that a substantial majority of people report that they believe that supernatural entities are involved in human evolution [as reported by the Gallup organization], although the theistic percentage has been dropping (a little) of late. This situation highlights the fact that when science intrudes on the personal or the philosophical (within which I include the theological and the ideological), many people are willing to abandon the discipline of science to embrace explanations based on personal beliefs. These include the belief that a supernatural entity cares for people, at least enough to create them, and that there are easily identifiable reasons why a child develops autism.

Where science appears to conflict with various non-scientific positions, the public has pushed back and rejected the scientific. This is perhaps best represented by the recent spate of “teach the controversy” legislative efforts, primarily centered on evolutionary theory and the reality of anthropogenic climate change [see Nature: Revamped ‘anti-science’ education bills], although we might expect to see, on more politically correct campuses, similar calls for anti-GMO, anti-vaccination, or gender-based curricula. In the face of the disconnect between scientific and non-scientific (philosophical, ideological, theological) personal views, I would suggest that an important part of the problem has didaskalogenic roots; that is, it arises from the way science is taught – all too often expecting students to memorize terms and master various heuristics (tricks) to answer questions rather than developing a self-critical understanding of ideas, their origins, supporting evidence, limitations, and practice in applying them.


Science is a social activity, based on a set of accepted core assumptions; it is not so much concerned with Truth, which could, in fact, be beyond our comprehension, but rather with developing a universal working knowledge, composed of ideas based on empirical observations that expand in their explanatory power over time to allow us to predict and manipulate various phenomena.  Science is a product of society rather than of isolated individuals, but only rarely is the interaction between the scientific enterprise and its social context articulated clearly enough that students and the general public can develop an understanding of how the two interact.  As an example, how many people appreciate the larger implications of the transition from an Earth-centered to a Sun- or galaxy-centered cosmology?  All too often students are taught about this transition without regard to its empirical drivers or its philosophical and sociological implications, as if the opponents at the time were benighted religious dummies. Yet how many students or their teachers appreciate that, as originally presented, the Copernican system had more hypothetical epicycles and related Rube Goldberg-esque kludges, introduced to make the model accurate, than the competing Ptolemaic Earth-centered system? Do students understand how Kepler’s recognition of elliptical orbits eliminated the need for such artifices and set the stage for Newtonian physics?  And how did the expulsion of humanity from the center to the periphery of things influence peoples’ views on humanity’s role and importance?

So how can education adapt to help students and the general public develop a more realistic understanding of how science works?  To my mind, teaching the controversy is a particularly attractive strategy, on the assumption that teachers have a strong grounding in the discipline they are teaching, something that many science degree programs do not achieve, as discussed below. For example, a common attack against evolutionary mechanisms relies on a failure to grasp the power of variation, arising from stochastic processes (mutation), coupled to the power of natural, social, and sexual selection. There is clear evidence that people find stochastic processes difficult to understand and accept [see Garvin-Doxas & Klymkowsky & Fooled by Randomness].  An instructor who is not aware of the educational challenges associated with grasping stochastic processes, including those central to evolutionary change, risks running into the same hurdles that led pre-molecular biologists to reject natural selection and turn to more “directed” processes, such as orthogenesis [see Bowler: The eclipse of Darwinism & Wikipedia]. Presumably students are even more vulnerable to intelligent-design creationist arguments centered around probabilities.

The fact that single cell measurements enable us to visualize biologically meaningful stochastic processes makes designing course materials to explicitly introduce such processes easier [Biology education in the light of single cell/molecule studies].  An interesting example is the recent work on visualizing the evolution of antibiotic resistance macroscopically [see The evolution of bacteria on a “mega-plate” petri dish].
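The mega-plate result can itself be sketched as a toy simulation; this is not a model of the actual experiment, and the population size, mutation rate, and drug schedule below are made-up illustrative parameters. The point it makes against probability-based creationist arguments is that random mutation plus selection accumulates resistance stepwise, rather than requiring a single wildly improbable jump.

```python
import random

def evolve_resistance(pop_size=500, generations=60, mut_rate=0.01,
                      max_level=4, seed=0):
    """Toy model of stepwise antibiotic resistance: each individual carries
    an integer resistance level; mutation occasionally bumps it up by one,
    and the antibiotic concentration (survival threshold) rises over time.
    Returns the mean resistance level per generation."""
    rng = random.Random(seed)
    pop = [0] * pop_size
    history = []
    for gen in range(generations):
        threshold = min(max_level, gen // 15)   # drug level steps up over time
        survivors = [r for r in pop if r >= threshold]
        if not survivors:                       # population wiped out
            break
        # survivors repopulate; each offspring mutates upward with prob mut_rate
        pop = [min(max_level,
                   rng.choice(survivors) + (1 if rng.random() < mut_rate else 0))
               for _ in range(pop_size)]
        history.append(sum(pop) / pop_size)
    return history

levels = evolve_resistance()
print(f"mean resistance: start {levels[0]:.2f} → end {levels[-1]:.2f}")
```

Each mutation is random and rare, yet the population reliably climbs the rising drug gradient, because selection preserves each small gain before the next one arises.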

To be in a position to “teach the controversy” effectively, it is critical that students understand how science works, specifically its progressive nature, exemplified through the process of generating and testing, and where necessary, rejecting, clearly formulated and predictive hypotheses – a process antithetical to a Creationist (religious) perspective [a good overview is provided here: Using creationism to teach critical thinking].  At the same time, teachers need a working understanding of the disciplinary foundations of their subject, its core observations, and their implications. Unfortunately, many are called upon to teach subjects with which they may have only a passing familiarity.  Moreover, even majors in a subject may emerge with a weak understanding of foundational concepts and their origins – they may be uncomfortable teaching what they have learned.  While there is an implicit assumption that a college curriculum is well designed and effective, there is often little in the way of objective evidence that this is the case. While many of our dedicated teachers (particularly those I have met as part of the CU Teach program) work diligently to address these issues on their own, it is clear that many have not been exposed to a critical examination of the empirical observations and experimental results upon which their discipline is based [see Biology teachers often dismiss evolution & Teachers’ Knowledge Structure, Acceptance & Teaching of Evolution].  Many is the molecular biology department that does not require formal coursework in basic evolutionary mechanisms, much less a thorough consideration of natural, social, and sexual selection, and non-adaptive mechanisms, such as those associated with population bottlenecks and genetic drift, stochastic processes that play a key role in the evolution of many species, including humankind. 
Similarly, more ecologically- and physiologically-oriented majors are often “afraid” of the molecular foundations of evolutionary processes. As part of an introductory chemistry curriculum redesign project (CLUE), Melanie Cooper and her group at Michigan State University have found that students in conventional courses often fail to grasp key concepts, and that subsequent courses can sometimes fail to remediate the didaskalogenic damage done in earlier courses [see: an Achilles Heel in Chemistry Education].


The importance of a historical perspective: The power of scientific explanations is obvious, but explanations can become abstract when their historical roots are forgotten, or never articulated. A clear example: the value of vaccination is obvious in the presence of deadly and disfiguring diseases; in their absence (due primarily to widespread vaccination), that value can be called into question, resulting in the avoidable re-emergence of these diseases.  In this context, it is important that students understand the dynamics and molecular complexity of biological systems, so that they can explain why all drugs and treatments have potential side-effects, and how each individual’s genetic background influences those side-effects (although in the case of vaccination, such side effects do not include autism).

Often “controversy” arises when scientific explanations have broader social, political, or philosophical implications. Religious objections to evolutionary theory arise primarily, I believe, from the implication that we (humans) are not the result of a plan, created or evolved, but rather that we are accidents of mindless, meaningless, and often gratuitously cruel processes. The idea that our species emerged rather recently (that is, a few million years ago) on a minor planet on the edge of an average galaxy, in a universe that popped into existence for no particular reason or purpose ~14 billion years ago, can have disconcerting implications [link]. Moreover, recognizing that a “small” change in the trajectory of an asteroid could have changed the chance that humanity ever evolved [see: Dinosaur asteroid hit ‘worst possible place’] can be sobering and may well undermine one’s belief in the significance of human existence. How does it impact our social fabric if we are an accident, rather than the intention of a supernatural being or the inevitable product of natural processes?

Yet, as a person who firmly believes in the French motto of liberté, égalité, fraternité, laïcité, I feel fairly certain that no science-based scenario on the origin and evolution of the universe or life, or on the implications of sexual dimorphism or racial differences, can challenge the importance of our duty to treat others with respect, to defend their freedoms, and to ensure their equality before the law. Which is not to say that conflicts do not inevitably arise between different belief systems – in my own view, patriarchal oppression needs to be called out and actively opposed wherever it occurs, whether in Saudi Arabia or on college campuses (e.g. UC Berkeley or Harvard).

This is not to say that presenting the conflicts between scientific explanations of phenomena, such as race, and non-scientific, but more important, beliefs, such as equality under the law, is easy. When considering a number of natural cruelties, Charles Darwin wrote that evolutionary theory would treat these “as small consequences of one general law, leading to the advancement of all organic beings, namely, multiply, vary, let the strongest live and the weakest die” – note the absence of any reference to morality, or even sympathy for the “weakest”. In fact, Darwin would have argued that the apparent and overt cruelty rampant in the “natural” world is evidence that God was forced by the laws of nature to create the world the way it is – presumably a world that is absurdly old and excessively vast. Such arguments echo the view that God’s only choice was whether to create or not; that, for all its flaws, evils, and unnecessary suffering, this is, as posited by Gottfried Leibniz (1646-1716) and satirized by Voltaire in his novel Candide, the best of all possible worlds. Yet, as members of a reasonably liberal, and periodically enlightened, society, we see it as our responsibility to ameliorate such evils, to care for the weak, the sick, and the damaged, and to improve human existence; to address prejudice and political manipulation [thank you Supreme Court for ruling against race-based redistricting]. Whether anchored by philosophical or religious roots, many of us are driven to reject a scientific (biological) quietism (“a theology and practice of inner prayer that emphasizes a state of extreme passivity”) by actively manipulating our social, political, and physical environment and striving to improve the human condition, in part through science and the technologies it makes possible.

At the same time, introducing social-scientific interactions can be fraught with potential controversies, particularly in our excessively politicized and self-righteous society. In my own introductory biology class (biofundamentals), we consider potentially contentious issues, including sexual dimorphism and selection, and social evolutionary processes and their implications. As an example, social systems (and we are social animals) are susceptible to social cheating, and groups develop defenses against cheaters; how such biological ideas interact with historical, political, and ideological perspectives is complex, and certainly beyond the scope of an introductory biology course, but worth acknowledging [PLoS blog link].

In a similar manner, we understand the brain as an evolved cellular system influenced by various experiences, including those that occur during development and subsequent maturation. Family life interacts with genetic factors in complex, and often unpredictable, ways to shape behaviors. But it seems unlikely that a free and enlightened society can function if it takes seriously the premise that we lack free will and so cannot be held responsible for our actions, an idea of some current popularity [see Free will could all be an illusion]. Given the complexity of biological systems, I for one am willing to embrace the idea of constrained free will, no matter what scientific speculations are currently in vogue. Recognizing the complexities of biological systems, including the brain, with their various adaptive responses and feedback systems, can be challenging. In this light, I am reminded of the contrast between the doomsday scenario of Paul Ehrlich’s The Population Bomb and the data-based view of the late Hans Rosling in Don’t Panic – The Facts About Population.

All of which is to say that we need to see science not as authoritarian, telling us who we are or what we should do, but as a tool for doing what we think is best, and for understanding why it might be difficult to achieve. We need to recognize how scientific observations inform, but do not dictate, our decisions. We need to embrace the tentative but strict nature of the scientific enterprise which, while it cannot arrive at “Truth”, can certainly identify nonsense.

Power Posing & Science Education

Developing a coherent understanding of a scientific idea is neither trivial nor easy, and it is counterproductive to pretend that it is.

For some time now, the idea of “active learning” (as if there were any other kind) has become a mantra in the science education community (see Active Learning Day in America: link). Yet the situation is demonstrably more complex, and depends upon what exactly is to be learned, something rarely stated explicitly in many published papers on active learning (an exception, with respect to understanding evolutionary mechanisms, can be found here: link). The best of such work generally relies on results from multiple-choice “concept tests” that provide, at best, a limited (low-resolution) characterization of what students know. Moreover, it is clear that, much as in other areas, research into the impact of active learning strategies is rarely reproduced (see: link, link & link).

As is clear from the level of aberrant and nonsensical talk about the implications of “science” currently on display in both public and private spheres (link : link), the task of effective science education and rigorous scientific (data-based) decision making is not a simple one. As many have noted, there is little about modern science that is intuitively obvious; most of it is deeply counterintuitive or actively disconcerting (see link). In the absence of a firm religious or philosophical perspective, scientific conclusions about the size and age of the Universe, the various processes driving evolution, and the often grotesque outcomes they can produce can be deeply troubling; one can easily embrace a solipsistic, egocentric, and/or fatalistic belief/behavioral system.

There are two videos of Richard Feynman that capture much of what is involved in, and required for, understanding a scientific idea and its implications. The first involves the basic scientific process, in which the path to a scientific understanding of a phenomenon begins with a guess – but a special kind of guess, namely one that implies unambiguous (and often quantitative) predictions about what future (or retrospective) observations will reveal (video: link). This scientific discipline (link) implies the willingness to accept that scientifically meaningful ideas must have explicit, definable, and observable implications, while those that do not are non-scientific and need to be discarded. Witness the stubborn adherence to demonstrably untrue ideas (such as where past Presidents were born, or how many people attended an event or voted legally), which marks superstitious and non-scientific worldviews. Embracing a scientific perspective is not easy, nor is letting go of a favorite idea (or prejudice). The difficulty of thinking and acting scientifically needs to be kept in the minds of instructors; it is one of the reasons that peer review continues to be important – it reminds us that we are part of a community committed to the rules of scientific inquiry and its empirical foundations, and that we are accountable to that community.

The second Feynman video (video : link) captures his description of what it means to understand a particular phenomenon scientifically – in this case, why magnets attract one another. The take-home message is that many (perhaps most) scientific ideas require a substantial amount of well-understood background information before one can even begin a scientifically meaningful consideration of the topic. Yet all too often such background information is not considered by those who develop (and deliver) courses and curricula. To use an example from my own work (in collaboration with Melanie Cooper @MSU), it is very rare to find course and curricular materials (textbooks and such) that explicitly recognize (or illustrate) the underlying assumptions involved in a scientific explanation. Often the “central dogma” of molecular biology is taught as if it were simply a description of molecular processes, without explicitly recognizing that information flows from DNA outward (link) (and into DNA through mutation and selection). Similarly, it is rare to see stated explicitly that random collisions with other molecules supply the energy needed for chemical reactions to proceed or for intermolecular interactions to be broken, or that the energy released upon complex formation is transferred to other molecules in the system (see : link), even though these events control essentially all aspects of the systems active in organisms, from gene expression to consciousness.

The basic conclusion is that achieving a working understanding of a scientific idea is hard, and that, while it requires an engaging and challenging teacher and a supportive and interactive community, it is also critical that students be presented with conceptually coherent content that acknowledges and presents all of the ideas needed to actually understand the concepts and observations upon which a scientific understanding is based (see “now for the hard part” : link). Bottom line: there is no simple or painless path to understanding science – it involves a serious commitment on the part of the course designer, as well as the student, the instructor, and the institution (see : link).

This brings us back to the popularity of the “active learning” movement, which all too often ignores course content and the establishment of meaningful learning outcomes. Why then has it attracted such attention? My own guess is that it provides a simple solution that circumvents the need for instructors (and course designers) to significantly modify the materials that they present to students. The current system rarely rewards or provides incentives for faculty to carefully consider the content that they are presenting, asking whether it is relevant and sufficient for students to achieve a working understanding of the subject – an understanding that enables the student to accurately interpret, and then generate reasoned and evidence-based (plausible) responses to, new situations.

Such a reflective reconsideration of a topic will often result in dramatic changes in course (and curricular) emphasis; traditional materials may be omitted or relegated to more specialized courses. Such changes can provoke a negative response from other faculty, based on often inherited (and uncritically accepted) ideas about course “coverage”, as opposed to desired and realistic student learning outcomes. Given the resistance of science faculty (particularly at institutions devoted to scientific research) to investing time in educational projects (often a reasonable strategy, given institutional reward systems), there is a seductive lure to easy fixes. One such fix is to leave the content unaltered and simply “adopt a pose” in the classroom.

All of which brings me to the main problem – the frequency with which superficial (low-cost, but often ineffectual) strategies can inhibit and distract from significant, but difficult, reforms. One cannot help but be reminded of other quick fixes for complex problems, the most recent being the idea, promulgated by Amy Cuddy (Harvard: link) and others, that adopting a “power pose” can overcome various forms of experience- and socioeconomic-based prejudices and injustices – as if overcoming a person’s experiences and situation were simply a matter of will. The message is that those who do not succeed have only themselves to blame, because the way to succeed is (basically) so damn simple. So imagine one’s surprise (or not) upon discovering that the underlying biological claims associated with “power posing” are not true, or at least cannot be replicated, even by the co-authors of the original work (see Power Poser: When big ideas go bad: link). It seems the lesson that needs to be learned, both in science education and more generally, is that claims that seem too easy or universal are unlikely to be true. It is worth remembering that even the most effective modern (and traditional) medicines all have potentially dangerous side effects. Why? Because they lead to significant changes to the system, and such modifications can discomfort the comfortable. This stands in stark contrast to non-scientific approaches – homeopathic “remedies” come to mind – which rely on placebo effects (which is not to say that taking ineffective remedies does not itself involve risks).

As in the case of effective medical treatments, the development and delivery of engaging and meaningful science education reform often requires challenging current assumptions and strategies – strategies frequently based on outdated traditions, and influenced more by the constraints of class size and the logistics of testing than by the importance of achieving demonstrable enhancements of students’ working understanding of complex ideas.