Higher Education Malpractice: curving grades

If there is one thing that university faculty and administrators could do today to demonstrate their commitment to inclusion, not to mention to teaching and learning over sorting and status, it would be to ban curve-based, norm-referenced grading. Many obstacles stand in the way of the effective inclusion and success of students from underrepresented (and underserved) groups in science and related programs. Students and faculty often, and often correctly, perceive large introductory classes as “weed-out” courses that preferentially impact underrepresented students. In the life sciences, many of these courses are “out-of-major” requirements, taught with relatively little regard for their relevance to biomedical careers and interests. Often such out-of-major requirements spring not from a thoughtful decision by faculty as to their necessity, but from the fact that they are prerequisites for post-graduation admission to medical or graduate school. “In-major” instructors may not even explicitly incorporate or depend upon the materials taught in these out-of-major courses – rare is the undergraduate molecular biology degree program that actually calls on students to use calculus or a working knowledge of physics, despite the fact that such skills may be relevant in certain biological contexts (see Magnetofiction – A Reader’s Guide). At the same time, those teaching out-of-major courses may overlook the fact that many (and sometimes most) of their students are non-chemistry, non-physics, and/or non-math majors. The result is that those teaching such classes fail to offer a doorway into the subject matter to any but those already comfortable with it. But reconsidering the design and relevance of these courses is no simple matter. Banning grading on a curve, on the other hand, can be implemented overnight (and by fiat if necessary).

So why ban grading on a curve? First and foremost, it would put faculty and institutions on record as valuing student learning outcomes (perhaps the best measure of effective teaching) over the sorting of students into easy-to-judge groups. Second, there is simply no pedagogical justification for curved grading, with the possible exception of providing a kludgy fix for poorly designed examinations and courses. There are more than enough opportunities to sort students based on their motivation, talent, ambition, and “grit,” and on the opportunities they seek out and successfully embrace (e.g., volunteerism, internships, and independent study projects).
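The distinction between the two grading schemes can be made concrete with a minimal sketch (the scores, cutoffs, and fail fraction below are hypothetical, chosen only for illustration). Under a curve, some fixed fraction of the class must fail, regardless of how much anyone actually learned; under criterion-referenced grading, everyone who demonstrates mastery passes:

```python
# Norm-referenced (curved) vs. criterion-referenced grading.
# All numbers here are hypothetical, for illustration only.

def curved_grades(scores, fail_fraction=0.25):
    """Norm-referenced: the bottom fail_fraction of the class fails,
    no matter how much anyone actually learned."""
    ranked = sorted(scores)
    cutoff = ranked[int(len(scores) * fail_fraction)]
    return ["fail" if s < cutoff else "pass" for s in scores]

def criterion_grades(scores, mastery_cutoff=70):
    """Criterion-referenced: every student who demonstrates mastery passes."""
    return ["pass" if s >= mastery_cutoff else "fail" for s in scores]

# A class in which every student demonstrates mastery:
scores = [95, 91, 88, 85, 82, 79, 76, 73]
print(curved_grades(scores))     # the curve still fails the bottom quarter
print(criterion_grades(scores))  # all students pass
```

The point of the sketch is the asymmetry: `curved_grades` is a statement about the class distribution, while `criterion_grades` is a statement about each student’s learning.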

The negative impact of curving can be seen in a recent paper by Harris et al. (Reducing achievement gaps in undergraduate general chemistry…), who report a significant difference in overall student inclusion and subsequent success based on a small grade difference: a C allows a student to proceed with their studies (generally as successfully as those with higher grades), while a C-minus requires them to retake the course before proceeding (often driving them out of the major). Because Harris et al. analyzed curved courses, a subset of students cannot escape these effects. And poor grades disproportionately impact underrepresented and underserved groups – they say, explicitly, “you do not belong” rather than “how can I help you learn?”

Naysayers often disparage efforts to improve course design as “dumbing down” the course, rather than improving it. In many ways this is analogous to blaming patients for getting sick, or for not responding to treatment, rather than conducting an objective analysis of the efficacy of the treatment. If medical practitioners had maintained this attitude, we would still be bleeding patients and accepting that more than a third are fated to die, rather than seeking effective treatments tailored to patients’ actual diseases – the basis of evidence-based medicine. We would have failed to develop antibiotics and vaccines – indeed, we would never have sought them. Curving grades implies that course design and delivery are already optimal, and that the fate of students is predetermined because only a fixed percentage can possibly learn the material. It is, in an important sense, complacent quackery.

Banning grading on a curve, and labeling it for what it is – educational malpractice – would also change the dynamics of the classroom, and might even foster an appreciation that a good teacher is one with the highest percentage of successful students, i.e., those who are retained in a degree program and graduate in a timely manner (hopefully within four years). Of course, such an alternative evaluation of teaching would reflect a department’s commitment to constructing and delivering the most engaging, relevant, and effective educational program. Institutional resources might even be used to help departments generate more objective, instructor-independent evaluations of learning outcomes, in part to replace the current practice of student opinion surveys, which are often little more than measures of popularity. We might even see a revolution in which departments compete with one another to maximize student inclusion, retention, and outcomes (perhaps even to the extent of applying pressure on the design and delivery of “out-of-major” required courses offered by other departments).

“All a pipe dream,” you might say, but the available data demonstrate that resources spent on rethinking course design, including engagement and relevance, can have significant effects on grades, retention, time to degree, and graduation rates. At the risk of being labeled self-promoting, I offer the following to illustrate the possibilities: working with Melanie Cooper at Michigan State University, we have built such courses in general and organic chemistry and documented their impact (see Evaluating the extent of a large-scale transformation in gateway science courses).

Perhaps we should encourage students to seek out legal representation to hold institutions (and instructors) accountable for detrimental practices such as grading on a curve. There might even come a time when professors and departments would find it prudent to purchase malpractice insurance if they insist on retaining, and charging students for, ineffective educational strategies (1).

Acknowledgements: Thanks to my daughter Rebecca, who provided edits and legal references, and to Melanie Cooper, who inspired the idea. Educate! image from the Dorian De Long Arts & Music Scholarship site.

(1) One cannot help but wonder if such conduct could ever rise to the level of fraud. See, e.g., Bristol Bay Productions, LLC v. Lampack, 312 P.3d 1155, 1160 (Colo. 2013) (“We have typically stated that a plaintiff seeking to prevail on a fraud claim must establish five elements: (1) that the defendant made a false representation of a material fact; (2) that the one making the representation knew it was false; (3) that the person to whom the representation was made was ignorant of the falsity; (4) that the representation was made with the intention that it be acted upon; and (5) that the reliance resulted in damage to the plaintiff.”).

Making education matter in higher education


It may seem self-evident that providing an effective education – the type of educational experience that leads to a useful bachelor’s degree and serves as the foundation for life-long learning and growth – should be a prime aspirational driver of colleges and universities (1). We might even expect the various academic departments to compete with one another to excel in the quality and effectiveness of their educational outcomes; they certainly compete to enhance their research reputations, a competition that is, at least in part, responsible for the retention of faculty, even those who stray from an ethical path. Institutions compete to lure research stars away from one another, often offering substantial pay raises and research support (“Recruiting or academic poaching?”). Yet, in my own experience, a department’s performance in undergraduate educational outcomes never figures in the competition for institutional resources, such as support for students, new faculty positions, or necessary technical resources (2).

I know of no example (and would be glad to hear of one) of a university hiring a professor based primarily on their effectiveness as an instructor (3).

In my last post, I suggested that increasing the emphasis on measures of departments’ educational effectiveness could help rebalance the importance of educational and research reputations, and perhaps incentivize institutions to be more consistent in enforcing ethical rules involving research malpractice and the abuse of students, both sexual and professional. Imagine if administrators (deans, provosts, and the like) were to withhold resources from departments performing below acceptable and competitive norms in terms of undergraduate educational outcomes.

Outsourced teaching: motives, means and impacts

Sadly, as things stand, and particularly in many science departments, undergraduate educational outcomes have little if any impact on the perceived status of a department, as articulated by campus administrators. The result is that faculty are not incentivized to, and so rarely do, seriously consider the effectiveness of their department’s course requirements – a discussion that would of necessity include evaluating whether a course’s learning goals are coherent and realistic, whether the course is delivered effectively, whether it engages students (or is deemed irrelevant), and whether students achieve the desired learning outcomes, in terms of the knowledge and skills attained, including the ability to apply that knowledge to new situations. Departments, particularly research-focused (and research-dependent) departments, often have faculty with low teaching loads, a situation that incentivizes the “outsourcing” of key aspects of their educational responsibilities. Such outsourcing comes in two distinct forms. The first is requiring majors to take courses offered by other departments, even if those courses are not well designed, well delivered, or (in the worst cases) relevant to the major. A classic example is requiring molecular biology students to take macroscopic physics or conventional calculus courses, without regard to whether the materials presented in those courses are ever used within the major or the discipline. Expecting a student majoring in the life sciences to embrace a course that (often rightly) seems irrelevant to their discipline can alienate the student and poses an unnecessary obstacle to success, rather than providing needed knowledge and skills. Generally, the incentives needed to generate a relevant course – for example, a molecular-level physics course that would engage molecular biology students – are simply not there.
A variant of this situation is requiring courses that are poorly designed or delivered (general chemistry is often used as the poster child). These are courses with high failure rates, sometimes justified in terms of “necessary rigor,” when in fact better course design could result in (and has resulted in) lower failure rates and improved learning outcomes. In addition, there are perverse incentives associated with requiring “weed-out” courses offered by other departments: they reduce the number of courses a department’s own faculty needs to teach, and they can lead to fewer students proceeding into upper-division courses.

The second type of outsourcing involves excusing tenure-track faculty from teaching introductory courses and replacing them with lower-paid instructors or lecturers. Independent of whether instructors, lecturers, or tenure-track professors make for better teachers, replacing faculty with instructors sends an implicit message to students. At the same time, the freedom of instructors and lecturers to adopt an effective (Socratic) approach to teaching is often severely constrained; common exams can force classes to move in lock step, independent of whether that pace is optimal for student engagement and learning. Generally, instructors and lecturers do not have the freedom to adjust what they teach – to modify the emphasis and time they spend on specific topics in response to their students’ needs. Instruction suffers when teachers cannot customize their interactions with students in response to where those students are intellectually; this is particularly detrimental in the case of underrepresented or underprepared students. Generally, a flexible and adaptive approach to instruction (including ancillary classes on how to cope with college: see An alternative to remedial college classes gets results) can address many issues and bring the majority of students to a level of competence, whereas tracking students into remedial classes can drive them out of a major or out of college (see Colleges Reinvent Classes to Keep More Students in Science, Redesigning a Large-Enrollment Introductory Biology Course, and Does Remediation Work for All Students?).

How can we address this imbalance and reset the pecking order so that effective educational efforts actually matter to a department?

My (modest) suggestion is to base departmental rewards on objective measures of educational effectiveness. And by rewards I mean rewards both at the level of individuals (salary and status) and in the form of support for graduate students, faculty positions, start-up funds, and the like. What if, for example, faculty in departments that excel at educating their students received a teaching bonus? Or what if the number of institutionally supported graduate students within a department were determined not by the number of classes those graduate students teach (courses that might not be particularly effective or engaging), but by the department’s undergraduate educational effectiveness, as measured by retention, time to degree, and learning outcomes (see below)? The result could well be a drive within departments to improve course and curricular effectiveness so as to maximize education-linked rewards. Given that laboratory courses – the courses most often taught by science graduate students – are multi-hour, schedule-disrupting events of limited demonstrable educational effectiveness, removing requirements for lab courses deemed unnecessary (or generating more effective versions of them) would be actively rewarded. (Sanctions for continuing to offer ineffective courses would also be useful, but politically more problematic.)

A similar situation applies when a biology department requires its majors to take five-credit-hour physics or chemistry courses. Currently it is “easy” for a department to require its students to take such courses without critically evaluating whether they are “worth it” educationally. Imagine how a department’s choice of required courses would change if high failure rates (which I would argue are a proxy for poorly designed and delivered courses) directly impacted the rewards reaped by the department. There would be an incentive to look critically at such courses, to determine whether they are necessary and, if so, whether they are well designed and delivered. Departments would serve their own interests by investing in the development of courses that better serve their disciplinary goals – courses likely to engage their students’ interests.

So how do we measure a department’s educational efficacy?

There are three obvious metrics: i) retention of students as majors (or, in the case of “service courses” for non-majors, whether students master what the course claims to teach); ii) time to degree (by which I mean the percentage of students who graduate in four years, rather than the six-year time point reported in response to federal regulations (six year graduation rate | background on graduation rates)); and iii) objective measures of the learning outcomes and skills students attain. The first two are easy: universities already know these numbers. Moreover, they are directly influenced by degree requirements – requiring students to take boring and/or apparently irrelevant courses drives a subset of students out of a major. By making courses relevant and engaging, more students can be retained in a degree program; at the same time, thoughtful course design can help students pass through even the most rigorous (difficult) of such courses. The third, learning outcomes, is significantly more challenging to measure, since universal metrics are (largely) missing or superficial. A few disciplines, such as chemistry, support standardized assessments, although one could argue about what such assessments actually measure. Nevertheless, meaningful outcomes measures are necessary, in much the same way that law and medical boards and the Fundamentals of Engineering exam help ensure (although they do not guarantee) the competence of practitioners. One could imagine using parts of standardized exams, such as discipline-specific GRE exams, to generate outcomes metrics, although more informative assessment instruments would clearly be preferable.
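The three metrics described above lend themselves to a simple computation. Here is a hedged sketch using hypothetical per-student records; the record fields, the outcomes score, and the sample numbers are all illustrative assumptions, not an actual institutional data model:

```python
# Sketch: computing the three departmental metrics from hypothetical
# per-student records. Field names and values are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    retained_in_major: bool          # still a major (or mastered service-course goals)
    years_to_degree: Optional[float] # None if the student has not graduated
    outcome_score: float             # score on some agreed-upon outcomes assessment

def department_metrics(records):
    n = len(records)
    # i) retention: fraction of students retained in the major
    retention = sum(r.retained_in_major for r in records) / n
    # ii) time to degree: fraction graduating within four years
    four_year = sum(
        1 for r in records
        if r.years_to_degree is not None and r.years_to_degree <= 4
    ) / n
    # iii) learning outcomes: mean score on the outcomes assessment
    mean_outcome = sum(r.outcome_score for r in records) / n
    return {"retention": retention,
            "four_year_grad_rate": four_year,
            "mean_outcome": mean_outcome}

records = [
    StudentRecord(True, 4.0, 82.0),
    StudentRecord(True, 5.0, 75.0),
    StudentRecord(False, None, 60.0),
    StudentRecord(True, 3.5, 90.0),
]
print(department_metrics(records))
# → {'retention': 0.75, 'four_year_grad_rate': 0.5, 'mean_outcome': 76.75}
```

As the author notes, the first two quantities fall out of data universities already hold; only `outcome_score` presumes an assessment instrument that, for most disciplines, does not yet exist.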
The initiative in this area could be taken by professional societies, college consortia (such as the AAU), and research foundations. It could serve as a critical driver of education reform, increased effectiveness, and improved cost-benefit outcomes – something that could help address the growing income inequality in our country and make educational success an important factor in an institution’s reputation.

 

A footnote or two…
 
1. My comments are primarily focused on research universities, since that is where my experience lies; these are, of course, the majority of the largest universities (in terms of student population).
 
2. Although my experience is limited, having spent my professorial career at a single institution, conversations with others lead me to conclude that my institution is not unique.
 
3. The one obvious exception is the hiring of coaches of sports teams, since their success in teaching (coaching) is more directly discernible and has a more direct impact on institutional finances and reputation.
 
minor edits – 16 March 2020

Balancing research prestige, human decency, and educational outcomes.


Or: why do academic institutions shield predators? Many working scientists, particularly those early in their careers or oblivious to practical realities, maintain an idealistic view of the scientific enterprise. They see science as driven by curious, passionate, and skeptical scholars working to build an increasingly accurate and all-encompassing understanding of the material world and the phenomena associated with it, ranging from the origins of the universe and the Earth to the development of the brain and the emergence of consciousness and self-consciousness (1). At the same time, the discipline of science can be difficult to maintain (see the PLoS post The pernicious effects of disrespecting the constraints of science). Scientific research relies on understanding what people have already discovered and established to be true; all too often, exploring the literature associated with a topic reveals that one’s brilliant and totally novel “first of its kind” or “first to show” observation or idea is only a confirmation, or a modest extension, of someone else’s previous discovery. That is the nature of the scientific enterprise, and a major reason why significant new discoveries are rare and why graduate students’ Ph.D. theses can take years to complete.

Acting in opposition to a rigorous scholarly approach are the real-life pressures faced by working scientists: a competitive landscape in which only novel observations are rewarded with research grants and various forms of fame or notoriety in one’s field, including a tenure-track or tenured academic position. Such pressures encourage one to distort the significance or novelty of one’s accomplishments; such exaggerations are tacitly encouraged by the editors of high-profile journals (e.g., Nature, Science), who seek to publish “high impact” claims, such as the claim for “arsenic life” (see link). As a recent and prosaic example, consider a paper whose title claims that “Dietary Restriction and AMPK Increase Lifespan via Mitochondrial Network and Peroxisome Remodeling” (link), without mentioning (in the title) the rather significant fact that the effect was observed in the nematode C. elegans, whose lifespan is typically between 300 and 500 hours and which displays a trait not found in humans (or other vertebrates), namely the ability to assume a highly specialized “dauer” state that can survive hostile environmental conditions for months. Is the work wrong or insignificant? Certainly not, but it was presented to the unwary (through the Harvard Gazette, under the title “In pursuit of healthy aging: Harvard study shows how intermittent fasting and manipulating mitochondrial networks may increase lifespan”) with the clear implication that people, including Harvard alumni, might want to reconsider the adequacy of their retirement investments.


Such pleas for attention are generally quickly placed in context and their significance evaluated, at least within the scientific community – although many go on to stimulate the economic health of the nutritional supplement industry. Lower-level claims often go unchallenged, just part of the incessant buzz of pleas for attention in our excessively distracted society (see link). Given the reward structure of the modern scientific enterprise, the proliferation of such claims is not surprising. Even “staid” academics seek attention well beyond the immediate significance of their (taxpayer-funded) observations. Unfortunately, the explosively expanding size of the scientific enterprise makes policing such transgressions (generally through peer review or replication) difficult or impossible, at least in the short term.

The hype and exaggeration associated with some scientific claims for attention are not the most distressing aspect of the quest for “reputation.” Rather, it is the growing number of revelations of academic institutions protecting those guilty of abusing their dependent colleagues. These abuses reflect how scientific research teams are organized. Most scientific studies involve groups of people working with one another, generating data, testing ideas, and eventually publishing their observations and conclusions, and speculating on their broader implications.

Research groups vary greatly in size. In some areas they involve isolated individuals, whether thinkers (theorists) or naturalists, in the mode of Darwin and Wallace. In other cases they are larger, including senior researchers, post-doctoral fellows, graduate students, technicians, undergraduates, and even high school students. Such research groups can range from the small (2 to 3 people) to the significantly larger (~20-50 people); the largest are associated with mega-projects, such as the Human Genome Project and the Large Hadron Collider-based search for the Higgs boson (see: Physics paper sets record with more than 5,000 authors). A look at this site [link] describing the Human Genome Project reflects two aspects of such mega-science: 1) while many thousands of people were involved [see Initial sequencing and analysis of the human genome], generally only the “big names” are singled out for valorization (e.g., receiving a Nobel Prize); and 2) there would be little or no progress without the general scientific community that evaluates and extends ideas and observations. In this context, “lead investigators” are charged primarily with securing the funds needed to mobilize such groups, convincing funders that the work is significant; it is the members of the group who work out the technical details and enable the project to succeed.

As with many social groups, there are systems in play that serve to establish the status of the individuals involved – something necessary (apparently) in a system in which individuals compete for jobs, positions, and resources. Generally, one’s status is established through recommendations from others in the field, often the senior member(s) of one’s research group or the (generally small) group of senior scientists working in the same or a closely related area. Professional status is particularly critical in academia, where the number of senior positions (e.g., tenured or tenure-track professorships) is limited. The result is a system increasingly susceptible to the formation of clubs, membership in which is often determined by who knows whom, rather than by who has done what (see Steve McKnight’s “The curse of committees and clubs”). Over time, scientific social status translates into who is considered productive, important, trustworthy, or (to use an oft-misused term) brilliant. Achieving status can mean putting up with abusive and unwanted behaviors (particularly sexual ones). Examples of such behavior have been emerging with increasing frequency, and have been extensively described elsewhere (see Confronting Sexual Harassment in Science; More universities must confront sexual harassment; What’s to be done about the numerous reports of faculty misconduct dating back years and even decades?; Academia needs to confront sexism; and ‘The Trouble With Girls’: The Enduring Sexism in Science).

So why is abusive behavior tolerated? One might argue that this reflects humans’ current and historical obsession with “stars” – pharaohs, kings, and dictators – as isolated geniuses who make things work. Perhaps the most visible example of an abused scientist (although there are in fact many others: see History’s Most Overlooked Scientists) is Rosalind Franklin, whose data were essential to solving the structure of double-stranded DNA, yet whose contributions were consistently and systematically minimized – a clear example of sexist marginalization. In the same vein, many is the technician who got an experiment to “work,” only for their research supervisor to be awarded the prizes associated with the breakthrough (2).

Amplifying the star effect is the role of research status at the institutional level: an institution’s academic ranking is often based upon the presence of faculty “stars.” Perhaps surprisingly to those outside of academia, an institution’s research status, as reflected in the number of stars on staff, often trumps its educational effectiveness, particularly with undergraduates – that is, the people who pay the bulk of the institution’s running costs. In this light, it is not surprising that research stars who display various abusive behaviors (often toward women) are shielded by their institutions from public censure.

So what is to be done? My own modest proposal (to be described in more detail in a later post) is to increase the emphasis on institutions’ (and their departments’) effectiveness at fostering undergraduate educational success. This would provide a counter-balancing force that could (might?) place research status in a more realistic context.

a footnote or two:

  1. On the assumption that there is nothing but a material world.
  2. Although I am no star, I would acknowledge Joe Dent, who worked out the whole-mount immunocytochemical methods that we have used extensively in our work over the years.
  3. Thanks to Becky for editorial comments as well as a dramatic reading!

Is it time to start worrying about conscious human “mini-brains”?

A human iPSC cerebral organoid in which pigmented retinal epithelial cells can be seen (from the work of McClure-Begley, Mike Klymkowsky, and William Old).

The fact that experiments on people are severely constrained is a major obstacle to understanding human development and disease. Some of these constraints are moral and ethical – clearly appropriate and necessary, given the depressing history of medical atrocities. Others are technical, associated with the slow pace of human development. The combination of moral and technical factors has driven experimental biologists to explore the behavior of a wide range of “model systems,” from bacteria, yeasts, fruit flies, and worms to fish, frogs, birds, rodents, and primates. Justified by the deep evolutionary continuity between these organisms (after all, all organisms appear to be descended from a single common ancestor and share many molecular features), evolution-based studies of model systems have led to many therapeutically valuable insights into humans – something that I suspect a devotee of intelligent design creationism would be hard pressed to predict or explain (post link).

While humans are closely related to other mammals, it is immediately obvious that there are important differences – after all, people are instantly distinguishable from members of other closely related species, and certainly look and behave differently from mice. For example, the surface layer of our brains is extensively folded (such brains are known as gyrencephalic), while the brain of a mouse is as smooth as a baby’s bottom (and referred to as lissencephalic). In humans, the failure of the brain cortex to fold is known as lissencephaly, a disorder associated with several severe neurological defects. With the advent of more and more genomic sequence data, we can identify human-specific molecular (genomic) differences. Many of these sequence differences occur in regions of our DNA that regulate when and where specific genes are expressed. Sholtis & Noonan (1) provide an example: the HACNS1 locus is an 81 base-pair region that is highly conserved in various vertebrates, from birds to chimpanzees; there are 13 human-specific changes in this sequence that appear to alter its activity, leading to human-specific changes in the expression of nearby genes (↓). At this point, ~1,000 genetic elements that differ between humans and other vertebrates have been identified, and more are likely to emerge (2). Such human-specific changes can make modeling human-specific behaviors – at the cellular, tissue, organ, and organism level – in non-human model systems difficult and problematic (3, 4). It is for this reason that scientists have attempted to generate better human-specific systems.

One particularly promising approach is based on what are known as embryonic stem cells (ESCs) and pluripotent stem cells (PSCs). Human embryonic stem cells are generated from the inner cell mass of a human embryo, and so involve the destruction of that embryo – which raises a number of ethical and religious concerns as to when “life begins” (5) (more on that in a future post). Human pluripotent stem cells are isolated from adult tissues, but in most cases require invasive harvesting methods that limit their usefulness. Both ESCs and PSCs can be grown in the laboratory and can be induced to differentiate into what are known as gastruloids. Such gastruloids can develop anterior-posterior (head-tail), dorsal-ventral (back-belly), and left-right axes analogous to those found in embryos (6) and adults (top panel ↓). In the case of PSCs, the gastruloid (bottom panel ↓) is essentially a twin of the organism from which the PSCs were derived, a situation that raises difficult questions: is it a distinct individual? Is it the property of the donor, or the creation of a technician? The situation will be further complicated if (or rather, when) it becomes possible to generate viable embryos from such gastruloids.

 

The Nobel prize-winning work of Kazutoshi Takahashi and Shinya Yamanaka (7), who devised methods to take differentiated (somatic) human cells and reprogram them into ESC/PSC-like cells, known as induced pluripotent stem cells (iPSCs) (8), represented a technical breakthrough that jump-started this field. While the original methods derived cells from tissue biopsies, it is now possible to reprogram kidney epithelial cells recovered from urine, a non-invasive approach (9, 10). Subsequently, Madeline Lancaster, Jürgen Knoblich, and colleagues devised an approach by which such cells could be induced to form what they termed “cerebral organoids” (although Yoshiki Sasai and colleagues were the first to generate neuronal organoids); they used this method to examine the developmental defects associated with microcephaly (11). The value of the approach was rapidly recognized, and it has since been applied to a number of human conditions, including lissencephaly (12), Zika-virus-induced microcephaly (13), and Down syndrome (14); investigators have begun to exploit these methods to study a range of human diseases.

The production of cerebral organoids from reprogrammed human somatic cells has also attracted the attention of the media (15).  While “mini-brain” is certainly a catchier name, it is a less accurate description of a cerebral organoid – itself possibly a bit of an overstatement, since it is not clear exactly how “cerebral” such organoids are. For example, the developing brain is patterned by embryonic signals that establish its asymmetries; it forms at the anterior end of the neural tube (the nascent central nervous system) with distinctive anterior-posterior, dorsal-ventral, and left-right asymmetries, something that simple cerebral organoids do not display.  Moreover, current methods for generating cerebral organoids produce primarily what are known as neuroectodermal cells – our nervous system (and that of other vertebrates) is a specialized form of the embryo’s surface layer that gets internalized during development. In the embryo, the developing neuroectoderm interacts with cells of the circulatory system (capillaries, veins, and arteries), formed by endothelial cells and the pericytes that surround them. These cells, together with glial cells (astrocytes, a non-neuronal cell type), combine to form the blood-brain barrier.  Other glial cells (oligodendrocytes) are also present in the brain; in contrast, both types of glia (astrocytes and oligodendrocytes) are rare in the current generation of cerebral organoids. Finally, there are microglial cells, immune system cells that originate from outside the neuroectoderm; they invade and interact with neurons and glia as part of the brain’s dynamic neural system. The left panel of the figure shows, in highly schematic form, how these cells interact (16). The right panel is a drawing of neural tissue stained by the Golgi method (17), which reveals only ~3-5% of the neurons present.
There are at least as many glial cells present, as well as microglia, none of which are visible in the image. At this point, cerebral organoids typically contain few astrocytes and oligodendrocytes, no vasculature, and no microglia. Moreover, they grow to only about 1 to 3 mm in diameter over the course of 6 to 9 months – significantly smaller in volume than a fetal or newborn brain. While cerebral organoids can generate structures characteristic of retinal pigment epithelia (top figure) and photo-responsive neurons (18), such as those associated with the retina (an extension of the brain), it is not at all clear that there is any significant sensory input into the neuronal networks formed within a cerebral organoid, or any significant output, at least compared to the role the human brain plays in controlling bodily and mental functions.

The reasonable question, then, is whether a cerebral organoid – a relatively simple (though still complex) system of cells – is conscious. The question becomes more pressing as increasingly complex systems are developed, and such work is proceeding apace. Already researchers are manipulating the developing organoid’s environment to facilitate axis formation, and one can anticipate the introduction of vasculature. Indeed, the generation of microglia-like cells from iPSCs has been reported; such cells can be incorporated into cerebral organoids, where they appear to respond to neuronal damage in much the same way that microglia behave in intact neural tissue (19).

We can ask ourselves: what would convince us that a cerebral organoid, living within a laboratory incubator, was conscious? How would such consciousness manifest itself – through some specific pattern of neural activity, perhaps?  As a biologist, albeit one primarily interested in molecular and cellular systems, I discount the idea, proposed by some physicists and philosophers, as well as the more mystical, that consciousness is a universal property of matter (20, 21).  I take consciousness to be an emergent property of complex neural systems, generated by evolutionary mechanisms, built during embryonic and subsequent development, and influenced by social interactions (BLOG LINK), using information encoded within the human genome (something similar to this: A New Theory Explains How Consciousness Evolved). While this remains a distant concern in a world full of more immediate and pressing issues, it will be interesting to listen to the academic, social, and political debate over what to do with mini-brains as they grow in complexity and, perhaps inevitably, toward consciousness.

 

Footnotes and references

Thanks to Rebecca Klymkowsky, Esq. and Joshua Sanes, Ph.D. for editing and disciplinary support.

  1. Gene regulation and the origins of human biological uniqueness
  2. See also Human-specific loss of regulatory DNA and the evolution of human-specific traits
  3. The mouse trap
  4. Mice Fall Short as Test Subjects for Some of Humans’ Deadly Ills
  5. The status of the human embryo in various religions
  6. Interactions between Nodal and Wnt signalling Drive Robust Symmetry Breaking and Axial Organisation in Gastruloids (Embryonic Organoids)
  7. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors
  8. How iPS cells changed the world
  9. Generation of Induced Pluripotent Stem Cells from Urine
  10. Urine-derived induced pluripotent stem cells as a modeling tool to study rare human diseases
  11. Cerebral organoids model human brain development and microcephaly
  12. Human iPSC-Derived Cerebral Organoids Model Cellular Features of Lissencephaly and Reveal Prolonged Mitosis of Outer Radial Glia
  13. Using brain organoids to understand Zika virus-induced microcephaly
  14. Probing Down Syndrome with Mini Brains
  15. As an example, see The Beauty of “Mini Brains”
  16. Derived from Central nervous system pericytes in health and disease
  17. Golgi’s method
  18. Cell diversity and network dynamics in photosensitive human brain organoids
  19. Efficient derivation of microglia-like cells from human pluripotent stem cells
  20. The strange link between the human mind and quantum physics – BBC
  21. Can Quantum Physics Explain Consciousness?

From the Science March to the Classroom: Recognizing science in politics and politics in science

Jeanne Garbarino (with edits by Mike Klymkowsky)

Purely scientific discussions are marked by objective, open, logical, and skeptical thought; they can describe and explain natural phenomena or provide insights into broader questions. At the same time, scientific discussions are generally incomplete and tentative (sometimes for well-understood reasons). True advocates of the scientific method appreciate the value of its skeptical and tentative approach, and are willing to revise even long-held positions in response to new, empirically derived evidence or logical contradictions. Over time, science’s scope and conclusions have expanded and evolved dramatically; they provide an increasingly accurate working model of a wide range of processes, from the formation of the universe to the functioning of the human mind. The result is that the ubiquity of science’s impacts on society is clear and growing. However, discussing and debating the details of how science works, and the current consensus view on various phenomena such as global warming or the causes of cancer or autism, is very different from discussing and debating how a scientific recommendation fits into a societal framework. As described in a recent National Academies Press report on Communicating Science Effectively [link], “the decision to communicate science [outside of academia] always involves an ethical component. Choices about what scientific evidence to communicate and when, how, and to whom, are a reflection of values.”

Over the last ~150 years, the accelerating pace of advances in science and technology has enabled sustainable development, but it has also disrupted traditional social and economic patterns. Closing coal mines in response to climate predictions (and government regulations) may be sensible when viewed broadly, but it is disruptive to those who have, for generations, made a living mining coal. Similarly, a number of prognosticators have speculated on the impact of robotics and artificial intelligence on traditional socioeconomic roles and rules. Whether such impacts are worth the human costs is rarely explicitly considered and discussed in the public forum, or the classroom. As members of the scientific community, our educational and outreach efforts must go beyond simply promoting an appreciation of, and public support for, science. They must also consider its limitations, as well as its potential ethical and disruptive effects on individuals, communities, and societies. Making policy decisions with large socioeconomic impacts based on often tentative models risks alienating the public upon which modern science largely depends.

Citizens, experts or not, are often invited to contribute to debates and discussions surrounding science and technology at the local and national levels. Yet many people are not provided with the tools to engage fully and effectively in these discussions, which involve critically analyzing the scope, resolution, and stability of scientific conclusions. As such, the acceptance or rejection of scientific pronouncements is often framed as an instrument of political power, casting a shadow on core scientific principles and processes and framing scientists as partisan players in a political game. The watering down of the role of science and science-based policies in the public sphere, and the broad public complacency associated with (often government-based, regulatory) efforts, is currently being challenged by the international March For Science effort. The core principles and goals of this initiative [link] are well articulated and, to my mind, representative of a democratic society. However, a single march on a single day is not sufficient to promote a deep social transformation, or to promote widespread dispassionate argumentation and critical thinking. Perspectives on how scientific knowledge can help shape current and future events, as well as the importance of recognizing both the implications and the limits of science, must be taught early, often, and explicitly. Social and moral decisions are not mutually exclusive of scientific evidence and ideas, but their overlap is constrained by the values people hold.

In this light, I strongly believe the sociopolitical nature of science in practice must be taught alongside traditional science content. Understanding the human, social, economic, and broader (ecological) costs of action AND inaction can be used to highlight the importance of framing science in a human context. If the expectation is for members of our society to be able to evaluate and weigh in on scientific debates at all levels, I believe we are morally obligated to supply future generations with the tools required for full participation. This means that scientists and science educators, together with historians, philosophers, economists, and others, need to go beyond the teaching of simple facts and theories by considering how these facts and theories developed over time, their impact on people’s thinking, and the socioeconomic forces that shape societies. Highlighting the sociopolitical implications of science-based ideas in classrooms can also motivate students to take a greater interest in scientific learning in particular, and in related social and political topics in general. It can help close the gap between what is learned in school and what is required for the critical evaluation of scientific applications in society, and of how scientific ideas can and should be evaluated when it comes to social policy or personal beliefs.

A “science in a social context” approach to science teaching may also address the common student question, “When will I ever use this?” All too often, scientific content in schools is presented in ways that are abstract, decontextualized, and can feel irrelevant to students. Such an approach can leave a student unable or unwilling to engage in meaningful and substantive discussions of the applications and limitations of science in society. The entire concept of including cost-benefit analyses when considering the role of science in shaping decisions is often overlooked, as if scientific conclusions were black and white. Furthermore, the current culture of science in classrooms leaves little room for students to assess how scientific information does and does not align with their cultural identities, often framing science as inherently conflicting or alien, forcing a choice between one way of seeing the world and another, when a creative synthesis seems more reasonable. Shifting science education paradigms toward a strategy that promotes “education through science” (as opposed to “science through education”) recognizes student needs and motivations as critical to learning, and opens up channels for introducing science as something relevant and enriching to their lives. Centered on the German concept of Allgemeinbildung [link], which describes “the competence for participation in critical dialogue on currently important matters,” this approach has been found to be effective in motivating students to develop the skills needed to apply empirical evidence when forming arguments and making decisions.

Extending the idea of the perceived value of science in sociopolitical debates, students can build important frameworks for effectively engaging with society in the future. A relevant example is the increasing accessibility of genome editing technology, an area of science poised to deeply impact the future of society. A recent report [link] on the ethics of genome editing, assembled by a panel of clinicians and scientists (experts), recommended that the United States proceed — cautiously — with genome editing studies on human embryos. However, as pointed out [link], this panel failed to include ANY public participation in this decision. The effort fundamentally ignores “a more conscious evaluation of how this impacts social standing, stigma and identity, ethics that scientists often tend to cite pro forma and then swiftly scuttle.” As this discussion increasingly shifts into the mainstream, it will be essential to engage with the public in ways that promote careful and thoughtful analysis of scientific issues [link], as opposed to hyperbolic fear-mongering (as seen in most GMO discussions) [link] or the reservation of genetic engineering for the hyper-affluent. Another, more timely example involves the level at which an individual’s genome can be used to predict a future outcome or set of outcomes, and whether this information can be used by employers in any capacity [link]. By incorporating a clear description of how science is practiced (including the factors that influence what is studied, and what is done with the knowledge generated) alongside the transfer of traditional scientific knowledge, we can help provide future citizens with tools for critical evaluation as they navigate these uncharted waters.

It is also worth noting that the presentation of science in a sociopolitical context can support learning of more than just science. Current approaches to education tend to compartmentalize academic subjects, framing them as standalone lessons and philosophies. Students go through the motions of the school day, attending English class, then biology, then social studies, then trigonometry, etc., and the natural connections among subject areas are often lost. When scientific topics are framed in the context of sociopolitical discussions and debates, students have more opportunities to explore aspects of society that are, at face value, unrelated to science.

Drawing from lessons commonly taught in American history class, the Manhattan Project [link] offers an excellent opportunity to discuss the fundamentals of nuclear chemistry as well as the sociopolitical implications of a scientific discovery. At face value, harnessing nuclear fission marked a dramatic milestone for science. However, when this technology was pursued by the United States government during World War II — at the urging of the famed physicist Albert Einstein and others — it opened up an entirely new category of warfare, impacting individuals and communities at all levels. The reactions set off by the Manhattan Project, and the consequent 1945 bombings of Hiroshima and Nagasaki, are still felt in international power politics, agriculture, medicine, ecology, economics, research ethics, transparency in government, and, of course, the Presidency of the United States. The Manhattan Project thus represents an excellent case study of the relationship between science, technology, and society, and of its ongoing influence on those relationships. The double-edged nature of many scientific discoveries is an important feature of the scientific enterprise, and should be taught to students accordingly.

A more meaningful approach to science education requires including the social aspects of the scientific enterprise. When considering a heliocentric view of the solar system, it is worthwhile recognizing its social impacts as well as its scientific foundations (particularly before Kepler). If we want people to see science as a human enterprise that can inspire rather than dictate decisions and behaviors, we will need to rethink how science — and scientists — are viewed in the public eye. As written here [link], we need to restore the relationship between scientific knowledge and social goals by specifically recognizing how

‘So… cutting my funding, eh? Well, I’ve got a pair of mutant fists that say otherwise!’

science can be used, inappropriately, to drive public opinion. As an example, in the context of CO2-driven global warming, one could (with equal scientific validity) seek to reduce CO2 generation or increase CO2 sequestration. Science does not tell us which is better from a human perspective (although it could tell us which is likely to be easier, technically). While science should inform relevant policy, we must also acknowledge the limits of science and how it fits into many human contexts. There is clearly a need for scientists to increase participation in public discourse, and explicitly consider the uncertainties and risks (social, economic, political) associated with scientific observations. Additionally, scientists need to recognize the limits of their own expertise.

A pertinent example was Paul Ehrlich’s call to limit, in various draconian ways, human reproduction – a political call well beyond his expertise. In fact, recognizing when someone has gone beyond what science can legitimately tell us [link] could help rebuild respect for the value of science-based evidence. Scientists and science educators need to be cognizant of these limits, and to genuinely listen to the valid concerns and hesitations held by many in society, rather than dismissing them. The application of science has been, and will always be, a sociopolitical issue, and the more we can do to prepare future decision makers, the better off society will be.

Jeanne Garbarino, PhD, Director of Science Outreach, The Rockefeller University, NY, NY

Jeanne earned her Ph.D. in metabolic biology from Columbia University, followed by a postdoc in the Laboratory of Biochemical Genetics and Metabolism at The Rockefeller University, where she now serves as Director of Science Outreach. In this role, she works to provide K-12 communities with equitable access to authentic biomedical research opportunities and resources. You can find Jeanne on social media under the handle @JeanneGarb.

Power Posing & Science Education

Developing a coherent understanding of a scientific idea is neither trivial nor easy and it is counter-productive to pretend that it is.

For some time now, the idea of “active learning” (as if there were any other kind) has become a mantra in the science education community (see Active Learning Day in America: link). Yet the situation is demonstrably more complex, and depends upon what exactly is to be learned, something rarely stated explicitly in many published papers on active learning (an exception, with respect to understanding evolutionary mechanisms, can be found here: link).  The best of such work generally relies on results from multiple-choice “concept tests” that provide, at best, a limited (low-resolution) characterization of what students know. Moreover, it is clear that, much as in other areas, research into the impact of active learning strategies is rarely reproduced (see: link, link & link).

As is clear from the level of aberrant and nonsensical talk about the implications of “science” currently on display in both public and private spheres (link: link), the task of effective science education and rigorous, scientific (data-based) decision making is not a simple one.  As many have noted, there is little about modern science that is intuitively obvious; most of it is deeply counterintuitive or actively disconcerting (see link).  In the absence of a firm religious or philosophical perspective, scientific conclusions about the size and age of the Universe, the various processes driving evolution, and the often grotesque outcomes they can produce can be deeply troubling; one can easily embrace a solipsistic, egocentric, and/or fatalistic belief and behavioral system.

There are two videos of Richard Feynman that capture much of what is involved in, and required for, understanding a scientific idea and its implications. The first involves the basic scientific process, in which the path to a scientific understanding of a phenomenon begins with a guess – but a special kind of guess, namely one that implies unambiguous (and often quantitative) predictions of what future (or retrospective) observations will reveal (video: link).  This scientific discipline (link) implies a willingness to accept that scientifically meaningful ideas need to have explicit, definable, and observable implications, while those that do not are non-scientific and need to be discarded. Witness the stubborn adherence to demonstrably untrue ideas (such as where past Presidents were born, or how many people attended an event or voted legally), which marks superstitious and non-scientific worldviews.  Embracing a scientific perspective is not easy, nor is letting go of a favorite idea (or prejudice).  The difficulty of thinking and acting scientifically needs to be kept in mind by instructors; it is one of the reasons that peer review continues to be important – it reminds us that we are part of a community committed to the rules of scientific inquiry and its empirical foundations, and that we are accountable to that community.

The second Feynman video (video: link) captures his description of what it means to understand a particular phenomenon scientifically – in this case, why magnets attract one another.  The take-home message is that many (perhaps most) scientific ideas require a substantial amount of well-understood background information before one can even begin a scientifically meaningful consideration of the topic. Yet all too often such background information is not considered by those who develop (and deliver) courses and curricula. To use an example from my own work (in collaboration with Melanie Cooper @MSU), it is very rare to find course and curricular materials (textbooks and such) that explicitly recognize (or illustrate) the underlying assumptions involved in a scientific explanation.  Often the “central dogma” of molecular biology is taught as if it were simply a description of molecular processes, rather than explicitly recognizing that information flows from DNA outward (link) (and into DNA through mutation and selection).  Similarly, it is rare to see it stated explicitly that random collisions with other molecules supply the energy needed for chemical reactions to proceed or to break intermolecular interactions, or that the energy released upon complex formation is transferred to other molecules in the system (see: link), even though these events control essentially all aspects of the systems active in organisms, from gene expression to consciousness.

The basic conclusion is that achieving a working understanding of a scientific idea is hard, and that, while it requires an engaging and challenging teacher and a supportive and interactive community, it is also critical that students be presented with conceptually coherent content that acknowledges and presents all of the ideas needed to actually understand the concepts and observations upon which a scientific understanding is based (see “now for the hard part”: link).  Bottom line: there is no simple or painless path to understanding science – it involves a serious commitment on the part of the course designer as well as the student, the instructor, and the institution (see: link).

This brings us back to the popularity of the “active learning” movement, which all too often ignores course content and the establishment of meaningful learning outcomes.  Why, then, has it attracted such attention?  My own guess is that it provides a simple solution that circumvents the need for instructors (and course designers) to significantly modify the materials they present to students.  The current system rarely rewards or incentivizes faculty to carefully consider the content they are presenting, asking whether it is relevant and sufficient for students to achieve a working understanding of the subject – an understanding that enables the student to accurately interpret and then generate reasoned, evidence-based (plausible) responses.

Such a reflective reconsideration of a topic will often result in dramatic changes in course (and curricular) emphasis; traditional materials may be omitted or relegated to more specialized courses.  Such changes can provoke a negative response from other faculty, based on often inherited (and uncritically accepted) ideas about course “coverage”, as opposed to desired and realistic student learning outcomes.  Given the resistance of science faculty (particularly at institutions devoted to scientific research) to investing time in educational projects (often a reasonable strategy, given institutional reward systems), there is a seductive lure to easy fixes. One such fix is to leave the content unaltered and to “adopt a pose” in the classroom.

All of which brings me to the main problem – the frequency with which superficial (low-cost, but often ineffectual) strategies can inhibit and distract from significant but difficult reforms.  One cannot help but be reminded of other quick fixes for complex problems, the most recent being the idea, promulgated by Amy Cuddy (Harvard: link) and others, that adopting a “power pose” can overcome various forms of experience- and socioeconomic-based prejudices and injustices – as if overcoming a person’s experiences and situation were simply a matter of will. The message is that those who do not succeed have only themselves to blame, because the way to succeed is (basically) so damn simple.  So imagine one’s surprise (or not) upon discovering that the underlying biological claims associated with “power posing” are not true, or at least cannot be replicated, even by the co-authors of the original work (see Power Poser: When big ideas go bad: link).  It seems the lesson to be learned, both in science education and more generally, is that claims that seem too easy or universal are unlikely to be true.  It is worth remembering that even the most effective modern (and traditional) medicines all have potentially dangerous side effects. Why? Because they lead to significant changes to the system, and such modifications can discomfort the comfortable. This stands in stark contrast to non-scientific approaches; homeopathic “remedies” come to mind, which rely on placebo effects (which is not to say that taking ineffective remedies does not itself involve risks).

As in the case of effective medical treatments, the development and delivery of engaging and meaningful science education reform often requires challenging current assumptions and strategies – strategies frequently based in outdated traditions and influenced more by the constraints of class size and the logistics of testing than by the importance of achieving demonstrable enhancements of students’ working understanding of complex ideas.

Judging science fairs: 10/10 Privilege, 0/10 Ability

Every year, I make a point of rounding up students in my department and encouraging them to volunteer an evening judging our local science fair. This year, the fair was held at the start of April, and featured over 200 judges and hundreds of projects from young scientists in grades 5 through 12, with the winners going on to the National Championships.

President Obama welcomes some young scientists to the White House | Photo via USDAGov

Perhaps the most rewarding part of volunteering your time – and the reason I encourage colleagues to participate – is seeing just how excited the youth are about their projects. It doesn’t matter what the project is; most of the students are thrilled to be there. Add to that the fact that A Real Life Scientist (TM) wants to talk to them about their project, and it’s a highlight for many of the students. As a graduate student, the desire to do science for science’s sake is something that gets drilled out of you quickly as you follow the Williams Sonoma/Jamie Oliver Chemistry 101 Cookbook, where you add 50 g of Chemical A to 50 g of Chemical B and record what colour the mixture turns. Being around excitement based purely on the pursuit of science is refreshing.

However, the aspect of judging science fairs that I struggle with most is how to deal with the wide range of projects. How do you judge two projects on the same criteria when one used university resources (labs, mass spectrometers, centrifuges, etc.) and the other looked at how high balls bounce when you drop them? It becomes incredibly difficult as a judge to remain objective when one project is closer in scope to an undergraduate research project and the other is more your typical kitchen-cabinet/garage-equipment project. Even between two students who do the same project, there is variability depending on whether they have someone who can help them at home, or access to facilities through their school or their parents’ social network.

As the title suggests, this is an issue of privilege. Having people at home who can help, either directly by providing guidance and helping do the project, or indirectly by providing access to resources, gives these kids a huge leg up over their peers. As Erin pointed out in her piece last year:

A 2009 study of the Canada-Wide Science Fair found that fair participants were elite not just in their understanding of science, but in their finances and social network. The study looked at participants and winners from the 2002-2008 Fairs, and found that the students were more likely to come from advantaged middle- to upper-class families and had access to equipment in universities or laboratories through their social connections (emphasis mine).

So the youth who get to these fairs are definitely qualified to be there – they know their projects, they understand the scientific method, and they explain advanced concepts clearly. The problem becomes: how does one deal with this objectively? You can’t punish a student for using the resources available to them, especially if they show mastery of the concepts. But can you really evaluate them on the same stage, using the same criteria, as peers without access to those resources – especially when part of the criteria includes the scientific merit of the project?

The fair, to its credit, took a very proactive approach to this concern, which was especially prudent given the makeup of this area, where some kids have opportunities and others simply don’t. The organizers’ advice was to judge the projects independently, and judge the kids on the strength of their presentation and understanding. But again, there’s an element of privilege behind this: the kids who have parents and mentors who can coach them and prepare them to answer questions, or even just give them an opportunity and push them to practice their talk, will obviously do better.

The science fair acts as a microcosm for our entire academic system, from undergrad into graduate and professional school and into later careers. The students who can afford to volunteer in labs over the summer during undergrad are more likely to make it into highly competitive graduate programs because they have “relevant experience,” while their peers who have to work minimum-wage positions to pay tuition or student loans are left behind. The system is structured to reward privilege: when was the last time an undergrad or graduate scholarship considered “work history” as opposed to “relevant work experience”? Most ask for a resume or curriculum vitae, where one could theoretically include that experience, but if the ranking criteria look for “relevant” work experience, which working at Starbucks doesn’t count as, how do those students compete for the same scholarships? This is despite the fact that working any job helps you develop transferable skills, including time management and conflict resolution. And that doesn’t even begin to consider the negative stigma many professors hold toward this type of employment.

The question thus is: Are we okay with this? Are we okay with a system where, based purely on luck, some kids are given opportunities, while others aren’t? And if not, how do we start tackling it?

 

 

========================================

Disclaimer: I’ve focused on economic privilege here, but privilege comes in many different forms. I’m not going to wade into the other forms, but for some excellent reads, take a read of this, this and this.

Strategies for Hearing Impaired Students, Educators, and Colleagues and The Bigger Picture

Today, Sci-Ed is happy to welcome Rachel Wayne to the blog to discuss hearing impairment in higher education, and this is her third post on the topic (for the first post, click here, and her second post is available here). For more about Rachel, see the end of this post.

One of the biggest frustrations facing students with disabilities (or individuals with disabilities in general), I think, is how unfamiliar society as a whole is with their needs. This isn’t taught in schools, and some of us are simply never exposed to the experiences that would require us to educate ourselves about disabilities. Even worse, many of us seem afraid even to approach such individuals, for fear of not knowing how to conduct ourselves or of offending someone. The recommendations and suggestions below for communicating with hearing impaired individuals are by no means comprehensive, but they are a good place to start. Although they are written specifically with the educational system in mind, they are by no means circumscribed to a single context (I also encourage you to read Parts I and II before moving on).

Advice to Other Students and Colleagues
Remember that hearing impaired individuals need to see your lips: always face them when you are speaking and ensure your lips are visible. Do not shout, and do not over-enunciate. Be prepared to repeat yourself here and there. Remember that saying “Oh, don’t worry, it’s not important” can come across as rude or offensive: if it was important enough to say the first time, then it’s important enough to repeat, and not doing so may unintentionally make the individual feel left out or excluded. When possible, get the individual’s attention first; it’s the polite thing to do. In public, choose a place with adequate lighting and minimal background noise. In large groups, ask the individual where they would prefer to sit; I usually like to sit in the middle of a large table where possible so that I can see everyone. Please don’t ask us to turn up our hearing aids or suggest that we turn up the volume (reading Part I will help you understand why this may come across as offensive). When going to the movies, be flexible about theatres and showings for which personalized closed captioning (e.g., CaptiView) is available (Atif’s note: this information is often listed on the theatre’s website). Most importantly, be curious and don’t hesitate to seek feedback on how you’re doing!

This is an example of CaptiView, which plugs into your cup holder, and provides subtitles (click link to learn more)

Advice to Hearing-Impaired Students
Accommodations are useful, but individual needs will vary. Some accommodations will be self-driven, such as sitting at the front of the classroom, or familiarizing yourself with the material beforehand where possible in order to facilitate comprehension. However, other accommodations require registration with campus disability services, and I strongly recommend that individuals register as soon as possible to ensure that services can be supplied as soon as they are needed. Such accommodations might include note-takers, assistive listening devices (such as an FM system, where the professor wears a microphone that transmits the sound directly to the student’s hearing aid), or transcriptions. I also recommend introducing yourself to your professors during the first week of class so that they know who you are, and being specific in telling them exactly what you need from them. It might help to write this down in a list or send it by email to ensure you’ve covered all of your bases. If you are shy, this medium can be helpful too, but remember that it is the responsibility of Disability Services to ensure that your needs are met.

The lady on the right is wearing an FM system and the one on the left is wearing “boots” on her hearing aid | Click link to go to Phonak website

One strategy I have used in the clinic is to mention my hearing impairment to clients as soon as I meet them. I let them know that I need to see their lips when they speak, that I may ask them to repeat themselves, and that this doesn’t mean I wasn’t paying attention. I will then give them the opportunity to ask questions, if needed. This is a good educational opportunity for others, and it also gets any confusion out of the way. Excerpts from this approach lend themselves easily to other professional (and even colloquial) introductions.

Advice to Professors or Teaching Assistants of Hearing Impaired Students
Ensure that you are facing the student whenever possible. If you write on the board, minimize the amount of information you speak while your back is to the class, and avoid walking to areas of the room where the student cannot see you. Repeat questions asked by other individuals in the class, especially in large classrooms. Provide subtitles or transcriptions for all videos shown in class (even non-essential ones!). The student may ask you to use an FM system, in which case you may need to wear a microphone or a small device around your neck. Online lectures or Skype calls will require additional support, likely through real-time transcription.

If you are a conference organizer, please consider providing an audiovisual projection of the speaker onto a large screen if you are using a big room. This is helpful to everyone, especially when you have various accents in the room!

Advice to Educators and Clinical supervisors
You will need to discuss with the student what kinds of accommodations they need. However, be aware that the student may not necessarily know what they need or, in my case, how much help they actually do need. Use a recorder so that a client’s responses on an assessment can be verified later. Importantly, remember that this may be a touchy issue for your student; he or she will appreciate sensitivity and compassion in your approach (as I certainly did).

The Burden of Advocacy, and the Bigger Picture
Everyone has different ways of dealing with their disability. But the good news is that people are generally receptive to feedback and input. For example, at my Master’s defense the four faculty members on my committee sat as spread out in the large boardroom as they could be, and I knew this wasn’t going to work for me when I faced a similar setup for my oral comprehensive examination. This time, I asked all the faculty members and evaluators to sit closer so that I could read their lips, which was a seemingly terrifying thing to do since they were all there to evaluate me. Not only did this relieve a lot of the added cognitive strain (and the eye strain of trying to lip-read at a distance), but in their feedback the evaluators actually said they were impressed by my self-awareness. I still struggle with self-advocacy, however, such as when I ask the clinical department to keep the lights on during a PowerPoint presentation so I can see the speaker’s lips, but I’m getting better at it.

Nevertheless, advocacy is a social and moral issue. The unfortunate reality is that post-secondary education is generally not kind to individuals with disabilities. Such individuals often have to work harder than their peers to compensate for their added difficulties and achieve the same level of performance. As I have discussed, the process of obtaining accommodations may not be seamless, and challenges can act as both physical and psychological barriers to education. I hope that my experiences resonate and I hope that they will contribute to making post-secondary education more accessible to all.

But let’s be clear here: the problem is bigger than this; the challenges don’t stop once students leave the post-secondary institution and enter the workforce. I’ve been transparent in discussing the ways that my personal beliefs about my disability may have perpetuated my social and educational exclusion. However, I’ve begun to think more critically about the ways in which society shapes and reinforces implicit beliefs and stereotypes about individuals with disabilities. In turn, these promote an unspoken culture of shame and personal narratives of exclusion. Thus, the issue isn’t necessarily what is said about disabilities, but rather, what remains unsaid.

Generally speaking, individuals with disabilities have to speak up on their own behalf for accommodations and resources for integration. Consequently, this places the onus squarely on the shoulders of those who are most vulnerable. Social pressures and the desire for conformity often take precedence over individual needs, especially when individuals may have difficulty articulating them in the first place owing to shyness or fear of discrimination.

As educators and students, and as members of society in general, we will feel a diffused sense of responsibility. However, each of us needs to contribute our share to help fill in these gaps of silence. We must open ourselves to these difficult conversations about disability. We must negotiate an equitable place for disabled individuals within our society, and by extension, within the educational system.

Often, the amount of concern we have for an issue is directly proportional to the degree to which it affects us personally. However, I implore you to consider the impact of the growing prevalence of age-related hearing loss in a society in which we are living longer than ever. Take a look at your parents or your grandparents, and you will see that this is an issue from which no one is immune.

I don’t know what the solution is, but every instance that we don’t speak up perpetuates the silence. Until disability awareness is taught in schools, until it becomes part of a wider discussion, then we must step up, one student, one individual at a time. For if we don’t, then who will?

About Rachel

Rachel Wayne is a PhD student in the Clinical Psychology program at Queen’s University. Her research focuses on understanding ways in which we use environmental cues, context, and lip-reading to support conversational speech, particularly in noisy environments. The goal of this research is to provide a foundational basis for empirically supported rehabilitative programs for hearing-impaired individuals. Rachel can be contacted at 8rw16[at]queensu.ca

The Biggest Sci-Ed Stories of 2013

As 2013 comes to an end, it’s a time to reflect on the past year and look toward the future. 2013 was quite the year in science, with impressive discoveries and wide-reaching events. I’ve selected my five favourite science stories below, but I welcome your input and would love to hear your picks for the top science stories of 2013.

GoldieBlox and Diversity in Science
This isn’t a new issue by any stretch, but it is one of the most important issues facing science (and higher education in general). Diversity in science is essential for a number of reasons, but perhaps most importantly, it gives us different perspectives on problems and, thus, novel solutions. Within the scientific establishment, there have been many stories about discrimination and inappropriate conduct (see SciCurious’ excellent series of posts on the matter, including posts by friends of the blog @RimRK and @AmasianV), and, unfortunately, there are no easy solutions.

Perhaps the biggest diversity-related story this year was GoldieBlox. While initially this started as a media darling (who didn’t love the video?), further examination revealed deep-set problems in how they chose to approach the issue of gender representation in STEM disciplines.

There is a lot of change required to reach equality in science careers and to ensure that people are judged and given opportunities based on their work, not their privilege. Let’s hope that in 2014 we can start the ball rolling on that change.

Fracking and Energy
Hydraulic fracturing, or “fracking,” is a process by which natural gas is extracted from shale or coal beds deep in the ground. This is done by pumping millions of gallons of pressurized, chemically treated water into the ground, which breaks up the rock and allows the gas to escape and be collected at the surface. There are large deposits of gas stored in this manner throughout the Northeastern United States and Eastern/Atlantic Canada, and, as you can imagine, the economic incentives to extract this gas are huge. In fact, the Hon. Craig Leonard, Minister of Energy and Mines in New Brunswick, said:

Based on U.S. Department of Energy statistics, 15 trillion cubic feet of gas is enough to heat every home in New Brunswick for the next 630 years.

Or if used to generate electricity, it could supply all of New Brunswick’s residential, commercial and industrial needs for over 100 years.

In other words, it has the potential to provide a significant competitive advantage to our province.

These economic benefits, however, have to be weighed against the potential risks of pumping millions of gallons of water into the ground. The most apparent is that fracking requires an enormous amount of water, which could negatively impact other industries. In addition, this treated water could potentially open cracks into underground water supplies, contaminating our drinking water. Finally, what do we do with this water once it’s been used; how do we dispose of it safely and efficiently? These are all concerns that need to be addressed, along with other environmental issues that may arise. There’s no doubt that we need to plan for energy independence, and a way to revitalize the economy is a benefit no politician (or citizen) would like to pass up. However, we have to think long term and plan for the future.

Typhoon Haiyan and Global Warming
Typhoon Haiyan was one of the most powerful tropical storms on record, killing an estimated 6,111 people in the Philippines alone and causing over US$1.5 billion in damage. Currently, over 4.4 million people are homeless, which is almost the population of the Phoenix metro area (4.3 million in the 2010 Census) or the entire population of New Zealand (4.2 million in the 2013 Census). While the immediate threat has passed, other problems are now arising: many of the victims remain unburied, and sanitation is an important concern in preventing outbreaks of cholera, dysentery, and other communicable diseases.

Typhoon Haiyan highlights what we can expect with global warming. While the general understanding is that global warming will simply lead to warmer temperatures, that is not the whole story: a “side effect” is that we are more likely to see extreme weather events, including typhoons and tropical storms.

Politics impacting Science and the US Sequester
The US sequester had far-reaching implications for federal scientists. For those who rely on seasonal fieldwork, it could have eliminated a full year of research, while those who depended on grants being submitted this season had to reshuffle their research priorities. However, the effects aren’t limited to this calendar year. From this article in The Atlantic:

It’s not yet clear how much funding the National Labs will lose, but it will total tens of millions of dollars. Interrupting — or worse, halting — basic research in the physical, biological, and computational sciences would be devastating, both for science and for the many U.S. industries that rely on our national laboratory system to power their research and development efforts.

Instead, this drop in funding will force us to cancel all new programs and research initiatives, probably for at least two years. This sudden halt on new starts will freeze American science in place while the rest of the world races forward, and it will knock a generation of young scientists off their stride, ultimately costing billions in missed future opportunities.

It remains to be seen how the effects of the sequester play out. How long the effects last, and whether the US research industry simply stumbles or falls down, are still up in the air.

Commander Chris Hadfield and Science Communication
It’s no secret that I think Chris Hadfield is an amazing science communicator. His videos in space, the way he engaged with youth, and his “this is awesome” approach to science captured the imagination of the world while he was aboard the International Space Station. His personality and enthusiasm for science continued once he landed back on Earth, and he recently released his first book. One issue around communicating science that I feel quite strongly about is that we need more science communicators. We have a few – Bill Nye, Neil deGrasse Tyson, and others – but we need more, and Chris Hadfield helps show the breadth of scientific discovery; his personality and enthusiasm make him a great ambassador for science to young and old alike.

=================

Finally, we at PLOS Sci-Ed are now celebrating our first birthday. Since we launched last year, we’ve had over 180,000 visits, and we hope to continue growing in the future. A sincere thank you to PLOS Blogs community manager Victoria Costello for her constant support, and a heartfelt thank you to all our readers. We hope you continue to comment and share our work with your networks.

So these are my choices for the biggest science stories of 2013. What are yours?

Finally, if you enjoyed this post, consider reading The Biggest Public Health Stories of 2013, over on PLOS Public Health Perspectives!

Insights into Coping with Hearing Impairment within Post-Secondary Education

Today, Sci-Ed is happy to welcome Rachel Wayne back to the blog to discuss hearing impairment in higher education for her second post (for the first post, click here). For more about Rachel, see the end of this post.

Previously, I discussed five principles for communicating with hearing-impaired individuals. Now that you are acquainted with some of the communication challenges that hearing impaired individuals face, I want to discuss my experiences as a hearing impaired individual within the context of post-secondary education. I should stress that my experiences might not be reflective of others with hearing loss, as the level of support required will vary considerably between individuals.

My Experience in the Classroom and at Conferences
As an undergraduate student, I managed to duck many of the issues that hearing-impaired students face in the classroom. I was lucky in that my level of speech understanding allowed me to get by without formal accommodation so long as I arrived at class early enough to get a seat front and center. However, this is problematic if you have a professor who likes to wander around, or when students ask a question from somewhere in the back row in a large classroom. Occasionally, I would have to ask a friend or a neighbour to fill me in on something. However, because there was a lot of redundancy between the material taught in class and the contents of the textbook, I managed to get by for the most part without any major problems (although there was one exception, which I will get to shortly).

Given my relative ease in coping with hearing loss in the undergraduate classroom, I managed to convince myself that I could make up for all the added challenges of having a hearing impairment without much substantial outside help. Then I started graduate school. Although the classes in graduate school were smaller, I found myself struggling even more because the material was more difficult. As I mentioned previously, the process of compensating for hearing impairment often involves using context and experience (or even the PowerPoint slides) to fill in the missing gaps, but when the material is also challenging, it is difficult to concentrate on both at the same time. Quite simply, I had reached my limit of compensation. To add to this, most of my classes and meetings involved group discussion, so it became essential for me to pay attention to what my peers were saying, which is difficult when everyone is spread out in a large boardroom.

In graduate school, I wasn’t always able to show up early to get the best seat. While most people in undergrad shy away from sitting in the front, it seems that most graduate students prefer to sit at the center of the conference room table (or at least that seems like the natural thing to do when you are one of the first people to arrive in the room). I was extremely shy about asking my peers to switch seats with me in the boardroom so I could be in a better position to see everyone. I often did not even bother asking, which compromised my ability to participate in discussion. I eventually recognized that these obstacles were easily surmounted once I worked up the courage to ask my peers to trade seats with me, which they were more than willing to do.

Another issue I faced is that listening to someone with an accent is challenging for most people; however, whereas the average person can adapt pretty quickly, this is more difficult for someone with hearing loss, especially if there is background noise. In two cases during my undergraduate career, this required me to seek note-taking services for particular classes. But in the academic or working world, this isn’t always an option. For example, conferences bring researchers together from around the globe, and it can be frustrating to carry out a conversation with someone you cannot understand. It is frustrating for them too, and they often become self-conscious about their English ability and their accent, which adds awkwardness to the conversation. Secondly, when listening to a speaker with an accent, it is more difficult to follow along, especially when they are talking about a dense and difficult subject. This is also a problem I’ve encountered in working with ESL clients.

Conference Calls, Online Lectures, and Videos
This domain has really been a test of my advocacy, because most of the challenges I encountered here involved the process of obtaining supports for these mediums. I can recall two situations with two different professors over the course of my graduate career. The first one involved my assignment partner and me having to critique a lengthy video we had recorded of ourselves practicing therapeutic techniques in a simulated environment. We had recorded our session using a stationary camera, which made it difficult to see anyone’s lips, and the audio quality wasn’t particularly great either. I asked the professor for a video transcription, but this never materialized, which meant that it took my partner and me at least twice as long to critique our video as it should have, since she had to translate everything for me. In hindsight, I feel that I didn’t advocate for myself as much as I should have; I didn’t present the transcription as a necessity rather than a convenience, and if faced with the same situation again, I like to think I’d act differently. Although the professor undoubtedly had good intentions, I walked away feeling that an extension on the assignment wasn’t a fair solution for my partner and me.

In a second situation, we had an online conference call during one of our classes for a guest lecturer. I had assumed that since we’d be able to see the speaker’s face, it wouldn’t be an issue (and again, I was shy about advocating for myself at the time), but unfortunately, there was too much of a time delay between the audio and the video for it to be effective. Between shifting my attention back and forth between the speaker and the dense slides, I essentially got very little out of it. Thus, the professor and I agreed that we would need to recruit help for the second online guest lecture. In the end, this worked out really well. We moved the class to a classroom that was better equipped to support video, and I received an online transcription in real-time, which was very helpful to me (although not perfect, as they rarely are). However, I must confess that obtaining these supports felt like both a hassle and a struggle for all involved. I was also left with the impression that (at least at first), my professor didn’t appreciate the true extent of my disability and my needs, but in the end I certainly appreciated the efforts that the professor and disability services extended in order to make the lecture accessible to me.

My Experiences in the Clinic
Clinical or psychoeducational assessments rely on an accurate assessment of a client’s cognitive abilities or achievement. This frequently requires administering a test in which clients have to read out pseudowords (these are not real words, but sound like they could be). Differences between syllables and mistakes in pronunciation are very difficult for me to hear (since even a mild hearing loss affects the frequencies at which speech sounds like “s” or “th” are produced). My strategy was to record my client and have someone else check the recording at a later time, which usually worked well, and concerns were rarely raised. But this wasn’t always the case.

There is a memory test that requires the individual to repeat back words that he or she was asked to remember. Clients being assessed for dementia or cognitive impairment may make articulation errors that are indicative of a neurological condition, or they may falsely recall a word, naming a similar but incorrect word in place of the one they were asked to remember (for example, in a list containing several animals, they might remember “leopard” instead of “lion”). This is problematic for someone with a hearing impairment like me, because I often rely on contextual cues for speech understanding: if I wasn’t sure what I heard but knew it started with an “l,” I would deduce from context that the client was more likely to have said “lion” than another animal beginning with the same letter. But that deduction isn’t always correct. Moreover, certain populations of patients with neurodegenerative disease will mispronounce words in ways that are subtle even to a hearing person, and such mispronunciations are important diagnostic clues. No one questioned the accuracy of my clinical notes and test administration until my sixth and final practicum supervisor carefully reviewed the audio tapes I had always been keeping and noticed that I had made a scoring error, even though I was absolutely sure I had heard the words correctly.

The apparent fallibility of my hearing was upsetting to me. Not only did it force me to think back on how many other errors I might have made in previous assessments, it also challenged my conviction that I could be self-sufficient and minimize any indication that I might be “different.” Although this realization had been insidiously creeping up on me since I started graduate school (if not much earlier), its full impact didn’t manifest until I was forced to confront it directly. The notions of disability and shame that I had quietly intertwined quickly became disentangled for me.

As difficult as it was to hear, the conversation I had with my clinical supervisor dislodged me from my conditioned state of denial. The less I resisted, the more I began to appreciate the extent to which I had minimized the physical barriers to my education. I started to see how some of the barriers were self-imposed and how they shaped my actions; for example, my fear of how my peers would react to switching seats with me actually perpetuated feelings of exclusion within the classroom, because I was too afraid to ask for what I needed. At the time, I thought this was okay. A 20-year history of coping without additional supports had created a false sense of self-sufficiency, one that made me reluctant not only to seek help, but also to accept it.

Now, I can only wonder how many others there are who feel similarly. Or worse, how many people feel ashamed of their disability and don’t even know it.

About Rachel

Rachel Wayne is a PhD candidate in the Clinical Psychology program at Queen’s University. Her research focuses on understanding ways in which we use environmental cues, context, and lip-reading to support conversational speech, particularly in noisy environments. The goal of this research is to provide a foundational basis for empirically supported rehabilitative programs for hearing-impaired individuals. Rachel can be contacted at 8rw16[at]queensu.ca