Visualizing and teaching evolution through synteny

Embracing the rationalist and empirically-based perspective of science is not easy. Modern science generates disconcerting ideas that can be difficult to accept and are often upsetting to philosophical or religious views of what gives meaning to existence [link]. In the context of evolutionary mechanisms within biology, the fact that variation is generated by random (stochastic) events, unpredictable at the level of the individual or within small populations, led to the rejection of Darwinian principles by many working scientists around the turn of the 20th century (see Bowler’s The Eclipse of Darwinism + link). Educational research studies, such as our own “Understanding randomness and its impact on student learning“, reinforce the fact that ideas involving stochastic processes are relevant to evolutionary, as well as cellular and molecular, biology and are inherently difficult for people to accept (see also: Why being human makes evolution hard to understand). Yet there is no escape from the science-based conclusion that stochastic events provide the raw material upon which evolutionary mechanisms act, as well as playing a key role in a wide range of molecular- and cellular-level processes, including the origin of various diseases, particularly cancer [Cancer is partly caused by bad luck](1).

All of which leaves the critical question, at least for educators, of how to best teach students about evolutionary mechanisms and outcomes. The problem becomes all the more urgent given the anti-science posturing of politicians and public “intellectuals”, on both the right and the left, together with various overt and covert attacks on the integrity of science education, such as a new Florida law that lets “anyone in Florida challenge what’s taught in schools”.

Just to be clear, we are not looking for students to simply “believe” in the role of evolutionary processes in generating the diversity of life on Earth, but rather to develop an understanding of how such processes work and how they make a wide range of observations scientifically intelligible. Of course the end result, unless you are prepared to abandon science altogether, is that you will find yourself forced to seriously consider the implications of inescapable scientific conclusions, no matter how weird and disconcerting they may be.

There are a number of educational strategies for teaching evolutionary processes, shaped in part by one’s disciplinary perspective. Here I consider just one, based on my background in cell and molecular biology. Genomicus is a web tool that “enables users to navigate in genomes in several dimensions: linearly along chromosome axes, transversely across different species, and chronologically along evolutionary time.” It is one of a number of recently developed web-based resources that make it possible to use the avalanche of DNA (gene and genomic) sequence data being generated by the scientific community. For example, the ExAC Browser enables one to examine genetic variation in over 60,000 unrelated people. Such tools supplement and extend a range of tools accessible through the U.S. National Library of Medicine / NIH / National Center for Biotechnology Information (NCBI) web portal (PubMed).

In the biofundamentals© / coreBio course (with an evolving text available here), we originally used the observation that members of our suborder of primates, the Haplorhini or dry-nosed primates, are, unlike most mammals, dependent on the presence of vitamin C (ascorbic acid) in their diet; without vitamin C we develop scurvy, a potentially lethal condition. While there may be positive reasons for vitamin C dependence, in biofundamentals© we present this observation in the context of small population size and a forgiving environment. A plausible scenario is that the ancestral population of the Haplorhini lost the L-gulonolactone oxidase (GULO) gene (see OMIM) needed for vitamin C synthesis. The remains of the GULO gene found in human and other Haplorhini genomes are mutated and non-functional, resulting in our requirement for dietary vitamin C.

How, you might ask, can we be so sure? Because we can transfer a functional mouse GULO gene into human cells; the result is that vitamin C-dependent human cells become vitamin C-independent (see: Functional rescue of vitamin C synthesis deficiency in human cells). This is yet another experimental result, similar to the ability of bacteria to accurately decode a human insulin gene, that supports the explanatory power of an evolutionary perspective (2).


In an environment in which vitamin C is plentiful in a population’s diet, the mutational loss of the GULO gene would be benign, that is, not selected against. In a small population, the stochastic effects of genetic drift can lead to the loss of genetic variants that are not strongly selected for. More to the point, once a gene’s function has been lost due to mutation, it is unlikely, although not impossible, that a subsequent mutation will repair it. Why? Because there are many more ways to break a molecular machine, such as the GULO enzyme, than there are ways to repair it. As the ancestral Haplorhini diverged from the ancestor of the vitamin C-independent Strepsirrhini (wet-nosed) group of primates, an event estimated to have occurred around 65 million years ago, their descendants had to deal with their dietary dependence on vitamin C either by remaining within their original (vitamin C-rich) environment or by adjusting their diet to include an adequate source of vitamin C.
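To make the drift argument concrete, here is a minimal, illustrative Python sketch (not part of the biofundamentals materials) of a Wright-Fisher model following a selectively neutral “broken GULO” allele; the population size, starting frequency, and number of replicates are arbitrary assumptions chosen only to show the behavior.

```python
import random

def wright_fisher(pop_size=50, start_freq=0.1, generations=1000, seed=None):
    """Follow a selectively neutral allele (e.g., a broken GULO gene) in a
    constant-size Wright-Fisher population. Returns the generation at which
    the allele is lost or fixed, and which outcome occurred."""
    rng = random.Random(seed)
    freq = start_freq
    for gen in range(1, generations + 1):
        # Each gene copy in the next generation is drawn at random from the
        # current generation (binomial sampling, no selection).
        copies = sum(1 for _ in range(pop_size) if rng.random() < freq)
        freq = copies / pop_size
        if freq == 0.0:
            return gen, "lost"
        if freq == 1.0:
            return gen, "fixed"
    return generations, "still segregating"

# With no selection, the broken allele occasionally drifts all the way to
# fixation, and it does so far more readily in a small population.
outcomes = [wright_fisher(seed=i) for i in range(1000)]
fixed = sum(1 for _, outcome in outcomes if outcome == "fixed")
print(f"fixed in {fixed} of 1000 replicates (expected ~10%, the starting frequency)")
```

Increasing pop_size in this sketch makes fixation of the neutral allele rarer and slower, which is the intuition behind the claim that drift matters most in small populations living in forgiving environments.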

At this point we can start to use Genomicus to examine the results of evolutionary processes (a YouTube video on using Genomicus)(3). In Genomicus a gene is indicated by a pointed box; for simplicity all genes are drawn as if they were the same size (they are not); different genes get different colors, and the direction of the box indicates the direction of RNA synthesis, the first stage of gene expression. Each horizontal line in the diagram below represents a segment of a chromosome from a particular species, while the blue lines to the left represent phylogenetic (evolutionary) relationships. If we search for the GULO gene in the mouse, we find it, and we discover that its orthologs (closely related genes in other species) can be found in a wide range of eukaryotes, that is, organisms whose cells have a nucleus (humans are eukaryotes).
We find a version of the GULO gene in single-celled eukaryotes, such as baker’s yeast, that appear to have diverged from other eukaryotes about 1,500,000,000 years ago (1,500 million years ago, abbreviated Mya). Among the mammalian genomes sequenced to date, the genes surrounding the GULO gene are also (largely) the same, a situation known as synteny (mammals are estimated to have shared a common ancestor about 184 Mya). Since genes can move around in a genome without necessarily disrupting their normal function(s), a topic for another day, synteny between distinct organisms is assumed to reflect the organization of genes in their common ancestor. The synteny around the GULO gene, and the presence of a GULO gene in yeast and other distantly related organisms, suggests that the ability to synthesize vitamin C is a trait conserved from the earliest eukaryotic ancestors.

Now a careful examination of this map (↑) reveals the absence of humans (Homo sapiens) and other Haplorhini primates – Whoa!!! What gives? The explanation is, it turns out, rather simple. Because of mutation, presumably in their common ancestor, there is no functional GULO gene in Haplorhini primates. But the Haplorhini are related to the rest of the mammals, aren’t they? We can test this assumption (and circumvent the absence of a functional GULO gene) by exploiting synteny – we search for other genes present in the syntenic region (↓). What do we find? We find that this region, with the exception of GULO, is present and conserved in the Haplorhini: the syntenic region around the GULO gene lies on human chromosome 8 (highlighted by the red box); the black box indicates the GULO region in the mouse. Similar syntenic regions are found in the homologous (evolutionarily related) chromosomes of other Haplorhini primates.
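Genomicus itself is a point-and-click browser, but the same cross-species question can be asked programmatically. The sketch below is a hypothetical complement to the exercise: it assumes the public Ensembl REST service (https://rest.ensembl.org) and its homology endpoint behave as documented, and the gene symbol, species names, and JSON field names are my assumptions rather than anything taken from Genomicus.

```python
# Illustrative sketch: ask Ensembl which species are reported to carry an
# ortholog of the mouse Gulo gene. Endpoint and response fields follow my
# reading of the public REST API docs and may need adjusting.
import requests

SERVER = "https://rest.ensembl.org"

def species_with_ortholog(symbol="Gulo", species="mus_musculus"):
    """Return a sorted list of species with a reported ortholog of `symbol`."""
    resp = requests.get(
        f"{SERVER}/homology/symbol/{species}/{symbol}",
        params={"type": "orthologues"},
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    homologies = resp.json()["data"][0]["homologies"]
    return sorted({h["target"]["species"] for h in homologies})

if __name__ == "__main__":
    hits = species_with_ortholog()
    print(f"{len(hits)} species with a reported Gulo ortholog")
    # The prediction from the synteny argument: Haplorhini primates such as
    # Homo sapiens should lack a functional ortholog (the human locus is the
    # GULOP pseudogene).
    print("homo_sapiens listed?", "homo_sapiens" in hits)
```

In a classroom, this kind of script is best treated as a cross-check on what the Genomicus display shows; the synteny itself, the heart of the argument, is still most easily explored visually.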

The end result of our Genomicus exercise is a set of molecular-level observations, unknown to those who built the original anatomy-based classification scheme, that support the evolutionary relationships among the Haplorhini and, more broadly, among mammals. Based on these observations, we can make a number of unambiguous and readily testable predictions. A newly discovered Haplorhini primate would be predicted to share the same syntenic region and to be missing a functional GULO gene, whereas a newly discovered Strepsirrhini primate (or any mammal that does not require dietary ascorbic acid) should have a functional GULO gene within this syntenic region. Similarly, we can explain the genomic similarities between humans and closely related primates, such as the gorilla, gibbon, orangutan, and chimpanzee, as well as make testable predictions about the genomic organization of extinct relatives, such as Neanderthals and Denisovans, using DNA recovered from fossils [link].

It remains to be seen how best to use these tools in a classroom context and whether having students use such tools influences their working understanding, and more generally, their acceptance of evolutionary mechanisms. That said, this is an approach that enables students to explore real data and to develop plausible and predictive explanations for a range of genomic discoveries, likely to be relevant both to understanding how humans came to be and to answering pragmatic questions about the roles of specific mutations and genetic variations in behavior, anatomy, and disease susceptibility.

Some footnotes:

(1) Interested in a magnetic bumper image? Visit: http://www.cafepress.com/bioliteracy

(2) An insight completely missed (unpredicted and unexplained) by any creationist / intelligent design approach to biology.

(3) Note, I have no connection that I know of with the Genomicus team, but I thank Tyler Square (soon to be at UC Berkeley) for bringing it to my attention.

The trivialization of science education

It’s time for universities to accept their role in scientific illiteracy.  

There is a growing problem with scientific illiteracy and its close relative, scientific over-confidence. While understanding science, by which most people seem to mean technological skills, or even the ability to program a device (1), is purported to be a critical competitive factor in our society, we see a parallel explosion of pseudo-scientific beliefs, often religiously held. Advocates of a gluten-free paleo-diet battle it out with orthodox vegans for a position on the Mount Rushmore of self-righteousness; at the same time, astronomers and astrophysicists rebrand themselves as astrobiologists (a currently imaginary discipline), while a subset of theoretical physicists, and the occasional evolutionary biologist, claim to have rendered ethicists and philosophers obsolete (oh, if it were only so). There are many reasons for this situation, most of which are probably innate to the human condition. Our roots are in the vitamin C-requiring Haplorhini (dry-nosed) primate group; we did not evolve to think scientifically, and scientific thinking does not come easily for most of us, or for any of us over long periods of time (2). That the sciences are referred to as disciplines reflects this: it requires constant vigilance, self-reflection, and the critical skepticism of knowledgeable colleagues to build coherent, predictive, and empirically validated models of the Universe (and ourselves). In point of fact, it is amazing that our models of the Universe have become so accurate, particularly as they are counter-intuitive and often seem incredible, in the true meaning of the word.

Many social institutions claim to be in the business of developing and supporting scientific literacy and disciplinary expertise, most obviously colleges and universities. Unfortunately, there are several reasons to question the general efficacy of their efforts and several factors that have led to this failure. There is the general tendency (although exactly how widespread it is remains unclear; I cannot find appropriate statistics on this question) to require non-science students to take one, two, or more “natural science” courses, often with associated laboratory sections, as a way to “enhance literacy and knowledge of one or more scientific disciplines, and enhance those reasoning and observing skills that are necessary to evaluate issues with scientific content” (source).

That such a requirement will “enable students to understand the current state of knowledge in at least one scientific discipline, with specific reference to important past discoveries and the directions of current development; to gain experience in scientific observation and measurement, in organizing and quantifying results, in drawing conclusions from data, and in understanding the uncertainties and limitations of the results; and to acquire sufficient general scientific vocabulary and methodology to find additional information about scientific issues, to evaluate it critically, and to make informed decisions” (source) suggests a rather serious level of faculty/institutional disdain or apathy toward observable learning outcomes, devotional levels of wishful thinking, or simple hubris. To my knowledge there is no objective evidence to support the premise that such requirements achieve these outcomes – which renders the benefits of such requirements problematic, to say the least (link).

On the other hand, such requirements have clear and measurable costs, going beyond the simple burden of added and potentially ineffective or off-putting course credit hours. The frequent requirement for multi-hour laboratory courses impacts students’ ability to schedule other courses. It would be an interesting study to examine how, independently of benefit, such laboratory course requirements impact students’ retention and time to degree, that is, bluntly put, the costs to students and their families.

Now, if there were objective evidence that taking such courses improved students’ understanding of a specific disciplinary science and its application, perhaps the benefit would warrant the cost. But one can be forgiven for assuming a less charitable driver, namely science departments’ self-interest in using laboratory and other non-major course requirements as a means to support graduate students. Clearly there is a need for objective metrics of scientific, that is disciplinary, literacy and learning outcomes.

And this brings up another cause for concern. Recently, there has been a movement within the science education research community to attempt to quantify learning in terms of what are known as “forced choice testing instruments,” that is, tests that rely on true/false and multiple-choice questions, an actively anti-Socratic strategy. In some cases, these tests claim to be research-based. As someone involved in the development of such a testing instrument (the Biology Concepts Instrument, or BCI), it is clear to me that such tests can serve a useful role in helping to identify areas in which student understanding is weak or confused [example], but whether they can provide an accurate or, at the end of the day, meaningful measure of whether students have developed a working understanding of complex concepts and the broader meaning of observations is problematic at best.

Establishing such a level of understanding relies on Socratic, that is, dynamic and adaptive, evaluations: can the learner clearly explain, either to experts or to other students, the source and implications of their assumptions? This is the gold standard for monitoring disciplinary understanding, yet it is being increasingly sidelined by those who rely on forced choice tests to evaluate learning outcomes and to support their favorite pedagogical strategies (examples available upon request). In point of fact, it is often difficult to discern, in most science education research studies, what students have come to master: what exactly they know, what they can explain, and what they can do with their knowledge. Rather unfortunately, this is not a problem restricted to non-majors taking required science courses; majors can also graduate with a fragmented and partially, or totally, incoherent understanding of key ideas and their empirical foundations.

So what are the common features of a functional understanding of a particular scientific discipline or, more accurately, a sub-discipline? A few ideas seem relevant. A proficient practitioner needs to be realistic about their own understanding. We need to teach disciplinary (and general) humility – no one actually understands all aspects of most scientific processes. This is a point made by Fernbach & Sloman in their recent essay, “Why We Believe Obvious Untruths.” Humility about our understanding has a number of beneficial aspects. It helps keep us skeptical when faced with, and asked to accept, sweeping generalizations.

Such skepticism is part of a broader perspective, common among working scientists, namely the ability to distinguish the obvious from the unlikely, the implausible, and the impossible. When considering a scientific claim, the first criterion is whether there is a plausible mechanism that can be called upon to explain it, or whether it violates some well-established “law of nature”. Claims of “zero waste” processes butt up against the laws of thermodynamics.

Going further, we need to consider how an observation or conclusion fits with other well-established principles, which means that we have to be aware of those principles, while acknowledging that we are not universal experts in all aspects of science. A molecular biologist may recognize that quantum mechanics dictates the geometries of atomic bonding interactions without being able to formally describe the intricacies of a molecule’s wave equation. Similarly, a physicist might think twice before ignoring the evolutionary history of a species and claiming that quantum mechanics explains consciousness, or that consciousness is a universal property of matter. Such a level of disciplinary expertise can take extended experience to establish, but it is critical to conveying to students what disciplinary mastery involves; it is the major justification for having disciplinary practitioners (professors) as instructors.

From a more prosaic educational perspective, other key factors need to be acknowledged, namely a realistic appreciation of what people can learn in the time available to them, together with an understanding of at least some of their underlying motivations. Which is to say, the relevance of a particular course to disciplinary goals or desired educational outcomes needs to be made explicit and as engaging as possible, or at least not overtly off-putting, something that can happen when a poor unsuspecting molecular biology major takes a course in macroscopic physics taught by an instructor who believes organisms are deducible from first principles based on the conditions of the big bang. Respecting the learner requires that we explicitly acknowledge that an unbridled thirst for an empirical, self-critical mastery of a discipline is not a basic human trait, although it is something that can be cultivated and may emerge given proper care. Understanding the real constraints that act on meaningful learning can help focus courses on what is foundational, and help eliminate the irrelevant or the excessively esoteric.

Unintended consequences arise from “pretending” to teach students, both majors and non-science majors, science. One is an erosion of humility in the face of the complexity of science and our own limited understanding, a point made in a recent National Academy report that linked superficial knowledge with more non-scientific attitudes. The end result is an enhancement of what is known as the Dunning-Kruger effect, the tendency of people to seriously overestimate their own expertise: “the effect describes the way people who are the least competent at a task often rate their skills as exceptionally high because they are too ignorant to know what it would mean to have the skill”.

A person with a severe case of Dunning-Kruger-itis is likely to lose respect for people who actually know what they are talking about. The importance of true expertise is further eroded and trivialized by the current trend of having photogenic and well-spoken experts in one domain pretend to talk, or rather to pontificate, authoritatively on another (3). In a world of complex and arcane scientific disciplines, the role of a science guy or gal can promote rather than dispel scientific illiteracy.

We see the effects of the lack of scientific humility when people speak outside of their domain of established expertise to make claims of certainty, a common feature of the conspiracy theorist. An oft-used example is the claim that vaccines cause autism (they don’t), when the actual causes of autism, whether genetic and/or environmental, are currently unknown and the subject of active scientific study. An honest expert can, in all humility, identify the limits of current knowledge as well as what is known for certain. Unfortunately, revealing and ameliorating the severity of someone’s Dunning-Kruger-itis involves a civil and constructive Socratic interrogation, something of an endangered species in this day and age, when unseemly certainty and unwarranted posturing have replaced circumspect and critical discourse. Any useful evaluation of what someone knows demands the time and effort inherent in a Socratic discourse: the willingness to explain how one knows what one thinks one knows, together with a reflective consideration of its implications and of what other trained observers, people demonstrably proficient in the discipline, have concluded. It cannot be replaced by a multiple-choice test.

Perhaps what is needed is a new (old) model of encouraging in students, as well as in politicians and pundits, an understanding of where science comes from, the habits of mind involved, and the limits of, and constraints on, our current understanding. At the college level, courses that replace superficial familiarity and unwarranted certainty with humble self-reflection and intellectual modesty might help treat the symptoms of Dunning-Kruger-itis, even though the underlying disease may be incurable, and perhaps genetically linked to other aspects of human neuronal processing.


Some footnotes:

  1. After all, why else are rather distinct disciplines lumped together as STEM (science, technology, engineering, and mathematics)?
  2. Given the long history of Homo sapiens before the appearance of science, it seems likely that such patterns of thinking are an unintended consequence of selection for some other trait, and of the subsequent emergence of a (perhaps excessively) complex and self-reflective nervous system.
  3. Another example of Neil Postman’s premise that education is being replaced by edutainment (see “Amusing Ourselves to Death”).

Book Review: An Astronaut’s Guide to Life on Earth

Commander Chris Hadfield captured the world’s imagination last year when, from 13 March to 13 May 2013, he served as the first Canadian Commander of the International Space Station. While aboard the ISS, Commander Hadfield did a series of “experiments,” both for scientists and, perhaps most importantly, for youth. These included genuinely interesting questions like “How do you cry in space?” and “How do you cut your nails?” and the always important “How do you go to the bathroom?” His amicable nature and genuinely infectious enthusiasm brought science to the masses, and helped inspire thousands of youth.

Recently, Chris Hadfield released his book – “An Astronaut’s Guide to Life on Earth.” My sister waited in line for 3 hours at our local Costco to get me a signed copy for my birthday, and I finally got around to reading it for this review. The book follows the life of Chris Hadfield as he becomes the commander of Expedition 35, detailing his attitude and the path he took to become the first Canadian Commander of the ISS. The book is split into three broad sections leading up to Expedition 35 titled “Pre-Launch,” “Liftoff” and “Coming Down to Earth,” with several chapters within each section.

The book was fascinating to me – Hadfield is a hybrid pilot-engineer-scientist-lab rat. His expertise is in engineering and as a test pilot, but throughout the book he describes how his work is interdisciplinary, and how he has to have a broad understanding of several domains in order to be effective. In addition to his role as an astronaut and Commander, he is also a fully fledged lab rat: people on the ground ask him questions about how he’s feeling, take samples while he’s in space and after he returns, and measure how quickly he readjusts to life back on Earth, in order to further our understanding of how life in space impacts the human body. Since, at some point, we hope to explore the stars, any data we can get on how astronauts respond to life in space is valuable.

One of my favourite parts of the book was how it didn’t just cover the mundane, it relished it. He spends pages describing the drills he went through, and how important having a strong grasp of the fundamentals was for his success. I found this refreshing – too often in science we glorify the achievements but ignore all the hard work that went into them. A breakthrough in the lab might take months or even years of work before things go right, and it helps to have someone acknowledge that not only do things (often) not work, but that them not working is not the end of the world. This was a refreshing take on the scientific method, and really highlighted the value in “the grind” of slowly perfecting your skills.

Click the book cover for purchasing options!

He also has a certain brand of “folksy wisdom” that is inspiring in its own way. It’s not inspirational in the nauseating sense in which these things are often written, but more practical. He stresses the importance of reading the team dynamic before getting involved, for example, or of really understanding the nuts and bolts of what you’re doing, but at no point does it feel patronizing or “hey, look at me, I’m an astronaut!” For many budding scientists, the idea of trudging through another page of equations, or washing beakers, or just doing the mundane, less exciting parts of science can make you apathetic and bored. Hadfield takes these moments and stresses just how important it is to learn from them, and to make sure you know exactly why they are important. I highly recommend the book to anyone interested in STEM careers, and especially those early in their careers.

To purchase, check out Chris Hadfield’s official website.


Featured image: Commander Hadfield performed at the 2013 Canada Day celebrations in Ottawa, ON | Picture courtesy David Johnson, click for more info

Say Hello to the Nation’s T-rex

“Anyone here doesn’t like T-rex?”

No hands were raised, but the packed auditorium welcomed Jack Horner with laughter and enthusiasm. The paleontologist climbed onto the Smithsonian stage and, with flailing arms, declared: “I’m going to talk about a very special T-rex”.

A replica of a T-rex skull with human size comparison.

The special Tyrannosaurus traveled via FedEx truck.

It was packed inside wood crates.

This famous dinosaur has a stage name: the Wankel T-rex. A fossil arm bone was first uncovered by Kathy Wankel (pronounced WON-kal) in 1988, and the skeleton was later recovered by Horner’s team of paleontologists and graduate students.

Jack Horner. Photo by the author.

The Wankel T-rex was the largest and most complete specimen found at the time (and still stands as one of the most complete ever found, right after Sue). Last week, the dinosaur made its trip to Washington, DC, to reside at the National Museum of Natural History. It was received by director Kirk Johnson and the press with great fanfare. Photographers fought to get a close-up shot of the locked crates. One box, of a size that could house a widescreen TV, was labeled “WOW”. It contained a piece of the T-rex mandible, cheekbones, and banana-sized teeth.

A few days later, the community got a chance to get involved. I joined in as the crowd filled the Smithsonian auditorium to hear from Horner, Johnson, and curator Matt Carrano. We were even introduced to Ms. Wankel, who recounted her discovery tale.

“Wait a minute, I found something out here,” said Ms. Wankel’s husband, Tom. “I think I found something bigger out here,” said Ms. Wankel, referring to an old and porous dinosaur arm bone.

Kirk Johnson. Photo by the author.

“I wonder if it’s real.”

I’d risk saying that’s the most frequent question museum visitors ask. They have to hear it from the museum staff: yes, those bones belonged to a tyrant dinosaur that lived over 60 million years ago.

Visitors to the Smithsonian will get an affirmative answer to that question, and will hopefully marvel at that titanic creature. Perhaps this celebrity T-rex will attract many new people to the science museum.

After all, there’s not a person who dislikes T-rex.

Judging science fairs: 10/10 Privilege, 0/10 Ability

Every year, I make a point of rounding up students in my department and encouraging them to volunteer one evening judging our local science fair. This year, the fair was held at the start of April, and featured over 200 judges and hundreds of projects from young scientists in grades 5 through to 12, with the winners going on to the National Championships.

President Obama welcomes some young scientists to the White House | Photo via USDAGov

Perhaps the most rewarding part of volunteering your time, and the reason I encourage colleagues to participate, is seeing just how excited the youth are about their projects. It doesn’t matter what the project is, most of the students are thrilled to be there. Add to that the fact that A Real Life Scientist (TM) wants to talk to them about their project, and it’s a highlight for many of the students. As a graduate student, the desire to do science for science’s sake is something that gets drilled out of you quickly as you follow the Williams Sonoma/Jamie Oliver Chemistry 101 Cookbook, where you add 50 g of Chemical A to 50 g of Chemical B and record what colour the mixture turns. Being around excitement based purely on the pursuit of science is refreshing.

However, the aspect of judging science fairs that I struggle with most is how to deal with the wide range of projects. How do you judge two projects on the same criteria when one used university resources (labs, mass spectrometers, centrifuges, etc.) and the other looked at how high balls bounce when you drop them? It becomes incredibly difficult as a judge to remain objective when one project is closer in scope to an undergraduate research project and the other is more your typical kitchen cabinet/garage equipment project. Even between two students who do the same project, there is variability depending on whether or not they have someone who can help them at home, or access to facilities through their school or their parents’ social network.

As the title suggests, this is an issue of privilege. Having people at home who can help, either directly by providing guidance and helping do the project, or indirectly by providing access to resources, gives these kids a huge leg up over their peers. As Erin pointed out in her piece last year:

A 2009 study of the Canada-Wide Science Fair found that fair participants were elite not just in their understanding of science, but in their finances and social network. The study looked at participants and winners from the 2002-2008 Fairs, and found that the students were more likely to come from advantaged middle to upper class families and had access to equipment in universities or laboratories through their social connections (emphasis mine).

So the youth who are getting to these fairs are definitely qualified to be there – they know the project, and they understand the scientific method. They’re explaining advanced concepts clearly and understand the material. The problem becomes: how does one objectively deal with this? You can’t punish a student for using the resources available to them, especially if they show mastery of the concepts. But can you really evaluate them on the same stage, using the same criteria, as their peers without access to those resources, especially when part of the criteria includes the scientific merit of the project?

The fair, to its credit, took a very proactive approach to this concern, which was especially prudent given the makeup of this area, where some kids have opportunities and others simply don’t. Their advice was to judge the projects independently, and to judge the kids on the strength of their presentation and understanding. But again, there’s an element of privilege behind this. The kids who have parents and mentors who can coach them and prepare them for how to answer questions, or even just give them an opportunity/push them to practice their talk, will obviously do better.

The science fair acts as a microcosm for our entire academic system, from undergrad into graduate and professional school and into later careers. The students who can afford to volunteer in labs over the summer during undergrad are more likely to make it into highly competitive graduate programs because they have “relevant experience,” while their peers who have to work minimum-wage positions to pay tuition or student loans are going to be left behind. The system is structured to reward privilege – when was the last time an undergrad or graduate scholarship considered “work history” as opposed to “relevant work experience”? Most ask for a resume or curriculum vitae, where one could theoretically include that experience, but if the ranking criteria look for “relevant” work experience, which working at Starbucks doesn’t count as, how do those students compete for the same scholarships? This is despite the fact that working any job helps you develop transferable skills, including time management and conflict resolution. And that doesn’t even begin to consider the negative stigma many professors hold for this type of employment.

The question thus is: Are we okay with this? Are we okay with a system where, based purely on luck, some kids are given opportunities, while others aren’t? And if not, how do we start tackling it?

 

 

========================================

Disclaimer: I’ve focused on economic privilege here, but privilege comes in many different forms. I’m not going to wade into the other forms, but for some excellent reads, take a read of this, this and this.

Strategies for Hearing-Impaired Students, Educators, and Colleagues, and the Bigger Picture

Today, Sci-Ed is happy to welcome Rachel Wayne to the blog to discuss hearing impairment in higher education, and this is her third post on the topic (for the first post, click here, and her second post is available here). For more about Rachel, see the end of this post.

One of the biggest frustrations facing students with disabilities (or individuals with disabilities in general), I think, is society’s general lack of familiarity with their needs. This isn’t taught in schools, and some of us are simply never exposed to the experiences that would require us to educate ourselves about disabilities. Even worse, the general sentiment often seems to be that we are afraid to even approach such individuals for fear of not knowing how to conduct ourselves or for fear of offending someone. The recommendations and suggestions below for communicating with hearing-impaired individuals are by no means comprehensive, but they are a good place to start. Although they are written specifically with the educational system in mind, they are by no means circumscribed to a single context (I also encourage you to read Parts I and II before moving on).

Advice to Other Students and Colleagues
Remember that hearing-impaired individuals need to see your lips. Always face them when you are speaking and ensure your lips are visible. Do not shout. Do not over-enunciate. Be prepared to repeat yourself here and there. Remember that saying “Oh, don’t worry, it’s not important” can be considered rude or offensive; if it was important enough to say the first time, then it’s important enough to repeat. Not doing so may unintentionally make the individual feel left out or excluded. When possible, get the individual’s attention first; it’s the polite thing to do. In public, choose a place with adequate lighting and minimal background noise. In large groups, ask the individual where they would prefer to sit; I usually like to sit in the middle of a large table where possible so that I can see everyone. Please don’t ask us to turn up our hearing aids or suggest that we turn up the volume (reading Part I will help you understand why this may come across as offensive). When going to the movies, be flexible about theatres and showings for which personalized closed captioning (e.g., CaptiView) is available (Atif Note: this information is often listed on theatre websites). Most importantly, be curious and don’t hesitate to seek feedback on how you’re doing!

This is an example of CaptiView, which plugs into your cup holder, and provides subtitles (click link to learn more)

Advice to Hearing-Impaired Students
Accommodations are useful, but individual needs will vary. Some of these accommodations will be self-driven, such as sitting at the front of the classroom, or familiarizing yourself with the material beforehand, where possible, in order to facilitate comprehension. However, other accommodations require registration with campus disability services, and I strongly recommend that individuals register as soon as possible to ensure that services can be supplied as soon as they are needed. Such accommodations might include note-takers, assistive listening devices (such as an FM system, where the professor wears a microphone that transmits the sound directly to the student’s hearing aid), or transcriptions. I also recommend that students introduce themselves to their professors during the first week of class so that they know who you are, and be specific in telling them exactly what you need from them. It might help to write this down in a list or by email to ensure you’ve covered all of your bases. If you are shy, this medium can be helpful too, but remember that it is the responsibility of Disability Services to ensure that your needs are met.

The lady on the right is wearing an FM system and the one on the left is wearing “boots” on her hearing aid | Click link to go to Phonak website

One strategy I have used in the clinic is to mention my hearing impairment to clients as soon as I meet them. I let them know that I need to see their lips when they speak and that I may ask them to repeat themselves, and that this doesn’t mean I wasn’t paying attention. I then give them the opportunity to ask questions, if needed. This is a good educational opportunity for others, and it also gets any confusion out of the way. Excerpts from this approach also lend themselves easily to other professional (and even colloquial) introductions.

Advice to Professors or Teaching Assistants of Hearing Impaired Students
Ensure that you are facing the student wherever possible. If you write on the board, minimize the amount of information that you speak while your back is to the class. Avoid walking around the room where the student cannot see you. Repeat questions spoken by other individuals in the class, especially in large classrooms. Ensure that you provide subtitles or transcriptions for all videos shown in the classroom (even if they are non-essential!). The student may ask you to wear an FM system, so you may need to wear a microphone or a small device around your neck. Online lectures or Skype calls will require additional support, likely through real-time transcription.

If you are a conference organizer, please consider providing an audiovisual projection of the speaker onto a large screen if you are using a big room. This is helpful to everyone, especially when you have various accents in the room!

Advice to Educators and Clinical Supervisors
You will need to discuss with the student what kind of accommodations they need. However, you need to be aware that the student may not necessarily know what they need, or in my case, how much help they actually do need. Use a recorder to verify a client’s responses on an assessment. Importantly, remember that this may be a touchy issue for your student. He or she will appreciate sensitivity and compassion in your approach (as I certainly did).

The Burden of Advocacy, and the Bigger Picture
Everyone has different ways of dealing with their disability. But the good news is that people are generally receptive to feedback and input. In one example, my Master’s defense involved all four faculty members on my committee being as spread out in the large boardroom as they could be, and I knew that this wasn’t going to work for me when I was faced with a similar situation for my oral comprehensive examination. This time, I asked all the faculty members and evaluators to sit closer so that I could read their lips, which was a seemingly terrifying thing to do since they were all there to evaluate me. Not only did this relieve a lot of the added intellectual challenges (and the eye strain from trying to lip-read at a distance), but in their feedback the evaluators actually expressed that they were impressed by my self-awareness. I still struggle with self-advocacy, however, such as when I ask the clinical department to keep the lights on during a PowerPoint presentation so I can see the speaker’s lips, but I’m getting better at it.

Nevertheless, advocacy is a social and moral issue. The unfortunate reality is that post-secondary education is generally not kind to individuals with disabilities. Such individuals often have to work harder than their peers to compensate for their added difficulties and achieve the same level of performance. As I have discussed, the process of obtaining accommodations may not be seamless, and challenges can act as both physical and psychological barriers to education. I hope that my experiences resonate and I hope that they will contribute to making post-secondary education more accessible to all.

But let’s be clear here: the problem is bigger than this; the challenges don’t stop once students leave the post-secondary institution and enter the workforce. I’ve been transparent in discussing the ways that my personal beliefs about my disability may have perpetuated my social and educational exclusion. However, I’ve begun to think more critically about the ways in which society shapes and reinforces implicit beliefs and stereotypes about individuals with disabilities. In turn, these promote an unspoken culture of shame and personal narratives of exclusion. Thus, the issue isn’t necessarily what is said about disabilities, but rather, what remains unsaid.

Generally speaking, individuals with disabilities have to speak up on their own behalf for accommodations and resources for integration. Consequently, this places the onus squarely on the shoulders of those who are most vulnerable. Social pressures and the desire for conformity often take precedence over individual needs, especially when individuals may have difficulty articulating them in the first place owing to shyness or fear of discrimination.

As educators and students, and as members of society in general, we will feel a diffuse sense of responsibility. However, each of us needs to contribute our share to help fill in these gaps of silence. We must open ourselves to these difficult conversations about disability. We must negotiate an equitable place for disabled individuals within our society and, by extension, within the educational system.

Often, the amount of concern we have for an issue is directly proportional to the degree to which it affects us personally. However, I implore you to consider the impact of the growing prevalence of age-related hearing loss in a society in which we are living longer than ever. Take a look at your parents or your grandparents, and you will see that this is an issue from which no one is immune.

I don’t know what the solution is, but every instance that we don’t speak up perpetuates the silence. Until disability awareness is taught in schools, until it becomes part of a wider discussion, then we must step up, one student, one individual at a time. For if we don’t, then who will?

About Rachel

Rachel Wayne is a PhD student in the Clinical Psychology program at Queen’s University. Her research focuses on understanding ways in which we use environmental cues, context, and lip-reading to support conversational speech, particularly in noisy environments. The goal of this research is to provide a foundational basis for empirically supported rehabilitative programs for hearing-impaired individuals. Rachel can be contacted at 8rw16[at]queensu.ca

Insights into Coping with Hearing Impairment within Post-Secondary Education

Today, Sci-Ed is happy to welcome Rachel Wayne back to the blog to discuss hearing impairment in higher education for her second post (for the first post, click here). For more about Rachel, see the end of this post.

Previously, I discussed five principles for communicating with hearing-impaired individuals. Now that you are acquainted with some of the communication challenges that hearing impaired individuals face, I want to discuss my experiences as a hearing impaired individual within the context of post-secondary education. I should stress that my experiences might not be reflective of others with hearing loss, as the level of support required will vary considerably between individuals.

My Experience in the Classroom and at Conferences
As an undergraduate student, I managed to duck many of the issues that hearing-impaired students face in the classroom. I was lucky in that my level of speech understanding allowed me to get by without formal accommodation so long as I arrived at class early enough to get a seat front and center. However, this is problematic if you have a professor who likes to wander around, or when students ask a question from somewhere in the back row in a large classroom. Occasionally, I would have to ask a friend or a neighbour to fill me in on something. However, because there was a lot of redundancy between the material taught in class and the contents of the textbook, I managed to get by for the most part without any major problems (although there was one exception, which I will get to shortly).

Given my relative ease in coping with hearing loss in the undergraduate classroom, I managed to convince myself that I could make up for all the added challenges of having a hearing impairment without much substantial outside help. Then I started graduate school. Although the classes in graduate school were smaller, I found myself struggling even more because the material was more difficult. As I mentioned previously, the process of compensating for hearing impairment often involves using context and experience (or even the PowerPoint slides) to fill in the missing gaps, but when the material is also challenging, it is difficult to concentrate on both at the same time. Quite simply, I had reached my limit of compensation. To add to this, most of my classes and meetings involved group discussion, so it became essential for me to pay attention to what my peers were saying, which is difficult when everyone is spread out in a large boardroom.

In graduate school, I wasn’t always able to show up early to get the best seat. While most people in undergrad shy away from sitting at the front, it seems that most graduate students prefer to sit at the center of the conference room table (or at least that seems like the natural thing to do when you are one of the first people to arrive in the room). I was extremely shy about asking my peers if I could switch seats with them in the boardroom so I could be in a better position to see everyone. I often did not even bother asking, which compromised my ability to participate in discussion. I eventually recognized that these obstacles were easily surmounted once I worked up the courage to ask my peers to trade seats with me, which they were more than willing to do.

Another issue I faced is that listening to someone with an accent is challenging for most people. However, whereas the average person can adapt pretty quickly, this is more difficult for someone with hearing loss, especially if there is noise in the background. In two cases during my undergraduate career, this required me to seek note-taking services for particular classes. But in the academic or working world, this isn’t always an option. For example, conferences bring researchers together from around the globe, and it can be frustrating to carry on a conversation with someone you cannot understand. Not only is it frustrating for them as well, but they often become self-conscious about their English ability and their accent, which adds awkwardness to the conversation. Secondly, when listening to a speaker with an accent, it is more difficult to follow along, especially when they are talking about a very dense and difficult subject. This is also a problem I’ve encountered in working with ESL clients.

Conference Calls, Online Lectures, and Videos
This domain has really been a test of my advocacy, because most of the challenges I encountered here involved the process of obtaining supports for these media. I can recall two situations with two different professors over the course of my graduate career. The first one involved my assignment partner and me having to critique a lengthy video we had recorded of ourselves practicing therapeutic techniques in a simulated environment. This required us to record our session using a stationary camera, which made it difficult to see anyone’s lips, and the audio quality wasn’t particularly great either. I asked the professor for a video transcription, but this never materialized, which meant that it took my partner and me at least twice as long to critique our video as it should have, since she had to translate everything for me. In hindsight, I felt that I didn’t advocate for myself as much as I should have; if faced with the same situation again, I like to think I’d act differently. I didn’t present the transcription as a necessity rather than a convenience. Although the professor undoubtedly had good intentions, I walked away feeling that an extension on the assignment wasn’t a fair solution for my partner and me.

In a second situation, we had an online conference call during one of our classes for a guest lecturer. I had assumed that since we’d be able to see the speaker’s face, it wouldn’t be an issue (and again, I was shy about advocating for myself at the time), but unfortunately, there was too much of a time delay between the audio and the video for it to be effective. Between shifting my attention back and forth between the speaker and the dense slides, I essentially got very little out of it. Thus, the professor and I agreed that we would need to recruit help for the second online guest lecture. In the end, this worked out really well. We moved the class to a room that was better equipped to support video, and I received an online transcription in real time, which was very helpful to me (although not perfect, as they rarely are). However, I must confess that obtaining these supports felt like both a hassle and a struggle for all involved. I was also left with the impression that, at least at first, my professor didn’t appreciate the true extent of my disability and my needs, but in the end I certainly appreciated the efforts that the professor and disability services extended in order to make the lecture accessible to me.

My Experiences in the Clinic
Clinical or psychoeducational assessments rely on an accurate assessment of a client’s cognitive abilities or achievement. This frequently requires administering a test in which clients have to read out pseudowords (these are not real words, but they sound like they could be). Differences between syllables and mistakes in pronunciation are very difficult for me to hear (since even a mild hearing loss affects the frequencies at which speech sounds like “s” or “th” are produced). My strategy was to record my client and have someone else check the recording over at a later time, which usually worked well, and concerns were rarely raised. But this wasn’t always the case.

There is a memory test that requires the individual to repeat back words that he or she was asked to remember. Clients being assessed for dementia or cognitive impairment may make articulation errors that are indicative of a neurological condition, or they may falsely recall a word, naming a similar but incorrect word in place of the one they were asked to remember (for example, in a list containing several animals, they might remember “leopard” instead of “lion”). This is problematic for someone with a hearing impairment like myself, because I often rely on contextual cues for speech understanding. If I wasn’t sure what I heard, but I knew it was something that started with an ‘l’, I would deduce, based on contextual information, that the client was more likely to have said “lion” than another animal beginning with the same letter. But this isn’t always the case. Moreover, certain populations of patients with neurodegenerative disease will mispronounce words in ways that are subtle even to a hearing person, and such mispronunciations are important diagnostic clues. No one questioned the accuracy of my clinical notes and administration until my sixth and final practicum supervisor carefully reviewed the audio tapes that I had always been keeping and noticed that I had made an error in my scoring, even though I was absolutely sure that I had heard the words correctly.

The apparent fallibility of my hearing was upsetting to me. Not only did it force me to think back on how many other errors I might have made in previous assessments, it really challenged my notion that I could be self-sufficient and minimize any indications that I might be “different”. Although this is a revelation that had been insidiously creeping up on me since I started graduate school (if not much earlier), its full impact didn’t manifest until I was forced to confront it directly. The notions of disability and shame that I had quietly developed quickly became disentangled for me.

As difficult as it was for me to hear, the conversation I had with my clinical supervisor dislocated me from my conditioned state of denial. The less I resisted, the more I began to appreciate the extent to which I had minimized the physical barriers to my education. I started to see how some of the barriers were self-imposed, and the impact they had on my actions; for example, my fear of how my peers would react to switching seats with me actually perpetuated feelings of exclusion within the classroom, because I was too afraid to ask for what I needed. At the time I thought this was okay. A 20-year history of coping without additional supports enabled a false sense of self-sufficiency, one that made me even more reluctant not only to seek help, but also to accept it.

Now, I can only wonder how many others there are who feel similarly. Or worse, I wonder how many people feel ashamed of their disability and don’t even know it.

About Rachel

Rachel Wayne is a PhD candidate in the Clinical Psychology program at Queen’s University. Her research focuses on understanding ways in which we use environmental cues, context, and lip-reading to support conversational speech, particularly in noisy environments. The goal of this research is to provide a foundational basis for empirically supported rehabilitative programs for hearing-impaired individuals. Rachel can be contacted at 8rw16[at]queensu.ca