Making education matter in higher education


It may seem self-evident that providing an effective education, that is, the kinds of educational experiences that lead to a useful bachelor's degree and serve as the foundation for life-long learning and growth, should be a prime aspirational driver of colleges and universities (1).  We might even expect academic departments to compete with one another to excel in the quality and effectiveness of their educational outcomes; they certainly compete to enhance their research reputations, a competition that is, at least in part, responsible for the retention of faculty, even those who stray from an ethical path. Institutions compete to lure research stars away from one another, often offering substantial pay raises and research support (“Recruiting or academic poaching?”).  Yet, in my own experience, a department’s performance in undergraduate educational outcomes never figures in the competition for institutional resources, such as support for students, new faculty positions, or necessary technical resources (2).

 I know of no example (and would be glad to hear of any) of a university hiring a professor based primarily on their effectiveness as an instructor (3).

In my last post (link), I suggested that increasing the emphasis on measures of departments’ educational effectiveness could help rebalance the importance of educational and research reputations, and perhaps incentivize institutions to be more consistent in enforcing ethical rules involving research malpractice and the abuse of students, both sexual and professional. Imagine if administrators (deans, provosts, and the like) were to withhold resources from departments performing below acceptable and competitive norms in undergraduate educational outcomes.

Outsourced teaching: motives, means and impacts

Sadly, as things stand, and particularly in many science departments, undergraduate educational outcomes have little if any impact on the perceived status of a department, as articulated by campus administrators. The result is that faculty are not incentivized to, and so rarely do, seriously consider the effectiveness of their department’s course requirements. Such a discussion would of necessity include evaluating whether a course’s learning goals are coherent and realistic, whether the course is delivered effectively, whether it engages students (or is deemed irrelevant), and whether students achieve the desired learning outcomes, in terms of knowledge and skills, including the ability to apply that knowledge effectively to new situations.  Departments, particularly research-focused (research-dependent) departments, often have faculty with low teaching loads, a situation that incentivizes the “outsourcing” of key aspects of their educational responsibilities.  Such outsourcing comes in two distinct forms. The first is requiring majors to take courses offered by other departments, even if those courses are not well designed, well delivered, or (in the worst cases) relevant to the major.  A classic example is requiring molecular biology students to take macroscopic physics (LINK) or calculus courses, without regard to whether the material these courses present is ever used within the major.  Expecting a student majoring in the life sciences to embrace a course that (perhaps rightly) seems irrelevant to their discipline can alienate the student, posing an unnecessary obstacle to success rather than providing needed knowledge and skills.  Generally, the incentives necessary to generate a relevant course, for example a molecular-level physics course that would engage molecular biology students, are simply not there.
A variant of this situation is requiring courses that are poorly designed or delivered (general chemistry is often used as the poster child). These are courses with high failure rates, sometimes justified in terms of “necessary rigor,” when in fact better course design could result, and has resulted, in lower failure rates and improved learning outcomes [link].  In addition, there are perverse incentives associated with requiring courses offered by other departments: they reduce the number of courses a department’s own faculty needs to teach, and they can lead to fewer students proceeding into upper-division courses.

The second type of outsourcing involves excusing tenure-track faculty from teaching introductory courses and replacing them with lower-paid instructors or lecturers.  Independently of whether instructors, lecturers, or tenure-track professors make for better teaching, replacing faculty with instructors sends an implicit message to students.  At the same time, the freedom of instructors and lecturers to adopt an effective (Socratic) approach to teaching is often severely constrained; common exams can force classes to move in lock step, whether or not that pace is optimal for student engagement and learning. Generally, instructors and lecturers do not have the freedom to adjust what they teach, to modify the emphasis and time spent on specific topics in response to their students’ needs. Instruction suffers when teachers cannot customize their interactions with students in response to where those students are intellectually.  This is particularly detrimental for underrepresented or underprepared students. Generally, a flexible and adaptive approach to instruction (including ancillary classes on how to cope with college: see An alternative to remedial college classes gets results) can address many issues and bring the majority of students to a level of competence, whereas tracking students into remedial classes can drive them out of a major, or out of college altogether (see Colleges Reinvent Classes to Keep More Students in Science, Redesigning a Large-Enrollment Introductory Biology Course, and Does Remediation Work for All Students?)

How can we address this imbalance? How can we reset the pecking order so that effective educational efforts actually matter to a department?

My (modest) suggestion is to base departmental rewards on objective measures of educational effectiveness.   By rewards I mean both rewards at the level of individuals (salary and status) and support for graduate students, faculty positions, start-up funds, and the like.  What if, for example, faculty in departments that excel at educating their students received a teaching bonus? Or what if the number of graduate students within a department supported by the institution were determined not by the number of classes those graduate students taught (courses that might not be particularly effective or engaging) but rather by the department’s undergraduate educational effectiveness, as measured by retention, time to degree, and learning outcomes (see below)?  The result could well be a drive within departments to improve course and curricular effectiveness so as to maximize education-linked rewards.  Consider laboratory courses, the courses most often taught by science graduate students: they are multi-hour, schedule-disrupting events of limited demonstrable educational effectiveness. Removing requirements for lab courses deemed unnecessary (or generating more effective versions) would be actively rewarded. (Of course, sanctions for continuing to offer ineffective courses would also be useful, but politically more problematic.) The same can be said, for example, of a biology department that requires a 4 to 5 credit hour physics or chemistry course, a course that could lead students to change majors.  Currently it is “easy” for a department to require its students to take such courses without critically evaluating whether they are “worth it” educationally.  Imagine how a department’s choices of required courses would change if high failure rates (which I would argue are a proxy for poorly designed and delivered courses) directly impacted the rewards the department reaps.
There would be an incentive to look critically at such courses, to determine whether they are necessary and, if so, whether they are well designed and delivered. Departments would serve their own interests by investing in the development of courses that better serve their disciplinary goals, courses likely to engage their students’ interests.

So how do we measure a department’s educational efficacy?

There are three obvious metrics: i) retention of students as majors (or, in the case of “service courses,” whether students master what the course claims to teach); ii) time to degree, by which I mean the percentage of students who graduate in 4 years, rather than the 6-year time point reported in response to federal regulations (six year graduation rate | background on graduation rates); and iii) objective measures of the learning outcomes and skills students attain. The first two are easy; universities already know these numbers.  Moreover, they are directly influenced by degree requirements: requiring students to take boring and/or apparently irrelevant courses drives a subset of students out of a major.  By making courses relevant and engaging, more students can be retained in a degree program. At the same time, thoughtful course design can help students pass through even the most rigorous (difficult) of such courses. The third, learning outcomes, is significantly more challenging to measure, since universal metrics are largely missing or superficial.  A few disciplines, such as chemistry, support standardized assessments, although one could argue about what such assessments measure.  Nevertheless, meaningful outcomes measures are necessary, in much the same way that law and medical boards and the Fundamentals of Engineering exam serve to help ensure (although they do not guarantee) the competence of practitioners. One could imagine using parts of standardized exams, such as discipline-specific GRE exams, to generate outcomes metrics, although more informative assessment instruments would clearly be preferable.
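To make the idea concrete, here is a minimal sketch of how the three metrics above might be combined into a single departmental effectiveness score. Everything here beyond the three metric names is an assumption of my own: the weights, the numbers, and the idea of a simple weighted average are purely illustrative, not something any institution currently uses.

```python
from dataclasses import dataclass

@dataclass
class DeptMetrics:
    retention_rate: float       # fraction of declared majors retained (0-1)
    four_year_grad_rate: float  # fraction graduating in 4 years (0-1)
    outcomes_score: float       # normalized learning-outcomes measure (0-1)

def effectiveness_score(m: DeptMetrics, weights=(0.3, 0.3, 0.4)) -> float:
    """Weighted composite of the three metrics.

    The weights are hypothetical; an institution would need to decide
    (and defend) its own weighting, particularly how heavily to count
    the hard-to-measure learning-outcomes component.
    """
    w_ret, w_grad, w_out = weights
    return (w_ret * m.retention_rate
            + w_grad * m.four_year_grad_rate
            + w_out * m.outcomes_score)

# Invented example numbers for a hypothetical department:
dept = DeptMetrics(retention_rate=0.80, four_year_grad_rate=0.65, outcomes_score=0.70)
print(round(effectiveness_score(dept), 3))  # prints 0.715
```

A score like this could then feed into the reward scheme sketched above (teaching bonuses, graduate student allocations); the hard part, as the post notes, is not the arithmetic but producing an outcomes_score that actually means something.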
The initiative in this area could be taken by professional societies, college consortia (such as the AAU), and research foundations, as a critical driver for education reform, increased effectiveness, and improved cost-benefit outcomes (something that could help address the growing income inequality in our country and make success in higher education an important factor contributing to an institution’s reputation).


A footnote or two…
1. My comments are primarily focused on research universities, since that is where my experience lies; these are, of course, the majority of the largest universities (in terms of student population).
2. Although my experience is limited, having spent my professorial career at a single institution, conversations with others lead me to conclude that my institution is not unique.
3. The one obvious exception is the hiring of coaches of sports teams, since their success in teaching (coaching) is more directly discernible, and more directly impacts institutional finances and reputation.

Author: Mike Klymkowsky

I am a Professor of Molecular, Cellular, and Developmental Biology at the University of Colorado Boulder. Growing up in Pennsylvania, I earned a bachelor’s degree in biophysics from Penn State, then moved to California and earned a Ph.D. from Caltech (working for a time at UCSF and the Haight-Ashbury Free Clinic). I was a Muscular Dystrophy Association post-doctoral fellow at University College London and the Rockefeller University before moving to Boulder. My research has involved a number of topics, including neurotransmitter receptor structure, cytoskeletal organization and ciliary function, neural crest formation, and signaling systems in the context of the clawed frog Xenopus laevis, as well as biology education research, leading to the development of the Biological Concepts Instrument (BCI), a suite of virtual laboratory activities, and biofundamentals, a re-designed introductory molecular biology course. I have a close collaboration with Melanie Cooper (@Michigan State) that has resulted in transformed (and demonstrably effective and engaging) course materials in general and organic chemistry known as CLUE: Chemistry, Life, the Universe & Everything. I was in the first class of Pew Biomedical Scholars and am a Fellow of the American Association for the Advancement of Science.

4 thoughts on “Making education matter in higher education”

  1. Rewarding retention and completion could backfire if courses and degrees just become “easier” to help boost your department’s stats. The slope is slippery.


    1. Yup, that is why defining the knowledge/skills outcomes that a program aims to achieve, then developing objective measures of student learning (going well beyond multiple-choice-style tests), and holding departments (and institutions) accountable is critical!

      We need institutions to care, and to get real about students’ learning.


  2. I know of several examples of a university hiring a professor based primarily on their effectiveness as an instructor. Consider the scientists hired into teaching staff positions (there are many). Some universities take advantage of education-focused scientists, who get buried in the classroom or under a heavy service load in a low-paid, dead-end career. An increasing number of institutions are actually hiring those who are especially effective as educators into the tenure track by creating new education-focused positions. I know several of these scientists who converted toward discipline-based education research (DBER), often training themselves in their new field by working closely alongside faculty who were trained in the theories and research methods of education. The problem remains when it comes time to evaluate their work for promotion and tenure. First, the federal funding available to (and needed by) DBER scientists is scarce. Second, it is unfair to have faculty in such positions reviewed by prominent scientists who may have attracted external funding for innovative education projects but who may not be qualified to evaluate education research, where qualitative methods often provide the best answers to a research question. I appreciate the need to raise the question of how to measure a department’s educational efficacy. One measure would be to examine how the department treats expert DBER and/or teaching-focused faculty members. Is the department respectful of education research as an area of scholarship with broad applications for solving problems, and as a research area that informs more than just student learning in a classroom? Are education-focused faculty members given opportunities to develop cultural capital, in which social networks with other expert educators are central, and where transactions with bench or field scientists are marked by the reciprocity, trust, and cooperation needed to advance both careers?
Only a few departments can answer yes to these questions, but change is happening!
