Paul LeBlanc. Students First: Equity, Access, and Opportunity in Higher Education. Cambridge, MA: Harvard Education Press, 2021.
Alternatives to University
Paul LeBlanc is the president of Southern New Hampshire University, which famously reinvented itself with its “College for America” program aimed at providing a degree pathway to working adults who had some college credit but no degree. It now serves more than 170K learners.
A unique feature of the program is that it is 100% competency-based: you can get credit for knowledge you already have (often learned on the job) even if you never took a formal class in it, as long as you can demonstrate the corresponding competencies.
A bad grade on a test doesn’t follow you through the course, and the course isn’t transcripted until you’ve demonstrated competency on every learning goal.
A set of interviews SNHU conducted with low-income Gen Z students of color in the Los Angeles area repeatedly yielded the comment “I love learning, but I hate school.” Ouch.
This book argues for much more pervasive adoption of CBE (competency-based education) throughout higher education. We don’t let pilots fly a plane without proving they can handle the simulator, and conversely, we don’t withhold a pilot’s license from someone who can handle the simulator just because they didn’t take all the right classes. Why don’t we treat all degrees the same way?
Like others before him, LeBlanc curses the credit-hour, which was originally designed to track faculty effort but has somehow become the yardstick of student learning (“You took 120 credit-hours worth of classes, therefore you should get a degree attesting to your knowledge”).
Credit hours also present a bureaucratic obstacle because they’re used as the basis of financial aid decisions, grants, student loans, etc.: if you’re not “taking a certain number of credit-hours,” you may not qualify for certain kinds of assistance. So doing CBE creatively sometimes requires shoehorning things into a credit-hours mold. Worse, since credit-hours are ill-defined and not connected to learner behavior, unscrupulous actors can game them (grade inflation, credit-hour inflation, etc.) to exploit these systems.
And as many students can attest, the slightest administrative hiccup can lead to crisis. Miss a couple of weeks because your mom is sick? Drop the course, you’ll never catch up in time. Owe $100 in admin fees? You can’t enroll or access your transcripts to apply for jobs. Job changed your shift and now you can’t attend the same English class? Sorry, you need to drop it, and now you don’t have enough units to keep qualifying for financial aid, and by the way, since you dropped out, pay back the Pell money you’ve already spent this semester. For community college students, most of whom are working adults (among Pell recipients, about half work more than 20 hours a week), going through college is like being one flat tire away from disaster. This is why (he argues) proposals for free college or student-debt forgiveness ultimately won’t work: they don’t address the root causes of why 40% of students who start a 4-year degree fail to finish within 6 years.
LeBlanc also argues (correctly, IMO) that in most courses, the only thing grades tell you concretely is that A-students did better than B-students and so on. They don’t attest to specific knowledge, and in fact depending on how the overall score is weighted, they don’t even guarantee that the list of competencies demonstrated by an A-student is a strict superset of that of a B-student.
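A toy example (my own hypothetical weights, students, and mastery threshold, not from the book) makes that last point concrete: the higher weighted grade can go to a student who has not demonstrated everything the lower-scoring student has.

```python
# Toy illustration (hypothetical weights, students, and mastery threshold):
# a weighted course grade can rank one student above another even though the
# higher-scoring student has NOT demonstrated everything the other has.

weights = {"exams": 0.6, "homework": 0.3, "project": 0.1}

# Per-component scores; treat >= 0.7 on a component as "demonstrated it".
alice = {"exams": 0.95, "homework": 0.90, "project": 0.40}  # never got the project working
bob   = {"exams": 0.75, "homework": 0.80, "project": 0.90}  # demonstrated everything

def weighted_grade(scores):
    return sum(weights[c] * scores[c] for c in weights)

def demonstrated(scores, threshold=0.7):
    return {c for c in weights if scores[c] >= threshold}

print(round(weighted_grade(alice), 2))  # 0.88 -> the "A" student
print(round(weighted_grade(bob), 2))    # 0.78 -> the "B" student
print(demonstrated(alice))              # exams and homework only, no project
print(demonstrated(bob))                # all three, which is not a subset of Alice's set
```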
And credit hours do nothing to recognize learning that happens outside the confines of a traditional course.
One result of the meaninglessness of credit-hours and grades as measures of learning is the difficulty of transferring credit for courses taken outside your institution.
A more pernicious result is that unlike several decades ago, a college degree by itself is no longer a “signal” of competency trusted by employers: just 11% of business leaders strongly agreed that college graduates have the skills and competencies needed for their workplace.
In contrast, as Todd Rose points out in The End of Average:
Bloom showed that when students were allowed a little flexibility in the pace of their learning, the vast majority of students ended up performing extremely well... These two insights--that speed does not equal ability, and that there are no universally fast or slow learners--had actually been recognized several decades before Bloom's pioneering study.
Public perception is that in exploring higher ed, students care about competitive sports, a social life, and being perceived as elite/going to a selective school. (LeBlanc refers to this as the “faith-based” view of higher ed: if a school has a large enough library, enough faculty rock stars, and high-enough average entering SAT scores, what comes out will be fine.) In fact, the 2020 report American Priorities for Higher Education found that affordability, flexibility, and workplace-relevant/applied learning (ideally in the form of projects that reflect real-world examples) were students’ highest-ranked priorities.
These priorities correlate with social mobility. Although I don’t believe the C4A program started this way, there’s a strong equity argument for flexible learning. If you’re poor, everything takes more time (getting around, staying on top of chores, etc.), and because you have less control of your time, it all cuts into time for learning. (Eric Brewer used to say about his work on Technology Infrastructure for Emerging Regions that “it’s expensive being poor.”) SNHU’s program is fully asynchronous and online.
CBE basically says:
1. Enumerate what students must be able to do (competencies) with respect to some body of knowledge.
2. Come up with reliable assessments that concretely allow the student to demonstrate they can actually do it.
#2 is hard because in traditional universities it is usually left to the broad discretion of faculty who have no training in authoring reliable and valid assessments. (Check yourself: if you’re an assessment author, do you know what those terms mean?)
Indeed, demonstrating mastery through doing often requires assessing the process instead of (or in addition to) the product. (This is true, e.g., in agile software engineering, where following the process is just as important as creating the artifact, maybe more so.) Because such things can be hard to assess, we often test what is easy to assess, the equivalent of searching for lost keys under the streetlamp because that’s where the light is brightest.
In a CBE world, a course is a list of competencies, and the only possible grades for each one are “Mastered” or “Not yet.” And competencies may stack: demonstrating competency in a basic skill (say, writing a loop) will be required often in doing more advanced competencies, and so constantly re-exercised after mastery is demonstrated (a form of “spiraling”, a K-12 term about building mastery over time).
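A minimal sketch of that structure, assuming nothing about SNHU’s actual systems: a course is just a list of competencies, each either mastered or “not yet,” and advanced competencies name the basic skills they keep re-exercising.

```python
# Minimal sketch (my illustration, not SNHU's system): a course as a list of
# competencies graded only "mastered" or "not yet", with stacking expressed
# as the basic skills an advanced competency keeps re-exercising.
from dataclasses import dataclass, field

@dataclass
class Competency:
    name: str
    builds_on: list = field(default_factory=list)  # basic skills re-exercised here
    mastered: bool = False                         # the only two "grades": yes / not yet

loops  = Competency("write a loop over a collection")
files  = Competency("read and parse a text file", builds_on=[loops])
report = Competency("summarize raw data in a report", builds_on=[loops, files])

course = [loops, files, report]

def transcriptable(course):
    """The course isn't transcripted until every competency is mastered."""
    return all(c.mastered for c in course)

loops.mastered = True
print(transcriptable(course))              # False: still "not yet" on the rest
print([c.name for c in report.builds_on])  # basic skills the advanced competency re-exercises
```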
In a CBE world, getting into college would be easy, but graduating would be hard. And getting into C4A is easy: not in the sense that they’ll take just anyone, but in the sense that they turn around enrollment inquiries in a day, much more “customer focused” than traditional academia.
What about “soft skills”? By age 40, social science/humanities graduates have often closed the wage gap with STEM graduates, because technical skills can become obsolete and need to be refreshed often, but “soft skills” don’t. (If you’re a good writer or team leader, you just get better at it the more you do it. If you’re a good Perl programmer…well…) These can be assessed, albeit manually and with a good rubric that clearly states the objective. For example, a Team Dynamics competency set might include:
Describes some types of conflicts that could occur with difficult employees and identifies how these might affect team communication
Describes some ways to manage emotions when working with team members
Describes specific ways in which age and cultural factors will be considered in helping the team work together effectively
…and so on.
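As a sketch of how such a rubric might be structured for manual assessment (my own illustration and wording, not SNHU’s actual rubric), each criterion restates the objective and the evaluator records only mastered / not yet plus written feedback:

```python
# Sketch of a manually scored rubric (my illustration, not SNHU's rubric):
# each criterion states the objective explicitly, and the evaluator records
# "mastered" / "not yet" along with quick, personalized feedback.
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    objective: str      # what the student must demonstrably do
    mastered: bool      # mastered / not yet
    feedback: str = ""  # evaluator's written feedback to the student

team_dynamics = [
    RubricCriterion(
        objective="Describes conflicts with difficult employees and how they affect team communication",
        mastered=True,
        feedback="Concrete scenarios; good link to communication breakdowns."),
    RubricCriterion(
        objective="Describes ways to manage emotions when working with team members",
        mastered=False,
        feedback="Not yet: name specific techniques rather than general advice."),
]

# The competency set is signed off only when every criterion is mastered.
print(all(c.mastered for c in team_dynamics))  # False
```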
In the end, the “cultural” obstacles traditional academics must overcome to embrace CBE are (a) recognizing that learning can happen anywhere (not just in the classroom) and assessment doesn’t depend on how or where the learning happened, and (b) disaggregating faculty responsibilities of mentoring, teaching courses, and assessing/evaluating, possibly into different roles performed by different people. At SNHU, “evaluators” hold advanced degrees and are there to provide quick, personalized, and anonymous feedback (kind of like a program committee) on student work. This setup minimizes biases and other barriers (especially if the student’s identity is also anonymous to the assessor). However, faculty can feel vulnerable if it seems like this approach is reducing their centrality to higher education.
In community colleges and other settings, including Northern Arizona University, scaling CBE was very difficult for various reasons. Necessary administrative work (financial aid is usually based on credit-hours or full-time status; enrollment management, including assigning and resolving incomplete grades; etc.) had to be done by hand. Explaining CBE was also challenging: because it focuses on assessment, they had to explain that they were not “watering down” the quality of instruction, just basing the outcome metric on something other than seat time.
There’s a whole underlying question here (to me) about whether degrees are even the right thing to discuss. If the goal of (certain sectors of) higher ed is to get you into the job market, why wouldn’t stackable skills-attesting certificates be just as good as degrees? Part of LeBlanc’s answer is that many companies still use degree-holding as a de facto placeholder/requirement for certain opportunities (presumably because it used to be a useful signal). But notably, SNHU has acquired at least one software bootcamp and beefed up the bootcamp program to be “college level” (whatever that means), meaning that those credits could be counted towards the already-competency-based SNHU degree.