The faculty at a school I’m familiar with recently voted to change their scale for reporting final grades. They went from one that uses just letters (A, B, C, D, F) to one that uses letters with pluses/minuses (A, A-, B+, B, B- . . .). They now have twelve gradations from “outstanding” to “failing.”
Why was this done? The change was made for the sake of accuracy and motivation. It was argued that students who earn all “low A’s” (for instance) should not receive the same GPA as those who earn all “high A’s” because that would not accurately reflect their performance. And, according to the anecdotal evidence offered, students may work harder at the end of the semester when they want to earn the plus or avoid the minus.
But what does the scholarship on teaching and learning say about this?
The first and foremost problem is that none of those supporting the measure referenced or (apparently) even considered the scholarship (even though “measurement experts” were invoked to certify that the change would not worsen “grade inflation”). This situation illustrates how not reading the scholarship on teaching and learning is the norm in higher education.
While I am not aware of a particular study on whether pluses/minuses for final grades are a good idea, the move toward increasing differentiation goes against several established approaches and paradigms in the scholarship.
We know that deep learning comes about most effectively through complex and open-ended learning tasks, such as collaborative learning, active learning, problem-based learning, writing, and so forth—i.e., tasks that cannot be graded precisely. The demand for reporting more precise grades disregards this insight in at least two ways.
On one hand, the demand for reporting more precise grades may put pressure on teachers to use approaches that are less effective but more gradable. What’s the difference between B and B- for students taking a fifty-question objective-type exam? That’s easy: one wrong answer, regardless of whether that one answer really indicates an actual difference in how much the students know and regardless of whether the exam even helps them learn.
On the other hand, the demand for reporting more precise grades may put pressure on teachers—those who still use the more open-ended and complex approaches—to give grades that claim greater precision than can reasonably be claimed under the normal conditions of grading. What’s the difference between B and B- for students presenting their research and answering questions? Not so easy. Their complexity of thought? The number of times they say “um”? The teacher’s gut feeling about it? The decision will need to be made, regardless of how consistent, reliable, meaningful, fair—or accurate—it will be at that level of differentiation.
But what about motivation? The research on teaching and learning has a lot to say about this. On her page on student motivation, Karin Kirk presents a clear, short, well-referenced summary of the key insights. She also offers annotations of key articles and books and links to further resources. Spoiler alert: grades are not the answer. In fact, it’s quite the opposite. As Ken Bain explains in no uncertain terms, the most effective college teachers “don’t use grades to motivate students” (What the Best College Teachers Do, p. 161).
Introducing more precise final grades serves to emphasize grades even more than they are already emphasized.
How often do college and university teachers make counterproductive decisions that could be avoided if they would read the scholarship on teaching and learning in higher education?