Short Answer: You can’t.
Long Answer: In my reflections on this week’s class readings, I’ve noticed that defenses for traditional undergraduate grading share one thing in common: an appeal to diagnosis. In other words, defenders argue that it’s important to measure how well students are learning and teachers are teaching.
Yeah, a couple of questions there.
First, according to whose definition of well? Faith in “evidence-based” standards for grading assumes criteria for success that must remain unchallenged. In other words, advocates for this model assume a near-universal ideal of objectivity on the part of decision makers. That position poses problems, to put it mildly.
Second, assume I agree it’s important to measure how well a student is learning or how well a teacher is teaching. How does the current grading system accomplish that? It basically falls to an instructor to assign an arbitrary ranking based on some constructed criteria, without any justification behind it. It offers no context or information to either the teacher or the student beyond “get better or face consequences.” That didn’t work for my efforts to improve my handwriting, it didn’t work for dealing with my depression and anxiety, and it sure as sugar didn’t work for my geometry class in high school. I doubt, not without reason, that it fares much better for others.
That leads to the fundamental issue I have with traditional grading models: their purpose (the main argument in their defense) and their function fail to connect. One is diagnostic while the other is prescriptive. As one of my favorite professors is fond of saying, it’s “putting the cart before the horse” (illustration below).
Let me explain. A diagnostic tells you how well what you’re doing lines up with what you want or need to accomplish. Its merit lies in the indicators it offers for what is lacking; it mainly deals with what’s happening or what’s already happened. A prescriptive, by contrast, deals with methods and rules to address issues. In this context, grading is designed around a diagnostic ideal but functions as a prescriptive indicator, or rank. If your rank is low, find out what you’re doing wrong, because the grade sure won’t tell you. If your rank is high, you don’t need feedback; you’re doing just fine. Seems backwards, doesn’t it?
All that leads to my third question: what alternatives are there? To be brief: lots. There’s the option to use portfolios with comments and feedback instead of a numeric ranking. There’s the option to negotiate standards and rubrics while minimizing ranking. There’s the option to forgo ranks altogether and focus on a seminar model. Until educators, students, and administrators alike come together to challenge the status quo, though, these methods will likely function as stopgap measures at best. Still, as the saying goes: “Start where you are, use what you have, do what you can.” The rest will come from our joint struggle.