NO MORE BAD RUBRICS!
It was me and two other English teachers. We about had a knock-down-drag-out with a teaching guru visiting our district who told us in no uncertain terms:
RUBRICS ARE BAD.
We were told that rubrics were slayers of creativity, that they stunted our students, and that, moreover, they were for lazy teachers who just wanted to finish grading as quickly as possible.
I may be exaggerating the tone a touch, but my English homies and I were not going down without a fight. It’s not that we disagreed about wanting to grade faster (we do), or that we denied rubrics might cap students’ desire to push their own boundaries (they might). It’s that we were 100% convinced that our students NEEDED rubrics to understand what was EXPECTED of them.
Teenagers are not mind readers, after all.
Our point was that we would indeed be lazy teachers if we could not–or DID not–define what successful task completion looked like for our students. It was pure anathema that Guru Dude would suggest we be so vague and wishy-washy with our students: just tell them to impress us, then sit back and watch them blow our minds. I knew for a fact that that was not how MY students would react to such instructions. In fact, as another guru guy, Sr. Burgess, reminded us at our local conference this past weekend, there is freedom in framework. We all need a place to start at the very least, right?
Dr. Hill described two types of descriptors that sum up everything wrong with bad rubrics:
- Deficiency Descriptors and
- Empty Descriptors
My contention is that single-point rubrics are the solution to both of these blemishes on rubrics’ reputation AND Guru Guy’s insistence on pushing our students beyond our own expectations.
Deficiency Descriptors
Tell me I’m not the only one.
I describe EXACTLY what I ideally want to see in each category I’m scoring, then I copy and paste it into the next column and change one word or one number to designate the separate levels. I know I did it back when I was using ForAllRubrics (actually, columns just said “nearly,” “consistently,” and “emerging”).
Dr. Hill makes the point that these are great for US to ASSESS, but they don’t do much to help our students hit their goals. They explain why we mark them down, but what do they do for students who are trying NOT to get marked down? Do they actually convey the clear expectations my English amigos and I so vociferously championed? Or do they kind of enable us to be a little lazy as graders without actually helping the kiddos read our minds?
Empty Descriptors
Empty descriptors count up problems without ever describing what quality looks like. But kids don’t know what they don’t know, so they CAN’T count the problems! They can only edit as carefully as their training and retention allow and then hope for the best while we count what’s wrong with their work AFTER they’ve turned it in to us. I mean sure, we could close the feedback loop, but the rubrics are still not assisting the learning, as we insisted they must.
Single-Point Solution
Not only does our district bring in all kinds of guru guys to keep us in the know (George Couros next month, anyone??), but we have our own stable of gurus among our Instructional Facilitators. So while the more boisterous among us were having it out with Guru #1, my facilitator amigo Chris was googling in the back to send me a solution that would
- Communicate clear expectations to students without points or empty deficiency descriptors getting in the way and
- Encourage the kiddos to surprise us.
That solution was the single-point rubric. Since then, I’ve built single-point rubrics for:
- Senior Project products
- Senior Project presentations
- Plan for Change presentations
- Spanish portfolios
- Mejor yo videos
- Amigos animales dog show booths
You may notice that I had to add points to calculate scores on one rubric for my principal’s benefit. Basically if all of my expectations are met, students get a B (I don’t care if it’s grade inflation, so there). If they do something that goes beyond my expectations (hint: just changing the theme or colors in a presentation is still pretty expected), I explain briefly what impressed me, then a 10 gets averaged in with the 8s. If, however, my expectations are NOT met, I explain what more I need, and average in anything from a 0 to a 6.
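For the spreadsheet-inclined, the scoring scheme above can be sketched in a few lines. This is just my own illustration of the averaging described (the function and names here are hypothetical, not from any official gradebook tool): each met expectation counts as an 8, anything beyond expectations as a 10, and unmet expectations as 0–6.

```python
def rubric_score(marks):
    """Average per-expectation marks into one rubric score.

    8  = expectation met (a solid B)
    10 = expectations exceeded
    0-6 = expectation not yet met
    """
    return sum(marks) / len(marks)

# Four expectations: three met, one that genuinely impressed me.
print(rubric_score([8, 8, 8, 10]))  # 8.5

# Four expectations: three met, one still needing more work.
print(rubric_score([8, 8, 8, 4]))  # 7.0
```

So a student who meets everything averages a straight 8, and each surprise (or gap) nudges the score up or down from there.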
And voila! Grading is still quick, but this time it’s more targeted and personal! They can get feedback on what they personally need to do to improve!
I do want to point out that there are a few practices you will still want to put in place for maximum rubric efficacy here:
- Close the feedback loop – Sometimes I just set up a Google Form to have them basically parrot back to me what they need to work on–then I make sure they have time to work on it.
- Recursive opportunities for improvement – Even if they’re not doing the same thing with new portfolio artifacts next grading period, maybe they can take their presentation feedback and do some revision before they actually have an audience.
- Provide models – Some students may feel stuck on how to exceed expectations, but we wouldn’t want Guru Guy to think we’re stepping on their little creative spirits by spelling out what that is. However, I’ve found I get better results when I pick out examples of student work that SHOW the above-and-beyond factor, especially if we pause and compare it to the rubric itself.