I mean, getting an A always gave me a charge, a sense of validation. But getting it from a test? It meant I could play The Game, and, brother, that game ended when I got that last piece of paper. Now people insist on seeing what I can do, or at least an eyewitness account (a.k.a. references) that I am what I appear to be on paper. They'll take my reflections, artifacts of my accomplishments, and video of me in action to justify giving me a pay raise, but that test was really kind of a garnish on the whole affair.
No, tests don't mean much in the real world, but I'll tell you what does: performance. Demonstration of your abilities in context. That's why I make my kids put together portfolios, to show exactly what they can do. But there's only so much a portfolio can show as far as what you can produce on demand, without constant teacher intervention and revision.
That's where Integrated Performance Assessments (IPAs) and the ACTFL Assessment of Performance toward Proficiency in Languages (AAPPL) come in. Together they're a way to see what a kid can do in action, on demand, and a way to communicate how well they do it.
IPA and PBL
Now I've been thinking about how to fit IPAs in with my Project-Based Learning since LangCamp this past summer. The IPA takes a theme--just like the one that ties together a PBL unit and serves as a basis for the Driving Question--and builds communicative skills from interpretive to interpersonal to presentational. It works pretty much the way the PBL process does, moving through inquiry, collaboration, and presentation. What I've been missing is the distilled, spontaneous form of the assessment for the different modes. Oh, sure, kiddos have been collecting evidence and stockpiling it portfolio style, but it has largely been heavily scaffolded. I need to get kids to the stage where they can produce language without my sentence starters and scripted storyasking and interpersonal playbooks. If I can't get them there, we haven't practiced the skills enough. If they can do it, they need a chance to prove it in class.
My badges, I confess, have been kind of arbitrarily awarded. The rubric has been consistent, and the standards carefully considered, but they are not necessarily reflective of true proficiency, largely because they've been applied to performances that have been heavily scaffolded. Also, they've been based on a point system that may reflect incomplete mastery of certain skills: maybe you answer all in single words, but by golly you pronounced them right and had a bunch of examples, so you get a badge for earning 85% on Novice Mid Interpersonal.
By applying the AAPPL scoring descriptors to the IPAs aligned with the unit project, students will have a representation of their overall proficiency level--what they can produce anytime, anywhere, rather than what they can do after I've coached them through step-by-step. Not only that, but they'll have a recommended strategy for how to get to the next step! That way their badges will represent actual skills rather than random hoops.
Report Card Implications
This means that the 65% category my district makes me set aside for "tests" will be reserved for IPAs instead of portfolios next semester (though portfolio curation will still fall under "quizzes"). There will be three to four of these grades each six weeks, one for each mode of communication, possibly with two different interpretive grades to get the context good and solid.
This means that what an "A" is will change throughout the semester as proficiency expectations increase (kind of like JCPS does...but not so dang TOUGH), maybe something like this for Spanish I interpretation:
Check out the AAPPL Score Descriptions for Interpretive Reading/Listening.
[Image: Australian Shepherd agility, by Pharaoh Hound (edit of Australian Shepherd agility Flickr.jpg), GFDL/CC-BY-SA-3.0/CC-BY-2.5, via Wikimedia Commons]