28 November 2014

AAPPL Measures and IPAS

I think tests are stupid. A test can't tell you how well I do my job or live my life--or anything you really need to know about me. Now, I'm really good at taking tests, and if you were to look at the various test scores I've accumulated in my life, I daresay you'd be impressed with me. But do you know how much bearing the ACT, SAT, or GRE has had on any of my roles since college or grad school application time? Do you know how much impact that Issues in Teaching Foreign Languages or Masterpieces of Hispanic Art and Literature exam has had on me as a teacher, mother, wife, or friend--or even as a speaker of Spanish?


I mean, getting an A always gave me a charge, a sense of validation. But getting it from a test? It meant I could play The Game, and, brother, that game ended when I got that last piece of paper. Now people insist on seeing what I can do, or at least an eyewitness account (AKA references) that I am what I appear to be on paper. They'll take my reflections and artifacts of my accomplishments and video of me in action to justify giving me a pay raise, but that test was really kind of a garnish on the whole affair.

No, tests don't mean much in the real world, but I'll tell you what does: performance. Demonstration of your abilities in context. That's why I make my kids put together portfolios, to show exactly what they can do. But there's only so much a portfolio can show as far as what you can produce on demand, without constant teacher intervention and revision.

That's where Integrated Performance Assessments (IPAs) and the ACTFL Assessment of Performance toward Proficiency in Languages (AAPPL) come in. Together they're a way to see what a kid can do in action, on demand, and a way to communicate how well they do it.

Now I've been thinking about how to fit IPAs in with my Project-Based Learning since LangCamp this past summer. The IPA takes a theme--just like the one that ties together a PBL unit and serves as a basis for the Driving Question--and builds communicative skills from interpretive to interpersonal to presentational. It works pretty much how the PBL process works, moving through inquiry, collaboration, and presentation. What I've been missing is the distilled, spontaneous form of the assessment for the different modes. Oh, sure, kiddos have been collecting evidence and stockpiling it portfolio style, but it has largely been heavily scaffolded. I need to get kids to the stage where they can produce language without my sentence starters and scripted storyasking and interpersonal playbooks. If they can't, we haven't practiced the skills enough. If they can, they need a chance to prove it in class.

AAPPL Badges
My badges, I confess, have been kind of arbitrarily awarded. The rubric has been consistent, and the standards carefully considered, but they are not necessarily reflective of true proficiency, largely because they've been applied to performances that were heavily scaffolded. Also, they've been based on a point system that may mask incomplete mastery of certain skills: maybe you answer all in single words, but by golly you pronounced them right and had a bunch of examples, so you get a badge for earning 85% on Novice Mid Interpersonal.

By applying the AAPPL scoring descriptors to the IPAs aligned with the unit project, students will have a representation of their overall proficiency level, what they can produce anytime anywhere, rather than what they can do after I've coached them through step-by-step. Not only that, but they'll have a recommended strategy for how to get to the next step! That way their badges will represent actual skills rather than random hoops.

Report Card Implications
This means the 65% category my district makes me set aside for "tests" will be reserved for IPAs instead of portfolios next semester (though portfolio curation will still fall under "quizzes"). This means there will be three to four of these types of grades each six weeks, one for each mode of communication, possibly two different interpretive grades to get the context good and solid.

This means that what an "A" is will change throughout the semester as proficiency expectations increase (kind of like JCPS does...but not so dang TOUGH), maybe something like this for Spanish I interpretation:
Check out the AAPPL Score Descriptions for Interpretive Reading/Listening
Of course this also means that I'll have to be careful about how I space the IPA stages to effectively convey student progress, and that is going to take some practice.

Australian Shepherd agility By Pharaoh Hound (Edit of Australian Shepherd agility Flickr.jpg) [GFDL (http://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/) or CC-BY-2.5 (http://creativecommons.org/licenses/by/2.5)], via Wikimedia Commons


  1. I am being tasked at my school with producing reading prompts that we can then use to test a student's level of proficiency in the interpretive (reading) mode. I am having a difficult time finding appropriate material for this Spanish 2 reading prompt. To further complicate things, their definition of "authentic" is "by Spanish speakers for Spanish speakers," the narrowest definition there is. Only a very few of my students are native Spanish speakers...please give me your thoughts...

    1. I actually have several (carefully selected) reading assessments based on authentic texts I've used in Spanish 1 and 2 linked here: http://www.pblinthetl.com/p/pbl-ipas.html

    2. This is very helpful, thank you. I question, however, what it is I am being tasked with. If I am being tasked with measuring reading proficiency (interpretive), why would I want to use more graphs/visual displays? I understand it is helpful to the learner's comprehension and helpful to the reader, but I don't feel it adequately assesses their ability to comprehend language. As a Spanish teacher, I could probably give my students that same image that you shared with me (Spanish 1 link) of the person in GERMAN and they would use hints/clues to make a determination and probably not be far off. When we do this, I don't feel we are measuring comprehension accurately. It's for the same reason that when I give questions to check comprehension, I usually give them in English, so that students cannot "de-code" from the reading that was assigned by matching similar vocab from the questions with words in the assigned reading. Please give me your thoughts, and thank you for your insight..JD

    3. Good question! It has to do with level-appropriate accommodations: you will notice that N1-N4 at least all reference "visual cues" as part of the interpretation. I think you're right about German, and I think truly, with interpretive reading, we're all at LEAST Novice Low in Romance and Germanic languages. Being able to pick out words is not a big deal. It's the actual SENTENCES that go ALONG with the graphics that I'm looking for, to get them to a solid N3 or beyond. I've actually graduated to a more rigorous scale where N1 is not passing by the end of the semester, but I DO still want to give them credit for using ANY Spanish when they're starting out so that they have something to build on. You see, the graphics alone can't get them above an N2, and realistically, at least half of my students are making it to intermediate in interpretive skills by the end of the first semester, and they HAVE to give more than what they could figure out in German to do that.

  2. Oh no! That AAPPL/ACTFL link no longer works. :(

    1. This one should: http://aappl.actfl.org/scores (the www doesn't work, I know).