Rubrics for Learning Menus
What would a good rubric for a learning menu look like?
The first thing to remember is that there is no perfect rubric. A rubric will always need to be adjusted as criteria evolve, lessons change, and new uncertainties appear. For instance, when students score well on the rubric but haven’t necessarily learned what you thought they should, revise the rubric so that scoring well requires genuine comprehension of the subject. On the other hand, when students learn what they should but the structure of the rubric doesn’t reflect it in the scoring, alter the rubric to better convey student understanding.
Because of this concern, we rarely use the full version of a commercial rubric from a textbook publisher or even one created over the summer by a committee in our school district or state. It’s fine to use portions of these rubrics prepared by others; but to be effective, we should augment those descriptors with situation-specific needs for our own classrooms.
Learning menus can come in different formats, such as tic-tac-toe boards, restaurant-like menus, multiple-choice grids, matrices, tiered sections of increasing challenge, and RAFTS, in which students choose one element from each category to make their own assignment: Roles, Audiences, Formats, Topics (or Time Periods), Strong Adverb/Adjective to set the tone. Because there are so many different forms, it’s difficult, if not impossible, to identify a perfect rubric for all learning menus. However, there are some universal elements that must be in place.
#1: Universal Evidence of Content
No matter which product or medium a student uses to express their learning, we seek the same, universal evidence of content in our rubric. For example, if students have a choice as to how they will represent their knowledge of a topic by generating a website, board game, podcast, short play, political cartoon, or debate, we will use the same rubric for all of them. It will include statements common to all products, like “Attention to Craftsmanship,” “Accurate Information Used,” “Relevant Information Used,” “Made Connections Between Topic and Historical Era,” and “Goals Clearly Communicated.”
By using the same evidence-based rubric for all products, we are more efficient in our grading (it takes less time to grade a class set), and our students can focus on creating quality products.
#2: Criterion-Referenced Descriptors
Rubric descriptors need to be criterion-referenced. We need to strip our rubric drafts of anything that is norm-referenced, such as “better than most,” “above average,” “average,” or “below average.” These statements of how students perform compared to other students are not helpful.
Rubrics are meant to be standards-based, which means they are focused on specific evidence of the standards in play. We want to know if the student can analyze political rhetoric of the Executive Office, identify the economic causes of the Civil War, explain the Homestead Act, or trace the repercussions of President Wilson’s 1918 Fourteen Points speech over time to today.
To assess these, all descriptors should focus on the degree to which the student can demonstrate evidence of mastery of content and skills, not how they perform in relation to their peers.
#3: Domain Consistency
Each level on the rubric must speak to the same domain(s) of knowledge or ability. If we reference a degree of information relevance on one level, for example, we have to speak to a different degree of information relevance in each of the other levels. We can’t focus on effort in one level and proficiency in resource citation in another. These represent different domains.
#4: Low Number of Performance Levels
The fewer the levels on our rubric, the higher the inter-rater reliability. This means that a 2.0 score in my class will indicate the same level of competency as a 2.0 score in another teacher’s class using the same rubric. If we have six, seven, eight, nine, or 10 levels of performance, inconsistent and subjective judgments are more likely. Keep the number of scores to three or four, if possible.
#5: Well-Defined Descriptors
Rubric scores are arbitrary symbols that serve as shorthand for fleshed-out descriptors. They are not percentages. For example, we can use a star, a pi symbol, or an ampersand to indicate excellence, as long as the rubric descriptor for excellence is listed next to the symbol.
When using numbers, however, teachers and students often make the mistake of thinking that the score listed is out of a total possible, as if a 3.0 on a 4.0 scale were a 75%, which in my district is a C grade. This is not how rubrics work. In most schools, a 3.0 on a 4.0 scale would be closer to a B grade. In rubric design we choose a symbol, then write a descriptor for it, and the grade or score is determined by the degree to which the student’s work matches that description.
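For readers who like to see the arithmetic, here is a small sketch contrasting the mistaken percentage conversion with a descriptor-based mapping. The descriptors and grade equivalents below are invented for illustration, not taken from any real district’s scale:

```python
# Hypothetical illustration: why a 3.0 on a 4.0 rubric scale is not 75%.

def as_percentage(score, scale=4.0):
    """The mistaken reading: treat the rubric score as points earned."""
    return 100 * score / scale

# Descriptor-based mapping: each score stands for a description of quality,
# and the grade comes from matching work to that description.
# (Descriptors and grades here are invented for illustration.)
descriptor_grades = {
    4.0: ("Excellent: thorough, accurate, clearly communicated", "A"),
    3.0: ("Good: accurate, with minor gaps in support", "B"),
    2.0: ("Fair: partially accurate, uneven support", "C"),
    1.0: ("Poor: major inaccuracies or missing evidence", "D"),
}

print(as_percentage(3.0))          # 75.0 -- reads as a C in many gradebooks
print(descriptor_grades[3.0][1])   # B -- what the descriptor actually means
```

The point of the sketch is only that the same symbol, 3.0, yields two different grades depending on whether we treat it as points earned or as shorthand for a descriptor.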
#6: Consistent Wording
We need to use the same part of speech in identifying the key elements. Make them all adverbs, adjectives, nouns, or verbs, not a mix of these. Rubrics are about communication, and using a consistent part of speech really helps. Excellent-Good-Fair-Poor (all adjectives) is much clearer than Top-Usually-Simplistic-Zero (a noun, an adverb, an adjective, and a noun).
#7: Analytical vs. Holistic Rubrics
While analytical rubrics (see below) offer more detailed feedback, they take longer to use when grading students’ work. Holistic rubrics may be faster, but using them increases the chance that students misinterpret our feedback, as well as the “waffling” we do when deciding between two scores to apply to a student’s work.
Analytical rubrics are those that include more than one category or domain being assessed, and there is a separate rubric score for each one. For example, for a history project, we might have a separate 4, 3, 2, 1, 0 score for each of the following:
- Proper Citation Format
- Sufficient Evidence for Each Claim
- Understands How Physical Geography Affected the Economy
- Demonstrates Clear Understanding of Colonists’ Point of View
Holistic rubrics combine all these elements into one paragraph-like or bulleted descriptor, and the score is based on the degree to which the student accomplishes all of them collectively.
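As a loose analogy, the two rubric types can be sketched as simple data structures. The category names come from the list above; the particular scores are invented for illustration:

```python
# Hypothetical sketch of analytical vs. holistic scoring on a 0-4 scale.

# Analytical rubric: one score per assessed domain.
analytical_scores = {
    "Proper Citation Format": 3,
    "Sufficient Evidence for Each Claim": 4,
    "Understands How Physical Geography Affected the Economy": 2,
    "Demonstrates Clear Understanding of Colonists' Point of View": 3,
}

# Holistic rubric: the grader reads one combined descriptor and assigns
# a single score to the whole product.
holistic_score = 3

# An analytical rubric can still be collapsed to one overall number,
# but it preserves the per-domain feedback along the way.
overall = sum(analytical_scores.values()) / len(analytical_scores)
print(overall)         # 3.0
print(holistic_score)  # 3
```

The trade-off in the text shows up here: both approaches can land on the same overall number, but only the analytical version tells the student which domain earned the 2.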
Of course, for each wiggle word or phrase like “proper” and “demonstrates clear understanding,” we have to provide samples of what is and is not acceptable. Three quick reminders:
- Ask students to apply the rubric against which their work will be assessed to samples you provide. When analyzing others’ work with the rubric, students think about their own efforts and will make adjustments to their own work as a result.
- Always test-drive a rubric with a few student samples before using it with the whole class. It’s particularly helpful to ask a colleague to use the rubric to analyze student work and see if he or she can create an accurate assessment of students’ performance against the standards when using it. This will help you catch mistakes.
- When students design their own rubrics after analyzing exemplars provided in class, they internalize the evaluative criteria and refer to those criteria “real-time,” or as they work. Students achieve more, and it shows in their products, when they use the rubric as an internal editor while they work.
Descriptive feedback, not just any feedback, is key to students’ success. Rubrics are powerful vehicles for such helpful insight, and almost everything we teach in history, social studies, geography, and government can be “rubricized” for students. With practice it gets easier, and wow, it is completely worth it.