Friday, January 9, 2015

Assessment's Missing Link: Success Criteria



There's a missing link in assessment. That's why everyone is talking about it. We have standards. We have assessments. But they aren't connecting, and they aren't helping students. Many of us are missing a step: we don't have success criteria.

Success criteria go by many names - mastery levels, student-level objectives, and so on - but much of what carries those names does not match the actual definition. Some of you are thinking: but we have rubrics, interim exams, and test banks. We have common objectives. We have mastery targets (80% of the questions). You may have all of these things, but they are components of testing, not success criteria.

Success criteria are defined levels of performance that articulate what student mastery actually looks like. They answer the question, "How do we know that students have mastered the objective?" Oftentimes, when I ask that question, teachers begin to tell me how they will test students (bellringers, discussions) or how many answers students should get right (they can answer 6 of the 10 questions).

This is the catch: success criteria are created before any actual assessment tool (rubrics, exams, projects, etc.) is created. This step is often overlooked when we talk about assessment.

Professional test developers:

1.  Define performance levels
2.  Sketch or blueprint assessments
3.  Create assessments

But in schools, we tend to simply create assessments. The most common error we make is equating rubrics or percentages correct with success criteria. Rubrics and percentages are used to evaluate products; success criteria describe what performance on the objective looks like.

Here's an example.

A teacher has decided that she is going to assign an essay (assessment tool) to her class. She is doing this to test how well her students can use supporting details to develop a claim (objective).

What is on her rubric? She has 5 categories: Main Idea, Supporting Details, Grammar, Neatness, and Outline. For each category, she creates 4 levels of description. Sounds great, right? Except that she has been teaching supporting details, not the other four categories.

Depending on how she writes the descriptions, she may or may not describe mastery. Additionally, a student can earn a grade that does not reflect mastery of supporting details, because that category is averaged with four others (a student could turn in a very neat paper, with an outline, a clear main idea, and great grammar, and STILL do well without ever mastering supporting details).

Let's look at the difference between a rubric and success criteria.  

Rubric Example:  

At a level 4 on the rubric, a student provides 6-8 supporting details for the main idea in each paragraph.  The details are clear and support the main idea.

At a level 3, a student provides 4-6 supporting details for the main idea in each paragraph. The details are generally clear and support the main idea, with one or two exceptions.

Success Criteria Example:

At a level 4, the student is able to provide explicit and inferential details to support the main idea. The student uses transition words and gives explanations that clearly articulate the relationship between the details and the main idea to create a logical text. The details are a mix of direct quotes, paraphrases, and the student's interpretation.

At a level 3, the student is able to provide explicit details to support the main idea.  The student mostly uses transition words and explanations that clearly articulate the relationship between the details and the main idea to create a logical text.  The details are a mix of direct quotes, paraphrases, and the student's interpretation.

As administrators, we really want to hear the success criteria, but we often get the rubric instead. Note that success criteria can be used in conjunction with a rubric. The rubric can adopt the success criteria as its descriptions, but the rubric CANNOT replace the success criteria.

Success criteria are about learning - they help both teachers and students identify gaps and possible next steps for instruction. Rubrics, multiple-choice questions, etc., ON THEIR OWN are very limited in their ability to do this because they are designed for specific testing events, not student learning; success criteria, by contrast, can be used continually regardless of activity, test, or context. Success criteria can also do something assessment tools by themselves cannot: guide the alignment of instruction, activities, and TEACHER FEEDBACK (what is outlined in the success criteria should be what you hear and see in classroom and assignment feedback) and keep classes from falling into the abyss of confusion (what were we learning today?).

If we take the time to create viable success criteria, they can be used to build a common understanding of standards implementation and student mastery across classrooms and disciplines, not just common testing.

This is not a quick and easy process. It isn't one or two sit-down meetings; it is meaningful work that develops over time from looking at student work and assessment results. But it is work worth doing, because everyone comes to understand what is supposed to be going on rather than relying on their individual interpretations.

NOTE: PARCC actually publishes its interpretations of the Common Core standards. In fact, all standardized exams provide their interpretations of standards (some may create their own standards), and these can be found on their websites in their test blueprint areas.

If you are interested in PARCC, I have written a blog about the particular site page that you may want to check out.  http://principalinstruction.blogspot.com/2014/12/the-mecca-of-parcc-assessment-page-you.html

Thanks for giving me a few minutes of your time.  Looking forward to your comments.
