Admittedly, it sounds a little cheesy, but three times a year when I administer a benchmark in my ELA classroom, I tell my students that we are simply taking their temperature in ELA. The analogy works.
When administered correctly, benchmarking gives teachers AND students the ability to see where they are succeeding and where they are struggling in a no-pressure setting on a standards-based test. Benchmarking should serve the following purposes in our classrooms:
Set standards-based, data-driven goals for students and for your classroom instruction
Connect standardized testing to meaningful instruction
Teach strategies that help students to improve
I administer a benchmark to my ELA students once in the fall, once mid-year, and then I use our state exam (New York) as their third and final benchmark. Here, I'll share my process for using benchmarking to inform my teaching and my students' learning.
1. Use the previous year's state exam for benchmarking
Every year, New York State releases most of the questions, answers, and scoring guides for the previous year's state exam. Along with the exam questions and answers, New York State also releases data on how students across the state performed on each question.
This information can be incredibly useful to your students and to you. Using released questions allows teachers and students to identify the exact standards being measured. It also allows us to compare our own performance to the performance of students across the state.
To prepare our benchmark, I read through the entire test and choose one literary passage and one informational passage to give to my students for the reading comprehension portion of their benchmark. Each passage is usually followed by about 7 questions, totaling 14. We can complete the reading comprehension portion of the test in one class period.
I also choose two paired passages with short-answer questions to give to my students on a second day. This benchmark allows my students and me to measure their reading and writing skills. Once again, New York State provides the rubrics and model answers at the different levels of mastery, which I can use to score my students' responses and which we can later use to reflect as a class on what we did well and what we need to improve in our short answer writing.
Obviously, this is a very condensed version of the actual state exam. I will not get the FULL picture of where my students are with multiple questions measuring every single standard, but I will accomplish our goal of taking our ELA temperature and gathering information that will help to propel us forward in our learning.
2. Do not count the benchmark as a grade
Why would a student take a benchmark seriously if it's not graded? I get this question a lot. However, each year, my students take the benchmark with fidelity, and they do their very best.
Before giving the benchmark, I make sure that I am completely transparent with my students. I usually give them a talk similar to this:
"Today, you will take a benchmark exam that will not count as a grade in our grade book. It will be graded, but for a greater purpose: to give us insight into our strengths and the areas we need to improve. This benchmark is like taking your 'ELA temperature.' After the benchmark, we will examine our results together to see what we did well and what we need to improve. We will set goals based on our performance, and your performance will tell me what I need to focus on teaching for the next part of the school year.
It is essential that we get accurate information. My motto for each benchmark: don't stress. Do your best."
On the board, I display the bullet points that I shared in the introduction to this blog post. I review them with my students, because it is essential that they know that the work they put in matters well beyond a grade.
3. Complete a data analysis
Once my students have completed the test, I grade it in two phases. First, I grade my students' short answers (teacher hack: I give the reading and writing test first and grade it while students take the reading comprehension/multiple-choice portion). Second, I run students' multiple-choice scantrons through the scantron machine at our school, which also has an item analysis function (see if yours does--it's life-changing!).
For the first phase, I use the New York State rubric and model answers at each level to score my students' short answers. We use a structure for our writing called RIPPS (restate, inference, proof, proof, summarize), so I label those parts in each student's answer. Most often, if a student is missing a part of RIPPS, it's really hard to earn a 2 (the highest score possible), so students are able to use my labeling as feedback for what they need to add to their answer to improve.
For the second phase, I compare our item analysis on the multiple-choice portion of the benchmark to students' performance across New York State. Our scantron machine tallies the number of students who answered each multiple-choice question incorrectly. To compare my numbers to the New York State data, which reports the percentage of students who answered correctly, I have to do a little math, but it's worth the valuable information that I can only get from giving my students this benchmark.
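For anyone curious what that "little math" looks like, here is a minimal sketch in Python. All of the numbers (class size, missed-question counts, state percentages) are made up for illustration; it simply converts a tally of incorrect answers into a percent-correct figure and shows the gap between a class and the statewide data for each item.

```python
# Hypothetical example: all numbers below are invented for illustration.
class_size = 24

# Number of students in the class who missed each question (from the scantron tally)
missed = {1: 3, 2: 10, 3: 6}

# Percent of students statewide who answered each question correctly (released state data)
state_pct_correct = {1: 81, 2: 55, 3: 74}

results = {}
for item, num_missed in missed.items():
    # Convert "how many missed it" into "what percent got it right"
    class_pct = round((class_size - num_missed) / class_size * 100)
    diff = class_pct - state_pct_correct[item]
    results[item] = (class_pct, diff)
    print(f"Q{item}: class {class_pct}% correct, state {state_pct_correct[item]}% "
          f"({'+' if diff >= 0 else ''}{diff} points)")
```

With these made-up numbers, the comparison would flag question 2 as one where both the class and the state struggled, which is exactly the kind of item worth revisiting in class.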