Hacking Standardized Test Results
The Problem:
Schools and parents are starting to see their standardized test results roll in. For individual teachers and students, the lauding or damning begins. It’s all about accountability, right? It’s all about systemic improvement, right?
Right.
The Hack:
If test results are not specifically being used to improve student learning, then they serve policy rather than teaching and learning. Any other purpose, including teacher evaluation, school ranking, and teacher efficacy, belongs to those policy decisions and is not necessarily supportive of improving student learning. If we can separate the data itself from the ways it is used for purposes unrelated to learning, then we can analyze what is worth analyzing for the sake of instructional programs and real student impact.
This separation must also include any biases we might have about the assessment. A standardized test is only good for what it was designed to do, and usually that design takes into account a large population of potential test takers. It doesn't mean the data are useless, nor does it mean that other assessments will be useful in determining student proficiencies.
Specifically, we can drill into data reports and look for trends that will enhance our curriculum data (units, lessons, etc.). Schools need both curriculum data and assessment data in alignment in order to have what Bena Kallick and Jeff Colosimo call a “Data Informed Culture.”
What You Can Do Tomorrow:
Analyze the standard.
Look at your students’ performance on missed questions. Pore over any documents released by your state or test designers to better understand what each question is asking. Were the answers close to correct? Test designers will often provide a distractor analysis. Did students misinterpret what the question was asking? Use released test maps and documentation to compare the assessed standards to your taught standards to make sure all of the discrete skills are being taught. Standards are checkpoints, and some are made up of multiple skills a student must demonstrate proficiency in. The assessment may be a telling reminder that some skills are well covered and that others might need more attention. This is especially true for teachers who did not design their own curriculum but instead rely on a vendor-purchased or downloaded curriculum that they do not subtract from or add to based on their knowledge of their students.
Limit your action plan.
If your state or test designer didn’t publicly release test maps, work with your district data leaders to track them down. Look for versions of data reports that tell you how often a standard has been assessed over several years. If the current year’s assessment is the only time a particular standard has been addressed in the last few years, then it is not necessarily a priority for this coming year’s planning. Your energy is better spent on standards that are assessed every year or most years.
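The tallying described above can be sketched in a few lines of Python. The test-map data here is invented for illustration (the standard codes and years are assumptions, not real reporting); the idea is simply to count how many years each standard appears and keep the frequently assessed ones.

```python
# A minimal sketch, assuming hypothetical test-map data: each year maps to
# the list of standards that year's assessment covered.
from collections import Counter

test_maps = {
    2014: ["RL.5.1", "RL.5.2", "RI.5.4", "RL.5.7"],
    2015: ["RL.5.1", "RI.5.4", "RL.5.3"],
    2016: ["RL.5.1", "RL.5.2", "RI.5.4"],
}

# Tally how many years each standard was assessed.
frequency = Counter(std for standards in test_maps.values() for std in standards)

# Keep standards assessed in every year or all but one year.
priorities = [std for std, count in frequency.items() if count >= len(test_maps) - 1]
print(sorted(priorities))  # → ['RI.5.4', 'RL.5.1', 'RL.5.2']
```

A standard that appears only once in several years would fall below the threshold and drop out of the action plan, exactly as the paragraph above suggests.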
Look for thinking.
I’m asking you to think back to your college days here. Go back to Bloom’s Taxonomy and Webb’s Depth of Knowledge levels. All standards, Common Core or otherwise, can be boiled down to one basic throughline: increase thinking. When you look at your assessments, standardized and benchmark/quarterly/summative, what do you notice about multiple levels of thinking? This might be evident through questions that assess the same standard, particularly if there are different question types. There may be anomalies suggesting that students do well on lower-level questions but not on higher-level ones. If students are being asked to demonstrate high levels of thinking, particularly across multiple assessments, then those same high levels of thinking should be represented in instruction and in the resources used for instruction. If there is a mismatch in thinking levels between instruction and assessment, then there will be a mismatch in performance: if students have to evaluate on the assessment but the instruction only addressed description, the assessment data will likely show the disparity.
Intervene logically.
Make sure students TRULY need scaffolds and interventions related to the data. If the standardized test is the only metric used to determine extra help or interventions, be careful. Think about all the variables involved, and look for other data to inform the decision: benchmark or quarterly assessments, formative data, past years’ performance, and intuition. If the assessment includes questions that students faltered on but that contained questionable distractors or multi-step constructed responses, consider how close the student’s score was to the proficiency cutoff. If the gap is statistically insignificant (read: close), then it is likely that no intervention is necessary. If a student’s score is very low and it is hard to determine where their specific improvement areas lie, then additional assessments, metrics, and data will be useful in targeting a specific improvement plan. Focus on student deficits the way doctors focus on symptoms: one symptom doesn’t give much information for a diagnosis, but multiple symptoms taken together paint a picture of what the action plan will be.
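The triage logic above can be made concrete with a short sketch. Everything here is an assumption for illustration: the cutoff, the "close" margin, the field names, and the idea of corroborating with a benchmark average are stand-ins for whatever your own reporting provides.

```python
# A minimal sketch of intervention triage; cutoffs and fields are invented.
PROFICIENT_CUTOFF = 70
CLOSE_MARGIN = 3  # within this many points of the cutoff, treat as "close"

def triage(student):
    """Suggest a rough next step from one test score plus other measures."""
    gap = PROFICIENT_CUTOFF - student["test_score"]
    if gap <= 0:
        return "proficient"
    if gap <= CLOSE_MARGIN:
        return "monitor"  # statistically close; likely no intervention needed
    # Low score: corroborate with other data before planning an intervention.
    if student["benchmark_avg"] < PROFICIENT_CUTOFF:
        return "gather more data and target an improvement plan"
    return "investigate the mismatch between assessments"

print(triage({"test_score": 68, "benchmark_avg": 75}))  # → monitor
```

Note how a single low score never triggers an intervention by itself; like the doctor's symptoms, it only points somewhere once other measures agree.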
Analyze other assessment data.
Check your summative/benchmark/quarterly assessments for their alignment both to the standards featured in the standardized test and to the ones you are responsible for. If you want to map out an assessment for the purpose of comparative analysis, you could use THIS TOOL. Mapping an assessment for question type, standards alignment, and thinking level is a worthwhile exercise for discovering the degree of parallelism between assessments, i.e., how closely they align to each other in scope, coverage, and knowledge demands. This is an important step in aligning curriculum data and assessment data. If we truly want to reach the goal of a data-informed culture, then it’s worth our time to consider how deeply aligned our assessments are.
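A parallelism check like the one described above can be sketched as a simple coverage comparison. The question data below is invented (the standards, DOK levels, and question types are assumptions), but the pattern is the point: map each question to its standard and thinking level, then see which pairs one assessment reaches that the other never does.

```python
# A minimal sketch of comparing two assessment maps; data is illustrative.
state_test = [
    {"standard": "RL.5.1", "dok": 2, "type": "multiple choice"},
    {"standard": "RL.5.2", "dok": 3, "type": "constructed response"},
]
benchmark = [
    {"standard": "RL.5.1", "dok": 1, "type": "multiple choice"},
    {"standard": "RI.5.4", "dok": 2, "type": "multiple choice"},
]

def coverage(assessment):
    """Summarize which (standard, DOK level) pairs an assessment touches."""
    return {(q["standard"], q["dok"]) for q in assessment}

# Pairs the state test assesses that the benchmark never reaches.
gaps = coverage(state_test) - coverage(benchmark)
print(sorted(gaps))  # → [('RL.5.1', 2), ('RL.5.2', 3)]
```

In this made-up example the benchmark touches RL.5.1 only at a lower thinking level, so the mismatch in thinking levels from the previous section would show up here as a gap.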
Double down on reading.
I can’t repeat this enough. Runners need to run to improve. Readers need to read to improve. The most important thing we can do to improve overall student performance is to give them ample time to read at their instructional level during the school day. The more they read, the more they know. The more they know, the more access they have to difficult texts or multi-step math problems. The more access they have, the higher the probability that they will successfully solve problems and answer questions. Independent reading is a gift. Give them that gift at school.
Continued Data Meetings:
To maintain high levels of alignment between curriculum data and assessment data, continue to discuss them throughout the year, where the conversation can be about current assessments rather than just the summative standardized one. Continue to look for trends in the data, both as a class and in terms of individual student performance, and look for links back to the documented curriculum. Look to grade-level standards below and above the grade you teach to inform your knowledge of how proficient a student is with skills that get more sophisticated over time. Look at questions all students did poorly on: is there an easy fix or misconception that can be addressed in follow-up instruction? Do students falter on specific question types, such as constructed-response questions (which are also higher-level questions, as students are asked to “put it all together” for a proficient response)? In short, don’t let data be a once-a-year conversation directed toward performance on one test.
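Spotting class-wide trouble spots, as suggested above, is easy to sketch. The response data here is invented (five students, three questions, 1 for correct and 0 for incorrect), and the threshold is an assumption; the point is simply to surface questions where most of the class missed.

```python
# A minimal sketch for flagging questions the whole class struggled with,
# using invented response data (1 = correct, 0 = incorrect).
responses = {
    "Q1": [1, 1, 0, 1, 1],
    "Q2": [0, 0, 1, 0, 0],
    "Q3": [1, 0, 1, 1, 0],
}

THRESHOLD = 0.5  # flag questions fewer than half the class answered correctly

flagged = [q for q, answers in responses.items()
           if sum(answers) / len(answers) < THRESHOLD]
print(flagged)  # → ['Q2']
```

A flagged question is the starting point for the follow-up conversation, not the verdict: it may reveal a misconception to reteach, or a flawed question to set aside.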
Questions or comments? Contribute below or contact Mike on Twitter at @fisher1000
For more on Hacking Standards and the Common Core specifically, visit Amazon for Hacking the Common Core.
*Note: Information contained in this blog post is an amalgamated remix of work I’ve read about, experienced, provided professional development for, and had professional conversations about over the last few years of Common Core implementation. To discover more about how to use data to improve student learning, inform instruction, and align curricular goals, consider the following:
- Using Curriculum Mapping and Assessment Data to Improve Student Learning by Bena Kallick and Jeff Colosimo
- Driven by Data by Paul Bambrick-Santoyo
- Data Analysis Resources from McREL
- Resources from Learner Centered Initiatives
- Protocols for Professional Practice by Lois Brown Easton
- The Data-Driven Classroom by Craig Mertler
Photo credit: FreeImages.com user SHHO (2010) under FreeImages.com Content License