When Ohio began issuing report cards years ago, they were straightforward.
There were a set number of tests. If your school district passed enough of them, it received a certain ranking. Simple to understand. No ambiguity.
Today, you start with the same series of tests, plus attendance rate, plus graduation rate. These are the state indicators.
Then the performance index is factored in, based on a zero-to-120 scale (it couldn’t be zero to 100?). Now add the federal Adequate Yearly Progress measure, or AYP. School districts have four ways to meet AYP: one by actually passing, and three “cheats,” such as projecting that students who aren’t passing now will pass in the future.
Now, if a school district meets at least 94% of the state indicators and scores a performance index of at least 100, it’s rated Excellent. Note: AYP does not count at this point.
But if a school district fails to meet AYP for three consecutive years, and fails in more than one student group in the current year, it can be rated no higher than Continuous Improvement.
If the school district makes it past AYP, it still has to clear the state’s Value Added measure. If it fails Value Added for three consecutive years, the district’s rating drops one level. Got that?
The system is so complex that the Ohio Department of Education publishes a 12-page “Guide to Understanding Ohio’s Accountability System” — and even that is hardly comprehensive.
Must school district report cards be so insanely complicated? Isn’t there an easier, more understandable way?