Testing Our Schools, Ourselves

Our state board of education—the only elected body in DC with any direct oversight of our public schools—will hold a hearing this Wednesday, November 16, at 5:30 pm (441 4th St. NW, Judiciary Square) on how we in DC want to implement the new federal law called ESSA (the Every Student Succeeds Act). That law replaces No Child Left Behind and provides a framework for evaluating our public schools.

Sign up for the hearing by Tuesday 11/15 by emailing sboe@dc.gov.

The state board (which has three new members, post-election: Ashley Carter, at large, replacing Mary Lord; Lanette Woodruff, Ward 4, replacing Kamili Anderson; and Markus Batchelor, Ward 8, replacing Tierra Jolly) is seeking a final round of public input before it approves how OSSE (the office of the state superintendent of education) proposes to implement the new law for DC public schools.

You can see the results of the board’s survey on ESSA here and read more about public input throughout 2016 here.

The board is also asking for public comment on three areas of interest:

–Weight of test scores

–Weight of growth relative to proficiency

–Qualitative indicators of school quality, including an “open, welcoming spirit.”

Right now, OSSE is proposing that test scores count for 80% of a school’s measure of performance.

That high number indicates the importance that OSSE places on the transmutation of individual student performance into performance of the school as a whole. Moreover, because 80% of the evaluation would be based on test scores, such a weighting diminishes, if not obviates, other measures of quality, including teacher training; teacher experience; and teacher retention and attrition. (In DCPS, at least, the latter two are not even counted in a principal’s evaluation—while PARCC and iReady scores are.)

On November 3, Ruth Wattenberg, chair of the ESSA committee of the state board, got to this directly in a memo to the state board chair:

“Crediting schools almost completely based on the proportion of students who score at a given level and not on how much progress the students have made means that schools that enroll lower-scoring students (mainly high poverty schools) have to be many times more effective than their counterparts to earn an equivalent rating. . . . As a result, schools that are models of effective education can get overlooked—and even led to dismantle effective programs and practices [which] has led schools to focus disproportionately on the students who are at the cusp of reaching that threshold instead of on other students. [This] may discourage schools from encouraging the enrollment of lower scoring students.”

Let us parse this statement for a moment:

The cross-sector task force has been working since spring on issues of mobility and enrollment stability. Never once, apparently, did the task force focus on Wattenberg’s observation that our obsession with test scores as accurate barometers of student and school performance can actually result in less enrollment stability for students with the poorest test scores–not more.

Given that our public schools with the greatest churn already have large proportions of the very poorest students (check out the data for Category 3 schools that the cross sector task force has been mulling), this means that the left hand of our public school oversight has no idea what the right hand is doing.

(Hmm: How’s mayoral control of schools working out for you?)

Sadly, students at those so-called high-churn schools may be in most need of economic and scholastic help, since test scores in all our public schools are clearly and unambiguously correlated to income, as Guy Brandenburg’s incredible graphic below shows:


And while we’re on the topic of test scores, let us talk about how we talk about them:

We are less than a month from our annual public school lottery. On that website, as well as others created by the city, our schools’ test scores are paraded around like a bunch of beauty pageant participants in bikinis. (Sorry—but our new president-elect has accurately captured the tenor of the times.)

So what do those test scores really mean?

We know intuitively, even if we don’t admit it, that students take tests—not schools.

But the scores of tens of thousands of individual DC students are transmuted annually into something that is meaningful only as a completely fake number on a website that we say has meaning for a school.

We know, for instance, that different middle schools use different PARCC math tests–but results for all the math tests are combined and spit back out, as if all students everywhere were taking the same test and as if that number has meaning for all of them, when in fact it has meaning for no one.

We also know that our schools respond to this testing by ensuring that even advanced students take the easiest PARCC math tests (yes, I am looking at you, BASIS and Latin), so the school’s overall score may be higher. We also know that our schools respond to this testing by ensuring that our children are not merely taught TO a test, but HOW to take a test, including spending time working on computers; time setting up computers; and money buying computers. (My school’s PTA, for instance, has spent more than $50,000 in recent years on computers–ensuring that there are enough to go around during testing.)

Now, if all this were geared toward finding out how those students do, it might have some proven pedagogical value.

But it’s not.

And I know this because I did not receive any spring 2016 test scores for one of my children—and got the other child’s test scores a month apart, more than 6 months after the tests were completed.

Just like in years past.

More to the point: Who is going to follow up with those tests now, half a year later, and find out what the teachers at those schools did–or didn’t do? And what my children need–or don’t need?

I think our president-elect has captured this zeitgeist well: we say one thing–we want to evaluate our schools and judge their quality accurately!–all the while doing something else that actually prevents the first from happening and not acknowledging the difference at all.

In the end, the reason none of us has timely results for our kids, and the reason our lottery uses fake test scores to rate schools, is that these tests are not about our students at all.

Rather, the purpose of these tests is all about us adults: teacher and principal employment; parental choice; school perception.

Now, none of this is small beer: schools here in DC live and die by those things in terms of enrollment and staffing. They even get renovations preferentially on the basis of enrollment and performance!

(NB: It’s not just me saying this. When a bunch of us parents asked that a parking garage be constructed at Stuart-Hobson Middle School, the former head of DGS, Brian Hanlon, noted that if the school could show great test scores and be fully enrolled, the city might come back and build that garage someday. He was no outlier: enrollment also figures into the current weighting of the formula for determining renovation priority in DCPS.)

It is thus vital that we stop pretending that test scores tell everything–and stop judging schools with them. And, if we are committed to using test scores as measures, we need desperately to report them accurately and in a timely fashion.

Expect more at the hearing on Wednesday.
