A public policy blog from AEI
Teachers in Seattle are refusing to participate in a set of assessments designed to measure student growth over the course of a school year. The MAP, or Measures of Academic Progress, is used by the Seattle public schools as a supplement to their end-of-year summative exams in order to track student progress and calculate teacher value-added, the specific contribution an individual teacher made to a student's learning. The teachers don't believe the tests are an accurate measure of student performance and object to their use as part of teacher evaluations.
The teachers' efforts have won the support of some of education reform's staunchest critics. Impassioned leaders of the boycott have gone so far as to compare their struggle to that of Martin Luther King and the civil rights movement.
While the opportunity to take that hyperbole to task is almost too good to pass up, I’d prefer to address some of the specific concerns that teachers have with this particular test, and why I think they’re wrong.
For what it’s worth, I think teachers should absolutely push back against bad tests or the use of tests to do bad things. But it doesn’t look like the MAP falls into either of those categories.
Let’s take a look:
Bad tests take up days or weeks of valuable instruction time
Teachers often complain, and rightly so, that testing takes away from already-limited instructional time. All told, schools often wind up administering 2-3 weeks of testing in a 36-week school year. While assessing students is important, actually teaching them is far more so.
The MAP tests take about an hour apiece
The MAP tests in Reading and Math each take about an hour to complete. That means that even taking the tests 2-3 times a year (as Seattle does) sacrifices at most 6 hours of instruction, less than one school day.
Bad tests offer a crude look at how many students clear some imaginary bar
The much-maligned No Child Left Behind-mandated proficiency tests establish a performance bar for a given grade and subject and simply measure how many students clear it. They are designed to be most sensitive around the "cut point" of proficiency, so as to distinguish as clearly as possible whether or not a student has demonstrated what he or she needs to know for that year. As a result, they are terribly insensitive in the tails of the distribution (in measuring the knowledge of the highest- and lowest-performing students) and are ill-equipped to measure growth throughout a year (which is what we really want the test to tell us).
The MAP test is computer adaptive, offering a more fine-grained assessment of student performance
The MAP test is computer adaptive, meaning that the test tailors questions to each student as he or she takes it, to get the most accurate measure of what that student knows. Rather than measure whether or not the student cleared some proficiency bar, it assesses the relative position of that student as compared to a nationally representative sample of students. The result is more fine-grained, and gives a much clearer picture of what the student does and does not know. Also, because it is administered several times over the year, teachers and school leaders can track progress and intervene when it becomes clear that a student is falling behind.
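For readers curious about the mechanics, the core idea of an adaptive test can be sketched in a few lines of code. This is a deliberately simplified illustration, not NWEA's actual algorithm (which uses item-response theory): the item bank, the starting estimate, and the fixed up/down adjustment are all assumptions made for the example.

```python
# Illustrative sketch of computer-adaptive item selection.
# Assumptions (not the real MAP algorithm): items are characterized
# only by a difficulty number, and the ability estimate moves up or
# down by a fixed step after each answer.

def run_adaptive_test(item_bank, answers_correctly, start=200, step=10, n_items=5):
    """Administer n_items questions, adapting difficulty to the student.

    item_bank: list of item difficulties on an arbitrary scale.
    answers_correctly: callable(difficulty) -> bool simulating the student.
    Returns the final ability estimate.
    """
    estimate = start
    used = set()
    for _ in range(n_items):
        # Choose the unused item whose difficulty is closest to the
        # current estimate: that question is the most informative one.
        item = min(
            (i for i in range(len(item_bank)) if i not in used),
            key=lambda i: abs(item_bank[i] - estimate),
        )
        used.add(item)
        if answers_correctly(item_bank[item]):
            estimate += step   # correct answer: try harder material
        else:
            estimate -= step   # miss: step back down
    return estimate

# Simulated student who can handle items up to difficulty 230.
bank = list(range(150, 300, 5))
score = run_adaptive_test(bank, lambda d: d <= 230)
```

Because each question is chosen near the student's current estimate, the test homes in on the edge of what the student knows, which is why an adaptive test stays informative for the highest and lowest performers where a fixed proficiency test is not.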
Bad tests are used as the sole measure of a teacher’s performance
No one test can possibly capture a teacher’s total contribution to a child’s learning. If tests are going to be used as an evaluative tool for teachers, they should only comprise some part of that evaluation, because they can only measure one facet of what we want from our teachers.
The MAP test is used as only one part of a teacher’s evaluation, and a small part at that
Teachers in Seattle are primarily evaluated based on classroom observations, not tests. In fact, test scores only serve as a warning sign to principals to place teachers on performance improvement plans. It appears that MAP is only being used as a diagnostic tool, not an evaluative one.
Bad tests don’t give actionable information for months
It is not uncommon for teachers and schools to get the results of their standardized tests months after they have been administered. These tests are still taken with paper and pencil and thus have to be collected, sorted, and graded, all of which takes time. As a result, teachers get little up-to-date feedback that can inform their instruction.
The MAP test gets results back in days
Because the tests are computerized, teachers and school leaders can get results back in just a few days. This provides them with information they can use immediately, allowing them to provide remediation to students who are behind and enrichment to those who are ahead.
If MAP had any of the problems of bad tests listed above, I'd be on the side of those teachers right now. We have a lot of bad tests out there, and we have a lot of good tests being used in dumb ways. But that does not appear to be the case in Seattle. It looks like the MAP is being used the way that it should be.
No test, and thus no teacher evaluation system, is perfect. We know this. But what tends to get lost in this discussion is the fact that before tests like MAP came around, 99% of teachers were rated effective and schools had little or no information about where students were during the year. MAP is clearly an improvement on that, and one that should be built upon, not boycotted.
© 2016 American Enterprise Institute for Public Policy Research