The public policy blog of the American Enterprise Institute
Dispatches from a nervous Common Core observer (part 8 of 10)
One of the things that drew me to the Common Core initially was its great potential as a platform to develop new, cool technologies that teachers all across the country could use to improve their instruction.
I started my career teaching in Alabama, and given the tiny market share of that state, not a lot of folks were developing cool apps or resources aligned to the Alabama state standards. We pretty much had to take what the big textbook companies were creating. The Common Core, for all of its foibles, does create a new marketplace for resources that, while still having some kinks to work out, can benefit teachers and school leaders in the long run.
Already, the increased competition and space created by the standards have led to the development of some sweet stuff. Teachers can now share their lessons with tools like BetterLesson and LearnZillion and their assessments with tools like MasteryConnect. Outside of the largest markets, tools like these would likely never have existed absent a common set of standards across the nation.
I am also encouraged to read about the use of computer-based assessments, and particularly computer-adaptive assessments, for the tests aligned to the standards. Paper-and-pencil tests are antiquated and pose untold issues with security and administration, while computer-adaptive tests hold the promise of far more accurate measurement of student achievement.
But increased reliance on technology has run up against a very real issue: capacity.
In order for new tests or new tools to take advantage of new technologies, schools need to be up to date with hardware and software. Looking at the requirements for the two major testing consortia, I tend to agree with the SBAC technology guidelines, which describe them as “a low compliance threshold.” At minimum, the SBAC test needs only 233 MHz of processing speed, 128 MB of RAM, and 52 MB of free hard drive space, though the guidelines recommend 1 GHz, 1 GB, and 1 GB respectively. PARCC recommends the same. Considering that computers with at least 233 MHz have been around since 1997 or so, you’d be hard pressed to find computers in schools that don’t meet this threshold.
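To make the thresholds concrete, here is a minimal sketch of checking a device against them. The minimum and recommended figures are the SBAC numbers cited above; the sample lab machine is entirely hypothetical, for illustration only.

```python
# SBAC minimum and recommended hardware thresholds cited above.
MINIMUM = {"cpu_mhz": 233, "ram_mb": 128, "disk_mb": 52}
RECOMMENDED = {"cpu_mhz": 1000, "ram_mb": 1024, "disk_mb": 1024}

def meets(device, spec):
    """True if the device meets or exceeds every threshold in spec."""
    return all(device[key] >= value for key, value in spec.items())

# A hypothetical aging lab machine: 800 MHz CPU, 512 MB RAM, 2 GB free disk.
lab_pc = {"cpu_mhz": 800, "ram_mb": 512, "disk_mb": 2048}
print(meets(lab_pc, MINIMUM))      # True: clears the minimum bar
print(meets(lab_pc, RECOMMENDED))  # False: short of recommended CPU and RAM
```

Even a machine well past its prime clears the minimum bar, which is exactly the point of the "low compliance threshold."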
This makes sense: Testing companies don’t want to create products that would be too expensive or onerous for schools to adopt.
The problem with this “low threshold” is the ceiling it creates for new providers. If tests have to be made simple enough to run on Windows XP, which SBAC says it will support through 2015, how much does that stipulation limit the tests’ ability to harness the power of Windows 7 or newer Apple and Android operating systems? Both consortia require devices with at least a 9.5-inch screen and a physical keyboard, nixing most smartphones and causing real problems for adapting tablets for testing purposes. With 600 school districts implementing tablet-based technology initiatives, it remains to be seen how well those initiatives will mesh with the new testing regime.
Given limited budgets for capital upgrades, it is reasonable to think that the requirements for testing will play a large role in technology purchases moving forward. Few districts have the cash to buy separate sets of devices for testing and for teaching. If they buy devices that merely meet the minimum standards for testing compliance in 2014-15, they risk ending up with hardware that is soon obsolete and that limits their ability to use many of the cool new tools being developed.
But this isn’t the only tension. Standardized tests have to be given within particular windows for accountability purposes, and if schools need to administer tests on computers, there have to be enough seats for students. This is easy to do if every student has his or her own tablet or computer, but the demand on network bandwidth would be quite large. If a school has limited lab space, teachers will need to shuffle kids in and out round-robin style, which, while easier bandwidth-wise, is a lot harder logistically. States could extend testing windows to accommodate schools with fewer computers or less bandwidth, but that raises questions about cheating and the time differential in administering tests. If one school or one teacher is evaluated off tests that were given at the end of March and another is measured off tests that were given at the beginning of May, there is good reason to believe that is not an accurate comparison.
While I think I’ve been laying it on the Obama administration pretty thick for what I think is their ill-conceived decision to align with the Common Core, I do need to give them some props for their recent push to revamp E-rate funding to pay for bandwidth upgrades. The plan is not perfect, and circumventing Congress through the FCC will always give me pause, but demand for E-rate funding has far outstripped its supply, and ensuring that every school has at least 1 gigabit per second of internet bandwidth would allow it to connect something like 50,000 computers to the testing network at any one time (according to SBAC’s fun technology readiness calculator). That would pretty much take care of the bandwidth problem and leave a great deal of room for growth in the future.
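The 50,000-computer figure is roughly what the arithmetic implies if each active test session uses on the order of 20 kilobits per second. That per-session figure is my assumption for illustration; SBAC’s calculator uses its own per-student estimates.

```python
def concurrent_testers(pipe_bps, per_session_bps):
    """Simultaneous test sessions a network pipe can carry."""
    return pipe_bps // per_session_bps

GIGABIT = 1_000_000_000   # the 1 Gbps per-school bandwidth target
PER_SESSION = 20_000      # assumed ~20 kbps per active test session
print(concurrent_testers(GIGABIT, PER_SESSION))  # 50000
```

Note the sensitivity to the assumption: richer test items (audio, video, simulations) that consume more bandwidth per session shrink that capacity proportionally.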
By the way, if these questions interest you, Taryn Hochleitner and Allison Kimmel have a chapter in Rick Hess’ and my upcoming Common Core Meets the Reform Agenda volume (due out this fall) that digs into these issues in great detail. I’ll make sure to let everyone know how to get a hold of it when that comes out.
As always, give me a shout in the comments or over on Twitter if you want to weigh in (I’m @mq_mcshane).
© 2014 American Enterprise Institute for Public Policy Research