Last fall, President Obama unveiled a controversial plan to promote college affordability by changing the way the federal government distributes student financial aid. The proposal calls for a new system of federal college ratings that would measure how well colleges perform on measures of access, affordability, and student success.
Most college presidents argue that these three things are linked in an “iron triangle”: that you can’t improve on one of them without negatively affecting the others. Through this lens, enrolling more disadvantaged students is a worthwhile goal, but it will lower completion rates. Reducing costs will boost affordability and encourage access, but it could compromise the quality of the education provided. And so on.
Never before has a reform targeted all three sides of the iron triangle at the same time. Whether it will succeed depends in large part on whether the iron triangle is indeed an iron law. Are there colleges hitting high marks on all three sides? How many colleges might be in trouble under a new ratings scheme?
We analyzed this question in a new study that aimed to simulate how 1,700 four-year colleges might fare on the new ratings system. Using data from the Integrated Postsecondary Education Data System (IPEDS), we developed measures of access (the percentage of undergraduate students who receive Pell Grants), success (the six-year graduation rate for first-time, full-time students), and affordability (the average net price after grants and scholarships).
We then plotted the results, with access on the Y-axis, affordability on the X-axis, and colors that correspond to graduation rates (red = low, green = high):
You can find an interactive version of the graph here.
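For readers who want to reproduce this kind of figure with their own IPEDS extract, the analysis can be sketched in a few lines of Python. The data below are hypothetical stand-ins, not real IPEDS values; the college names, column names, and thresholds are illustrative assumptions.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; omit if running interactively
import matplotlib.pyplot as plt

# Hypothetical stand-in for the three IPEDS-derived measures described above.
# A real analysis would load these from the IPEDS survey files.
colleges = pd.DataFrame({
    "name": ["College A", "College B", "College C", "College D"],
    "pct_pell": [0.45, 0.15, 0.60, 0.30],     # access: share of undergrads with Pell Grants
    "net_price": [8500, 22000, 9500, 14000],  # affordability: average net price ($)
    "grad_rate": [0.35, 0.85, 0.55, 0.60],    # success: 6-year graduation rate
})

# Scatter plot in the article's layout: access on Y, net price on X,
# color keyed to graduation rate (red = low, green = high).
fig, ax = plt.subplots()
points = ax.scatter(
    colleges["net_price"], colleges["pct_pell"],
    c=colleges["grad_rate"], cmap="RdYlGn", vmin=0, vmax=1,
)
ax.set_xlabel("Average net price ($)")
ax.set_ylabel("Share of undergraduates receiving Pell Grants")
fig.colorbar(points, label="6-year graduation rate")

# Screen for colleges doing well on all three measures at once:
# low price, at least one-quarter Pell students, completion above 50%.
strong = colleges[
    (colleges["net_price"] < 10000)
    & (colleges["pct_pell"] >= 0.25)
    & (colleges["grad_rate"] > 0.50)
]
print(strong["name"].tolist())
```

On this toy data, only one of the four hypothetical colleges clears all three bars, which mirrors the pattern in the real sample: institutions strong on every side of the triangle are rare.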
What did we find? Few colleges perform poorly on all three measures, but hardly any perform well on all three, either. The figure also shows the iron triangle in action: the red dots cluster in the upper left-hand corner, while the green dots cluster in the lower right. In other words, schools with the highest graduation rates (dark green) are expensive and enroll few low-income students, while schools with many low-income students and low prices tend to have low completion rates.
When we took the figure apart, we found four basic groups of colleges.
We think there are lots of lessons to draw from these findings. First and foremost, very few institutions are doing well on all three measures. Only 19 institutions in our sample had a net price under $10,000, enrolled at least one-quarter low-income students, and had a graduation rate above 50 percent. This is presumably the status quo the President wants to fix. But the results suggest it will be long, hard work to do so.
Second, policymakers must recognize that it is generally easier for a college to change whom it admits than to change the success rates of the students already enrolled. Small, elite institutions will be able to improve by letting in a few more low-income students, which is not a bad thing. But other colleges, those with open doors and low completion rates, will either have to improve teaching and student support or get more selective, neither of which is guaranteed to drive improvement. The point is that the relative ease of improving on the different measures will lead some schools to benefit disproportionately from the new ratings system.
Third, because colleges are generally at four different starting points, improvement on the ratings will entail very different behavioral changes for different institutions. Carrots might work better for some goals (increasing the enrollment of Pell Grant students) and sticks for others (compelling cost containment and tuition reduction). A system that doesn’t reflect these different goals will be hard-pressed to succeed.
Fourth, it’ll be difficult to come up with measures and thresholds that enable policymakers to avoid perverse consequences. The thorniest issue is how to measure the value that colleges add to students instead of the selectivity of their admissions process. A “value-added” measure would reward schools that help students build human capital but is hard to capture. In contrast, measuring the level of success could reward colleges more for whom they enroll (the inputs) than the quality of the education they provide.
In thinking through these issues, the President must acknowledge that a poorly designed accountability system will likely do more harm than good, providing critics with the ammunition they need to roll back future efforts to hold colleges accountable. We still don’t know exactly how the ratings system will be designed, but our new report shows just how much progress we have to make if we’re to create the high-quality, affordable postsecondary opportunities that Americans need.
Andrew P. Kelly is a resident scholar and director of the Center on Higher Education Reform at the American Enterprise Institute. Awilda Rodriguez is a research fellow at the Center on Higher Education Reform at the American Enterprise Institute.
© 2014 American Enterprise Institute for Public Policy Research