After four years of significant growth, the Pell Grant program faces in 2014 what has been called a “funding cliff.” The looming shortfall has set off a heated debate about how to put the program—and our entire student-financial-aid system—on a path that is more sustainable and better serves students.
Behind the sound bites of the presidential election, many researchers, wonks, advocates, and foundations have become involved. This fall, for instance, the Bill & Melinda Gates Foundation awarded grants to an array of organizations tasked with reimagining the design and delivery of financial-aid programs. (I am a part of one project, organized by HCM Strategists.)
The discussions under way go beyond temporary fixes to financing issues; they are asking important questions about the future shape of financial aid. How can scarce money be allocated more efficiently? Can reforms help more students complete college while maintaining a commitment to ensuring them access to postsecondary education?
As the Pell Grants and other financial-aid programs come under scrutiny, people leery of changing them caution that any modifications must be “research based.” William J. Goggin, executive director of the federal Advisory Committee on Student Financial Assistance, recently warned that any effort to focus on college completion rather than access must be carefully weighed. “The burden of proof should be on those who say you can redistribute and come out ahead,” he told Inside Higher Ed. The nonprofit Center for Law and Social Policy said in a news release that revisions must be “backed up with data showing that change is needed and that the proposed changes will help—not hurt—needy students.”
Those warnings illuminate the hulking elephant in the room: the sheer lack of rigorous, thoughtful research examining the impact of financial aid on student success. Simply put, current research doesn’t answer basic questions about how changes in financial-aid programs would affect student behavior.
Most studies have focused on the factors that shape enrollment decisions, or on the overall impact of specific programs. But few have attended to how the presence or absence of aid actually affects students’ decisions about their education. As the researchers Sara Goldrick-Rab, of the University of Wisconsin at Madison, and Douglas N. Harris, of Tulane University, recently argued in a paper on improving education research, we simply do not know enough about which kinds of financial-aid programs work best, for which students, and in what ways.
In other words, if we want to develop effective and efficient financial-aid policy, based on facts rather than fiction, then policy makers need to get serious about investing in research and development.
What is needed are large-scale, longitudinal studies in which students are randomly assigned to receive either their regular aid packages or variations—such as additional aid, aid that is disbursed in different ways, or aid that comes with additional counseling. Comparing the behavior of randomly assigned groups of students who receive a given form of aid with those who don’t allows researchers to isolate the independent impact of that aid.
But randomized controlled trials of financial-aid programs are exceedingly rare. Until this year, federal policy makers had never called for this kind of “experimental” study of the Pell program—or of financial aid in general.
The Coalition for Evidence-Based Policy, a nonpartisan group that runs a clearinghouse of rigorous social-policy research, lists just two studies under its “postsecondary education” section, neither of which examines the effect of different aid programs on student success. (One is a study of completion of the Free Application for Federal Student Aid, or Fafsa; the other a study of the effects of a mentoring program.) So while cost is often cited as a reason that low-income students do not complete college, as the Stanford University economist Eric Bettinger noted recently, “There is surprisingly little research on how need-based aid programs affect students’ collegiate outcomes.”
Experimental evaluations have become de rigueur in other areas of social policy. In the past two decades, federal programs like Head Start, Job Corps, Upward Bound, and those under the Job Training Partnership Act and Workforce Investment Act have all been subjected to large-scale random-assignment analyses of their effectiveness.
Null results for the impact of programs like Head Start and Upward Bound have disheartened advocates and stirred controversy, but those studies have helped inform policy makers about what’s effective and what’s not. Demonstration projects in Medicaid, welfare, food stamps, and children’s health insurance have also fostered program innovation.
Without research and development on financial aid, federal policy makers have been limited in their ability to answer basic questions about the effect of existing programs on student success, let alone to propose promising changes to those programs. The lack of research also sets up a Catch-22: Reformers struggle to make an evidence-based case for policy change, yet gathering that evidence requires experimenting with reforms that program advocates may resist precisely on the grounds that they are not research-based.
But there are signs of progress. First, a handful of randomized studies of financial aid are under way. MDRC, a nonprofit research organization known for its experimental analyses of welfare policy in the 1990s, has carried out a series of randomized studies of performance-based scholarships. The awards are based on both academic performance and financial need. Early results suggest that at most colleges studied, the scholarships have had immediate, positive effects on academic performance and credits earned.
MDRC and the nonprofit Institute for College Access and Success have also started pilot “aid as a paycheck” programs, in which students at community colleges receive their aid in regular increments over the course of a term rather than in one lump sum.
The Wisconsin Scholars Longitudinal Study, directed by Goldrick-Rab, is a statewide experiment that examines the impact of a privately financed, need-based award on Pell Grant recipients at public two-year colleges. So far, results suggest that $1,000 of additional aid is associated with a two-to-four-percentage-point increase in rates of retention from the first to second year of college. The study is also using survey and interview data to examine the contexts in which aid is most effective, and where its delivery may have unintended results. For instance, when students receive less aid than they expected, the shortfall may affect how they perform in college.
Second, the federal government is finally getting its act together. This past summer, the Department of Education announced the first-ever federal Pell Grant experiments. The program will study two changes in Pell eligibility. The first will provide bachelor’s-degree holders with access to Pell dollars to pay for vocational training. The second will enable students to access Pell dollars for shorter-term, lower-intensity programs (as short as eight weeks and a minimum of 150 clock hours) than current law allows.
The Institute of Education Sciences, the research arm of the Education Department, has also contracted to expand the What Works Clearinghouse. The clearinghouse currently collects rigorous research on successful school interventions; the expansion will add a focus on postsecondary education.
Such efforts are promising but insufficient. To make up for years of neglect, we need a sustained federal and philanthropic commitment to research and development on financial aid. At $160-billion, financial aid is the most expensive higher-education strategy for promoting student attainment that we have. The least we can do is devote a fraction of that commitment to making sure it works as well as it can. In the haste to “fix” the shortfall in the Pell Grants, policy makers must not lose sight of the need for basic yet careful research.
Nor should the lack of rigorous research be an excuse to maintain the status quo. Proponents of aid programs should embrace efforts to improve them. And better-informed policy making should ensure that the dollars go to the students who will benefit from them the most.
Andrew P. Kelly is a research fellow in education-policy studies at the American Enterprise Institute.