Foxes Watching the Henhouse
How Higher Education's Voluntary Accountability Systems Miss the Mark

No. 4, April 2010

In his first speech to a joint session of Congress, President Barack Obama lamented America's failure to keep pace with other industrialized nations and challenged the country to regain its mantle as the worldwide leader in postsecondary attainment. Accomplishing this goal requires not only greater access to postsecondary education, but higher levels of college completion. Providing consumers with better information about college costs and quality would help hold postsecondary institutions accountable. Two new voluntary accountability systems miss the mark, however, either by failing to offer new information or by making it difficult for consumers to make comparisons across institutions. A better accountability system would make such comparisons easier and would be mandatory, not voluntary.

Key points in this Outlook:

  • Market accountability in higher education requires high-quality information about college costs and quality that consumers can use to choose colleges.
  • New voluntary efforts to encourage greater transparency on the part of colleges and universities are insufficient. Such efforts are better cast as attempts to preempt more significant accountability measures.
  • Transparency about costs and outcomes must be mandatory, not voluntary.

There has been a growing chorus of calls for increased college completion and accountability since the George W. Bush administration's Spellings Commission put colleges and universities on notice: if the United States was to maintain its competitive edge, American institutions of higher education could no longer be "increasingly risk-averse, at times self-satisfied, and unduly expensive."[1] The most pressing problems, the commission argued, were the "lack of clear, reliable information about the cost and quality of postsecondary institutions, along with a remarkable absence of accountability mechanisms to ensure that colleges succeed in educating students."[2]

The major higher-education trade associations have responded to these calls for transparency and accountability by announcing two voluntary databases that are designed to increase the quality of information consumers have about institutions of higher education. The National Association of Independent Colleges and Universities launched its University and College Accountability Network (U-CAN) in September 2007. A few months later, the American Association of State Colleges and Universities (AASCU) and the Association of Public and Land-grant Universities (APLU) heralded the birth of their Voluntary System of Accountability (VSA). The associations have touted these efforts as a step toward meeting the pressing need for increased accountability. State systems of higher education have joined in, adopting the model for their own institutions, and a recent initiative will extend such a network to community colleges. Increasingly, these voluntary accountability systems are defining the contours of higher-education accountability in the twenty-first century.

But a close examination of these two prominent efforts reveals serious flaws that undermine their utility as engines of accountability. U-CAN, the site for private colleges and universities, is not really new at all; it is essentially a repackaging of data available elsewhere, and it provides almost no new information. In contrast, the VSA, which catalogs public colleges, represents a legitimate effort to provide students with important information about how much college costs and the experience students receive in return. But despite its lofty goals, it too suffers from numerous shortcomings: not all institutions participate, particularly those at the top and bottom of the quality scale; the site is deliberately designed to thwart the easy comparison of institutions--even though that is allegedly what the VSA is for; and many of the most crucial data elements are incomplete, noncomparable, or selected in a way that often obscures differences between institutions.

To improve consumer choice and exert meaningful pressure on schools to improve, these efforts and others like them need to be more complete, comparison-friendly, and designed to highlight institutional differences rather than hide them. If existing flaws are not resolved, we will end up in the worst of all worlds: the appearance of higher-education accountability without the reality.

Accountability via Consumer Choice

In developing a system of educational accountability, policymakers can opt for one of two basic strategies: a top-down system of government-mandated standards, assessments, and rewards, or a more diffuse, market-oriented system in which choices made by informed consumers help to regulate providers. At the K-12 level, No Child Left Behind falls into the first category, as the government aims to hold schools accountable by mandating regular testing and imposing regulatory sanctions on schools that do not make the grade.

This heavy-handed model is ill-suited to regulate a sector as diverse as higher education. In contrast to the K-12 system, higher education enjoys a more open market. In theory, consumers in this market have the freedom to shop around for the service provider that best suits their needs. For this market to fully function, however, consumers must be supplied with adequate information about the cost and quality of the providers from which they can choose. Armed with such information, they can vote with their feet, rewarding institutions that provide the best service at the most affordable price and punishing those that fall short.

As the Spellings Commission pointed out, though, the higher-education market is not as information-rich as it needs to be, and prospective consumers are handicapped by a lack of transparency on the part of institutions. This is where systems like U-CAN and VSA come in. Both efforts are explicitly designed to solve some of the information problems that handicap market accountability by encouraging schools to be more transparent.

But a system that relies on consumer choice to unleash market accountability needs to give consumers the information they want in the way they want it. In general, consumers care about price, meaning their actual out-of-pocket costs, and about service, in particular the quality of teaching, expectations for learning and degree attainment, and the likelihood of postcollege success. They also need this information to be provided in a way that facilitates choice, with easy-to-make interinstitutional comparisons on important measures. Lastly, if market accountability is to compel low-performing schools to improve, consumers must have information about all available schools, not just those that choose to participate.

By these criteria, U-CAN fails to meet the most basic definition of an accountability system. While its search engine does accommodate institutional comparisons on the basis of student characteristics (such as SAT scores), graduation and retention rates, and college costs, it does not obligate institutions to gather or reveal any data that are not already available elsewhere. U-CAN is best cast as a preemptive attempt to fend off federal or state regulators; it is not a sincere attempt to compel institutions to become more transparent and focus on consumer needs.

In contrast, the VSA is more promising as a mechanism to improve market accountability. Its "College Portraits" are supposed to display previously unavailable data on costs, student engagement, and student-learning outcomes. Unfortunately, the College Portraits are difficult to compare. Such comparisons, and any attempt to rank colleges and universities on common metrics, naturally make higher-education leaders nervous. As is so often the case in higher-education reform, the interests of institutions have trumped the interests of consumers: the creators of the VSA have made conscious decisions about which data to include and how to present them, and those decisions often inhibit easy comparisons across institutions.

Missing the Mark on Comparability and Utility

The VSA's College Portraits do not have any features that facilitate side-by-side comparisons. Users cannot search for schools that share a set of characteristics--such as admissions selectivity, cost, or average time to degree--nor can they easily rank schools on any of the criteria. Instead, users must navigate to an institution's portrait either by running a search on the school's name or by clicking on the school's state.

The obstacles to real comparability trace back to questions about who owns the data, echoing a common refrain in debates about higher-education reform. The "Common Questions and Answers" document explaining the launch of the VSA addresses this issue directly:

Q: Is there a central web site and search engine that can be used to search across the College Portrait pages of all VSA participants?

A: No. The College Portrait web pages will be hosted on individual institution websites not centralized in one location.[3]

In other words, designing a college-information clearinghouse that made comparisons difficult was not the result of poor web design, but was deliberate.

Beyond the general issue of comparability, some of the VSA's most innovative data elements have been implemented in a way that severely limits their utility to parents and prospective students. In particular, the net-cost estimates, future plans of graduates, learning outcomes, and student-engagement measures all leave much to be desired.

Net-Cost Estimates. Consumers typically care about choosing the product that meets their needs at the lowest cost. Unfortunately, pricing in higher education is notoriously opaque. Colleges and universities have become increasingly reliant on high-price, high-aid enrollment policies under which few people pay the listed price. Under this system of price discrimination, colleges list a high tuition price but then tailor the actual price through grants and loans to individual students based on their ability to pay and their academic credentials. Most of this information is hidden from prospective consumers, however, leaving them with only the sticker price as a rough indicator of cost.
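To make this arithmetic concrete, the sketch below works through a hypothetical example of high-price, high-aid pricing. The dollar figures, the aid schedule, and the estimate_net_price function are all invented for illustration; they do not describe any real institution's aid formula.

```python
# Hypothetical illustration of high-price, high-aid pricing. All numbers and the
# aid schedule below are invented; no real institution's formula is implied.

def estimate_net_price(sticker_price, family_income, merit_aid=0):
    """Rough net price: sticker price minus need-based grants and merit aid."""
    if family_income < 40_000:              # toy need-based grant schedule:
        need_grant = 0.60 * sticker_price   # lower-income families get larger grants
    elif family_income < 80_000:
        need_grant = 0.35 * sticker_price
    else:
        need_grant = 0.10 * sticker_price
    return max(sticker_price - need_grant - merit_aid, 0)

# Two students face the same $42,000 sticker price but very different net prices.
print(estimate_net_price(42_000, family_income=35_000))                    # 16800.0
print(estimate_net_price(42_000, family_income=95_000, merit_aid=5_000))   # 32800.0
```

The point of the example is simply that the sticker price alone reveals little about what a given family will actually pay.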

The VSA requires institutions to provide a net-cost calculator, but institutions have very different notions of what constitutes a net-cost calculator. Of the 329 institutions that have joined the VSA, 109 have a functioning link to a calculator that factors in the institution's tuition and fees, the student's living arrangements, and the family's ability to pay. While these schools should be applauded, the track record for the rest of the VSA members is pretty bleak. The other 220 institutions often lack a functioning link, or they link to generic and irrelevant cost calculators.

To label such efforts "net-cost calculators" is false advertising. Most do not improve upon what is already publicly available, and what is available does little to solve the sticker-price problem.

Future Plans of Graduates. In addition to cost, the other key piece of consumer information is product quality. There are various potential measures of postsecondary quality, ranging from graduation and retention rates to the labor-market success of graduates ten years down the line. State and federal policymakers have increasingly sought to link student outcomes in the labor market--employment, earnings, and employer satisfaction--with the colleges from which students graduated.

The VSA attempts to get at one dimension of student outcomes by including data on the future plans of bachelor's-degree recipients at participating schools. This should not be seen as an indicator of postcollege success, however, because it does not give the percentage of students who find jobs or who choose to pursue postgraduate work, but rather the percentage of students who intend to. Unfortunately, data on actual postcollege outcomes are nonexistent.

Learning Outcomes. Another way to gauge the quality of an institution is to measure how much students learn while they are there. Several organizations have created standardized tests, designed to be administered to freshmen and seniors, to gauge the "value added" by the average student's time at the institution. Colleges and universities are generally loath to submit to this kind of standardized testing for fear of how their results might compare to those of their peers.

To the VSA's credit, in spite of the controversial nature of measuring college learning outcomes, it has required members to include measures of student learning on their College Portraits. In order to show evidence of student learning, participating institutions must publish freshmen and senior scores on one of three eligible standardized exams: the Collegiate Learning Assessment (CLA), the Collegiate Assessment of Academic Progress (CAAP), or the Measure of Academic Proficiency and Progress (MAPP). Of the sixty-nine institutions that had posted results of one of these exams to their portrait by the end of September 2009, fifty-nine had chosen the CLA, five the CAAP, and five the MAPP.

Each of the tests is norm-referenced, but they are not directly comparable because each has different components, testing protocols, and scales. Because the VSA profiles display assessment results as raw scores rather than percentiles, consumers will have difficulty comparing schools that use different exams. It is akin to knowing one school's average SAT score and another school's average ACT score without knowing where those scores ranked the schools across all institutions using that test.

Even among schools that have all chosen the CLA, providing users with percentiles would make it easier to compare one school to another with precision. How much higher is the performance of a school that scores a 1200 on the CLA than one that scores an 1150? Without a sense of how these scores stack up against those of the rest of the schools that use the test, it is difficult for the average consumer to tell.
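As a rough sketch of what such a percentile view could look like, the snippet below converts raw scores into percentile ranks among schools using the same exam. The list of scores is invented for illustration.

```python
# Sketch: express a raw exam score as a percentile rank among schools that use
# the same test. The cla_scores list is invented for illustration only.

def percentile_rank(scores, value):
    """Share of schools scoring at or below `value`."""
    return 100.0 * sum(1 for s in scores if s <= value) / len(scores)

cla_scores = [1050, 1090, 1110, 1130, 1150, 1160, 1180, 1200, 1230, 1280]

for score in (1150, 1200):
    pct = percentile_rank(cla_scores, score)
    print(f"A school averaging {score} sits at roughly the {pct:.0f}th percentile")
```

Reported this way, the 50-point gap between 1150 and 1200 becomes a 30-percentile-point gap in this hypothetical pool, a relative standing that a raw score cannot convey.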

Student-Engagement Measures. Many consumers are as concerned about the student experience as they are about whether the school makes students more successful in the long run. Scholars have argued that a sense of student engagement, or the degree to which students feel involved in the academic and social life of the campus, should be an important factor in college choice and is often related to other outcomes like student achievement and perseverance.

The VSA requires participants to report results from a major survey of student engagement. More often than not, institutions have reported their scores on a set of twenty-three survey questions from the National Survey of Student Engagement (NSSE).

There are three key problems with the way the VSA has implemented its student-engagement component. First, the portraits list engagement scores for seniors but omit scores for first-year students. If student engagement, as measured by NSSE, is positively related to retention and perseverance, then those students who are still on campus by their senior year are likely to be the most engaged. This presents a selection-bias problem likely to tilt engagement scores upward and distort the overall level of engagement. Indeed, NSSE's annual report shows that seniors score higher than first-year students on sixteen of the twenty-three questions featured in the College Portraits.[4]

Second, while NSSE summarizes institution-level results using five summary (or benchmark) scores, which institutions use to assess their performance relative to their peers, the VSA only displays responses to a subset of individual NSSE questions. It is not clear how the VSA decided which NSSE indicators to include, and many measures of academic rigor (such as how many books were assigned for the average class or how many paper assignments a student completed) were left out.

Lastly, the individual measures that are included are often calculated in a way that minimizes variation across schools, making it difficult to tell institutions apart on many of the measures. The VSA scores attach equal weight to any answer other than the lowest possible category. For example, though two different schools might report that "97 percent of seniors reported working harder than they thought they could to meet an instructor's standards or expectations," the typical response at one school might have been "sometimes," while at another it might have been "very often." These responses would suggest that the level of academic challenge may be different across the two schools, but these differences are obscured by the VSA's scoring criteria.
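The sketch below illustrates how this collapsing works, using two invented response distributions. Both schools have 3 percent of seniors in the lowest category, so both post the same "97 percent" figure even though their typical responses differ sharply.

```python
# Invented distributions: both schools report "97 percent" under a rule that
# counts any answer above the lowest category, yet their typical answers differ.
responses = {
    "School A": {"never": 0.03, "sometimes": 0.70, "often": 0.20, "very often": 0.07},
    "School B": {"never": 0.03, "sometimes": 0.07, "often": 0.20, "very often": 0.70},
}
weights = {"never": 1, "sometimes": 2, "often": 3, "very often": 4}

for school, dist in responses.items():
    collapsed = 100 * (1 - dist["never"])                             # VSA-style score
    mean_resp = sum(weights[k] * share for k, share in dist.items())  # 1-4 scale average
    print(f"{school}: collapsed score {collapsed:.0f}%, mean response {mean_resp:.2f}")
```

On the collapsed measure the two hypothetical schools are indistinguishable; on the underlying 1-4 scale they are more than a full response category apart.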

The NSSE Scores in Practice

We collected NSSE scores from the portraits of the 242 schools that reported them.[5] A look at the NSSE scores across schools reveals that choices about which NSSE items to include and how to "score" them often serve to obscure differences across schools.

The way the VSA has calculated the student-engagement component ensures that, on most indicators, schools often look both quite good and quite similar to one another. Moreover, when we plot the NSSE scores against a basic outcome of interest--the six-year graduation rate--we see that most items show little or no relationship to institutional completion rates.

Figures 1, 2, and 3 plot each school's response to an NSSE engagement indicator against the school's federally collected six-year graduation rate. The x-axis corresponds to the graduation rate and the y-axis to the institution's response on the NSSE indicator in question. The variance of a given NSSE indicator is shown by the extent to which the dots are spread vertically along the y-axis. If the dots cluster tightly in a thin horizontal line, there is little variance; if they are spread out from top to bottom, the responses vary considerably. In addition, we have included a "best fit" line to measure the strength of the relationship between the NSSE indicators and graduation rates: the steeper the line, the stronger the relationship.
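For readers who want to reproduce this kind of plot, a minimal sketch is below. It assumes a pandas DataFrame named schools with a grad_rate column and a column for the NSSE indicator of interest; the column names are assumptions for illustration, not the structure of our actual data file.

```python
# Sketch: scatter an NSSE indicator against six-year graduation rates and overlay
# a least-squares "best fit" line. Column names (grad_rate, the indicator) are
# assumed for illustration.
import numpy as np
import matplotlib.pyplot as plt

def plot_indicator_vs_completion(schools, indicator):
    x = schools["grad_rate"].to_numpy(dtype=float)
    y = schools[indicator].to_numpy(dtype=float)
    plt.scatter(x, y, alpha=0.5)                # one dot per institution
    slope, intercept = np.polyfit(x, y, 1)      # fit the "best fit" line
    xs = np.linspace(x.min(), x.max(), 100)
    plt.plot(xs, slope * xs + intercept)
    plt.xlabel("Six-year graduation rate (%)")
    plt.ylabel(f"{indicator} (% of seniors)")
    plt.show()
```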

While there is no objective measure of what constitutes "enough" variation across institutions, it is clear that many of the indicators show little daylight between schools. Take, for instance, the NSSE indicator that measures how many students report that they prepared for class six or more hours per week (figure 1). With the exception of the eight outliers with scores below the 50 percent mark, 87 percent of institutions report answers in the 75-90 percent range (203 out of 234 eligible schools). Other indicators of academic rigor show even more uniformity across schools: on the indicator that asks whether students have ever had to work harder to meet expectations (see figure 2), 98 percent of institutions report scores between 85 and 100 percent (239 out of 242). It is nearly impossible to distinguish between institutions on the basis of their level of commitment to student success. The responses to this item (see figure 3) are so tightly clustered that they almost form a solid line across the top of the graph. To be fair, the indicators that measure group learning and active learning generally exhibit more variance, as do a few of the indicators that measure the quality of interactions with faculty, staff, and student-support services.

Figure 1

Figure 2

Figure 3

How strongly are NSSE scores related to completion rates? Six of the learning-experience indicators and two of the student-satisfaction indicators show a moderate, positive relationship to graduation rates. Beyond these, however, the relationships between the student-engagement indicators and graduation rates are flat or even slightly negative. In figures 2 and 3, for instance, the indicators of institutional commitment to student success and expectations for student work show little to no relationship to completion rates. By minimizing variance and maximizing scores, the VSA has made these items less informative than they could have been.

Who Volunteers?

Beyond these design flaws, the concept of making an "accountability" system "voluntary" is problematic in its own right. One cannot possibly hold an entire set of institutions accountable for performance if poor performers can opt out of the system. Those colleges and universities that lag behind their peers have an incentive to remain in the background, protected from the pressure to improve that transparency can create. Has the voluntary nature of the VSA led to such creaming, where the best schools join and those with less-than-sterling records opt out?

We used the six-year, Student Right-to-Know graduation rate to compare schools that have joined the VSA to those that are eligible to join (members of the APLU or AASCU) but have not yet done so.[6] Because VSA membership is in flux from month to month, these findings are limited to participants and nonparticipants as of the end of September 2009. Because graduation rates are a function of both student characteristics and institutional practices, we also account for admissions selectivity using ratings from Barron's Profiles of American Colleges for 2009. We have valid graduation-rate data for 321 of the 329 participating institutions and 165 of the eligible nonparticipants.[7]
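A minimal sketch of this kind of comparison is below, assuming a pandas DataFrame with one row per eligible institution and columns for membership status, Barron's selectivity tier, cohort graduation rates, and cohort sizes. These column names are assumptions made for illustration, not a description of our actual dataset or code.

```python
# Sketch of the member-versus-nonmember comparison. Assumed columns: vsa_member
# (bool), barrons_tier (Barron's selectivity), rate_2005..rate_2007 (six-year
# graduation rates), and cohort_1999..cohort_2001 (cohort sizes).
import pandas as pd

def weighted_grad_rate(row):
    """Three-cohort average graduation rate, weighted by cohort size (see note 6)."""
    rates = [row["rate_2005"], row["rate_2006"], row["rate_2007"]]
    sizes = [row["cohort_1999"], row["cohort_2000"], row["cohort_2001"]]
    return sum(r * n for r, n in zip(rates, sizes)) / sum(sizes)

def compare_members(df: pd.DataFrame) -> pd.DataFrame:
    df = df.assign(grad_rate=df.apply(weighted_grad_rate, axis=1))
    # Mean graduation rate for VSA members vs. eligible nonmembers, by selectivity tier.
    return df.groupby(["barrons_tier", "vsa_member"])["grad_rate"].mean().unstack()
```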

Figure 4 depicts the average graduation rates for VSA members and eligible nonmembers overall and across the selectivity categories. As the overall column suggests, the schools that have chosen to take part in the VSA appear to have graduation rates that are, on average, only slightly higher than those institutions that are eligible to join but have yet to do so.

Figure 4

Once we disaggregate the data by selectivity categories, we see two different participation patterns. For schools in the three lower tiers of selectivity (noncompetitive, less competitive, and competitive), the schools that have joined have slightly higher graduation rates, on average, than those that are eligible but have not joined. The gaps are not huge, but they do suggest that some low-performing schools are seeking to avoid the limelight. At the top two levels of admissions selectivity, the pattern is reversed: VSA members have slightly lower graduation rates than the nonmembers, but the differences are only significant at the most competitive level. The most elite public schools (such as the University of California system, the University of Michigan, and Georgia Tech) have avoided joining the VSA, and some of the lowest-performing, less-selective schools have done the same, handicapping the system's ability to act as an engine of accountability.

Policy Implications

From the perspective of colleges and universities, it makes more sense to think about these voluntary accountability systems as a firebreak--a gap in the forest that prevents a wildfire from spreading--designed to slow the push for an external framework of transparency, performance measurement, and rewards and sanctions. Though a firebreak can temporarily save a cluster of homes, it does little to resolve the deep-seated problems that led to the fire in the first place. Ensuring that those homes are safe in perpetuity requires a much more comprehensive rethinking of the way in which the forest is managed and may require sacrifices on the part of those who wish to protect themselves.

Institutional interests have driven the design and implementation of these voluntary systems. As a result, the systems are primarily designed to hold back prodding regulators, and consumer interests are likely to be a secondary concern. Institutions can parry more fundamental attempts to increase transparency by highlighting the informational benefits that accrue to consumers and by assuring policymakers that the sector is managing itself effectively.

Though these efforts leave much to be desired, the basic intuition behind both--that better-informed consumers can help to discipline providers via market pressures--is fundamentally correct. How might policymakers do a better job of designing and implementing such a system of transparency? There are two basic lessons we can learn from these early efforts.

Transparency about Costs and Outcomes Must Be Mandatory, Not Voluntary. First, if the market for higher education is to exert pressure on poor-performing institutions, consumers must have the necessary information to make informed choices and vote with their feet. These voluntary systems do seek to increase consumer information, but they do so imperfectly because some schools fail to join.

The few state systems that have actively required their members to join, like the California State University and University of North Carolina systems, provide a model for how policymakers can get around the participation problem. Research on college attendance suggests that 72 percent of college students enroll in their home state, while 86 percent enroll in their home region.[8] As such, state governments could help the vast majority of their students by compelling all in-state institutions to collect and publish informative and comparable data. Increased transparency should not be a choice for institutions that receive public funds, but a fact of life. Though social and political pressure to join voluntary systems might succeed in the long run, statutory or regulatory pressure from state legislatures to increase transparency could pay more certain and immediate dividends.

Collect Data That Clarify Institutional Distinctions, Not Blur Them. Second, in order for such information systems to help create market pressure, the information itself must help consumers make distinctions between institutions with different missions, student bodies, and levels of performance. Even if all schools volunteered to be more transparent, there is little chance that institutions that do certain things well will be rewarded, or that those that do not will lose ground, if comparisons are difficult or confusing to make.

The key problem is that in walking the fine line between revealing new information and ensuring institutional participation, both the VSA and U-CAN have gone too far in favor of the institutional-participation goal. This is where those policymakers looking to develop a system of market accountability need to learn from the popular rankings guides that many in academe detest. The omnipresent U.S. News & World Report rankings, for instance, make very fine-grained distinctions across institutions that are otherwise quite similar. Research has shown that higher rankings lead to increases in popularity. Popular magazine rankings probably go too far in making distinctions, but the lesson is clear: consumers seize on information that allows them to distinguish one college from another, and they flock to schools that appear to promise better outcomes.

This does not mean that efforts like the VSA should rank schools in any systematic fashion. That task can be left to prospective students and parents, who can weigh certain data points more heavily than others depending on their particular needs. But in order for these data to be useful, they must clarify institutional differences, not dilute them.

Conclusion

Since the advent of the mass system of higher education, American colleges and universities have engaged in a vastly imperfect system of "self-regulation" via the accreditation process. In spite of the fanfare with which they have been unveiled, however, the VSA and U-CAN still constitute a form of self-regulation, meaning that the institutions themselves have the power to define what they are willing to reveal to the public and to avoid joining altogether. Higher-education leaders have argued that these initiatives are an important step in the effort to increase accountability. They represent a step that is not nearly large enough. Much work remains to get to the destination--meaningful, transparent mechanisms with which to compare institutional performance.

Andrew P. Kelly ([email protected]) is a research fellow at AEI. Chad Aldeman ([email protected]) is a policy analyst at Education Sector. This Outlook is adapted from a March 2010 AEI-Education Sector report, False Fronts? Behind Higher Education's Voluntary Accountability Systems.

Notes

1. U.S. Department of Education, A Test of Leadership: Charting the Future of U.S. Higher Education; A Report of the Commission Appointed by Secretary of Education Margaret Spellings, 109th Cong., 2d sess. (Washington, DC, September 2006), xii, available at http://ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf (accessed March 29, 2010).

2. Ibid., x.

3. "Voluntary System of Accountability; The College Portrait: Common Questions and Answers," February 15, 2007, available at www.voluntarysystem.org/docs/faq/Q&A.pdf (accessed April 16, 2010).

4. National Survey of Student Engagement (NSSE), "NSSEville State University," August 2009, available at http://nsse.iub.edu/2009_Institutional_Report/pdf/NSSE09%20Frequency%20Distributions%20Report%20%28NSSEville%20State%29.pdf (accessed March 29, 2010).

5. Schools are not obligated to use NSSE; they can use another student-engagement survey instrument. The 242 schools represent those schools that posted NSSE responses. For two indicators, a few schools reported percentages that were higher than 100 percent; these observations were excluded from the calculations.

6. In order to mute the impact of year-to-year fluctuations, we computed a weighted average graduation rate over three cohorts of first-time, full-time freshmen (1999, 2000, and 2001; graduation rates for 2005, 2006, and 2007) for each school classified as a "primarily bachelor's degree-granting" institution by the National Center for Education Statistics. This is an important restriction on the schools included in the analysis. For instance, many members of the American Association of State Colleges and Universities are labeled as "primarily associate's degree-granting" institutions even though they may also award bachelor's degrees. A handful of these primarily associate's schools are members of the Voluntary System of Accountability. We excluded these schools because it would be unfair to compare schools whose core mission is to award associate's degrees with those whose core mission is to award bachelor's degrees on the basis of the bachelor's-degree graduation rate.

7. The small number of schools missing from the database were excluded for one of the following reasons: they were outside of the United States (University of Puerto Rico-Mayaguez; University of American Samoa), they were too young to have data for the 2001 cohort of students (University of Washington-Tacoma), or they were not classified as primarily bachelor's degree-granting institutions and above (Utah Valley State University, CUNY-Medgar Evers College). Finally, a handful of schools, like the North Carolina School of the Arts and SUNY-Empire State, were classified as "special interest" and were not coded for selectivity.

8. Krista Mattern and Jeff Wyatt, "Student Choice of College: How Far Do Students Go for an Education?" Journal of College Admission (Spring 2009): 19-29.
