Competency-based education (CBE) programs are growing in popularity as an alternative path to a postsecondary degree. Freed from the seat-time constraints of traditional higher education programs, CBE students can progress at their own pace and complete their postsecondary education having gained relevant and demonstrable skills. The CBE model has proven particularly attractive for nontraditional students juggling work and family commitments that make conventional higher education class schedules unrealistic. But the long-term viability of CBE programs hinges on the credibility of these programs’ credentials in the eyes of employers. That credibility, in turn, depends on the quality of the assessments CBE programs use to decide who earns a credential.
In this paper we introduce a set of best practices for high-stakes assessment in CBE, drawing from both the educational-measurement literature and current practices in prior-learning and CBE assessment. Broadly speaking, there are two areas in assessment design and implementation that require significant and sustained attention from test developers and program administrators: (1) validating the assessment instrument itself and (2) setting meaningful competency thresholds based on multiple sources of evidence. Both areas are critical for supporting the legitimacy and value of CBE credentials in the marketplace.
This paper therefore details how providers can work to validate their assessments and establish performance levels that map to real-world mastery, paying particular attention to the kinds of research and development common in other areas of assessment. We also provide illustrative examples of these concepts from prior-learning assessments (for example, Advanced Placement exams) and existing CBE programs. Our goal is to provide a resource to institutions currently developing CBE offerings and to other stakeholders—regulators and employers, for instance—who will encounter an increasing number of CBE programs.
Based on our review of the current landscape, we argue that CBE programs have dedicated most of their attention to defining discrete competencies and embedding those competencies in a broader framework associated with degree programs. Many programs clearly document not only the competencies but also the types of assessments they use to measure student proficiency. This is a good start.
We argue that, moving forward, CBE programs should focus on providing evidence that supports the validity of their assessments and their interpretation of assessment results. Specifically, program designers should work to clarify the links between the tasks students complete on an assessment and the competencies those tasks are designed to measure. Moreover, external-validity studies—relating performance on CBE assessments with performance in future courses or in the workplace—are crucial if CBE programs want employers to view their assessments and their competency thresholds as credible evidence of students’ career readiness.
External validity is the central component of our recommendations.
1150 17th Street, N.W. Washington, D.C. 20036
© 2016 American Enterprise Institute for Public Policy Research