
What’s in a College Credit?

Credits Should Measure More Than Just Learning Time

It’s time to improve credit hours so that they measure what students have learned and not just time spent learning, writes Julie Margetta Morgan.

A lecture hall at Waynesburg University. It’s time to improve credit hours so that they measure what students have learned and not just time spent learning. (AP/Gene J. Puskar)

A New York Times article recently quoted an immigrant from Ecuador who was amazed at how little she knew of the community college system: “I didn’t know what a credit was.” Turns out, she’s not the only one. It may be surprising to learn that although most college graduates earn around 120 credits to get a bachelor’s degree, there is no way to know whether a credit earned at one college signals the same amount of learning as a credit earned at another.

The Department of Education is only now working to establish a common understanding of the term “credit hour,” after years of defining eligible postsecondary programs by the number of credits they offer. Its proposed definition misses the mark by equating credit with time spent learning rather than with learning outcomes.

The proposed regulation defines a credit in three ways:

  • One hour of classroom or direct faculty instruction and a minimum of two hours of out-of-class student work each week, for 15 weeks in a semester or trimester program
  • An equivalent amount of work to one hour of classroom time and two hours of out-of-class work for other academic activities such as laboratory work, internships, practica, or studio work
  • Reasonable equivalencies to the amount of work required in one hour of classroom time and two hours of out-of-class work, represented in intended learning outcomes and verified by evidence of student achievement

Option one defines the credit, the building block of our college degrees, as the amount of time that a student spends sitting in a classroom and working at home. Options two and three are simply equivalencies based upon the seat-time standard set up in the first option.
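
To see what the seat-time standard adds up to, consider the arithmetic it implies. The short sketch below is purely illustrative (the function and constant names are ours, not the regulation’s); it simply applies option one’s formula of one classroom hour plus two out-of-class hours per credit per week over a 15-week term:

```python
# Illustrative arithmetic for the proposed seat-time definition of a credit:
# 1 credit = 1 classroom hour + 2 out-of-class hours per week, for 15 weeks.

WEEKS_PER_TERM = 15
CLASS_HOURS_PER_CREDIT_PER_WEEK = 1
HOMEWORK_HOURS_PER_CREDIT_PER_WEEK = 2

def seat_time_hours(credits: int) -> tuple[int, int]:
    """Return total (classroom hours, out-of-class hours) implied by a credit count."""
    classroom = credits * CLASS_HOURS_PER_CREDIT_PER_WEEK * WEEKS_PER_TERM
    out_of_class = credits * HOMEWORK_HOURS_PER_CREDIT_PER_WEEK * WEEKS_PER_TERM
    return classroom, out_of_class

print(seat_time_hours(3))    # a typical 3-credit course: (45, 90) hours
print(seat_time_hours(120))  # a 120-credit degree: (1800, 3600) hours
```

Notice that nothing in this calculation says anything about what a student actually learned; it counts hours, full stop.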

The Department of Education has an opportunity to create a meaningful unit of learning that can be compared across institutions, but it is so far leaning toward a “butt-in-chair” standard. This standard makes it impossible to know whether a student with an engineering degree from Dartmouth College has learned the same amount as a student with an engineering degree from the University of Massachusetts at Dartmouth. All we will know for certain is that both logged the required seat time: under the proposed definition, roughly 1,800 hours in a classroom for a 120-credit degree and, one hopes, 3,600 hours of work on their own time.

The credit is an extremely important component of our postsecondary system. It’s at the core of how colleges compensate faculty, how the government disburses financial aid, how students transfer learning from one institution to another, and how colleges determine when a student has learned enough to earn a college credential. How can something so important be based on such an imprecise measure of student learning?

The reason is that federal regulators approached the credit issue by trying to solve one particular problem: ensuring that colleges are not inflating credits to garner more student financial aid. The definition is an adaptation of the way that traditional colleges define the credit because, in this narrow view, traditional colleges are not causing the problem. The culprits are at the fringes of higher education, those “bad actors” such as American InterContinental University, which the Department of Education’s Inspector General singled out for its inflationary credit policy.

Focusing on the actors at the margin of higher education once again shortchanges the bulk of students. It is surely important to stop those colleges that are manipulating the credit hour. But it is just as important to address the crisis of college completion, which has left the United States ranked 12th in the world for degree attainment among 25- to 34-year-olds.

A better definition of the credit hour is a necessary component of any plan to improve college completion rates. Researchers identify the transfer of college credit across institutions as a key to increasing completions: almost 60 percent of students transfer at some point while pursuing a credential, and many lose credits along the way because of flawed credit transfer agreements among institutions. One of the central stumbling blocks in the credit transfer conversation is that administrators cannot compare course credits across institutions because all a credit tells them is the number of hours spent learning.

An improved standard for assigning credit should be based upon outcomes, not inputs. An outcomes-based definition would improve comparability across institutions, and it would bring us closer to the kind of transparency students need to judge the quality of educational options.

Redesigning the credit as a measure of learning outcomes would also yield a standard that can be applied readily to all delivery models, including emerging forms of online education that simply do not look like the traditional “sage on the stage” model. The quality of these new delivery systems is a legitimate concern in the race to find innovative ways to get more students into and out of college efficiently and cost-effectively. The trick is to find a definition that safeguards quality while still encouraging innovation.

Julie Margetta Morgan is a Policy Analyst at the Center for American Progress.
