Measuring State Progress on Learning

October 13, 2005

When the National Center for Public Policy and Higher Education started issuing report cards for the states on college issues, in 2000, it gave every state an "incomplete" in the category of learning.

In other categories, such as participation (do state residents enroll?) or completion (do they graduate?), states had data that could be compared and analyzed. But the center couldn't find reliable ways to compare learning from state to state. Today, the center is releasing on its Web site a system for measuring student learning, with the idea that it could be used in future national report cards. (Those reports are done every other year.)

The analysis -- like those for the other categories in the report cards -- is built from several subdivisions, each of which in turn has scores based on a variety of factors. Here is how the center said a measure of learning could be produced:

  • Literacy levels of the state population would count for 25 percent of the score. These could be measured based on various tests of prose literacy, document literacy and quantitative literacy.
  • College graduates' preparedness for "advanced practice" would count for another 25 percent. Here, measures would be passage rates on state licensure examinations for various fields, and especially for teaching. In addition, scores on competitive graduate admissions tests such as the GRE and MCAT would be examined.
  • The performance of college graduates would count for the remaining 50 percent. This might be measured by standardized tests for two- and four-year college graduates that would indicate whether graduates have the skills and knowledge needed to succeed. For example, the ACT's WorkKeys assessment could be used for community college graduates. The test measures reading comprehension, applied math, the ability to locate information, and basic business writing skills.
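The weighting scheme above amounts to a simple weighted average of three subscores. The sketch below illustrates the arithmetic; the category names, function, and example values are hypothetical assumptions for illustration, and each subscore is assumed to be on a 0-100 scale (the center does not specify one).

```python
# Hypothetical composite learning score using the weights described above.
# Category names and example values are illustrative, not center data.
WEIGHTS = {
    "literacy": 0.25,              # prose, document, quantitative literacy
    "preparedness": 0.25,          # licensure pass rates, GRE/MCAT scores
    "graduate_performance": 0.50,  # e.g., WorkKeys for two-year graduates
}

def composite_score(subscores: dict) -> float:
    """Weighted average of subscores (each assumed on a 0-100 scale)."""
    if set(subscores) != set(WEIGHTS):
        raise ValueError("subscores must match the weighted categories")
    return sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS)

# Example: a state scoring 80, 70, and 60 on the three measures.
print(composite_score({"literacy": 80, "preparedness": 70,
                       "graduate_performance": 60}))  # -> 67.5
```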

On all of the above criteria, states could also be judged on whether members of all ethnic and racial groups perform comparably, or whether there are gaps.

The criteria were developed by a team led by Margaret Miller, a professor of education at the University of Virginia, and Peter Ewell, vice president of the National Center for Higher Education Management Systems.
