
The recent decision by the American Council on Education and the Carnegie Foundation to simplify the classification of research universities may have been well-meaning, but it represents a serious misstep with far-reaching consequences.
By reducing a comprehensive system of research metrics to just two (to gain coveted R-1 status, an institution must now spend $50 million annually on research and award 70 research doctorates per year), ACE has fundamentally changed what it means to be a top-tier research institution. The shift away from a more holistic evaluation of research activity risks distorting public understanding and perception of university excellence while incentivizing behavior that undermines long-term research creativity and innovation.
To appreciate the significance of this change, it helps to trace the history of the Carnegie classification system. Initially conceived in 1973 by the Carnegie Commission on Higher Education, the system was intended as a tool to support research and policymaking by categorizing U.S. colleges and universities according to their missions and output.
Over the decades, the classification system has become a trusted compendium for the public, media and higher education community. Designations such as “R-1” (which historically stood for “Doctoral University—Very High Research Activity”) and “R-2” (“Doctoral University—High Research Activity”) gained prominence, indicating robust levels of scholarly productivity, research funding, doctoral education and infrastructure.
The methodology used for the 2021 classifications (the most recent until this year) involved a suite of indicators that aimed to quantify research excellence, with partial normalization for institutional size. These included total research expenditures in science and engineering, research expenditures in non–science and engineering fields, science and engineering research personnel size (postdoctoral appointees and other nonfaculty Ph.D. researchers), and the number of doctoral degrees awarded annually in humanities, social sciences, STEM fields and other fields like business and education.
A principal components analysis then allowed for the creation of indices representing both total and per-capita research activity, enabling close and equitable comparisons across different institutions. This methodology was, in many ways, one of the most comprehensive frameworks to date, providing a statistical assessment of American research universities founded on publicly available data.
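For readers who want a concrete sense of how such an index can be built, the sketch below applies a principal components analysis to hypothetical data for five fictional institutions using scikit-learn; the indicators, figures and weighting are illustrative assumptions, not the Carnegie Foundation's actual inputs or methodology.

```python
# Illustrative sketch only: hypothetical indicators for five fictional
# institutions, not the Carnegie Foundation's actual data or weights.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Columns: S&E research expenditures ($M), non-S&E research expenditures ($M),
# nonfaculty research staff (postdocs and Ph.D. researchers), doctorates awarded.
indicators = np.array([
    [900.0, 60.0, 1200, 450],
    [300.0, 25.0,  400, 180],
    [ 80.0, 10.0,   90,  75],
    [ 55.0,  8.0,   60,  70],
    [ 12.0,  3.0,   15,  25],
])

# Standardize each indicator so no single one dominates, then take the first
# principal component as a composite "aggregate research activity" index.
scaled = StandardScaler().fit_transform(indicators)
aggregate_index = PCA(n_components=1).fit_transform(scaled).ravel()

# PCA component signs are arbitrary; orient the index so that larger
# expenditures correspond to larger index values.
if np.corrcoef(aggregate_index, indicators[:, 0])[0, 1] < 0:
    aggregate_index = -aggregate_index

print(aggregate_index)
# A per-capita index would be built the same way after dividing each
# indicator by a measure of institutional size (e.g., research faculty count).
```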
For the 2025 classifications, however, the landscape changed. Under ACE’s leadership, the Carnegie Foundation developed a new framework that substantially simplifies the standards for achieving flagship research status. The revised criteria focus on just the two metrics mentioned above: Institutions must spend at least $50 million annually on research activities and award at least 70 research doctorates per year. Institutions qualifying on both criteria are R-1; those that fall short but spend at least $5 million on research activities and award at least 20 research doctorates are R-2. These labels now stand for very high and high “spending and doctoral production,” respectively, rather than the previously used very high and high “research activity.”
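Expressed as a decision rule, the new criteria fit in a few lines of code. The sketch below is a minimal illustration of the thresholds described above; the function name, the inclusive comparisons and the fallback label are my own assumptions.

```python
# Minimal illustration of the 2025 threshold rule described above; the
# function name and the fallback label are hypothetical.
def classify_2025(annual_research_spending: float, research_doctorates: int) -> str:
    """annual_research_spending in dollars; research_doctorates awarded per year."""
    if annual_research_spending >= 50_000_000 and research_doctorates >= 70:
        return "R-1"  # "very high spending and doctoral production"
    if annual_research_spending >= 5_000_000 and research_doctorates >= 20:
        return "R-2"  # "high spending and doctoral production"
    return "neither R-1 nor R-2"

print(classify_2025(50_000_000, 70))   # R-1: meets both thresholds exactly
print(classify_2025(49_000_000, 200))  # R-2: high output, but below the $50M bar
```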
This change may appear technical, but it removes numerous subtle measures of academic involvement and output and represents a profound shift in values. Under the previous activity-based framework, institutions were rewarded for building a diverse research ecosystem across a range of disciplines. Now, the metric has been reduced to total money spent and degrees awarded—inputs and outputs that do not necessarily equate to research excellence.
Moreover, this move opens the door for institutions to “teach to the test.” Rather than pursuing organic growth in their research missions, universities may instead make tactical investments to reach the magic numbers needed for R-1 status. This situation is a textbook case of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.”
By selecting just two metrics to assess national standing, the classification system invites institutions to game the criteria, boosting research spending and degree output not necessarily through improved research performance but through administrative and accounting shifts. This oversimplification of a complex and holistic evaluation tool can have unintended consequences, such as distorting institutional priorities and stifling the motivation to invest in long-term, mission-driven scholarship.
Unfortunately, evidence of this phenomenon is already visible. A cursory internet search reveals multiple universities that have recently announced their elevation to R-1 status: More than 40 institutions gained the designation for the first time under the revised criteria. While many have made commendable progress, it’s worth noting that their elevation to “elite” research status occurred not as a result of a significant shift in scholarly output, but because they met the two quantitative benchmarks.
The concern is not that these institutions shouldn’t be proud of their growth—it’s that the public will now assume parity between these universities and others whose research footprints are significantly deeper, broader and more globally impactful. ACE has effectively redefined what it means to be an “R-1” institution without clearly communicating that this designation no longer reflects the same type of achievement it once did.
To prevent confusion and preserve the integrity of the classification system, ACE and the Carnegie Foundation should consider rebranding the new categories to reflect their true nature. Rather than continuing to use the historically meaningful “R-1” and “R-2” terms, a more accurate labeling system might be RS-1 and RS-2, signifying “research spending.” This small change would clarify for stakeholders that these categories are now based largely on spending thresholds, not a holistic measure of research activity.
Although simplification may make the classifications more politically appealing and easier to administer, it does so at the cost of such vital ingredients as analytical comprehensiveness, contextual responsiveness and evaluative accuracy. To appropriately recognize and support genuine centers of research excellence, it is imperative to adopt a multidimensional evaluative framework: one that ideally encompasses not only research expenditures and doctoral-degree productivity but also measures of scholarly impact, the quality of research publications, the development of research infrastructure and the extent of faculty engagement in research.
In addition, to balance the structural advantages of larger institutions, appropriate normalization measures, such as costs per faculty member, publications per capita and doctoral degrees per research-active department, should be applied. The 2021 classification model better reflected such a comprehensive and equitable approach, in contrast to the more reductive orientation of the 2025 iteration.
In order to preserve the integrity of American research universities as engines of discovery and innovation, their evaluation should be grounded in objective scholarly metrics that meaningfully reflect institutional excellence in research. Given the multifaceted nature of research excellence, our classification systems should be equally nuanced and comprehensive.