AALHE 5th Annual Assessment Conference

Mon, 06/01/2015 to Wed, 06/03/2015


Lexington, Kentucky
United States

General Education and Assessment

Thu, 02/19/2015 to Sat, 02/21/2015


200 West 12th Street
Kansas City, Missouri 64105
United States

Let's differentiate between 'competency' and 'mastery' in higher ed (essay)

"Competency-based” education appears to be this year’s answer to America’s higher education challenges, judging from this week's news in Washington. Unlike MOOCs (last year’s solution), there is, refreshingly, greater emphasis on the validation of learning. Yet, all may not be as represented.

On close examination, one might ask whether competency-based education (or CBE) programs are really about “competency,” or whether they are concerned with something else. Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using examinations, projects and other forms of assessment.

However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone to do so with the skill and proficiency associated with competence.

Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence.

In the course of their own education, readers can undoubtedly recall the instructors who had complete command of their subjects, but who could not effectively present to their students. The mastery of content did not extend to their being competent as teachers. Other examples might include the much-in-demand marketing professors who did not know how, in practice, to sell their executive education programs. Just as leadership and management differ one from the other, so too do mastery and competence.

My institution has been involved in assessing both mastery and competence for several decades. Created by New York’s Board of Regents in the early 1970s, it is heir to the Regents’ century-old belief in the importance of measuring educational attainment (New York secondary students have been taking Regents Exams, as a requirement for high school graduation, since 1878).

Building on its legacy, the college now offers more than 60 subject matter exams. These have been developed with the help of nationally known subject matter experts and a staff of doctorally prepared psychometricians. New exams are field tested, nationally normed and reviewed for credit by the American Council on Education, which also reviews the assessments of ETS (DSST) and the College Board (CLEP). Such exams are routinely used for assessing subject matter mastery.

In the case of the institution’s competency-based associate degree in nursing, a comprehensive, hands-on assessment of clinical competence is required as a condition of graduation. This evaluation, created with the help of the W.K. Kellogg Foundation in 1975, takes place over three days in an actual hospital, with real patients, from across the life span -- pediatric to geriatric. Performance is closely monitored by multiple, carefully selected and trained nurse educators. Students must demonstrate skill and ability to a level of defined competence within three attempts or face dismissal or transfer from the program.

In developing a competency-based program as opposed to a mastery-based one, there are many challenges that must be addressed if the program is to have credibility. These include:

  • Who specifies the elements to be addressed in a competency determination? In the case of nursing, this is done by the profession. Other fields may not be so fortunate. For instance, who would determine the key areas of competency in the humanities or arts?
  • Who does the assessing, and what criteria must be met to be seen as a qualified assessor of someone’s competency?
  • How will competence be assessed, and is the process scalable? In the nursing example above, we have had to establish a national network of hospitals, as well as recruit, train and field a corps of graduate-prepared nurse educators. At scale, this infrastructure is limited to approximately 2,000 competency assessments per year, far fewer than the number taking the College’s computer-based mastery examinations.
  • Who is to be served by the growing number of CBE programs? Are they returning adults who have been in the workplace long enough to acquire relevant skills and knowledge on the job, or is CBE thought to be relevant even for traditional-aged students?

(It is difficult to imagine many 22-year-olds as competent within a field or profession. Yet, there is little question that most could show some level of mastery of a body of knowledge for which they have prepared.)

  • Do prospective students want this type of learning/validation? Has there been market research that supports the belief that there is demand? We have offered two mastery-based bachelor’s degrees (each for less than $10,000) since 2011. Demand has been modest because of uncertainty about how a degree earned in such a manner might be viewed by employers and graduate schools (this despite the fact that British educators have offered such a model for centuries).
  • Will employers and graduate schools embrace those with credentials earned in a CBE program? Institutions that have varied from the norm (dropping the use of grades, assessing skills vs. time in class) have seen their graduates face admissions challenges when attempting to build on their undergraduate credentials by applying to graduate schools. As for employers, a backlash may be expected if academic institutions sell their graduates as “competent” and later performance makes clear that they are not.

The interest in CBE has, in large part, been driven by the fact that employers no longer see new college graduates as job-ready. In fact, a recent Lumina Foundation report found that only 11 percent of employers believe that recent graduates have the skills needed to succeed within their work forces. One CBE educator has noted, "We are stopping one step short of delivering qualified job applicants if we send them off having 'mastered' content, but not demonstrating competencies." 

Or, as another put it, somewhat more succinctly, “I don't give a damn what they KNOW. I want to know what they can DO.”

The move away from basing academic credit on seat time is to be applauded. Determining levels of mastery through various forms of assessment -- exams, papers, projects, demonstrations, etc. -- is certainly a valid way to measure outcomes. However, seat time has rarely been the sole basis for a grade or credit. The measurement tools listed here have been found in the classroom for decades, if not centuries.

Is this a case of old wine in new bottles? Perhaps not. What we now see are programs being approved for Title IV financial aid on the basis of validated learning, not for a specified number of instructional hours; whether the process results in a determination of competence or mastery is secondary, but not unimportant.

A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. The word “competency” is hard to justify when a CBE program includes no assessment of how the learning it certifies can be applied. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.

Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.

To continue to use “competency” when we mean “mastery” may seem like a small thing. Yet, if we of the academy cannot be more precise in our use of language, we stand to further the distrust that many already have of us. To say that we mean “A” when in fact we mean “B” is to call into question whether we actually know what we are doing.

John F. Ebersole is the president of Excelsior College, in Albany, N.Y.

Assessment Conference

Mon, 03/09/2015 to Wed, 03/11/2015


Austin, Texas
United States

Colleges should focus less on student failure and more on success (essay)

In their effort to improve outcomes, colleges and universities are becoming more sophisticated in how they analyze student data – a promising development. But too often they focus their analytics muscle on predicting which students will fail, and then allocate all of their support resources to those students.

That’s a mistake. Colleges should instead broaden their approach to determine which support services will work best with particular groups of students. In other words, they should go beyond predicting failure to predicting which actions are most likely to lead to success. 

Higher education institutions are awash in the resources needed for sophisticated analysis of student success issues. They have talented research professionals, mountains of data and robust methodologies and tools. Unfortunately, most resource-constrained institutional research (IR) departments are focused on supporting accreditation and external reporting requirements.

Some institutions have started turning their analytics resources inward to address operational and student performance issues, but the question remains: Are they asking the right questions?

Colleges spend hundreds of millions of dollars on services designed to enhance student success. When making allocation decisions, the typical approach is to identify the 20 to 30 percent of students who are most “at risk” of dropping out and throw as many support resources at them as possible. This approach involves a number of troubling assumptions:

  1. The most “at risk” students are the most likely to be affected by a particular form of support.
  2. Every form of support has a positive impact on every “at risk” student.
  3. Students outside this group do not require or deserve support.

What we have found over 14 years working with students and institutions across the country is that:

  1. There are students whose success you can positively affect at every point along the risk distribution.
  2. Different forms of support impact different students in different ways.
  3. The ideal allocation of support resources varies by institution (or more to the point, by the students and situations within the institution).

Another problem with a risk-focused approach is that when students are labeled “at risk” and support resources are directed to them on that basis, asking for or accepting help becomes seen as a sign of weakness. When tailored support is provided to all students, even the most disadvantaged are better off. The difference is a mindset of “success creation” versus “failure prevention.” Colleges must provide support without stigma.

To better understand impact analysis, consider Eric Siegel’s book Predictive Analytics. In it, he talks about the Obama 2012 campaign’s use of microtargeting to cost-effectively identify groups of swing voters who could be moved to vote for Obama by a specific outreach technique (or intervention), such as a piece of direct mail or a knock on their door -- the “persuadable” voters. The approach involved assessing what proportion of people in a particular group (e.g., high-income suburban moms with certain behavioral characteristics) would:

  • vote for Obama if they received the intervention (positive impact subgroup)
  • vote for Obama or Romney irrespective of the intervention (no impact subgroup)
  • vote for Romney if they received the intervention (negative impact subgroup)

The campaign then leveraged this analysis to focus that particular intervention on the first subgroup.

This same technique can be applied in higher education by identifying which students are most likely to respond favorably to a particular form of support, which will be unmoved by it and which will be negatively impacted and drop out.
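To make the contrast with risk modeling concrete, here is a minimal sketch of the idea in Python. Everything in it is hypothetical: the records, the column names (received_coaching, persisted) and the subgroups are invented for illustration, and it does not describe any institution’s actual model. It simply shows how comparing outcomes between students who did and did not receive a given form of support, within each subgroup, yields an estimate of impact rather than of risk.

```python
# Minimal uplift-style sketch: estimate, for each student subgroup, how much a
# single intervention (here, hypothetical proactive coaching) appears to move
# persistence. All records, column names and groupings are invented.
import pandas as pd

# Hypothetical student-level records from a controlled study:
# one row per student, a treatment flag and the observed outcome.
students = pd.DataFrame({
    "subgroup":          ["first_gen"] * 4 + ["transfer"] * 4,
    "received_coaching": [1, 1, 0, 0, 1, 1, 0, 0],
    "persisted":         [1, 1, 1, 0, 1, 0, 1, 1],
})

def estimate_impact(df: pd.DataFrame) -> pd.DataFrame:
    """For each subgroup, compare persistence rates of treated vs. untreated students.

    The difference ("uplift") is the estimated impact of the intervention:
    positive  -> the subgroup appears to be helped (target these students),
    near zero -> no measurable effect (resources may be better spent elsewhere),
    negative  -> the intervention may backfire for this subgroup.
    """
    rates = (df.groupby(["subgroup", "received_coaching"])["persisted"]
               .mean()
               .unstack("received_coaching"))
    rates.columns = ["control_rate", "treated_rate"]   # 0 -> control, 1 -> treated
    rates["uplift"] = rates["treated_rate"] - rates["control_rate"]
    return rates.sort_values("uplift", ascending=False)

print(estimate_impact(students))
```

A real analysis would use far larger samples and would control for confounders (or randomize who receives the support), but the question it answers is the one that matters: whose outcomes does this particular form of support actually move?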

Of course, impact modeling is much more difficult than risk modeling. Nonetheless, if our goal is to get more students to graduate, it’s where we need to focus analytics efforts.

The biggest challenge with this analysis is that it requires large, controlled studies involving multiple forms of intervention. The need for large controlled studies is one of the key reasons why institutional researchers focus on risk modeling. It is easy to track which students completed their programs and which did not. So, as long as the characteristics of incoming students aren’t changing much, risk modeling is rather simple. 

However, once you’ve assessed a student’s risk, you’re still left trying to answer the question, “Now what do I do about it?” This is why impact modeling is so essential. It gives researchers and institutions guidance on allocating the resources that are appropriate for each student.

There is tremendous analytical capacity in higher education, but we are currently directing it toward the wrong goal. While it’s wonderful to know which students are most likely to struggle in college, it is more important to know what we can do to help more students succeed.

Dave Jarrat is a member of the leadership team at InsideTrack, where he directs marketing, research and industry relations activities.

Study: Text messages about renewing aid boost 2-year college persistence

Text messages encouraging first-year community college students to fill out federal student aid form boost persistence to sophomore year, study finds.

Wake Forest U. tries to measure well-being

Wake Forest U. looks to measure the lives of its students and alumni.

Basketball box scores include numerous stats -- so should a federal ratings system (essay)

Dear Secretary of Education Arne Duncan:

Congratulations on your MVP award at the NBA Celebrity All-Star game: 20 points, 8 boards, 3 assists and a steal -- you really filled up that stat sheet. Even the NBA guys were amazed at your ability to play at such a high level -- still. Those hours on the White House court are paying off!

Like you, I spent some time playing overseas after college and have long been a consumer of basketball box scores -- they tell you so much about a game. I especially like the fact that the typical box score counts assists, rebounds and steals — not just points. I have spent many hours happily devouring box scores, mostly in an effort to defend my favorite players (who were rarely the top scorers).

As a coach of young players, my wife Michele and I (she is the real player in the family) expanded the typical box score — we counted everything in the regular box score, then added “good passes,” “defensive stops,” “loose ball dives” and anything else we could figure out a way to measure. This was all part of an effort to describe for our young charges the “right way” to play the game. I think you will agree that “points scored” rarely tells the full story of a player’s worth to the team.

Mr. Secretary, I think the basketball metaphor is instructive when we “measure” higher education, which is a task that has taken up a lot of your time lately. If you look at all the higher education “success” measures as a basketball box score instead of a golf-type scorecard, it helps clarify two central flaws.

First, exclusivity. Almost every single higher education scorecard fails to account for the efforts of more than half of the students actually engaged in “higher” education.

At Mount Aloysius College, we love our Division III brand of Mountie basketball, but we don’t have any illusions about what would happen if we went up against those five freshman phenoms from Division I Kentucky (or UConn/Notre Dame on the women’s side) -- especially if someone decided that half our points wouldn’t even get counted in the box score.

You see, the databases for all the current higher education scorecards focus exclusively on what the evaluators call “first-time four-year bachelor’s-degree-seeking students.” Nothing wrong with these FTFYBDs, Mr. Secretary, except that they represent less than half of all students in college, yet are the only students the scorecards actually “count.”

None of the following “players” show up in the box score when graduation rates are tabulated:

  • Players who are non-starters (that is, they aren’t FTFYBDs) — even if they play every minute of the last three quarters, score the most points and graduate on time. These are students who transfer (usually to save money, sometimes to take care of family), spring enrollees (increasingly popular), part-time students and mature students (who usually work full-time while going to school).
  • Any player on the team, even a starter, who has transferred in from another school. If you didn’t start at the school from which you graduated, then you don’t “count,” even if you graduate first in your class!
  • Any player, even if she is the best player on the team, who switches positions during the game: Think two-year degree students who switch to a four-year program, or four-year degree students who instead complete a two-year degree (usually because they have to start working).
  • Any player who is going to play for only two years. This includes every student at a community college, as well as graduates who earn a registered-nurse degree in two years and go right to work at a hospital (even if they later complete a four-year bachelor’s degree, they still don’t count).
  • Any scoring by any player that occurs in overtime: Think mature and second-career students who never intended to graduate on the typical schedule because they are working full time and raising a family.

The message sent by today’s flawed college scorecards is unavoidable: These hard-working students don’t count.

Mr. Secretary, I know that you understand how essential two-year degrees are to our economy; that students who need to transfer for family, health or economic reasons are just as valuable as FTFYBDs, and that nontraditional students are now the rule, not the exception. But current evaluation methods are almost universally out of date relative to readily available data and out of sync with the real lives of many students who simply don’t have the economic luxury of a fully financed four-year college degree. All five types of students listed above just don’t show up anywhere in the box score.

“Scorecards” should look more like box scores and include total graduation rates for both two- and four-year graduates (the current IPEDS overall grad rate), all transfer-in students (it looks like IPEDS may begin to track these), as well as transfer-out students who complete degrees (current National Student Clearinghouse numbers). These changes would provide a more accurate result for the student success rate at all institutions.
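As a purely arithmetic illustration of how much the box-score view can change the picture, the sketch below uses invented counts (not Mount Aloysius or IPEDS figures) to compare the traditional FTFYBD-only rate with a rate that counts every entrant and every completer, including those who finish elsewhere.

```python
# Invented counts illustrating the difference between the traditional
# FTFYBD-only graduation rate and a "box score" completion rate.
cohort = {
    "ftfybd_entrants": 400,         # first-time four-year bachelor's-degree-seeking students
    "ftfybd_grads": 220,            # of those, graduates counted by the current scorecard
    "other_entrants": 450,          # transfers in, part-time, spring, two-year and adult students
    "other_completers": 260,        # of those, completers of two- or four-year credentials
    "transfer_out_completers": 60,  # students who left but completed a degree elsewhere
}

# Traditional scorecard: only the first group appears in either the
# numerator or the denominator.
traditional_rate = cohort["ftfybd_grads"] / cohort["ftfybd_entrants"]

# Box-score view: every entrant is in the denominator; every completer,
# including those who finished elsewhere, is in the numerator.
all_entrants = cohort["ftfybd_entrants"] + cohort["other_entrants"]
all_completers = (cohort["ftfybd_grads"]
                  + cohort["other_completers"]
                  + cohort["transfer_out_completers"])
box_score_rate = all_completers / all_entrants

print(f"Traditional FTFYBD rate:   {traditional_rate:.0%}")  # 55%
print(f"Box-score completion rate: {box_score_rate:.0%}")    # 64%
```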

Another relatively easy fix would be to break out cohort comparisons that would allow Scorecard users to see how institutions perform when compared to others with a similar profile (as in the Carnegie Classifications).

The second issue is fairness.

Current measurement systems make no effort to account for the difference between (in basketball terms) Division I and Division III, between “highly selective schools” that “select” from the top echelons of college “recruits” and those schools that work best with students who are the first in their families to go to college, or low-income, or simply less prepared (“You can’t coach height,” we used to say).

As much as you might love the way Wisconsin-Whitewater won this year’s Division III national championship (last-second shot), I don’t think even the most fervent Warhawks fan has any doubt about how they would fare against Coach Bo Ryan’s Division-I Wisconsin Badgers. The Badgers are just taller, faster, stronger — and that’s why they’re in Division I and why they made it to the Final Four.

The bottom line on fairness is that graduation rates track closely with family income, parental education, Pell Grant eligibility and other obvious socioeconomic indicators. These data are consistent over time and truly incontrovertible.

Mr. Secretary, I know that you understand in a personal way how essential it is that any measuring system be fair. And I know you already are working on this problem, on a “degree of difficulty” measure, much like the hospital “acuity index” in use in the health care industry.

The classification system that your team is working on right now could assign a coefficient that weighs these measurable mitigating factors when posting outcomes. Scoring schools more fairly in this way would also help identify those institutions that are doing the best job of serving the students with the fewest advantages.

In the health care industry, patients are assigned “acuity levels” (based on a risk-adjustment methodology), numbers that reflect a patient’s condition upon admission to a facility. The intent of this classification system is to consider all mitigating factors when measuring outcomes and thus to provide consumers accurate information when comparing providers. A similar model could be adopted for measuring higher education outcomes.

This would allow consideration of factors like (1) Pell eligibility rates, (2) income relative to poverty rates, (3) percentage of students who are first-generation-to-college, (4) SAT scores, etc. A coefficient that factors in these “challenges” would yield a fairer measure of higher education outcomes. Such “degree of difficulty” factors, like “acuity levels,” would provide consumers accurate information for purposes of comparison.
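One simple form such a coefficient could take is an expected graduation rate predicted from the mitigating factors, with each institution judged by how far its actual rate exceeds or falls short of that expectation. The sketch below is only an illustration: the weights and the two institutional profiles are invented, and a real rating system would estimate such weights from national data rather than assume them.

```python
# Sketch of an acuity-style "degree of difficulty" adjustment. The weights and
# institutional profiles below are invented for illustration; a real rating
# system would estimate them from national data (e.g., via regression).

# Assumed effect of each factor on the expected graduation rate,
# expressed in percentage points per unit of the factor.
WEIGHTS = {
    "baseline": 75.0,        # expected rate for a hypothetical low-challenge profile
    "pct_pell": -0.30,       # per percentage point of Pell-eligible students
    "pct_first_gen": -0.20,  # per percentage point of first-generation students
    "sat_below_1000": -0.25, # per percentage point of entrants with SAT below 1000
}

def expected_rate(profile: dict) -> float:
    """Expected graduation rate given an institution's student profile."""
    return (WEIGHTS["baseline"]
            + WEIGHTS["pct_pell"] * profile["pct_pell"]
            + WEIGHTS["pct_first_gen"] * profile["pct_first_gen"]
            + WEIGHTS["sat_below_1000"] * profile["sat_below_1000"])

def adjusted_score(profile: dict, actual_rate: float) -> float:
    """Positive scores mean the institution outperforms its expected rate."""
    return actual_rate - expected_rate(profile)

# Two invented institutions: a selective one and an access-oriented one.
selective = {"pct_pell": 15, "pct_first_gen": 10, "sat_below_1000": 5}
access    = {"pct_pell": 55, "pct_first_gen": 45, "sat_below_1000": 40}

print(adjusted_score(selective, actual_rate=82.0))  # 82 - 67.25 = +14.75
print(adjusted_score(access, actual_rate=55.0))     # 55 - 39.50 = +15.50
```

Under this kind of adjustment, the access-oriented institution’s 55 percent graduation rate scores at least as well as the selective institution’s 82 percent, which is precisely what a degree-of-difficulty measure is meant to reveal.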

Absent such a calculation, colleges will continue to have every incentive to “cream” their admissions, and every disincentive against serving the students you have said are central to our economic future, including two-year, low-income and minority students. That’s the “court” that schools like Mount Aloysius and 16 other Mercy colleges play on. We love our FTFYBDs, but we work just as hard on behalf of the more than 50 percent of our students whose circumstances require a less traditional but no less worthy route to graduation. We think they count, too.

Thanks for listening.

Thomas P. Foley

Mount Aloysius College

Thomas P. Foley is president of Mount Aloysius College.

Image caption: Arne Duncan (center) receiving the MVP trophy at the NBA celebrity all-star game.

Residential colleges should use 'competency' for their own purposes (essay)

Recently The Atlantic predicted that one of the top five trends impacting higher education will be a push toward credit given for experience, proficiency and documented “competency.” The recent results of Inside Higher Ed’s survey of chief academic officers also show openness to competency-based outcomes.

For many, myself included, this simply sounds like a series of placement tests and seems like a pretty shallow approach to a college education and degree. However, as vice president of enrollment and chief marketing officer for a residential college, I can’t ignore the appeal of the “validation” of learning this trend suggests.

In fact, I find myself thinking more and more about how residential colleges, with their distinct missions, might respond to the potential threat this trend represents. I find myself hoping we can prove the residential environment results in valuable learning and life experiences beyond getting along with a roommate, asking someone on a date, learning how to tap a keg and configuring a renegade wireless network.

We can do more. Perhaps the idea of competency-based education should inspire us to think differently about how the learning environment of the residential experience is superior. Perhaps there are competencies associated with a residential college we’ve not done an adequate job of documenting?

This will not be easy for most of us. Our natural instinct to “wait and see how good our students turn out” to justify why students should live and learn on campus won’t work this time, as we face a skeptical public and witness more and more college presidents, administrators and boards reconsidering the value of online education. With some intentionality, we can do a much better job of proving why learning in a residential setting is superior.

We need to ask ourselves: Why is the residential campus experience of utmost importance to a contemporary undergraduate education? We must identify the sorts of learning that can only occur in such a setting, and validate, or better identify, the learning competencies that occur outside the classroom on a residential campus.

This will be difficult in an environment defined by shrinking resources, when many resort to thinking about eliminating activities considered not central to the core mission. The instinct is to cut, de-emphasize or keep separate and second. We see this time and time again in any setting that faces difficult choices about resources. But investment, integration and intentionality create a better path forward.

Can liberal arts colleges resist the urge to cut, and rethink how activities in the residential environment are central to the core mission? Can these colleges develop meaningful ways of measuring the value and impact of such activities and how they result in competencies that add value and worth? Can residential liberal arts colleges develop a “currency” that demonstrates they value out-of-classroom learning comparably to in-classroom learning? I hope so.

While many colleges would benefit from integrating out-of-classroom learning, residential liberal arts colleges must do so because of the infrastructure around which our colleges have been built -- residence and dining halls, student activity centers, athletic venues and performance halls. We need to prove these are not just modern amenities, but central to superior learning.

To validate this learning experience, residential liberal arts colleges will need to rethink historic barriers. Learning that occurs outside the classroom can no longer be viewed as “separate and second.”

Extra-curricular and co-curricular transcripts that fully document competencies and outcomes essential to success beyond college must evolve to be fully integrated with the academic program, and valued both internally and externally.

First, residential liberal arts colleges must clearly define the learning outcomes and expectations. This is frequently a faculty-driven exercise. Understanding the knowledge gained from an activity provides a framework around which out-of-classroom learning can be developed. This framework will allow for alignment of purpose and some measure of control about how central an out-of-classroom activity is to the core mission and which competencies are satisfied as a result.

Georgetown University was recently recognized for its excellent programming in the area of preparing student-athletes for leadership. Recognition of activities that successfully align with and even expand learning is critical for the public to be convinced that such activities are core to a high-quality education.

Next, residential liberal arts colleges must create a “currency” that meaningfully recognizes those activities that advance a student’s education, e.g., elective academic credit, a credit-bearing on-campus internship, or certificate for activities that demonstrate substantive interest and professional and personal development.

Student activities might be reorganized into mission-focused areas that provide students with experience not always fully represented in the academic program, but with relevance to a successful application for employment or graduate school.

Some examples might be: Leadership, Teamwork, Civic Engagement, Social Justice, Service Learning, Entrepreneurship and Business Development, Intercultural Understanding, Interfaith and Spiritual Development, Public Relations and Event Planning, and Sustainability. This approach is similar to competency-based certification, but broader than proof that a student can read a balance sheet or do a 10-minute presentation.

Finally, liberal arts colleges should engage in a broader conversation about why they are residential, without saying it’s because they’ve been that way for 150 years. Too many colleges assume students already understand. Such a learning environment can positively shape a student’s character and skillset, and result in sweeter success, but a residential community does not always acknowledge or articulate this success.

With competency-based education in the spotlight, residential colleges have an opportunity to renew a focus on the benefits to students who not only eat and sleep, but also meet colleagues, connect with mentors, challenge themselves in new ways, and develop 21st-century skills and competencies on campus.

If we do not champion and clearly identify the benefits to our students, we are vulnerable to the advocates of no-frills bachelor’s degrees, willy-nilly life experience for credit, online learning, and the commodifiers among us who believe the value of the college experience is test- and content-driven, rather than experiential and residential in nature.

W. Kent Barnds is executive vice president and vice president of enrollment, communication and planning at Augustana College, in Rock Island, Ill.

Higher ed needs better data to spur reform (essay)

While there is heated debate over how best to fix America’s higher education system, everyone agrees on the need for meaningful reform. It’s difficult to argue against reform in the face of college attainment rates that are stalled at just under 40 percent and the growing number of graduates left wondering whether they will ever find careers that allow them to pay off their mounting debts.

Any policy debate should start with a clear picture of how the dollars are being spent and whether that money is achieving the desired outcomes. Unfortunately, a lack of accurate data makes it impossible to answer many of the most basic questions for students, families and policy makers who are investing significant time and money in higher education.  

During the recent State of the Union address, President Obama talked about shaking up the system of higher education to give parents more information, and colleges more incentives to offer better value. Though he provided little detail, this most certainly referred to the broad vision for higher education reform he outlined over the summer, centered on a new rating system for colleges and universities that would eventually be used to influence spending decisions on federal student financial aid.

However, the President’s proposal rests on a data system that is imperfect, at best. As former U.S. Secretary of Education Margaret Spellings said of the President’s plan, “we need to start with a rich and credible data system before we leap into some sort of artificial ranking system that, frankly, would have all kinds of unintended consequences.”

The American Council on Education, which represents the presidents of more than 1,800 accredited, degree-granting institutions, including two- and four-year colleges, private and public universities, and nonprofit and for-profit entities, agrees on the need for better data as well.

A senior staff member at ACE has been quoted as saying that “if the federal government develops a high-stakes ratings system, they have an obligation to have very accurate data,” and that he was “surprised that anyone would think it controversial that having such data is a prerequisite.”

In order to bridge the data gap, we introduced the Student Right to Know Before You Go Act, which would make the complete range of comparative data on colleges and universities easily accessible to the public online and free of charge by linking student-level academic data with employment and earnings data.

For the first time, students and policy makers would be able to accurately compare -- down to the institution and specific program of study -- graduation and transfer rates, the frequency with which graduates go on to pursue higher levels of education, student debt, and post-graduation earnings and employment outcomes. Such a linkage is the best feasible way to create this data-rich environment.
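Mechanically, the linkage amounts to joining student-level academic records to employment and earnings records on a common identifier and then publishing only program-level aggregates. The sketch below is purely illustrative: the records, field names and hashed identifiers are invented, and it does not describe the bill’s actual data architecture.

```python
# Illustrative sketch of linking student-level academic records to earnings
# records and reporting only aggregate, program-level outcomes.
# All records, identifiers and field names are invented for the example.
import pandas as pd

academic = pd.DataFrame({
    "student_id_hash": ["a1", "b2", "c3", "d4"],
    "institution":     ["College X", "College X", "College X", "College X"],
    "program":         ["Nursing", "Nursing", "History", "History"],
    "graduated":       [True, True, True, False],
})

earnings = pd.DataFrame({
    "student_id_hash":    ["a1", "b2", "c3", "d4"],
    "earnings_2yr_after": [52000, 61000, 34000, 28000],
    "employed":           [True, True, True, False],
})

# Join on the shared (hashed) student identifier.
linked = academic.merge(earnings, on="student_id_hash", how="left")

# Publish only aggregates by institution and program, never individual rows.
report = (linked.groupby(["institution", "program"])
                .agg(graduation_rate=("graduated", "mean"),
                     employment_rate=("employed", "mean"),
                     median_earnings=("earnings_2yr_after", "median")))
print(report)
```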

None of these metrics is currently available to those seeking to evaluate a school or program, though plenty of misleading data are out there.

For example, Marylhurst University, a small liberal arts school in Oregon, was assessed with a 0 percent graduation rate by the U.S. Department of Education. This is because the department's current metrics account only for first-time, full-time students, and Marylhurst serves nontraditional students who are part time or have returned to school later in life. Schools like this that serve nontraditional students -- who now make up the majority of all students -- don’t get credit for their success, at least not according to current federal evaluations.

With so many in the higher education community bemoaning the lack of quality data, and with clear paths forward for obtaining better data, why hasn’t it happened?

A major part of the answer: institutional self-interest. Every school in the country has some categories in which its performance outcomes are less than appealing, and many college presidents are in no hurry to make those data available for public scrutiny.

There’s a fear that students and families will vote with their pocketbooks and choose different schools that better meet their needs. The abundance of inaccurate and incomplete data provides institutional leaders with a line of defense: so long as such data are the norm upon which they are ranked and rated, they can defend themselves on the basis of flawed methodology. 

Not all schools fear the implications of better quality data; in fact, many schools crave these data and want them made public. They know they’ll stack up well against their competition.

Moreover, many schools realize that getting better data is critical to helping identify what’s working and what’s not for their students in order to build stronger programs. Nevertheless, some of the “Big Six” higher education associations still cling to the status quo and represent a key challenge to realizing these commonsense reforms.

It is long past time for these important actors to look away from their self-interest and toward what’s in America’s collective interest -- a future where higher education produces better outcomes for students and the economy -- by supporting the Know Before You Go Act.

U.S. Sen. Ron Wyden is an Oregon Democrat, and U.S. Sen. Marco Rubio is a Florida Republican.
