
The tight labor market is helping prod employers and colleges to cooperate more closely to ensure that credentials pay off in the work force. And solid data on the labor market and student outcomes are key to this collaboration.

Matt Gee works on these issues as a senior researcher at the University of Chicago and as the co-founder and CEO of BrightHive, a technology company focused on work-force data. So does Yuanxia Ding, a former Education Department official during the Obama administration who is chief impact officer for Skills Fund, which provides student loans and quality assurance for the boot camp sector.

Inside Higher Ed sat down with Ding and Gee a few months ago to discuss developments with outcomes and labor-market data in higher education. Excerpts from the conversation, edited for clarity, follow.

Q: What’s driving interest in labor-market data in postsecondary education?

Gee: My organization, BrightHive, helps stitch together data from organizations, and one of the reasons to do that is to measure outcomes. As a result, we spend a lot of time in conversation with state data holders, federal data holders, postsecondary institutions and others in the education space about data and why it matters.

In those conversations we’ve seen three big reasons that people’s attitudes toward the use of data and outcomes measurement are shifting. To understand those shifts, it’s helpful to understand what data is good for.

In the postsecondary space, data is good for three things: mapping, measuring and recommending. You see this whether data is used to recommend movies in your Netflix queue or to recommend which program you should take next semester -- its uses are similar across settings. But one thing shifting views of data, in statehouses as well as at postsecondary institutions, is the cost gap -- the notion that postsecondary education is increasingly expensive, and the learners and workers being asked to pony up that much money, to take on that much debt, increasingly want to know whether they’re going to get it back.

Second is consumer expectations, shaped by people’s experience in every other aspect of their lives besides education. It’s so easy now for us to know whether the product we’re going to buy on Amazon or the movie we’re going to watch on Netflix or the site we’re searching for on Google is going to meet our needs. We have rich recommendation systems. We have clarity in prices and value just about everywhere else in the marketplace, and we don’t have that in education. And consumers are now demanding it.

And third is shifts in the dispositions of policy makers themselves -- folks saying, "You know, we’ve been putting a lot of public money into these systems, and it’s unclear whether we’ve been getting back what we’ve invested." So you’re seeing statehouses like Colorado’s, where the Legislature in 2017 passed legislation expressly requiring that by the end of this year every postsecondary institution have a publicly available measure of return on investment. That’s a huge shift in state policy. And we’re seeing it not just in Colorado but across the country.
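As a purely illustrative aside, here is one simple way such a return-on-investment figure could be computed. The formula, the ten-year horizon and the parameter names below are hypothetical assumptions for the sketch, not anything the Colorado legislation prescribes:

```python
# Purely illustrative: one simple way a program-level return-on-investment
# figure could be computed. The formula, ten-year horizon and parameter
# names are hypothetical -- Colorado's legislation does not prescribe any
# particular calculation.

def program_roi(median_earnings_after, baseline_earnings, total_cost, years=10):
    """Net earnings gain over `years`, relative to total program cost."""
    if total_cost <= 0:
        raise ValueError("total_cost must be positive")
    earnings_gain = (median_earnings_after - baseline_earnings) * years
    return (earnings_gain - total_cost) / total_cost

# Example: a $30,000 program whose completers out-earn the baseline by
# $8,000 a year returns roughly 1.7x its cost over ten years.
print(f"ROI: {program_roi(48_000, 40_000, 30_000):.1f}x")
```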

Ding: There is greater consumer demand for information. There’s also greater demand from businesses, in terms of thinking about our nation’s talent. We keep hearing about the skills gap. We keep hearing CEOs say, "I have to go overseas because I can’t find the skills I need here." Whether that’s because they’re not looking in the right ways or the right places, or because we’re lacking the signals or the data, it is a constant refrain.

And it is something we also need to be thinking about as we use data on postsecondary and labor-market outcomes. At the end of the day, that data serves an individual purpose -- determining ROI, or what do I get for what I put in? -- and a broader national purpose: Are we developing the globally competitive work force we need to be the nation we need to be? And that, unfortunately or fortunately, relies a lot on the way businesses think about their talent -- the way they assess it, recruit it and retain it -- all of which also uses data.

Q: Can you give examples of how the data are being used in postsecondary education and training?

Ding: The biggest one is in competency-based education, which is a way for institutions to be more responsive to the labor market. To fill tech talent needs, tech and digital skills training programs, sometimes referred to as boot camps, have grown substantially -- to over 1,000 programs at about 100 schools -- and traditional institutions have begun to take notice. Because traditional postsecondary institutions can rarely build a program in less than a year or two, these alternative providers are partnering with them to be responsive to labor-market demand -- to what the data shows is needed.

Gee: We’ve seen some great examples over the last couple of years. Historically, measuring outcomes has happened in two places: in state longitudinal data systems, mostly for research purposes, and at the Census Bureau, again to generate higher-level policy answers and some public-use data sets. Over the last couple of years, in several states, those two historical homes for connecting data and measuring outcomes have started to see some really exciting innovation. A lot of states are starting to use their state longitudinal data systems to generate not just research reports but publicly available data that folks can build applications on top of. That’s powerful.

New Jersey has done this with its public outcomes data for all its work-force programs. And using that openly available data, Code for America built a fantastic app that allows individuals to search for programs that meet their needs -- not just outcomes, but also whether there’s childcare on site and whether necessary support programs exist. So you’re seeing innovation in decision support built on that historical infrastructure.

The Census Bureau has paired with the University of Texas System to generate programmatic outcomes data for all its programs, has now done the same in partnership with the Colorado Department of Higher Education and is moving on to several other states. So you’re seeing both scale and new uses at the state level, and some at the national level.

And the third area of innovation on outcomes, one that also has scale, is that private-sector actors are getting into the game. Folks like Emsi are taking millions of online profiles -- data that people make publicly available on the internet about themselves -- and saying, you know what, we can connect your alumni database to what your alumni are saying about themselves online and help you, an institution of higher education, know the outcomes you’ve struggled to know in the past. Where do your graduates go to work? Who are they working for? What do they do after they get this degree? Do they get a job in the field where they earned the degree or certificate?
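Emsi’s actual pipeline is proprietary, but the general record-linkage idea Gee describes -- matching an alumni roster against public profile data -- can be sketched minimally. The field names and the exact-match rule below are assumptions for illustration only; a production system would rely on far more robust probabilistic matching:

```python
# Minimal sketch of the record-linkage idea described above: matching an
# alumni roster against public profile data by normalized name plus
# graduation year. Field names are hypothetical, and exact matching is
# shown only for brevity.

def normalize(name: str) -> str:
    """Lowercase and strip punctuation so 'J. Smith' and 'j smith' compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def link_alumni_to_profiles(alumni, profiles):
    """Yield (alumni record, profile) pairs that agree on name and grad year."""
    index = {}
    for p in profiles:
        index.setdefault((normalize(p["name"]), p["grad_year"]), []).append(p)
    for a in alumni:
        for p in index.get((normalize(a["name"]), a["grad_year"]), []):
            yield a, p

alumni = [{"name": "Jane Q. Doe", "grad_year": 2015, "degree": "BS Economics"}]
profiles = [{"name": "jane q doe", "grad_year": 2015, "employer": "Acme Corp"}]
for a, p in link_alumni_to_profiles(alumni, profiles):
    print(a["degree"], "->", p["employer"])
```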

Ding: What we do at Skills Fund is finance students to go into skills training programs. And as part of that process we also essentially quality-assure those programs, to make sure students are getting the skills they need to get a job and pay the loan back.

Part of what we use in that quality-assurance process is state-level data, where it’s publicly available. For example, the Texas Workforce Commission publishes some outcomes. Same thing with Wisconsin. How to get postsecondary institutions -- whether they’re alternative schools like the ones we work with at Skills Fund or more traditional higher ed -- to report that information is a big question. The states certainly have the power to mandate it. We have some influence, too, because we decide whether to approve programs for student financing. And making sure those numbers match, where the data is publicly available, is great.

But not every state does that. There are states that gather that information but don’t publish it or make it available. So beyond the complex back end of making sure the state systems are talking to each other and working correctly, the thing I would love to see is all states at least making available what they are collecting, period, and, ideally, standardizing it somehow. It would be phenomenal to know whether a placement rate in one state means the same thing as a placement rate in another, and to see that at scale, coming from states, which have far more authority to force it to happen.
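To make concrete why standardized definitions matter, here is a small illustrative sketch. The state definitions, field names and numbers are invented, but they show how recomputing placement rates under one shared definition makes states comparable:

```python
# Illustrative sketch of the standardization problem described above.
# States define "placement" differently; these definitions and numbers
# are hypothetical, just to show why raw rates aren't comparable.

def placement_rate(placed, completers):
    return placed / completers if completers else 0.0

# Suppose State A counts any job within 180 days, while State B counts
# only in-field jobs within 90 days. Recomputing both on one shared
# definition (in-field within 180 days) makes them comparable.
state_a_grads = [{"employed_days": 120, "in_field": False},
                 {"employed_days": 60,  "in_field": True}]
state_b_grads = [{"employed_days": 80,  "in_field": True},
                 {"employed_days": 200, "in_field": True}]

def standardized(grads, window=180):
    placed = sum(1 for g in grads if g["in_field"] and g["employed_days"] <= window)
    return placement_rate(placed, len(grads))

print(f"State A (standardized): {standardized(state_a_grads):.0%}")  # 50%
print(f"State B (standardized): {standardized(state_b_grads):.0%}")  # 50%
```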

Gee: Totally agree. We’ve already paid for it as taxpayers. The data’s already been collected. It’s been connected. We’re just not getting much value from it. If you’re a state department of education or a state work-force agency, you can generate millions of dollars in new value for all the people in your state who are making decisions every day about what programs to go into, what to pay for, by just making the data that you already have publicly available. That’s all you need to do.

Q: Can we measure student outcomes in a meaningful way at this point?

Ding: What you’re asking is an even deeper question: At what level do outcomes matter? What we’re seeing in federal policy -- the College Scorecard, for instance -- has historically been at the institution level, because that’s what we have. That’s what’s available. At the state level, that’s also largely what’s available. A lot of the pushback against outcomes and outcomes measurement comes from faculty members who think about outcomes as, “I know the outcomes, because I literally give the grades. That is the outcome.” And that doesn’t necessarily or even logically translate into labor-market outcomes.

There has to be some recognition from folks like us that, yes, faculty are the closest to the students, and they do know the immediate outcome -- but institution-level outcomes are almost meaningless, because programs are what can change a student’s life. Understanding program-level outcomes just hasn’t been done; it’s been too hard. I’m very hopeful about the ability to do it first in vocational and skills training programs, which are intended for exactly that, as a starting point. Then maybe we can find a way to expand it into programs that are not as directly skills and vocation oriented.

Gee: There are a lot of bogeymen that people can point to in the outcomes conversation. What ends up happening if we focus on those bogeymen is that we have the conversation about data misuse and not the conversation about [missed opportunities]. And that’s important because, yes, there are and always will be unintended consequences of making data available. That’s a feature of reality we have to understand, be thoughtful about and embrace. But the potential costs of those unintended consequences are generally lower than the costs of inaction. Right now we are shifting those costs from the institutions to the individuals. That’s the choice we’re making by not doing anything.

Q: What are some of the primary hurdles to doing more with outcomes data?

Ding: In K-12, with No Child Left Behind, there was a lot of pushback against the oversimplification of standards and assessments and the use of adequate yearly progress (AYP). I would love for higher ed to have something even as basic and as terrible as AYP. We literally don’t have any standardized measure we can apply. Completion rates, maybe? That’s as close as we are right now. I would love to see us do that with placement rates -- in many ways the holy grail -- and with financial outcomes. There’s a lot of work being done on that right now. We have so far to go on this.

Gee: The biggest barrier is frankly culture -- the culture of institutions of higher education and the culture of state offices. When you get down to it, there are very few insurmountable barriers preventing an institution that has data on the students who went through its programs from working with a government agency to measure what happened afterward. We have laws that support it. We have technology that makes it simple to do. So the biggest barrier is culture.

There are barriers to some things that would make this a lot simpler to do at scale -- for example, the current federal ban on a student unit record system. That’s a hotly debated topic, with decent arguments on both sides, and it prevents national measurement of the kind we see in other national data sets. But there is nothing equivalent at the state level that would prevent states from doing this. We can do this, starting with the states. Beyond culture, capacity is often the second-biggest thing. Some of the technical and legal challenges you have to work through to do this well at scale require specialized knowledge. Often there is maybe one person at a department capable of the few technical tasks you need, and that person has 15 things on their plate because they’re the only one who can do that same set of tasks for 20 other projects.

Ding: You said capacity; I was thinking it’s actually skills. Part of financing, supporting, promoting and encouraging skills training is the need for more technical skills across our work force to do things like this. It’s ironic: to do this better, we need more people who can be a part of it at all -- who can actually roll up their sleeves and do the data work. And that takes time and energy from providers, students and all of us who support them. Bottom line: we’ve got a long slog ahead of us, but it’s great to see the progress we’ve made over the past few years and the growing community of people who are aligned around this vision.
