
Now that I am a full-time analyst and strategist for Willow Research of Chicago,[1] I was excited to see the title of a recent Matt Reed (Dean Dad) post, “A Data Puzzle.”

After all, significant parts of my day are now spent trying to solve data puzzles, gathering and then interpreting data in order to provide insights to clients that allow them to make the best possible strategic decisions.

It is fascinating to see the various commentators working through Reed’s data puzzle regarding the relatively small gap between white and Latinx students in terms of semester-to-semester persistence (less than 3 percent), coupled with a much larger gap (12 percent) when it comes to graduation rates.

Perhaps the plainest way to put this is that Latinx students are staying in school without graduating. What’s up with that?

This is the kind of question we relish at Willow Research, something complicated and where an answer will be meaningful to the client. For us, the most time-consuming part of the data puzzle is up front, where you figure out what data you have and what you need to go get. (As well as how you’re going to get it.)

As a researcher, it was fascinating to read through the comments offering theories and additional angles through which to examine the institutional data.

These theories include students taking the wrong classes, differences in the percentage of students who transfer, remedial classes as a barrier for Latinx students, economic barriers, bad advising, financial aid running dry and lots of other very worthy hypotheses.

From this researcher’s perspective, though, looking at the available “enterprise-level” data is never going to reveal the answer (or more likely answers) to this data puzzle, because those data are not designed to answer the question.

The data available through IPEDS or other institutional catchments are really only capable of identifying a “what,” as in “What is happening here?” They are not well suited to answering a “why,” as in “Why is this happening?”

You can see this in the different hypotheses themselves in the comments, including some very clever ways of cross-comparing institutional data that may reveal additional insights.

But let me suggest there’s a shorter and surer way to figure out the “why” underneath the “what”: ask students.

To be sure, this reflects my broader philosophy of education in general, that students are the best and most reliable sources for information about their own lives and experiences. For me, this extends even to their learning, particularly learning to write, where it’s my view that if you want to know what’s being achieved, it’s as meaningful to ask students to reflect upon and judge what they’re learning as it is to rely 100 percent on examining and ranking the writing artifacts they produce.

But my personal philosophy aside, the information gap between what the available data can say and what students may have to tell us is apparent in this example. This is where companies like mine come in and help by doing targeted research to provide insights into those unanswered questions.

For example, we could measure student aspirations and expectations prior to entering school versus their experiences once they’re in school. Where are the gaps? What are the hurdles? What resources are students utilizing to close those gaps and clear those hurdles? What available resources aren’t they using? What new resources need to exist?

These are the kinds of questions tried-and-true social science methodologies can answer in ways that incomplete enterprise-level data can never touch. One of my personal frustrations in general is the belief that “big data” will reveal all, whether this be in a realm like personalized learning software and its alleged ability to “map” how we learn (not going to happen), or questions like Reed’s, where even the most clever parsing of institutional data isn’t going to answer the core questions.

More data do not shed light on a question they are not fundamentally designed to answer. If anything, more data create greater confusion as more possibilities are generated without any method to sort through them.

The most interesting part of my job at Willow is working with my colleagues to develop the research instruments that answer these questions. It takes a lot of planning and benefits significantly from experience. While there are many repeating patterns, no two projects are necessarily alike, and that experience helps avoid potential pitfalls. It’s a lot like teaching that way. The more you’ve seen, the more you can anticipate.

If your institution is looking at a question your enterprise-level data can’t answer, consider the other tools that are out there for the using. Heck, I’m available and more than willing to answer questions, even if they seem basic.

And remember that in the end, it never hurts to just ask students directly.

[1]#plug
