Many colleges don't lack for data on student performance. Administrators and faculty often find there is a measurement for nearly everything they and their students do as they strive to increase college completion rates.
Despite this wealth of information, colleges still struggle to use data effectively to help their students succeed. A new book from Brad C. Phillips and Jordan E. Horowitz, Creating a Data-Informed Culture in Community Colleges: A New Model for Educators (Harvard Education Press), examines how educators can best use data.
Phillips is president and chief executive officer and Horowitz is vice president of the Institute for Evidence-Based Change, a nonprofit organization that connects educators to research. They recently responded together to questions about the book from Inside Higher Ed by email.
Q: What are some of the problems with the volume or quality of data that are currently available to community colleges?
A: The biggest issue with the amount of data is that there is simply an overwhelming amount of undifferentiated data. This leads community college administrators, faculty and staff to want to address all of it. Behavioral economics tells us that when people face too many choices, they get overwhelmed trying to set priorities and can't decide which things are most important to work on. So they end up trying to influence too many indicators. This means too many projects are implemented without adequate resources, efforts are diluted and going to scale is not possible with any one intervention.
To address this, we work with colleges to reduce the number of metrics and focus on leading indicators. Too many colleges focus solely on completion metrics. But these metrics, which are lagging indicators, are counting students who already are out the door -- whether they graduated or dropped out -- and are not actionable. We encourage colleges to focus on leading indicators, and only a couple of them. Leading indicators are in their control to influence with student supports, policies and programs (e.g., course success rates, course completion rates, term-to-term persistence) and lead to the lagging indicators, such as credit accumulation and completion. Because colleges focus on lagging indicators and cannot directly influence them, student completion is not being sufficiently improved.
In our book, we also describe how data quality impedes data-use efforts. It is a real problem in community colleges and, frankly, all of education. Unfortunately, even though data is one of the few things a college can control, it is often not viewed as a “sacred” resource and thus, accuracy is lacking.
Q: How can educators and institutions better use the data that are available to them, and how can those institutions that analyze data do a better job of providing educators with the information they can use to improve student performance?
A: When there is too much data, it is crucial to know what to use. How does a college do this? How does it interpret conflicting results when similar metrics come from different sources? Educators need to be provided with clear and simple data exhibits that convey the story to be addressed. Too often educators are presented with dozens of tables and are expected to find the story themselves -- what we call the “Where’s Waldo” approach. But educators should not have to be analysts and go searching for the problem. It should be evident in the one or two tables or figures they’re presented with and identified in the titles of those exhibits (see pages 28 and 29 in the book). Their job should be to craft the appropriate research-based, high-impact solution that fits the culture of their college.
While we base our work on theories of behavioral economics, neuroscience and psychology, we provide practical examples of how to employ what we know in practice. We take what these fields know about how people make judgments and decisions, and how organizations change, to build a practice that capitalizes on the best of this science to build good habits of data use.
Q: What is the best language to convey data to instructors?
A: Simply, clearly and in common language that tells the story. We use the neighbor rule in any communication about data: pretend you have a neighbor who owns an ice cream store and you’re explaining your college’s data to them. That neighbor should be able to retell the story.
We also hear too often about educators who, reviewing the same data, have different interpretations. How can decisions be made if the judgments about the data differ among educators? The tools and techniques we detail in the book address how to get to a common understanding about what the data means and, more importantly, the decisions that need to be made. One point we emphasize is that data should only be developed for two reasons: compliance (because you have to) and to inform decision making. Data should never be developed “for information only.” We call this the “must know vs. nice to know” rule.
Q: What are the challenges to changing the way educators and colleges currently use data? What are the challenges to encouraging organizations and institutions that provide data analysis to shift to this new approach?
A: On page 175 of the book, we explicate the issues related to funding changes, lack of focus, leadership changes and resistance to change. These are all threats to making better use of data.
Habit change, however, is a big issue. Data is often presented in ways that we would see in a research journal, because that is the way it always has been organized. Unfortunately, such presentations do not automatically lead to good use of data. In order to inform policy and practice, we need to use techniques and tools that lead to making accurate judgments and decisions about the phenomenon of interest. We describe how to do that.
It is important to reiterate that we all want to make a difference in our student outcomes, helping to change lives for the better. Using the tools and techniques presented in this book will help educators better support student success efforts.