
Among the center’s upcoming projects is ongoing research to understand where and how computer science programs are offering technical AI instruction.
Five years after Northeastern University’s Center for Inclusive Computing was founded, the center can boast broad success in its goal of making computer science education more accessible. At its partner institutions, which number more than 100, the numbers of women and people of color studying computer science have increased much more sharply than those of men and white people.

Led by Executive Director Carla Brodley, the center spent its first half decade paying special attention to supporting students who did not take computer science classes in high school and therefore lack the baseline knowledge that some of their peers enter college with. Brodley and her team developed a slew of best practices for helping those students succeed in their introductory classes—and for drawing students who might not consider computer science as a major into the field. One of the center’s most recent projects, for example, will help a group of universities test how well integrated computer science majors—which merge comp sci curricula with various other majors, such as statistics, graphic design and even English—attract new students to the discipline.

But the computer science landscape has also changed since the CIC launched. Most notably, generative AI has become a pain point for professors as students use the technology to cheat on coding assignments. At the same time, generative AI has become an increasingly important and interesting part of the tech landscape, meaning institutions have to figure out the best ways to incorporate studying it into their existing curricula.
As the center plans the rest of its first decade, Inside Higher Ed spoke with Brodley by phone about where computer science education, one of the fastest-growing majors of the past several decades, stands—and where it is going. The interview has been edited for length and clarity.
1. The numbers of women of color studying computer science at CIC’s partner institutions have skyrocketed in recent years, thanks to the CIC’s work. What are the key elements that the CIC has found contribute to those numbers?
The strongest key element is [making] it such that someone who is completely new to computing doesn’t feel behind from day one and has an equitable beginning. There are three components to this.
[First,] making sure that they’re not feeling bad in a classroom with people who already know everything and they’re sitting there talking about what they got on the AP exam and how this is easy. That’s a terrible feeling. And they’re usually lying, but you don’t necessarily know that at age 18. The second component is making sure that your TAs understand that not everybody is going to come with the same level of experience and not to think that these people are dumb because they maybe don’t understand something that the majority of the students who’ve had some coding and some experience do. So, TA training is really important.
And the third thing is sort of in the weeds; it’s this idea of common assessment, which is, if you have multiple sections of a course, making sure that they have the same assignments and the same exams, because then when they go to the next course, they’ve all learned the same stuff. That’s important, because if you don’t have that and you have someone who’s completely new to computing and they get the easy teacher for the first class, they suffer in the second class, whereas the experienced student doesn’t.
We didn’t do anything in particular for women of color. It just turns out that women of color are more likely to not have had prior coding experience, because it’s an elective in high school, and it’s not taught in every high school.
2. The hottest topic within AI is fears about students cheating. How do you see institutions addressing this right now?
We’ve heard from our partner schools that TAs, in particular, are sick of grading programs that were generated by AI, and faculty are seeing large disparities in what people get on their assignments versus how they do on their exams. They might get an A on their assignments and then they fail the exam. And so that’s kind of a clear indication that maybe they had a little too much help.
I think we have to put in [place] grading policies that make it so that you can’t pass a course by just using generative AI, and I’ll give you just one example of a grading policy that would do this. So, you take whatever grade a person got on their exam—let’s say they got a B on their exam. As long as the written homework is within one letter grade of that, so if they got a B and they got an A on the written homework, then yes, it will pull their overall grade up. But if they got a D on the exam and an A on the written homework, then they just get a D for the course. That would be the policy that I personally would institute into my classes.
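The policy Brodley describes can be sketched in a few lines of code. This is an illustrative reconstruction, not anything from the interview: the letter-to-points mapping and the averaging weights are assumptions, and the only rule taken from her description is that homework more than one letter grade above the exam cannot pull the course grade up.

```python
# Sketch of the grading policy described above (illustrative assumptions:
# a 4-point letter scale and a simple rounded average when homework counts).
POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
LETTERS = {v: k for k, v in POINTS.items()}

def course_grade(exam: str, homework: str) -> str:
    exam_pts, hw_pts = POINTS[exam], POINTS[homework]
    if hw_pts > exam_pts + 1:
        # Homework far outpaces the exam (e.g., D exam, A homework):
        # a sign of too much "help," so the exam grade stands alone.
        return exam
    # Otherwise homework within one letter grade can pull the grade up:
    # here, an average of the two, rounded in the student's favor.
    return LETTERS[(exam_pts + hw_pts + 1) // 2]

print(course_grade("B", "A"))  # A: homework pulls the grade up
print(course_grade("D", "A"))  # D: homework is discounted
```

The point of the structure is that the exam acts as a ceiling check: homework can only move the grade within a band anchored to demonstrated exam performance.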
That’s just one example. And I don’t think anyone has settled on what is the best way to handle this.
Trying to come up with a way to do that, I think, is actually a really interesting research question: How do you evaluate a student’s progress while encouraging the use of generative AI for their learning, but not for cheating? There’s always been cheating in coding classes, and there’s software that can take two programs and see whether or not they’re structurally and logically identical—even if a student changes the indentation, formatting and all the variable names, you can still tell that the logic is the same. We’ve been using that for decades to catch human-to-human cheating. That’s harder to apply to computer-to-human cheating. So I think that it’s an open problem and it’s a fascinating one. And how do we incentivize students to actually do the work themselves?
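The structural-comparison idea Brodley mentions can be illustrated with a toy sketch. This is not the production plagiarism-detection tooling she alludes to (real tools such as MOSS use far more robust fingerprinting); it just shows the core trick of normalizing away names and formatting so that only program structure remains.

```python
import ast

# Toy illustration: two programs compare equal after variable names,
# function names and formatting are normalized away, leaving only
# the syntactic structure of the code.
class Normalize(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        node.name = "_"           # erase the function name
        self.generic_visit(node)  # normalize arguments and body too
        return node

    def visit_arg(self, node):
        node.arg = "_"            # erase parameter names
        return node

    def visit_Name(self, node):
        # replace every variable reference with a placeholder
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

def fingerprint(source: str) -> str:
    """Return a name-blind structural dump of the program."""
    return ast.dump(Normalize().visit(ast.parse(source)))

# Same logic, different names and whitespace:
a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
b = "def sum_all(values):\n s = 0\n for v in values: s += v\n return s"

print(fingerprint(a) == fingerprint(b))  # True: structurally identical
```

As the interview notes, this kind of check works well human-to-human, because a copying student tends to preserve the original program's structure; AI-generated solutions have no such shared ancestor to compare against.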
3. Obviously, over the past few years, a narrative that we’ve seen a lot in the news has been an increase in the number of tech layoffs and of coding jobs being replaced by AI. Is that a fair concern? Is that something that students are feeling worried about when they go into computer science at this point?
I will say that students are quite concerned about that, and we are starting to see, not necessarily at Northeastern but at some of the other schools we partner with, a little less over-the-top demand. Computer science has been just growing by leaps and bounds over the last few years, to the point where it’s hard for universities to even keep up with the staffing of the courses that are needed. This might give us a little bit of a respite if enrollments go down a bit.
But I think that the demand for people who understand technical AI is going to be quite large, and I just can’t see the demand going away for computer science. Whether there’ll be as many entry-level jobs, I think that’s what students are concerned about. But I haven’t seen the data nationwide for this in any way. Unless [the Burning Glass Institute] or Indeed.com is going to publish something on this, I don’t trust the anecdotal [evidence].
4. Some of the CIC’s projects and programs have been funded by the National Science Foundation. What are your thoughts on the grant cancellations that are happening there and how that’s affecting STEM and computer science education?
So, I don’t know the details on [the cancellations], but I do know that there are publicly published lists of the grants that have been canceled, and I think many of them are worthwhile and were doing important things. And I’m distressed about the cuts to science research in general that are happening in the country. I think it’s going to be a challenge for us, as a country, to make really important progress on solving the world’s challenges, and some of the world’s challenges are around STEM education.
I think that the work that the CIC does, where we are working on making computer science accessible for people without prior experience in computing, doesn’t run afoul of any of the recent executive orders and certainly does help broaden participation in computing, just because of who has prior experience in computing before they get to university. Less than 60 percent of our high schools teach computer science. So regardless of whatever a student’s identity is, if they’re at a school that doesn’t have computer science, they won’t have taken it before they get to university. Making sure that they can successfully get through a computing program and, certainly, get through the intro sequence without feeling like they’re behind from the day that they start is really kind of a mission of what we’ve done at the undergraduate level. I think those types of initiatives will continue to be really important as we go forward.
5. What are some of the key issues in computer science education that the CIC is going to be working to address moving forward?
Two new initiatives: One is around making access to education in technical AI easier. This is in contrast to knowing how to use AI and using AI within the learning environments in other subjects. This is actually around really understanding the AI algorithms from a computer science viewpoint.
We’re currently working on a landscape study of the entire country of who’s offering master’s, who’s offering minors, who’s offering concentrations in AI, who has required courses. We’re also looking at the prerequisites, because one of our working hypotheses, which we have only anecdotal evidence for right now, is that it can take students until their seventh or eighth semester to complete the coursework needed to take the technical AI classes. And that is a problem, because we really need them to be able to get to them sooner. It also causes a second, related problem: if a student can only get to the AI classes in their last year, you can’t really set up a prerequisite structure within AI.
The second large initiative that we’re looking at is really examining and working on what’s called the credit-loss problem between community college and four-year universities in STEM. In computer science, when we look at this, yes, there are articulation agreements between community colleges and four-year universities, but often those are not updated on a very regular schedule. What happens is, in a degree with strict, progressive requirements like computer science, where you have to take Computer Science 1 before you take Computer Science 2, students will end up not having something accepted in that sequence, which means that they’re really going back and taking more introductory material upon transferring to the four-year.
We’re working on a small pilot project around that, because in my view, credit loss is a really pressing problem and an urgent problem. It’s causing many people not to be able to major in STEM when they get to the four-year university because they just can’t afford another semester of school.