
The Data Behind The Alumni: Unbundling Facts, Figures, and Caveats

By Richard Whitmire July 26, 2017

Steven Susaña-Castillo, a KIPP alum, on his graduation day from Wesleyan University. (Photo credit: KIPP)

The idea that high schools should track their students through college, and then hold themselves accountable based on how many actually earn degrees, is not just new — it’s radically new.

Given the novelty of all this, it’s not surprising that the databases these charter networks rely on are new as well — and warrant discussion. Here’s what you need to know:

How do the charter networks calculate how many of their graduates earn bachelor’s degrees within six years?

They rely mostly, or exclusively, on the National Student Clearinghouse, a nonprofit, non-governmental organization that works with 3,600 colleges and universities that share enrollment data. Student names from that enrollment data are matched against high school graduation records, making it possible to track students as they enter college and determine whether they persist and ultimately earn a degree.

In some cases, networks are also manually tracking students as they progress through college.

So are the college success data reported by the major charter networks easily comparable?

Not easily, no. Some of the charter networks, such as KIPP and Uncommon Schools, have student tracking systems of their own, often using Salesforce software to monitor their graduates. Thus, they are able to account for what the Clearinghouse calls “data discrepancies,” or students missed because their names never got properly matched as they transitioned from high school to college or because they attended colleges not part of the Clearinghouse network. (Colleges have to opt in to the Clearinghouse network.)

Another data discrepancy issue: students exercising their privacy rights (known as “FERPA blocks,” for the Family Educational Rights and Privacy Act) to shield their college transcripts from outside review. FERPA blocks vary by region: a small percentage of students at East Coast colleges block their information, while as many as 6 percent of students attending West Coast colleges do so. At some colleges, the block rate rises above 50 percent, according to the Clearinghouse.

Yet another discrepancy factor: The Clearinghouse data experts concede their record-matching software often struggles with Hispanic surnames. A high school might list a surname one way, a college another.
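To make the matching problem concrete, here’s a minimal sketch (the names and the exact-match logic are hypothetical illustrations, not the Clearinghouse’s actual algorithm) of how a student listed with two surnames at her high school and one at her college slips through an exact match:

```python
import unicodedata

def normalize(name: str) -> str:
    """Naive normalization: lowercase, strip accents, treat hyphens as spaces."""
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return " ".join(ascii_name.lower().replace("-", " ").split())

# Hypothetical records for the same student, listed differently by two institutions.
high_school_record = "María García-López"
college_record = "Maria Garcia"   # the college kept only the first surname

# Even after accent and hyphen normalization, an exact match fails...
print(normalize(high_school_record) == normalize(college_record))  # False

# ...so a matcher requiring full-name equality counts this graduate as never
# having enrolled, quietly deflating the high school's measured success rate.
```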

Thus, the college success rates for the networks able to fill in the gaps by collecting their own data may be higher (or at least more accurate) than the rates for networks such as Alliance College-Ready, Green Dot, and Aspire, which rely solely on Clearinghouse data. Those three networks are based on the West Coast and happen to enroll large numbers of Hispanic students, which makes them especially exposed to the matching problems described above.

So what’s the data advantage for networks able to track all their students? Estimates of that difference range between 2 and 10 percentage points. The Academy of the Pacific Rim, for example, was able to track every alumnus and discovered four-year degree earners the Clearinghouse had missed. As a result, its success rate rose from 60 percent to 70 percent.
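To make that arithmetic concrete, here’s a minimal sketch (the class size of 100 is hypothetical; the 60-to-70 shift is the Academy of the Pacific Rim example above):

```python
# Illustrative arithmetic, assuming a hypothetical graduating class of 100.
class_size = 100
found_by_clearinghouse = 60     # degree earners the Clearinghouse matched
found_by_manual_tracking = 10   # additional degree earners the network located itself

clearinghouse_rate = found_by_clearinghouse / class_size
corrected_rate = (found_by_clearinghouse + found_by_manual_tracking) / class_size

print(f"Clearinghouse-only rate: {clearinghouse_rate:.0%}")  # 60%
print(f"After manual tracking:   {corrected_rate:.0%}")      # 70%
```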

Did you rely on self-reporting from the networks for The Alumni?

Yes, by necessity. The information the Clearinghouse passes along to the networks is proprietary. However, the Clearinghouse has overall charter success rate calculations that roughly match the averaged rate of these nine charter networks. Plus, any charter network fudging numbers would be doing so in clear view of the Clearinghouse. Finally, if networks are going to fudge numbers, presumably they would make up higher numbers. As for the networks that fill in the students missed by the Clearinghouse, there is no independent check on the self-reporting.

Don’t the elaborate student alumni tracking programs found at the top charter networks increase their college success rate?

Yes, and that’s pretty much the entire point of those programs — to help guide their students through college to win four-year degrees within six years. It’s possible to view these programs as nannyish, but if they produce more college graduates from minority and low-income families, isn’t that a good thing?

It’s also important to point out that traditional high schools that serve students from low-income families probably can’t afford to do what KIPP does (KIPP Through College in the New York region costs $2,000 per student per year, funded by independent fundraising). That may be pricey, but KIPP and other charter networks are providing an invaluable research service in determining what works, and doesn’t work, in pushing up college graduation rates. A classic case of charters acting as incubators of innovation.

For example, several charter networks are working on low-cost systems, often relying on texting or social media, that remind alumni about anything from financial aid deadlines to course-switching decision points. Those systems will take time to yield quantifiable outcomes, but once established, that software will prove invaluable to traditional school districts in cities such as Baltimore, Detroit, Los Angeles, and Newark that could use a low-cost method to bolster the college success rates of their alumni.

Do charter students who drop out during the high school years get included in the college success data?

The one network that insists on including students who leave the system is KIPP, which reports its college success data starting in ninth grade for students new to the KIPP system and at the end of eighth grade for existing KIPP students. YES Prep, part of the United for College Success Coalition in Texas, has promised to start calculating its college success data from ninth grade, but no figures are yet available.

All the other networks start their data set in 12th grade — and say they don’t have data that begins in ninth grade. KIPP takes a principled stand on that issue, refusing to release any results that start the tracking in 12th grade, despite the fact that it would boost its college success rate.

Within the charter community, this is turning into a hot-button issue. KIPP feels very strongly that the only honest method for reporting graduation is to start in ninth grade. In theory, a charter network could increase its college success numbers by pushing or counseling out weak students before their senior year. That would apply to any high school, not just charter high schools.
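A hypothetical cohort makes the incentive plain (all numbers below are illustrative, not any network’s actual figures):

```python
# Hypothetical cohort illustrating the denominator choice described above.
entering_ninth_graders = 100
still_enrolled_senior_year = 80   # 20 students left, or were counseled out, before 12th grade
degree_earners = 40               # bachelor's degrees within six years of high school

ninth_grade_basis = degree_earners / entering_ninth_graders        # KIPP-style accounting
twelfth_grade_basis = degree_earners / still_enrolled_senior_year  # 12th-grade accounting

print(f"Counting from 9th grade:  {ninth_grade_basis:.0%}")   # 40%
print(f"Counting from 12th grade: {twelfth_grade_basis:.0%}") # 50%
```

The same 40 graduates produce a rate 10 points higher simply because the 12th-grade denominator drops the students who left along the way.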

We cite this tracking difference throughout the series anywhere the success data appears.

Is the right comparison point to charters the 9 percent of poor kids who earn college degrees?

That’s a question we struggled with during this project. The 9 percent figure covers all children from the bottom (fourth) quartile of family income, including students who drop out of the system before they are able to graduate from high school. An alternative measure compares these charter graduates to all of America’s students from low-income families who first graduate from high school and then go on to earn college degrees (roughly 15 percent, probably a few percentage points less, because that figure includes some students who earned associate’s degrees). Even if as many as 15 percent of low-income minority students who make it through high school earn college degrees, that means these top charter networks are still doing three and a half times better.
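The multiplier depends heavily on which baseline you choose, as a quick sketch shows (the 50 percent network rate is illustrative, not any single network’s figure; it roughly reproduces the three-and-a-half-to-five-times range cited later in this piece):

```python
# Illustrative comparison, assuming a hypothetical network success rate of 50 percent.
network_rate = 0.50
baseline_all_low_income = 0.09   # all low-income children, including high school dropouts
baseline_hs_graduates = 0.15     # low-income students who finish high school first

print(f"vs. all low-income children:      {network_rate / baseline_all_low_income:.1f}x")  # ~5.6x
print(f"vs. low-income high school grads: {network_rate / baseline_hs_graduates:.1f}x")    # ~3.3x
```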

What do the Clearinghouse numbers say about the overall charter college success rate?

According to the Clearinghouse, 30 percent of charter school graduates earn two- or four-year degrees within six years. Unbundling that figure, 24 percent earn four-year degrees in that time, which leaves roughly 6 percent whose highest credential within six years is a two-year degree.

So why the difference between the rates reported by the charter networks, all of which use Clearinghouse data, and the overall Clearinghouse rate?

My first assumption, that the Clearinghouse numbers reflect a broader database of charter schools — schools of more mixed and lesser quality than the networks profiled in this project — was downplayed by the data experts at the Clearinghouse. The major factors, they said, are the “data discrepancy” issues cited above.

Why do the West Coast charter networks have lower college success rates?

That is unclear. The challenges the Clearinghouse faces in tracking Hispanic names may be one factor. Another: Per-student funding for charter students in California is less than what the big East Coast charters receive — almost half of what some charters in New York City receive, depending on location. But the per-student reimbursement in California matches the reimbursement in Texas, where the college success rate of IDEA’s charters is far higher.

But there is one common trait among the three West Coast charter networks included in the project: They all got a late start tracking their graduates through college. During the years the East Coast charters were building college tracking systems, California charters faced severe budget cuts and were forced to keep all resources focused on K-12 instruction. My assumption is that’s the biggest factor in the success rate differences.

What about the high college “persistence” rates some charters tout?

Earning a bachelor’s degree by the six-year mark is a solid measure of success. More caution is required when interpreting “persistence” rates reported before students have had time to earn a degree — a situation many charters find themselves in, given how young they are and how many of their graduates are not yet six years out of high school. A persistence calculation that places all of a network’s college-going alumni in a single bucket can be misleading, because freshmen and sophomores are likely to dominate the pool, and their year-to-year persistence rates are the highest. In an early sampling of persistence, what matters most are the later transitions, such as junior-to-senior.
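A quick sketch shows how a young network’s blended persistence number can run high (the cohort sizes and rates below are hypothetical):

```python
# Hypothetical cohorts for a young network: the pool skews heavily toward freshmen.
# Each entry: (number of alumni at that transition, year-to-year persistence rate)
cohorts = {
    "freshman-to-sophomore": (200, 0.85),
    "sophomore-to-junior":   (120, 0.75),
    "junior-to-senior":      (40,  0.60),
}

students = sum(n for n, _ in cohorts.values())
persisting = sum(n * rate for n, rate in cohorts.values())

print(f"Blended persistence: {persisting / students:.0%}")           # ~79%
print(f"Junior-to-senior:    {cohorts['junior-to-senior'][1]:.0%}")  # 60%
```

The single blended figure looks impressive mostly because the large freshman cohort, with the highest persistence, dominates the average; the junior-to-senior number is the more telling one.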

Can you really compare degree-earning rates at charters to traditional schools that serve low-income students?

To charter detractors, the answer is clear: No, you can’t. Although charter schools admit their students through a blind lottery, that doesn’t preclude selection bias. And the critics have a point. It seems reasonable to conclude that parents desiring a more challenging education for their children are likely to seek out better schools, as this recent study in Oakland found. Of course their children will end up earning college degrees at higher rates.

And we know that in many cities, fewer parents whose children have learning disabilities choose charters. Same with children who are English language learners. Of course the college outcomes are different.

To charter supporters, the selection bias issue looks a little different. They point to numerous studies in cities such as Los Angeles, New York, and Boston that show charter schools outperforming equivalent traditional schools, with many of the networks enrolling equal percentages of students from low-income families, as well as special education students. Selection bias, for example, can’t explain the success seen at networks such as Boston’s Brooke Charter Schools.

So what’s the bottom line on selection bias?

That issue won’t be settled in this reporting project. In many ways, the self-selection debate is more about scoring debating points than about anything real that affects students and families. If by attending college-oriented charter schools, many students from low-income families end up graduating from college who otherwise would never even have considered college, is that a bad thing?

In the real world, selection bias triggered by parents signing up for charter school lotteries pales in comparison with the blatant selection bias found in the many traditional schools that students must test into. In New York City, for example, 98 traditional middle schools control enrollment through screening.

So the bottom line, at least from my perspective, is that whatever advantages charter schools gain by requiring parents to sign up for lotteries are dwarfed by the strong outcome differences: The college success rates at the top charter networks, depending on the comparison group, are anywhere from three and a half to five times greater than the rates for comparable students who don’t attend charters.

That’s an effect size that can’t be explained away by selection bias. Keep the KIPP data in mind — a 38 percent success rate (46 percent in the New York region) that takes into account high school dropouts. Something is going on here, something probably worth replicating by all schools serving students in poverty.

