Helping people fill out financial aid forms (at H&R Block!) increases the rate of college attendance

Eric Bettinger, Bridget Terry Long, Philip Oreopoulos, and Lisa Sanbonmatsu write:

Growing concerns about low awareness and take-up rates for government support programs like college financial aid have spurred calls to simplify the application process and enhance visibility.

Here's the study:

H&R Block tax professionals helped low- to moderate-income families complete the FAFSA, the federal application for financial aid. Families were then given an estimate of their eligibility for government aid as well as information about local postsecondary options. A second randomly-chosen group of individuals received only personalized aid eligibility information but did not receive help completing the FAFSA.

And the results:

Comparing the outcomes of participants in the treatment groups to a control group . . . individuals who received assistance with the FAFSA and information about aid were substantially more likely to submit the aid application, enroll in college the following fall, and receive more financial aid. . . . However, only providing aid eligibility information without also giving assistance with the form had no significant effect on FAFSA submission rates.

The treatment raised the proportion of applicants in this group who attended college from 27% (or, as they quaintly put it, "26.8%") to 35%. Pretty impressive. Overall, it appears to be a clean study. And they estimate interactions (that is, varying treatment effects), which is always, always, always a good idea.

Here are my recommendations for improving the article (and thus, I hope, increasing the influence of this study):

1. Show some data. I want to see scatterplots with the outcome on the y-axis, pre-treatment variables on the x-axis, and individuals shown by points (treated units as circles, controls as dots). I wanna see what's going on here. The data are discrete, so maybe plot some binned averages. For an x-axis you can use the combined linear predictor, the X*beta for all the pre-treatment X's put together, with beta as fitted from the regression model.

The only graphs in the actual article are graphs of the estimated model. That's fine if you fully believe the model, but I'd be more convinced with some data.
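To make the suggestion concrete, here's a minimal sketch of the binned-averages idea, on simulated data (the covariates, effect sizes, and sample size below are all made up for illustration; the real analysis would use the study's pre-treatment variables and fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# hypothetical pre-treatment covariates and a randomized treatment indicator
X = rng.normal(size=(n, 3))
z = rng.integers(0, 2, size=n)
# simulated binary outcome (think: enrolled in college) with a small treatment effect
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * z - 1.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# combined linear predictor X*beta, with beta fitted by least squares
beta, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
score = X @ beta

# binned averages: decile bins of the linear predictor, mean outcome per bin,
# computed separately for treated and control units
edges = np.quantile(score, np.linspace(0, 1, 11))
idx = np.digitize(score, edges[1:-1])  # bin index 0..9 for each unit
treated_means = np.array([y[(idx == b) & (z == 1)].mean() for b in range(10)])
control_means = np.array([y[(idx == b) & (z == 0)].mean() for b in range(10)])
bin_centers = np.array([score[idx == b].mean() for b in range(10)])

# plot if matplotlib is available (treated as circles, controls as dots)
try:
    import matplotlib.pyplot as plt
    plt.plot(bin_centers, treated_means, "o", label="treated", mfc="none")
    plt.plot(bin_centers, control_means, ".", label="control")
    plt.xlabel("combined linear predictor")
    plt.ylabel("proportion enrolled")
    plt.legend()
    # call plt.show() or plt.savefig(...) to render
except ImportError:
    pass  # the binned averages above are still usable without plotting
```

The vertical gap between the two sets of points at each bin is a rough picture of the treatment effect, and any trend in that gap across bins hints at interactions.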

2. Display the key causal inferences graphically. Nothing cute needed here, I'm just thinking of a coefplot such as on page 419 of this article (although slightly different, because I'm suggesting they plot all their different estimated treatment effects from their different models, I'm not saying they should plot all their regression coefficients from a single model). They have multiple outcomes so they could do a multicolumn display: one column for each outcome and one row for each group where treatment effects are being estimated.

3. Get rid of all the tables and replace with graphs. Just for example, do readers really need to know that the rate was ".374" for some group in Table 1? Yeah, yeah, I know this won't be done. Still, it's the right thing to do.

Anyway, suggestions 1 and 2 would be a great start, I think. The first would build confidence in the results and the second would summarize their findings. I read over the paper and it was difficult for me to pick out all the different numbers. If they'd made a graphical display of their results, I'd've displayed it above, the graph would've grabbed everyone's attention, and right now everybody would be talking about this study. Also maybe we'd have a better understanding of what they've found in this fascinating experiment.

4. Post the article on the web. As it was, it was not easy for me to get it, and I don't know if I'm allowed to post it here.
