Did that government program really work? How do you know?
Say you devised a home visit program that regularly sent helpers into the homes of young, poor women after they gave birth, providing assistance with parenting and other skills. Then suppose that some years later you surveyed the women and found that many had substantially improved their employment levels, their kids' achievement, their health.
You'd be tempted to declare victory and seek money to expand your program, right?
But what if another group of similarly young and poor moms got no help from your program yet several years later showed the same improvement in employment, achievement and health? Did your program have the impact you hoped for?
That's a question we aren't prepared to ask nearly enough, Jon Baron told a couple hundred policy types from around Minnesota Wednesday at the University of Minnesota College of Continuing Education's 28th annual Conference on Policy Analysis. Baron is the president of the Coalition for Evidence-Based Policy, an 11-year-old non-partisan organization in Washington, D.C., that argues for more rigor in figuring out which government programs actually work and which don't.
His talk was the keynote address for the conference on the St. Paul campus of the university. I was there to moderate a panel on rural vitality, but Baron's message caught my attention, partly because it appealed to my sense that we often lack the basic statistical skills to evaluate what's going on around us. From cancer screening to climate change, reading scores to political polls, we're surrounded by data but often have difficulty knowing when to be convinced by it.
Baron's principle seems like a no-brainer. Yet he said government usually neglects to set up randomized studies or other means of comparing equivalent groups of subjects to learn something important about the work it does.
The home visit program was a real-life example, he said: the positive results that seemed to be produced by the program more likely came from people's tendency to rebound from the most trying times of their lives. (Regression to the mean, for the statistically inclined.)
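To see why a comparison group matters, here is a minimal sketch, not from Baron's talk, that simulates regression to the mean: people are enrolled because a noisy outcome looks bad at baseline, the hypothetical "program" has zero true effect, and yet both the program group and a no-program group show a sizable average improvement at follow-up. The numbers and thresholds are illustrative assumptions only.

```python
# Illustrative simulation (assumed numbers): people enrolled at a low point in a
# noisy outcome tend to look better later even when nothing was done for them.
import random

random.seed(0)

def follow_up_change(n=10_000, true_effect=0.0):
    """Average change for people selected because their baseline score was low."""
    changes = []
    for _ in range(n):
        underlying = random.gauss(50, 10)            # person's typical level
        baseline = underlying + random.gauss(0, 10)  # noisy measurement at a hard time
        if baseline < 40:                            # enrolled because things looked bad
            later = underlying + random.gauss(0, 10) + true_effect
            changes.append(later - baseline)
    return sum(changes) / len(changes)

print("Program group (zero true effect):", round(follow_up_change(true_effect=0.0), 1))
print("No-program group:                ", round(follow_up_change(true_effect=0.0), 1))
# Both groups "improve" by roughly the same amount purely from regression to the
# mean, which is why you need an equivalent comparison group to detect a real effect.
```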
The housing crisis was a classic missed opportunity, Baron said. As more homeowners sank underwater on their mortgages, no one really knew what would help most -- lowering interest rates, extending mortgage terms or forgiving part of the debt. It was a prime opportunity to find out.
"But nobody did the study and we're still in the dark about which homeowners' aid is effective," Baron said.
Baron's audience Wednesday consisted heavily of state employees involved in a great variety of programs. The state departments of revenue, health, employment and economic development, transportation and more were well represented. It seemed clear that Baron thought they, not elected officials who often draw conclusions from something other than evidence, were the key to more rigorous accounting.
If you want to see examples of government programs Baron thinks were well-studied, visit this page of his organization's website. It covers a wide range, from obesity programs to fighting parasites in Kenya.