Like a researcher testing a new drug, Howard Bloom designs randomized trials. But instead of looking at whether a medicine is effective, Bloom tests programs to see whether they have the impact they’re supposed to have.
Are people getting more and better jobs? Are test scores improving? Are more students graduating? And -- most importantly -- are people’s lives improving as a result of a particular program?
Bloom, the chief social scientist at MDRC, has spent 40 years designing and implementing rigorous evaluation. But even organizations that can’t hire a social scientist can evaluate their programs, he said.
Ask the laypeople in your congregation to help you, he said.
“Bring a little expertise in, people who will ask the questions that maybe you wouldn’t have thought of or maybe you didn’t even want to address,” he said.
“Ask the hard questions: How are we going to know if this works? Why do we really think it’s going to work in this place, for these people? It might’ve worked over there; why do we think it’s going to work here?”
Since 1999, Bloom has led the development of experimental and quasi-experimental methods for estimating program impacts at MDRC, a social policy research organization.
Before that he taught at Harvard University and New York University. Bloom has a master of city planning degree, a master of public administration degree, and a Ph.D. in political economy and government from Harvard University.
Faith & Leadership spoke with Bloom while he was at Duke to lecture on evaluation and a large-scale small-school reform initiative in some of New York City’s poorest neighborhoods.
Q: What advice would you give to people who are trying to help the poor in a variety of ways but who don’t have expertise in rigorous evaluation?
Organizations oftentimes create really good ideas -- an educational intervention, a welfare intervention, a housing policy intervention -- and the people who create these interventions usually are completely convinced from the outset that it must work; it just stands to reason that it must work.
But we know -- “we” being myself and organizations that do rigorous evaluation -- that not all ideas that seem good on their face do in fact make a difference, if for no other reason than that the problems that people are trying to address are so serious and so ingrained and so deep and long-standing that it takes some pretty substantial resources and a lot of good luck to really make a difference.
I would never argue that everything should be evaluated all the time; nothing would get done. You have to try things, assess them as best you can.
[But] at a certain point -- if you’re thinking of bringing it to a larger scale, before you go to a series of foundations to raise money -- you really ought to start thinking about a serious, rigorous evaluation to make sure that those resources are used wisely.
In the meantime, you ought to have as much information as you can get about the operation of your intervention, the kinds of people it’s serving, what their outcomes are, what happens to them subsequently, what their perceptions of their experience in the intervention are.
Those kinds of things, I think, most organizations can do. It still requires some outside expertise, and the idea of hiring a consultant to help you structure that is a good one, in my opinion.
Q: What should you evaluate?
Let me separate two things. One thing is to assess the operation of an intervention. Are the steps that are supposed to be carried out actually being carried out? Are the people who are running the intervention actually doing what the intervention’s supposed to do? You start there. Without that, you really have pretty much nothing.
The next question is, What’s actually happening to the people, the clients of the intervention? What services are they experiencing? And, if possible, how do they perceive those services, and how do they perceive their experiences in the intervention?
And then what happens immediately thereafter? If it’s an employment intervention, do they get a job? If it’s an educational intervention, do they stay in school or not? That’s all-important. You might call that monitoring the operation and the outcomes of it.
Now, what we do is try to estimate the impacts of those interventions. It’s an incredibly important distinction. [Outcomes and impacts] are completely different.
In an employment intervention for unemployed people, you would want to measure what percentage of them got a job. That’s an outcome.
The difference between an outcome and an impact is the following: the impact is, “Well, how much difference did the intervention make?”
You don’t know, without actually measuring, how many of these people would’ve gotten a job without the intervention. You can’t assume that none of them would’ve gotten a job.
So the impact of the intervention is the difference between the outcome and what’s often called in the business the counterfactual, which is the percentage that would have obtained a job without the intervention. And the difference between those two is the value-added of the intervention. How much difference did it make?
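[A hypothetical illustration: suppose 60 percent of a job program’s participants find work. That 60 percent is the outcome. If an estimated 45 percent would have found work without the program -- that’s the counterfactual -- then the impact is the difference, 15 percentage points.]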
People who are not used to evaluation really confuse those two concepts.
And that difference is at the heart of rigorous evaluation, in terms of impacts, because ultimately, if you’re putting resources into something -- forget about money; just people’s time, people’s goodwill, people’s energy -- you’d like to know how much difference that made.
And the kind of research that’s required there really does require outside expertise. We try when possible to run those trials very much like a medical randomized trial. That’s considered the gold standard of evaluation, when you can do it.
What you do is like in a medical trial. We’ve randomized in education, and we’ve randomized in employment programs; we’ve randomized in welfare-to-work programs.
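[In a randomized trial, eligible people are assigned by lottery to a program group or a control group; the control group’s outcomes stand in for the counterfactual, so the difference between the two groups’ outcomes estimates the program’s impact.]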
And there’s a lot of ethics behind it. There’s a lot of discussion about, “How do you do it right? How do you deal with human subjects?” And you’ve really got to do this right, and that requires professionals. That requires security measures with respect to data; that requires procedures to make sure that human subjects are being treated appropriately and that these kinds of trials are only run when the benefits exceed the cost, both the cost of running the trial and the cost of being in the trial.
So there’s a tremendous amount of thinking that goes behind the conducting of these things, the analyzing of these things and the justification for these things.
Q: And correct me if I’m wrong, but it sounds as though this has to be built into the system from the very beginning?
You bet.
Q: So you can’t say, “We’ve been doing this for 10 years; tell us what our impact is?”
Right. That is huge. We’re always trying to get in as early as possible so that we can design it -- it’s always going to be intrusive to some extent; it’s always going to change operations somewhat. But we’re always looking for ways to embed it in as natural and flowing a way as possible.
And you’ll have many more failures than you will successes. I’ve been doing this for about 40 years now, and what I’m talking about today [the school experiment in New York City] is really a successful intervention, and that’s a rarity.
That’s a rarity because, like I said, things look good, they sound logical, they make sense, but they usually don’t work, in terms of making a difference.
People might get jobs, and they might actually like the experience they had in the intervention, and they might speak highly of it, but it may or may not have actually made a difference in their lives.
Q: What would you recommend for people involved in small interventions that can’t afford or don’t require this kind of rigorous evaluation?
One thing I would recommend is that they read some of the literature to see what evidence already exists about the likely success of that kind of intervention.
Q: And when you say literature, you mean scholarly literature?
Yes. Some of it’s too technical to read, but in the evaluation literature, there’s a fair bit that’s not necessarily too technical.
But maybe you have doctoral students who are part of your congregation, or a professor in your congregation who has doctoral students or colleagues knowledgeable in this field. Maybe they could pick articles that would be good to read -- articles that say something about the evidence base.
There are people that are trying very hard to be systematic about accumulating knowledge across these various evaluation studies that are being done.
In education there’s something called the What Works Clearinghouse. You don’t need to reinvent the wheel, particularly wheels that don’t work. So that, I think, is a place to start.
The next place to start is to think carefully about simply assessing the operation of your program. You may well have congregants who are in this business. If you have congregants who are in advertising or marketing or something like that, you can get them on a committee and draw on their professional expertise.
It’s been my experience that they’re usually delighted to be asked to bring their special expertise into their congregation and use it.
You’re not going to go nuts and try to emulate a consulting firm, but bring a little expertise in, people who will ask the questions that maybe you wouldn’t have thought of or maybe you didn’t even want to address.
Ask the hard questions: How are we going to know if this works? Why do we really think it’s going to work in this place, for these people? It might’ve worked over there; why do we think it’s going to work here? You know, somebody will do that.
Q: That’s a good point, because I think a lot of times, as you say, people get excited and they fall in love with their idea, and if that person is your boss or your religious leader, it may be hard to ask hard questions.
Right. I mean, if you will, there’s faith and there’s faith, right?
Q: In terms of outcome, what would be the minimum of useful data for people making decisions? Is there any kind of rule of thumb about that?
The only rule of thumb is that there’s no rule of thumb. I’m not trying to be cute; there’s no one-size-fits-all. You’ve got to really focus on your objective. If you ask two people whether a 10-percentage-point increase in a graduation rate is big, you’re going to get two different responses, based on what their expectations were.
There’s some interesting work being done on long-term follow-up of some of these interventions. Sometimes they make a big difference, and sometimes they don’t. Sometimes an intervention that didn’t seem to make much difference in the very short run, particularly on test scores, turns out to matter later. Sometimes it doesn’t make a difference on test scores but makes a difference on other things.
You’ve got to have a certain amount of tolerance. Science is an accumulation. There are no shortcuts; it’s just something that has to accumulate.
Q: Does your faith life influence the work that you do?
Actually, it doesn’t. My work ended up affecting a little bit how I acted out my faith. I was a university professor for, I don’t know, 20-some-odd years, and now I work at a full-time research institute. One of the things I did in our old congregation was help [Rabbi Rick Jacobs at Westchester Reform Temple in Scarsdale, N.Y.,] run an adult education program, and that was in no small part because I was a university professor.
Q: Especially in a time of declining revenues, there’s a lot of desire to turn to laypeople, and laypeople want to contribute.
Yes, they do. They really do. If you can draw on somebody’s expertise, they’re thrilled.
I also play music, and one of the biggest thrills I get is playing in High Holiday services in our congregation in Nantucket, and it really works; it really moves people. I mean, I don’t move people when I show them my findings.
And I must tell you that I find that as rewarding as anything I’ve done in my whole life, by orders of magnitude. I can really feel the power of it.