Cracking the code: Funders the key to unlocking evidence-based practice

Posted on 17 Dec 2018

By Kathy Richardson, Chaos Controller, Our Community

Our Community's Chaos Controller and executive director, Kathy Richardson, examines how we might create a sector with real incentives for using evidence - and a world where dollars follow it. Her thoughts range over some of the latest trends in Australia around evidence, measurement and social impact.

In October 2014 I thought I might just have cracked the code. I was about three-quarters of the way through a seven-week safari across the United States investigating tools and practices that Our Community could use to catapult the Australian social sector - en masse - all the way over the fence and far into the field of evidence-based practice.

I'd heard some good stories along the way, but there had also been some moments of deep despair. "It won't work," many people were telling me. "It's too complex, there are too many variables." "There's no way you can do this systemically." "You'll never pull it off."

Then I met a young woman who'd seemingly revolutionised the provision of homelessness services in her locality through the clever use of data. She'd come up with a system, based on analysis of past practice and results, to better categorise the people seeking help to determine how best to apply the city's scarce emergency housing resources. Score a "one" and you'd be offered overnight accommodation but nothing more (that would most likely be enough to kickstart a recovery). Score a "two" and you'd get thorough, ongoing help (overnight accommodation alone wouldn't help your plight). Score a "three" and you'd be turned away - the algorithm showed there wasn't much that could be done for you. All the trials had shown it was really working. She'd done it!

While I was congratulating her, my mind jumped ahead to how I was going to take this tale back home to Australia and apply its lessons to help the tens of thousands of not-for-profit groups we work with to shift to evidence-based practice. The answer was good data!

Then I came crashing down to earth.

Kathy Richardson, pictured at Our Community's Communities in Control Conference, has a passion for technology systems that deliver on their promises. Picture: Ellen Smith.

"It didn't work," she was telling me. "The social workers wouldn't use the system. They gamed the survey to ensure it would produce the results they wanted."

On reflection, I couldn't blame them. If I were the one sitting across from a person in desperate need, I too would find it pretty hard to say, "Go away, you're a lost cause."

So where does that leave us? How do we shift not-for-profit organisations towards evidence-based practice when there are clearly other forces at work? How do we convince well-meaning, hard-working groups to abandon the things they've always done - after all, they believe they're effective, perhaps they help some people sometimes, their funders are already on board - and use their precious resources to do the things that have actually been proven to work?

Despite what many people think, individual donors are not strongly motivated by evidence. We've operated Australia's longest-standing online donations website, GiveNow, since 2000 and one thing we've learned is that people give not because they've seen the stats, but because someone has asked them to give. People give to people.

Institutional donors - grantmakers - are a better bet as drivers of change towards evidence-based practice. They're generally more strategic than individual donors. Most have an outcome in mind as their goal when they distribute their pot of funds - they want to cure cancer, say, or improve the natural environment, or reduce homelessness.

They're also more likely to feel pressure to deliver on their aims. Their trustees (or, in the case of government grantmakers, their secretaries and departmental chiefs) want to know that the precious funds under their control are being used to the best possible effect.

There's another reason why institutional grantmakers could present the best chance we have to shift behaviour. Not-for-profits, in the main, are willing participants in the shift to evidence-based practice. The problem is they're so busy doing the things that need to be done - planting the trees, supporting the sick, running the festival, finding housing for the homeless - and they're so chronically short of funds that they have neither the time nor the money to put in place the systems that will help them track their impact.

Grantmakers have the capacity and the means to help. They can develop and disseminate generalised or sector-specific templates. They can run or fund workshops and training. They can provide one-on-one assistance where appropriate.

Importantly, grantmakers can also allow a proportion of their grants to be used for measurement and evaluation. In fact, they can insist on it, making it a condition of their funding that not-for-profit organisations track and report on their progress. By our estimates, Australian grantmakers dispense around $80 billion a year - that's a giant bunch of carrots.

There are signs of change already afoot. Modern grantmakers (well, the best ones) are strategic, and they want to know what outcomes their grants are purchasing. At Our Community we work with hundreds of grantmakers every year. They have collaborated with us to develop what's become Australia's leading grantmaking software, SmartyGrants, and they're working with us again to nut out how to add tools to SmartyGrants that will help them define and track their outcomes.

OK, great. Grantmakers (in the main) have the resources, not-for-profits (in the main) have the will, and the technology is now available that will help us collect and analyse the data.

Grantmakers have the technology and the motivation to use evidence effectively to produce change with their investments.

So what's the problem? Why aren't we there yet?

The truth is this stuff is hard. It is messy.

Technology can't solve every problem. Human behaviour is hard to predict. Putting in place a sensible measurement and evaluation framework is complex and expensive. Counterfactuals are confounding. Every answer throws up another question. Poorly designed evaluation processes can produce misleading results. Outcomes are long-lived but can be slow to emerge. The attention of funders and project managers, by contrast, is quick to take hold but short-lived.

It gets worse. The system we're working within is actually working against us.

Grantmakers themselves often don't have the tools they need to help them define their own outcome goals, match them with their grantees' goals and define what data they need to collect (much less crunch it all together). They have money to give away but not much to spend on themselves.

They're rewarded for getting the money out the door efficiently and with good photo opportunities, for collecting the receipts to prove that the funds weren't misspent, for avoiding bad headlines. They're not rewarded for changing the world.

Meanwhile, their grantees - the organisations at the pointy end - are time-poor and resource-constrained. They're rewarded for doing, not analysing. They're asked to innovate. No one wants to fund them to do what they've been doing forever (even if they know it's working). They're punished if they spend too much money (or, in many cases, any at all) on salaries, overheads and administration, although nobody can explain exactly how they're supposed to distribute surveys without a photocopier. They're put on the blacklist if a project turns out differently than expected.

All this results in a system that's about reporting, not learning; talking up the positives rather than reflecting on how to do better next time; avoiding complexity; obsessing over headcounts. The system has in-built design faults that are working to impede a shift to evidence-based practice.

What's required is a revolution in how we think about grantmaking. In an era when science is increasingly being defunded and ignored, we need to go the other way. At high school we were taught to identify a problem, do some research, pose a hypothesis, then set about testing it. We didn't get marked down if our hypothesis was not supported by the results. The experiment was only a dud if we failed to learn from it, failed to share what we were learning with others, failed to progress.

We need to turn our communities into laboratories (though remembering that people are not lab rats). We need to clearly define our outcome goals (put our missions back at the core). We need to research and articulate our cause-and-effect hypotheses, and re-engineer our processes so we can draw a clear line from goal to results.

We need to lift our expectations (know more about what we're doing) while working within our competencies (don't try to achieve world peace; ask one important question and answer it reliably).

All this can be done. We've always had the wit and kindness; now we have the technology. Let's do it.
