Transforming your intentions into outcomes

Posted on 20 Nov 2018

By Matthew Schulz, journalist, Our Community

As the grantmaking world comes to grips with the rapid rise of outcomes measurement, there's no shortage of experts, but who knows what really works?

We draw your attention to the keynote speaker at the "Prepare for Impact" Grantmaking in Australia conference, Dr Rory Gallagher. He heads the Australian branch of the Behavioural Insights Team (BIT), a social-purpose company part-owned by the UK government.

BIT began as the world's first government institution dedicated to the application of behavioural sciences, and its lessons are increasingly being applied here in Australia, including by the NSW government and, more recently, the Vincent Fairfax Family Foundation.

Dr Gallagher explained that BIT aims to improve outcomes by developing policy based on a realistic understanding of human behaviour, an approach that demands a high standard of evidence.

Dr Gallagher spelt out an assessment practice he summarised as "test, learn and adapt".

He said good empirical data generates insights into human behaviour that can better direct funding.

Millions of dollars in funding wasted

Dr Gallagher busted false assumptions that can lead to failure, citing the US-based "Scared Straight" program, which sent ex-convicts to speak with juvenile offenders to set them on the right path, only to increase crime.

And he highlighted the vast number of government-funded programs - and the wasted millions, or even billions of dollars - that couldn't be shown to work.

He cited a 2014 UK study of programs that showed a "weak or no positive effect" in the following proportions:

  • Education: 90%
  • Employment/training: 75%
  • Business: 80-90%
  • Medicine: 50-80%

Dr Gallagher said those serious about wanting to know whether a program is effective should apply randomised controlled trials (RCTs), in which a group receiving an intervention is compared with a control group unaffected by it.
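To make that logic concrete, here is a minimal sketch (ours, not from the talk) of the reasoning behind an RCT: participants are randomly split, only one group receives the intervention, and the difference in outcome rates is tested against what chance alone would produce. All the numbers below are hypothetical.

```python
# Minimal RCT sketch: random assignment, then a two-proportion z-test.
# HYPOTHETICAL example: 2,000 job seekers; we assume the intervention
# lifts the employment rate from 30% to 36%.
import math
import random

random.seed(42)

participants = list(range(2000))
random.shuffle(participants)               # randomisation removes selection bias
treatment = set(participants[:1000])       # this group gets the intervention
# the remaining 1,000 form the control group, untouched by it

def outcome(person):
    """Simulate whether a person finds a job (1) or not (0)."""
    rate = 0.36 if person in treatment else 0.30
    return 1 if random.random() < rate else 0

results = {p: outcome(p) for p in participants}
treat_rate = sum(results[p] for p in treatment) / 1000
ctrl_rate = sum(v for p, v in results.items() if p not in treatment) / 1000

# Two-proportion z-test: is the gap bigger than random variation explains?
pooled = (treat_rate + ctrl_rate) / 2
se = math.sqrt(2 * pooled * (1 - pooled) / 1000)
z = (treat_rate - ctrl_rate) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"treatment: {treat_rate:.1%}, control: {ctrl_rate:.1%}")
print(f"effect: {treat_rate - ctrl_rate:+.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

The point of the control group is exactly what Dr Gallagher describes: without it, there is no way to tell whether the 36% figure reflects the intervention or simply what would have happened anyway.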

Medical research has led the way in the use of RCTs since adopting the method in the 1950s.

Nowadays, we wouldn't expect to be offered a drug that hadn't been rigorously tested. Yet the same can't be said for many of the social sector programs that win grants, though that is changing.

Where are you on the hierarchy of evidence?

Dr Gallagher described anecdote as the lowest standard of proof or evidence, yet it's not unusual, he said, to find funders and government agencies that "cherry pick" positive results "alongside a photo op" to suggest a program is effective.

Yet how would anyone know that a particular intervention caused a particular outcome, such as a jump in employment rates, without a control group to show what happened to those who didn't receive it?

While expert opinion was somewhat more compelling than anecdote, "before and after" studies and non-randomised comparison studies sat higher again in the hierarchy of evidence, Dr Gallagher said.

[Figure: BIT's hierarchy of evidence, shown as a pyramid]

Randomised controlled trials and systematic reviews sit at the top of the "hierarchy of evidence"; they are the only ways to be sure your program has had the effect you're claiming, or that the grants you've distributed have really had the desired effect, he said.

He told conference delegates that evidence trumped "common sense" and expert analysis, and could help reduce the influence of politics, media and personality in decision-making.

Evidence-based action paves the way, pays its way

Evidence-based action that drew on data and the psychology of human behaviour could provide value for money even after the up-front costs of the measurement itself were accounted for, Dr Gallagher said.

He cited a 2013 UK government-sponsored intervention that targeted the over-prescription of antibiotics, in which letters were sent to the highest-prescribing doctors.

Before the BIT project, the UK government had planned a £23 billion scheme to cut prescription rates through incentives.

It cost £4,000 for BIT to send letters to half of the top 20% of over-prescribing doctors, alerting them that "the great majority (80%) of practices in London prescribe fewer antibiotics per head than yours", and providing carefully worded messages aimed at nudging them towards better practice.

Over six months, those who received the letter reduced their antibiotic prescribing rates by 3.3%.

The other half of the top 20% of over-prescribing doctors - those who weren't contacted - didn't change their behaviour.
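For readers who want to see how the value-for-money argument stacks up, here is a hedged back-of-envelope sketch. The £4,000 mailout cost and the 3.3% reduction come from the article; the number of practices, the prescription volume and the per-item cost are purely hypothetical placeholders, chosen only to show the shape of the calculation.

```python
# Back-of-envelope sketch of the value-for-money argument.
letter_cost = 4_000          # GBP, reported cost of the mailout
effect = 0.033               # 3.3% reduction among letter recipients (reported)
practices = 800              # HYPOTHETICAL number of practices contacted
items_per_practice = 2_500   # HYPOTHETICAL antibiotic items per practice, 6 months
cost_per_item = 3.0          # HYPOTHETICAL GBP cost per prescription item

items_averted = practices * items_per_practice * effect
savings = items_averted * cost_per_item
print(f"items averted: {items_averted:,.0f}")
print(f"gross savings: £{savings:,.0f} vs £{letter_cost:,} mailout cost")
```

Even under conservative assumptions, a small percentage effect across a large volume of activity can dwarf the cost of the nudge itself, which is the core of Dr Gallagher's argument.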

The same method was employed in Australia last year with great success.

The Behavioural Insights Team has used similar methods to improve tax compliance, sending letters reminding taxpayers that 90% of people pay their tax on time; the nudge has saved hundreds of millions of pounds.

The same methods were used to review the value for money provided by a variety of education interventions. The studies found that strong feedback from teachers to students is much more effective in improving student results than, say, performance pay or smaller class sizes.

He said when it came to education, the work of the UK Education Endowment Foundation was among the "best in the world" at evidence-based policy making.

Its "Teaching and Learning Toolkit" for instance, ranks the most effective education methods by cost, evidence and expected timelines for results.

It describes its own work as: "Testing the impact of high-potential projects to generate new evidence of 'what works'."

Dr Gallagher also cited a series of other interventions that had been proven to work using RCTs, such as a project that engaged teenagers with their homework via a text-messaging service, and a project to boost organ donor rates by issuing prompts when drivers renew their licences.

Each case used both evidence and behavioural science to nudge people towards behaviour with the biggest benefits.

BIT is currently working with the Vincent Fairfax Family Foundation to examine the ethical development of young people in a world of social media.

Dr Gallagher said the organisation was about to embark on a series of RCTs to test "how we can encourage kids to use [social media] tools more effectively".

A critical issue for grantmakers and governments alike, he said, was to know what to do once an intervention proved successful.

He said it was one thing to be inventive with new ideas and programs that worked, but it was up to social impact organisations to do what's needed to extend effective methods.

He said the money spent on work to "support adoption and the spread of innovation" was still just a fraction of the money spent on research and development, but that needed to change as outcomes measurement methodology entered the mainstream.

That change needed to include investments to "scale up" projects through good systems.

Start with "I don't know"

After his presentation, Dr Gallagher said accepting a lack of knowledge about whether a program worked was often the starting point, something he described as the "I don't know" factor.

"It's often said that 'I don't know' are the three hardest words in the English language. It's a really hard thing to acknowledge that we actually don't know if something's going to work."

"It's really the dirty secret of government that millions, if not billions, of dollars are spent on programs that we have no idea if they're effective or not, or maybe even having an adverse effect."

He stressed that grantmakers should always ask, "How are we going to know if this has really made a difference?"

Beyond RCTs, "there are lots of other options you can have for measuring impacts".

He said that at the "robust end of the scale", grantmakers should examine the "Test, Learn and Adapt" research, which sets out the steps involved in running detailed evaluations, and which might prompt grantmakers to re-think the way they designed and delivered grants.

At the other end of the scale, options for grantmakers included "something as simple as merely asking the question and asking what metrics are we using to know if we've had an impact or not".

Evaluation is not free

Dr Gallagher said: "Often people see it [evaluation] as a sort of 'add-on' cost, [but] the way that I like to think about it is, what are the costs of not doing that?"

Rather than just focus on the cost of evaluation, grantmakers should consider the millions lost on "stuff that doesn't work".

The key is that evaluation must be planned at the start of a project, he said.

"Build that evaluation in from the start and think about what things are we already collecting? How can we use that to get a sense of worth? That's much more likely to be cost effective."
