Good grants need a kernel of truth, says Squirrel

Posted on 04 May 2018

By Matthew Schulz, journalist, Our Community

The unforgettably named Squirrel Main – research and evaluation manager at the Ian Potter Foundation – says good evaluation is the difference between effective grants and wasted effort.

Dr Main was commenting on the reasons for the foundation’s decision to release detailed information to grantmakers and grantseekers through its online “Knowledge Centre”, which compiles lessons gleaned from its programs.

Established late last year, the Knowledge Centre hosts a series of Foundation publications containing lessons for grant recipients, grantmakers and other funders.

Lessons from its general, arts, community wellbeing, education, and medical research programs have already been released, with more program-specific reports expected to come this year.

The Ian Potter Foundation is well known as one of Australia’s biggest philanthropic organisations, distributing nearly $23 million across 193 grants in 2016–17, and tallying $273 million since its inception in 1964.

The Melbourne-based organisation’s grants programs cover the arts, the environment, science, medical research, education, community wellbeing, health, disability, knowledge and learning.

Its funds aim to “make a difference” by promoting excellence, innovation, social change, creativity and our nation’s capacity, while reducing disadvantage.

It is an ambitious mission, with many successes, but behind some of that achievement lies a commitment to understanding what works, and what doesn’t.

That is Dr Main’s engine room, and there’s no doubting her keen interest in understanding – and sharing – those insights.

“We understand that having good data can inform good decisions. We are focused on excellence, and that includes in the arena of looking at results.”

The Ian Potter Foundation's research and evaluation manager Dr Squirrel Main.

It is that kind of attitude that led the foundation to receive the 2017 SIMNA Award for Outstanding Social Impact Measurement Funder, for its work improving its measurement and evaluation methods.

Foundation’s actions based on on-the-ground knowledge

The foundation recently published information it has compiled on 1000 grants from several grant programs between 2009 and 2017.

Dr Main summarises some of the key lessons for grantmakers and funders when it comes to evaluations:

  • Understand the Australian context
  • Create a pool of trusted evaluators
  • Set realistic timelines
  • Host “welcome to the family” workshops for recipients to hone goals and expectations
  • Involve other key stakeholders as early in the process as possible.

The published lessons combine technical, philosophical and practical observations and advice.

While Dr Main accepts that the 14-page document Key Learnings for the Foundation, aimed at funders, is a “laundry list”, she stresses that any grantmaker working through it will gain greater clarity about their own program, and says grantseekers could also find it valuable.

Jobsupport client Micah Passfield (right) and co-worker Stuart White making salads. Jobsupport has received its third grant from the foundation.

The material is organised into stages – before, during and after a grant – as well as thematically, and much of it backs the findings of our own Grants in Australia survey series.

The lessons cover designing, implementing and assessing grant programs.

Foundation staff have also compiled a series of scales and measures for evaluators to consider, depending on whether they’re working with youth and children; health issues such as diabetes; psychological issues such as loneliness or depression; bullying; cognitive abilities; or health service effectiveness.

It’s clear the list comes from on-the-ground grants experience, giving highly specific warnings and advice about expanding programs into remote areas, being alert to grant applicants who copy their CV directly from LinkedIn, and budgeting funds for relief teachers to ensure permanent teachers have time to evaluate their school-based projects.

Leading the team, Dr Main draws on more than 20 years of evaluation experience. She cut her teeth working in Californian schools, then headed to New Zealand, where she got her doctorate. Moving to Australia, she worked first for the Brotherhood of St Laurence, then Melbourne University, before heading up the unit at the Ian Potter Foundation.

Good measures help us all

Dr Main says there are good reasons for the foundation to release such valuable insights into grants evaluations.

“It’s important to be a team player and think of the sector as a whole,” Dr Main says.

“Operating in silos is not our modus operandi. We really want to make sure that we’re not just improving things for our grantees, but the more we can do for the philanthropic sector, the better.

“We can’t do it alone.”

Another example of this cooperative spirit is the foundation’s financial support for the Foundation Maps: Australia project being rolled out by Philanthropy Australia, which uses Our Community’s CLASSIE classification system and is compatible with both the SmartyGrants and Gifts grants management systems.

For similar reasons, the foundation is a member and financial supporter of the AEGN (Australian Environmental Grantmakers Network).

Local knowledge critical to good evaluation

Dr Main says one of her own top pieces of advice for evaluators is to remember that the measure of “success” is not the same across the world.

In Germany, success is often measured in terms of “meeting specifications”; in Japan, “continuous improvement”; and in the United States, “solving a crisis”. Australians, according to Dr Main, are far more concerned with relationships, and whether they “see a piece of me involved”.

She points to the work of Australian business thinker Colin Pidd, who co-authored the report Australian Cultural Imprints at Work (a cached version can still be found with a Google search).

Pidd found that Australians’ motivations, identity and expectations of their leaders were vastly different from those of people from other cultures.

“This means that before evaluations are done, particularly in Australia, it’s crucial to know your endgame stakeholders and get them on board early.”

Dr Main says there can also be significant differences in approaches to evaluation between philanthropic and government agencies.

“Government is very focused on outcomes measurement, and often they’ll have a buzzword of the time.”

She says it could be “investment logic model” or “program logic model”, for example, which are popular in Victoria and Western Australia.

Tasmania’s government, by contrast, is currently more focused on cost-benefit analyses.

And while philanthropy is known for risk-taking when it comes to selecting grant recipients, Dr Main says that for foundations, this can also extend to the evaluation process.

“Sometimes a tried and true (program that is) replicated is a wonderful way to go,” she says, if it is combined with a “developmental evaluation” during the start-up phase that might ask, for instance: “Is it smart for us to have the caseworker working on the weekend in our social enterprise?” Making those changes or “micro adjustments” early can deliver improvements well before outcomes measurement begins.

“Government tends to do more of that summative outcomes-focused measurement, whereas we can do what suits the grantee, and the beauty is that we can be flexible. We don’t have a form, or tick box, that must be filled in. Ideally, we can be a bit more relaxed and if it's helpful to the grantee we can meld or merge with the government standards.

“We’d be the kayakers if they are the ocean liners. Ocean liners can be great, but sometimes you need something smaller to see if there are icebergs.”

And are there icebergs?

“Yes there are.”

Don’t risk sinking the ship

It’s clear that for an evaluation expert, there’s nothing more distressing than watching government adopt a program before the evaluation is done, particularly when that evaluation subsequently shows the program “makes no difference over the status quo”.

“But it’s already been picked up. There is an iceberg ahead, and they take their ocean liner and go right into it. And then they wonder down the track: ‘Well, how come that program didn’t work? It must have been the organisation.’”

Dr Main says she’s seen as many as four such cases in which “you can see that it is … not the best use of resources, and you're almost better off not funding that program.”

“Don’t just assume what’s good, wait for the results,” Dr Main says.

Dr Main wouldn’t give actual examples, but offered this fictional case.

“Imagine we’re working with a project for kids in out-of-home care, and the funder finds some positive or minimal results, fairly equal to the control group. But government is happy to run with that in regional areas. So that makes us as a foundation look good, because the government has picked up our program, but really we’re not improving the state of out-of-home care in Australia.”

“The reason I’m personally passionate about this is that if you’re not carefully putting resources in, you’re causing harm … such as to children, women or marginalised people who are depending on the not-for-profits and services for good programs.”

Powerful evaluations, powerful results

On the other side of the equation, Dr Main cites the Federal Government’s Communities for Children Facilitating Partner program, which targets 52 disadvantaged communities across Australia.

Among the criteria for winning federal grant funding is a requirement that service providers spend at least 50% of their allocation on programs that have been proven to work, based on strict guidelines.

It is expected that the requirement will rise to 70% as more programs qualify for the list.

That list has been assessed by an expert panel of the Australian Institute of Family Studies (AIFS), and only programs with “high quality” evaluations demonstrating positive results (and no negative ones) make the cut.

The program has been adopted following strong sector support for more “evidence-based programs and practices, focusing on prevention and early intervention and moving towards true outcomes measurement and reporting”.

“My recommendation to funders is that if you’re going to fund something, make sure that it is rigorous enough to pass the AIFS standard,” says Dr Main.

Dr Main praised the University of Colorado, Boulder, and its Blueprints Program, which reviews the effectiveness of programs for youth at risk. Only 11 of 600 submissions were found to meet its exacting evaluation standards.

In response to that study, the City of New York decided to fund only foster care groups that used evidence-based programs, and in two years had slashed the number of kids in foster care from 15,000 to 10,000.

“That’s because they were using programs that worked.”

She says Australian grantmakers must follow suit, to ensure “we’re evaluating well enough to know either it worked, to learn from the mistakes to improve it, or to know when to let it go.”

“If there’s an iceberg, steer the other way.”

Quality in, quality out

There’s no doubt that evaluation can be a complicated business, with numerous models and even sub-models to choose from.

But Dr Main says the key is to ensure that those involved in the process are motivated and committed, and can work well with their grant recipients.

She draws a parallel with the finding by education expert Professor John Hattie that many educational systems will work if the teacher applies some key principles. The professor came to this conclusion after reviewing more than 500,000 studies.

Similarly, the concept of “teacher efficacy” can be applied to evaluation, she says.

“Funders … shouldn’t stress, just make sure it’s competent people doing it,” Dr Main advises.

“You pick good organisations, help them to find good evaluators and stay in touch, keep that relationship going. Be interested in the content of the evaluation, because it can help inform your future grantmaking decisions.”

The Ian Potter Foundation produces an annual list of recommended evaluators.

The Australian Institute of Grants Management, too, provides recommendations for consultants to help with grantmaking, particularly in conjunction with the SmartyGrants system.

To help grantseekers, get real about evaluations and timelines

Grantseekers are still overwhelmingly paying for their own outcomes measurements (Grants in Australia study, 2017), a fact that Dr Main says is plain “dumb”, given the importance of evaluations.

The Ian Potter Foundation, by contrast, has three streams through which it funds evaluations:

  • Funded directly as part of a project
  • Supplementary “impact enhancement grants” above the original grant to calculate the effectiveness of a program
  • Stand-alone funding for comprehensive evaluations, such as nearly $10,000 for a randomised controlled trial and cost-benefit analysis for the Children’s Protection Society.

The foundation conducts “welcome” workshops for new grant recipients, as well as subject-specific sessions covering, for instance, disability, employment or medical research, each addressing outcomes measurement in its field.

Dr Main meets grant recipients in person or via Skype to discuss evaluation and data methods at the start of the grant and connects them, if necessary, with other grantseekers with more advanced evaluation skills.

Helping grantseekers produce quality evaluations also means remaining flexible when, for example, a project runs unexpectedly over schedule.

“Foundations get caught up with their reporting cycles and needing to get that 4% distribution out the door,” Dr Main says, “whereas actually, the important thing is working closely with the grantees to have a high quality project, not one that just meets your fiscal year.”

Top tips from the Ian Potter Foundation

  • A poorly conducted positive evaluation is a waste of money (and not worth quoting)
  • Evaluation design must take account of such things as survey response rates, data accuracy, control groups (even in small studies) and before-and-after surveys or tests
  • Ensure raw data and survey questions are included in reports
  • Consider standard reporting measures (e.g. numbers employed, instead of percentages)
  • Be able to update KPIs if a program changes
  • High quality evaluations can take more than a year
  • Develop a pool of highly skilled evaluators
  • Budget 10% of funding for external evaluation in all large, multi-year projects
  • Partnerships with parent organisations, universities and others can assist in reducing evaluation costs
  • Include stakeholders early in designing and understanding evaluation measures
  • Include a cost-benefit analysis in the evaluation plan, if needed by stakeholders
  • Fund relief workers to allow teachers, for example, to participate in evaluation
  • Stick to deadlines when providing governments with evaluation reports
  • Remind grant recipients about their evaluations during regular “check-ups”
  • Follow up to ensure your evaluation reports arrive as required
  • Ensure you help grant recipients to understand your measurement process, such as results-based accountability (RBA)
  • Help grant recipients to establish those measures, such as NAPLAN scores, client-users of a service, website analytics (with a common platform), job attainment
  • Keep surveys short
  • Use high-quality evaluation reports as examples to help guide grant recipients
  • Be thorough in reading evaluation reports, to avoid skipping crucial details
  • Reduce your administration with more year-long projects
