Impact evaluation for grantmakers: three top tips from the experts

Posted on 20 Jun 2024

By Jen Riley, chief impact officer, SmartyGrants

Grantmakers need to know what the Australian government's preferred approach to evaluation will mean for their programs, writes SmartyGrants chief impact officer Jen Riley.

I recently attended a half-day series of talks titled "Impact evaluation: assessing the effectiveness of Australian public policy," presented by the Australian Centre for Evaluation (ACE).

Charities Minister Andrew Leigh set the tone for the event by reflecting on the idea that sometimes truths are thought to be so self-evident that they transcend the need for evidence. He gave the example of a policy of threatening parents with the loss of income support if their children did not attend school. This policy passed the pub test, but when it was evaluated, it did not produce the expected results. This was Leigh’s way of highlighting the importance of rigorous impact evaluations to verify the effectiveness of policy interventions.

As I sat through the other presentations during the four-hour session, I kept thinking about the implications for grantmakers. If the government’s preferred approach involves randomised controlled trials (RCTs) and quasi-experimental designs, what does this mean for grantmakers?

Several key messages emerged from the showcase.

1. Evidence is key

Use evidence to inform your programs, but keep in mind that local context is crucial. Simply adopting an approach without considering local conditions can be problematic. Best practice involves monitoring and evaluating the effectiveness of interventions to ensure they are not causing harm and are achieving the expected results.

Dr Robyn Mildon.

More tips on using evidence:

  • Use existing evidence: As Dr Robyn Mildon from the global Centre for Evidence and Implementation mentioned, we don't need to evaluate universally accepted practices like praising children, as their effectiveness is well-documented.
  • Block out the noise: There is a lot of 'noise' globally regarding evidence, but clearinghouses such as the UK's What Works Network are valuable sources of credible evidence.
  • Take the long view: Professor Julian Elliott from the Future Evidence Foundation emphasised that evidence is never complete; it continually evolves as new data is added. Platforms such as the parenting website raisingchildren.net.au offer evidence-based recommendations and are excellent resources for parents, carers and professionals.

Base your grant criteria on evidence: When selecting initiatives to fund, ensure they are backed by proven evidence.

  • Ask to see applicants’ evidence base: On application forms, request the evidence base for interventions.
  • Involve experts: Engage subject matter experts as assessors of grant applications to ensure that proposed interventions are grounded in evidence.

The University of Newcastle's Dr Drew Miller presenting findings about teaching research at the event. Picture: UON_research/X

2. Select good lead indicators

Evaluations are critical for several reasons: verifying whether interventions created intended benefits, ensuring no harm, maintaining accountability, learning for future funding, and contributing to the broader evidence base of what works.

Professor Sharon Goldfeld.

However, post-grant evaluations often occur too late to be useful. The director of the Centre for Community Child Health at the Royal Children’s Hospital in Melbourne, Professor Sharon Goldfeld, highlighted the importance of lead indicators versus lag indicators. Lead indicators predict future outcomes (e.g. the number of people wearing safety gear), while lag indicators reflect past events (e.g. the number of workplace accidents).

Choose indicators with a strong evidence base that can serve as proxies for long-term success. Mid-term evaluations are also useful for making course corrections during the grant period.
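
To make the distinction concrete, here is a minimal sketch of how a mid-term review might check a lead indicator against a target while the lag indicator is still months away. The grantees, figures and 70% target below are hypothetical illustrations, not data from the talks.

```python
# A minimal sketch (not from the talks): checking a lead indicator at a
# mid-term review, while the lag indicator is still months away.
# Grantees, figures and the 70% target are all hypothetical.

mid_term_reports = [
    {"grantee": "Grantee A", "participants": 120, "completed_training": 96},
    {"grantee": "Grantee B", "participants": 80, "completed_training": 44},
]

LEAD_TARGET = 0.70  # hypothetical: 70% training completion used as a proxy for later job outcomes

for report in mid_term_reports:
    completion_rate = report["completed_training"] / report["participants"]
    status = "on track" if completion_rate >= LEAD_TARGET else "consider a course correction"
    print(f"{report['grantee']}: lead indicator {completion_rate:.0%} -> {status}")

# The lag indicator (e.g. employment six months after the program ends) only
# arrives after the grant period, which is why the lead indicator drives
# mid-term decisions.
```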

3. Support rigorous evaluations

Grant managers can support rigorous evaluations, such as randomised controlled trials, by designing grant programs with these evaluations in mind from the outset. Here are some practical examples:

  1. Fund control and treatment groups: Use a stepped-wedge cluster-randomised method, in which every grantee eventually receives the program but start times are staggered at random, so later-starting groups act as a comparison until their turn. For example, if you are funding a job-seeker support program, you could have two sets of grantees start the program at different times and collect data on both (see the sketch after this list).
  2. Request cohort comparisons: For a financial literacy program, you could ask grantees to have two cohorts starting at different times and track the outcomes across both groups.
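
As a sketch of how the staggered-start idea in point 1 might look in practice, the snippet below randomly allocates a hypothetical list of grantees to two start waves. The grantee names, dates and two-wave split are illustrative assumptions, not details presented at the event.

```python
# A minimal sketch of a staggered (stepped-wedge style) allocation, not the
# ACE methodology as presented. Grantee names, start dates and the two-wave
# split are illustrative assumptions.

import random

grantees = ["Grantee 1", "Grantee 2", "Grantee 3",
            "Grantee 4", "Grantee 5", "Grantee 6"]

random.seed(42)           # fixed seed so the allocation can be reproduced and audited
random.shuffle(grantees)  # random allocation is what gives the comparison its rigour

half = len(grantees) // 2
waves = {
    "2025-01 start": grantees[:half],   # first wave: receives the program immediately
    "2025-07 start": grantees[half:],   # second wave: comparison group until its own start
}

for wave, cohort in waves.items():
    print(f"{wave}: {', '.join(cohort)}")

# Between the two start dates, outcomes for the first wave can be compared with
# the not-yet-started second wave; once the second wave begins, everyone has
# received the program, as in a stepped-wedge design.
```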

This approach requires a shift in mindset, funding for data collection, and changes in processes. However, it provides valuable insights into what works and what doesn’t. As Professor Sharon Goldfeld emphasised, it is important not to penalise programs that didn’t work or returned null results, but to continue working with grantees to refine and improve interventions over time.

By adopting these practices, grant managers can enhance the effectiveness of their programs and contribute to a stronger evidence base, ultimately leading to better outcomes for communities.
