Five grants funding trends to watch

Posted on 18 Apr 2019

By Kathy Richardson, Executive Director, Our Community

At the Australian Institute of Grants Management, we're constantly learning about new ways of funding, alternative approaches and grantmaking experiments that aim to increase bang for buck. There's never a shortage of those, but here are our top picks: the grants trends we've already begun to see emerging and expect to gather pace over the coming year.

Trend 1: Micro-focus grants

Most grants programs have outcome goals in mind, whether these are expressed clearly in the guidelines or not. You may be aiming to build a stronger community in a locality, to stimulate economic development, or to aid recovery after a disaster, to suggest just three examples.

Often a grantmaker’s outcome goals can be described as macro – they’re big-picture, ambitious goals. As a result, it can be hard to track progress towards their achievement.

One trend we are seeing emerging right now is the tacit acknowledgement that achieving world peace is beyond most grantmakers. Some are opting instead for a well-honed focus on a single, well-defined outcome goal: a micro outcome goal. Micro, in this sense, does not mean small or insignificant: it means narrowly focused.

The James Irvine Foundation is shifting away from a program-based model to focus instead on “a California where all low-wage workers have the power to advance economically.”

“A core aspect of this approach is large initiatives with specific timelines and outcome goals,” the foundation says.

In the UK, meanwhile, the government is zeroing in on the issue of loneliness through its £11.5m Building Connections Fund, while other organisations are chipping in by sharing information on how not-for-profits can measure the effect of their interventions in this arena.

Could you briefly summarise your outcome focus? Tell us why you've chosen that focus, and whether your progress is trackable.

Trend 2: Randomly allocated funding

Scientific American recently published an article making a convincing case that, while we may believe our decisions are largely rational and defensible, luck has far more influence on success than most of us would care to admit.

“The importance of the hidden dimension of luck raises an intriguing question,” Scott Barry Kaufman notes in the article. “Are the most successful people mostly just the luckiest people in our society?”

The finding is significant for grantmakers. Kaufman says many apparently meritocratic strategies used to assign honours, funds, or rewards are in fact based on past success.

“Selecting individuals in this way creates a state of affairs in which the rich get richer and the poor get poorer … But is this the most effective strategy for maximising potential?” he writes. “Which is a more effective funding strategy for maximising impact to the world: giving large grants to a few previously successful applicants, or a number of smaller grants to many average-successful people?”

While the data in the article applies to individuals, not organisations (and most grantmakers we work with fund the latter, not the former), the article invites the conclusion that randomisation of funding allocations might be fairer and less labour-intensive.

And further, it could in fact result in “greater collective progress and innovation for society”.

Meanwhile, an article on funding for innovation by UK researchers Teo Firpo and Laurie Smith points out the chronic wastage built into most merit-based assessment processes (medical research scientists in one Australian study “spent the equivalent of four centuries writing applications that were eventually rejected,” they note).

“Not only is this a waste of researchers’ time, it’s a waste of time for funding organisations, and the researchers who peer-review funding bids,” Firpo and Smith say. “Humankind’s smartest thinkers are spending time on proposals that often go nowhere – time that could be better put to use doing actual research that can change lives.”

Firpo and Smith also note the bias (whether unconscious or conscious) that’s inherent in most application assessment processes.

The answer to these issues, they suggest, might lie in randomisation of funding allocations, at least for those applications that reach a certain threshold of acceptability.

“In a partially randomised system, proposals could be divided into three categories – a top category which are all funded, a bottom category which are never funded, and a middle category where funding is allocated by lottery,” they write.

“A process like this would be shorter, freeing up time for both researchers and funders. It would be fairer, as bias could be reduced.

"Most importantly, it would combat the tendency towards familiarity in the current system, allowing better, more innovative research to be funded.”

This isn’t science fiction. New Zealand’s Health Research Council and the Volkswagen Foundation are both already using a lottery-based system to distribute funding, Firpo and Smith point out.
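For grantmakers curious about the mechanics, here's a minimal sketch (in Python) of how a partially randomised allocation along Firpo and Smith's lines might work. The score thresholds, lottery size and data structure are hypothetical assumptions for illustration, not a description of any funder's actual process.

```python
import random

def allocate_grants(applications, fund_threshold=80, reject_threshold=50, lottery_slots=10):
    """Partially randomised allocation: fund the top band outright, never fund
    the bottom band, and draw the middle band by lottery.
    Thresholds and slot numbers are illustrative assumptions only."""
    top = [a for a in applications if a["score"] >= fund_threshold]
    middle = [a for a in applications if reject_threshold <= a["score"] < fund_threshold]
    # The middle band is drawn at random rather than re-ranked by assessors.
    winners = random.sample(middle, min(lottery_slots, len(middle)))
    return top + winners

# Example usage with made-up applications and scores.
applications = [{"id": i, "score": random.randint(30, 95)} for i in range(50)]
funded = allocate_grants(applications)
print(f"{len(funded)} of {len(applications)} applications funded")
```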

Do you know anyone doing this in Australia? We’d love to talk to them. Please let us know.

Trend 3: Automated decision-making

If randomisation is a bridge too far for you, how about machine-assisted decision-making?

You may shy away from the idea of handing control over to a robot, but automation can have its advantages; in fact, it can promote fairness. A machine does not get tired and will follow consistent rules – the same cannot always be said of human assessors.
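To make the consistency point concrete, here's a deliberately simple, deterministic scoring rule sketched in Python. This is not how Tessa or any real assessment algorithm works – the criteria and weightings are invented for illustration – but it shows the property that matters: the same application always gets the same score, no matter when or how often it is assessed.

```python
def score_application(app):
    """Toy deterministic scoring rule (criteria and weightings invented for
    illustration). Identical inputs always produce identical scores."""
    score = 0
    if app["budget_complete"]:
        score += 30
    if app["aligned_with_priorities"]:
        score += 40
    score += min(app["years_operating"], 10) * 3  # capped track-record weighting
    return score

# The same application scored twice gets the same result - no assessor fatigue.
app = {"budget_complete": True, "aligned_with_priorities": True, "years_operating": 4}
assert score_application(app) == score_application(dict(app))
```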

Our Community’s Innovation Lab data scientists have been pioneering research into the feasibility of using algorithms to streamline the process of assessing grant applications. Our project – Tessa (the Assessor) – has turned up some interesting insights about what works and what doesn’t when it comes to automating assessments, as well as highlighting some of the ethical considerations of turning control over to a bot.

We’ll have more to report on how this project is proceeding in coming weeks.

Trend 4: Grantmaking catharsis

Have you charted the emergence and rising popularity of “f**k-up nights”? (Yep. They’re really called that.) At these events people come together with the express aim of sharing stories of professional failure.

While we’ve never been quite brave enough to stage a f**k-up night, partly because we’re not sure the name would pass through most grantmakers’ IT filtering systems, the AIGM has always been interested in sharing tales of grantmaking woe.

This is not because we’re voyeurs; it’s because we believe that as much can be learned from what doesn’t work as from what does.

With a few notable exceptions, though, our members (for obvious and understandable reasons) have not always been comfortable sharing their own missteps, nor those of their grantees.

That reluctance might be about to soften, if international examples of warts-and-all sharing of lessons are anything to go by. Take, for example, the case of the Canadian JW McConnell Family Foundation, which recently released a graphic collection of 12 hard-won lessons.

“There is a lot of fanfare about failure, but that doesn't change the fact that failing can be painful, embarrassing and often un-strategic,” the foundation says. “If we want civil society to embrace failure and reap the benefits of iterative innovation, we must create environments where admitting failure aligns with the interests of the individual or organisation.”

The foundation recasts the people responsible for the failures as “learners”, a frame the AIGM has also used in the latest versions of the template acquittal forms available through the SmartyGrants system – grantmaking should be about learning, not punishing, we believe.

We’ve also been a keen participant in the burgeoning “what works” movement and would love to see a “what doesn’t work” companion to it.

Want to share a hard-won lesson with your peers? We’d love to (sensitively) write up your case study, and/or showcase you at our next Grants in Australia Conference. Drop us a line.

Trend 5: Data and outcomes-driven decisions

Last year, privacy and data protection came to the fore; 2019 is set to be the year when grantmakers and their grantees hit their stride when it comes to getting the best out of data, using it to soup up efficiency and results.

In the UK, the open data evangelists at 360Giving have for the past few years been advocating for and helping grantmakers publish their grants data in an open, standardised way. We’re following their lead and will have more to share with you about open data in coming months.
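As a rough illustration of what “open, standardised” grants data looks like in practice, here's a single grant record sketched in Python. The field names loosely follow the 360Giving Data Standard but are reproduced from memory – treat them as an assumption and check the published schema before adopting them; the organisations and identifiers are entirely made up.

```python
import json

# One grant expressed as an open, standardised record. Field names loosely
# follow the 360Giving Data Standard (an assumption - check the published
# schema); organisations and identifiers are invented for illustration.
grant = {
    "id": "360G-EXAMPLE-0001",
    "title": "Community garden renewal",
    "currency": "AUD",
    "amountAwarded": 15000,
    "awardDate": "2019-03-01",
    "recipientOrganization": [{"id": "AU-ABN-00000000000", "name": "Example Neighbourhood House"}],
    "fundingOrganization": [{"id": "AU-ABN-11111111111", "name": "Example Foundation"}],
}

print(json.dumps(grant, indent=2))
```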

The recently released 360 Insights is the tool that brings that data to life. It combines open grants information with data from other sources and turns it into an interactive visual dashboard.

Meanwhile, here and in the United States, Foundation Maps is creating a geographic visualisation of grants data.

The AIGM is working hard in this arena too. In recent years we have released a system for classifying grants so that funding patterns can be discerned more easily, we’ve introduced dashboard displays and a powerful mapping tool into SmartyGrants, and we’ve begun experimenting with interactive scrollers as a tool for helping people make sense of complicated data sets. (What’s a scroller? Here’s an example of one the ABC developed about what’s in kids’ lunchboxes.)

Our Innovation Lab is also working on the development of an Outcomes Engine. We’re using modern data analytics, machine learning, data visualisation and impact evaluation concepts and tools to help grantmakers analyse their funding patterns, compare them with their objectives, and track progress towards their aims.

As an aside: the Sustainable Development Goals have been around for a while now but we’ve noticed they’re being used more and more by grantmakers – and not just those working with developing communities – to frame their grantmaking outcome goals. We expect to see more of this as we speed towards 2030.

The Innovation Lab is also close to finalising version one of an automated classification system (CLASSIEfier) that will allow grantmakers to categorise past, current and future grants by subject or beneficiary at the click of a button.

This work is taking place at our new headquarters, Our Community House, a co-working space for the social sector that not only enables social change agents to work together but also serves as the world’s first social innovation data science and communications lab.

Thanks to the pioneering support of Equity Trustees, at OC House we’re working with not-for-profits and grantmakers to design and implement tools that will power social reform into the future.

Want to find out more? Send me a note and I’ll happily take you on a tour.