Social Value International

What is impact data for? Using impact data to gain insights

For most of the relatively short history of impact measurement – and the even shorter history of impact measurement and management – the purpose of impact data has been to prove that you have had an impact. It will not come as a surprise that the conclusion is nearly always that yes, you have had an impact; yes, it is positive; and yes, it is in line with your goals.

This is not very useful. Organisations like Social Value International have been arguing (too quietly, perhaps) that not only is this conclusion not useful, but that proving impact is not the primary purpose of impact data either.

The main purpose of impact data is to be a source of insights: insights that generate options to increase impact, followed, hopefully, by a decision to choose the option that is expected to have more impact than the others. ‘Expected’ immediately means that this is a forecast. A forecast cannot be measured with accuracy, so there is a risk that the wrong option – or at least not the best option – is the one selected. It is also true that you are unlikely ever to know whether you chose the right option, as you cannot go back and choose the other one: the situation is now different, even if only slightly. All we can do is create processes and systems that increase the probability that we are choosing the best option – which is what effective accountability achieves.

Although many argue that impact data is used for decision making, there are few examples, and fewer still where the decision is clearly a choice between options (and what else is a decision if not a choice between options?). Perhaps it is information that a funder uses to decide whether to invest or not, yet even here impact data often seems to be a threshold to reach, after which the funder chooses between options using other criteria.

And we know that everyone should be doing option appraisal before they allocate resources to an activity – a critical way of checking that scarce resources are being applied effectively. But, of course, where organisations are not held to account by the people who experience impact, ‘effectively’ just means that there has been a positive impact (generally ignoring any associated negative impacts). There is little evidence that ‘effectively’ means being tested against options.

And where there are options, they often come in three types: the preferred option (genius), the do-nothing option (not an option) and the wild option (dangerous).

Impact data is the key source of good options – many of which will be viable. And they all arise from comparisons – which after all is the point of measurement.

The main sources of comparison do not require data that is comparable with other organisations. Given the amount of focus on standardising indicators so that we can compare between organisations, this may come as a surprise.

There are (at least) four possible comparisons we can make:

Comparison of actual measured impact against targets

If performance is less than target, what ideas do we have to increase it – including ideas that use the same level of resources? The argument that any improvement is only possible with more resources assumes both that the organisation is already maximising performance (very unlikely) and that whoever set the target was being wildly optimistic.

Comparison of actual measured impact with previous years

Has it gone up? Why? Can we do more of whatever that was?

Has it gone down? Why? Can we do less of whatever that was?
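The first two comparisons can be sketched in a few lines of code. Everything here is hypothetical – the outcome measure, the figures, and the `compare` helper are illustrative, not a prescribed method – but the mechanic is simply: measure, subtract the benchmark, and let the gap prompt the questions above.

```python
# A minimal sketch (all names and figures hypothetical) of comparing actual
# measured impact against a target and against last year's result.

def compare(label, actual, benchmark):
    """Report the gap between an actual result and a benchmark, and return it."""
    gap = actual - benchmark
    direction = "above" if gap > 0 else "below" if gap < 0 else "at"
    print(f"{label}: {actual} ({direction} benchmark of {benchmark}, gap {gap:+})")
    return gap

# Hypothetical outcome: participants reporting improved wellbeing.
compare("vs. target", actual=118, benchmark=150)    # below target: what ideas close the gap?
compare("vs. last year", actual=118, benchmark=97)  # up on last year: why, and can we do more of it?
```

The point is not the arithmetic but the prompt: every non-zero gap is an invitation to ask why, and to generate options.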

Comparison with peers

Possible, but often less useful – even with standard indicators. After all, you can’t compare the things we do and the people we work with to those of any other organisation, etc. etc. etc. (well, that’s what we hear). But it is included in this list for completeness.

Comparison within stakeholder groups

This is the main source of insights.

Impact data comes with a number of data points: for example, the amount of change in an outcome, the duration of the change, the causality of the change, and the relative importance of the change compared to other changes.

And stakeholders come with a number of characteristics – often age, gender, location and education, but also hopes, attitudes and preferences – just as a customer insights team would explore (think MAMIL – middle-aged men in lycra – for a mix of characteristics).

Every time there is a difference in the results for any data point between segments of stakeholders – based on any single characteristic or any mix of characteristics – there is a question: why did one group get a higher result than the other? And this will lead to insights.

You may not have all the data points, or all the characteristics. It doesn’t matter: there is still the possibility of insights. But as the number of data points and characteristics increases, the number of possible comparisons – and so of insights – will increase exponentially.
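The segmentation mechanic above can be sketched concretely. The stakeholder records, characteristics and outcome figures below are all hypothetical; the technique is simply to segment the data by any characteristic, average the data point within each segment, and treat any gap as a prompt for a “why?” question.

```python
# A minimal sketch of comparison within stakeholder groups.
from statistics import mean

# Hypothetical stakeholder records: characteristics plus one measured data
# point (amount of change in an outcome, on some agreed scale).
records = [
    {"gender": "F", "location": "urban", "change": 7},
    {"gender": "F", "location": "rural", "change": 4},
    {"gender": "M", "location": "urban", "change": 6},
    {"gender": "M", "location": "rural", "change": 3},
]

def segment_gaps(records, characteristic, data_point):
    """Average the data point within each segment of a characteristic."""
    segments = {}
    for r in records:
        segments.setdefault(r[characteristic], []).append(r[data_point])
    return {seg: mean(vals) for seg, vals in segments.items()}

for characteristic in ("gender", "location"):
    averages = segment_gaps(records, characteristic, "change")
    print(f"{characteristic}: {averages}")  # any difference between segments => ask why
```

With more data points and more characteristics, the same loop yields more comparisons – which is exactly the exponential growth in potential insights described above.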

Even more so if the impacts include unintended positive and negative impacts (unintended by you, perhaps, but not unintended from the perspective of a stakeholder). These are gold dust for insights, for creating options, for innovating. They will help you realise how you can increase performance against your purpose by increasing performance on the other impacts your stakeholders value.

Effectiveness is the number of insights generated, the number of options considered, the number of decisions made, and the proportion of decisions to change compared with decisions not to change. And then funders and investors might start to make choices based on actual effectiveness. Combined with a report of your impact, this will start to provide proof of your impact as well – but as a consequence, not as the primary purpose.