While outcomes are vital, SIBs require more than outcome data to work. If stakeholders are not careful, they may find themselves in a position of expending disproportionate amounts of time and resources in collecting, analysing and reporting data.
We should not, however, simply accept this at face value as a ‘fact’ of SIBs. If we are to take the outcomes-focussed principle to its logical conclusion, then we must surely also be clear about the desired outcomes for data collection and its use. Starting with this helps streamline data collection and use, ensuring that we know why we are collecting certain types of data, and how we are going to use them.
As SIBs involve multiple players, each must develop clarity about data for their specific needs. In addition, the different players need to work together to minimise duplication and ensure that information is shared and that there are systems in place to support collaborative interpretation and scrutiny.
Shared approaches to data categorisation and collection
Different though SIB stakeholders may be, their approaches to data categorisation and collection are surprisingly similar.
Data typically fall under a number of broad ‘headings’: regular performance management data, process data, impact data, and cost-benefit data. This reflects the fact that delivering a SIB effectively requires parties to monitor ongoing operational matters; constantly assess and review the implementation of the intervention(s); ascertain the degree to which implementation may be leading to the desired outcomes; and assure themselves that transactions represent good value for money.
Indeed, different SIB players often collect and/or require similar, if not identical, data. This immediately alerts us to the fact that the various players need to work collaboratively to ensure they do not duplicate efforts; share data where relevant; and streamline processes to reduce the overall burden of collecting, analysing and reporting data. Not doing so can lead to unintended additional costs for all parties, as our second Essex SIB interim evaluation report has shown.
Different stakeholders use data differently
While the data required can be identical across the various players, their uses of the same data can be quite different.
Outcome payers scrutinise data as part of due diligence, which can be heightened in the case of SIBs. They need to show that they have subjected the data to robust scrutiny in order to justify paying out to investors. They also look at data from the point of view of assessing performance against the original business case for the SIB, and to see how the SIB way of doing things compares with more conventional ways of commissioning services.
Service providers look at the data in terms of understanding the effectiveness of implementation and the efficacy of the intervention. This may be particularly true if they are not delivering a strongly evidence-based intervention, and/or if their intervention is flexible and adaptive. In addition, service providers will wish to be clear about the true cost of delivering a service under a SIB model and how it compares with other ways of ‘selling’ services. They may be interested in ‘going to market’ more widely through a SIB model, and such information is therefore crucial in helping to price appropriately and competitively. Needless to say, most if not all service providers have a strong focus on outcomes for their service users.
Social investors, from our experience, tend to look at data with an eye on what can be improved. They are always looking at how they might redirect resources, adjust inputs and refine the approach to give the SIB the best chance of success. After all, payment is linked to success. They also look at the data to assess return on investment, how it compares with other forms of investment, and how it compares with their investments in other SIBs.
Evaluators, of course, look at the bigger picture in terms of what the impact of the SIB has been and whether it adds value, over and above the operational concerns of individual SIB players. This is where I would encourage evaluators of SIBs to place emphasis on understanding the impact of the SIB, as opposed to the impact of the intervention per se. There is a real gap in our collective knowledge base in terms of how and whether SIBs add value, and whether particular models of SIBs may be more or less effective in different contexts, policy areas, or target groups.
Just as SIBs are focussed on outcomes, the exercise of collecting and analysing data for a SIB should equally be outcomes-focussed. Many commentators have noted that SIBs can be overly complex, and data requirements are often part of this complexity. Equally, commentators have pointed out that if SIBs are to flourish and achieve the desired degree of spread and scale, it is vital for us to work together to find ways of simplifying and streamlining core SIB components so as to reduce transaction costs.
There will always be a degree of bespoke tailoring required in specific contexts, but there are core generic components that may be simplified or made consistent. Information collection and reporting requirements appear to be one such ‘design feature’ of SIBs, to use the terminology from Bridges Ventures, that may be amenable to simplification, thereby contributing towards reducing the transaction costs of SIBs.