How to co-produce a suitable evaluation method


Kate Tobin | Scotland Director | @katetobin_

We’ve seen the story go something like this. You are an organisation with an idea: an idea to fix a problem. It may be a smart way to help families more easily access benefits to which they are entitled[1], activities to increase children’s emotional literacy, or a way of reducing social isolation amongst vulnerable populations. Whatever the idea, testing is crucial to see what needs to change or improve, and you work with evaluators to help you.

But we’ve seen testing go wrong and fail in its aims: building understanding of what is effective and what could be improved. The wrong things get measured, data completion is low and quality is poor, and what does emerge isn’t shared with the right people at the right times to trigger action.

We think these problems are due, at least in part, to not involving those ‘right people’ in the design of the testing – particularly the people who deliver and use the service. It sounds efficient for an evaluation expert to design testing processes while staff get on with the ‘real work’ – but beware of false efficiencies. What seems sensible and achievable to evaluation experts may well be neither feasible, useful nor acceptable to practitioners and those using the service.

Our experience 

We were working with a charity entering the testing phase of a service design process. Our Lab team presented a mock-up of a data dashboard: our proposal for what should be measured, based on what was being implemented.

The conversation quickly became challenging. We got lost in the detail of what could be measured, losing sight of why we were testing in the first place. So we took a step back and realised our whole approach was too top-down. We were defining the research question for the charity. In doing so, we failed to draw out and define what they wanted and needed to know from the testing, and we didn’t pay enough attention to their capacity and capability to actually do anything useful with the data. Understanding this should always come before designing dashboards.

In this case, we adapted our approach and invested time in building up their understanding of why you test and what questions testing might answer, and in working out the most important questions and most feasible methods for the testing phase.

Our evolving approach 

Our experience illustrates one kind of unhelpful dynamic that can occur between researchers or designers and those responsible for delivering services. We should be working to disrupt this power imbalance. We want to bring our expertise to bear within a relationship of equals, one which values the knowledge of everyone.

It’s one of the reasons we are serious about co-production. We know it’s messy, complicated and needs to be consciously negotiated (see our previous co-production blog). We’ve learnt from our own mistakes that even when significant attention has been paid to co-production during the early stages of service design (i.e. working with the people that matter to understand the problem and create, or adapt, a solution), meaningful co-production during testing is often neglected.

We think about co-production during testing in two ways: 

  1. Co-producing a testing approach that is feasible for practitioners and people using the service, measures what’s really important about the service and is grounded in science as far as possible.

  2. Co-producing an approach to using the results – who will determine what they mean, and how will action be taken in response, including designing adaptations?

Both are equally important. If you fail to pay attention to co-producing a suitable testing method, you won’t get the buy-in or information you need. However, if you get high-quality information, but no-one is empowered to use it for improvement, what has been the point? 

Three recommendations 

If you’ve been inspired to inject a bit of co-production into testing design, here are three steps for evaluators to follow: 

  • Take the cultural temperature.

Every organisation has its own ‘emotional evaluation baggage’. People come with a range of experiences in testing and measurement.

We always assess our partners’ experience with evaluation and testing through questions such as “Do you routinely collect useful pre- and post-service measures?”. But on their own, such questions can’t get to the heart of how the people in an organisation feel about evaluation – and those feelings drive their culture around how they collect and use data.

What is needed are questions that get to the heart of how an organisation will use the results from a testing phase. See Appendix 1 for an example of a draft quick pulse survey that can be used as a temperature gauge. 

  • Talk to all the people that matter – just like you did for service design.

When it comes to designing testing, we can find ourselves talking only to the Impact Lead, or the CEO, or some other senior colleague. Understandable – but unhelpful. We need to speak with the staff who’ll be collecting and inputting data and the ‘users’ who’ll be supplying it. We also need to understand, and often help create, the process the organisation will use for interpreting the data and developing improvements.

  • Be aware of your learning too.

It isn’t just the organisation that learns through co-production. As researchers and designers, it’s worth investing time in understanding an organisation’s improvement culture and its capacity to make data-driven decisions. Not only can we support better testing in that project; we also learn how to support change management – an essential component in turning service design and testing from a niche project into something at the heart of an organisation.

Testing should not be a top-down push from experts or senior managers – that doesn’t create the best research questions or methods, and it won’t get results. It’s about respecting and benefiting from the insights of all the people who matter (users, frontline staff, managers, researchers) through co-production, to make sure the testing does what it’s supposed to: generate enough insight to support the next decision in the interest of impact.

______________________
[1] Independent Advisor on Poverty and Inequality. (2016). Shifting the Curve: A Report to the First Minister. Scottish Government. The report notes that families aren’t always claiming the benefits to which they are entitled, contributing to poverty and inequality.

Appendix 1: Quick Pulse Survey

[Image: draft quick pulse survey]