By Jess Dart
In situations where multiple factors, in addition to your research, are likely to have caused an observed policy or practice change, how can you measure your contribution? How can you be sure that the changes would not have happened anyway?
When making contribution claims, there are three levels of rigour, each requiring more evaluation expertise and resourcing than the last. These are summarised in the table below. The focus in this blog post is on the basic or minimum level of evaluation and specifically on the “what else test.”
Source: Jess Dart
The “what else test” is a basic guide for non-evaluators to strengthen their contribution claims. This tool can be seen as a minimum requirement for making contribution claims.
The what else test
Following these six steps can reduce bias and strengthen your impact claims. They are summarised in the figure below.
- OUTCOMES. Check whether the outcomes have been achieved. Ideally apply at least one form of data triangulation to cross-check whether your outcomes have really been achieved. Triangulation minimizes bias by looking at the results from different angles, and there are a number of ways to do it. For example, you can triangulate by source: if you think participants subjected to the intervention have improved, you can assess this by interviewing participants as well as checking their results in a test.
Source: Jess Dart
- YOUR ACTIVITY. Check that what you have implemented was sufficient to make an impact claim (dose). Your implementation needs to have happened in sufficient quantity and quality to make a reasonable claim that you contributed to the results; it also needs to have happened within time-frames that make sense for it to have contributed to the results.
- CONTEXT. Consider the context to check that the same results aren’t happening everywhere. Check to see whether the outcome is also showing up in places where you are not working. This might be done by interviewing people not involved in your intervention or collecting data for similar situations where the intervention was not delivered.
- OTHERS’ ACTIVITY. Consider who else may have contributed. Consider who else has been working in the same place. Can you eliminate them, for example because the timing does not fit? Or do they have a legitimate claim that needs to be acknowledged alongside your own?
- KEY INFORMANT OPINION. Conducting some interviews to test your claim with key informants adds to your case. In these interviews, pose questions about the counterfactual (what they think would have happened without the intervention). Do this with a small number (1-5) of strategic informants who have no vested interest in your intervention. Record their comments as quotes and include them in your claim.
- SYNTHESIZE YOUR CONTRIBUTION CLAIM. Compile all the evidence from steps 1 to 5 to make a case about whether it is probable that your program contributed to the results. It is fine to say that your program was one small part of what was necessary.
What methods have you found helpful to substantiate impact claims? Are there any modifications or additions that you would suggest to the what else test?
This blog post is a slightly modified version of: Clear Horizon Consulting. (n.d.). “What else test”: A basic tool for strengthening contribution claims. Online: http://www.clearhorizon.com.au/f.ashx/%24186820%24The-What-Else-Test.pdf
Biography: Jess Dart PhD is the founder and Chair of the Board of Directors of Clear Horizon Consulting. Clear Horizon is a specialist evaluation company established in 2005 operating nationally and internationally, with 30 employees. Jess specialises in the evaluation and design of programs with complex, intangible outcomes, making use of mixed methodologies. She is passionate about ensuring that evaluation leads to improved programs and policy.
5 thoughts on “Assessing research contribution claims: The “what else test””
I suppose something that could be added is the idea of interim outcomes: can you demonstrate or provide reasonable evidence for a ‘golden chain’ of outcomes or results — one outcome leading to another which leads to another, all of these finally amounting to or leading to the final ‘headline’ outcome? Also, guarding against the Bingo Effect https://cuttingedgepartnerships.blogspot.com/2014/04/dont-play-partnership-bingo.html could help ensure outcomes and results are accurately and fairly (an important point for collaborations) attributed. Thanks for a very clear and easy to understand post!
Thanks for your comments! I do remember that workshop all those years ago! Great to hear that key evaluation questions stuck.
As with Most Significant Change, your methodological labels are very nice and self-explanatory…
Thanks for this. In an age of cost cutting very few commissioners will pay for the high rigour option, and only a few for the middle one, so this contribution comes in handy. In complex systems, one could argue that your contribution has the added rigour that comes from multiple voices, which experimental designs may not address.
btw- I still remember your presentation at an IDRC meeting in Malaysia (a decade ago). We added your focus on key evaluation questions to our utilization-focused evaluation work, and it has remained a central piece.
Dammit Jess, when are you going to come up with a dumb idea? This is superb.