Overturning the design of outcome measures

By Diana Rose

Outcome measures in research about treatment and service provision may not seem a particularly controversial or even exciting domain for citizen involvement. Although the research landscape is changing – partly as a result of engaging stakeholders in knowledge production and its effects – the design of outcome measures has been largely immune to these developments.

The standard way of constructing such measures – for evaluating treatment outcomes and services – has serious flaws and requires an alternative that grounds them firmly in the experiences and situations of the people whose views are being solicited.

Standard measure generation is top-down. It starts from the literature, written by clinicians and academics, and collects together all existing scales on a given topic, also produced by clinicians and academics. This generates a large number of ‘items’, which are then put through complex statistical processes to reduce their number, ensure the measure is psychometrically robust and so produce a new and, it is hoped, ‘better’ measure of what is being assessed. Occasionally the new scale will be ‘concept checked’ by potential treatment and service users, but that is both uncommon and a very limited role.
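
As a purely illustrative aside, hedged and hypothetical rather than drawn from any particular study, one common step in this statistical reduction is to drop candidate items whose corrected item-total correlation falls below a conventional cut-off. A minimal Python sketch of that idea, using made-up responses and a made-up threshold of 0.30, might look like this:

```python
# Illustrative sketch only: hypothetical Likert responses and a hypothetical
# 0.30 cut-off, showing one common item-reduction step (corrected item-total
# correlation), not the procedure of any particular published measure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# 50 hypothetical respondents x 8 candidate items scored 0-4
responses = pd.DataFrame(
    rng.integers(0, 5, size=(50, 8)),
    columns=[f"item_{i}" for i in range(1, 9)],
)

def corrected_item_total(df: pd.DataFrame) -> pd.Series:
    """Correlate each item with the total of the remaining items."""
    return pd.Series({
        col: df[col].corr(df.drop(columns=col).sum(axis=1))
        for col in df.columns
    })

item_total_r = corrected_item_total(responses)
retained = item_total_r[item_total_r >= 0.30].index.tolist()
print(item_total_r.round(2))
print("Items retained at the 0.30 cut-off:", retained)
```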

The first problem with this is that not much can ever change as new scales are so contingent on what has gone before. But more significantly, this method ensures that the measures assess what clinicians and academics think is important in what is being evaluated. This may be light years away from what is important to those on the receiving end of services. Their insights are rendered entirely invisible by the usual method just described.

My work is mainly in mental health, although the insights described here are transferable. My team and I are ourselves mental health service users as well as researchers (we refer to ourselves as user-researchers). We have devised a completely different way of generating outcome measures that has now been successfully used nearly a dozen times.

It starts with qualitative research, to gain the perspective of people who have used or experienced the treatment or service that we want to evaluate. We run focus groups and expert panels (here the users are the experts) and these are facilitated by researchers who themselves have experience of what is being assessed. This ‘participatory method’ is an effort to break down the power structures that usually pervade the research endeavour – even though this is never spoken of. We are all part of the same community because we all have experience of mental health challenges and treatment – not exactly the same experiences but with much in common.

From intensive qualitative work like this we gradually build up a mixed-methods measure, constantly checking with participants that it reflects their concerns. When it is finally ready we do indeed undertake psychometric assessment. We are not anti-science!

In my area of mental health we have found that participants with diagnoses of psychosis deliver more robust psychometric results, for example very high test-retest reliability, than are usual for such scales. Interestingly, this flies in the face of the dicta in psychometric textbooks that ‘subjects’ must be ‘cognitively intact’ for psychometric assessment. It suggests that scales developed by peers are more likely to make sense and are therefore more robust.
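
For readers who have not met the term, test-retest reliability is usually estimated by asking the same respondents to complete the measure on two occasions and correlating the two sets of total scores, for instance with Pearson's r or an intraclass correlation. The sketch below is a minimal, hypothetical illustration of that calculation in Python; it is not taken from our data.

```python
# Minimal, hypothetical illustration of a test-retest reliability check:
# the same respondents complete the measure twice and the two sets of
# total scores are correlated (Pearson's r here, for simplicity).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
time1 = rng.normal(loc=20, scale=5, size=40)         # hypothetical totals, first administration
time2 = time1 + rng.normal(loc=0, scale=2, size=40)  # hypothetical totals, retest

r, p = stats.pearsonr(time1, time2)
print(f"Test-retest reliability (Pearson r) = {r:.2f}, p = {p:.3g}")
```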

Nevertheless, the method is not perfect, with three major problems. First, breaking down power relations in this context is difficult. It can fail completely or give rise to complex issues about reciprocity and degrees of disclosure by user-researchers.

Second, like any participatory method, it relies on ‘gatekeepers’ and crucial groups may be missed – for example those thought to ‘lack capacity to consent’ may be excluded, as may others whom gatekeepers regard as too ‘vulnerable’. The views of such people are crucial yet including them is incredibly hard.

Third, the measures do not please everyone. Out in the wider community most seem to like them but, because we believed we needed to make the measures ‘easy to complete’, we have been accused of envisioning mental health service users as ‘simple-minded’. This was a wake-up call.

Paradoxes abound, but this method does turn the standard approach on its head and, in our view, is grounded in and stays closer to the people whom all this research is supposed to benefit.

What do you think? Do you have relevant experience to share?

For more information see:
Rose, D., Evans, J., Sweeney, A. and Wykes, T. (2011). A model for developing outcome measures from the perspectives of mental health service users. International Review of Psychiatry, 23(1): 41-46. DOI: 10.3109/09540261.2010.545990

Biography: Diana Rose PhD is Co-director of the Service User Research Enterprise and Professor of User-Led Research at the Institute of Psychiatry, Psychology and Neuroscience, King’s College London. She has also been treated by mental health services all her adult life and uses that experience to spearhead consumer-led research.

4 thoughts on “Overturning the design of outcome measures”

  1. What you are all saying about user involvement rings true at all levels, whether aiming for a system-wide outcome measure or a small project development. I think of a committee I was on, which shifted to being more grounded and real when a consumer of services joined and various connections to groups, and a new dynamism, kicked in. The challenge, as Dr. Grey says, goes to the power boundaries of how far things get taken and, to my mind, to how inclusive things are of the various levels of, dare I say, players. That is where I have been liking the idea of patient, clinician etc. dialogue that the Co-Creation people have been talking about (see the contributions on co-creation in this blog).

    • Power is a strong influence, I believe. For some time, I have been explaining to people in the public service the value of methods that engage service users in making sense of their requirements and setting the direction for change. The usual response is agreement with the principle but difficulty seeing how to marry it up with other processes, strategic planning and funding in particular.

      One person with good insights into the way the upper echelons of the public service work in Australia said, paraphrasing a bit, that senior managers would object to having their hold on programs diluted, as this is the means they use to gain personal advancement.

      The funding issue is another challenge. Most government and a lot of private sector project funding has to be supported by a business case, even if it’s nothing to do with business, which means, traditionally, a plan with a commitment to deliver something specific at the end. That cuts across the idea of a program with an emergent direction based on stakeholder engagement that will shift as insights are developed in the course of its implementation.

      It’s not just a matter of measurement but the entire chain of direction setting, resource allocation, funding, monitoring and control. I can’t say I have any magic bullets to resolve that.

  2. This is clearly a useful move, for all the reasons given. It can be framed as a common-sense response to the challenge of working with a system of loosely aligned partially autonomous people. Why would anyone think that an external expert could define once and for all what they and, where relevant, their carers value? If their needs are to be met, they have to be involved in setting direction and priorities no matter how messy that may be. The alternative is effectively to deprive them of a say in their own future.

    I would add a fourth challenge to the three problems listed (power relations, inclusion and diverse aspirations).

    As soon as you start encouraging or allowing people to think about the services they use or deliver and they discuss this with others, their own views will start to change. What constitutes a good design that is generally acceptable will not be static. Even working on the design of outcome measures can change this view and once a program of change or intervention is underway that evolutionary process will continue.

    Anyone concerned with this field might find the work at [Moderator note as of October 2021 – this link is broken and has been removed: portraitsinblue[dot]com] interesting.

    By abstracting the evaluation framework a little and sensing how stakeholders are responding to the context in which they exist, as opposed to relying on a static forecast made when a program is initiated, insights may be derived that reflect that evolving situation.
