OK, so I’m on leave and taking a break from writing a 2,000-word creation-myth story, as you do. I was at two meetings recently where the impact of research was discussed. The first was to do with demonstrating the value of research to organisations; the second was with a group of service users with learning disabilities (Powerful Trainers). Not surprisingly, the prevailing interpretation of ‘impact’ in these two discussions differed significantly, and the problem of reconciling the two has been bothering my thinking genie ever since.
As a clinician whose main output has been what might be termed ‘armchair’ discussion items rather than reports of active research, my concern has been to write material that might make a difference to practice by getting people thinking or offering new models of working. I never considered impact beyond the notion that someone might pick up my article and do something with it. One of my first publications was the co-edited book (with Alexis Waitman) ‘Psychotherapy and Mental Handicap’, published in 1991 at a time when talking therapies for people with learning disabilities could only be accessed at The Tavistock, in London. That resulted in a series of workshops and presentations across the UK; the seeds of change were planted, and now this kind of therapy is routinely available. I would consider that to constitute impact, but does it?
In research circles, and so in organisations to whom researchers are accountable, the key measure of impact is citation, and preferably from within a journal that is, itself, frequently cited. Essentially, this means getting your paper accepted by a particular stable of peer-reviewed publications so that it will have a provenance of citation and so, it seems to me, will be more likely to be cited by someone else. Including yourself. Yes, you can cite your own papers and this counts. The argument in defence of this practice is that some fields are so highly specialised that the pool of possible references is very small and you could hardly avoid citing your own work. The counter argument might reasonably be, well heck, how objective is that, then?
Supposing we accept for now that this is a good way of measuring impact. After all, acceptance by a prestigious journal like Nature is more likely to carry a stamp of quality and scientific rigour than something you read in the local paper, for instance. You might imagine, then, that there might be some sort of follow-up on readership (not subscriptions, actual readings) to evaluate the way in which the article is being used. Well, no, it seems not. Impact is calculated mathematically – this over that – to arrive at a value: the standard two-year impact factor divides the citations a journal receives in one year by the number of citable articles it published over the previous two years. There is also a PageRank algorithm. You don’t want to know.
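For the curious, the ‘this over that’ sum can be sketched in a few lines. This is a minimal illustration of the two-year impact factor calculation; the journal and the citation counts are invented for the example.

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable articles published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_this_year / citable_items_prev_two_years


# Hypothetical journal: 150 citations in 2013 to its 2011-2012 articles,
# of which there were 60 in total.
print(impact_factor(150, 60))  # 2.5
```

Note what the sum does not measure: who actually read the article, or whether anyone did anything with it.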
If you look at this with a degree of cynicism, what you might see is an incestuous cycle of citation-seeking among journals whose contents might or might not ever see the light of day on the desks of the more applied practitioners among us. Personally, I am at a loss. High impact, as it is currently defined, feels remote, insular, and meaningless. Further, my book with Alexis would not have counted, although it seems to have had a major impact on practice in this country and is still on the reading list for many therapy courses worldwide.
I asked our service users at the second meeting what ‘impact’ meant to them. They said ‘Tell us on your web site’, ‘Send it out by email’, and ‘Put it on Twitter’ because, they said, ‘someone might do something with it then’. They wanted to see some action coming out of the research; some change, a new way of doing something. And they wanted to be part of that. Impact, for them, was much more visible, accessible, and tangible. And isn’t that just as accountable? After all, our funding bodies ask us to make clear our plans for dissemination – not publication, note – on the application form, and they want to see how we take this back to the public.
It seems to me that the bodies calculating impact in terms of ratios might benefit from liaising with the ones working on the communication of science to the public. Putting clinical research in particular right back in the hands and minds of service users, the public, and patients seems more likely to deliver meaningful impact than counting beans in a very small box.
So, over to you. What does impact mean to you? You should be asking us to maintain the integrity of our work, to make sure it is robust and properly conducted. You should also ask us to make sure it is meaningful. But after that, what should we see as ‘high impact’? What should we be aiming for? I don’t think this is either/or, so let it rip!