What does it mean when a research journal is ‘high impact’?

OK, so I’m on leave and taking a break from writing a 2,000-word creation-myth story, as you do. I was at two meetings recently where the impact of research was discussed. The first was to do with demonstrating the value of research to organisations; the second was with a group of service users with learning disabilities (Powerful Trainers). Not surprisingly, the prevailing interpretation of ‘impact’ in these two discussions differed significantly, and the problem of reconciling them has been bothering my thinking genie ever since.

As a clinician whose main output has been what might be termed ‘armchair’ discussion pieces rather than reports of active research, my concern has been to write material that might make a difference to practice by getting people thinking or offering new models of working. I never considered impact beyond the notion that someone might pick up my article and do something with it. One of my first publications was the co-edited book (with Alexis Waitman) Psychotherapy and Mental Handicap, published in 1991, at a time when talking therapies for people with learning disabilities could only be accessed at the Tavistock in London. That resulted in a series of workshops and presentations across the UK; the seeds of change were planted, and now this kind of therapy is routinely available. I would consider that to constitute impact, but does it?

In research circles, and so in the organisations to which researchers are accountable, the key measure of impact is citation, preferably from within a journal that is, itself, frequently cited. Essentially, this means getting your paper accepted by a particular stable of peer-reviewed publications so that it will have a provenance of citation and so, it seems to me, will be more likely to be cited by someone else. Including yourself. Yes, you can cite your own papers, and this counts. The argument in defence of this practice is that some fields are so highly specialised that the pool of possible referents is very small and you could hardly avoid citing your own work. The counter-argument might reasonably be: well heck, how objective is that, then?

Suppose we accept, for now, that this is a good way of measuring impact. After all, acceptance by a prestigious journal like Nature is more likely to carry a stamp of quality and scientific rigour than something you read in the local paper, for instance. You might imagine, then, that there would be some sort of follow-up on readership (not subscriptions, actual readings) to evaluate the way in which an article is being used. Well, no, it seems not. Impact is calculated mathematically – citations over articles – to arrive at a value. These are means, averages over a specified period, during which citations from other citable journals to a journal’s citable articles are counted. There is also a PageRank-style algorithm. You don’t want to know.
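Well, if you do want to know, the core sum really is that simple. Here is a minimal sketch of the standard two-year impact factor calculation – the journal counts below are invented purely for illustration:

```python
# Minimal sketch of the standard two-year journal impact factor.
# All numbers here are made up for illustration.

citable_items = {2008: 120, 2009: 135}   # articles the journal published in the two prior years
citations_in_2010 = 310                  # citations received in 2010 to those 2008-2009 articles

# Impact factor for 2010: citations this year to the previous two years' articles,
# divided by the number of citable articles published in those two years.
impact_factor = citations_in_2010 / sum(citable_items.values())

print(f"2010 impact factor: {impact_factor:.2f}")  # 310 / 255 -> 1.22
```

Note that, as I understand it, citations from the journal’s own pages count in that numerator just like any others – the self-citation loophole above – while the PageRank-style measures go further and weight each citation by the standing of the journal it comes from.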

If you look at this with a degree of cynicism, what you might see is an incestuous cycle of citation-seeking among journals that may never see the light of day on the desks of the more applied practitioners among us. Personally, I am at a loss. High impact, as it is currently defined, feels remote, insular, and meaningless. Further, my book with Alexis would not have counted, although it seems to have had a major impact on practice in this country and is still on the reading list for many therapy courses worldwide.

I asked our service users at the second meeting what ‘impact’ meant to them. They said ‘Tell us on your web site’, ‘Send it out by email’, and ‘Put it on Twitter’ because, they said, ‘someone might do something with it then’. They wanted to see some action coming out of the research; some change, a new way of doing something. And they wanted to be part of that. Impact, for them, was much more visible, accessible, and tangible – and isn’t that just as accountable? After all, our funding bodies ask us to make clear our plans for dissemination – not publication, note – on the application form, and they want to see how we take this back to the public.

It seems to me that the bodies calculating impact in terms of division sums might benefit from liaising with the ones working on the communication of science to the public. Putting clinical research in particular right back into the hands and minds of service users, the public, and patients seems more likely to deliver meaningful impact than counting beans in a very small box.

So, over to you. What does impact mean to you? You should be asking us to maintain the integrity of our work and to make sure it is robust and properly conducted. You should also ask us to make sure it is meaningful. But after that, what should we see as ‘high impact’? What should we be aiming for? I don’t think this is either/or, so let it rip!

When Bad Science Persists on the Internet (via The Scholarly Kitchen)

The persistence and accessibility of information on the internet has been one of its major assets. Unfortunately, that is also its major flaw. When scientific work is modified, retracted, withdrawn, or discredited, there is no real way for most of us to be sure of avoiding the old or outdated and finding the best and most credible material.

This article makes the case, with examples, for the enduring responsibility of publishers of all kinds for the governance of the material they put out.

For me, there is also the matter of impact and what that means. Academics seek publication in ‘high impact’ journals, but clearly these are not the ones being read by the public. The ones most people track down are essentially ‘low impact’ but, ironically, probably have more actual effect than the rarefied publications to which we aspire. So what, exactly, is a ‘high impact’ journal? Let’s keep that for another time. For now, ensuring the eradication of discredited or simply superseded material is a fine objective.

When Bad Science Persists on the Internet – Search Google for the phrase “ileal-lymphoid-nodular hyperplasia,” and you are likely to find several free copies of a popular medical article hosted on public websites around the Internet. The problem is, this article was retracted in February 2010, the result of an investigation that ultimately found the paper fraudulent and stripped its author of the right to practice medicine. If the medical terminology of this paper is still confusing you, thi …

via The Scholarly Kitchen

Wired for Health

This post was due up last week; then the news about Samantha Backler came through. She deserved her time in the spotlight.

On March 17th, an extraordinary event took place at the Lighthouse in Brighton’s North Laines. The R&D department at Sussex Partnership has been developing ideas for projects – research and clinical practice – that seek digital solutions to health care problems. Second Life is already a research environment for some of us, and more projects are either underway or at the work-up stage. We are also keen to capitalise on social media for communication with staff and service users, and to make use of apps for community support. For clinicians, the ideas come from practice. We can see the problems up close and we know what we need to do to address them. We’re not that tech savvy, though; we are not developers or designers. On the other hand, the tech-savvy digital community doesn’t necessarily know what kinds of products we need, or how to access a user group to trial prototypes.

From a very understated meeting with Phil Jones of Wired Sussex, at which we speculated about a meeting of clinicians, academics, and entrepreneurial developers, came Wired for Health. Phil took that basic idea and produced an event that exceeded all expectations. No, I’m not going to be cool about it: this was very, very exciting!

Chaired by John Worth (Worth Digital) and Lynn Smith (NHS South East Coast), presentations from the health and business communities preceded a superb networking session from which we pretty much had to be evicted, as no one wanted to stop talking when time was up. We heard from Sarah Pearson (Health Psychologist) about the difference between what people say they do and what they actually do (e.g. believing they watch very little live TV when, in fact, over 60% of their viewing is live), which has implications for self-report about health issues. We also heard from Ribot, a small company that has developed a phone app to assist people with dexterity problems (the Threedom phone). In fact, this was the occasion of its formal launch, so the glasses of wine that were waiting upstairs could easily have been deployed ahead of time for a rollicking good crack over the bows!

Dave Taylor (Imperial College) and I presented a live look at the medical training environment and our Brighton simulation, used for the study with people with learning disabilities. This being a digitally capable venue, there was no trouble getting a good broadband connection, and even Second Life behaved itself, so the audience got a good look at the potential of virtual worlds in health care and research.

Upstairs in the foyer of the Lighthouse, Jo Roberts (Wired Sussex) had set up media nooks for particular interests: virtual worlds, social networking (and yes, we’re on Twitter), and the web and apps. Somewhere, I saw small food items being passed around, but I was never able to shut up long enough to take advantage. If you can measure success in terms of the croakiness of your voice the day after, this was off the scale. Mine was a husky growl for two days as a result of all the talking.

And the outcome? Wired Sussex is preparing a report for their funding body. Productive relationships were begun and are bearing fruit. Ribot is in touch with a posse of service users whose dexterity is challenged by motor, anatomical, and brain injury factors. We, R&D, can begin to hope for some major steps forward in our digital research and product development capability.

Thank you John, Phil, Lynn, and Jo.

Photos by Wired Sussex

#WiredHealth