
Thursday 17 June 2010

Stempra event: Promoting non peer reviewed science?

Disclaimer: I work for Diabetes UK and UCL; these views are my own.
Recently I attended an event on 'Promoting non peer reviewed science', about research that hasn't been through the full publication and peer review process, hosted by Stempra (the science, technology, engineering and medicine public relations association). It was held in a comfortable upstairs room in a London pub, with about 20 of us in attendance. I took along my laptop to make some notes, and here they are.

Lucy Harper led the discussion - she's the Communications Manager at 'SfAM', the Society for Applied Microbiology, and she edits their members' magazine 'Microbiologist'. Each year, like many similar organisations, the society holds professional meetings. Such meetings often involve presentations of preliminary data, and the publicity given to them means that journalists may well be present in the audience. What's the appropriate thing to do here? Forewarn the speakers, find out if any are happy to be interviewed, and mention that other people may be tweeting? Or does this run the risk that the speakers will make changes (ie remove bits) to their presentations? Scientists can be (not unreasonably) wary of publicising preliminary data - there's the fear that future publication might be harder if the content has been widely shared in advance (see the Ingelfinger rule, below), but also the concern that sharing data via non-published means could result in others 'scooping' them. One scientist that Lucy spoke to wasn't so concerned about potential risks to future publications, but was worried that a larger and better financed research team might take advantage of their data.

An audience member commented that even if Twitter or news media were taken out of the equation, this fear of being scooped didn't really make sense, since the potential scoopers were likely in the audience and hearing from the presenter directly - although the reach of Twitter is obviously much larger than just the room. I'm aware of the contrast with a few scientists (Peter Murray-Rust, Cameron Neylon, Jean-Claude Bradley), all of whom have spoken at either Science Blogging 2008 or Science Online 2009 (in London; Science Online 2010 is in September) about making their results available in real-time by posting them on the web and getting immediate feedback from comments. I'm not sure whether chemistry / biophysics is less amenable to being scooped than other fields, making this a 'safer' thing to do, quite apart from it being a helpful and completely transparent way to conduct research that is paid for by the public (through the research councils).

On the concept of further reach I was reminded of similar comments made around the time that Daniel MacArthur tweeted freely from a genetics conference while journalist delegates had been required to sign an agreement on how they would use conference material. In that case, the conference subsequently changed its regulations so that anyone - mainstream media, bloggers or microbloggers (those on Twitter etc) - would be asked to sign an agreement in advance. I've collected quite a few of the blog posts relating to this (http://brodiesnotes.blogspot.com/2010/02/curated-posts-liveblogging-science.html), as part of a wider post of my own on the issue of tweeting from scientific conferences. My particular interest is in tweeting from medical and health conferences - Diabetes UK (where I work) has an annual professional conference at which new data is presented, not always published / peer-reviewed of course, and I expect that with each passing year there may be more people tweeting from the audience. I don't think we, or other charities in a similar position, can reasonably expect to prevent people from doing this, but I don't think it hurts to have some ground rules (well, suggestions), given that this info goes into the public domain and may be picked up by someone who isn't aware of the full context. I'm not sure I agree with putting conferences under the Chatham House Rule, although I acknowledge the argument that it might help scientists feel more comfortable about sharing more info.

Re context: this point was highlighted by one of Lucy's scientists as well - the quality and nuance of the data are likely to be lost in a tweet, and the 140-character limit makes misinterpretation easy, with speculation and caution from the speaker turning into 'fact'.

Ingelfinger rule: "The policy of considering a manuscript for publication only if its substance has not been submitted or reported elsewhere. This policy was promulgated in 1969 by Franz J. Ingelfinger, then the editor of The New England Journal of Medicine. The aim of the Ingelfinger rule was to protect the Journal from publishing material that had already been published and thus had lost its originality." Source: http://www.medterms.com/script/main/art.asp?articlekey=13488

See also: http://en.wikipedia.org/wiki/Scientific_misconduct#Responsibility_of_authors_and_of_coauthors
and Ingelfinger Over-Ruled: The Role of the Web in the Future of Refereed Medical Journal Publishing (2000)
http://cogprints.org/1703/1/harnad00.lancet.htm

If the Ingelfinger rule were extended so that any mention of data precluded its publication, then this might well be a case of the communications movement taking a step backwards, limiting the sharing of information. If something has been presented as a poster at a conference this shouldn't prevent it from being published in a journal, and this is the approach that SfAM takes.

I don't know what percentage of the data presented at conferences is new or preliminary - obviously it's not 'old' info, but some of it is surely in press, so there must be a continuum from the very fresh and unexamined to much more pored-over stuff. Nonetheless a lot of what gets reported from scientific conferences hasn't been through the full process of peer review, either the review process to get into a journal publication or the 'post-marketing' scrutiny once other scientists get their mitts on it - an example of post-publication scrutiny might be the Rapid Responses in the BMJ, although those comments aren't peer-reviewed either of course.

My job at Diabetes UK involves answering public enquiries on the science of diabetes, and this can include results publicised from our Annual Professional Conference. If something preliminary has been mentioned in the papers it may take some time before a formal publication is available, so the status of that information is 'lower' than something which has been through the process. That's a bit of a simplification - the peer-review process isn't perfect, and just because something is published doesn't mean that it's 'right' - but you know what I mean ;)

Part of my reason for taking such an interest in the publicising of data, perhaps via Twitter, was my concern that someone following the conference hashtag might read more into a tweet than they should - I don't particularly enjoy having to explain that a piece of early stage research has somehow come to be presented as further along and more certain than it really is, particularly if it's work that we've funded.

I've previously written about my thoughts on stories in the media and how they are understood by those reading them - this comes from over six years of responding to enquirers' calls and emails about something they've read in the papers. I've quoted the relevant part below because the rest of that blog post isn't as relevant here.

"Firstly, some observations of my own. Often a story is perfectly clearly written but the headline lets it down. Or there's a throwaway brief para which contains an otherwise minor error but in context gives the wrong impression. I don't think that view is going to startle anyone.

What you *might* be less aware of is how wrongly people can apprehend, or remember, a story they've read in the newspapers - although if you think about it, not really that much of a surprise. Probably someone's done some research on this but I confess I am ignorant of it.

People have rung in wanting to know about something they read "last month in the papers", only for me to find that it was actually *months* ago, my record for the longest gap is three and a half years. Memories - not so reliable."
Source: http://brodiesnotes.blogspot.com/2010/04/healthy-journalism-challenges-and.html

I'm not entirely sure how the press release world works - I know we put them out, but presumably the researchers' host university will put out releases too and, if the research is at the point of being published, the journal might pitch in as well. Hopefully the universe doesn't end up with three separate releases on the same topic, but for all I know it might. Only in the case of research-about-to-be-published has the work been through a peer review process.

It seems that most of us in the room could live with non-peer-reviewed information being publicised but only as long as the preliminary nature of the work is made clear.

I also thought about other ways in which preliminary work might be publicised. Our charity, and I'm sure other charities too, will write articles which include progress reports for the work that we fund. Inherent in its 'progress report' nature is the implication that this work is at an early stage, and it's entirely possible that as the work progresses it will 'change direction'. Talking about the research that we fund is an essential part of what we do - people raise money for us and they have a right to know how that money is being spent and what is happening. We do also want to tell people a little about the process of research, for example that it takes a long time and that each project generally looks at one small aspect of research. Every charity has a responsibility to publish basic information about the breakdown of spending costs but almost all take this a step further and use this as an opportunity to talk about the research itself, the people undertaking it and any collaborations between institutions. The resulting document is not only a fundraising tool but it lets everyone in the charity know what work is being done.

Someone made the interesting comment that even if the data itself hasn't been peer-reviewed, the application form for the project that generated it has been. While that's true, I'm not sure how satisfactory it is. It was also pointed out that the Government's reports aren't peer-reviewed, though they do go through an editing process. It was similarly noted that conference abstracts go through a version of peer review, beyond merely editorial control.

Even if journalists aren't present at a conference, the output of those tweeting from it (in a delegate capacity) is part of the public record and may well be followed by journalists not 'in the room', who might spot a story and follow it up.

Beyond the question of material not being peer reviewed, we discussed whether members of the public are aware that peer review can be used as a benchmark of quality (homeopathy seems to be a fine example of peer review failing to pick up and winnow out nonsense, as does Andrew Wakefield's paper). There's a useful document from Sense About Science which explains peer review to a non-specialist audience (http://www.senseaboutscience.org.uk/index.php/site/project/30, see also http://www.senseaboutscience.org.uk/index.php/site/project/29/). Peer review is often criticised as subjective, biased and a very imperfect system, although it's 'the best we've got' at the moment, perhaps.

A couple of final thoughts which didn't quite fit anywhere else...

How should an opinion from an esteemed person at a conference be reported, if at all? It's all very well citing 'nullius in verba' (the Royal Society's motto, meaning 'on the word of no-one' or 'take nobody's word for it' - ie don't be overly impressed by authority; just because someone important says something doesn't mean it's true), but what happens if someone comes out with a great soundbite? If people speak more colloquially at a conference (and this is understood by those in the room but not necessarily by those outside) then there's a risk that it will get mistranslated. In my six years working at Diabetes UK I've read plenty of news articles in which a scientist is quoted as saying that there will be a cure for diabetes within the next five years - not unexpectedly, people ring us up wanting to know what this is all about. If I heard someone say this at a conference, to be honest I'd simply ignore it; I wouldn't even say "so and so says...".

Finally - science blogs. The blogs themselves are for the most part not peer-reviewed, although I think those on the Research Blogging platform restrict themselves to writing about peer-reviewed research, and many other blog platforms host blogs that go through an editorial process.

This blog post hasn't been peer reviewed though I did send it to Lucy Harper, the speaker, to check that I'd not misunderstood, misinterpreted or misrepresented anything :)

