[Relocated from Zombiestatistics] The "50% of all trials are never published" zombie pushed to the surface again this week, this time in a BMJ article from the Nordic Cochrane Centre about duloxetine. The first sentence caught my eye: "About half of all randomised clinical trials are never published,[1] and the other half is often published selectively,[2] in both cases depending on the direction of the results."
I didn't recognise the first reference as being amongst the usual crop of citations for this statement, so I looked it up. It's a Cochrane review of the literature on the likelihood of a clinical trial that is reported as a congress abstract going on to become a full publication. Hardly "all randomised clinical trials"... And it found that 63% do go on to be published, so I'd say that's nearer a third that don't get published, rather than a half; bear in mind, too, that not all trials make it to a congress abstract.

The second reference is a retrospective analysis of trials registered with the Copenhagen and Frederiksberg ethics committee between 1994 and 1995, and how the final published reports tallied (or not) with the protocols that were initially registered. I wouldn't have said that was a particularly representative or contemporary sample, but it was a study on inadequate reporting of trial outcomes, and it found what it was looking for. Given that the first iteration of CONSORT was only published in 1996, I'm not really surprised that the published reports were not of particularly high quality. There are more recent papers that have done a better job on completeness (or not) of reporting in publications vs protocols or registry information.

So their first two citations are inaccurate, or at least overextrapolated. When I find that the first two references in an article are questionable, I tend to be wary of the rest of the paper. If I had been reviewing this as a draft from a less experienced colleague, I would have sent it back and asked for marked-up references for the rest of the paper. It is the responsibility of the authors to cite supporting literature correctly. I assume the other, more experienced, authors on this paper reviewed the introduction (the contribution statement said all authors contributed to the manuscript; I assume that includes the intro)? Is it not the responsibility of experienced authors to guide their protégés toward the appropriate background information?
Was the second reference particularly pertinent, I wonder? No, because the literature the main article discusses was published between 2002 and 2007, and the protocols were dated from 1998 to 2001, so it's not as if trials registered in Denmark between 1994 and 1995 would be particularly relevant.

I have no issue with the study results. I have seen many, many study reports during my career. Most have had some kind of stupid error in them. Most don't adequately describe the randomisation process (which is why I applaud the SPIRIT statement). The part I usually struggle with most is the adverse event reporting. Not because I think anyone is trying to hide anything, but because adverse events are complicated. I don't think most people understand the sheer volume of information in a clinical trial, which becomes distilled into a CSR.

What bothers me considerably is that two of the authors of this BMJ paper are PhD students, and so are in the formative phase of developing their writing and critical appraisal skills. If people being trained in a prestigious institution like the Nordic Cochrane Centre are not verifying their sources, that's quite frightening. I teach trainee medical writers to cite only papers they have read, not papers they've been told are relevant. If that's not the practice in a gold-standard, evidence-based-medicine stalwart institution, then I am worried.
November 2023