The progression of the SARS-CoV-2 pandemic, the approval of COVID-19 vaccines and the roll-out of vaccination programmes have revealed a world of epidemiologists, statistical modellers and virologists who publish their ‘expert’ opinions across the media. Alternative views are also published by anti-vaxxers, usually through social media. It is a wonder that there is not more confusion among the general population, given such fierce competition for the ‘truth’.
A more pervasive problem with journal articles is publication and related biases, which may have serious consequences for sound clinical decision making. Song et al (2013) suggested that a better term might be ‘dissemination bias’. Publication and related biases include biases of outcome reporting, study design, time lag, full publication, grey literature, language, location, citation and media attention. These biases may be introduced intentionally or unintentionally during the process of research dissemination. Song et al (2013) highlighted how the results of many completed research studies are never published, which has been confirmed by Scherer et al's (2018) analysis of abstracts converting to full publication within 10 years. Ayorinde et al (2020) also explored publication and related biases in health services research, but the data were sparse, so they were unable to assess their magnitude or impact.
Scherer et al's (2018) study included 425 research reports that followed up the subsequent full publication of 307 028 abstracts relating to the biomedical and social sciences. They found that less than half of all studies, and about two-thirds of randomised controlled trials, presented as summaries or abstracts at meetings were published in journals in the 10 years after presentation. They also noted that studies were more likely to be published if the results were positive; the studies had large sample sizes; the abstracts had been presented orally rather than as posters; the studies described basic science rather than clinical research; the studies were randomised controlled trials rather than other research designs; the studies were conducted across multiple research sites rather than a single site; the studies were classified as high-quality research; the authors came from an academic setting rather than other settings; the study was considered to be of high impact by the authors; a funding source was reported; or the studies originated in Europe or North America and English-speaking countries.
These biases have consequences, including the danger that they may result in misleading estimates of treatment effects, so that clinicians overestimate the relative efficacy of a treatment, which is later revealed to be unfounded in the real world. An increasing number of systematic reviews are being published and used to inform nursing. Systematic reviews combine the results of smaller studies, which individually carry larger random errors. Readers should therefore look for a funnel plot, that is, a plot of the studies' sample sizes against their estimated effect sizes in a meta-analysis; if there is no publication bias, the points should form a symmetrical funnel shape, with small studies scattered widely around the overall effect and large studies clustered tightly near it.
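The statistical logic behind the funnel shape can be illustrated with a minimal simulation sketch. The numbers here (a true effect of 0.3, study sizes of 20 and 500) are hypothetical choices for illustration, not drawn from any of the cited studies: small studies produce effect estimates that scatter widely around the true effect, while large studies cluster tightly, which is what gives an unbiased funnel plot its symmetrical shape.

```python
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.3  # assumed true treatment effect (hypothetical)

def simulate_study(n):
    """Simulate one study of size n and return its estimated effect.

    Each participant's outcome difference is drawn around the true
    effect with unit standard deviation; the study's estimate is the
    sample mean, whose random error shrinks as n grows.
    """
    outcomes = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    return statistics.mean(outcomes)

# Effect estimates from 200 small studies and 200 large studies
small = [simulate_study(20) for _ in range(200)]
large = [simulate_study(500) for _ in range(200)]

# Plotting sample size against these estimates would show the small
# studies fanning out at the base and the large studies narrowing
# toward the true effect at the top: the funnel shape.
print("spread of small-study estimates:", round(statistics.stdev(small), 3))
print("spread of large-study estimates:", round(statistics.stdev(large), 3))
```

If small studies with null or negative results go unpublished, the lower corner of the funnel is missing and the plot becomes asymmetrical, which is the visual signal of possible publication bias that readers of a meta-analysis should check for.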
Community nurses are faced with a paradox: an increasing, almost overwhelming volume of published information to inform their practice, set against potentially relevant studies that remain unpublished and inaccessible but which, were they accessible, might change practice. Faced with this paradox, community nurses should be critical readers and apply a dose of healthy scepticism until overwhelming evidence is made available.