Reducing risk of bias in interventional studies during their design and conduct: a scoping review
BMC Medical Research Methodology volume 25, Article number: 85 (2025)
Abstract
Background
Interventional studies are intended to provide robust evidence. Yet poorly designed or conducted studies may bias research results and skew resulting evidence. While there have been advances in the assessment of risk of bias, it is unclear how to intervene against risks of bias during study design and conduct.
Objective
To identify interventions to reduce or predict risk of bias in interventional studies during their design and conduct.
Search strategy
For this scoping review, we searched three electronic bibliographic databases (MEDLINE, Embase, and Cochrane Library), nine grey literature sources, and Google in September 2024. This was supplemented by a natural language processing fuzzy matching search of the electronic bibliographic databases, from which the top 2000 most relevant publications were screened. Publications were included if they described the implementation and effectiveness of an intervention during study design or conduct aimed at reducing risk of bias in interventional studies. The characteristics and effects of the interventions were recorded.
Results
We identified, and reviewed the titles and abstracts of, a total of 41,793 publications, reports, documents and grey literature items, with 24,677 from electronic bibliographic databases and 17,140 from grey literature sources. There were 67 publications from bibliographic databases and 24 items from grey literature that were considered potentially eligible for inclusion, and the full texts of these were reviewed. Only three studies met the inclusion criteria. The first intervention was offering education and training to researchers during study design. This training included the implementation of a more rigorous participant screening process and a systematic participant tracking program that reduced loss to follow-up and missing data, particularly for long-term follow-up trials. The second intervention was introducing an independent clinical events committee during study conduct. This was intended to mitigate bias due to conflicts of interest affecting the analysis and interpretation of results. The third intervention was providing participants in randomized controlled trials with financial incentives, so that participants would more reliably complete trial requirements.
Conclusion
Despite the major impact of risk of bias on study outcomes, there are few empirical interventions to address this during study design or conduct.
Introduction
The number of publications in the field of medicine and health has rapidly increased over the past decade [1]. Yet many of these studies are at high risk of bias [2]. Bias refers to a systematic deviation of the observed effect from the true value and can result in overestimation or underestimation of an effect estimate [3,4,5]. While bias is difficult to quantify, risk of bias can be assessed, and studies with low risk of bias are more likely to produce results closer to the true value [3, 6, 7]. When studies at high risk of bias are included in evidence syntheses (e.g., systematic reviews and meta-analyses), they may compromise the reliability of results, which subsequently inform clinical guidelines and evidence-based practice. Low-quality medical research wastes resources [8,9,10,11] and funding [12]. Studies at high risk of bias can also be considered unethical, as participants put themselves at risk with the expectation that a study is designed and conducted appropriately and will advance science.
A review of 205 meta-analyses from the Cochrane database found that > 40% of included randomized controlled trials (RCTs) were at high risk of bias [2]. Generally, this is secondary to poor design, incomplete description of methods and results, and selective reporting [13]. An analysis of 20,920 trials found that 33% had a high risk of bias in the blinding of personnel and 23% in the blinding of outcome assessors [14]. Considering that reviewers frequently underestimate risk of bias [15], and that articles in journals with a low impact factor have higher risks of bias in random sequence generation, allocation concealment, and blinding [14], the situation may be even worse. For non-randomized studies of interventions (NRSI), confounders may also contribute to bias, and these studies require more sophisticated tools for assessment [16,17,18,19].
The quality of research reports can be improved through reporting guidelines and better researcher adherence to them, but reducing the risk of bias caused by flaws in research design and conduct is more challenging. Existing examples of initiatives with this intention include guideline references for researchers to rationalize study design [20], pre-registration requirements to improve study transparency [21], and open data and code that may reduce questionable research practices [22]. Other examples of interventions with a more theoretical basis include training, mentoring, incentives, tools, assistance, and infrastructure. Another opportunity is through funding applications: the review of, and provision of feedback on, research proposals in funding applications could be formalised to address likely risks of bias in interventional studies, if there were validated methods to predict risk of bias at study design. We sought to explore whether there are further interventions capable of reducing the risk of bias at the study design and conduct stages.
Objective
The aim of this scoping review was to identify interventions to reduce or predict risk of bias in interventional studies during their design and conduct, and summarise the outcomes of these interventions.
Methods
Protocol and registration
This scoping review was conducted in accordance with the Joanna Briggs Institute methodology manual [23] and reported in accordance with the PRISMA-Scoping Review checklist [24]. The study protocol was prospectively registered on the Open Science Framework (https://osf.io/8vqp5).
Research questions
1. What interventions to reduce risk of bias of interventional studies during their design or conduct have been assessed, and what were the outcomes of these assessments?
2. Are there any methods to predict, during study design or conduct, the likely risk of bias in interventional studies?
Definition of terms
Interventional studies
Includes randomised controlled trials, pseudo-randomised controlled trials, non-randomised controlled trials, and single-arm clinical trials.
Interventions
Actions to influence researchers’ and participants’ awareness, attitudes, and behavioural intentions (e.g. education, incentives, supervision, training, initiatives).
Literature search
The Boolean logic search strategy for the bibliographic databases was designed in consultation with an information specialist from Cochrane (AT-K) and an experienced librarian from the University of Sydney. The Boolean logic search strategy was supplemented with natural language processing (NLP) fuzzy matching of the same bibliographic databases to identify any articles which might have been missed. The search strategy for the grey literature was designed in consultation with an expert in searching trial registries (KEH) [25], with separate search strategies for each source. Searches were not restricted by date, language or type of publication (e.g., abstracts were included). Publications in languages other than English and Chinese were reviewed with the aid of translation software (https://translate.google.com/).
Search strategy and information sources
Electronic databases
We searched MEDLINE, Embase, and Cochrane Library on 19 September 2022 and again on 29 September 2024 (see Appendix A for search strategies). After obtaining the results, SR-Accelerator was used to automatically remove duplicate publications [26]. These publications were imported into Rayyan [27] and manually screened to remove any remaining duplicate records. De-duplicated publications were imported into Covidence (www.covidence.org) [28]. To capture publications we might have missed, we conducted a secondary screening of the three electronic bibliographic databases mentioned above using a natural language processing search approach [29], details of which are documented in Appendix C.
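To illustrate the secondary screening step, the sketch below ranks candidate records by fuzzy similarity between their title/abstract text and a handful of topic phrases, keeping the top 2000 for manual screening. This is a minimal sketch only, not the pipeline documented in Appendix C; the topic phrases, record field names, and use of Python's difflib are illustrative assumptions (a production approach might instead use token-based similarity or text embeddings).

```python
# Minimal sketch of NLP fuzzy matching for secondary screening (illustrative only).
from difflib import SequenceMatcher

TOPIC_PHRASES = [  # assumed phrases, for illustration
    "reduce risk of bias",
    "intervention during study design",
    "intervention during trial conduct",
]

def fuzzy_score(text: str) -> float:
    """Best similarity ratio between any topic phrase and the record text."""
    text = text.lower()
    return max(SequenceMatcher(None, phrase, text).ratio() for phrase in TOPIC_PHRASES)

def top_records(records, n=2000):
    """Return the n records whose title + abstract best match the topic phrases."""
    return sorted(
        records,
        key=lambda r: fuzzy_score(r.get("title", "") + " " + r.get("abstract", "")),
        reverse=True,
    )[:n]

if __name__ == "__main__":
    demo = [
        {"title": "A committee to reduce risk of bias during trial conduct", "abstract": ""},
        {"title": "Prevalence of hypertension in adults", "abstract": ""},
    ]
    for record in top_records(demo, n=2):
        print(round(fuzzy_score(record["title"]), 2), record["title"])
```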
Grey literature
We searched the sources in Table 1, initially from 6 December 2022 to 17 January 2023 and then again from 29 September 2024 to 11 October 2024 (see Appendix B for search strategy).
During the screening process, we also identified systematic reviews and scoping reviews relevant to our study and screened their reference lists for potentially eligible publications.
Inclusion criteria
Publications were included if they implemented and assessed the effectiveness of an intervention to reduce or predict risk of bias in interventional studies during their design or conduct. We included and defined domains of risk of bias in accordance with the Cochrane risk-of-bias tool for randomized trials (RoB 2) [6] (e.g., flaws in the randomisation process, deviations from intended interventions, missing outcome data, flaws in outcome measurement) and the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool [7] (e.g., confounding, selection of participants into the study, misclassification of interventions, deviations from intended interventions, missing data). Interventions that were ineffective at reducing or predicting risk of bias were also included, provided the authors attempted to assess the effect of the intervention to some degree.
Exclusion criteria
Publications were excluded if they:
1. Were a simulation study which only demonstrated an intervention’s effectiveness through virtual data.
2. Were a non-empirical study (e.g. theoretical studies of methodology).
3. Only described an improvement of research equipment, more appropriate statistical analysis methods, or financial incentives without reducing domains of risk of bias as an outcome.
The exclusion criteria were intended to focus on interventions that assist researchers to improve study design, reduce erroneous behaviours or decisions in the conduct of studies, and that have been used in practice.
Selection process
The titles and abstracts of all articles identified by the Boolean logic search and the top 2000 articles identified by the NLP fuzzy matching search were screened in duplicate by independent reviewers (ZR, YY, JZ) [27]. Conflicts were resolved by discussion between reviewers. All items, web pages, articles and other information identified from the grey literature search were screened by two reviewers (ZR, JZ). The full texts of publications which possibly met the inclusion criteria were then reviewed in duplicate by two independent reviewers (ZR, ACT), and the final included studies were confirmed by a third reviewer (ALS).
Data extraction
After all relevant articles had been identified, data extraction was undertaken by two reviewers (ZR, JZ) using a Microsoft Excel spreadsheet. We documented the title, authors, year of publication, study type, name of the intervention, application stage of the intervention, target type of bias risk, and effectiveness of the intervention. Two reviewers cross-checked the extracted results. The final extracted data were saved in an Excel document and are presented in the results section.
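As an illustration of such an extraction template (an assumed layout, not the authors' actual spreadsheet), the sketch below writes a CSV with one column per documented item; the file name and example row are hypothetical.

```python
# Illustrative data extraction template (assumed column names and example values).
import csv

FIELDS = [
    "title", "authors", "year_of_publication", "study_type",
    "intervention_name", "application_stage",  # design or conduct
    "target_bias_domain", "intervention_effectiveness",
]

with open("extraction_template.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # Hypothetical example row, shown only to demonstrate the format:
    writer.writerow({
        "title": "Example trial",
        "authors": "Doe J, et al.",
        "year_of_publication": 2013,
        "study_type": "RCT",
        "intervention_name": "Independent clinical events committee",
        "application_stage": "conduct",
        "target_bias_domain": "flaws in outcome measurement",
        "intervention_effectiveness": "adverse events reclassified by the committee",
    })
```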
Data synthesis
The included studies were narratively synthesized due to significant heterogeneity. Interventions to reduce risk of bias were categorised by whether they were implemented during study design or conduct. The effectiveness of the included interventions was summarised.
Deviation from protocol
In the research protocol, we did not specifically declare whether intervention effectiveness was a criterion for inclusion. However, as unsuccessful interventions were still considered informative, we included these. Additionally, we extended the search of the grey literature databases to obtain more studies that might meet the inclusion criteria.
Ethical statement
There was no patient or public involvement in this study, and no ethical review was required.
Results
Search
The results of our search are visualised in Fig. 1.
Boolean logic search
The search of the three electronic bibliographic databases identified 35,081 articles. After excluding duplicates, we screened the titles and abstracts of 24,677 articles. After excluding irrelevant articles, we reviewed the full-text of 67 articles. Of these, three articles met our inclusion criteria.
NLP fuzzy matching search
After reading the abstracts of 2000 records and excluding irrelevant articles, we reviewed the full text of four articles. Of these, none met our inclusion criteria.
Grey literature search
The search of the nine grey literature sources identified 17,140 publications, not including those from Google. The number of results obtained from Google searches is dynamic and difficult to record in total. Since the grey literature databases include a wide variety of types of publications or references, we were unable to report the number of duplicates as the results could not be input into Covidence or Rayyan. No publications or references met our inclusion criteria.
We recorded the reasons for exclusion after full-text review of articles identified in the Boolean logic search (see Appendix D). Appendix D does not include 17 articles whose abstracts could not be displayed during the abstract screening process; these were instead reviewed in full text during the initial screening, and none met our inclusion criteria.
Characteristics of included publications
We included three studies (see Table 2).
Synthesis of results
We identified three interventions, applied during study design and conduct, with evidence of reducing some risks of bias in interventional studies. The three interventions addressed two domains of risk of bias: flaws in outcome measurement and missing data. We were unable to find any methods that predicted risk of bias during study design or conduct.
Intervention 1 (Auerbach et al., 2013): Independent clinical events committee (CEC) [30]
Industry-sponsored studies often report positive clinical outcomes, with financial gain introducing potential bias [33, 34]. An independent CEC provides an unbiased, third-party assessment during study conduct, reducing risk of bias in outcome measurement [35, 36]. In the Auerbach trial of spinal stenosis treatment, a CEC of three unaffiliated spinal surgeons re-examined adverse event reports, resulting in the reclassification of 36% of events in the control group and 38% in the trial group [30]. The researchers attributed adverse events either to the surgery or to the medical device and analysed these separately. There was no significant difference between the trial and control groups in the association-with-surgery or association-with-device domains, or in the reclassification of the severity of adverse events. The CEC increased the severity level of adverse events at a much higher relative frequency than it decreased it, and this was the area of greatest disagreement between the CEC and the researchers. When patients treated by researchers with a sponsored interest were analysed separately from patients treated by researchers without a sponsored interest, the odds that the CEC upgraded (rather than downgraded) the severity of a reported adverse event were 8.9 times greater for events reported by sponsored investigators than for those reported by non-sponsored investigators, regardless of device [30].
Researchers with a sponsored interest tended to underestimate adverse event severity more than their counterparts. Although this bias did not differ between the trial and control groups, it still influenced the study conclusions. After validating the reproducibility of CEC decisions and ensuring that CEC members were blinded, it was demonstrated that independent CECs can contribute to correcting outcome data.
Intervention 2 (Bhandari et al., 2008): Recruitment protocols and training [31]
The reasons for loss to follow-up in studies are diverse [37,38,39]. Minimizing loss to follow-up is crucial, and interventions at the recruitment stage are preferable to corrective measures during data analysis. The Bhandari (2008) study provides a systematic set of recruitment and follow-up interventions [40]. The recruitment approach yielded promising results in a multicentre RCT, with reduced selection bias. The core of the recruitment protocol is establishing a central methods centre to manage individual participant information, monitor participant conditions, and train researchers. The recruitment process comprised three stages. Initially, participants were identified and recruited, excluding those unlikely to complete follow-up, and adequate information about the study’s burden, risks, and benefits was provided. The second stage involved maintaining contact with patients and confirming their status and changes of residence. Patients were encouraged to engage in trial-related activities during waiting periods. If patients withdrew voluntarily, their situation was confirmed promptly, with encouragement to continue. Researchers accommodated reasonable requests and attempted contact using collected information, systematically mobilizing the team to find patients in various ways [40]. Central to this intervention is the development of sound participant-centred protocols, the training of research staff to manage the different follow-up periods, and improving the ability of research staff to collaborate and follow a predetermined process at the first sign of loss to follow-up. At one-year follow-up, the study achieved a 93% follow-up rate, significantly higher than other relevant studies in the same field [31, 40].
Intervention 3 (High et al., 2024): Financial incentive [32]
Modest financial incentives in questionnaire distribution or respondent recruitment can increase response rates and sample sizes [41]. For intervention studies requiring long-term follow-up, financial incentives have other positive effects. High et al. nested a parallel randomized controlled study within a host trial of the Quit Sense smoking cessation mobile app [42] to test whether offering participants a £20 rather than a £10 monetary incentive affected six-month follow-up data collection. There was no significant difference in the rate of loss to follow-up between the two groups, but only 46% of participants in the £20 incentive group required manual intervention to prompt questionnaire completion and saliva sample submission during the automated data collection process, a significant difference compared to 62% in the £10 incentive group (OR = 0.53, p = 0.032) [32]. The £20 incentive group also had higher completeness of questionnaire items, with a median questionnaire completion time of only 7 days, compared with 14.9 days in the £10 incentive group [32]. This study within a trial (SWAT) shows that financial incentives contribute to better compliance among participants in interventional studies, reducing the time and effort needed to address missing data and indirectly improving data quality and statistical power.
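As a consistency check (our own arithmetic on the reported percentages, not a figure taken from the trial report), the published odds ratio is approximately what the two proportions requiring manual intervention imply:

$$\mathrm{OR} \approx \frac{0.46/(1-0.46)}{0.62/(1-0.62)} = \frac{0.852}{1.632} \approx 0.52,$$

which is close to the reported value of 0.53; the small difference reflects rounding of the percentages rather than use of the exact trial counts.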
Discussion
In this scoping review, we identified three interventions during study design or conduct to reduce risk of bias in interventional studies. These interventions address flaws in outcome measurement and missing outcome data in RCTs.
In the Auerbach (2013) [30] trial, the independent CEC demonstrated a significant impact on outcome measures and reduced the influence of financial interests on outcome assessment. The necessary role of independent outcome adjudicators and committees is supported by other studies [35, 43, 44]. Guaranteeing the operational integrity of this intervention requires rigorous trial design: safeguarding the independence of the CEC, assessing the expertise of CEC members, and concealing treatment assignment from CEC members [45]. This may be limited by the resource constraints of smaller study teams [45]. The recruitment protocol in the Bhandari (2008) trial achieved significantly higher follow-up rates than similar studies and is a useful strategy for addressing the problem of low follow-up rates. However, this recruitment protocol may contribute to selection bias by excluding those with personal characteristics associated with loss to follow-up, such as homeless people and people with mental disorders. Excluding such vulnerable populations may also reduce external validity. Balancing the internal validity gained by excluding specific populations against the external validity of generalizable trial results requires a site-specific recruitment design by the researcher [46]. The High (2024) trial shows that providing direct monetary incentives can reduce bias caused by missing data to some extent during the follow-up of interventional studies. This intervention can also be extended to surveys after the completion of clinical trials [47]. Still, it is necessary to consider the selection bias that may be introduced because economically disadvantaged participants are more sensitive to monetary incentives [48], as well as the financial burden placed on a study by poorly calibrated incentive amounts.
Regarding interventions to predict risk of bias, we did not find any relevant cases. We added this question to our scoping review because all current bias assessment tools are designed for retrospective assessment after a study has been reported. Various handbooks describe in detail the sources of bias and errors in study design that can introduce high risk of bias [49, 50], and we wondered whether a bias assessment tool could be adapted so that it could be applied prospectively, before a study is conducted, to reduce the risk of bias. If researchers could predict the likely risk of bias at study completion by examining the characteristics of a study at the design stage against existing bias assessment tools, and thereby improve the quality of their own studies, such findings could be incorporated into methods for predicting risk of bias from study proposals. We also found evidence of the feasibility of using AI for risk of bias assessment [51, 52], and machine learning can be used to assist in statistical analysis [53]. We would like to explore whether this technology could be used to warn of bias at the study design or conduct stage. However, this review did not identify any relevant literature, not even observational correlational studies or cross-sectional surveys.
Beyond our inclusion criteria, several methods and initiatives that may reduce the risk of bias are worthy of discussion and replication. The proportion of prospective registrations has increased over time, with significantly higher rates for studies published in high-impact speciality medical journals compared to lower-impact speciality medical journals, and studies with prospective registration have a lower risk of bias in all domains [54]. However, issues remain, including retrospective registration after study completion (sometimes secondary to a lack of awareness of prospective registration [55]), modification of registration partway through study conduct, and failure to adequately pre-specify outcomes [54, 56]. In contrast, registered reports allow for prior peer review of research design and methods, which may be more helpful in improving the quality of research design and reducing the risk of bias [57]. For the randomization process, researchers have used scratch cards [58] and improved pharmaceutical allocation boxes [59] to ensure that trial groups are not unblinded through human factors, and this has achieved satisfactory results in practice. Efforts to enhance study reporting include the CONSORT, PRISMA, and STROBE guidelines [60, 61], aimed at improving reporting quality, though adherence remains suboptimal [62, 63]. COBWEB, a novel online writing aid aligned with CONSORT, could alleviate this situation [64].
Other proposed interventions to reduce the risk of bias are summarised in Table 3.
Despite an exhaustive search across multiple databases and sources, using both traditional and artificial intelligence strategies and consulting various information specialists, the breadth of the topic area means that some intervention types may have been missed, especially those described with atypical terminology. We also recognise that studies examining whether an intervention can reduce the risk of bias, themselves run as interventional studies, may be difficult to publish or fund. Researchers are more likely to evaluate the interventions we sought to identify in the form of a Study Within a Trial (SWAT) [72], and our search strategy was not specifically designed for this situation. These factors may contribute to the very low number of eligible studies we ultimately identified.
Conclusion
After reviewing over 41,817 publications, reports, items, and grey literature sources, we found only three interventions, implemented during the study conduct stage, to reduce risk of bias in interventional studies. Existing research tends to focus more on statistical methods, reporting quality, and bias assessment. There is a lack of interventions that could be implemented at the more preliminary stage of study design to predict or reduce the risk of bias.
Data availability
No datasets were generated or analysed during the current study.
References
Bornmann L, Mutz R. Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. J Assoc Inf Sci Technol. 2015;66(11):2215–22. https://doi.org/10.1002/asi.23329.
Yordanov Y, Dechartres A, Porcher R, Boutron I, Altman DG, Ravaud P. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809. https://doi.org/10.1136/bmj.h809.
Higgins JP, et al. Cochrane handbook for systematic reviews of interventions. Wiley; 2019.
Jakobsen JC, et al. Direct-acting antivirals for chronic hepatitis C. Cochrane Database Syst Rev. 2017;6(6):CD012143. https://doi.org/10.1002/14651858.CD012143.pub2.
Brown RB. Public health lessons learned from biases in coronavirus mortality overestimation. Disaster Med Public Health Prep. 2020;14(3):364–71. https://doi.org/10.1017/dmp.2020.298.
Sterne JA, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366.
Sterne JA, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919. https://doi.org/10.1136/bmj.i4919.
Macleod MR, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101–4. https://doi.org/10.1016/S0140-6736(13)62329-6.
Salman RA-S, et al. Increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383(9912):176–85.
Ioannidis JP, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.
Chan A-W, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.
Chalmers ID, Glasziou PP. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9. https://doi.org/10.1016/S0140-6736(09)60329-9.
Glasziou PP, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76. https://doi.org/10.1016/S0140-6736(13)62228-X.
Dechartres A, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ. 2017;357:j2490.
Barcot O, et al. Risk of bias judgments for random sequence generation in Cochrane systematic reviews were frequently not in line with Cochrane handbook. BMC Med Res Methodol. 2019;19(1):170. https://doi.org/10.1186/s12874-019-0804-y.
Bilandzic A, Fitzpatrick T, Rosella L, Henry D. Risk of bias in systematic reviews of non-randomized studies of adverse cardiovascular effects of thiazolidinediones and cyclooxygenase-2 inhibitors: application of a new Cochrane risk of bias tool. PLoS Med. 2016;13(4):e1001987. https://doi.org/10.1371/journal.pmed.1001987.
Reeves BC, et al. Including non-randomized studies on intervention effects. In: Cochrane handbook for systematic reviews of interventions. 2019. p. 595–620.
Reeves BC. Principles of research: limitations of non-randomized studies. Surgery (Oxford). 2008;26(3):120–4. https://doi.org/10.1016/j.mpsur.2008.02.004.
Dhiman P, Lee H, Kirtley S, Collins GS. A systematic review showed more consideration is needed when conducting nonrandomized studies of interventions. J Clin Epidemiol. 2020;117:99–108. https://doi.org/10.1016/j.jclinepi.2019.09.027.
Chan AW, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ. 2013;346:e7586.
De Angelis C, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Arterioscler Thromb Vasc Biol. 2005;25(4):873–4. https://doi.org/10.1161/01.ATV.0000162428.48796.22.
Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–66.
Peters MD, Godfrey C, McInerney P, Baldini Soares C, Khalil H, Parker D. Scoping reviews. In: Joanna Briggs Institute Reviewer's Manual. 2017. p. 1–24.
Tricco AC, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73. https://doi.org/10.7326/m18-0850.
Hunter KE, et al. Searching clinical trials registers: guide for systematic reviewers. BMJ. 2022;377:e068791. https://doi.org/10.1136/bmj-2021-068791.
Clark J, Glasziou P, Del Mar C, Bannach-Brown A, Stehlik P, Scott AM. A full systematic review was completed in 2 weeks using automation tools: a case study. J Clin Epidemiol. 2020;121:81–90. https://doi.org/10.1016/j.jclinepi.2020.01.008.
Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. https://doi.org/10.1186/s13643-016-0384-4.
Covidence systematic review software. Melbourne: Veritas Health Innovation. Available: www.covidence.org.
Sarker IH. Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput Sci. 2021;2(6):420. https://doi.org/10.1007/s42979-021-00815-1.
Auerbach JD, et al. Mitigating adverse event reporting bias in spine surgery. J Bone Joint Surg Am. 2013;95(16):1450–6. https://doi.org/10.1016/S0021-9355(13)73761-5.
Bhandari M, et al. Randomized trial of reamed and unreamed intramedullary nailing of tibial shaft fractures. J Bone Joint Surg Am. 2008;90(12):2567–78. https://doi.org/10.2106/JBJS.G.01694.
High J, et al. Effects of an increased financial incentive on follow-up in an online, automated smoking cessation trial: a randomized controlled study within a trial. Nicotine Tob Res. 2024;26(9):1259–63. https://doi.org/10.1093/ntr/ntae068.
Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003;289(4):454–65. https://doi.org/10.1001/jama.289.4.454.
Perlis CS, Harwood M, Perlis RH. Extent and impact of industry sponsorship conflicts of interest in dermatology research. J Am Acad Dermatol. 2005;52(6):967–71. https://doi.org/10.1016/j.jaad.2005.01.020.
Dechartres A, Boutron I, Roy C, Ravaud P. Inadequate planning and reporting of adjudication committees in clinical trials: recommendation proposal. J Clin Epidemiol. 2009;62(7):695–702. https://doi.org/10.1016/j.jclinepi.2008.09.011.
Mahaffey KW, et al. Disagreements between central clinical events committee and site investigator assessments of myocardial infarction endpoints in an international clinical trial: review of the PURSUIT study. Curr Control Trials Cardiovasc Med. 2001;2(4):187–94. https://doi.org/10.1186/CVM-2-4-187.
Zhou C, et al. Loss to follow-up and associated factors in a cohort study among men who have sex with men. Zhonghua Liu Xing Bing Xue Za Zhi. 2013;34(8):788.
Janson SL, Alioto ME, Boushey HA. Attrition and retention of ethnically diverse subjects in a multicenter randomized controlled research trial. Control Clin Trials. 2001;22(6 Suppl 1):S236–43. https://doi.org/10.1016/S0197-2456(01)00171-4.
Howe LD, Tilling K, Galobardes B, Lawlor DA. Loss to follow-up in cohort studies: bias in estimates of socioeconomic inequalities. Epidemiology. 2013;24(1):1–9. https://doi.org/10.1097/EDE.0b013e31827623b1.
Sprague S, Leece P, Bhandari M, Tornetta P, Schemitsch E, Swiontkowski MF. Limiting loss to follow-up in a multicenter randomized trial in orthopedic surgery. Control Clin Trials. 2003;24(6):719–25. https://doi.org/10.1016/j.cct.2003.08.012.
Edwards PJ, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;2009(3):MR000008. https://doi.org/10.1002/14651858.MR000008.pub4.
Naughton F, et al. Randomised controlled trial of a just-in-time adaptive intervention (JITAI) smoking cessation smartphone app: the Quit Sense feasibility trial protocol. BMJ Open. 2021;11(4):e048204. https://doi.org/10.1136/bmjopen-2020-048204.
Näslund U, Grip L, Fischer-Hansen J, Gundersen T, Lehto S, Wallentin L. The impact of an end-point committee in a large multicentre, randomized, placebo-controlled clinical trial: results with and without the end-point committee’s final decision on end-points. Eur Heart J. 1999;20(10):771–7. https://doi.org/10.1053/euhj.1998.1351.
Leonardi S, et al. Comparison of investigator-reported and clinical event committee-adjudicated outcome events in GLASSY. Circ Cardiovasc Qual Outcomes. 2021;14(2):e006581. https://doi.org/10.1161/CIRCOUTCOMES.120.006581.
Spitzer E, et al. Independence of clinical events committees: a consensus statement from clinical research organizations. Am Heart J. 2022;248:120–9. https://doi.org/10.1016/j.ahj.2022.03.005.
Stuart EA, Bradshaw CP, Leaf PJ. Assessing the generalizability of randomized trial results to target populations. Prev Sci. 2015;16(3):475–85. https://doi.org/10.1007/s11121-014-0513-z.
Hardy P, Bell JL, Brocklehurst P. Evaluation of the effects of an offer of a monetary incentive on the rate of questionnaire return during follow-up of a clinical trial: a randomised study within a trial. BMC Med Res Methodol. 2016;16:82. https://doi.org/10.1186/s12874-016-0180-9.
Resnik DB. Bioethical issues in providing financial incentives to research participants. Medicoleg Bioeth. 2015;5:35–41. https://doi.org/10.2147/mb.S70416.
Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions. 2008.
Whiting PF, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36. https://doi.org/10.7326/0003-4819-155-8-201110180-00009.
Andaur Navarro CL, et al. Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review. BMJ. 2021;375:n2281. https://doi.org/10.1136/bmj.n2281.
Millard LA, Flach PA, Higgins JP. Machine learning to assist risk-of-bias assessments in systematic reviews. Int J Epidemiol. 2016;45(1):266–77. https://doi.org/10.1093/ije/dyv306.
Getz K, Hubbard RA, Linn KA. Performance of multiple imputation using modern machine learning methods in electronic health records data. Epidemiology. 2023;34(2):206–15. https://doi.org/10.1097/ede.0000000000001578.
Tan AC, Jiang I, Askie L, Hunter K, Simes RJ, Seidler AL. Prevalence of trial registration varies by study characteristics and risk of bias. J Clin Epidemiol. 2019;113:64–74. https://doi.org/10.1016/j.jclinepi.2019.05.009.
Hunter KE, Seidler AL, Askie LM. Prospective registration trends, reasons for retrospective registration and mechanisms to increase prospective registration compliance: descriptive analysis and survey. BMJ Open. 2018;8(3):e019983. https://doi.org/10.1136/bmjopen-2017-019983.
Scott A, Rucklidge JJ, Mulder RT. Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One. 2015;10(8):e0133718. https://doi.org/10.1371/journal.pone.0133718.
Chambers CD, Tzavella L. The past, present and future of registered reports. Nat Hum Behav. 2022;6(1):29–42. https://doi.org/10.1038/s41562-021-01193-7.
Beksinska ME, Joanis C, Smit JA, Pienaar J, Piaggio G. Using scratch card technology for random allocation concealment in a clinical trial with a crossover design. Clin Trials. 2013;10(1):125–30. https://doi.org/10.1177/1740774512465496.
Piaggio G, Elbourne D, Schulz KF, Villar J, Pinol APY, Gülmezoglu AM. The reporting of methods for reducing and detecting bias: an example from the WHO misoprostol third stage of labour equivalence randomised controlled trial. BMC Med Res Methodol. 2003;3(1):1–7. https://doi.org/10.1186/1471-2288-3-19.
Craig P. A new CONSORT extension should improve the reporting of randomized pilot and feasibility trials. J Clin Epidemiol. 2017;84:30–2. https://doi.org/10.1016/j.jclinepi.2017.01.009.
Tugwell P, Tovey D. PRISMA 2020. J Clin Epidemiol. 2021;134:A5–6. https://doi.org/10.1016/j.jclinepi.2021.04.008.
Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012;1(1):60. https://doi.org/10.1186/2046-4053-1-60.
Chan A-W, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–65. https://doi.org/10.1001/jama.291.20.2457.
Barnes C, Boutron I, Giraudeau B, Porcher R, Altman DG, Ravaud P. Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Med. 2015;13:221. https://doi.org/10.1186/s12916-015-0460-y.
Larson EL, Cortazal M. Publication guidelines need widespread adoption. J Clin Epidemiol. 2012;65(3):239–46.
Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010;8(1):1–6.
Simera I, Moher D, Hoey J, Schulz KF, Altman DG. The EQUATOR network and reporting guidelines: helping to achieve high standards in reporting health research studies. Maturitas. 2009;63(1):4–6. https://doi.org/10.1016/j.maturitas.2009.03.011.
Toelch U, Ostwald D. Digital open science—teaching digital tools for reproducible and transparent research. PLoS Biol. 2018;16(7):e2006022. https://doi.org/10.1371/journal.pbio.2006022.
Ross JS, Lehman R, Gross CP. The importance of clinical trial data sharing: toward more open science. Circ Cardiovasc Qual Outcomes. 2012;5(2):238–40. https://doi.org/10.1161/circoutcomes.112.965798.
Taichman DB, et al. Data sharing statements for clinical trials: a requirement of the International Committee of Medical Journal Editors. Ann Intern Med. 2017;167(1):63–5.
De Angelis C, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351(12):1250–1. https://doi.org/10.1056/NEJMe048225.
Treweek S, et al. Trial Forge Guidance 1: what is a study within a trial (SWAT)? Trials. 2018;19(1):139. https://doi.org/10.1186/s13063-018-2535-5.
Acknowledgements
We gratefully acknowledge Dr Anna Lene Seidler for her contributions to the study conception and design, and Dr Zhewei Miao for their contributions to the NLP fuzzy matching search.
Funding
No funding.
Author information
Authors and Affiliations
Contributions
Zhilin Ren. Conceptualization; Methodology; Investigation; Formal Analysis; Writing – Original Draft; Writing – Review & Editing. Zhilin Ren contributed to the study protocol, search strategy, review of search results, manuscript writing (all tables and figures), and revision of the manuscript. Angela Claire Webster. Conceptualization; Methodology; Writing – Review & Editing; Supervision. Angela Claire Webster was one of the supervisors of this study, made outstanding contributions to the study design, and was primarily responsible for the manuscript review. Kylie Elizabeth Hunter. Methodology; Investigation; Writing – Review & Editing. Kylie Elizabeth Hunter contributed to the search strategy and retrieval of publications for this study and was responsible for the manuscript review. Jiexin Zhang. Investigation; Writing – Review & Editing. Yi Yao. Investigation; Writing – Review & Editing. Ava Grace Tan-Koay. Methodology; Investigation; Writing – Review & Editing. Ava was the information consultant for this study, made a prominent contribution to the design of the search strategy and the study protocol, and was responsible for the revision and review of the manuscript. Aidan Christopher Tan. Conceptualization; Methodology; Investigation; Writing – Review & Editing; Supervision. Aidan Christopher Tan drafted the initial study protocol, designed the study methodology and search strategy, participated in the publications review, and was the primary supervisor of this study.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Ren, Z., Webster, A.C., Hunter, K.E. et al. Reducing risk of bias in interventional studies during their design and conduct: a scoping review. BMC Med Res Methodol 25, 85 (2025). https://doi.org/10.1186/s12874-025-02467-8
DOI: https://doi.org/10.1186/s12874-025-02467-8