100+ datasets found
  1. Why Even More Clinical Research Studies May Be False: Effect of Asymmetrical...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jun 2, 2023
    Cite
    Matthew James Shun-Shin; Darrel P. Francis (2023). Why Even More Clinical Research Studies May Be False: Effect of Asymmetrical Handling of Clinically Unexpected Values [Dataset]. http://doi.org/10.1371/journal.pone.0065323
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Matthew James Shun-Shin; Darrel P. Francis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background
    In medical practice, clinically unexpected measurements might quite properly be handled by the remeasurement, removal, or reclassification of patients. If these habits are not prevented during clinical research, how much of each is needed to sway an entire study?

    Methods and Results
    Believing there is a difference between groups, a well-intentioned clinician researcher addresses unexpected values. We tested how much removal, remeasurement, or reclassification of patients would be needed in most cases to turn an otherwise-neutral study positive. Remeasurement of 19 patients out of 200 per group was required to make most studies positive. Removal was more powerful: just 9 out of 200 was enough. Reclassification was most powerful, with 5 out of 200 enough. The larger the study, the smaller the proportion of patients needing to be manipulated to make the study positive: the percentages needing to be remeasured, removed, or reclassified fell from 45%, 20%, and 10% respectively for a 20-patient-per-group study to 4%, 2%, and 1% for an 800-patient-per-group study. Dot plots, but not bar charts, make the perhaps-inadvertent manipulations visible. Detection is possible using statistical methods such as the Tadpole test.

    Conclusions
    Behaviours necessary for clinical practice are destructive to clinical research. Even small amounts of selective remeasurement, removal, or reclassification can produce false positive results. Size matters: larger studies are proportionately more vulnerable. If observational studies permit selective unblinded enrolment, malleable classification, or selective remeasurement, then results are not credible. Clinical research is very vulnerable to “remeasurement, removal, and reclassification”, the 3 evil R's.
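The removal effect described in this abstract is easy to reproduce. The following is an illustrative sketch (my own construction, not the authors' code and not the Tadpole test): starting from two groups drawn from the same distribution, it removes the most "unexpectedly high" control values one at a time until Welch's t statistic crosses the nominal 5% threshold.

```python
# Illustrative sketch, not the authors' code: selectively removing
# "clinically unexpected" high values from the control group of an
# otherwise-neutral two-group study until it turns nominally positive.
import random
import statistics


def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5


def removals_needed(n=200, t_crit=1.96, seed=1):
    """Count how many of the highest control values must be removed
    before the t statistic exceeds the nominal significance threshold."""
    rng = random.Random(seed)
    treated = [rng.gauss(0, 1) for _ in range(n)]        # both groups drawn
    control = sorted(rng.gauss(0, 1) for _ in range(n))  # from N(0, 1)
    removed = 0
    while welch_t(treated, control) <= t_crit and len(control) > 2:
        control.pop()   # drop the most "unexpectedly high" control value
        removed += 1
    return removed
```

With 200 patients per group, a run like this typically needs only a handful of removals, the same order of magnitude as the 9 out of 200 reported above; the exact count depends on the random draw.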

  2. Characteristics of the randomized controlled clinical trials involved in...

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Lijuan Xu; Xuesi Wan; Zhimin Huang; Fangfang Zeng; Guohong Wei; Donghong Fang; Wanping Deng; Yanbing Li (2023). Characteristics of the randomized controlled clinical trials involved in this analysis. [Dataset]. http://doi.org/10.1371/journal.pone.0061387.t002
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Lijuan Xu; Xuesi Wan; Zhimin Huang; Fangfang Zeng; Guohong Wei; Donghong Fang; Wanping Deng; Yanbing Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    GFR, glomerular filtration rate; CCr, rate of creatinine clearance; ACEi, angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; RASi, renin-angiotensin system inhibitor; DM, diabetes mellitus; PKD, polycystic kidney disease; HBP, high blood pressure (hypertension).

  3. Impact of Inclusion of Industry Trial Results Registries as an Information...

    • plos.figshare.com
    doc
    Updated Jun 2, 2023
    Cite
    Regine Potthast; Volker Vervölgyi; Natalie McGauran; Michaela F. Kerekes; Beate Wieseler; Thomas Kaiser (2023). Impact of Inclusion of Industry Trial Results Registries as an Information Source for Systematic Reviews [Dataset]. http://doi.org/10.1371/journal.pone.0092067
    Available download formats: doc
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Regine Potthast; Volker Vervölgyi; Natalie McGauran; Michaela F. Kerekes; Beate Wieseler; Thomas Kaiser
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background
    Clinical trial results registries may contain relevant unpublished information. Our main aim was to investigate the potential impact of the inclusion of reports from industry results registries on systematic reviews (SRs).

    Methods
    We identified a sample of 150 eligible SRs in PubMed via backward selection. Eligible SRs investigated randomized controlled trials of drugs and included at least 2 bibliographic databases (original search date: 11/2009). We checked whether results registries of manufacturers and/or industry associations had also been searched. If not, we searched these registries for additional trials not considered in the SRs, as well as for additional data on trials already considered. We reanalysed the primary outcome and harm outcomes reported in the SRs and determined whether results had changed. A “change” was defined as either a new relevant result or a change in the statistical significance of an existing result. We performed a search update in 8/2013 and identified a sample of 20 eligible SRs to determine whether mandatory results registration from 9/2008 onwards in the public trial and results registry ClinicalTrials.gov had led to its inclusion as a standard information source in SRs, and whether the inclusion rate of industry results registries had changed.

    Results
    133 of the 150 SRs (89%) in the original analysis did not search industry results registries. For 23 (17%) of these SRs we found 25 additional trials and additional data on 31 trials already included in the SRs. This additional information was found for more than twice as many SRs of drugs approved from 2000 as approved beforehand. The inclusion of the additional trials and data yielded changes in existing results or the addition of new results for 6 of the 23 SRs. Of the 20 SRs retrieved in the search update, 8 considered ClinicalTrials.gov or a meta-registry linking to ClinicalTrials.gov, and 1 considered an industry results registry.

    Conclusion
    The inclusion of industry and public results registries as an information source in SRs is still insufficient and may result in publication and outcome reporting bias. In addition to an essential search in ClinicalTrials.gov, authors of SRs should consider searching industry results registries.

  4. Methods for Specifying the Target Difference in a Randomised Controlled...

    • plos.figshare.com
    doc
    Updated May 30, 2023
    Cite
    Jenni Hislop; Temitope E. Adewuyi; Luke D. Vale; Kirsten Harrild; Cynthia Fraser; Tara Gurung; Douglas G. Altman; Andrew H. Briggs; Peter Fayers; Craig R. Ramsay; John D. Norrie; Ian M. Harvey; Brian Buckley; Jonathan A. Cook (2023). Methods for Specifying the Target Difference in a Randomised Controlled Trial: The Difference ELicitation in TriAls (DELTA) Systematic Review [Dataset]. http://doi.org/10.1371/journal.pmed.1001645
    Available download formats: doc
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jenni Hislop; Temitope E. Adewuyi; Luke D. Vale; Kirsten Harrild; Cynthia Fraser; Tara Gurung; Douglas G. Altman; Andrew H. Briggs; Peter Fayers; Craig R. Ramsay; John D. Norrie; Ian M. Harvey; Brian Buckley; Jonathan A. Cook
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background
    Randomised controlled trials (RCTs) are widely accepted as the preferred study design for evaluating healthcare interventions. When the sample size is determined, a (target) difference is typically specified that the RCT is designed to detect. This provides reassurance that the study will be informative, i.e., should such a difference exist, it is likely to be detected with the required statistical precision. The aim of this review was to identify potential methods for specifying the target difference in an RCT sample size calculation.

    Methods and Findings
    A comprehensive systematic review of medical and non-medical literature was carried out for methods that could be used to specify the target difference for an RCT sample size calculation. The databases searched were MEDLINE, MEDLINE In-Process, EMBASE, the Cochrane Central Register of Controlled Trials, the Cochrane Methodology Register, PsycINFO, Science Citation Index, EconLit, the Education Resources Information Center (ERIC), and Scopus (for in-press publications); the search period was from 1966, or the earliest date covered, to between November 2010 and January 2011. Additionally, textbooks addressing the methodology of clinical trials and the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) tripartite guidelines for clinical trials were consulted. A narrative synthesis of methods was produced. Studies that described a method that could be used for specifying an important and/or realistic difference were included. The search identified 11,485 potentially relevant articles from the databases searched. Of these, 1,434 were selected for full-text assessment, and a further nine were identified from other sources. Fifteen clinical trial textbooks and the ICH tripartite guidelines were also reviewed. In total, 777 studies were included, and within them, seven methods were identified: anchor, distribution, health economic, opinion-seeking, pilot study, review of the evidence base, and standardised effect size.

    Conclusions
    A variety of methods are available that researchers can use for specifying the target difference in an RCT sample size calculation. Appropriate methods may vary depending on the aim (e.g., specifying an important difference versus a realistic difference), context (e.g., research question and availability of data), and underlying framework adopted (e.g., Bayesian versus conventional statistical approach). Guidance on the use of each method is given. No single method provides a perfect solution for all contexts.

    Please see later in the article for the Editors' Summary.
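As context for the "standardised effect size" method listed in this abstract, here is a hedged sketch of the familiar normal-approximation relationship between a standardised target difference and the required sample size per group, n = 2(z₁₋α/₂ + z₁₋β)²/δ². This is standard two-sample theory, not code taken from the DELTA review itself.

```python
# Standard normal-approximation sample-size formula for a two-sample
# comparison of means; illustrates the "standardised effect size" route,
# not the DELTA review's own material.
import math
from statistics import NormalDist


def n_per_group(delta, alpha=0.05, power=0.80):
    """Patients per group needed to detect a standardised difference
    `delta` (difference in means / SD) at two-sided level `alpha`
    with the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) / delta) ** 2)
```

For a "medium" standardised difference of 0.5 this gives 63 per group; the exact t-test answer is marginally larger, since the normal approximation is slightly optimistic.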

  5. Main variations in implementation of the methods.

    • figshare.com
    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Jenni Hislop; Temitope E. Adewuyi; Luke D. Vale; Kirsten Harrild; Cynthia Fraser; Tara Gurung; Douglas G. Altman; Andrew H. Briggs; Peter Fayers; Craig R. Ramsay; John D. Norrie; Ian M. Harvey; Brian Buckley; Jonathan A. Cook (2023). Main variations in implementation of the methods. [Dataset]. http://doi.org/10.1371/journal.pmed.1001645.t002
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS Medicine
    Authors
    Jenni Hislop; Temitope E. Adewuyi; Luke D. Vale; Kirsten Harrild; Cynthia Fraser; Tara Gurung; Douglas G. Altman; Andrew H. Briggs; Peter Fayers; Craig R. Ramsay; John D. Norrie; Ian M. Harvey; Brian Buckley; Jonathan A. Cook
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    RCI, reliable change index; VAS, visual analogue scale; WTP, willingness to pay per unit of effectiveness.

  6. Assessment of the value of the methods.

    • figshare.com
    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Jenni Hislop; Temitope E. Adewuyi; Luke D. Vale; Kirsten Harrild; Cynthia Fraser; Tara Gurung; Douglas G. Altman; Andrew H. Briggs; Peter Fayers; Craig R. Ramsay; John D. Norrie; Ian M. Harvey; Brian Buckley; Jonathan A. Cook (2023). Assessment of the value of the methods. [Dataset]. http://doi.org/10.1371/journal.pmed.1001645.t003
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS Medicine
    Authors
    Jenni Hislop; Temitope E. Adewuyi; Luke D. Vale; Kirsten Harrild; Cynthia Fraser; Tara Gurung; Douglas G. Altman; Andrew H. Briggs; Peter Fayers; Craig R. Ramsay; John D. Norrie; Ian M. Harvey; Brian Buckley; Jonathan A. Cook
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Assessment of the value of the methods.

  7. Data from: genRCT: a statistical analysis framework for generalizing RCT...

    • tandf.figshare.com
    • datasetcatalog.nlm.nih.gov
    txt
    Updated Nov 28, 2024
    Cite
    Dasom Lee; Shu Yang; Mark Berry; Tom Stinchcombe; Harvey Jay Cohen; Xiaofei Wang (2024). genRCT: a statistical analysis framework for generalizing RCT findings to real-world population [Dataset]. http://doi.org/10.6084/m9.figshare.25567157.v1
    Available download formats: txt
    Dataset updated
    Nov 28, 2024
    Dataset provided by
    Taylor & Francis (https://taylorandfrancis.com/)
    Authors
    Dasom Lee; Shu Yang; Mark Berry; Tom Stinchcombe; Harvey Jay Cohen; Xiaofei Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    World
    Description

    When evaluating the real-world treatment effect, analyses based on randomized clinical trials (RCTs) often introduce generalizability bias due to differences in risk factors between the trial participants and the real-world patient population. This lack of generalizability in RCT-only analyses can be addressed by leveraging observational studies with large sample sizes that are representative of the real-world population. A set of novel statistical methods, termed “genRCT”, for improving the generalizability of trial findings has been developed using calibration weighting, which enforces covariate balance between the RCT and the observational study. This paper aims to review statistical methods for generalizing RCT findings by harnessing information from large observational studies that represent real-world patients. Specifically, we discuss the choices of data sources and variables needed to meet key theoretical assumptions and principles. We introduce and compare estimation methods for continuous, binary, and survival endpoints. We showcase the use of the R package genRCT through a case study that estimates the average treatment effect of adjuvant chemotherapy for stage 1B non-small cell lung cancer patients represented by a large cancer registry.
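The calibration-weighting idea behind this framework can be sketched in a few lines. The following is a hypothetical one-covariate illustration of the principle (exponential tilting so the weighted trial mean matches the target population mean); it is not the genRCT package's actual estimator, which handles many covariates and multiple endpoint types.

```python
# Hypothetical sketch of calibration weighting: tilt weights
# w_i = exp(lam * x_i) on trial participants so that the weighted mean
# of a covariate x matches its mean in the target (real-world)
# population. Not the genRCT package's code; one covariate only.
import math


def calibration_weights(x_trial, target_mean, iters=50):
    """Solve for lam by Newton's method; the derivative of the weighted
    mean with respect to lam is the weighted variance."""
    lam = 0.0
    for _ in range(iters):
        w = [math.exp(lam * xi) for xi in x_trial]
        sw = sum(w)
        m = sum(wi * xi for wi, xi in zip(w, x_trial)) / sw            # weighted mean
        v = sum(wi * xi * xi for wi, xi in zip(w, x_trial)) / sw - m * m  # weighted variance
        if v < 1e-12:
            break
        lam -= (m - target_mean) / v    # Newton step toward the target mean
    # final normalized weights at the converged lam
    w = [math.exp(lam * xi) for xi in x_trial]
    sw = sum(w)
    return [wi / sw for wi in w]
```

The weighted treatment-effect estimate is then the weight-adjusted difference in outcomes between trial arms; the target mean must lie inside the range of the trial covariate for the weights to exist.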

  8. Characteristics of included randomised controlled trials.

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jun 7, 2023
    Cite
    Celeste E. Naude; Anel Schoonees; Marjanne Senekal; Taryn Young; Paul Garner; Jimmy Volmink (2023). Characteristics of included randomised controlled trials. [Dataset]. http://doi.org/10.1371/journal.pone.0100652.t005
    Available download formats: xls
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Celeste E. Naude; Anel Schoonees; Marjanne Senekal; Taryn Young; Paul Garner; Jimmy Volmink
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CVD = cardiovascular disease; No = number; NR = not reported; Rx = treatment; T2DM = type 2 diabetes mellitus; USA = United States of America; yrs = years. Note: In the case of multiple intervention groups, we selected the one pair of interventions (i.e., treatment and control) that was most relevant to this systematic review question.

  9. Study design characteristics stratified by type of trial population.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Adrian V. Hernandez; Vinay Pasupuleti; Abhishek Deshpande; Priyaleela Thota; Jaime A. Collins; Jose E. Vidal (2023). Study design characteristics stratified by type of trial population. [Dataset]. http://doi.org/10.1371/journal.pone.0063272.t003
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Adrian V. Hernandez; Vinay Pasupuleti; Abhishek Deshpande; Priyaleela Thota; Jaime A. Collins; Jose E. Vidal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    *Upper 95% confidence limit for the hazard ratio was no greater than 1.18; **upper 95% confidence limit for the hazard ratio was less than 1.40.

  10. Poor Reliability between Cochrane Reviewers and Blinded External Reviewers...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    doc
    Updated Jun 2, 2023
    Cite
    Susan Armijo-Olivo; Maria Ospina; Bruno R. da Costa; Matthias Egger; Humam Saltaji; Jorge Fuentes; Christine Ha; Greta G. Cummings (2023). Poor Reliability between Cochrane Reviewers and Blinded External Reviewers When Applying the Cochrane Risk of Bias Tool in Physical Therapy Trials [Dataset]. http://doi.org/10.1371/journal.pone.0096920
    Available download formats: doc
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Susan Armijo-Olivo; Maria Ospina; Bruno R. da Costa; Matthias Egger; Humam Saltaji; Jorge Fuentes; Christine Ha; Greta G. Cummings
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objectives
    To test the inter-rater reliability of the RoB tool applied to physical therapy (PT) trials by comparing ratings from Cochrane review authors with those of blinded external reviewers.

    Methods
    Randomized controlled trials (RCTs) in PT were identified by searching the Cochrane Database of Systematic Reviews for meta-analyses of PT interventions. RoB assessments were conducted independently by 2 reviewers blinded to the RoB ratings reported in the Cochrane reviews. Data on RoB assessments from Cochrane reviews and other characteristics of reviews and trials were extracted. Consensus assessments between the two reviewers were then compared with the RoB ratings from the Cochrane reviews. Agreement between Cochrane and blinded external reviewers was assessed using weighted kappa (κ).

    Results
    In total, 109 trials included in 17 Cochrane reviews were assessed. Inter-rater reliability on the overall RoB assessment between Cochrane review authors and blinded external reviewers was poor (κ = 0.02, 95% CI: −0.06 to 0.06). Inter-rater reliability on individual domains of the RoB tool was poor (median κ = 0.19), ranging from κ = −0.04 (“Other bias”) to κ = 0.62 (“Sequence generation”). There was also no agreement (κ = −0.29, 95% CI: −0.81 to 0.35) in the overall RoB assessment at the meta-analysis level.

    Conclusions
    Risk of bias assessments of RCTs using the RoB tool are not consistent across different research groups. Poor agreement was demonstrated not only at the trial level but also at the meta-analysis level. Results have implications for decision making, since different recommendations can be reached depending on the group analyzing the evidence. Improved guidelines to consistently apply the RoB tool and revisions to the tool for different health areas are needed.
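For reference, the weighted kappa statistic this study reports can be computed as follows. This is a generic Cohen's weighted kappa with quadratic disagreement weights (a common default); the abstract does not state which weighting scheme the authors used, so treat this as an illustration rather than their analysis code.

```python
# Generic Cohen's weighted kappa with quadratic disagreement weights;
# an illustration of the agreement statistic, not the study's own code.
def weighted_kappa(r1, r2, k):
    """r1, r2: equal-length lists of integer ratings in 0..k-1 from two
    raters. Returns 1 for perfect agreement, 0 for chance-level
    agreement, negative values for worse-than-chance agreement."""
    n = len(r1)

    def wt(i, j):
        return ((i - j) / (k - 1)) ** 2   # quadratic disagreement weight

    observed = sum(wt(a, b) for a, b in zip(r1, r2)) / n
    p1 = [r1.count(i) / n for i in range(k)]   # marginal rating
    p2 = [r2.count(j) / n for j in range(k)]   # distributions
    expected = sum(p1[i] * p2[j] * wt(i, j)
                   for i in range(k) for j in range(k))
    return 1.0 if expected == 0 else 1 - observed / expected
```

Identical rating lists give κ = 1, while systematically opposite ratings on a two-level scale give κ = −1, matching the sign conventions of the values quoted above.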

  11. Additional file 1 of Statistical methods leveraging the hierarchical...

    • springernature.figshare.com
    odt
    Updated Oct 29, 2024
    Cite
    Laetitia de Abreu Nunes; Richard Hooper; Patricia McGettigan; Rachel Phillips (2024). Additional file 1 of Statistical methods leveraging the hierarchical structure of adverse events for signal detection in clinical trials: a scoping review of the methodological literature [Dataset]. http://doi.org/10.6084/m9.figshare.27322674.v1
    Available download formats: odt
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Laetitia de Abreu Nunes; Richard Hooper; Patricia McGettigan; Rachel Phillips
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional file 1: Search strategy. The document AdditionalFile1.odt contains the full search strategy used for each database. The PRISMA-S guidelines [46] were followed to report the search strategy.

  12. Primary outcome results.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Adrian V. Hernandez; Vinay Pasupuleti; Abhishek Deshpande; Priyaleela Thota; Jaime A. Collins; Jose E. Vidal (2023). Primary outcome results. [Dataset]. http://doi.org/10.1371/journal.pone.0063272.t004
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Adrian V. Hernandez; Vinay Pasupuleti; Abhishek Deshpande; Priyaleela Thota; Jaime A. Collins; Jose E. Vidal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    RD = risk difference; MD = mean difference; HR = hazard ratio; PP = per protocol; ITT = intention-to-treat; NR = not reported; * = NI established; # = NI not established; $ = superior; % = study terminated early; ** = study inconclusive; ## = inferior; $$ = NI established by PP analysis, NI not established by ITT analysis.

  13. Adverse reactions during study.

    • figshare.com
    xls
    Updated May 31, 2023
    Cite
    Yong-Chul Kim; Ho Jun Chin; Ho Suk Koo; Suhnggwon Kim (2023). Adverse reactions during study. [Dataset]. http://doi.org/10.1371/journal.pone.0071545.t004
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Yong-Chul Kim; Ho Jun Chin; Ho Suk Koo; Suhnggwon Kim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Adverse reactions during study.

  14. Descriptive Statistics of ITT Population.

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Yang Wang; Baoyan Liu; Jinna Yu; Jiani Wu; Jing Wang; Zhishun Liu (2023). Descriptive Statistics of ITT Population. [Dataset]. http://doi.org/10.1371/journal.pone.0059449.t002
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Yang Wang; Baoyan Liu; Jinna Yu; Jiani Wu; Jing Wang; Zhishun Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Descriptive Statistics of ITT Population.

  15. Data from: Appendix S1 - Missing Data in Randomized Clinical Trials for...

    • plos.figshare.com
    doc
    Updated May 31, 2023
    Cite
    Mai A. Elobeid; Miguel A. Padilla; Theresa McVie; Olivia Thomas; David W. Brock; Bret Musser; Kaifeng Lu; Christopher S. Coffey; Renee A. Desmond; Marie-Pierre St-Onge; Kishore M. Gadde; Steven B. Heymsfield; David B. Allison (2023). Appendix S1 - Missing Data in Randomized Clinical Trials for Weight Loss: Scope of the Problem, State of the Field, and Performance of Statistical Methods [Dataset]. http://doi.org/10.1371/journal.pone.0006624.s001
    Available download formats: doc
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Mai A. Elobeid; Miguel A. Padilla; Theresa McVie; Olivia Thomas; David W. Brock; Bret Musser; Kaifeng Lu; Christopher S. Coffey; Renee A. Desmond; Marie-Pierre St-Onge; Kishore M. Gadde; Steven B. Heymsfield; David B. Allison
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Pharmaceutical obesity RCTs used to evaluate the scope of the missing data problem. (DOC, 0.27 MB)

  16. Characteristics of Cochrane systematic reviews on physical therapy...

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Susan Armijo-Olivo; Maria Ospina; Bruno R. da Costa; Matthias Egger; Humam Saltaji; Jorge Fuentes; Christine Ha; Greta G. Cummings (2023). Characteristics of Cochrane systematic reviews on physical therapy interventions that provided trial data for the analysis of inter-rater reliability of RoB. [Dataset]. http://doi.org/10.1371/journal.pone.0096920.t001
    Available download formats: xls
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Susan Armijo-Olivo; Maria Ospina; Bruno R. da Costa; Matthias Egger; Humam Saltaji; Jorge Fuentes; Christine Ha; Greta G. Cummings
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    HRQoL = health-related quality of life; PT = physical therapy; RCT = randomized controlled trial; RoB = risk of bias; ROM = range of motion; WOMAC = Western Ontario and McMaster Universities Osteoarthritis Index.

  17. Additional file 2 of Statistical methods leveraging the hierarchical...

    • springernature.figshare.com
    odt
    Updated Oct 29, 2024
    Cite
    Laetitia de Abreu Nunes; Richard Hooper; Patricia McGettigan; Rachel Phillips (2024). Additional file 2 of Statistical methods leveraging the hierarchical structure of adverse events for signal detection in clinical trials: a scoping review of the methodological literature [Dataset]. http://doi.org/10.6084/m9.figshare.27322677.v1
    Available download formats: odt
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Laetitia de Abreu Nunes; Richard Hooper; Patricia McGettigan; Rachel Phillips
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional file 2: Information Sources. The document AdditionalFile2.odt contains the list of all sources searched, the dates of the searches and the interfaces used.

  18. Appendix S1 - Accumulating Research: A Systematic Account of How Cumulative...

    • plos.figshare.com
    docx
    Updated Jun 3, 2023
    Cite
    Mike Clarke; Anne Brice; Iain Chalmers (2023). Appendix S1 - Accumulating Research: A Systematic Account of How Cumulative Meta-Analyses Would Have Provided Knowledge, Improved Health, Reduced Harm and Saved Resources [Dataset]. http://doi.org/10.1371/journal.pone.0102670.s001
    Available download formats: docx
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Mike Clarke; Anne Brice; Iain Chalmers
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cumulative meta-analyses of studies of the effects of healthcare interventions. (DOCX)

  19. Data from: Baseline characteristics of participants.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Mette Toftager; Lars B. Christiansen; Annette K. Ersbøll; Peter L. Kristensen; Pernille Due; Jens Troelsen (2023). Baseline characteristics of participants. [Dataset]. http://doi.org/10.1371/journal.pone.0099369.t002
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Mette Toftager; Lars B. Christiansen; Annette K. Ersbøll; Peter L. Kristensen; Pernille Due; Jens Troelsen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    M (SD) unless otherwise stated. N = 797. (a) Sex- and age-standardized cut points [22]. (b) Household income lower than 50% of the sample median income. (c) Both parents native Danish. n varies due to different data sources.

  20. File S1 - Generation of “Virtual” Control Groups for Single Arm Prostate...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    doc
    Updated Jun 1, 2023
    Cite
    Zhenyu Jia; Michael B. Lilly; James A. Koziol; Xin Chen; Xiao-Qin Xia; Yipeng Wang; Douglas Skarecky; Manuel Sutton; Anne Sawyers; Herbert Ruckle; Philip M. Carpenter; Jessica Wang-Rodriguez; Jun Jiang; Mingsen Deng; Cong Pan; Jian-guo Zhu; Christine E. McLaren; Michael J. Gurley; Chung Lee; Michael McClelland; Thomas Ahlering; Michael W. Kattan; Dan Mercola (2023). File S1 - Generation of “Virtual” Control Groups for Single Arm Prostate Cancer Adjuvant Trials [Dataset]. http://doi.org/10.1371/journal.pone.0085010.s001
    Available download formats: doc
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zhenyu Jia; Michael B. Lilly; James A. Koziol; Xin Chen; Xiao-Qin Xia; Yipeng Wang; Douglas Skarecky; Manuel Sutton; Anne Sawyers; Herbert Ruckle; Philip M. Carpenter; Jessica Wang-Rodriguez; Jun Jiang; Mingsen Deng; Cong Pan; Jian-guo Zhu; Christine E. McLaren; Michael J. Gurley; Chung Lee; Michael McClelland; Thomas Ahlering; Michael W. Kattan; Dan Mercola
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary material including Nonlinear curve fitting and estimation of time to relapse, Figure S1 and Table S1. (DOC)
