Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The primary aim of this study was to examine discrepancies between the primary and secondary outcomes reported in registered versus published randomized controlled trials in high-impact-factor obesity journals. The secondary aims were to address whether outcome reporting discrepancies favored statistically significant outcomes, whether there was a correlation between funding source and likelihood of outcome reporting bias, and whether there were temporal trends in outcome reporting bias. We also catalogued any incidental findings during data extraction and analysis that warranted further examination. To accomplish these aims, we performed a methodological systematic review of the four highest-impact-factor obesity journals from 2013 to 2015. This study did not meet the regulatory definition of human subjects research according to 45 CFR 46.102(d) and (f) of the Department of Health and Human Services’ Code of Federal Regulations and was not subject to Institutional Review Board oversight. We consulted Li et al.; the Cochrane Handbook for Systematic Reviews of Interventions; and the National Academies of Sciences, Engineering, and Medicine’s (previously the Institute of Medicine) Standards for Systematic Reviews to ensure best practices in data extraction and management. We applied PRISMA guideline items 1, 3, 5-11, 13, 16-18, and 24-27 to ensure reporting quality for systematic reviews, as well as SAMPL guidelines for reporting descriptive statistics. Prior to initiation, we registered this study with the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR), registration number UMIN000022576 (receipt number R000025787).
After screening, the citations were imported into the Agency for Healthcare Research and Quality’s Systematic Review Data Repository (SRDR) for data extraction. Two investigators (J.R., A.R.) independently reviewed the full-text articles for each study and extracted data using SRDR. At least once per day, these investigators traded articles and repeated each other’s data extraction, allowing each to cross-validate the other’s work and improving the accuracy and efficiency of data extraction. Any disagreements were resolved by discussion between the pair. A third-party reviewer (M.V.) was available for further adjudication but was not needed. We extracted the following items from the published randomized controlled trials: primary outcome(s), secondary outcome(s), date of subject enrollment, trial registry database and registration number, timing of assessment of primary outcomes (e.g., change in weight at 5 months, change in HbA1c at 6 weeks), sample size, any discrepancies between publication and registry disclosed by the author in the publication, and funding source. For the purposes of our study, we classified funding source into the following categories: (1) private (e.g., Mayo Clinic or philanthropic), (2) public (government or university), (3) industry/corporate (e.g., GlaxoSmithKline), (4) university hospital, (5) mixed funding source, or (6) undisclosed funding source. For RCTs that reported multiple primary and secondary outcomes, we recorded each explicitly stated outcome. If a primary outcome was not explicitly stated as such in the publication, the outcome stated in the sample size estimation was used. If none was explicitly stated in the text or in the sample size calculation, the article was excluded from the study. When sample size was not explicitly stated in the article, we used the “number randomized”.
If authors failed to differentiate between primary and secondary outcomes in the publication, these non-delineated outcomes were coded as “unable to assess” and excluded from comparison. The clinical trial registry or registration number was obtained from each published RCT, if stated, during full-text review/data extraction. If a registration number was listed in the RCT without a trial registry, a search was made of ClinicalTrials.gov, the International Standard Randomized Controlled Trial Number Register (ISRCTNR), the World Health Organization’s International Clinical Trials Registry Platform (ICTRP), and any country-specific clinical trial registry identified in the publication. The following characteristics were used to match each registered study to its publication: title, author(s), keywords, country of origin, sponsoring organization, description of study intervention, projected sample size, and dates of enrollment. When a publication did not explicitly state information regarding registration of a study, the authors were contacted via email using a standardized template and asked about registration status. If after 2 days there was no reply, a second email was sent. If there was no reply from the authors 1 week after the second email, the study was considered unregistered and excluded from the study. Each registered study was located within its respective registry, and data were extracted individually by 2 independent investigators (J.R., A.R.). Prior to registry data extraction, both investigators underwent trial registry training, including videos on how to perform searches and access the history of changes in ClinicalTrials.gov and the WHO trial registry, a tutorial video on locating desired content within a trial registry entry, and access to a list of all WHO-approved trial registries; each investigator also had to successfully complete a sample data extraction from an unrelated study registry entry.
The following data were extracted using a standardized form on SRDR: date of trial registration, date range of subject enrollment, original primary registered outcome(s), final primary registered outcome(s), date of initial primary outcome registration, secondary registered outcome(s), sample size (if listed), and funding source (if disclosed), using the previously defined categories. Although registration quality was not the focus of this study, registered trials lacking a clearly stated primary outcome and timing of assessment were excluded from consideration. Studies found to be registered after the end of subject enrollment were excluded due to the inability to adequately assess outcome reporting bias. To be approved by the WHO, a trial registry must meet ICMJE criteria, including documentation of when changes are made to a particular study’s registry entries. If an included study employed this feature, we recorded both the primary outcome from the time of initial registration and the primary outcome listed in the final version of the registry entry. Departing from the methods of previous authors in this field of research, we did not exclude studies in WHO-approved registries that did not time-stamp the date of initial primary outcome registration. Per the International Standards for Clinical Trial Registries, section 2.4, WHO-approved registries are required to time-stamp registry-approved changes to any registered trial, including data additions, deletions, and revisions. Therefore, if a WHO-approved trial registry did not display a history of changes, we recorded the date the registry application was approved as the date of initial primary outcome registration, and the listed primary outcome was recorded as both the initial registered and final registered primary outcome. In non-WHO-approved trial registries, if a date of initial primary outcome registration was not listed, the trial was excluded from our study.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
Obtaining informed consent is an ethical imperative when conducting research involving human participants. However, participants’ actual level of understanding is often difficult and impractical to assess in operational research. One setting where the stakes for understanding are high, owing to the potential consequences of research participation, is randomised controlled trials (RCTs), which test the effectiveness and safety of medical treatments. However, ethics committees’ gatekeeping mechanisms often mean that legalese is mandated in consent forms, which can work against patients’ understanding. The goal of this text-based study was, therefore, to build and analyse a corpus of patient information sheets (PIS) and consent forms (CF) from RCTs conducted in the UK. This data collection consists of 27 participant information sheets and 23 consent forms freely available online. Materials were collected following a comprehensive search for publicly available ethical materials from randomised controlled trials targeting cancer (2007-17), primarily by systematically searching key online databases and monograph series. These corpora, which to our knowledge differ from any existing collection of medical English, could further research on information provision for patients in RCTs specifically and in healthcare settings more generally, in addition to advancing the study of the language of written ethical documents. Secondary analyses of these data could be undertaken using techniques from corpus linguistics, computational linguistics, and/or discourse analysis, for example, to investigate the nature and complexity of the language used and/or broach participants’ understanding of ethical principles or preferences for how different language functions are expressed. All ethical materials that comprise the corpora were freely obtained from the public domain via the web searches described.
The ethical materials that make up these corpora were drawn from a total of 28 distinct RCTs. The data and metadata are free to download (open access) on the UK Data Archive’s ReShare, without needing to register on the site, at the following link: http://reshare.ukdataservice.ac.uk/853933/
Citation: Isaacs, Talia; Murdoch, Jamie; Demjén, Zsófia; Stevenson, Fiona (2019). Corpora of patient information sheets and consent forms for UK cancer trials 2007-2017. [Data Collection]. Colchester, Essex: UK Data Service. doi: 10.5255/UKDA-SN-853933
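The corpus-linguistic secondary analyses suggested above might begin with simple lexical proxies for language complexity. A minimal sketch in Python; the sample sentence is hypothetical consent-form wording invented for illustration, not text drawn from these corpora:

```python
import re

def complexity_metrics(text):
    """Crude lexical measures of the kind used in corpus-linguistic
    secondary analyses: token count, type-token ratio, mean sentence length."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "mean_sentence_length": len(tokens) / len(sentences),
    }

# Hypothetical consent-form wording, for illustration only (not from the corpora).
sample = ("You are free to withdraw from the trial at any time. "
          "Withdrawal will not affect the standard of care you receive.")
print(complexity_metrics(sample))
```

Real analyses would of course use a proper tokenizer and validated readability measures; this only illustrates the kind of quantity such a corpus supports.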
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Laparoscopic hysterectomy is a commonly performed procedure; however, one high-risk complication is vaginal cuff dehiscence. Currently, there is no standardization regarding thread material or suturing technique for vaginal cuff closure (VCC). Therefore, this study aimed to compare extracorporeal and intracorporeal suturing techniques for vaginal cuff closure using a pelvic trainer model. Eighteen experts in laparoscopic surgery performed vaginal cuff closures with interrupted sutures using intracorporeal knotting, with interrupted sutures using extracorporeal knotting, and with continuous, unidirectional barbed sutures. Using an artificial tissue suturing pad in a pelvic trainer, experts performed vaginal cuff closure with each technique according to block randomization. Task completion time, tension resistance, and the number of errors were recorded. After completing the exercises, participants answered a questionnaire concerning the suturing techniques and their performance. Experts completed suturing more quickly (both p < 0.001) and with improved tension resistance (both p < 0.001) when using barbed sutures compared with intracorporeal and extracorporeal knotting. Furthermore, the intracorporeal knotting technique was performed faster (p = 0.04) and achieved greater tension resistance (p = 0.023) than extracorporeal knotting. The number of laparoscopic surgeries performed per year was positively correlated with vaginal cuff closure duration (p = 0.007). Barbed suturing was a time-saving technique with improved tension resistance for vaginal cuff closure.

Methods

Study design: We conducted a prospective, randomized controlled study at the University Hospital Basel from 1 November 2021 to 30 April 2022. Eighteen participants were randomized in blocks of three. All participants performed interrupted intracorporeal, interrupted extracorporeal, and continuous barbed suturing for VCC in the order given by their randomization.
For the primary endpoint, the time required to complete a task was recorded during each participant’s performance. Following the task, the secondary endpoints were measured: (i) precision, (ii) knot strength, (iii) cuff closure spreadability, and (iv) number of mistakes made. Before and after completing the tasks, participants were given questionnaires: the first asked about their background, the second about their experience while completing the exercises (Figure 1).

Power analysis: The required sample size was estimated using a power analysis based on two-sided, paired t-tests with power set to 90% and the significance level at 0.05. While a minimum of 18 subjects was indicated, the statistical distribution of the task duration time was unknown; the calculation was therefore a pragmatic approximation.

Study population: In total, 18 experts were successfully recruited with a dropout rate of zero, and 54 measurements were obtained. Qualifying as an expert required being a surgeon with more than five years of operative experience and more than thirty laparoscopic interventions per year. Study participants were recruited from one tertiary and three secondary hospitals. All methods were carried out in accordance with the CONSORT statement guidelines. A formal IRB certification of exemption (Req-2021-01075) was provided by the ethics committee of Northwest and Central Switzerland (EKNZ) on September 22, 2021, confirming that the research project fulfilled the general ethical and scientific standards for research with human subjects. All participants gave their written, informed consent to participate in the study. The anonymization of personal data was guaranteed.

Instrument set-up: All exercises were carried out on a box trainer.
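A sample size calculation of the kind described can be approximated with the standard normal-approximation formula for a two-sided paired t-test, n ≈ ((z₁₋α/₂ + z₁₋β)/d)². A minimal Python sketch; the effect size d = 0.8 is a hypothetical value (the study’s assumed effect size is not reported here), and the exact noncentral-t answer is slightly larger for small samples:

```python
import math
from statistics import NormalDist

def paired_t_n(d, alpha=0.05, power=0.90):
    """Sample size for a two-sided paired t-test via the normal approximation
    n ~ ((z_{1-alpha/2} + z_{power}) / d)^2; the exact noncentral-t result
    is slightly larger for small n."""
    z = NormalDist()
    return math.ceil(((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2)

# d = 0.8 is a hypothetical (large) standardized effect size, assumed here
# purely for illustration.
n = paired_t_n(0.8)
```

With these assumptions the formula lands in the high teens, consistent with the pragmatic minimum of 18 subjects reported above.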
The endoscopy tower was equipped with a 24-inch monitor, a Storz Hopkins II 10 mm, 0° telescope with a 300 W Xenon Nova light source, and an Image 1 H3-Z Full HD camera (Karl Storz SE & Co., Tuttlingen, Germany). Two access points equivalent to the lateral ancillary trocar entry points were used for the instruments. Two needle holders (Geyl Medical 801.023), laparoscopic scissors, and a closed-jaw knot pusher (Karl Storz 26596 D, closed jaw end) were used.

Exercises: The colpotomy model was made of mesh-augmented silicone with a similar shape and size to a real colpotomy and was set up on the posterior wall of the box trainer. A brief instructional video showed the three different suturing techniques. After a short individual warm-up (20 minutes maximum), the experts began the different suturing techniques according to their randomization, completing one run per suture type for three runs in total.

A. Vaginal cuff closure with intracorporeal interrupted suturing: Suture A was a closure technique using three interrupted figure-eight sutures and intracorporeal knotting with a polyfilament thread (Vicryl, polyglactin 910, Johnson & Johnson). These three sutures were performed with a surgeon’s knot, i.e., the knot was secured with three loops (Video 1).

B. Vaginal cuff closure with extracorporeal interrupted suturing: Suture B used the same closure technique as suture A but with extracorporeal knotting of a polyfilament thread (Vicryl, polyglactin 910, Johnson & Johnson). The three interrupted figure-eight sutures were performed with a surgeon’s knot; the knots were tied with one hand and tightened with a knot pusher (Video 2).

C. Vaginal cuff closure with barbed continuous suturing: Suture C was a continuous, unidirectional barbed suture made with V-Loc™ (180 Absorbable Wound Closure Device; Covidien, Mansfield, MA, USA). The suture was run unidirectionally.
Thus, after the first stitch, the thread was pulled through the loop. After 5 more stitches from right to left, the thread was cut at the end of the colpotomy without extra anchoring stitches (Video 3).

Questionnaires: Participants answered questionnaires before and after the exercises. The questionnaire given before the exercises covered general participant characteristics, including sex, age, whether and how often they played video games, what types of sports and instruments they played, and their background regarding surgical and technical skills. After completing the exercises, participants answered a questionnaire on how they felt, both mentally and physically, about their experience with the different suturing techniques.

Statistical analyses: Descriptive statistics are presented as counts and frequencies for categorical data. For metric variables, means with standard deviations, medians, and interquartile ranges were used. Linear mixed-effects models were used to predict spreading capacity with technique and run as predictor variables; results are presented as mean differences. For total run time and changes in knot strength, the variables were log-transformed and the results are presented as geometric mean ratios. A p-value < 0.05 was considered significant. The statistical software R (version 4.1.3) was used for the analyses.
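A geometric mean ratio of the kind reported for the log-transformed run times back-transforms a difference of log means. A minimal sketch in Python, using hypothetical completion times rather than the study’s data:

```python
import math
from statistics import mean

def geometric_mean_ratio(times_a, times_b):
    """Back-transformed comparison of log-transformed durations:
    exp(mean(log a) - mean(log b)), i.e. the ratio of geometric means."""
    return math.exp(mean(map(math.log, times_a)) - mean(map(math.log, times_b)))

# Hypothetical completion times in seconds (illustrative, not the study's data).
barbed = [100, 120, 90]
intracorporeal = [200, 180, 220]
gmr = geometric_mean_ratio(barbed, intracorporeal)  # < 1 means barbed is faster
```

A ratio of, say, 0.5 would read as “barbed closures took about half as long as intracorporeal closures on the multiplicative scale”.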
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Includes: BetterHomesTO codes and themes developed through reflexive thematic analysis; conversational surveys; document review; online survey results; report with findings.
Methods: document review, conversational surveys featuring closed and open questions (transcribed), online survey, and participant observation of meetings.
Reflexive thematic analytical approach: an initial set of primary, semantic codes was drawn from the literature review and aligned with the evaluation framework principles and criteria. Through co-production with study participants, a set of secondary codes emerged. In this phase, I coded the entire dataset in hard copy, then collated the codes and relevant data extracts in preparation for later stages of analysis (see Braun et al., 2019).
Ethics: the research detailed here was conducted with human subjects, approval for which was granted by the University of Toronto’s Research Ethics Board (REB) under protocol no. 00037210.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
This study employed a qualitative method to achieve the research objectives. It included 20 semi-structured interviews with individual HR managers and operational (non-HR) managers/supervisors from four organisations in the IT industry in Australia. Primary data for this study were collected through these interviews. The research also drew on secondary data from policy documents on workforce analytics and remote working.
Secondary data derived from the Human Connectome Project, including the subject list used for the current study, the preprocessed resting-state functional connectivity, and the behavioral prediction accuracy for African Americans and white Americans in this dataset across multiple data splits. The data have been de-identified. Those who want to use these data to replicate our study should follow the data usage agreement of the Human Connectome Project. For data protection purposes, behavioral scores and phenotypic information are not released in this repository; one should apply to the Human Connectome Project for permission to access such data.
Archive of data relevant to gerontological and aging research, used to advance research on aging. Subjects include demographic, social, economic, and psychological characteristics of older adults; physical health and functioning of older adults; and the health care needs of older adults. NACDA staff represent a team of professional researchers, archivists, and technicians who work together to obtain, process, distribute, and promote data relevant to aging research.
This scoping review will systematically map existing literature on consensus methodologies (Delphi, modified Delphi, nominal group technique, RAND/UCLA) in health research, identify gaps, and propose a checklist to standardize future studies. We will include studies using consensus methods with human participants, written in English, and exclude articles with incomplete information, meeting abstracts, or congress reports. We will search databases including Medline, Embase, SciELO, LILACS, and Scopus using relevant search terms. Two independent reviewers will screen titles and abstracts, with full-text review for eligible studies. Key information will be extracted on study characteristics and consensus process phases. We will use the CASP checklist to assess risk of bias and study quality, and perform a narrative synthesis of compliance with proposed checklist items. The review will develop a standardized checklist to guide future consensus studies, ensuring methodological rigor and transparency. The study follows the Declaration of Helsinki principles, uses secondary data posing no risk to individuals, and will disseminate findings via peer-reviewed publications and conferences. The project has no external funding and is supported by the researchers' own resources.
This dataset originates from a series of experimental studies titled “Tough on People, Tolerant to AI? Differential Effects of Human vs. AI Unfairness on Trust”. The project investigates how individuals respond to unfair behavior (distributive, procedural, and interactional unfairness) enacted by artificial intelligence versus human agents, and how such behavior affects cognitive and affective trust.

1. Experiment 1a: The Impact of AI vs. Human Distributive Unfairness on Trust
Overview: This dataset comes from an experimental study aimed at examining how individuals respond, in terms of cognitive and affective trust, when distributive unfairness is enacted by either an artificial intelligence (AI) agent or a human decision-maker. Experiment 1a specifically focuses on the main effect of the type of decision-maker on trust.
Data Generation and Processing: The data were collected through Credamo, an online survey platform. Initially, 98 responses were gathered from students at a university in China; additional student participants were recruited via Credamo to supplement the sample. Attention-check items were embedded in the questionnaire, and participants who failed them were automatically excluded in real time. Data collection continued until 202 valid responses were obtained. SPSS software was used for data cleaning and analysis.
Data Structure and Format: The data file is named “Experiment1a.sav” and is in SPSS format. It contains 28 columns and 202 rows, where each row corresponds to one participant. Columns represent measured variables, including grouping and randomization variables; one manipulation check item; four items measuring distributive fairness perception; six items on cognitive trust; five items on affective trust; three items for honesty checks; and four demographic variables (gender, age, education, and grade level).
The final three columns contain computed means for distributive fairness, cognitive trust, and affective trust.
Additional Information: No missing data are present. All variable names are labeled in English abbreviations to facilitate further analysis. The dataset can be opened directly in SPSS or exported to other formats.

2. Experiment 1b: The Mediating Role of Perceived Ability and Benevolence (Distributive Unfairness)
Overview: This dataset originates from an experimental study designed to replicate the findings of Experiment 1a and further examine the potential mediating roles of perceived ability and perceived benevolence.
Data Generation and Processing: Participants were recruited via the Credamo online platform. Attention-check items were embedded in the survey to ensure data quality. Data were collected using a rolling recruitment method, with invalid responses removed in real time. A total of 228 valid responses were obtained.
Data Structure and Format: The dataset is stored in a file named Experiment1b.sav in SPSS format and can be opened directly in SPSS software. It consists of 228 rows and 40 columns. Each row represents one participant’s data record, and each column corresponds to a different measured variable. Specifically, the dataset includes: random assignment and grouping variables; one manipulation check item; four items measuring perceived distributive fairness; six items on perceived ability; five items on perceived benevolence; six items on cognitive trust; five items on affective trust; three attention-check items; and three demographic variables (gender, age, and education). The last five columns contain the computed mean scores for perceived distributive fairness, ability, benevolence, cognitive trust, and affective trust.
Additional Notes: There are no missing values in the dataset. All variables are labeled using standardized English abbreviations to facilitate reuse and secondary analysis.
The file can be analyzed directly in SPSS or exported to other formats as needed.

3. Experiment 2a: Differential Effects of AI vs. Human Procedural Unfairness on Trust
Overview: This dataset originates from an experimental study aimed at examining whether individuals respond differently, in terms of cognitive and affective trust, when procedural unfairness is enacted by artificial intelligence versus human decision-makers. Experiment 2a focuses on the main effect of the decision agent on trust outcomes.
Data Generation and Processing: Participants were recruited via the Credamo online survey platform from two universities located in different regions of China. A total of 227 responses were collected; after excluding those who failed the attention-check items, 204 valid responses were retained for analysis. Data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in a file named Experiment2a.sav in SPSS format and can be opened directly in SPSS software. It contains 204 rows and 30 columns. Each row represents one participant’s response record, while each column corresponds to a specific variable. Variables include: random assignment and grouping; one manipulation check item; seven items measuring perceived procedural fairness; six items on cognitive trust; five items on affective trust; three attention-check items; and three demographic variables (gender, age, and education). The final three columns contain computed average scores for procedural fairness, cognitive trust, and affective trust.
Additional Notes: The dataset contains no missing values. All variables are labeled using standardized English abbreviations to facilitate reuse and secondary analysis.
The file can be directly analyzed in SPSS or exported to other formats as needed.

4. Experiment 2b: Mediating Role of Perceived Ability and Benevolence (Procedural Unfairness)
Overview: This dataset comes from an experimental study designed to replicate the findings of Experiment 2a and to further examine the potential mediating roles of perceived ability and perceived benevolence in shaping trust responses under procedural unfairness.
Data Generation and Processing: Participants were working adults recruited through the Credamo online platform. A rolling data collection strategy was used, in which responses failing attention checks were excluded in real time. The final dataset includes 235 valid responses. All data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in a file named Experiment2b.sav, which is in SPSS format and can be opened directly using SPSS software. It contains 235 rows and 43 columns. Each row corresponds to a single participant, and each column represents a specific measured variable. These include: random assignment and group labels; one manipulation check item; seven items measuring procedural fairness; six items for perceived ability; five items for perceived benevolence; six items for cognitive trust; five items for affective trust; three attention-check items; and three demographic variables (gender, age, education). The final five columns contain the computed average scores for procedural fairness, perceived ability, perceived benevolence, cognitive trust, and affective trust.
Additional Notes: There are no missing values in the dataset. All variables are labeled using standardized English abbreviations to support future reuse and secondary analysis. The dataset can be directly analyzed in SPSS and easily converted into other formats if needed.

5. Experiment 3a: Effects of AI vs. Human Interactional Unfairness on Trust
Overview: This dataset comes from an experimental study that investigates how interactional unfairness, when enacted by either artificial intelligence or human decision-makers, influences individuals’ cognitive and affective trust. Experiment 3a focuses on the main effect of the decision-maker type under interactional unfairness conditions.
Data Generation and Processing: Participants were college students recruited from two universities in different regions of China through the Credamo survey platform. After excluding responses that failed attention checks, a total of 203 valid cases were retained from an initial pool of 223 responses. All data were processed and analyzed using SPSS software.
Data Structure and Format: The dataset is stored in the file named Experiment3a.sav, in SPSS format and compatible with SPSS software. It contains 203 rows and 27 columns. Each row represents a single participant, while each column corresponds to a specific measured variable. These include: random assignment and condition labels; one manipulation check item; four items measuring interactional fairness perception; six items for cognitive trust; five items for affective trust; three attention-check items; and three demographic variables (gender, age, education). The final three columns contain computed average scores for interactional fairness, cognitive trust, and affective trust.
Additional Notes: There are no missing values in the dataset. All variable names are provided using standardized English abbreviations to facilitate secondary analysis.
The data can be directly analyzed using SPSS and exported to other formats as needed.6 Experiment 3b: The Mediating Role of Perceived Ability and Benevolence (Interactional Unfairness)Overview: This dataset comes from an experimental study designed to replicate the findings of Experiment 3a and further examine the potential mediating roles of perceived ability and perceived benevolence under conditions of interactional unfairness.Data Generation and Processing: Participants were working adults recruited via the Credamo platform. Attention check questions were embedded in the survey, and responses that failed these checks were excluded in real time. Data collection proceeded in a rolling manner until a total of 227 valid responses were obtained. All data were processed and analyzed using SPSS software.Data Structure and Format: The dataset is stored in the file named Experiment3b.sav, in SPSS format and compatible with SPSS software. It includes 227 rows and
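As the dataset descriptions above note, the final columns of each file hold computed average scores for each measured scale. A minimal sketch of that computation using pandas; the item column names (PF1-PF3, CT1, CT2) and values are our own placeholders, not the datasets' actual variable labels, and in practice the .sav files would first be read with an SPSS-capable reader such as pyreadstat:

```python
import pandas as pd

# Hypothetical item columns standing in for the dataset's SPSS variables;
# a real workflow would first load e.g. Experiment2b.sav with a .sav reader.
df = pd.DataFrame({
    "PF1": [5, 4, 3],  # procedural fairness items (illustrative values)
    "PF2": [4, 4, 2],
    "PF3": [6, 5, 1],
    "CT1": [5, 3, 2],  # cognitive trust items
    "CT2": [4, 3, 4],
})

# Composite "average score" columns like those appended to each dataset:
# the row-wise mean across a scale's items.
df["PF_avg"] = df[["PF1", "PF2", "PF3"]].mean(axis=1)
df["CT_avg"] = df[["CT1", "CT2"]].mean(axis=1)

print(df[["PF_avg", "CT_avg"]])
```

The same row-wise mean pattern extends to the other scales (perceived ability, perceived benevolence, affective trust) by listing their item columns.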
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This survey-based study examines health science scholars’ perceptions of collaborative research behavior and sharing open research data in university settings. A total of 362 health science scholars from U.S. universities participated in an online questionnaire consisting of 59 questions. Descriptive and inferential statistical analyses of the data included frequencies, cross-tabulations, descriptive ratio statistics, and the non-parametric Kruskal-Wallis one-way analysis of variance. Four open-ended questions were also analyzed to provide further insights into the survey findings. The study reveals that health scholars share their data with colleagues within their institution or project, demonstrating a lesser inclination toward open research data sharing practices through institutional repositories and journal supplements. Motivating factors and challenges influencing researchers’ decisions to share their research data were also identified. While scientific and knowledge advancement served as major incentives, health scholars working with human-related data expressed concerns about privacy and confidentiality breaches, which are primary barriers to data sharing. Some participants indicated that requirements and policies also influenced their willingness to share data. Disciplinary variations were observed regarding data-sharing practices through journal supplements, secondary data analysis, and personal communication. Furthermore, significant differences emerged between funded and non-funded scholars, impacting their practices, motivations, and challenges in sharing open research data. Important factors driving health science scholars to share open research data include resources, policy compliance, and requirements. This study contributes valuable insights for policy development by investigating factors that can foster openness and sharing of research data in the health sciences. 
The findings shed light on the complexities and considerations associated with open data-sharing practices, enabling stakeholders to develop effective strategies and frameworks.
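The Kruskal-Wallis one-way analysis of variance used above compares groups by joint ranks rather than raw values. A minimal sketch of the H statistic in numpy (no tie correction, and the three groups are purely illustrative, not the survey data; in practice scipy.stats.kruskal handles ties and p-values):

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction, for illustration)."""
    data = np.concatenate(groups)
    n_total = data.size
    # Rank all observations jointly (average ranks would be needed for ties;
    # the illustrative data below has none).
    order = np.argsort(data)
    ranks = np.empty(n_total)
    ranks[order] = np.arange(1, n_total + 1)
    # Split the joint ranks back into groups and accumulate the rank-sum term.
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + g.size]
        h += r.sum() ** 2 / g.size
        start += g.size
    return 12.0 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

# Three illustrative groups (e.g., scores split by respondent subgroup).
h = kruskal_h(np.array([1.0, 2.0, 3.0]),
              np.array([4.0, 5.0, 6.0]),
              np.array([7.0, 8.0, 9.0]))
print(round(h, 4))  # 7.2
```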
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction

Human milk (HM) contains a multitude of nutritive and nonnutritive bioactive compounds that support infant growth, immunity and development, yet its complex composition remains poorly understood. Integrating diverse scientific disciplines from nutrition and global health to data science, the International Milk Composition (IMiC) Consortium was established to undertake a comprehensive harmonized analysis of HM from low-, middle- and high-resource settings to inform novel strategies for supporting maternal-child nutrition and health.

Methods and analysis

IMiC is a collaboration of HM experts, data scientists and four mother-infant health studies, each contributing a subset of participants: Canada (CHILD Cohort, n = 400), Tanzania (ELICIT Trial, n = 200), Pakistan (VITAL-LW Trial, n = 150), and Burkina Faso (MISAME-3 Trial, n = 290). Altogether, IMiC includes 1,946 HM samples across time-points ranging from birth to 5 months. Using HM-validated assays, we are measuring macronutrients, minerals, B-vitamins, fat-soluble vitamins, HM oligosaccharides, selected bioactive proteins, and untargeted metabolites, proteins, and bacteria. Multi-modal machine learning methods (extreme gradient boosting with late fusion and two-layered cross-validation) will be applied to predict infant growth and identify determinants of HM variation. Feature selection and pathway enrichment analyses will identify key HM components and biological pathways, respectively. While participant data (e.g., maternal characteristics, health, household characteristics) will be harmonized across studies to the extent possible, we will also employ a meta-analytic approach in which HM effects will be estimated separately within each study and then meta-analyzed across studies.

Ethics and dissemination

IMiC was approved by the human research ethics board at the University of Manitoba. Contributing studies were approved by their respective primary institutions and local study centers, with all participants providing informed consent. Aiming to inform maternal, newborn, and infant nutritional recommendations and interventions, results will be disseminated through Open Access platforms, and data will be available for secondary analysis.

Clinical trial registration

ClinicalTrials.gov, identifier NCT05119166.
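The "two-layered cross-validation" planned above nests hyperparameter selection inside an outer performance loop, so that the folds used to estimate prediction accuracy never influence model tuning. A schematic sketch on synthetic data, using closed-form ridge regression purely as a dependency-free stand-in for gradient boosting (the data, grid, and fold counts are all our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one data modality: 60 samples, 5 features,
# a linear signal plus noise.
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.5, size=60)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_indices(n, k):
    """Contiguous k-fold index split (shuffling omitted for brevity)."""
    return np.array_split(np.arange(n), k)

lambdas = [0.01, 0.1, 1.0, 10.0]
outer_mse = []
for test_idx in kfold_indices(len(y), 5):            # outer layer: performance
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    Xtr, ytr = X[train_idx], y[train_idx]
    # Inner layer: choose the regularization strength on the training
    # portion only, so the outer test fold stays untouched.
    inner_scores = []
    for lam in lambdas:
        errs = []
        for val_idx in kfold_indices(len(ytr), 4):
            fit_idx = np.setdiff1d(np.arange(len(ytr)), val_idx)
            w = ridge_fit(Xtr[fit_idx], ytr[fit_idx], lam)
            errs.append(np.mean((Xtr[val_idx] @ w - ytr[val_idx]) ** 2))
        inner_scores.append(np.mean(errs))
    best_lam = lambdas[int(np.argmin(inner_scores))]
    w = ridge_fit(Xtr, ytr, best_lam)
    outer_mse.append(np.mean((X[test_idx] @ w - y[test_idx]) ** 2))

print(np.mean(outer_mse))
```

Late fusion, not shown here, would repeat this per modality and combine the per-modality predictions at the end rather than concatenating features up front.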
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction

Intestinal constipation is a substantive global health concern, significantly impairing patient quality of life. An emerging view is that the gut microbiota plays a critical role in intestinal function, and probiotics could offer therapeutic benefits. This study aims to consolidate evidence from randomized controlled trials (RCTs) that assess the effectiveness of probiotics in modulating microbiota and ameliorating symptoms of constipation.

Methods

We will execute a systematic evidence search across Medline (via PubMed), Embase, Cochrane CENTRAL, Web of Science, Scopus, and CINAHL, employing explicit search terms and further reference exploration. Two independent reviewers will ensure study selection and data integrity while assessing methodological quality via the Cochrane Collaboration’s Risk of Bias 2 tool. Our primary goal is to outline changes in microbiota composition, with secondary outcomes addressing symptom relief and stool characteristics. Meta-analyses will adopt a random-effects model to quantify the effects of interventions, supplemented by subgroup analyses and publication bias assessments to fortify the rigor of our findings.

Discussion

This study endeavors to provide a rigorous, synthesized overview of the evidence on probiotic interventions for modulating gut microbiota in individuals with intestinal constipation. The insights derived could inform clinical guidelines, nurture the creation of novel constipation management strategies, and direct future research in this field.

Ethics and dissemination

As this study aggregates and analyzes existing data without direct human subject involvement, no ethical approval is required. We will disseminate the study’s findings through scientific forums and seek publication in well-regarded, peer-reviewed journals.

Trial registration

OSF registration number: 10.17605/OSF.IO/MEAHT.
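A random-effects meta-analysis, as planned above, weights each trial by the inverse of its within-study variance plus an estimated between-study variance. A minimal sketch using the common DerSimonian-Laird estimator; the effect sizes and variances below are purely illustrative, not trial data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird
    between-study variance (tau^2) estimator."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                       # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity statistic
    dfree = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - dfree) / c)          # truncated at zero
    w_re = 1.0 / (variances + tau2)           # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Illustrative standardized mean differences from three hypothetical RCTs.
pooled, se, tau2 = dersimonian_laird([0.30, 0.55, 0.10], [0.04, 0.09, 0.05])
print(round(pooled, 3), round(se, 3), round(tau2, 3))
```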
The primary objective of the study is to evaluate the long-term safety of recombinant human Factor VIII Fc fusion protein (rFVIIIFc) in participants with hemophilia A. The secondary objective of the study is to evaluate the efficacy of rFVIIIFc in the prevention and treatment of bleeding episodes in participants with hemophilia A.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The file contains the study reference, blood parameter data derived from the studies, and the number of subjects in each group. (XLSX)
If authors failed to differentiate between primary and secondary outcomes in the publication, these non-delineated outcomes were coded as “unable to assess” and excluded from comparison. The clinical trial registry or registration number was obtained from each published RCT, if stated, during full-text review/data extraction. If a registration number was listed in the RCT without a trial registry, a search was made of ClinicalTrials.gov, the International Standard Randomized Controlled Trial Number Register (ISRCTNR), the World Health Organization’s International Clinical Trial Registry Platform (ICTRP), and any country-specific clinical trial registry identified in the publication. The following characteristics were used to match a registered study to its publication: title, author(s), keywords, country of origin, sponsoring organization, description of study intervention, projected sample size, and dates of enrollment. When a publication did not explicitly state information regarding registration of a study, the authors were contacted via email using a standardized email template and asked about registration status. If after 2 days there was no reply, a second email was sent. If there was no reply from the authors 1 week after the second email, the study was considered unregistered and excluded from the study. Each registered study was located within its respective registry, and data were extracted individually by 2 independent investigators (J.R., A.R.). Prior to registry data extraction, both investigators underwent trial registry training, including training videos on how to perform searches and access the history of changes in ClinicalTrials.gov and the WHO trial registry, a tutorial video on locating desired content within a trial registry entry, and access to a list of all WHO-approved trial registries; each investigator also had to successfully complete a sample data extraction from an unrelated study’s registry entry.
The following data were extracted using a standardized form on SRDR: date of trial registration, date range of subject enrollment, original primary registered outcome(s), final primary registered outcome(s), date of initial primary outcome registration, secondary registered outcome(s), sample size if listed, and funding source, if disclosed, using the previously defined categories. Although registration quality was not the focus of this study, registered trials lacking a clearly stated primary outcome and timing of assessment were excluded from consideration. Studies that were found to be registered after the end of subject enrollment were excluded from the study due to the inability to adequately assess outcome reporting bias. To be approved by the WHO, a trial registry must meet ICMJE criteria, including documentation of when changes are made to that particular study’s registry entries. If an included study employed this feature, we recorded both the primary outcome from the time of initial registration and the primary outcome listed in the final version of the registry entry. Departing from the methods of previous authors in this field of research, we did not exclude studies in WHO-approved registries that did not time-stamp the date of initial primary outcome registration. Per the International Standards for Clinical Trial Registries, section 2.4, WHO-approved registries are required to time-stamp registry-approved changes to any registered trial, including data additions, deletions, and revisions. Therefore, if a WHO-approved trial registry did not display a history of changes, we recorded the date the registry application was approved as the date of initial primary outcome registration. Additionally, the listed primary outcome was recorded as both the initial registered and final registered primary outcome. In non-WHO-approved trial registries, if a date of initial primary outcome registration was not listed, the trial was excluded from our study.
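The registry-versus-publication comparison described above can be thought of as a set comparison over normalized outcome labels. A purely hypothetical sketch of such a coding step; the function, category labels, and outcome strings are our own illustration, not this study's actual coding scheme:

```python
def classify_primary_outcomes(registered_primary, published_primary, published_secondary):
    """Hypothetical coding of registry-vs-publication discrepancies for
    primary outcomes (outcome names assumed pre-normalized strings)."""
    reg = set(registered_primary)
    pub_p = set(published_primary)
    pub_s = set(published_secondary)
    report = {}
    for outcome in reg:
        if outcome in pub_p:
            report[outcome] = "reported as primary"
        elif outcome in pub_s:
            report[outcome] = "demoted to secondary"
        else:
            report[outcome] = "omitted from publication"
    # Published primaries absent from the registry entry.
    for outcome in pub_p - reg:
        report[outcome] = "newly introduced as primary"
    return report

# Toy example: one registered primary demoted, one new primary introduced.
report = classify_primary_outcomes(
    registered_primary=["change in weight at 6 months"],
    published_primary=["change in hba1c at 6 months"],
    published_secondary=["change in weight at 6 months"],
)
print(report)
```

Real coding would also need to reconcile timing of assessment and wording differences, which string matching alone cannot settle.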