Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: There is widespread evidence that statistical methods play an important role in original research articles, especially in medical research. The evaluation of statistical methods and reporting in journals suffers from a lack of standardized methods for assessing the use of statistics. The objective of this study was to develop and evaluate an instrument to assess the statistical intensity of research articles in a standardized way.
Methods: A checklist-type measurement scale was developed by selecting and refining items from previous reports on the statistical content of medical journal articles and from published guidelines for statistical reporting. A total of 840 original medical research articles published between 2007 and 2015 in 16 journals were evaluated to test the scoring instrument. The total sum of all items was used to compare statistical intensity between sub-fields and journals. Inter-rater agreement was examined using a random sample of 40 articles, which four raters read and evaluated using the developed instrument.
Results: The scale consisted of 66 items. The total summary score adequately discriminated between research articles according to their study design characteristics. The new instrument could also discriminate between journals according to their statistical intensity. The inter-observer agreement, measured by the ICC, was 0.88 across all four raters. Individual item analysis showed very high agreement between rater pairs; percentage agreement ranged from 91.7% to 95.2%.
Conclusions: A reliable and applicable instrument for evaluating the statistical intensity of research papers was developed. It is a helpful tool for comparing statistical intensity between sub-fields and journals. The novel instrument may be applied in manuscript peer review to identify papers in need of additional statistical review.
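For readers unfamiliar with the agreement statistic quoted in the Results, the sketch below computes a two-way random-effects, absolute-agreement ICC(2,1) from a subjects-by-raters score matrix using the standard ANOVA mean squares (Shrout and Fleiss). The data are simulated stand-ins for the 40 articles and four raters, not the study's actual ratings.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1) (Shrout & Fleiss).

    scores: (n_subjects, k_raters) matrix of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    # ANOVA sums of squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)               # between-subjects mean square
    msc = ss_cols / (k - 1)               # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Simulated stand-in: 40 articles scored by 4 raters (all values hypothetical).
rng = np.random.default_rng(0)
true_intensity = rng.normal(30, 8, size=(40, 1))           # article-level signal
ratings = true_intensity + rng.normal(0, 3, size=(40, 4))  # plus rater noise
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```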
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
STATO is the statistical methods ontology. It contains concepts and properties covering statistical methods, probability distributions, and other aspects of statistical analysis, including their relationships to study designs and plots.
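As one possible way to explore the ontology programmatically (this is not part of the STATO documentation), the sketch below loads a local copy of the OWL file with rdflib and lists some class labels; the filename is an assumption.

```python
from rdflib import Graph, RDF, RDFS
from rdflib.namespace import OWL

g = Graph()
# Assumes a local copy of the ontology downloaded from the STATO project
# page; the filename "stato.owl" is illustrative.
g.parse("stato.owl", format="xml")

# Print a few named classes with their human-readable labels.
for cls in list(g.subjects(RDF.type, OWL.Class))[:10]:
    for label in g.objects(cls, RDFS.label):
        print(cls, "->", label)
```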
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The case-cohort study design combines the advantages of a cohort study with the efficiency of a nested case-control study. However, unlike more standard observational study designs, there are currently no guidelines for reporting results from case-cohort studies. Our aim was to review recent practice in reporting these studies, and develop recommendations for the future. By searching papers published in 24 major medical and epidemiological journals between January 2010 and March 2013 using PubMed, Scopus and Web of Knowledge, we identified 32 papers reporting case-cohort studies. The median subcohort sampling fraction was 4.1% (interquartile range 3.7% to 9.1%). The papers varied in how they described the numbers of individuals in the original cohort and the subcohort, how they presented descriptive data, and in the level of detail provided about the statistical methods used, so it was not always possible to be sure that appropriate analyses had been conducted. Based on the findings of our review, we make recommendations about reporting of the study design, subcohort definition, numbers of participants, descriptive information and statistical methods, which could be used alongside existing STROBE guidelines for reporting observational studies.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Characteristics of baseline covariates and standardized bias before and after propensity score (PS) adjustment using weighting by the odds, in 20% of the total respondents; a cross-sectional study in five cities, China, 2007–2008 (n = 3,179). (PDF)
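As context for the table described above: "weighting by the odds" keeps weight 1 for treated units and weights each control by PS/(1−PS), targeting the treated population (ATT), while "standardized bias" is the between-group covariate difference scaled by the pooled SD. The sketch below is a generic illustration with simulated data; the PS model and covariate are invented, not taken from the study.

```python
import numpy as np

def standardized_bias(x, treated, weights=None):
    """Standardized difference (in %) of covariate x between groups.

    weights: optional per-unit weights (e.g. odds weights for controls).
    """
    if weights is None:
        weights = np.ones_like(x, dtype=float)
    t, c = treated == 1, treated == 0
    mean_t = np.average(x[t], weights=weights[t])
    mean_c = np.average(x[c], weights=weights[c])
    # Pooled SD from the unweighted groups, the usual convention
    sd_pool = np.sqrt((x[t].var(ddof=1) + x[c].var(ddof=1)) / 2)
    return 100 * (mean_t - mean_c) / sd_pool

# Hypothetical data: one covariate, a treatment flag, and fitted PS values.
rng = np.random.default_rng(1)
n = 3179
ps = rng.uniform(0.05, 0.6, n)
treated = rng.binomial(1, ps)
x = 0.8 * ps + rng.normal(0, 0.2, n)   # covariate correlated with PS

# Weighting by the odds: treated keep weight 1, controls get PS/(1-PS) (ATT).
w = np.where(treated == 1, 1.0, ps / (1 - ps))

print(f"before: {standardized_bias(x, treated):+.1f}%")
print(f"after:  {standardized_bias(x, treated, w):+.1f}%")
```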
Scientific investigation is of value only insofar as relevant results are obtained and communicated, a task that requires organizing, evaluating, analysing and unambiguously communicating the significance of data. In this context, working with ecological data, reflecting the complexities and interactions of the natural world, can be a challenge. Recent innovations for statistical analysis of multifaceted interrelated data make obtaining more accurate and meaningful results possible, but key decisions of the analyses to use, and which components to present in a scientific paper or report, may be overwhelming. We offer a 10-step protocol to streamline analysis of data that will enhance understanding of the data, the statistical models and the results, and optimize communication with the reader with respect to both the procedure and the outcomes. The protocol takes the investigator from study design and organization of data (formulating relevant questions, visualizing data collection, data...
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Good-quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used to review and evaluate the articles. The main outcomes considered were study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013, the proportion of erroneous statistical analyses did not decrease (χ2 = 0.592, Φ = 0.027, p = 0.4418): 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2 = 26.96, Φ = 0.16, p
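The first chi-square comparison quoted above can be checked directly from the reported counts (80/320 erroneous analyses in 2003 vs. 111/490 in 2013). A quick sketch using scipy; without a continuity correction it reproduces χ2 ≈ 0.592, p ≈ 0.44, and Φ ≈ 0.027.

```python
from scipy.stats import chi2_contingency

# Erroneous vs. non-erroneous statistical analyses, 2003 vs. 2013,
# using the counts reported in the abstract.
table = [[80, 320 - 80],    # 2003: 80 of 320 erroneous
         [111, 490 - 111]]  # 2013: 111 of 490 erroneous

chi2, p, dof, _ = chi2_contingency(table, correction=False)
phi = (chi2 / 810) ** 0.5   # 810 = total number of analyses
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.4f}, phi = {phi:.3f}")
```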
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/CC2LMA
This dataset represents a group of paper records (a "series") within the Steven Lawrence Gortmaker papers, 1955-1998 (inclusive), 1977-1997 (bulk), which can be accessed on-site at the Center for the History of Medicine at the Francis A. Countway Library of Medicine in Boston, Massachusetts. The series consists of readings, class session records, and course evaluations generated and compiled by Steven Lawrence Gortmaker as a product of teaching two courses in the Harvard School of Public Health Department of Health and Social Behavior: HSB 206 (HIV, Transmission, & Social Behavior); and HSB 230 (Social & Behavioral Research Methods). Class readings include reprints, newspaper clippings, brochures, and pamphlets assigned to students as part of the course curriculum. Class session records consist of: course syllabi; lecture and presentation records; class handouts; student lists and information sheets; and student assignments. Course evaluation records include raw course evaluations completed by students, and related summarized and analyzed data tables. Subjects covered in HSB 206 include various topics related to sexuality, HIV, and AIDS, such as: sexual behavior; drug and alcohol use; sexual abuse; bisexuality and homosexuality; prostitution; and educational and behavioral intervention. The HSB 230 course covered various areas of study design and statistical analysis, including: study implementation and delivery; data collection and processing methods; sample size and statistical significance; statistical models; and regression, among other topics. Some papers are in Spanish. Data and associated records are accessible onsite at the Center for the History of Medicine per the conditions governing access described below. Conditions Governing Access to Original Collection Materials: The series represented by this dataset includes student information that is restricted for 80 years from the date of record creation, and Harvard University records that are restricted for 50 years from the date of record creation. Researchers should contact Public Services for more information. The Steven Lawrence Gortmaker papers were processed with grant funding from the Andrew W. Mellon Foundation, as awarded and administered by the Council on Library and Information Resources (CLIR) in 2016. View the Steven Lawrence Gortmaker Papers finding aid for a full collection inventory of the records, and for more information about accessing and using the collection.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Clustered observational studies (COSs) are a critical analytic tool for educational effectiveness research. We present a design framework for the development and critique of COSs. The framework is built on the counterfactual model for causal inference and promotes the concept of designing COSs that emulate the targeted randomized trial that would have been conducted were it feasible. We emphasize the key role that understanding the assignment mechanism plays in study design. We review methods for statistical adjustment and highlight a recently developed form of matching designed specifically for COSs. We review how regression models can be profitably combined with matching, and note best practices for estimates of statistical uncertainty. Finally, we review how sensitivity analyses can determine whether conclusions are sensitive to bias from potential unobserved confounders. We demonstrate the concepts with an evaluation of a summer school reading intervention in a large U.S. school district.
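As a rough illustration of the matching-plus-regression workflow described above (not the specific COS matching method the article highlights), the sketch below matches treated schools to control schools on a school-level covariate and then fits a regression adjustment on the matched sample. All data and names are hypothetical, and a real analysis would also need cluster-robust standard errors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical clustered data: students nested in schools, a school-level
# summer-reading intervention, and a prior-score covariate.
n_schools, n_per = 20, 30
school = np.repeat(np.arange(n_schools), n_per)
treated_school = rng.binomial(1, 0.5, n_schools)
z = treated_school[school]                              # unit-level treatment flag
prior = rng.normal(0, 1, n_schools * n_per) + 0.3 * z   # confounded covariate
y = 0.25 * z + 0.6 * prior + rng.normal(0, 1, len(z))   # outcome

# 1) Match each treated school to the control school with the closest mean
#    prior score (a crude stand-in for COS matching; controls may be reused).
school_prior = np.array([prior[school == s].mean() for s in range(n_schools)])
t_ids = np.flatnonzero(treated_school == 1)
c_ids = np.flatnonzero(treated_school == 0)
matches = {t: c_ids[np.argmin(np.abs(school_prior[c_ids] - school_prior[t]))]
           for t in t_ids}

# 2) Regression adjustment on the matched sample: y ~ 1 + z + prior.
keep = np.isin(school, list(matches) + list(matches.values()))
X = np.column_stack([np.ones(keep.sum()), z[keep], prior[keep]])
beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
print(f"matched + adjusted effect estimate: {beta[1]:.3f}")
```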
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: A diverse range of study designs (e.g. case-control or cohort) are used in the evaluation of adverse effects. We aimed to ascertain whether the risk estimates from meta-analyses of case-control studies differ from those of other study designs.
Methods: Searches were carried out in 10 databases, in addition to reference checking, contacting experts, and handsearching key journals and conference proceedings. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from case-control studies could be directly compared with the pooled estimate for the same adverse effect arising from other types of observational studies.
Results: We included 82 meta-analyses. Pooled estimates of harm from the different study designs had 95% confidence intervals that overlapped in 78/82 instances (95%). Of the 23 cases of discrepant findings (significant harm identified in the meta-analysis of one type of study design, but not with the other study design), 16 (70%) stemmed from significantly elevated pooled estimates from case-control studies. There was associated evidence of funnel plot asymmetry consistent with higher risk estimates from case-control studies. On average, pooled odds ratios from cohort or cross-sectional studies were 0.94 (95% CI 0.88–1.00) times those from case-control studies.
Interpretation: Empirical evidence from this overview indicates that meta-analyses of case-control studies tend to give slightly higher estimates of harm than meta-analyses of other observational studies. However, it is impossible to rule out potential confounding from differences in drug dose, duration, and populations when comparing between study designs.
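The "ratio of pooled odds ratios" comparison reported above can be reproduced mechanically from published pooled estimates. A minimal sketch, assuming the two pooled ORs are independent (which understates uncertainty when designs share source populations); the input numbers are hypothetical, not from the overview.

```python
import numpy as np

def ratio_of_odds_ratios(or_a, ci_a, or_b, ci_b):
    """Ratio of two pooled ORs with a 95% CI, combining log-scale SEs.

    ci_* are (lower, upper) 95% confidence limits; SEs are recovered
    from the CI width on the log scale.
    """
    log_ror = np.log(or_a) - np.log(or_b)
    se_a = (np.log(ci_a[1]) - np.log(ci_a[0])) / (2 * 1.96)
    se_b = (np.log(ci_b[1]) - np.log(ci_b[0])) / (2 * 1.96)
    se = np.hypot(se_a, se_b)  # independent-estimates assumption
    lo, hi = np.exp(log_ror - 1.96 * se), np.exp(log_ror + 1.96 * se)
    return np.exp(log_ror), (lo, hi)

# Hypothetical pooled estimates for one adverse effect from the two designs.
ror, (lo, hi) = ratio_of_odds_ratios(1.30, (1.10, 1.54), 1.45, (1.20, 1.75))
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```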
https://www.marketreportanalytics.com/privacy-policy
The Design of Experiments (DOE) software market is experiencing robust growth, driven by the increasing adoption of data-driven decision-making across various industries. The market's expansion is fueled by the need for efficient experimentation and optimization in areas such as manufacturing, pharmaceuticals, and research and development. Companies are increasingly utilizing DOE software to reduce costs associated with experimentation, accelerate product development cycles, and improve overall product quality. The cloud-based segment is witnessing faster growth compared to on-premise solutions, primarily due to improved accessibility, scalability, and cost-effectiveness. Large enterprises are the primary adopters of DOE software, owing to their larger budgets and greater need for sophisticated data analysis capabilities. However, the SME segment is also showing significant growth potential as awareness and affordability of these solutions increase. Geographic distribution reveals strong market presence in North America and Europe, driven by established industries and early adoption of advanced technologies. The Asia-Pacific region, however, is poised for substantial growth in the coming years due to rapid industrialization and increasing investments in R&D. Competitive rivalry is moderate, with established players and emerging companies coexisting. This competitive landscape fosters innovation and contributes to the market's overall expansion. Factors such as the complexity of DOE software and the need for specialized expertise present challenges to broader market penetration, but these are expected to be mitigated through user-friendly interfaces and increased training opportunities. The forecast period (2025-2033) anticipates continued market expansion, fueled by advancements in machine learning and artificial intelligence integration within DOE software. These advancements are streamlining analysis, automating tasks, and enabling more complex experimental designs. The growing adoption of Industry 4.0 principles further contributes to the demand for DOE software, as companies strive to improve operational efficiency and optimize production processes. Despite potential economic fluctuations, the long-term outlook for the DOE software market remains positive, driven by the ongoing need for efficient and effective experimentation across diverse sectors. The market is expected to see a substantial increase in the adoption of advanced analytics and predictive modelling capabilities integrated into DOE software solutions, providing further impetus for growth.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Objective: To provide practical guidance for the analysis of N-of-1 trials by comparing four commonly used models.
Methods: The four models (paired t-test, mixed effects model of difference, mixed effects model, and meta-analysis of summary data) were compared in a simulation study. Three-cycle and four-cycle N-of-1 trials were simulated with sample sizes of 1, 3, 5, 10, 20 and 30, under a normality assumption. The data were generated from a variance-covariance matrix assuming (i) a compound symmetry or first-order autoregressive structure, and (ii) no carryover effect or a 20% carryover effect. Type I error, power, bias (mean error), and mean square error (MSE) of the effect difference between the two groups were used to evaluate the performance of the four models.
Results: The results from the three-cycle and four-cycle N-of-1 trials were comparable with respect to type I error, power, bias, and MSE. The paired t-test yielded a type I error near the nominal level, higher power, comparable bias, and small MSE, whether or not there was a carryover effect. Compared with the paired t-test, the mixed effects model produced a similar type I error and smaller bias, but lower power and larger MSE. The mixed effects model of difference and the meta-analysis of summary data yielded type I errors far from the nominal level, low power, and large bias and MSE, irrespective of the presence or absence of a carryover effect.
Conclusion: We recommend the paired t-test for normally distributed data from N-of-1 trials because of its optimal statistical performance. In the presence of carryover effects, the mixed effects model can be used as an alternative.
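A minimal simulation in the spirit of the methods described above: it generates N-of-1 trial data with first-order autoregressive within-patient errors and an optional carryover effect, then applies the recommended paired t-test to the cycle-level treatment-control differences. This is a simplified sketch, not the study's simulation code; the fixed A-then-B period order and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulate_nof1(n_patients=5, cycles=3, effect=1.0, rho=0.5, carryover=0.0):
    """Simulate N-of-1 trials with AR(1) within-patient errors.

    Each cycle has one measurement per treatment (assumed order: A then B);
    `carryover` leaks a fraction of the effect into the following B period.
    Returns all cycle-level A-B differences, pooled across patients.
    """
    periods = 2 * cycles
    idx = np.arange(periods)
    cov = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) correlation matrix
    diffs = []
    for _ in range(n_patients):
        e = rng.multivariate_normal(np.zeros(periods), cov)
        a = e[0::2] + effect                # active-treatment periods
        b = e[1::2] + carryover * effect    # control periods after active ones
        diffs.append(a - b)                 # one difference per cycle
    return np.concatenate(diffs)

d = simulate_nof1(n_patients=5, cycles=3, effect=1.0, rho=0.5, carryover=0.2)
t, p = stats.ttest_1samp(d, 0.0)   # paired analysis: test mean difference vs 0
print(f"paired t-test on cycle differences: t = {t:.2f}, p = {p:.4f}")
```

Note that pooling cycle differences across patients ignores between-patient heterogeneity, which is precisely what the mixed effects model in the comparison is designed to handle.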
Tetlin National Wildlife Refuge monitors waterfowl brood production every year. The same eleven clusters of water bodies have been observed each year since 1985. Given the constraints on personnel, time, and budgets, it is imperative that brood production surveys be conducted efficiently in terms of both choice of sampling design and choice of sampling effort. We review the current survey design, identify statistical issues, and recommend potential solutions. Major topics include the lack of a clearly defined target universe and sample frame, measurement issues (focusing on survey timing and within-season mortality), and the minimum sample size required to achieve a desired level of precision in brood production estimates. The study concludes with a series of recommended tasks that the Refuge should undertake to improve the brood survey's efficiency and effectiveness.
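The minimum-sample-size question raised above can be made concrete with a standard precision-based calculation. The sketch below is a generic illustration, not the Refuge's actual method: it sizes a simple random sample so that a z-based confidence interval for mean broods per cluster has a chosen half-width, with a finite population correction for a small frame. The SD, half-width, and frame size are made-up values.

```python
import math

def n_for_halfwidth(sd, halfwidth, N=None, z=1.96):
    """Sample size so a z-based CI for a mean has the desired half-width.

    N, if given, applies the finite population correction, relevant for
    small sampling frames like a fixed set of water-body clusters.
    """
    n0 = (z * sd / halfwidth) ** 2
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)  # finite population correction
    return math.ceil(n0)

# Hypothetical: broods-per-cluster SD of 6, target half-width of 3,
# a frame of 40 candidate clusters.
print(n_for_halfwidth(sd=6, halfwidth=3, N=40))   # -> 12
```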
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Correlation; significant (p
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PRISMA Checklist. (DOC)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PRISMA checklist. (DOC)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ARV, antiretroviral; IQR, interquartile range; MSM, men who have sex with men; ART, antiretroviral therapy; AIDS, acquired immunodeficiency syndrome; VL, viral load. a Unavailable. b
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
File S1 includes Appendix S1, Appendix S2, Appendix S3, Appendix S4. Appendix S1: Search terms used to identify studies of one-year mortality on antiretroviral therapy. Appendix S2: Full citations for studies reviewed. Appendix S3: Illustration of a distribution used to impute CD4 count within bands. Appendix S4: CD4 coefficient (bottom) and model fit (F-statistic, top) for the relationship between one-year mortality on ART and baseline CD4 count using varying assumptions about the amount of mortality among those lost to follow-up. (DOCX)
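The band-based imputation idea in Appendix S3 can be illustrated generically: draw a CD4 value from a distribution truncated to the reported band. The sketch below uses a truncated normal with made-up parameters; the paper's actual distribution and fitted values are in Appendix S3, not reproduced here.

```python
from scipy.stats import truncnorm

def impute_cd4(lo, hi, mu=150.0, sigma=100.0, size=1, seed=None):
    """Draw CD4 counts within a reported band [lo, hi] from a truncated
    normal; mu and sigma are assumed parameters, not the paper's values."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma   # band limits in z-units
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=size, random_state=seed)

# Example: three imputed values for the 50-100 cells/mm3 band.
print(impute_cd4(50, 100, size=3, seed=0))
```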
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
List of the 28 matched indicators and the corresponding questions used in the community questionnaire, in comparison with those used in the DHS and MICS. (DOCX)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results of boilerplate analysis applied to the ANZCTR dataset.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Logistic regression results for study characteristics associated with missing statistical methods sections in ANZCTR.