100+ datasets found
  1. Most used tools in market research in the U.S. 2017-2018

    • statista.com
    Cite
    Statista, Most used tools in market research in the U.S. 2017-2018 [Dataset]. https://www.statista.com/statistics/917601/market-research-industry-us-most-used-tools/
    Explore at:
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Jan 21, 2018 - Jan 29, 2018
    Area covered
    United States
    Description

    This statistic displays the market research tools most used by professionals in the market research industry in the United States in 2017 and 2018. During the 2018 survey, ** percent of respondents stated they used Microsoft Excel, compared to ** percent in the 2017 survey.

  2. Collection of example datasets used for the book - R Programming -...

    • figshare.com
    txt
    Updated Dec 4, 2023
    Cite
    Kingsley Okoye; Samira Hosseini (2023). Collection of example datasets used for the book - R Programming - Statistical Data Analysis in Research [Dataset]. http://doi.org/10.6084/m9.figshare.24728073.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Dec 4, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kingsley Okoye; Samira Hosseini
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This book is written for statisticians, data analysts, programmers, researchers, teachers, students, professionals, and general consumers on how to perform different types of statistical data analysis for research purposes using the R programming language. R is an open-source software and object-oriented programming language with a development environment (IDE) called RStudio for computing statistics and graphical displays through data manipulation, modelling, and calculation. R packages and supported libraries provide a wide range of functions for programming and analyzing data. Unlike many existing statistical software packages, R has the added benefit of allowing users to write more efficient code by using command-line scripting and vectors. It has several built-in functions and libraries that are extensible and allow users to define their own (customized) functions for how they expect the program to behave while handling the data, which can also be stored in the simple object system. For all intents and purposes, this book serves as both a textbook and a manual for R statistics, particularly in academic research, data analytics, and computer programming, targeted to help inform and guide the work of R users and statisticians. It provides information about different types of statistical data analysis and methods, and the best scenarios for using each in R. It gives a hands-on, step-by-step practical guide on how to identify and conduct the different parametric and non-parametric procedures. This includes a description of the different conditions or assumptions that are necessary for performing the various statistical methods or tests, and how to understand the results of the methods. The book also covers the different data formats and sources, and how to test for reliability and validity of the available datasets. Different research experiments, case scenarios and examples are explained in this book. It is the first book to provide a comprehensive description and step-by-step practical hands-on guide to carrying out the different types of statistical analysis in R, particularly for research purposes, with examples: from how to import and store datasets in R as objects, how to code and call the methods or functions for manipulating the datasets or objects, factorization, and vectorization, to better reasoning, interpretation, and storage of the results for future use, and graphical visualizations and representations. Thus, it represents the congruence of statistics and computer programming for research.
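
    The description above centers on R's vectorised operations and user-defined functions. Since this listing contains no code of its own, here is a minimal Python sketch of the same loop-versus-vector idea (an illustration, not material from the book):

```python
import numpy as np

# Invented sample values; the point is the style contrast, not the data.
values = np.array([12.0, 15.5, 9.8, 20.1, 7.4])
mean, sd = values.mean(), values.std(ddof=1)

# Loop version: standardise one element at a time.
standardized_loop = []
for v in values:
    standardized_loop.append((v - mean) / sd)

# Vectorised version: the same standardisation as a single expression.
standardized_vec = (values - mean) / sd

# A small user-defined function, analogous to defining custom functions in R.
def standardize(x):
    """Return z-scores of a 1-D array using the sample standard deviation."""
    return (x - x.mean()) / x.std(ddof=1)

assert np.allclose(standardized_loop, standardized_vec)
assert np.allclose(standardized_vec, standardize(values))
```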

  3. Global Statistical Analysis Software Market Size By Deployment Model, By...

    • verifiedmarketresearch.com
    Updated Mar 7, 2024
    Cite
    VERIFIED MARKET RESEARCH (2024). Global Statistical Analysis Software Market Size By Deployment Model, By Application, By Component, By Geographic Scope And Forecast [Dataset]. https://www.verifiedmarketresearch.com/product/statistical-analysis-software-market/
    Explore at:
    Dataset updated
    Mar 7, 2024
    Dataset provided by
    Verified Market Research (https://www.verifiedmarketresearch.com/)
    Authors
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2024 - 2030
    Area covered
    Global
    Description

    The Statistical Analysis Software Market was valued at USD 7,963.44 million in 2023 and is projected to reach USD 13,023.63 million by 2030, growing at a CAGR of 7.28% during the forecast period 2024-2030.
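
    As a quick arithmetic check (not part of the report), the quoted CAGR can be recovered from the two market-size figures, assuming seven compounding years between the 2023 valuation and the 2030 forecast:

```python
# Recompute the implied CAGR from the two quoted market sizes (USD million).
start, end, years = 7963.44, 13023.63, 7
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")   # ~7.28%, matching the report
```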

    Global Statistical Analysis Software Market Drivers

    The market drivers for the Statistical Analysis Software Market can be influenced by various factors. These may include:

    Growing Data Complexity and Volume: The demand for sophisticated statistical analysis tools has been fueled by the exponential rise in data volume and complexity across a range of industries. Robust software solutions are necessary for organizations to evaluate and extract significant insights from huge datasets.

    Growing Adoption of Data-Driven Decision-Making: Businesses are adopting a data-driven approach to decision-making at a faster rate. Utilizing statistical analysis tools, companies can extract meaningful insights from data to improve operational effectiveness and strategic planning.

    Developments in Analytics and Machine Learning: As these fields continue to progress, statistical analysis software is now capable of more. These tools' increasing popularity can be attributed to features like sophisticated modeling and predictive analytics.

    Greater Emphasis on Business Intelligence: Analytics and business intelligence are now essential components of corporate strategy. Statistical analysis software is essential for providing business intelligence tools that study trends, patterns, and performance measures.

    Increasing Need in Life Sciences and Healthcare: Large volumes of data are produced by the life sciences and healthcare sectors, necessitating complex statistical analysis. The need for data-driven insights in clinical trials, medical research, and healthcare administration is driving the market for statistical analysis software.

    Growth of Retail and E-Commerce: The retail and e-commerce industries use statistical analytic tools for inventory optimization, demand forecasting, and customer behavior analysis. The need for analytics tools is fueled in part by the expansion of online retail and data-driven marketing techniques.

    Government Regulations and Initiatives: Statistical analysis is frequently required for regulatory reporting and compliance with government initiatives, particularly in the healthcare and finance sectors. This drives statistical analysis software uptake in these regulated industries.

    Emergence of Big Data Analytics: As big data analytics has grown in popularity, there has been a demand for advanced tools that can handle and analyze enormous datasets effectively. Software for statistical analysis is essential for deriving valuable conclusions from large amounts of data.

    Demand for Real-Time Analytics: In order to make deft judgments quickly, there is a growing need for real-time analytics. Many different businesses have a significant demand for statistical analysis software that provides real-time data processing and analysis capabilities.

    Growing Awareness and Education: As more people become aware of the advantages of using statistical analysis in decision-making, its use has expanded across a range of academic and research institutions. The market for statistical analysis software is influenced by the academic sector.

    Trends in Remote Work: As more people around the world work from home, they are depending more on digital tools and analytics to collaborate and make decisions. Software for statistical analysis makes it possible for distant teams to efficiently examine data and exchange findings.

  4. Data from: Statistical Process Control as a Tool for Quality Improvement A...

    • figshare.com
    docx
    Updated Feb 23, 2023
    Cite
    Canberk Elmalı; Özge Ural (2023). Statistical Process Control as a Tool for Quality Improvement A Case Study in Denim Pant Production [Dataset]. http://doi.org/10.6084/m9.figshare.22147508.v2
    Explore at:
    Available download formats: docx
    Dataset updated
    Feb 23, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Canberk Elmalı; Özge Ural
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this paper, the concept of Statistical Process Control (SPC) tools is thoroughly examined and the definitions of quality control concepts are presented. It is anticipated that this study will contribute to the literature as an exemplary application demonstrating the role of SPC tools in quality improvement during the evaluation and decision-making phase.

    The aim of this study is to investigate applications of quality control, to clarify statistical control methods and problem-solving procedures, to generate proposals for problem-solving approaches, and to disseminate improvement studies in the ready-to-wear industry. Using the basic Statistical Process Control tools, the most repetitive faults were detected and divided into sub-headings for more detailed analysis. In this way, the repetition of faults was prevented by tracing each detected fault down to its root causes. With this different perspective, the study is expected to contribute to other fields as well.

    We give consent for the publication of identifiable details, which can include photograph(s) and case history and details within the text (“Material”), to be published in the Journal of Quality Technology. We confirm that we have seen and been given the opportunity to read both the Material and the Article (as attached) to be published by Taylor & Francis.
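
    The abstract credits basic SPC tools with isolating repetitive faults. As a generic illustration of the technique (hypothetical numbers, not the study's denim data), a p-chart flags lots whose defect proportion drifts outside 3-sigma control limits:

```python
import numpy as np

# Hypothetical per-lot defect counts and lot size; not the paper's data.
defects = np.array([18, 22, 15, 30, 19, 25, 17, 21])
lot_size = 500

p = defects / lot_size                      # per-lot defect proportions
p_bar = p.mean()                            # centre line
sigma = np.sqrt(p_bar * (1 - p_bar) / lot_size)
ucl = p_bar + 3 * sigma                     # upper control limit
lcl = max(p_bar - 3 * sigma, 0.0)           # lower control limit (floored at 0)

for i, pi in enumerate(p, start=1):
    flag = "ok" if lcl <= pi <= ucl else "OUT OF CONTROL"
    print(f"lot {i}: p={pi:.3f} ({flag})")
print(f"centre={p_bar:.3f}, LCL={lcl:.3f}, UCL={ucl:.3f}")
```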

  5. An instrument to assess the statistical intensity of medical research papers...

    • plos.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    Pentti Nieminen; Jorma I. Virtanen; Hannu Vähänikkilä (2023). An instrument to assess the statistical intensity of medical research papers [Dataset]. http://doi.org/10.1371/journal.pone.0186882
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Pentti Nieminen; Jorma I. Virtanen; Hannu Vähänikkilä
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: There is widespread evidence that statistical methods play an important role in original research articles, especially in medical research. The evaluation of statistical methods and reporting in journals suffers from a lack of standardized methods for assessing the use of statistics. The objective of this study was to develop and evaluate an instrument to assess the statistical intensity in research articles in a standardized way.

    Methods: A checklist-type measure scale was developed by selecting and refining items from previous reports about the statistical contents of medical journal articles and from published guidelines for statistical reporting. A total of 840 original medical research articles that were published between 2007 and 2015 in 16 journals were evaluated to test the scoring instrument. The total sum of all items was used to assess the intensity between sub-fields and journals. Inter-rater agreement was examined using a random sample of 40 articles. Four raters read and evaluated the selected articles using the developed instrument.

    Results: The scale consisted of 66 items. The total summary score adequately discriminated between research articles according to their study design characteristics. The new instrument could also discriminate between journals according to their statistical intensity. The inter-observer agreement measured by the ICC was 0.88 between all four raters. Individual item analysis showed very high agreement between the rater pairs; the percentage agreement ranged from 91.7% to 95.2%.

    Conclusions: A reliable and applicable instrument for evaluating the statistical intensity in research papers was developed. It is a helpful tool for comparing the statistical intensity between sub-fields and journals. The novel instrument may be applied in manuscript peer review to identify papers in need of additional statistical review.
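
    The per-item agreement figures reported above (91.7% to 95.2% between rater pairs) follow a simple computation; here is a minimal sketch with invented binary item scores for four raters, not the study's data:

```python
from itertools import combinations

# Invented 0/1 item scores for four raters across six articles.
ratings = {
    "r1": [1, 0, 1, 1, 0, 1],
    "r2": [1, 0, 1, 0, 0, 1],
    "r3": [1, 1, 1, 1, 0, 1],
    "r4": [1, 0, 1, 1, 0, 1],
}

# Percentage agreement for each rater pair: share of items scored identically.
for a, b in combinations(ratings, 2):
    matches = sum(x == y for x, y in zip(ratings[a], ratings[b]))
    pct = 100 * matches / len(ratings[a])
    print(f"{a} vs {b}: {pct:.1f}% agreement")
```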

  6. Statistical Methods for Missing Data in Large Observational Studies [Methods...

    • icpsr.umich.edu
    Updated Oct 27, 2025
    Cite
    Long, Qi (2025). Statistical Methods for Missing Data in Large Observational Studies [Methods Study], Georgia, 2013-2018 [Dataset]. http://doi.org/10.3886/ICPSR39526.v1
    Explore at:
    Dataset updated
    Oct 27, 2025
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Long, Qi
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/39526/terms

    Time period covered
    2013 - 2018
    Area covered
    United States, Georgia
    Description

    Health registries record data about patients with a specific health problem. These data may include age, weight, blood pressure, health problems, medical test results, and treatments received. But data in some patient records may be missing. For example, some patients may not report their weight or all of their health problems. Research studies can use data from health registries to learn how well treatments work. But missing data can lead to incorrect results. To address the problem, researchers often exclude patient records with missing data from their studies. But doing this can also lead to incorrect results. The fewer records that researchers use, the greater the chance for incorrect results. Missing data also lead to another problem: it is harder for researchers to find patient traits that could affect diagnosis and treatment. For example, patients who are overweight may get heart disease. But if data are missing, it is hard for researchers to be sure that trait could affect diagnosis and treatment of heart disease. In this study, the research team developed new statistical methods to fill in missing data in large studies. The team also developed methods to use when data are missing to help find patient traits that could affect diagnosis and treatment. To access the methods, software, and R package, please visit the Long Research Group website.
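
    The study's actual methods live in the authors' software and R package; as a generic stand-in, this sketch shows the baseline problem the abstract describes: complete-case analysis discards records, while even naive mean imputation retains them (invented registry-style data):

```python
import numpy as np
import pandas as pd

# Invented registry-style records with scattered missing values.
records = pd.DataFrame({
    "age":    [54, 61, np.nan, 47, 70],
    "weight": [82, np.nan, 95, np.nan, 77],
    "sbp":    [130, 145, 150, 128, np.nan],
})

complete_cases = records.dropna()      # keeps only 1 of 5 records here
imputed = records.fillna(records.mean(numeric_only=True))  # naive mean imputation

print(f"complete-case rows kept: {len(complete_cases)} of {len(records)}")
print(imputed)
```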

  7. Most used quantitative methods in the market research industry worldwide...

    • statista.com
    Cite
    Statista, Most used quantitative methods in the market research industry worldwide 2022 [Dataset]. https://www.statista.com/statistics/875970/market-research-industry-use-of-traditional-quantitative-methods/
    Explore at:
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2022
    Area covered
    Worldwide
    Description

    In 2022, online surveys were by far the most used traditional quantitative methodology in the market research industry worldwide. During the survey, 85 percent of respondents stated that they regularly used online surveys as one of their three most used methods. Moreover, nine percent of respondents stated that they used online surveys only occasionally.

  8. Replication Data for: Navigating the Range of Statistical Tools for...

    • dataverse.harvard.edu
    Updated Apr 23, 2017
    Cite
    Skyler Cranmer; Philip Leifeld; Scott McClurg; Meredith Rolfe (2017). Replication Data for: Navigating the Range of Statistical Tools for Inferential Network Analysis [Dataset]. http://doi.org/10.7910/DVN/2XP8YF
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Apr 23, 2017
    Dataset provided by
    Harvard Dataverse
    Authors
    Skyler Cranmer; Philip Leifeld; Scott McClurg; Meredith Rolfe
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.7910/DVN/2XP8YF

    Description

    The last decade has seen substantial advances in statistical techniques for the analysis of network data, and a major increase in the frequency with which these tools are used. These techniques are designed to accomplish the same broad goal, statistically valid inference in the presence of highly interdependent relationships, but important differences remain between them. We review three approaches commonly used for inferential network analysis---the Quadratic Assignment Procedure, Exponential Random Graph Model, and Latent Space Network Model---highlighting the strengths and weaknesses of the techniques relative to one another. An illustrative example using climate change policy network data shows that all three network models outperform standard logit estimates on multiple criteria. This paper introduces political scientists to a class of network techniques beyond simple descriptive measures of network structure, and helps researchers choose which model to use in their own research.
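
    Of the three techniques the paper reviews, the Quadratic Assignment Procedure is the simplest to sketch: correlate the off-diagonal ties of two networks, then build the null distribution by jointly permuting rows and columns of one matrix. The data below are random stand-ins, not the climate policy networks:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = rng.integers(0, 2, size=(n, n))      # stand-in predictor network
Y = rng.integers(0, 2, size=(n, n))      # stand-in outcome network

def offdiag_corr(A, B):
    mask = ~np.eye(len(A), dtype=bool)   # ignore self-ties on the diagonal
    return np.corrcoef(A[mask], B[mask])[0, 1]

observed = offdiag_corr(X, Y)
null = [offdiag_corr(X[np.ix_(p, p)], Y)   # relabel rows & columns together
        for p in (rng.permutation(n) for _ in range(1000))]

p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed r={observed:.3f}, QAP p={p_value:.3f}")
```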

  9. Australian and New Zealand journal of statistics Impact Factor 2024-2025 -...

    • researchhelpdesk.org
    Updated Feb 23, 2022
    Cite
    Research Help Desk (2022). Australian and New Zealand journal of statistics Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/211/australian-and-new-zealand-journal-of-statistics
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Australian and New Zealand journal of statistics Impact Factor 2024-2025 - ResearchHelpDesk - The Australian & New Zealand Journal of Statistics is an international journal managed jointly by the Statistical Society of Australia and the New Zealand Statistical Association. Its purpose is to report significant and novel contributions in statistics, ranging across articles on statistical theory, methodology, applications and computing. The journal has a particular focus on statistical techniques that can be readily applied to real-world problems, and on application papers with an Australasian emphasis. Outstanding articles submitted to the journal may be selected as Discussion Papers, to be read at a meeting of either the Statistical Society of Australia or the New Zealand Statistical Association. The main body of the journal is divided into three sections. The Theory and Methods Section publishes papers containing original contributions to the theory and methodology of statistics, econometrics and probability, and seeks papers motivated by a real problem and which demonstrate the proposed theory or methodology in that situation. There is a strong preference for papers motivated by, and illustrated with, real data. The Applications Section publishes papers demonstrating applications of statistical techniques to problems faced by users of statistics in the sciences, government and industry. A particular focus is the application of newly developed statistical methodology to real data and the demonstration of better use of established statistical methodology in an area of application. It seeks to aid teachers of statistics by placing statistical methods in context. The Statistical Computing Section publishes papers containing new algorithms, code snippets, or software descriptions (for open source software only) which enhance the field through the application of computing. Preference is given to papers featuring publicly available code and/or data, and to those motivated by statistical methods for practical problems. In addition, suitable review papers and articles of historical and general interest will be considered. The journal also publishes book reviews on a regular basis.

    Abstracting and Indexing Information: Academic Search (EBSCO Publishing); Academic Search Alumni Edition (EBSCO Publishing); Academic Search Elite (EBSCO Publishing); Academic Search Premier (EBSCO Publishing); CompuMath Citation Index (Clarivate Analytics); Current Index to Statistics (ASA/IMS); Journal Citation Reports/Science Edition (Clarivate Analytics); Mathematical Reviews/MathSciNet/Current Mathematical Publications (AMS); RePEc: Research Papers in Economics; Science Citation Index Expanded (Clarivate Analytics); SCOPUS (Elsevier); Statistical Theory & Method Abstracts (Zentralblatt MATH); ZBMATH (Zentralblatt MATH)

  10. Use of AI tools to research purchases worldwide 2024, by age

    • statista.com
    Updated May 3, 2024
    Cite
    Statista (2024). Use of AI tools to research purchases worldwide 2024, by age [Dataset]. https://www.statista.com/statistics/1468643/ai-tools-research-purchases-by-age/
    Explore at:
    Dataset updated
    May 3, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Mar 2024
    Area covered
    Worldwide
    Description

    According to a 2024 survey, younger age groups are more likely to use AI tools like ChatGPT to research purchases. The age group between 18 and 24 was the most likely to do so, at around ****. The 25 to 34-year-old age group ranked second, with roughly ** percent having used AI tools to research purchases. These AI-powered tools were least likely to be used by the 55 to 64 age group.

  11. Software tools used for data collection and analysis.

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    John A. Borghi; Ana E. Van Gulick (2023). Software tools used for data collection and analysis. [Dataset]. http://doi.org/10.1371/journal.pone.0252047.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    John A. Borghi; Ana E. Van Gulick
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Software tools used to collect and analyze data. Parentheses for analysis software indicate the tools participants were taught to use as part of their education in research methods and statistics. “Other” responses for data collection software largely consisted of survey tools (e.g. Survey Monkey, LimeSurvey) and tools for building and running behavioral experiments (e.g. Gorilla, JsPsych). “Other” responses for data analysis software largely consisted of neuroimaging-related tools (e.g. SPM, AFNI).

  12. Australian and New Zealand journal of statistics Acceptance Rate -...

    • researchhelpdesk.org
    Updated Mar 23, 2022
    Cite
    Research Help Desk (2022). Australian and New Zealand journal of statistics Acceptance Rate - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/acceptance-rate/211/australian-and-new-zealand-journal-of-statistics
    Explore at:
    Dataset updated
    Mar 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Australian and New Zealand journal of statistics Acceptance Rate - ResearchHelpDesk - The Australian & New Zealand Journal of Statistics is an international journal managed jointly by the Statistical Society of Australia and the New Zealand Statistical Association. Its purpose is to report significant and novel contributions in statistics, ranging across articles on statistical theory, methodology, applications and computing. The journal has a particular focus on statistical techniques that can be readily applied to real-world problems, and on application papers with an Australasian emphasis. Outstanding articles submitted to the journal may be selected as Discussion Papers, to be read at a meeting of either the Statistical Society of Australia or the New Zealand Statistical Association. The main body of the journal is divided into three sections. The Theory and Methods Section publishes papers containing original contributions to the theory and methodology of statistics, econometrics and probability, and seeks papers motivated by a real problem and which demonstrate the proposed theory or methodology in that situation. There is a strong preference for papers motivated by, and illustrated with, real data. The Applications Section publishes papers demonstrating applications of statistical techniques to problems faced by users of statistics in the sciences, government and industry. A particular focus is the application of newly developed statistical methodology to real data and the demonstration of better use of established statistical methodology in an area of application. It seeks to aid teachers of statistics by placing statistical methods in context. The Statistical Computing Section publishes papers containing new algorithms, code snippets, or software descriptions (for open source software only) which enhance the field through the application of computing. Preference is given to papers featuring publicly available code and/or data, and to those motivated by statistical methods for practical problems. In addition, suitable review papers and articles of historical and general interest will be considered. The journal also publishes book reviews on a regular basis.

    Abstracting and Indexing Information: Academic Search (EBSCO Publishing); Academic Search Alumni Edition (EBSCO Publishing); Academic Search Elite (EBSCO Publishing); Academic Search Premier (EBSCO Publishing); CompuMath Citation Index (Clarivate Analytics); Current Index to Statistics (ASA/IMS); Journal Citation Reports/Science Edition (Clarivate Analytics); Mathematical Reviews/MathSciNet/Current Mathematical Publications (AMS); RePEc: Research Papers in Economics; Science Citation Index Expanded (Clarivate Analytics); SCOPUS (Elsevier); Statistical Theory & Method Abstracts (Zentralblatt MATH); ZBMATH (Zentralblatt MATH)

  13. Data from: Univariate and multivariate statistical tools for in vitro...

    • datasetcatalog.nlm.nih.gov
    • scielo.figshare.com
    Updated Mar 24, 2021
    Cite
    Souza, Fernanda Vidigal Duarte; da Silva Ledo, Carlos Alberto; dos Santos Soares Filho, Walter; de Carvalho, Mariane de Jesus da Silva; Santos, Emanuela Barbosa; da Silva Souza, Antônio (2021). Univariate and multivariate statistical tools for in vitro conservation of citrus genotypes [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000803989
    Explore at:
    Dataset updated
    Mar 24, 2021
    Authors
    Souza, Fernanda Vidigal Duarte; da Silva Ledo, Carlos Alberto; dos Santos Soares Filho, Walter; de Carvalho, Mariane de Jesus da Silva; Santos, Emanuela Barbosa; da Silva Souza, Antônio
    Description

    ABSTRACT. This study aimed to evaluate the influence of the growing environment on the in vitro conservation of citrus genotypes obtained from the Active Citrus Germplasm Bank of Embrapa Cassava and Fruit. The study used univariate and multivariate statistical tools in order to improve the efficiency of the analysis of the results. Approximately 1-cm-long microcuttings from plantlets derived from ten genotypes previously cultured in vitro were inoculated in test tubes containing 20 mL of WPM culture medium supplemented with 25 g L-1 sucrose, solidified with 7 g L-1 agar and adjusted to a pH of 5.8, and maintained under three environmental conditions for 180 days. The experiment was carried out in a completely randomized design in a split-plot in space, with 15 replications. The results indicate that principal component analysis is an effective tool for studying the behavior of different genotypes conserved under different in vitro growing conditions. The growing conditions of 22±1°C, a light intensity of 10 μmol m-2 s-1 and a 12-hour photoperiod were the most adequate for reducing the growth of in vitro conserved plants, increasing the subculture time interval while keeping the plants healthy.
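
    The abstract credits principal component analysis with separating genotype behaviour across environments. As a bare-bones illustration (random stand-in measurements, not the citrus data), PCA reduces a standardised trait matrix via the SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
traits = rng.normal(size=(30, 5))   # stand-in matrix: rows = plants, cols = traits

# Standardise, then take the SVD; rows of Vt are the principal axes.
Z = (traits - traits.mean(axis=0)) / traits.std(axis=0, ddof=1)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S                      # sample coordinates on the components
explained = S**2 / np.sum(S**2)     # variance share per component

print("variance explained by first two PCs:", explained[:2].round(3))
```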

  14. Replication package for "Evolution of statistical analysis in ESE research"

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    de Oliveira Neto, Francisco Gomes; Torkar, Richard; Feldt, Robert; Gren, Lucas; Furia, Carlo; Huang, Ziewi (2020). Replication package for "Evolution of statistical analysis in ESE research" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3294507
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Chalmers and the University of Gothenburg
    Authors
    de Oliveira Neto, Francisco Gomes; Torkar, Richard; Feldt, Robert; Gren, Lucas; Furia, Carlo; Huang, Ziewi
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the replication package for the analysis done in the paper "Evolution of statistical analysis in empirical software engineering research: Current state and steps forward" (DOI: https://doi.org/10.1016/j.jss.2019.07.002, preprint: https://arxiv.org/abs/1706.00933).

    The package includes CSV files with data on statistical usage extracted from 5 journals in SE (EMSE, IST, JSS, TOSEM, TSE). The data were extracted from papers published between 2001 and 2015. The package also contains forms, scripts and figures (generated using the scripts) used in the paper.

    The extraction tool mentioned in the paper is available in dockerhub via: https://hub.docker.com/r/robertfeldt/sept
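
    How the package's CSV files are structured is documented in the repository itself; purely as a hedged sketch, per-paper records of this kind might be summarised as below, where the column names are assumptions for illustration, not the package's actual schema:

```python
import pandas as pd

# Invented per-paper records; column names are assumed, not the package's schema.
usage = pd.DataFrame({
    "journal": ["TSE", "TSE", "EMSE", "JSS", "EMSE", "JSS"],
    "year":    [2001, 2015, 2008, 2008, 2015, 2001],
    "uses_hypothesis_test": [0, 1, 1, 0, 1, 0],
})

share = (
    usage.groupby(["journal", "year"])["uses_hypothesis_test"]
         .mean()                 # share of papers using hypothesis tests
         .unstack("year")        # journals as rows, years as columns
)
print(share)
```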

  15. Statistical Analysis of Individual Participant Data Meta-Analyses: A...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    tiff
    Updated Jun 8, 2023
    Cite
    Gavin B. Stewart; Douglas G. Altman; Lisa M. Askie; Lelia Duley; Mark C. Simmonds; Lesley A. Stewart (2023). Statistical Analysis of Individual Participant Data Meta-Analyses: A Comparison of Methods and Recommendations for Practice [Dataset]. http://doi.org/10.1371/journal.pone.0046042
    Explore at:
    Available download formats: tiff
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Gavin B. Stewart; Douglas G. Altman; Lisa M. Askie; Lelia Duley; Mark C. Simmonds; Lesley A. Stewart
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way.

    Methods and Findings: We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using antiplatelets (relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model.

    Conclusions: For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across-study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials.
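
    The "two-stage" half of the comparison is straightforward to sketch: each trial contributes an effect estimate and variance, pooled by inverse-variance weighting. The numbers below are invented, not the antiplatelet trial data:

```python
import numpy as np

# Invented per-trial log relative risks and their variances.
log_rr = np.array([-0.15, -0.05, -0.20, 0.02])
var    = np.array([0.010, 0.020, 0.015, 0.030])

w = 1 / var                                   # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)       # fixed-effect pooled log RR
se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")
```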

  16. Statistical Software Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Statistical Software Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-statistical-software-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Statistical Software Market Outlook



    The global statistical software market size was estimated to be USD 11.5 billion in 2023 and is projected to reach USD 21.9 billion by 2032, growing at a compound annual growth rate (CAGR) of 7.2% during the forecast period. The increasing demand for data-driven decision-making in various industries acts as a pivotal growth factor. Organizations across the globe are increasingly leveraging statistical software to analyze and interpret complex datasets, thus boosting market expansion. The increasing dependence on big data and the need for detailed analytical tools to make sense of this data deluge are major drivers for the growth of the statistical software market globally.



    One of the primary growth factors of the statistical software market is the escalating need for data analytics in the healthcare industry. With the adoption of electronic health records and other digital health systems, there is a growing need to analyze vast amounts of health data to improve patient outcomes and operational efficiency. Statistical software plays a crucial role in predictive analytics, helping healthcare providers anticipate trends and make informed decisions. Furthermore, the ongoing innovation in healthcare technologies, such as artificial intelligence and machine learning, drives the demand for sophisticated statistical tools capable of handling complex algorithms, thus fueling market growth.



    Moreover, the financial sector is witnessing an increased demand for statistical software due to the necessity of risk management, fraud detection, and regulatory compliance. Financial institutions rely heavily on statistical tools to manage and analyze financial data, assess market trends, and develop strategic plans. The use of statistical software enables financial analysts to perform complex calculations and generate insights that are essential for investment decision-making and financial planning. This growing reliance on statistical tools in finance is expected to significantly contribute to the overall market growth during the forecast period.



    In the education and research sectors, the need for statistical software is booming as institutions and researchers require robust tools to process and analyze research data. Universities and research organizations extensively use statistical software for academic research, enabling them to perform complex data analyses and draw meaningful conclusions. The increasing focus on data-driven research methodologies is encouraging the adoption of statistical tools, further driving the market. This trend is especially evident in regions with significant research and academic activities, supporting the upward trajectory of the statistical software market.



    In the realm of education and research, Mathematics Software has emerged as a vital tool for enhancing data analysis capabilities. As educational institutions increasingly incorporate data-driven methodologies into their curricula, the demand for specialized software that can handle complex mathematical computations is on the rise. Mathematics Software provides researchers and educators with the ability to model, simulate, and analyze data with precision, facilitating deeper insights and fostering innovation. This trend is particularly significant in fields such as engineering, physics, and economics, where mathematical modeling is essential. The integration of Mathematics Software into academic settings not only supports advanced research but also equips students with critical analytical skills, preparing them for data-centric careers. As the focus on STEM education intensifies globally, the role of Mathematics Software in academic and research environments is expected to expand, contributing to the growth of the statistical software market.



    The regional outlook for the statistical software market indicates a strong presence in North America, driven by the high adoption rate of advanced technologies and the presence of major market players. The region's strong emphasis on research and development across various sectors further supports the demand for statistical software. Meanwhile, Asia Pacific is expected to exhibit the highest growth rate, attributed to the expanding IT infrastructure and growing digital transformation across industries. The increasing emphasis on data analytics in developing countries will continue to be a significant driving factor in these regions, contributing to the overall growth of the market.



  17. Most popular statistical methods used to assess agreement according to area...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated May 25, 2012
    Cite
    Bulgiba, Awang; Zaki, Rafdzah; Ismail, Noor Azina; Ismail, Roshidi (2012). Most popular statistical methods used to assess agreement according to area of specialty in medicine. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001146931
    Explore at:
    Dataset updated
    May 25, 2012
    Authors
    Bulgiba, Awang; Zaki, Rafdzah; Ismail, Noor Azina; Ismail, Roshidi
    Description

    n = Total number of studies retrieved for each specialty, x = number of studies.

  18. Data Analysis for the Systematic Literature Review of DL4SE

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    Updated Jul 19, 2024
    Cite
    Cody Watson; Nathan Cooper; David Nader; Kevin Moran; Denys Poshyvanyk (2024). Data Analysis for the Systematic Literature Review of DL4SE [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4768586
    Explore at:
    Dataset updated
    Jul 19, 2024
    Dataset provided by
    Washington and Lee University
    College of William and Mary
    Authors
    Cody Watson; Nathan Cooper; David Nader; Kevin Moran; Denys Poshyvanyk
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data Analysis is the process that supports decision-making and informs arguments in empirical studies. Descriptive statistics, Exploratory Data Analysis (EDA), and Confirmatory Data Analysis (CDA) are the approaches that compose Data Analysis (Xia & Gong, 2014). An Exploratory Data Analysis comprises a set of statistical and data mining procedures to describe data. We ran EDA to provide statistical facts and inform conclusions. The mined facts support arguments that inform the Systematic Literature Review of DL4SE.

    The Systematic Literature Review of DL4SE requires formal statistical modeling to refine the answers for the proposed research questions and formulate new hypotheses to be addressed in the future. Hence, we introduce DL4SE-DA, a set of statistical processes and data mining pipelines that uncover hidden relationships among Deep Learning reported literature in Software Engineering. Such hidden relationships are collected and analyzed to illustrate the state-of-the-art of DL techniques employed in the software engineering context.

    Our DL4SE-DA is a simplified version of the classical Knowledge Discovery in Databases, or KDD (Fayyad et al., 1996). The KDD process extracts knowledge from a DL4SE structured database. This structured database was the product of multiple iterations of data gathering and collection from the inspected literature. The KDD process involves five stages:

    Selection. This stage was led by the taxonomy process explained in section xx of the paper. After collecting all the papers and creating the taxonomies, we organize the data into 35 features or attributes that you find in the repository. In fact, we manually engineered features from the DL4SE papers. Some of the features are venue, year published, type of paper, metrics, data-scale, type of tuning, learning algorithm, SE data, and so on.

    Preprocessing. The preprocessing consisted of transforming the features into the correct type (nominal), removing outliers (papers that do not belong in DL4SE), and re-inspecting the papers to extract missing information produced by the normalization process. For instance, we normalize the feature “metrics” into “MRR”, “ROC or AUC”, “BLEU Score”, “Accuracy”, “Precision”, “Recall”, “F1 Measure”, and “Other Metrics”. “Other Metrics” refers to unconventional metrics found during the extraction. Similarly, the same normalization was applied to other features like “SE Data” and “Reproducibility Types”. This separation into more detailed classes contributes to a better understanding and classification of the papers by the data mining tasks or methods.

    Transformation. In this stage, we did not apply any data transformation method except for the clustering analysis. We performed a Principal Component Analysis to reduce 35 features into 2 components for visualization purposes. Furthermore, PCA also allowed us to identify the number of clusters that exhibit the maximum reduction in variance. In other words, it helped us to identify the number of clusters to be used when tuning the explainable models.

    Data Mining. In this stage, we used three distinct data mining tasks: Correlation Analysis, Association Rule Learning, and Clustering. We decided that the goal of the KDD process should be oriented to uncover hidden relationships in the extracted features (Correlations and Association Rules) and to categorize the DL4SE papers for a better segmentation of the state-of-the-art (Clustering). A clear explanation is provided in the subsection “Data Mining Tasks for the SLR of DL4SE”.

    Interpretation/Evaluation. We used the knowledge discovery process to automatically find patterns in our papers that resemble “actionable knowledge”. This actionable knowledge was generated by conducting a reasoning process on the data mining outcomes. This reasoning process produces an argument support analysis (see this link).

    We used RapidMiner as our software tool to conduct the data analysis. The procedures and pipelines were published in our repository.

    Overview of the most meaningful Association Rules. Rectangles are both Premises and Conclusions. An arrow connecting a Premise with a Conclusion implies that given some premise, the conclusion is associated. E.g., Given that an author used Supervised Learning, we can conclude that their approach is irreproducible with a certain Support and Confidence.

    Support = (number of occurrences in which the statement is true) / (total number of statements)
    Confidence = (support of the statement) / (number of occurrences of the premise)
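
    A small worked example of these two definitions, on an invented boolean table of paper attributes (not the SLR data):

```python
import pandas as pd

# Invented paper attributes: uses supervised learning / is irreproducible.
papers = pd.DataFrame({
    "supervised":     [True, True, True, False, True],
    "irreproducible": [True, True, False, False, True],
})

rule = papers["supervised"] & papers["irreproducible"]
support = rule.mean()                               # 3/5 = 0.60
confidence = support / papers["supervised"].mean()  # 0.60 / 0.80 = 0.75

print(f"support={support:.2f}, confidence={confidence:.2f}")
```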

  19. Semiparametric Causal Inference Methods for Adaptive Statistical Learning in...

    • icpsr.umich.edu
    Updated Aug 26, 2025
    Cite
    Hubbard, Alan (2025). Semiparametric Causal Inference Methods for Adaptive Statistical Learning in Trauma Patient-Centered Outcomes Research [Methods Study], 2013-2018 [Dataset]. http://doi.org/10.3886/ICPSR39471.v1
    Explore at:
    Dataset updated
    Aug 26, 2025
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Hubbard, Alan
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/39471/terms

    Time period covered
    2013 - 2018
    Area covered
    United States
    Description

    Electronic health records store a lot of data about a patient. These data often include age, health problems, current medicines, and lab results. Looking at these data may help doctors treating patients after a trauma predict how likely it is that they will respond well to a treatment and survive. This information can help doctors make better treatment decisions. But first, researchers need to figure out how to combine and analyze data to make accurate predictions. In this study, the research team created new statistical methods to combine data from patient records. They used these methods to predict patient health outcomes. Then the team used health record data collected from patients in hospital trauma centers to test their predictions. To access the methods and software, please visit the study's GitHub repositories: origami, varimpact, and opttx.
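
    The linked repositories are the study's real deliverables; as a generic stand-in for the cross-validated prediction-error idea that adaptive statistical learning pipelines of this kind build on, here is a minimal K-fold sketch on invented data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))                              # invented covariates
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100)  # invented outcome

def kfold_mse(X, y, k=5):
    """Average held-out mean squared error of least squares over k folds."""
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
    return np.mean(errs)

print(f"cross-validated MSE: {kfold_mse(X, y):.3f}")
```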

  20. Most used qualitative methods used in the market research industry worldwide...

    • statista.com
    Updated Nov 24, 2025
    Cite
    Statista (2025). Most used qualitative methods used in the market research industry worldwide 2022 [Dataset]. https://www.statista.com/statistics/875985/market-research-industry-use-of-traditional-qualitative-methods/
    Explore at:
    Dataset updated
    Nov 24, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Oct 25, 2022 - Dec 16, 2022
    Area covered
    Worldwide
    Description

    In 2022, ************** were the most used traditional qualitative methodologies in the market research industry worldwide. During the survey, ** percent of respondents stated that they regularly used this method. Data visualization/dashboards ranked second, with ** percent of respondents giving this answer.
