MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
R scripts contain statistical analyses for streamflow and sediment data, including Flow Duration Curves, Double Mass Analysis, Nonlinear Regression Analysis for Suspended Sediment Rating Curves, and Stationarity Tests, along with several plots.
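As a rough illustration of one of these analyses, the following R sketch computes and plots a flow duration curve from a daily streamflow series; the variable name flow_m3s and the synthetic data are hypothetical, not taken from the published scripts.

```r
# Flow duration curve: exceedance probability vs. discharge
# (a minimal sketch; the real scripts' data format is not specified)
set.seed(1)
flow_m3s <- rlnorm(365, meanlog = 2, sdlog = 1)  # hypothetical daily flows

sorted_flow <- sort(flow_m3s, decreasing = TRUE)
n <- length(sorted_flow)
exceedance <- 100 * seq_len(n) / (n + 1)  # Weibull plotting position

plot(exceedance, sorted_flow, type = "l", log = "y",
     xlab = "Percent of time flow equalled or exceeded",
     ylab = "Discharge (m^3/s)",
     main = "Flow Duration Curve")
```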
In 2023, Morningstar Advisor Workstation was by far the most popular data analytics software worldwide. According to a survey carried out between December 2022 and March 2023, the market share of Morningstar Advisor Workstation was ***** percent. It was followed by Riskalyze Elite, with ***** percent, and YCharts, with ***** percent.
This page lists ad-hoc statistics released during the period July - September 2020. These are additional analyses not included in any of the Department for Digital, Culture, Media and Sport’s standard publications.
If you would like any further information, please contact evidence@dcms.gov.uk.
This analysis considers businesses in the DCMS Sectors split by whether they had reported annual turnover above or below £500 million, at one time the threshold for the Coronavirus Business Interruption Loan Scheme (CBILS). Please note the DCMS Sectors totals here exclude the Tourism and Civil Society sectors, for which data is not available or has been excluded for ease of comparability.
The analysis looked at the number of businesses and the total GVA generated for both turnover bands. In 2018, an estimated 112 DCMS Sector businesses had an annual turnover of £500m or more (0.03% of the total DCMS Sector businesses). These businesses generated 35.3% (£73.9bn) of all GVA by the DCMS Sectors.
These trends are broadly similar for the wider non-financial UK business economy, where an estimated 823 businesses had an annual turnover of £500m or more (0.03% of the total) and generated 24.3% (£409.9bn) of all GVA.
The Digital Sector had an estimated 89 businesses (0.04% of all Digital Sector businesses) – the largest number – with turnover of £500m or more; and these businesses generated 41.5% (£61.9bn) of all GVA for the Digital Sector. By comparison, the Creative Industries had an estimated 44 businesses with turnover of £500m or more (0.01% of all Creative Industries businesses), and these businesses generated 23.9% (£26.7bn) of GVA for the Creative Industries sector.
MS Excel Spreadsheet, 42.5 KB
This analysis shows estimates from the ONS Opinions and Lifestyle Survey data module, commissioned by DCMS in February 2020. The Opinions and Lifestyle Survey (OPN) is run by the Office for National Statistics. For more information on the survey, please see the ONS website: https://www.ons.gov.uk/aboutus/whatwedo/paidservices/opinions
DCMS commissioned 19 questions to be included in the February 2020 survey relating to the public’s views on a range of data-related issues, such as trust in different types of organisations when handling personal data, confidence in using data skills at work, and understanding of how data is managed by companies.
The high-level results are included in the accompanying tables. The survey samples adults (16+) across the whole of Great Britain (excluding the Isles of Scilly).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes the following columns:
A. a sequential student ID
B. an ID that defines a random group label and the notation
C. the notation used: User Stories or Use Cases
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject’s exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade (L/M/H), where H is greater than or equal to 80, M is at least 65 and below 80, and L is anything lower
G. the total number of classes in the student’s conceptual model
H. the total number of relationships in the student’s conceptual model
I. the total number of classes in the expert’s conceptual model
J. the total number of relationships in the expert’s conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
P. the researchers’ judgement of how well the student explained the derivation process: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present.
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., “user” instead of “urban planner”);
System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.
All the calculations and information provided in the following sheets
originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection,
including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes in the student model divided by the number of classes in the expert model is calculated, giving the size ratio. We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
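A minimal R sketch of this computation follows; the data frame and its column names (classes_student, classes_expert, group) are hypothetical, since the sheet layout is only described in prose.

```r
# Size ratio per subject: student model classes / expert model classes
df <- data.frame(
  classes_student = c(8, 12, 5, 9, 11, 7),     # hypothetical counts
  classes_expert  = c(10, 10, 10, 12, 12, 12),
  group           = c("UC", "US", "UC", "US", "UC", "US")
)
df$size_ratio <- df$classes_student / df$classes_expert

# Box plots comparing the distribution of the ratio by group
boxplot(size_ratio ~ group, data = df,
        ylab = "Size ratio (student / expert)")
```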
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness, respectively. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the number of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. It is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the number of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
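Both definitions reduce to simple ratios of the tag counts. A short R sketch, with hypothetical counts:

```r
# Tag counts for one subject (hypothetical values)
AL <- 12  # aligned
WR <- 3   # wrongly represented
SO <- 2   # system-oriented
OM <- 4   # omitted

correctness  <- AL / (AL + OM + SO + WR)    # 12 / 21, about 0.57
completeness <- (AL + WR) / (AL + WR + OM)  # 15 / 19, about 0.79
```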
For Sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and moderating variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a “Buffer” is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges’ g) for both completeness and correctness are provided. Hedges’ g was calculated with an online tool (https://www.psychometrica.de/effect_size.html); a sketch of the same calculation appears after the sheet descriptions below. The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation - UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case - SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process was explained - well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
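For reference, Hedges’ g can also be computed directly rather than with the online tool; the following R sketch applies the standard pooled-standard-deviation formula with a small-sample correction, using hypothetical correctness scores for two notation groups.

```r
# Hedges' g for two independent groups (standard formula)
hedges_g <- function(x, y) {
  nx <- length(x); ny <- length(y)
  s_pooled <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  d <- (mean(x) - mean(y)) / s_pooled  # Cohen's d
  j <- 1 - 3 / (4 * (nx + ny) - 9)     # small-sample correction factor
  d * j
}

correct_uc <- c(0.55, 0.60, 0.48, 0.71, 0.63)  # hypothetical scores
correct_us <- c(0.42, 0.50, 0.39, 0.58, 0.45)

t.test(correct_uc, correct_us)    # significance
hedges_g(correct_uc, correct_us)  # effect size
```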
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Crime isn’t a topic most people want to use mental energy to think about. We want to avoid harm, protect our loved ones, and hold on to what we claim is ours. So how do we remain vigilant without digging too deep into the filth that is crime? Data, of course. The focus of our study is to explore possible trends between crime and communities in the city of Calgary. Our purpose is to visualize Calgary criminal behaviour in order to help increase awareness for both citizens and law enforcement. Through the use of our visuals, individuals can make more informed decisions to improve the overall safety of their lives. Some of the main concerns of the study include: how crime rates increase with population, which areas in Calgary have the most crime, and whether crime adheres to time-sensitive patterns.
This dataset was created by Md Farhan Ishrak
The objective of this study was to identify the patterns of juvenile salmonid distribution and relative abundance in relation to habitat correlates. It is the first dataset of its kind because the entire river was snorkeled by one person in multiple years. During two consecutive summers, we completed a census of juvenile salmonids and stream habitat across a stream network. We used the data to test the ability of habitat models to explain the distribution of juvenile coho salmon (Oncorhynchus kisutch), young-of-the-year (age 0) steelhead (Oncorhynchus mykiss), and steelhead parr (≥ age 1) for a network consisting of several different sized streams. Our network-scale models, which included five stream habitat variables, explained 27%, 11%, and 19% of the variation in the density of juvenile coho salmon, age 0 steelhead, and steelhead parr, respectively. We found weak to strong levels of spatial autocorrelation in the model residuals (Moran’s I values ranging from 0.25 to 0.71). Explanatory power of the base habitat models increased substantially, and the level of spatial autocorrelation decreased, with sequential inclusion of variables accounting for stream size, year, stream, and reach location. The models for specific streams underscored the variability that was implied in the network-scale models. Associations between juvenile salmonids and individual habitat variables were rarely linear and ranged from negative to positive, and the variable accounting for location of the habitat within a stream was often more important than any individual habitat variable. The limited success in predicting the summer distribution and density of juvenile coho salmon and steelhead with our network-scale models was apparently related to variation in the strength and shape of fish-habitat associations across and within streams and years. Summary of statistical analysis of the Calawah Riverscape data. NOAA was not involved and did not pay for the collection of this data. This data represents the statistical analysis carried out by Martin Liermann as a NOAA employee.
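To make the spatial autocorrelation check concrete, here is a base-R sketch of Moran’s I computed on model residuals; the inverse-distance weights and the synthetic residuals are assumptions for illustration, not the study’s actual specification.

```r
# Moran's I for model residuals (illustrative only)
morans_i <- function(x, w) {
  n <- length(x)
  z <- x - mean(x)
  (n / sum(w)) * sum(w * outer(z, z)) / sum(z^2)
}

set.seed(42)
coords <- cbind(runif(30), runif(30))  # hypothetical reach locations
resid  <- rnorm(30)                    # hypothetical model residuals

d <- as.matrix(dist(coords))
w <- ifelse(d > 0, 1 / d, 0)           # inverse-distance weights, zero diagonal
morans_i(resid, w)
```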
Privacy policy: https://www.archivemarketresearch.com/privacy-policy
The Statistical Analysis Software market was valued at USD 66,770 million in 2024 and is projected to reach USD 77,756.67 million by 2033, with an expected CAGR of 2.2% during the forecast period.
Privacy policy: https://www.archivemarketresearch.com/privacy-policy
The biostatistics software market is experiencing robust growth, driven by the increasing adoption of data-driven approaches in pharmaceutical research, clinical trials, and academic studies. The market, valued at approximately $2.5 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 12% from 2025 to 2033. This expansion is fueled by several key factors. Firstly, the rising volume of complex biological data necessitates sophisticated software solutions for analysis and interpretation. Secondly, advancements in machine learning and artificial intelligence are enhancing the capabilities of biostatistics software, enabling more accurate and efficient data processing. Thirdly, regulatory pressures demanding robust data analysis in the pharmaceutical and healthcare sectors are boosting demand for validated and compliant biostatistics tools.

The market is segmented by software type (general-purpose versus specialized) and end-user (pharmaceutical companies, academic institutions, and others). Pharmaceutical companies represent a significant portion of the market due to their extensive reliance on clinical trial data analysis. However, the academic and research segments are also exhibiting strong growth due to increased research activities and funding. Geographically, North America and Europe currently dominate the market, but Asia-Pacific is expected to witness substantial growth in the coming years due to increasing healthcare spending and technological advancements in the region.

The competitive landscape is characterized by a mix of established players offering comprehensive suites and specialized niche vendors. While leading players like IBM SPSS Statistics and Minitab enjoy significant market share based on their brand recognition and established user bases, smaller companies specializing in specific statistical methods or user interfaces are gaining traction by catering to niche demands. This competitive dynamic will likely drive innovation and further segmentation within the market, resulting in specialized software offerings tailored to particular research areas and user requirements.

The challenges the market faces include the high cost of software licensing, the need for specialized training for effective utilization, and potential integration complexities with existing data management systems. However, the overall growth trajectory remains positive, driven by the inherent need for sophisticated biostatistical analysis in various sectors.
• Load and view a real-world dataset in RStudio
• Calculate “Measure of Frequency” metrics
• Calculate “Measure of Central Tendency” metrics
• Calculate “Measure of Dispersion” metrics
• Use R’s in-built functions for additional data quality metrics
• Create a custom R function to calculate descriptive statistics on any given dataset
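A compact R sketch of these steps, using the built-in mtcars dataset as a stand-in for the unnamed real-world dataset:

```r
# Using mtcars as a stand-in (the course's actual dataset is not named)
data(mtcars)
x <- mtcars$mpg

# Measure of Frequency
table(cut(x, breaks = 5))        # frequency table over 5 bins

# Measure of Central Tendency
mean(x); median(x)

# Measure of Dispersion
var(x); sd(x); range(x); IQR(x)

# Built-in helpers for additional data quality checks
summary(mtcars)
colSums(is.na(mtcars))           # missing values per column

# Custom function returning descriptive statistics for any numeric vector
describe <- function(v) {
  v <- v[!is.na(v)]
  c(n = length(v), mean = mean(v), median = median(v),
    sd = sd(v), min = min(v), max = max(v), iqr = IQR(v))
}
sapply(mtcars, describe)         # apply to every column of a dataset
```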
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis, whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches, which combine all individual participant data in a single meta-analysis, have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way.
Methods and Findings: We included data from 24 randomised controlled trials evaluating antiplatelet agents for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate the overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using antiplatelets (relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model.
Conclusions: For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across-study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials.
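As an illustration of the second stage of the two-stage approach described above, the following R sketch computes a log relative risk per trial and pools the trials with fixed-effect inverse-variance weights; the event counts are invented, and the actual study fitted considerably richer models.

```r
# Two-stage IPD meta-analysis, stage 2: fixed-effect inverse-variance pooling
# Hypothetical per-trial summaries: events and sample sizes by arm
events_t <- c(12, 30, 8);  n_t <- c(200, 450, 150)  # treatment arms
events_c <- c(18, 35, 12); n_c <- c(210, 440, 160)  # control arms

log_rr <- log((events_t / n_t) / (events_c / n_c))
var_lr <- 1/events_t - 1/n_t + 1/events_c - 1/n_c   # variance of log RR

w      <- 1 / var_lr
pooled <- sum(w * log_rr) / sum(w)
se     <- sqrt(1 / sum(w))

exp(pooled)                        # pooled relative risk
exp(pooled + c(-1.96, 1.96) * se)  # 95% confidence interval
```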
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Public health-related decision-making on policies aimed at controlling the COVID-19 pandemic outbreak depends on complex epidemiological models that are compelled to be robust and use all relevant available data. This data article provides a new combined worldwide COVID-19 dataset obtained from official data sources, with improved systematic measurement errors and a dedicated dashboard for online data visualization and summary. The dataset adds new measures and attributes to the normal attributes of official data sources, such as daily mortality and fatality rates. We used comparative statistical analysis to evaluate the measurement errors of COVID-19 official data collections from the Chinese Center for Disease Control and Prevention (Chinese CDC), the World Health Organization (WHO) and the European Centre for Disease Prevention and Control (ECDC). The data is collected by using text mining techniques and reviewing PDF reports, metadata, and reference data. The combined dataset includes complete spatial data such as country area, international number, Alpha-2 code, Alpha-3 code, latitude, longitude, and some additional attributes such as population.

The improved dataset benefits from major corrections to the referenced data sets and official reports, such as adjustments to the reporting dates (which suffered from a one- to two-day lag), removal of negative values, detection of unreasonable changes to historical data in new reports, and corrections of systematic measurement errors, which had been increasing as the pandemic outbreak spread and more countries contributed data to the official repositories. Additionally, the root mean square error of attributes in the paired comparison of datasets was used to identify the main data problems. The data for China is presented separately and in more detail; it has been extracted from the attached reports available on the main page of the CCDC website.

This dataset is a comprehensive and reliable source of worldwide COVID-19 data that can be used in epidemiological models assessing the magnitude and timeline of confirmed cases, long-term predictions of deaths or hospital utilization, the effects of quarantine, stay-at-home orders and other social distancing measures, or the pandemic’s turning point, as well as in economic and social impact analysis. It can help inform national and local authorities on how to implement an adaptive response approach to re-opening the economy: re-opening schools, alleviating business and social distancing restrictions, designing economic programs, or allowing sports events to resume.
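The paired-comparison error measure mentioned above is straightforward to reproduce. This R sketch computes the RMSE between the same attribute as reported by two sources; the vectors are hypothetical.

```r
# RMSE between two sources reporting the same attribute (hypothetical values)
rmse <- function(a, b) sqrt(mean((a - b)^2, na.rm = TRUE))

cases_who  <- c(100, 120, 150, 180, 230)  # e.g., daily confirmed cases per WHO
cases_ecdc <- c(100, 118, 153, 179, 228)  # the same days as reported by ECDC

rmse(cases_who, cases_ecdc)
```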
The global big data and business analytics (BDA) market was valued at ***** billion U.S. dollars in 2018 and is forecast to grow to ***** billion U.S. dollars by 2021. In 2021, more than half of BDA spending will go towards services. IT services is projected to make up around ** billion U.S. dollars, and business services will account for the remainder.
Big data
High volume, high velocity and high variety: one or more of these characteristics is used to define big data, the kind of data sets that are too large or too complex for traditional data processing applications. Fast-growing mobile data traffic, cloud computing traffic, as well as the rapid development of technologies such as artificial intelligence (AI) and the Internet of Things (IoT) all contribute to the increasing volume and complexity of data sets. For example, connected IoT devices are projected to generate **** ZBs of data in 2025.
Business analytics
Advanced analytics tools, such as predictive analytics and data mining, help to extract value from the data and generate business insights. The size of the business intelligence and analytics software application market is forecast to reach around **** billion U.S. dollars in 2022. Growth in this market is driven by a focus on digital transformation, a demand for data visualization dashboards, and an increased adoption of cloud.
This page lists ad-hoc statistics released during the period October - December 2019. These are additional analyses not included in any of the Department for Digital, Culture, Media and Sport’s standard publications.
If you would like any further information, please contact evidence@culture.gov.uk.
MS Excel Spreadsheet, 42.4 KB
This page lists ad-hoc statistics released during the period July - September 2021. These are additional analyses not included in any of the Department for Digital, Culture, Media and Sport’s standard publications.
If you would like any further information, please contact evidence@dcms.gov.uk.
This analysis provides estimates of data use amongst UK organisations, using the UK Business Data Survey (UKBDS). It accompanies analysis within the consultation on UK data reform. This is an abridged set of specific findings from the UKBDS, a telephone-based quantitative and qualitative study of UK businesses, which seeks to understand the role and importance of personal and non-personal data in UK businesses, domestic and international transfers of data, and the awareness of, and attitudes toward, data protection legislation and policy.
ODS, 32.8 KB
This file is in an OpenDocument format
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive issues such as racial prejudice, corruption, and support for militant groups. List experiments have attracted much attention recently as a potential solution to this measurement problem. Many researchers, however, have used a simple difference-in-means estimator without being able to efficiently examine multivariate relationships between respondents’ characteristics and their answers to sensitive items. Moreover, no systematic means exist to investigate the role of underlying assumptions. We fill these gaps by developing a set of new statistical methods for list experiments. We identify the commonly invoked assumptions, propose new multivariate regression estimators, and develop methods to detect and adjust for potential violations of key assumptions. For empirical illustrations, we analyze list experiments concerning racial prejudice. Open-source software is made available to implement the proposed methodology.
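For context, the simple difference-in-means estimator that the authors build upon looks like this in R. Under the standard no-design-effect and no-liars assumptions, it estimates the proportion of respondents holding the sensitive trait; the response vectors below are hypothetical.

```r
# Difference-in-means estimator for a list experiment (hypothetical data)
# Treatment group counts J control items plus 1 sensitive item; control counts J.
y_treat   <- c(2, 3, 1, 4, 2, 3, 2, 4)  # item counts, treatment group
y_control <- c(2, 2, 1, 3, 2, 2, 1, 3)  # item counts, control group

tau_hat <- mean(y_treat) - mean(y_control)  # estimated sensitive-item proportion
se_hat  <- sqrt(var(y_treat) / length(y_treat) +
                var(y_control) / length(y_control))
c(estimate = tau_hat, std_error = se_hat)
```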
This page lists ad-hoc statistics released during the period October - December 2024. These are additional analyses not currently included in any of the Department for Culture, Media and Sport’s standard publications.
If you would like any further information, please contact evidence@dcms.gov.uk.
This is an ad-hoc release that provides economic estimates for the art and antiques market, covering GVA, employment, and trade in goods.
These statistics for the art and antiques market show that:
GVA was provisionally estimated to be £0.8 billion in 2023.
There were 39,000 filled jobs in 2023.
Exports of goods totalled £3.5 billion and imports of goods totalled £1.3 billion in 2021.
ODS, 10.5 KB
This file is in an OpenDocument format: https://www.gov.uk/guidance/using-open-document-formats-odf-in-your-organisation
The global big data market is forecast to grow to 103 billion U.S. dollars by 2027, more than double its expected market size in 2018. With a share of 45 percent, the software segment would become the largest big data market segment by 2027.
What is big data?
Big data is a term that refers to the kind of data sets that are too large or too complex for traditional data processing applications. It is defined as having one or more of the following characteristics: high volume, high velocity or high variety. Fast-growing mobile data traffic, cloud computing traffic, as well as the rapid development of technologies such as artificial intelligence (AI) and the Internet of Things (IoT) all contribute to the increasing volume and complexity of data sets.
Big data analytics
Advanced analytics tools, such as predictive analytics and data mining, help to extract value from the data and generate new business insights. The global big data and business analytics market was valued at 169 billion U.S. dollars in 2018 and is expected to grow to 274 billion U.S. dollars in 2022. As of November 2018, 45 percent of professionals in the market research industry reportedly used big data analytics as a research method.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This graph shows how the impact factor of ^ is computed. The left axis depicts the number of papers published in years X-1 and X-2, and the right axis displays their citations in year X.
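Assuming the chart follows the standard two-year impact factor definition, which matches the axes described, the quantity computed is:

```latex
\mathrm{IF}_{X} = \frac{C_{X}(P_{X-1}) + C_{X}(P_{X-2})}{N_{X-1} + N_{X-2}}
```

where C_X(P_Y) is the number of citations received in year X by papers published in year Y, and N_Y is the number of papers published in year Y.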
Privacy policy: https://scoop.market.us/privacy-policy
By leveraging statistics, you can remain competitive. Data can be used to discover what others are doing, which makes it feasible to stay ahead of the competition. Using statistical data, you can prioritize your actions. To carry out a cross-marketing strategy, it is vital to compare the performance of various platforms.
When you have statistical support, it is easier to make effective decisions. Using digital marketing analytics can provide confidence in knowing what works, and it reduces the time you spend strategizing. With the time saved, you can accomplish other critical activities such as SEO or auditing.