License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This Excel file performs a statistical test of whether two ROC curves differ from each other, based on the area under the curve (AUC). You'll need the coefficient from the table presented in the following article to enter the correct value for the AUC comparison: Hanley JA, McNeil BJ (1983) A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology 148:839-843.
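For readers who want to script this outside Excel, the underlying computation is the Hanley-McNeil z-test for correlated AUCs. Below is a minimal Python sketch, assuming the standard-error formula from Hanley & McNeil (1982) and taking the correlation coefficient r from the table in the 1983 article; the numbers in the example are illustrative only.

```python
import math

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Standard error of a single AUC (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    return math.sqrt((auc * (1 - auc)
                      + (n_pos - 1) * (q1 - auc**2)
                      + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg))

def auc_difference_z(auc1, auc2, se1, se2, r):
    """z-statistic for two AUCs derived from the same cases (Hanley & McNeil, 1983).
    r is the correlation coefficient read from the article's table."""
    return (auc1 - auc2) / math.sqrt(se1**2 + se2**2 - 2 * r * se1 * se2)

# Illustrative numbers: 50 positive and 50 negative cases, r looked up as 0.45.
se1 = hanley_mcneil_se(0.85, 50, 50)
se2 = hanley_mcneil_se(0.80, 50, 50)
print(auc_difference_z(0.85, 0.80, se1, se2, r=0.45))  # |z| > 1.96 -> p < 0.05 (two-sided)
```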
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Sheet 1 (Raw-Data): The raw data of the study, presenting the tagging results for the measures described in the paper. For each subject, it includes the following columns:
A. a sequential student ID
B. an ID that defines a random group label and the notation
C. the notation used: User Story or Use Case
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade (L/M/H), where H is greater than or equal to 80, M is at least 65 and below 80, and L is otherwise
G. the total number of classes in the student's conceptual model
H. the total number of relationships in the student's conceptual model
I. the total number of classes in the expert's conceptual model
J. the total number of relationships in the expert's conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
P. the researchers' judgement on how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present.
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.
All the calculations and information provided in the following sheets
originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection,
including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes in the student model divided by the number of classes in the expert model is calculated, describing the size ratio. We provide box plots to allow a visual comparison of the shape, central value, and variability of the distribution for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
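As a sketch of how the size ratio and its group-wise box plots could be reproduced outside Excel: the snippet below assumes hypothetical file and column names (student_classes, expert_classes, notation) standing in for columns G, I, and C of the raw data sheet.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names mirroring the Raw-Data sheet description.
df = pd.read_excel("raw-data.xlsx", sheet_name="Raw-Data")
df["size_ratio"] = df["student_classes"] / df["expert_classes"]  # column G / column I

# One box plot per group; "notation" could be swapped for case, process, or grade.
df.boxplot(column="size_ratio", by="notation")
plt.show()
```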
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness, respectively. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the number of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. Completeness is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the number of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
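The two definitions reduce to simple ratios of the K-O counts; a minimal sketch, using the formulas exactly as stated above, with illustrative counts:

```python
def correctness(al, wr, so, om):
    # AL / (AL + OM + SO + WR)
    return al / (al + om + so + wr)

def completeness(al, wr, om):
    # (AL + WR) / (AL + WR + OM)
    return (al + wr) / (al + wr + om)

# Illustrative counts: 12 aligned, 3 wrongly represented, 2 system-oriented, 5 omitted.
print(correctness(al=12, wr=3, so=2, om=5))  # 12/22 ~= 0.545
print(completeness(al=12, wr=3, om=5))       # 15/20 = 0.75
```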
For sheet 4 as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided. Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html (a formula-level sketch of these computations follows the sheet list below). The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation - UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case - SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
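As referenced above, the per-sheet significance and effect size come from a t-test and Hedges' g. The sketch below reproduces those computations in Python rather than the online tool the authors used; the group scores are made-up illustrative values.

```python
import math
from scipy import stats

def hedges_g(x, y):
    """Hedges' g: Cohen's d with the small-sample correction J = 1 - 3/(4*df - 1)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    s_pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    j = 1 - 3 / (4 * (nx + ny - 2) - 1)  # bias correction for small samples
    return j * (mx - my) / s_pooled

# Made-up completeness scores for the two notation groups (US vs UC):
us = [0.72, 0.80, 0.65, 0.90, 0.75]
uc = [0.60, 0.70, 0.55, 0.68, 0.74]
t_stat, p_value = stats.ttest_ind(us, uc)  # significance (t-test)
print(p_value, hedges_g(us, uc))           # p-value and effect size
```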
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Context
The dataset tabulates the population of Excel township over the last 20-plus years. It lists the population for each year, along with the year-on-year change in population and the change in percentage terms. The dataset can be used to understand the population change of Excel township across the last two decades: for example, to identify whether the population is declining or increasing, when the population peaked, and whether it is still growing and has not yet reached its peak. We can also compare the trend with the overall trend of the United States population over the same period.
Key observations
In 2023, the population of Excel township was 300, a 0.99% decrease year-over-year from 2022. Previously, in 2022, the population of Excel township was 303, a decline of 0.98% compared to a population of 306 in 2021. Over the last 20-plus years, between 2000 and 2023, the population of Excel township increased by 17. In this period, the peak population was 308, in the year 2020. The numbers suggest that the population has already peaked and is showing a trend of decline. Source: U.S. Census Bureau Population Estimates Program (PEP).
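The year-on-year figures above are plain first differences and percent changes; a minimal pandas sketch, using the population values quoted in the key observations, reproduces them:

```python
import pandas as pd

pop = pd.DataFrame({"year": [2020, 2021, 2022, 2023],
                    "population": [308, 306, 303, 300]})
pop["change"] = pop["population"].diff()                  # absolute year-on-year change
pop["change_pct"] = pop["population"].pct_change() * 100  # change in percentage terms
print(pop)  # 2023 row: change = -3, change_pct ~= -0.99
```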
When available, the data consists of estimates from the U.S. Census Bureau Population Estimates Program (PEP).
Data Coverage:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for any of your research projects, reports, or presentations, you can contact our research staff at research@neilsberg.com about the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel township Population by Year. You can refer to the same here.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Excel files containing the data for the paper titled "Diffuse blue vs. structural silver—comparing alternative strategies for pelagic background matching between two coral reef fishes." See "Data for creole wrasse vs bar jack.docx" for more details.
Age-depth models for Pb-210 datasets. The St Croix Watershed Research Station, of the Science Museum of Minnesota, kindly made available 210Pb datasets that have been measured in their lab over the past decades. The datasets come mostly from North American lakes. These datasets were used to produce chronologies using both the 'classical' CRS (Constant Rate of Supply) approach and a recently developed Bayesian alternative called 'Plum', in order to compare the two approaches. The 210Pb data will also be deposited in the neotomadb.org database. The dataset consists of three files:
1. Rcode_Pb210.R: R code to process the data files, produce age-depth models, and compare them.
2. StCroix_agemodel_output.zip: Output of age-model runs of the St Croix datasets.
3. StCroix_xlxs_files.zip: Excel files of the St Croix Pb-210 datasets.
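For orientation, the 'classical' CRS model mentioned above dates a depth from the unsupported 210Pb inventory remaining below it. A minimal sketch of that relationship (not the deposited R code, which implements the full workflow), assuming the standard 22.3-year 210Pb half-life:

```python
import math

LAMBDA = math.log(2) / 22.3  # 210Pb decay constant (half-life ~22.3 years)

def crs_age(total_inventory, inventory_below):
    """CRS age at a depth: t = (1/lambda) * ln(A(0) / A(x)), where A(0) is the
    total unsupported 210Pb inventory and A(x) the inventory below the depth."""
    return math.log(total_inventory / inventory_below) / LAMBDA

# Example: 40% of the unsupported inventory remains below a given depth.
print(crs_age(1.0, 0.4))  # ~29.5 years
```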
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This database contains the results from questionnaires gathered during user testing of the SELFEX solution, a training system utilizing motion-tracking gloves, augmented reality (AR), and screen-based interfaces. Participants were asked to complete paper- and tablet-based questionnaires after interacting with both AR and screen-guided training environments. The data provided allows for a comparative analysis between the two training methods (AR vs. screen) and assesses the suitability of the MAGOS hand-tracking gloves for this application. Additionally, it facilitates the exploration of correlations between various user experience factors, such as ease of use, usefulness, satisfaction, and ease of learning.
The folder is divided into two types of files:
- PDF files: These contain the three questionnaires administered during testing.
- "dataset.xlsx": This file includes the questionnaire results.
Within the Excel file, the data is organized across three sheets:
- "Results with AR glasses": Displays data from the experiment conducted using Hololens 2 AR glasses. Participants are anonymized and coded by gender (e.g., M01 for the first male participant).
- "Results without AR glasses": Shows data from the experiment conducted with five participants using a TV screen instead of Hololens 2 to follow the assembly training instructions.
- "Demographic data": Contains demographic information related to the participants.
This dataset enables comprehensive evaluation and comparison of the training methods and user experiences.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset contains one Excel file and five Word documents. Simulation.xlsx describes the parameter values used for the numerical analysis based on empirical data; in this Excel file, we calculated the values of each capped call-option model parameter. "Computation of Table 2.docx" and the other documents show the results of the comparative statics.
The dataset includes a PDF file containing the results and an Excel file with the following tables:
Table S1: Results of comparing the performance of MetaFetcheR to MetaboAnalystR using Diamanti et al.
Table S2: Results of comparing the performance of MetaFetcheR to MetaboAnalystR for Priolo et al.
Table S3: Results of comparing the performance of MetaFetcheR to the MetaboAnalyst 5.0 webtool using Diamanti et al.
Table S4: Results of comparing the performance of MetaFetcheR to the MetaboAnalyst 5.0 webtool for Priolo et al.
Table S5: Data quality test results for running 100 iterations on the HMDB database.
Table S6: Data quality test results for running 100 iterations on the KEGG database.
Table S7: Data quality test results for running 100 iterations on the ChEBI database.
Table S8: Data quality test results for running 100 iterations on the PubChem database.
Table S9: Data quality test results for running 100 iterations on the LIPID MAPS database.
Table S10: The list of metabolites that were not mapped by MetaboAnalystR for Diamanti et al.
Table S11: An example of an input matrix for MetaFetcheR.
Table S12: Results of comparing the performance of MetaFetcheR to MS_targeted using Diamanti et al.
Table S13: Data set from Diamanti et al.
Table S14: Data set from Priolo et al.
Table S15: Results of comparing the performance of MetaFetcheR to CTS using KEGG identifiers available in Diamanti et al.
Table S16: Results of comparing the performance of MetaFetcheR to CTS using LIPID MAPS identifiers available in Diamanti et al.
Table S17: Results of comparing the performance of MetaFetcheR to CTS using KEGG identifiers available in Priolo et al.
Table S18: Results of comparing the performance of MetaFetcheR to CTS using KEGG identifiers available in Priolo et al.
(See the "index" tab in the Excel file for more information.)
Small-compound databases contain a large amount of information about metabolites and metabolic pathways. However, the plethora of such databases and the redundancy of their information lead to major issues with analysis and standardization. Failing to establish means of data access at the early stages of a project can lead to mislabelled compounds, reduced statistical power, and long delays in the delivery of results.
We developed MetaFetcheR, an open-source R package that links metabolite data from several small-compound databases, resolves inconsistencies and covers a variety of use-cases of data fetching. We showed that the performance of MetaFetcheR was superior to existing approaches and databases by benchmarking the performance of the algorithm in three independent case studies based on two published datasets.
The dataset was originally published in DiVA and moved to SND in 2024.
The gender pay gap, or gender wage gap, is the average difference between the remuneration of men and women who are working. Women are generally considered to be paid less than men. There are two distinct numbers regarding the pay gap: the non-adjusted versus the adjusted pay gap. The latter typically takes into account differences in hours worked, occupations chosen, education, and job experience. In the United States, for example, the non-adjusted average female annual salary is 79% of the average male salary, compared to 95% for the adjusted average salary.
The reasons link to legal, social, and economic factors, and extend beyond "equal pay for equal work".
The gender pay gap can be a problem from a public policy perspective because it reduces economic output and means that women are more likely to be dependent upon welfare payments, especially in old age.
This dataset aims to replicate the data used in the famous paper "The Gender Wage Gap: Extent, Trends, and Explanations", which provides new empirical evidence on the extent of and trends in the gender wage gap, which declined considerably during the 1980–2010 period.
fedesoriano. (January 2022). Gender Pay Gap Dataset. Retrieved [Date Retrieved] from https://www.kaggle.com/fedesoriano/gender-pay-gap-dataset.
There are two files in this dataset: a) the Panel Study of Income Dynamics (PSID) microdata over the 1980-2010 period, and b) the Current Population Survey (CPS), providing some additional US national data on the gender pay gap.
PSID variables:
Notes:
- Variables with fz added to their name refer to experience where we have filled in some zeros in the missing PSID years with data from the respondents' answers to questions about jobs worked during these missing years. The fz variables were used in the regression analyses.
- Variables with a predict prefix refer to the computation of actual experience accumulated during the years in which the PSID did not survey the respondents. There are more predicted experience levels than are needed to impute experience in the missing years in some cases. Note that the variables yrsexpf, yrsexpfsz, etc., include these computations, so if you want to use full-time or part-time experience, you don't need to add these predict variables in; they are included in the data set to illustrate the results of the computation process.
- Variables with an orig prefix are the original PSID variables. These have been processed and in some cases renamed for convenience. As shown in the accompanying regression program, these orig variables aren't used directly in the regressions. There are more of the original PSID variables, which were used to construct the variables used in the regressions.
- The hd suffix means that the variable refers to the head of the family, and the wf suffix means that it refers to the wife or female cohabitor if there is one.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Poseidon 2.0 is a user-oriented, simple, and fast Excel tool that aims to compare different wastewater treatment techniques based on their pollutant removal efficiencies, their costs, and additional assessment criteria. Poseidon can be applied in pre-feasibility studies to assess possible water reuse options, and can show decision makers and other stakeholders that implementable solutions are available to comply with local requirements. This upload consists of:
- Poseidon 2.0 Excel file that can be used with Microsoft Excel - XLSM
- Handbook presenting the main features of the decision support tool - PDF
Externally hosted supplementary file 1, Oertlé, Emmanuel. (2018, December 5). Poseidon - Decision Support Tool for Water Reuse (Microsoft Excel) and Handbook (Version 1.1.1). Zenodo. http://doi.org/10.5281/zenodo.3341573
Externally hosted supplementary file 2, Oertlé, Emmanuel. (2018). Wastewater Treatment Unit Processes Datasets: Pollutant removal efficiencies, evaluation criteria and cost estimations (Version 1.0.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.1247434
Externally hosted supplementary file 3, Oertlé, Emmanuel. (2018). Treatment Trains for Water Reclamation (Dataset) (Version 1.0.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.1972627
Externally hosted supplementary file 4, Oertlé, Emmanuel. (2018). Water Quality Classes - Recommended Water Quality Based on Guideline and Typical Wastewater Qualities (Version 1.0.2) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3341570
License: https://dataverse-staging.rdmc.unc.edu/api/datasets/:persistentId/versions/2.2/customlicense?persistentId=doi:10.15139/S3/12157
This study consists of data files that code the data availability policies of top-20 academic journals in the fields of Business & Finance, Economics, International Relations, Political Science, and Sociology. Journals that were ranked as top-20 titles based on 2003-vintage ISI Impact Factor scores were coded on their data policies in 2003 and again in 2015. In addition, journals that were ranked as top-20 titles based on the most recent ISI Impact Factor scores were likewise coded on their data policies in 2015. The included Stata .do file imports the contents of each of the Excel files, cleans and labels the data, and produces two tables: one comparing the data policies of 2003-vintage top-20 journals in 2003 to those journals' policies in 2015, and one comparing the data policies of 2003-vintage top-20 journals in 2003 to the data policies of current top-20 journals in 2015.
Compare state-level estimates from the 2021-2022 National Surveys on Drug Use and Health (NSDUH) using p-values. The tables accompany the 2021-2022 NSDUH State Estimates of Substance Use and Mental Disorders, and can be used to determine whether the difference in estimates between two geographic areas is statistically significant. A guide to their use is also available. The tables are available in an Excel spreadsheet or a zip file containing CSV text files. Each tab or text file contains p-values for a particular measure and a particular age group.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This repository contains the raw data used for a systematic review of the impact of background music on cognitive task performance (Cheah et al., 2022). Our intention is to facilitate future updates to this work.
Contents description
This repository contains eight Microsoft Excel files containing the synthesised data pertaining to each of the six cognitive domains analysed in the review, as well as task difficulty and population characteristics:
- raw-data-attention
- raw-data-inhibition
- raw-data-language
- raw-data-memory
- raw-data-thinking
- raw-data-processing-speed
- raw-data-task-difficulty
- raw-data--population
Files description
Tabs organisation
The files pertaining to each cognitive domain include individual tabs for each cognitive task analysed (cf. Figure 2 in the original paper for the list of cognitive tasks). The file with the population characteristics data also contains separate tabs for each characteristic (extraversion, music training, gender, and working memory capacity).
Tabs contents
In all files and tabs, each row corresponds to the data of a test. The same article can have more than one row if it reports multiple tests. For instance, the study by Cassidy and MacDonald (2007; cf. Memory.xlsx, tab: Memory-all) contains two experiments (immediate and delayed free recall), each with multiple tests (immediate free recall: tests 25-32; delayed free recall: tests 58-61). Each test (one per row) in this experiment pertains to comparisons between conditions where the background music has different levels of arousal, between groups of participants with different extraversion levels, between different task materials (words or paragraphs), and between different combinations of the previous (e.g., a high-arousing music vs silence test among extraverts whilst completing an immediate free recall task involving paragraphs; cf. test 30). The columns are organised as follows:
"TESTS": the index of the test in a particular tab (for easy reference); "ID": abbreviation of the cognitive tasks involved in a specific experiment (see glossary for meaning); "REFERENCE": the article where the data was taken from (see main publications for list of articles); "CONDITIONS": an abbreviated description of the music condition of a given test; "MEANS (music)": the average performance across all participants in a given experiment with background music; "MEANS (silence)": the average performance across all participants in a given experiment without background music. Then, in horizontal arrangement, we also include groups of two columns that breakdown specific comparisons related to each test (i.e., all tests comparing the same two types of condition, e.g., L-BgM vs I-BgM, will appear under the same set of columns). For each one, we indicate mean difference between the respective conditions ("MD" column) and the direction of effect ("Standard Metric" column). Each file also contains a "Glossary" tab that explains all the abbreviations used in each document. Bibliography Cheah, Y., Wong, H. K., Spitzer, M., & Coutinho, E. (2022). Background music and cognitive task performance: A systematic review of task, music and population impact. Music & Science, 5(1), 1-38. https://doi.org/10.1177/20592043221134392
License: https://spdx.org/licenses/CC0-1.0.html
The data presented here were collected in the context of the EU LIFE EuroLargeCarnivores Outreach Project (LIFE16 GIE/DE/000661). The data set provided is part of a much larger set of data assembled during two different online stakeholder surveys conducted in late 2018/early 2019 (Baseline) and 2021 (Outcome Survey, last year of the project) in 14 countries participating in the project. The data selected are the basis for the analysis and results presented and discussed in the research article "Did we achieve what we aimed for? Assessing the outcomes of a human-carnivore conflict mitigation and coexistence project in Europe" by Carol Grossmann and Laszló Pátkó, published in Wildlife Biology in 2024. The dataset is provided as an Excel sheet displaying anonymized numerical respondent IDs (rows) and coded answers to selected questions (columns) of these two surveys. The table includes full explanatory wording for all codes used. The data set provided contains n=1262 individual data subsets from the Baseline Survey and n=1056 individual data subsets from the Outcome Survey in 2021. Part of the questions are identical in both survey sets for direct comparison. Cross references are provided for questions posed in both surveys for comparison but denominated with different numbers in the respective surveys. Part of the questions were posed only in the 2021 survey. Some questions/answers serve as filters for a differentiated analysis according to stakeholder categories, engagement in networking activities, or stakeholder participation and non-participation in project interventions. The reuse potential of this data set lies in the opportunity to assess project outcomes with further stakeholder categories in correlation with respondents' (non-)participation in project interventions. No further legal or ethical considerations need to be taken, as all individual respondent sets have been fully anonymized.
Methods
We conducted two online stakeholder surveys in the 14 project partner countries, within the European outreach project "EuroLargeCarnivores". We used Google Forms for the questionnaires, as mandated by the ELC project lead. In late 2018 and early 2019, we conducted a baseline survey (t0), and in 2021 (t0+3) an 'endline' survey to assess changes over the project's lifetime on the stakeholder level. The baseline survey 'Large Carnivores in Europe 2018' took place during the first year of the project in all fourteen countries. In 2021, the second comparative stakeholder perception survey 'Monitoring the Impact' was launched during the final year of the outreach project in the same distribution range, applying the same distribution method. The Forest Research Institute of Baden-Württemberg (FVA) designed, provided, and coordinated both survey questionnaires and data collection procedures, while staff of the regional project partners provided additional preparations, such as translation of the English master questionnaires into the twelve regional languages, as well as the actual data collection. We used a prearranged multi-channel and pyramid distribution system (Atkinson and Flint 2004, Dillman et al. 2014, Grossmann et al. 2020).
The links to the surveys were distributed via the partners' systematically updated distribution lists, individual in-person interviews, websites, and social media propagation, offering survey respondents further distribution of the survey through a snowball system, thereby reaching out to as many stakeholders in the 14 project partner countries as possible (Atkinson and Flint 2004, Dillman et al. 2014, Grossmann et al. 2020). After the closure of the surveys, the country datasets were aggregated, re-translated, cleaned, and fully coded for analysis. The 2018 survey received n = 1262 returns; the 2021 survey resulted in n = 1056 returns, a decrease of 16%. Due to the strict enforcement of the European Union's General Data Protection Regulation (GDPR), we could not address the respondents of the 2018 survey directly again. Additionally, due to the open accessibility of the survey on social media, no concise distribution list of the recipient population is available. We still assumed comparability of the two datasets for the research questions at hand (Grossmann et al. 2019, 2020). The statistical analysis used descriptive statistics and the χ² test, including Cramér's V and post hoc tests (differences in standardized residuals) (Cohen 1988, Agresti 2007) for comparing the samples from 2018 and 2021, as well as subsamples of the 2021 sample for a more focused analysis. The analyses were performed using the statistics programs SPSS and Microsoft Excel. For more details about the methods of data collection and analysis, see Grossmann et al. 2020 and Grossmann and Patko 2024.
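A sketch of the described comparison in Python rather than SPSS/Excel: a chi-square test with Cramér's V and standardized residuals, on a made-up contingency table (survey year by coded answer category); the counts are illustrative, not from the dataset.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x3 table: rows = survey year (2018, 2021), columns = coded answer.
table = np.array([[120, 400, 742],
                  [ 90, 380, 586]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))     # effect size
residuals = (table - expected) / np.sqrt(expected)           # standardized residuals (post hoc)
print(p, cramers_v)
print(residuals)
```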
Compare state-level estimates from the 2017-2018 National Surveys on Drug Use and Health (NSDUH) using p-values. The tables accompany the 2017-2018 NSDUH State Estimates of Substance Use and Mental Disorders, and can be used to determine whether the difference in estimates between two geographic areas is statistically significant. A guide to their use is also included. The tables are available in an Excel spreadsheet or a zip file containing CSV text files. Each tab or text file contains p-values for a particular measure and a particular age group.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Materials and Methods
The study was held in the Oral and Maxillofacial Surgery department of Kasturba Hospital, Manipal, from November 2019 to October 2021, after approval from the Institutional Ethics Committee (IEC: 924/2019). The study included patients between 18 and 70 years of age. Patients with associated diseases like cysts or tumors of the jaw bones, pregnant women, and those with underlying psychological issues were excluded from the study. The patients were assessed 8-12 weeks after surgical intervention. A data schedule was prepared to document age, sex, and fracture type. The study consisted of 182 subjects divided into two groups of 91 each (Group A: mild to moderate facial injury; Group B: severe facial injury) based on the severity of maxillofacial fractures and facial injury. Informed consent was obtained from each of the study participants.
We followed the Facial Injury Severity Scale (FISS) to determine the severity of facial fractures and injuries. The face is divided horizontally into the mandibular, mid-facial, and upper facial thirds. Fractures in these thirds are given points based on their type (Table 1). Injuries with a total score above 4.4 were considered severe facial injuries, and those with a total score below 4.4 were considered mild/moderate facial injuries. The QOL was compared between the two groups.
Meticulous management of hard and soft tissue injuries was implemented in our state-of-the-art tertiary care hospital. All elective cases were surgically treated at least 72 hours after the initial trauma. The facial fractures were adequately reduced and fixed with high-end titanium miniplates and screws (AO Principles of Fracture Management). Soft tissue injuries were managed by wound debridement, removal of foreign bodies, and layered wound closure. Adequate pain-relieving medication was prescribed to the patients postoperatively for effective pain control.
The QOL of the subjects was assessed using the 'Twenty-point Quality of life assessment in facial trauma patients in Indian population' assessment tool. This tool contains 20 questions and uses a five-point Likert response scale. The twenty-point quality of life assessment tool included two zones, Zone 1 (psychosocial impact) and Zone 2 (functional and esthetic impact), with ten questions (domains) each (Table 2). The scores for each question ranged from 1 to 5, a higher score denoting better quality of life. Accordingly, the score in each zone for a patient ranged from 10 to 50, and the total scores of both zones were recorded to determine the QOL. The sum of both zones determined the prognosis following surgery (Table 2).
The data collected were entered into a Microsoft Excel spreadsheet and analyzed using IBM SPSS Statistics, Version 22 (Armonk, NY: IBM Corp). Descriptive data were presented in the form of frequency and percentage for categorical variables and in the form of mean, median, standard deviation, and quartiles for continuous variables. Since the data did not follow a normal distribution, a non-parametric test was used. QOL scores were compared between the study groups using the Mann-Whitney U test. A p-value < 0.05 was considered statistically significant.
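A minimal sketch of the reported group comparison (Mann-Whitney U at p < 0.05), using made-up QOL totals on the 20-100 scale rather than the study data:

```python
from scipy.stats import mannwhitneyu

# Made-up total QOL scores (sum of both zones, range 20-100) for the two groups.
group_a = [78, 85, 90, 72, 88, 81, 76]
group_b = [60, 65, 70, 58, 66, 62, 64]

u_stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(u_stat, p_value)  # p < 0.05 would be reported as statistically significant
```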
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Spreadsheets targeted at the analysis of GHS safety fingerprints.
Abstract
Over a 20-year period, the UN developed the Globally Harmonized System (GHS) to address international variation in chemical safety information standards. By 2014, the GHS had become widely accepted internationally and has become the cornerstone of OSHA's Hazard Communication Standard. Despite this progress, today we observe inconsistent results when different sources apply the GHS to specific chemicals, in terms of the GHS pictograms, hazard statements, precautionary statements, and signal words assigned to those chemicals. In order to assess the magnitude of this problem, this research uses an extension of the "chemical fingerprints" used in 2D chemical structure similarity analysis to GHS classifications. By generating a chemical safety fingerprint, the consistency of the GHS information for specific chemicals can be assessed. The problem is that the sources of GHS information can differ. For example, the SDS for sodium hydroxide pellets found on Fisher Scientific's website displays two pictograms, while the GHS information for sodium hydroxide pellets on Sigma-Aldrich's website has only one pictogram. A chemical information tool which identifies such discrepancies within a specific chemical inventory can assist in maintaining the quality of the safety information needed to support safe work in the laboratory. The tools for this analysis will be scaled to the size of a moderately large research lab or a small chemistry department as a whole (between 1,000 and 3,000 chemical entities) so that labelling expectations within these universes can be established as consistently as possible.
Most chemists are familiar with programs such as Excel and Google Sheets, spreadsheet programs that are used by many chemists daily. Through a monadal programming approach with these tools, the analysis of GHS information can be made possible for non-programmers. This monadal approach employs single spreadsheet functions to analyze the data collected, rather than long programs, which can be difficult to debug and maintain. Another advantage of this approach is that the single monadal functions can be mixed and matched to meet new goals as information needs about the chemical inventory evolve over time. These monadal functions are used to convert GHS information into binary strings of data called "bitstrings". This approach is also used when comparing chemical structures. The binary approach makes data analysis more manageable, as GHS information comes in a variety of formats, such as pictures or alphanumeric strings, which are difficult to compare on their face. Bitstrings generated from the GHS information can be compared using an operator such as the Tanimoto coefficient to yield values from 0, for strings that have no similarity, to 1, for strings that are the same. Once a particular set of information is analyzed, the hope is that the same techniques can be extended to more information. For example, if GHS hazard statements are analyzed through a spreadsheet approach, the same techniques with minor modifications could be used to tackle more GHS information, such as pictograms.
Intellectual Merit
This research indicates that the cheminformatic technique of structural fingerprints can be used to create safety fingerprints. Structural fingerprints are binary bit strings obtained from the non-numeric entity of 2D structure. This structural fingerprint allows comparison of 2D structures through the use of the Tanimoto coefficient. The use of structural fingerprints can be extended to safety fingerprints, which can be created by converting a non-numeric entity such as GHS information into a binary bit string and comparing data through the use of the Tanimoto coefficient.
Broader Impact
Extension of this research can be applied to many aspects of GHS information. This research focused on comparing GHS hazard statements, but could be further applied to other pieces of GHS information, such as pictograms and GHS precautionary statements. Another facet of this research is allowing the chemist who uses the data to compare large datasets using spreadsheet programs such as Excel, without needing a large programming background. Development of this technique will also benefit the Chemical Health and Safety and Chemical Information communities by better defining the quality of GHS information available and providing a scalable and transferable tool to manipulate this information to meet a variety of other organizational needs.
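A minimal sketch of the bitstring comparison described above: two hypothetical GHS hazard-statement fingerprints (one bit per possible hazard statement) compared with the Tanimoto coefficient.

```python
def tanimoto(bits_a: str, bits_b: str) -> float:
    """Tanimoto coefficient of two equal-length bitstrings:
    shared on-bits / (on-bits in A + on-bits in B - shared on-bits)."""
    a = sum(ch == "1" for ch in bits_a)
    b = sum(ch == "1" for ch in bits_b)
    c = sum(x == "1" and y == "1" for x, y in zip(bits_a, bits_b))
    return c / (a + b - c)

# Hypothetical fingerprints for one chemical from two SDS sources:
source_1 = "110100101"  # 1 = this hazard statement is assigned
source_2 = "110000101"
print(tanimoto(source_1, source_2))  # 0.8; 1.0 would mean identical safety information
```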
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This comprehensive dataset provides a wealth of information about all countries worldwide, covering a wide range of indicators and attributes. It encompasses demographic statistics, economic indicators, environmental factors, healthcare metrics, education statistics, and much more. With every country represented, this dataset offers a complete global perspective on various aspects of nations, enabling in-depth analyses and cross-country comparisons.
- Country: Name of the country.
- Density (P/Km2): Population density measured in persons per square kilometer.
- Abbreviation: Abbreviation or code representing the country.
- Agricultural Land (%): Percentage of land area used for agricultural purposes.
- Land Area (Km2): Total land area of the country in square kilometers.
- Armed Forces Size: Size of the armed forces in the country.
- Birth Rate: Number of births per 1,000 population per year.
- Calling Code: International calling code for the country.
- Capital/Major City: Name of the capital or major city.
- CO2 Emissions: Carbon dioxide emissions in tons.
- CPI: Consumer Price Index, a measure of inflation and purchasing power.
- CPI Change (%): Percentage change in the Consumer Price Index compared to the previous year.
- Currency_Code: Currency code used in the country.
- Fertility Rate: Average number of children born to a woman during her lifetime.
- Forested Area (%): Percentage of land area covered by forests.
- Gasoline_Price: Price of gasoline per liter in local currency.
- GDP: Gross Domestic Product, the total value of goods and services produced in the country.
- Gross Primary Education Enrollment (%): Gross enrollment ratio for primary education.
- Gross Tertiary Education Enrollment (%): Gross enrollment ratio for tertiary education.
- Infant Mortality: Number of deaths per 1,000 live births before reaching one year of age.
- Largest City: Name of the country's largest city.
- Life Expectancy: Average number of years a newborn is expected to live.
- Maternal Mortality Ratio: Number of maternal deaths per 100,000 live births.
- Minimum Wage: Minimum wage level in local currency.
- Official Language: Official language(s) spoken in the country.
- Out of Pocket Health Expenditure (%): Percentage of total health expenditure paid out-of-pocket by individuals.
- Physicians per Thousand: Number of physicians per thousand people.
- Population: Total population of the country.
- Population: Labor Force Participation (%): Percentage of the population that is part of the labor force.
- Tax Revenue (%): Tax revenue as a percentage of GDP.
- Total Tax Rate: Overall tax burden as a percentage of commercial profits.
- Unemployment Rate: Percentage of the labor force that is unemployed.
- Urban Population: Percentage of the population living in urban areas.
- Latitude: Latitude coordinate of the country's location.
- Longitude: Longitude coordinate of the country's location.
- Analyze population density and land area to study spatial distribution patterns.
- Investigate the relationship between agricultural land and food security.
- Examine carbon dioxide emissions and their impact on climate change.
- Explore correlations between economic indicators such as GDP and various socio-economic factors.
- Investigate educational enrollment rates and their implications for human capital development.
- Analyze healthcare metrics such as infant mortality and life expectancy to assess overall well-being.
- Study labor market dynamics through indicators such as labor force participation and unemployment rates.
- Investigate the role of taxation and its impact on economic development.
- Explore urbanization trends and their social and environmental consequences.
Data Source: This dataset was compiled from multiple data sources.
License: https://creativecommons.org/publicdomain/zero/1.0/
The Transportation and Logistics Tracking Dataset comprises multiple datasets related to various aspects of transportation and logistics operations. It includes information on on-time delivery impact, routes by rating, customer ratings, delivery times with and without congestion, weather conditions, and differences between fixed and main delivery times across different regions.
- On-Time Delivery Impact: This dataset provides insights into the impact of on-time delivery, categorizing deliveries based on their impact and counting the occurrences for each category.
- Routes by Rating: Here, the dataset illustrates the relationship between routes and their corresponding ratings, offering a visual representation of route performance across different rating categories.
- Customer Ratings and On-Time Delivery: This dataset explores the relationship between customer ratings and on-time delivery, presenting a comparison of delivery counts based on customer ratings and on-time delivery status.
- Delivery Time with and without Congestion: It contains information on delivery times in various cities, both with and without congestion, allowing for an analysis of how congestion affects delivery efficiency.
- Weather Conditions: This dataset provides a summary of weather conditions, including counts for different conditions such as partly cloudy, patchy light rain with thunder, and sunny.
- Difference between Fixed and Main Delivery Times: Lastly, the dataset highlights the differences between fixed and main delivery times across different regions, shedding light on regional variations in delivery schedules.
Overall, this dataset offers valuable insights into the transportation and logistics domain, enabling analysis and decision-making to optimize delivery processes and enhance customer satisfaction.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
The dataset deposited here underpins a PhD thesis successfully defended by Stian Kjeksrud on June 6, 2019, at the University of Oslo, Faculty of Social Sciences, Department of Political Science. This dataset consists of two data files, each in its original Excel file format as well as in a Unicode text format:
1. UN_POC_Operations_UNPOCO_2019-01-25: This file (United Nations Protection of Civilians Operations (UNPOCO)) captures and codes the core empirical characteristics of 200 UN military operations to protect civilians from violence in African conflicts between 1999 and 2017.
2. UN_POC_Operations_UNPOCO_fsQCA_2019-01-25: This file (UNPOCO fsQCA) builds directly on the UNPOCO dataset, but consists of a sub-set of 126 cases tailored to fuzzy-set Qualitative Comparative Analysis (fsQCA), and therefore includes a QCA matrix and some additional information for each case.
Both data files were built by Stian Kjeksrud to support the analysis of variations in outcomes of operations and to explore success factors of UN military protection operations across time and UN missions. The data are captured from the United Nations Secretary-General's openly available reporting to the United Nations Security Council.