License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Excel file performs a statistical test of whether two ROC curves are different from each other based on the Area Under the Curve (AUC). You will need the coefficient from the table presented in the following article to enter the correct value for the comparison: Hanley JA, McNeil BJ (1983) A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology 148:839-843.
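As a cross-check of what the spreadsheet computes, here is a minimal sketch of the Hanley-McNeil comparison, assuming the Hanley-McNeil (1982) formula for the standard error of an AUC; the AUCs, case counts, and the correlation r (normally looked up in the article's table) are all illustrative values, not data from this file.

```python
import math

def auc_se(auc, n_pos, n_neg):
    """Hanley-McNeil (1982) standard error of an AUC estimate."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return math.sqrt(var)

def compare_correlated_aucs(auc1, auc2, n_pos, n_neg, r):
    """Two-sided z-test for two AUCs derived from the same cases;
    r is the correlation coefficient from Hanley & McNeil (1983)."""
    se1 = auc_se(auc1, n_pos, n_neg)
    se2 = auc_se(auc2, n_pos, n_neg)
    z = (auc1 - auc2) / math.sqrt(se1**2 + se2**2 - 2 * r * se1 * se2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Example: 60 diseased and 100 healthy cases; r = 0.45 is illustrative
z, p = compare_correlated_aucs(0.85, 0.80, n_pos=60, n_neg=100, r=0.45)
print(f"z = {z:.3f}, p = {p:.4f}")
```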
License: CC0 1.0 (Public Domain Dedication), https://creativecommons.org/publicdomain/zero/1.0/
Vrinda Store: Interactive MS Excel dashboard (Feb 2024 - Mar 2024)
The owner of Vrinda Store wants to create an annual sales report for 2022 so that employees can understand their customers and grow sales further. The questions asked by the owner of Vrinda Store are as follows:
1) Compare the sales and orders using a single chart.
2) Which month got the highest sales and orders?
3) Who purchased more in 2022 - women or men?
4) What were the different order statuses in 2022?
And some other questions related to the business. The owner of Vrinda Store wanted a visual story of their data that depicts the real-time progress and sales insights of the store. This project is an MS Excel dashboard which presents an interactive visual story to help the owner and employees increase their sales.
Tasks performed: data cleaning, data processing, data analysis, data visualization, reporting.
Tool used: MS Excel
Skills: Data Analysis · Data Analytics · MS Excel · Pivot Tables
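The first two of the owner's questions can also be answered outside Excel; here is a minimal pandas sketch with a hypothetical order-level schema (the column names are assumptions, not the store's actual export).

```python
import pandas as pd

# Hypothetical order-level data standing in for the Vrinda Store export.
orders = pd.DataFrame({
    "Date": pd.to_datetime(["2022-01-15", "2022-01-20", "2022-02-03"]),
    "Amount": [1200, 850, 990],
    "Gender": ["Women", "Men", "Women"],
})

# Monthly sales vs. orders in one table (question 1), top month (question 2)
monthly = (orders
           .assign(Month=orders["Date"].dt.to_period("M"))
           .groupby("Month")
           .agg(Sales=("Amount", "sum"), Orders=("Amount", "size")))
print(monthly.sort_values("Sales", ascending=False).head(1))

# Women vs. men purchases (question 3)
print(orders.groupby("Gender")["Amount"].sum())
```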
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spreadsheets targeted at the analysis of GHS safety fingerprints.
Abstract. Over a 20-year period, the UN developed the Globally Harmonized System (GHS) to address international variation in chemical safety information standards. By 2014, the GHS had become widely accepted internationally and has become the cornerstone of OSHA's Hazard Communication Standard. Despite this progress, today we observe inconsistent results when different sources apply the GHS to specific chemicals, in terms of the GHS pictograms, hazard statements, precautionary statements, and signal words assigned to those chemicals. To assess the magnitude of this problem, this research extends the "chemical fingerprints" used in 2D chemical structure similarity analysis to GHS classifications. By generating a chemical safety fingerprint, the consistency of the GHS information for specific chemicals can be assessed. The problem is that sources of GHS information can differ. For example, the SDS for sodium hydroxide pellets found on Fisher Scientific's website displays two pictograms, while the GHS information for sodium hydroxide pellets on Sigma-Aldrich's website has only one pictogram. A chemical information tool that identifies such discrepancies within a specific chemical inventory can assist in maintaining the quality of the safety information needed to support safe work in the laboratory. The tools for this analysis will be scaled to the size of a moderately large research lab or a small chemistry department as a whole (between 1,000 and 3,000 chemical entities) so that labelling expectations within these universes can be established as consistently as possible.
Most chemists are familiar with spreadsheet programs such as Excel and Google Sheets, which many chemists use daily. Through a monadal programming approach with these tools, the analysis of GHS information can be made possible for non-programmers. This monadal approach employs single spreadsheet functions to analyze the collected data rather than long programs, which can be difficult to debug and maintain. Another advantage of this approach is that the single monadal functions can be mixed and matched to meet new goals as information needs about the chemical inventory evolve over time. These monadal functions are used to convert GHS information into binary strings of data called "bitstrings", an approach also used when comparing chemical structures. The binary approach makes data analysis more manageable, as GHS information comes in a variety of formats, such as pictures or alphanumeric strings, which are difficult to compare on their face. Bitstrings generated from the GHS information can be compared using an operator such as the Tanimoto coefficient to yield values from 0, for strings that have no similarity, to 1, for strings that are identical. Once a particular set of information is analyzed, the hope is that the same techniques can be extended to more information. For example, if GHS hazard statements are analyzed through a spreadsheet approach, the same techniques could be used with minor modifications to tackle other GHS information such as pictograms.
Intellectual Merit. This research indicates that the cheminformatic technique of structural fingerprints can be used to create safety fingerprints. Structural fingerprints are binary bit strings obtained from the non-numeric entity of 2D structure. The structural fingerprint allows comparison of 2D structures through the use of the Tanimoto coefficient. The same idea extends to safety fingerprints, which can be created by converting a non-numeric entity such as GHS information into a binary bit string and comparing the data through the use of the Tanimoto coefficient.
Broader Impact. Extensions of this research can be applied to many aspects of GHS information. This research focused on comparing GHS hazard statements but could be further applied to other pieces of GHS information such as pictograms and GHS precautionary statements. Another facet of this research is allowing the chemist who uses the data to compare large datasets using spreadsheet programs such as Excel without needing a strong programming background. Development of this technique will also benefit the Chemical Health and Safety and Chemical Information communities by better defining the quality of GHS information available and providing a scalable and transferable tool to manipulate this information to meet a variety of other organizational needs.
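A minimal sketch of the bitstring comparison described above, with hypothetical hazard-statement fingerprints: the Tanimoto coefficient is the count of shared "on" bits divided by the count of bits that are "on" in either string.

```python
def tanimoto(a: str, b: str) -> float:
    """Tanimoto coefficient between two equal-length bitstrings,
    e.g. presence/absence flags for each GHS hazard statement."""
    on_a = {i for i, bit in enumerate(a) if bit == "1"}
    on_b = {i for i, bit in enumerate(b) if bit == "1"}
    union = on_a | on_b
    return len(on_a & on_b) / len(union) if union else 1.0

# Hypothetical hazard-statement fingerprints for one chemical as
# reported by two different SDS sources (bit positions illustrative).
fisher = "1101000010"
sigma  = "1100000011"
print(tanimoto(fisher, sigma))  # 0.6 -> partially consistent GHS info
```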
The Delta Produce Sources Study was an observational study designed to measure and compare the food environments of farmers markets (n=3) and grocery stores (n=12) in 5 rural towns located in the Lower Mississippi Delta region of Mississippi. Data were collected via electronic surveys from June 2019 to March 2020 using a modified version of the Nutrition Environment Measures Survey (NEMS) Farmers Market Audit tool. The tool was modified to collect information pertaining to the source of fresh produce and for use with both farmers markets and grocery stores. Availability, source, quality, and price information were collected and compared between farmers markets and grocery stores for 13 fresh fruits and 32 fresh vegetables via SAS software programming. Because the towns were not randomly selected and the sample sizes are relatively small, the data may not be generalizable to all rural towns in the Lower Mississippi Delta region of Mississippi.
Resources in this dataset:
Resource Title: Delta Produce Sources Study dataset. File Name: DPS Data Public.csv. Resource Description: The dataset contains variables corresponding to availability, source (country, state, and town if the country is the United States), quality, and price (by weight or volume) of 13 fresh fruits and 32 fresh vegetables sold in farmers markets and grocery stores located in 5 Lower Mississippi Delta towns. Resource Software Recommended: Microsoft Excel, url: https://www.microsoft.com/en-us/microsoft-365/excel
Resource Title: Delta Produce Sources Study data dictionary. File Name: DPS Data Dictionary Public.csv. Resource Description: This file is the data dictionary corresponding to the Delta Produce Sources Study dataset. Resource Software Recommended: Microsoft Excel, url: https://www.microsoft.com/en-us/microsoft-365/excel
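A minimal pandas sketch of how the published CSV might be loaded and summarized; the column names below are guesses for illustration only, since the real ones are defined in DPS Data Dictionary Public.csv.

```python
import pandas as pd

# Load the study dataset (file name as published with the dataset).
dps = pd.read_csv("DPS Data Public.csv")

# Compare mean price of each produce item between store types,
# assuming hypothetical columns "store_type", "item", and "price".
summary = (dps.groupby(["store_type", "item"])["price"]
              .mean()
              .unstack("store_type"))
print(summary.head())
```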
License: custom dataset license, https://borealisdata.ca/api/datasets/:persistentId/versions/2.1/customlicense?persistentId=doi:10.5683/SP3/SZHJFY
This CD-ROM product is an authoritative reference source of 15 key financial ratios by industry groupings compiled from the North American Industry Classification System (NAICS 2007). It is based on up-to-date, reliable and comprehensive data on Canadian businesses, derived from Statistics Canada databases of financial statements for three reference years. The CD-ROM enables users to compare their enterprise's performance to that of their industry and to address issues such as profitability, efficiency and business risk. Financial Performance Indicators can also be used for inter-industry comparisons. Volume 1 covers large enterprises in both the financial and non-financial sectors, at the national level, with annual operating revenue of $25 million or more. Volume 2 covers medium-sized enterprises in the non-financial sector, at the national level, with annual operating revenue of $5 million to less than $25 million. Volume 3 covers small enterprises in the non-financial sector, at the national, provincial, territorial, Atlantic region and Prairie region levels, with annual operating revenue of $30,000 to less than $5 million. Note: FPICB has been discontinued as of 2/23/2015. Statistics Canada continues to provide information on Canadian businesses through alternative data sources. Information on specific financial ratios will continue to be available through the annual Financial and Taxation Statistics for Enterprises program: CANSIM table 180-0003 ; the Quarterly Survey of Financial Statements: CANSIM tables 187-0001 and 187-0002 ; and the Small Business Profiles, which present financial data for small businesses in Canada, available on Industry Canada's website: Financial Performance Data.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Poseidon 2.0 is a user-oriented, simple, and fast Excel tool that aims to compare different wastewater treatment techniques based on their pollutant removal efficiencies, their costs, and additional assessment criteria. Poseidon can be applied in pre-feasibility studies to assess possible water reuse options and can show decision makers and other stakeholders that implementable solutions are available to comply with local requirements (a sketch of how removal efficiencies combine along a treatment train follows the file list below). This upload consists of:
Poseidon 2.0 Excel File that can be used with Microsoft Excel - XLSM
Handbook presenting main features of the decision support tool - PDF
Externally hosted supplementary file 1, Oertlé, Emmanuel. (2018, December 5). Poseidon - Decision Support Tool for Water Reuse (Microsoft Excel) and Handbook (Version 1.1.1). Zenodo. http://doi.org/10.5281/zenodo.3341573
Externally hosted supplementary file 2, Oertlé, Emmanuel. (2018). Wastewater Treatment Unit Processes Datasets: Pollutant removal efficiencies, evaluation criteria and cost estimations (Version 1.0.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.1247434
Externally hosted supplementary file 3, Oertlé, Emmanuel. (2018). Treatment Trains for Water Reclamation (Dataset) (Version 1.0.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.1972627
Externally hosted supplementary file 4, Oertlé, Emmanuel. (2018). Water Quality Classes - Recommended Water Quality Based on Guideline and Typical Wastewater Qualities (Version 1.0.2) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3341570
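As a rough illustration of how per-unit removal efficiencies combine along a multi-step treatment train, here is a minimal sketch assuming the standard serial-removal formula; this is an assumption about the general method, not necessarily Poseidon's exact calculation.

```python
from functools import reduce

def train_removal(efficiencies):
    """Combined removal efficiency of sequential treatment units:
    each unit removes a fraction of the pollutant left by the previous one."""
    remaining = reduce(lambda rem, e: rem * (1 - e), efficiencies, 1.0)
    return 1 - remaining

# Hypothetical train: primary settling, activated sludge, sand filtration
print(train_removal([0.40, 0.85, 0.50]))  # 0.955 -> 95.5% overall removal
```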
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Directory of Files:
A. Filename: Combine_CCTDI.zip
Short description: Quantitative Data. The zip file contains 6 Excel files which store students' raw data. This raw data set consists of students' input on each CCTDI item. The pre-data were collected through an online survey, while the post-data were collected through pen and paper. The data will be analysed by ANOVA to compare the effectiveness of the intervention (a minimal ANOVA sketch follows the file directory below).
(The California Critical Thinking Disposition Inventory (CCTDI) has been widely employed in the field of education to investigate the changes in students' Critical Thinking (CT) attitudes resulting from teaching interventions by comparing pre- and post-tests. This 6-point self-report instrument requires respondents to rate how well each item describes them, from "rating 1" (does not describe them at all) to "rating 6" (describes them extremely well). The instrument has 40 questions categorized into seven subscales covering various CT disposition dimensions, namely: i) truth-seeking, ii) open-mindedness, iii) analyticity, iv) systematicity, v) inquisitiveness, vi) maturity, and vii) self-confidence.)
B. Filename: Combine_TCTSPS.zip
Short description: Quantitative Data. The zip file contains 6 Excel files which store students' raw data, consisting of students' input on each TCTSPS item. The pre-data were collected through an online survey, while the post-data were collected through pen and paper. The data will be analysed by ANOVA to compare the effectiveness of the intervention.
(Test of Critical Thinking Skills for Primary and Secondary School Students (TCTS-PS) consists of 24 items divided into five subscales measuring distinct yet correlated aspects of CT skills, namely: (I) differentiating theory from assumptions, (II) deciding evidence, (III) inference, (IV) finding an alternative theory, and (V) evaluation of arguments. The instrument yields a possible total score of 72. The instrument is intended for use in measuring gains in CT skills resulting from instruction, predicting success in programs where CT is crucial, and examining relationships between CT skills and other abilities or traits.)
C. Filename: Combine_SMTSL.zip
Short description: Quantitative Data. The zip file contains 5 Excel files which store students' raw data, consisting of students' input on each SMTSL item. The pre-data were collected through an online survey, while the post-data were collected through pen and paper. The data will be analysed by ANOVA to compare the effectiveness of the intervention.
(The Students' Motivation Towards Science Learning (SMTSL) questionnaire defines six factors related to motivation in science learning, used to measure participants' motivation towards science learning: A. Self-efficacy, B. Active learning strategies, C. Science learning value, D. Performance goal, E. Achievement goal, and F. Learning environment stimulation.)
D. Filename: Combine_Discourse Transcription_1.zip and Combine_Discourse Transcription_2.zip
Short description: Qualitative Data. The zip files contain 6 Excel files holding 6 teachers' classroom teaching discourse transcriptions. The data will be analysed by thematic analysis to compare the effectiveness of the intervention.
(38 science classroom discourse videos of 8th graders were transcribed and coded using the Academically Productive Talk (APT) framework. APT, drawing from sociological, linguistic, and anthropological perspectives, comprises four primary constructs or objectives.)
E. Filename: Combine_Inquiry Report.zip
Short description: Qualitative Data. The zip file contains 2 Excel files holding 2 schools' inquiry report scores according to rubrics. The data will be analysed by thematic analysis to compare the effectiveness of the intervention.
(To assess the quality of students' arguments, a validated scoring rubric was employed to evaluate each student's written argument. The rubric primarily concentrated on the student's proficiency in five perspectives (Walker & Sampson, 2013, p. 573):
(AR1) Provide a well-articulated, adequate, and accurate claim that answers the research question; (AR2) Use genuine evidence to support the claim and present the evidence in an appropriate manner; (AR3) Provide enough valid and reliable evidence to support the claim; (AR4) Provide a rationale that is sufficient and appropriate; and (AR5) Compare his or her findings with those of other groups in the project.)
F. Filename: Combined_Interview Transcription.xlsx
Short description: Qualitative Data. The file contains all the students' interview transcriptions. The data will be analysed by thematic analysis to compare the effectiveness of the intervention.
(Semi-structured interviews were conducted to gather interviewees' motivation for CT and learning motivation in the context of science. The interview data were used to complement the quantitative results (i.e., the TCTS-PS, CCTDI, and SMTSL scores).)
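Since the quantitative files above (sections A-C) are destined for ANOVA, here is a minimal sketch of such a comparison, assuming a simple two-group design with illustrative scores; this uses scipy's f_oneway and is not the authors' actual analysis script.

```python
from scipy import stats

# Hypothetical post-test CCTDI totals for two groups (values illustrative).
control      = [3.8, 4.1, 3.9, 4.0, 3.7]
intervention = [4.3, 4.6, 4.2, 4.5, 4.4]

# One-way ANOVA: does mean score differ between groups?
f_stat, p_value = stats.f_oneway(control, intervention)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```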
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Excel township population over the last 20-plus years. It lists the population for each year, along with the year-on-year change in population, as well as the change in percentage terms for each year. The dataset can be utilized to understand the population change of Excel township across the last two decades. For example, using this dataset, we can identify whether the population is declining or increasing and, if there is a change, when the population peaked or whether it is still growing and has not yet reached its peak. We can also compare the trend with the overall trend of the United States population over the same period of time.
Key observations
In 2023, the population of Excel township was 300, a 0.99% year-over-year decrease from 2022. Previously, in 2022, the Excel township population was 303, a decline of 0.98% compared to a population of 306 in 2021. Over the last 20-plus years, between 2000 and 2023, the population of Excel township increased by 17. In this period, the peak population was 308, in the year 2020. The numbers suggest that the population has already reached its peak and is showing a trend of decline. Source: U.S. Census Bureau Population Estimates Program (PEP).
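The year-over-year figures quoted above can be reproduced from the population series itself; a minimal pandas sketch using the numbers from the key observations (the first year has no prior year, so its change is NaN).

```python
import pandas as pd

# Figures taken from the key observations above.
pop = pd.Series({2021: 306, 2022: 303, 2023: 300}, name="population")

# Year-over-year percent change, as reported in the dataset.
yoy = pop.pct_change() * 100
print(yoy.round(2))  # 2022: -0.98, 2023: -0.99
```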
When available, the data consists of estimates from the U.S. Census Bureau Population Estimates Program (PEP).
Data Coverage:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel township Population by Year. You can refer to the same here.
In this project, I analysed the employees of an organization located in two distinct countries using Excel. This project covers:
1) How to approach a data analysis project
2) How to systematically clean data
3) Doing EDA with Excel formulas & tables
4) How to use Power Query to combine two datasets
5) Statistical analysis of data
6) Using formulas like COUNTIFS, SUMIFS, XLOOKUP
7) Making an information finder with your data
8) Male vs. Female analysis with Pivot Tables
9) Calculating bonuses based on business rules
10) Visual analytics of data with 4 topics
11) Analysing the salary spread (histograms & box plots)
12) Relationship between salary & rating
13) Staff growth over time - trend analysis
14) Regional scorecard to compare NZ with India
Including various Excel features such as:
1) Using Tables
2) Working with Power Query
3) Formulas
4) Pivot Tables
5) Conditional formatting
6) Charts
7) Data Validation
8) Keyboard shortcuts & tricks
9) Dashboard design
This project delves into the sales data of Maven Coffee Shop to uncover valuable insights and trends. Using Excel, we meticulously cleaned, analyzed, and visualized the data to help understand the business's performance across various dimensions. The original data set was posted on Kaggle.
Key Objectives:
- Analyze Sales Trends: Identify sales patterns over time to understand peak periods and sales growth.
- Evaluate Product Performance: Determine which products and categories drive the most revenue.
- Assess Store Performance: Compare sales across different store locations to highlight top-performing stores.
- Interactive Insights: Create an interactive dashboard that allows stakeholders to explore the data without the risk of unintentional edits.
Features:
- Comprehensive Data Cleaning: Ensured the data is accurate and ready for analysis.
- Detailed Revenue Analysis: Explored total sales, average transaction values, and unit sales.
- Product and Store Analysis: Investigated sales by product category, type, and store location.
- Interactive Dashboard: Designed a user-friendly Excel dashboard for dynamic data interaction.
This project serves as a practical example of using Excel for data analysis and visualization, providing actionable insights into the operational and financial aspects of a coffee shop business.
License: CC0 1.0 (Public Domain Dedication), https://creativecommons.org/publicdomain/zero/1.0/
This dataset provides a dynamic Excel model for prioritizing projects based on Feasibility, Impact, and Size.
It visualizes project data on a Bubble Chart that updates automatically when new projects are added.
Use this tool to make data-driven prioritization decisions by identifying which projects are most feasible and high-impact.
Organizations often struggle to compare multiple initiatives objectively.
This matrix helps teams quickly determine which projects to pursue first by visualizing each project's Feasibility, Impact, and Size.
Example (partial data):
| Criteria | Project 1 | Project 2 | Project 3 | Project 4 | Project 5 | Project 6 | Project 7 | Project 8 |
|---|---|---|---|---|---|---|---|---|
| Feasibility | 7 | 9 | 5 | 2 | 7 | 2 | 6 | 8 |
| Impact | 8 | 4 | 4 | 6 | 6 | 7 | 7 | 7 |
| Size | 10 | 2 | 3 | 7 | 4 | 4 | 3 | 1 |
| Quadrant | Description | Action |
|---|---|---|
| High Feasibility / High Impact | Quick wins | Top Priority |
| High Impact / Low Feasibility | Valuable but risky | Plan carefully |
| Low Impact / High Feasibility | Easy but minor value | Optional |
| Low Impact / Low Feasibility | Low return | Defer or drop |
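A minimal sketch of the quadrant logic in the table above, assuming a midpoint threshold of 5 on the 1-10 scale used in the example data; the workbook itself may use different cut-offs.

```python
# Classify a project into one of the four quadrants described above.
def quadrant(feasibility: int, impact: int) -> str:
    hi_f, hi_i = feasibility >= 5, impact >= 5  # assumed threshold
    if hi_f and hi_i:
        return "Quick wins - Top Priority"
    if hi_i:
        return "Valuable but risky - Plan carefully"
    if hi_f:
        return "Easy but minor value - Optional"
    return "Low return - Defer or drop"

# Scores taken from the partial example data above.
projects = {"Project 1": (7, 8), "Project 2": (9, 4), "Project 4": (2, 6)}
for name, (f, i) in projects.items():
    print(name, "->", quadrant(f, i))
```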
The included file, Project_Priority_Matrix.xlsx, can be used for:
- Portfolio management
- Product or feature prioritization
- Strategy planning workshops
Project_Priority_Matrix.xlsx is free for personal and organizational use.
Attribution is appreciated if you share or adapt this file.
Author: [Asjad]
Contact: [m.asjad2000@gmail.com]
Compatible With: Microsoft Excel 2019+ / Office 365
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Excel population over the last 20-plus years. It lists the population for each year, along with the year-on-year change in population, as well as the change in percentage terms for each year. The dataset can be utilized to understand the population change of Excel across the last two decades. For example, using this dataset, we can identify whether the population is declining or increasing and, if there is a change, when the population peaked or whether it is still growing and has not yet reached its peak. We can also compare the trend with the overall trend of the United States population over the same period of time.
Key observations
In 2022, the population of Excel was 539, a 1.46% year-over-year decrease from 2021. Previously, in 2021, the Excel population was 547, a decline of 1.08% compared to a population of 553 in 2020. Over the last 20-plus years, between 2000 and 2022, the population of Excel decreased by 36. In this period, the peak population was 713, in the year 2010. The numbers suggest that the population has already reached its peak and is showing a trend of decline. Source: U.S. Census Bureau Population Estimates Program (PEP).
When available, the data consists of estimates from the U.S. Census Bureau Population Estimates Program (PEP).
Data Coverage:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research team curates, analyzes, and publishes demographics and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Excel Population by Year. You can refer to the same here.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These data accompany the paper "MULTISEGMENTAL KINEMATIC BEHAVIOR OF NORMAL AND PRONATED FEET DURING THE SQUAT PHASE OF THE ANTERIOR AND LATERAL STEP DOWN TESTS". The aim of the study was to compare the multisegmental kinematic behavior of neutral and pronated feet during the squat phase of the anterior and lateral step down tests. Our hypothesis was that neutral and pronated feet would exhibit different kinematic behaviors during the step down tests, based on analysis of the tibia, hindfoot, and forefoot segments, as was the case in previous studies that analyzed gait. The data were obtained through kinematic analysis with a three-dimensional system during the execution of two functional tests, the Anterior and Lateral Step Down Tests, using a multisegmental foot model in healthy individuals with pronated and neutral feet. The mean of the nine cycles of each variable for the two tasks was calculated and the data were saved in Excel format. Four documents are attached in the data repository: - INSTRUCTIONS FOR EXCEL FILES (Word): instructions and legends for the data and how to interpret the Excel files. - ANTERIOR STEP DOWN TEST (Excel): five tabs with all the variables used to compare the two groups. - LATERAL STEP DOWN TEST (Excel): five tabs with all the variables used to compare the two groups. - Anthropometric Characteristics (Excel): the anthropometric characteristics of each volunteer. The variables were compared between groups using multivariate analysis of variance. The mean of each variable and segment was used to identify the movement performed by the groups. The findings of this study identify that most of the differences between groups were in the frontal plane and in the FFHFA segment. The pronated foot presented decreased movement for most variables of the foot.
License: U.S. Government Works, https://www.usa.gov/government-works
License information was derived automatically
When water is pumped slowly from saturated sediment-water interface sediments, the more highly connected, mobile porosity domain is preferentially sampled, compared to less-mobile pore spaces. Changes in fluid electrical conductivity (EC) during controlled downward ionic tracer injections into interface sediments can be assumed to represent mobile porosity dynamics, which are therefore distinguished from less-mobile porosity dynamics that are measured using bulk EC geoelectrical methods. Fluid EC samples were drawn at flow rates similar to tracer injection rates to prevent inducing preferential flow. The data were collected using a stainless steel tube with slits cut into the bottom (USGS MINIPOINT style) connected to an EC meter via C-flex or neoprene tubing, and drawn up through the system via a peristaltic pump. The data were compiled into an Excel spreadsheet and time corrected to compare to bulk EC data that were collected simultaneously and contained in another section of t ...
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sheet 1 (Raw-Data): The raw data of the study are provided, presenting the tagging results for the measures described in the paper. For each subject, it includes multiple columns:
A. a sequential student ID;
B. an ID that defines a random group label and the notation;
C. the used notation: User Stories or Use Cases;
D. the case they were assigned to: IFA, Sim, or Hos;
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam;
F. a categorical representation of the grade (L/M/H), where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L otherwise;
G. the total number of classes in the student's conceptual model;
H. the total number of relationships in the student's conceptual model;
I. the total number of classes in the expert's conceptual model;
J. the total number of relationships in the expert's conceptual model;
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below);
P. the researchers' judgement of how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present.
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.
All the calculations and information provided in the following sheets originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection, including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes within the student model divided by the number of classes within the expert model is calculated (describing the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between student and expert model.
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model; it is calculated by dividing the number of aligned concepts (AL) by the sum of the aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model; it is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
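A minimal Python sketch of the two ratios as defined above, with hypothetical tag counts standing in for columns K-O of the raw data sheet:

```python
def correctness(al: int, wr: int, so: int, om: int) -> float:
    """Share of student-model classes fully aligned with the expert model."""
    return al / (al + om + so + wr)

def completeness(al: int, wr: int, om: int) -> float:
    """Share of expert-model classes represented (correctly or not)."""
    return (al + wr) / (al + wr + om)

# Hypothetical tag counts for one subject
al, wr, so, om = 12, 3, 2, 4
print(f"correctness  = {correctness(al, wr, so, om):.2f}")  # 0.57
print(f"completeness = {completeness(al, wr, om):.2f}")     # 0.79
```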
For Sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided. Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html. The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation - UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case - SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by the exam grades, converted to the categorical values High, Medium, and Low.
Comprehensive YouTube channel statistics for Learn Excel to excel, featuring 187,000 subscribers and 13,006,000 total views. This dataset includes detailed performance metrics such as subscriber growth, video views, engagement rates, and estimated revenue. The channel operates in the Lifestyle category. Track 174 videos with daily and monthly performance data, including view counts, subscriber changes, and earnings estimates. Analyze growth trends, engagement patterns, and compare performance against similar channels in the same category.
This dataset contains raw data (Excel spreadsheet, .xlsx), R statistical code (RMarkdown notebook, .Rmd), and rendered output of the R notebook (HTML). This comprises all raw data and code needed to reproduce the analyses in the manuscript: Pokoo-Aikins, A., C. M. McDonough, T. R. Mitchell, J. A. Hawkins, L. F. Adams, Q. D. Read, X. Li, R. Shanmugasundaram, E. Rodewald, P. Acharya, A. E. Glenn, and S. E. Gold. 2024. Mycotoxin contamination and the nutritional content of corn targeted for animal feed. Poultry Science, 104303. DOI: 10.1016/j.psj.2024.104303.
The data consist of the mycotoxin concentration, nutrient content, and color of different samples of corn (maize). We model the effect of mycotoxin concentration on the concentration of several different nutrients in corn. We include main effects of the different mycotoxins as well as two-way interactions between each pair of mycotoxins. We also include analysis of mycotoxin effects on the L variable from the color analysis, because it seems to be the most important one for determining the overall color of the corn. We use AIC to compare the models with and without interaction terms. We find that the models without interaction terms are better, so we omit the interactions. We present adjusted R-squared values for each model as well as the p-values associated with the average slopes (effect of each mycotoxin on each nutrient). Finally, we produce the figures that appear in the above-cited manuscript. Column metadata can be found in the Excel spreadsheet.
Included files:
- Combined LCMS NIR Color data.xlsx: Excel file with all raw data (sheet 1) and column metadata (sheet 2).
- corn_mycotoxin_analysis_archived.Rmd: RMarkdown notebook with all analysis code.
- corn_mycotoxin_analysis_archived.html: rendered output of the R notebook.
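The published analysis lives in the RMarkdown notebook (in R); as a rough Python analogue of the AIC-based model-selection step described above, with synthetic stand-in data and assumed column names (the real headers are in the spreadsheet's metadata sheet):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the corn data; column names are assumptions.
df = pd.DataFrame({
    "fumonisin":      [0.5, 1.2, 3.4, 0.1, 2.2, 4.0, 1.8, 0.9],
    "deoxynivalenol": [0.2, 0.8, 1.5, 0.1, 1.1, 2.0, 0.6, 0.4],
    "protein":        [8.9, 8.5, 7.8, 9.1, 8.2, 7.5, 8.4, 8.8],
})

main     = smf.ols("protein ~ fumonisin + deoxynivalenol", data=df).fit()
interact = smf.ols("protein ~ fumonisin * deoxynivalenol", data=df).fit()

# Lower AIC wins; the paper reports the no-interaction models as better.
print(f"main AIC = {main.aic:.1f}, interaction AIC = {interact.aic:.1f}")
print(f"adjusted R-squared (main): {main.rsquared_adj:.3f}")
```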
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1. File: Maqbool_SLR_2023_JSS_Inclusion_610.xlsm.
There are two sheets in the file. A. The Final_Selected_papers sheet provides a comprehensive list of the articles (n=610) selected for our SLR; the selection process and data items are specified and detailed in the article.
B. The Rejected_After_Full_Review sheet provides a comprehensive list of the articles (n=153) rejected for our SLR based on the inclusion/exclusion criteria after the full-article review process; the process and data items are specified and detailed in the article.
2. File: Maqbool_SLR_2023_JSS_Data_Extraction_Form.pdf
This file provides the comprehensive data extraction form whose process and data items are specified and detailed in the article. The form was used to elicit data relevant to answering the postulated research questions, and it served as the foundation for the additional information presented in the final paper.
3. File: Bilal_SLR_JSS_Primary_Studies_References.pdf
This file contains the primary selected studies (n=610) for the systematic literature review. The systematic review aims to explore and analyse the research literature related to usability evaluation methods and their effectiveness and efficiency in the context of digital health applications. This file helps to identify the reference of each primary study cited in the paper using a prefix (S, e.g., S137). It can be used for peer review, ensuring the reliability and correctness of the findings.
4. File: SLR_Analysis_updated_2023.nvp
The data extracted from each article were recorded in a worksheet (Excel) and then coded in NVivo 12/14 to categorise (classify) and compare the extracted facets. Each data item's category and related paper ID are coded in the given Excel file. Papers were not included in the NVivo project due to copyright concerns; relevant papers can be tracked using the provided spreadsheet file (see the Paper ID cell).
The file(s) are cleaned as much as reasonable and other raw data are removed. This file does not include the matrix tables or codes, which were produced and analysed at run time during the analysis phase, although the given package allows them to be regenerated.
-------- UPDATE: --------
5. File: SLR_Analysis_updated_2023_for_MAC.nvpx
This is an extra copy of the NVivo project, created for Mac users.
This replication package was produced and published here. The research was conducted by Karlstad University researchers. We publish datasets to improve coverage and accessibility. For more information or concerns, contact us.
Linked paper published at: Maqbool, Bilal, and Sebastian Herold. "Potential effectiveness and efficiency issues in usability evaluation within digital health: A systematic literature review." Journal of Systems and Software (2023): 111881.
DOI: https://doi.org/10.1016/j.jss.2023.111881
This work was funded, in parts, by Region Värmland through the DHINO project, Sweden (Grant: RUN/220266) and Vinnova through the DigitalWell Arena (DWA) project, Sweden (Grant: 2018-03025).
This spreadsheet model calculates the net income for irrigated agricultural production. The model is designed to evaluate the economics of deficit irrigation (irrigation at less than the amount required to produce maximum yield). The spreadsheet first models the water production function for a crop, then uses that relationship along with crop price and production costs to calculate net income and the irrigation amount that maximizes net income. This spreadsheet is similar to another posted at Ag Data Commons, "Economic Model of Deficit Irrigation" (http://dx.doi.org/10.15482/USDA.ADC/1504421); that model was designed primarily to evaluate deficit irrigation as a means of comparing revenue under reduced water consumption with the income gained by transferring the saved water. The model includes two common scenarios: 1) irrigation water supply is adequate but expensive, and 2) irrigation water supply is inadequate to fully irrigate the available land. In the first scenario, net income is maximized when the marginal cost of production, including water, is equal to the marginal revenue. In the second scenario, net income is maximized when the value of the water is maximized by selecting the portion of the land that should be irrigated; in this scenario, the value and costs of the un-irrigated land are included. The first worksheet of the spreadsheet describes the relationships used in each worksheet and the input parameters required. Additional worksheets calculate the water production function, the irrigation water production function, and the net income for each of the two scenarios. The worksheets allow the user to input the various biophysical and economic parameters relevant to their conditions and allow evaluating various parameter combinations. Each worksheet contains graphs to visualize the results.
Resources in this dataset:
Resource Title: Economic Model of Deficit Irrigation II (spreadsheet). File Name: WPF Econ Model V2 Mod.xlsx. Resource Description: The spreadsheet contains 5 worksheets; the first describes the relationships in the remaining worksheets and the parameters required by the model. Resource Software Recommended: Microsoft Excel 365 (may work on earlier versions), url: https://www.microsoft.com/en-us/microsoft-365/get-started-with-office-2019
Resource Title: Description of the Model. File Name: DataDictionary.pdf. Resource Description: Description of the model and input parameters. Resource Software Recommended: Adobe Reader, url: https://get.adobe.com/reader/otherversions/
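A minimal sketch of scenario 1 (water adequate but expensive), assuming a quadratic water production function with illustrative coefficients rather than the spreadsheet's actual relationships; note the optimum lands below the full-irrigation amount, which is the point of deficit irrigation.

```python
import numpy as np

def net_income(w, price=0.20, water_cost=1.00, fixed=300.0):
    """Net income per unit area as a function of applied water w (mm),
    using an assumed quadratic production function."""
    yield_kg = 40 * w - 0.02 * w**2   # illustrative coefficients
    return price * yield_kg - water_cost * w - fixed

w = np.linspace(0, 1000, 1001)        # candidate irrigation amounts
best = w[np.argmax(net_income(w))]
print(f"net income is maximized near {best:.0f} mm of applied water")
# ~875 mm, where marginal revenue 0.20*(40 - 0.04*w) equals the
# marginal water cost of 1.00; maximum yield would need 1000 mm.
```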
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
We present ChemPager, a freely available tool for systematically evaluating chemical syntheses. By processing and visualizing chemical data, the impact of past changes is uncovered and future work guided. The tool calculates commonly used metrics such as process mass intensity (PMI), Volume–Time Output, and production costs. Also, a set of scores is introduced aiming to measure crucial but elusive characteristics such as process robustness, design, and safety. Our tool employs a hierarchical data layout built on common software for data entry (Excel, Google Sheets, etc.) and visualization (Spotfire). With all project data being stored in one place, cross-project comparison and data aggregation becomes possible as well as cross-linking with other data sources or visualizations.
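As an illustration of one metric named above, here is PMI as it is commonly defined (total mass of all process inputs divided by the mass of isolated product); a generic sketch, not ChemPager's internal code.

```python
def process_mass_intensity(input_masses_kg, product_mass_kg):
    """PMI = total mass of raw materials, reagents, and solvents
    used in a process divided by the mass of isolated product."""
    return sum(input_masses_kg) / product_mass_kg

# Hypothetical step: inputs totalling 58 kg to make 1.6 kg of product
inputs = [2.5, 5.0, 20.0, 30.5]   # kg of each input stream
print(f"PMI = {process_mass_intensity(inputs, 1.6):.1f}")  # 36.3
```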