License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Excel file performs a statistical test of whether two ROC curves differ from each other, based on the area under the curve (AUC). You'll need the coefficient from the table presented in the following article to enter the correct value for the comparison: Hanley JA, McNeil BJ (1983) A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology 148:839-843.
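For reference, the underlying calculation is the correlated-AUC z-test from Hanley and McNeil (1983): z = (A1 - A2) / sqrt(SE1^2 + SE2^2 - 2*r*SE1*SE2), where r is the correlation coefficient read from their table. A minimal Python sketch of that formula follows (not the spreadsheet itself; the function name and example values are illustrative):

```python
import math

def hanley_mcneil_z(auc1, se1, auc2, se2, r):
    """Two-sided z-statistic for correlated AUCs (Hanley & McNeil, 1983).

    auc1, auc2 : areas under the two ROC curves derived from the same cases
    se1, se2   : their standard errors
    r          : correlation coefficient taken from the table in the article
    """
    se_diff = math.sqrt(se1**2 + se2**2 - 2 * r * se1 * se2)
    return (auc1 - auc2) / se_diff

# Example with made-up numbers: |z| beyond ~1.96 suggests the AUCs differ at p < 0.05.
z = hanley_mcneil_z(auc1=0.85, se1=0.03, auc2=0.80, se2=0.035, r=0.45)
print(round(z, 2))
```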
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Quantity estimation and cost analysis of a sewage treatment plant (STP) unit are performed both by the manual method and with BIM automation. The components of the unit include the inlet chamber, screen chamber (manual and automatic), grit chamber (manual and automatic), and distribution chamber. Construction specifications and unit rates are obtained from the state schedule of rates for all components of the STP unit. Non-dimensional drawings of the STP are provided in PDF format for better visibility, and Excel sheets of the quantity estimate are also provided.
State estimates for these years are no longer available due to methodological concerns with combining 2019 and 2020 data. We apologize for any inconvenience or confusion this may cause. Because of the COVID-19 pandemic, most respondents answered the survey via the web in Quarter 4 of 2020, even though all responses in Quarter 1 were from in-person interviews. It is known that people may respond to the survey differently while taking it online, thus introducing what is called a mode effect. When the state estimates were released, it was assumed that the mode effect was similar for different groups of people. However, later analyses have shown that this assumption should not be made. Because of these analyses, along with concerns about the rapid societal changes in 2020, it was determined that averages across the two years could be misleading. For more detail on this decision, see the 2019-2020 state data page.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes multiple columns:
A. a sequential student ID
B. an ID that defines a random group label and the notation
C. the notation used: User Stories or Use Cases
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade (L/M/H), where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L otherwise
G. the total number of classes in the student's conceptual model
H. the total number of relationships in the student's conceptual model
I. the total number of classes in the expert's conceptual model
J. the total number of relationships in the expert's conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
P. the researchers' judgement on how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present.
Tagging scheme:
Aligned (AL) - A concept is represented as a class in both models, either
with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is
incorrectly represented in the student model, either (i) via an attribute,
method, or relationship rather than a class, or (ii) using a generic term
(e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical
implementation aspect, e.g., access control. Classes that represent a legacy
system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in
CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in
CM-Expert.
All the calculations and information provided in the following sheets
originate from that raw data.
Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection,
including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.
Sheet 3 (Size-Ratio):
The number of classes within the student model divided by the number of classes within the expert model is calculated (describing the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
Sheet 4 (Overall):
Provides an overview of all subjects regarding the encountered situations, completeness, and correctness, respectively. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model. It is calculated by dividing the number of aligned concepts (AL) by the sum of the number of aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model. Completeness is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the number of aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
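As a compact restatement of those two definitions (a minimal sketch; the counts AL, WR, SO, and OM correspond to columns K-O of Sheet 1, and the function names are illustrative):

```python
def correctness(al, wr, so, om):
    # Aligned classes over all tagged classes (AL + OM + SO + WR), as defined above.
    return al / (al + om + so + wr)

def completeness(al, wr, om):
    # Correctly or incorrectly represented classes over the expert-model classes.
    return (al + wr) / (al + wr + om)

# Example: 12 aligned, 3 wrongly represented, 2 system-oriented, 5 omitted.
print(correctness(al=12, wr=3, so=2, om=5))   # 12 / 22
print(completeness(al=12, wr=3, om=5))        # 15 / 20
```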
For sheet 4 as well as for the following four sheets, diverging stacked bar
charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided; a minimal sketch of these two statistics is given after the sheet descriptions below. Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html. The independent and moderating variables can be found as follows:
Sheet 5 (By-Notation):
Model correctness and model completeness are compared by notation - UC, US.
Sheet 6 (By-Case):
Model correctness and model completeness are compared by case - SIM, HOS, IFA.
Sheet 7 (By-Process):
Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.
Sheet 8 (By-Grade):
Model correctness and model completeness are compared by the exam grades, converted to the categorical values High, Medium, and Low.
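As referenced above, a minimal sketch of how the t-test and Hedges' g at the bottom of Sheets 4-8 could be reproduced outside Excel (the dataset itself uses Excel and the psychometrica.de tool; the Python below is only illustrative, with made-up example scores):

```python
import math
from statistics import mean, stdev
from scipy import stats  # independent-samples t-test

def hedges_g(group_a, group_b):
    """Hedges' g: Cohen's d with the usual small-sample bias correction."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = math.sqrt(
        ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
        / (na + nb - 2)
    )
    d = (mean(group_a) - mean(group_b)) / pooled_sd
    return d * (1 - 3 / (4 * (na + nb) - 9))

us = [0.80, 0.75, 0.90, 0.70, 0.85]  # e.g., correctness scores, User Stories group
uc = [0.65, 0.70, 0.60, 0.75, 0.68]  # e.g., correctness scores, Use Cases group
t, p = stats.ttest_ind(us, uc)
print(p, hedges_g(us, uc))
```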
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The uploaded files are:
1) Excel file containing 6 sheets, in the following order: "Data Extraction" (summarized final data extractions from the three reviewers involved), "Comparison Data" (data related to the comparisons investigated), "Paper level data" (summaries at paper level), "Outcome Event Data" (information on the number of events for every outcome investigated within a paper), and "Tuning Classification" (data related to the manner of hyperparameter tuning of the machine learning algorithms).
2) R script used for the analysis. (In order to read the data, please save the "Comparison Data", "Paper level data", and "Outcome Event Data" Excel sheets as txt files; a conversion sketch is given after this list. In the R script, srpap refers to the "Paper level data" sheet, srevents refers to the "Outcome Event Data" sheet, and srcompx refers to the "Comparison Data" sheet.)
3) Supplementary Material: Including Search String, Tables of data, Figures
4) PRISMA checklist items
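The export step mentioned in item 2 can also be scripted. A minimal Python sketch, assuming pandas and openpyxl are available; the workbook filename below is a placeholder (the original instructions simply use Excel's Save As):

```python
import pandas as pd  # reading .xlsx workbooks additionally requires openpyxl

# Placeholder name; replace with the actual uploaded Excel file.
workbook = "review_data.xlsx"

# Sheets the R script expects as tab-delimited txt files.
for sheet in ["Comparison Data", "Paper level data", "Outcome Event Data"]:
    df = pd.read_excel(workbook, sheet_name=sheet)
    df.to_csv(sheet.replace(" ", "_") + ".txt", sep="\t", index=False)
```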
In the beginning, the case was just company data that did not convey any useful information to help decision-makers. After collecting a number of revenues and expenses over the months, we needed answers to a number of questions in order to make important decisions based on data rather than intuition.
The questions:
About Rev. & Exp.
- What are the total sales and profit for the whole period? What is the total number of products sold? What is the net profit?
- In which month was the highest share of revenue achieved? And within that month, which day had the largest amount of revenue?
- In which month was the highest share of expenses achieved? And within that month, which day had the largest amount of expenses?
- What is the extent of the change in expenditures for each month?
- What is the percentage change in net profit over the months?
About Distribution
- What is the number of products sold each month in the largest state?
- Which are the top 3 states buying products during the two years?
Comparison
- Between sales methods, by sales?
- Between men's and women's products, by sales?
- Between retailers, by profit?
What I did:
- Understanding the data.
- Preprocessing and cleaning the data.
- Solving the problems found during cleaning, such as missing data or wrongly typed data.
- Querying the data and making some calculations, such as COGS, with Power Query in Excel.
- Modeling and creating some measures on the data with Power Pivot in Excel.
- After finishing processing and preparation, building some pivot tables to answer the questions.
- Finally, building a dashboard with Power BI to visualize the results.
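The same monthly aggregations can be sketched outside Excel. A minimal pandas example, assuming hypothetical Date, Revenue, and Expenses columns (the actual workbook layout is not specified here, and Power Query/Power Pivot remain the tools actually used):

```python
import pandas as pd

# Hypothetical columns and values; the real workbook layout may differ.
df = pd.DataFrame({
    "Date": pd.to_datetime(["2021-01-05", "2021-01-20", "2021-02-10", "2021-02-25"]),
    "Revenue": [1200.0, 800.0, 1500.0, 900.0],
    "Expenses": [700.0, 300.0, 650.0, 400.0],
})

monthly = df.groupby(df["Date"].dt.to_period("M"))[["Revenue", "Expenses"]].sum()
monthly["NetProfit"] = monthly["Revenue"] - monthly["Expenses"]
monthly["NetProfitChangePct"] = monthly["NetProfit"].pct_change() * 100

print(monthly)                                    # totals and month-over-month change
print("Best revenue month:", monthly["Revenue"].idxmax())
```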
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The attachment includes three folders:
The first folder, Data classification (testing and training), consists of two subfolders (crown_radius and height). The crown_radius folder contains Excel data for three plant functional types (PFTs): temperate needleleaf trees (MN), temperate broadleaf trees (MB), and tropical broadleaf trees (TB). Each of these Excel files contains data on 19 soil factors and 22 climate factors, as well as information such as crown_radius_m, mask, stem_diameter_cm, etc. The information in the second folder, height, is similar; it corresponds to Table 1 (Data summary) and Figure 3 for each PFT in the article.
The second folder, Feature importance, contains two Excel spreadsheets (crown_radius-FI and height-FI). The first spreadsheet, crown_radius-FI, contains feature importance values for the three plant functional types (PFTs): temperate needleleaf trees (MN), temperate broadleaf trees (MB), and tropical broadleaf trees (TB). The information in the second spreadsheet, height-FI, is similar; it corresponds to Figure 5 and Figure S3 in the article.
The third folder "program" contains two packages (make_model1 and make_model2) and a calling program "Source program". Among them, the make_model1 package is mainly used to obtain the best parameters for selecting the model; The make_model2 package is based on the selection of the make_model1 package to further analyze the specific FI values of the factors in the best model. The Source program is used to make specific calls to the package according to the requirements.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Excel files containing the data for the paper titled "Diffuse blue vs. structural silver—comparing alternative strategies for pelagic background matching between two coral reef fishes." See "Data for creole wrasse vs bar jack.docx" for more details.
This project delves into the sales data of Maven Coffee Shop to uncover valuable insights and trends. Using Excel, we meticulously cleaned, analyzed, and visualized the data to help understand the business's performance across various dimensions. The original data set was posted on Kaggle.
Key Objectives:
- Analyze Sales Trends: Identify sales patterns over time to understand peak periods and sales growth.
- Evaluate Product Performance: Determine which products and categories drive the most revenue.
- Assess Store Performance: Compare sales across different store locations to highlight top-performing stores.
- Interactive Insights: Create an interactive dashboard that allows stakeholders to explore the data without the risk of unintentional edits.
Features:
- Comprehensive Data Cleaning: Ensured the data is accurate and ready for analysis.
- Detailed Revenue Analysis: Explored total sales, average transaction values, and unit sales.
- Product and Store Analysis: Investigated sales by product category, type, and store location.
- Interactive Dashboard: Designed a user-friendly Excel dashboard for dynamic data interaction.
This project serves as a practical example of using Excel for data analysis and visualization, providing actionable insights into the operational and financial aspects of a coffee shop business.
This resource, an MS Excel refresher, extends the level for this Data Nugget. Students are given an Excel workbook with the data and asked to graph and calculate diversity using Excel functions (rather than drawing graphs by hand as in the original Data Nugget). The data set used is the same. I use this activity in an upper-division Environmental Science course for majors that focuses on Restoration Ecology. The simplicity of the data set and the comparison of reptile diversity among urban, non-urban, and urban-rehabilitated sites make it a great example for doing calculations in spreadsheets.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for "Comparing Transaction Logs to ILL requests to Determine the Persistence of Library Patrons In Obtaining Materials" article. Excel file contains all data in four worksheets Zip file contains four csv files, one for each worksheet: - Comparing Transaction Logs to ILL - 2016 ILL Raw ...Data.csv - Comparing Transaction Logs to ILL - 2015 ILL Raw Data.csv - Comparing Transaction Logs to ILL - 2016 Zero Search Raw Data.csv - Comparing Transaction Logs to ILL - 2015 Zero Search Raw Data.csv [more]
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Vrinda Store: Interactive MS Excel dashboard (Feb 2024 - Mar 2024)
The owner of Vrinda Store wants to create an annual sales report for 2022 so that their employees can understand their customers and grow sales further. The questions asked by the owner of Vrinda Store are as follows:
1) Compare the sales and orders using a single chart.
2) Which month had the highest sales and orders?
3) Who purchased more in 2022 - women or men?
4) What were the different order statuses in 2022?
And some other business-related questions. The owner of Vrinda Store wanted a visual story of their data that could depict the real-time progress and sales insights of the store. This project is an MS Excel dashboard that presents an interactive visual story to help the owner and employees increase sales.
Tasks performed: data cleaning, data processing, data analysis, data visualization, reporting.
Tool used: MS Excel.
Skills: Data Analysis · Data Analytics · MS Excel · Pivot Tables
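A minimal pandas sketch of how the first two questions could be answered programmatically (hypothetical Month, Sales, and Orders columns; the actual 2022 workbook schema is not specified here, and Excel pivot charts are the tool actually used):

```python
import pandas as pd

# Hypothetical schema and values; the real Vrinda Store workbook may differ.
df = pd.DataFrame({
    "Month": ["Jan", "Jan", "Feb", "Feb", "Mar"],
    "Sales": [500, 700, 900, 400, 650],
    "Orders": [5, 7, 9, 4, 6],
})

monthly = df.groupby("Month")[["Sales", "Orders"]].sum()
print(monthly)                                            # 1) sales vs orders per month (chart source)
print("Highest sales month:", monthly["Sales"].idxmax())  # 2) peak month
```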
These tables show the significance testing results between a particular state and other states or the national estimate. They are in CSV or Excel format.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spreadsheets targeted at the analysis of GHS safety fingerprints.
Abstract: Over a 20-year period, the UN developed the Globally Harmonized System (GHS) to address international variation in chemical safety information standards. By 2014, the GHS had become widely accepted internationally and has become the cornerstone of OSHA's Hazard Communication Standard. Despite this progress, today we observe that there are inconsistent results when different sources apply the GHS to specific chemicals, in terms of the GHS pictograms, hazard statements, precautionary statements, and signal words assigned to those chemicals. In order to assess the magnitude of this problem, this research uses an extension of the "chemical fingerprints" used in 2D chemical structure similarity analysis to GHS classifications. By generating a chemical safety fingerprint, the consistency of the GHS information for specific chemicals can be assessed. The problem is that the sources of GHS information can differ. For example, the SDS for sodium hydroxide pellets found on Fisher Scientific's website displays two pictograms, while the GHS information for sodium hydroxide pellets on Sigma Aldrich's website has only one pictogram. A chemical information tool, which identifies such discrepancies within a specific chemical inventory, can assist in maintaining the quality of the safety information needed to support safe work in the laboratory. The tools for this analysis will be scaled to the size of a moderately large research lab or a small chemistry department as a whole (between 1000 and 3000 chemical entities) so that labelling expectations within these universes can be established as consistently as possible. Most chemists are familiar with spreadsheet programs such as Excel and Google Sheets, which many chemists use daily. Through a monadal programming approach with these tools, the analysis of GHS information can be made possible for non-programmers. This monadal approach employs single spreadsheet functions to analyze the data collected, rather than long programs, which can be difficult to debug and maintain. Another advantage of this approach is that the single monadal functions can be mixed and matched to meet new goals as information needs about the chemical inventory evolve over time. These monadal functions are used to convert GHS information into binary strings of data called "bitstrings". This approach is also used when comparing chemical structures. The binary approach makes data analysis more manageable, as GHS information comes in a variety of formats, such as pictures or alphanumeric strings, which are difficult to compare on their face. Bitstrings generated from the GHS information can be compared using an operator such as the Tanimoto coefficient to yield values from 0 for strings that have no similarity to 1 for strings that are the same. Once a particular set of information is analyzed, the hope is that the same techniques could be extended to more information. For example, if GHS hazard statements are analyzed through a spreadsheet approach, the same techniques, with minor modifications, could be used to tackle more GHS information such as pictograms.
Intellectual Merit: This research indicates that the cheminformatic technique of structural fingerprints can be used to create safety fingerprints. Structural fingerprints are binary bit strings that are obtained from the non-numeric entity of 2D structure.
This structural fingerprint allows comparison of 2D structures through the use of the Tanimoto coefficient. The use of this structural fingerprint can be extended to safety fingerprints, which can be created by converting a non-numeric entity such as GHS information into a binary bit string and comparing data through the use of the Tanimoto coefficient.
Broader Impact: Extension of this research can be applied to many aspects of GHS information. This research focused on comparing GHS hazard statements, but it could be further applied to other pieces of GHS information such as pictograms and GHS precautionary statements. Another facet of this research is allowing the chemist who uses the data to compare large datasets using spreadsheet programs such as Excel without needing a large programming background. Development of this technique will also benefit the Chemical Health and Safety and Chemical Information communities by better defining the quality of GHS information available and providing a scalable and transferable tool to manipulate this information to meet a variety of other organizational needs.
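As a concrete illustration of the bitstring comparison described above, the Tanimoto coefficient of two binary strings is the count of bits that are "on" in both divided by the count of bits that are "on" in either. A minimal sketch (the bitstrings below are illustrative, not actual GHS encodings):

```python
def tanimoto(a: str, b: str) -> float:
    """Tanimoto coefficient between two equal-length bitstrings of '0'/'1'."""
    both = sum(1 for x, y in zip(a, b) if x == "1" and y == "1")
    either = sum(1 for x, y in zip(a, b) if x == "1" or y == "1")
    return both / either if either else 1.0

# Hypothetical hazard-statement bitstrings for one chemical from two SDS sources.
fisher = "1101000010"
sigma  = "1100000011"
print(tanimoto(fisher, sigma))  # 0.6: identical strings give 1.0, disjoint strings 0.0
```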
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains video game sales data prepared for an Excel data analysis and dashboard project.
It includes detailed information on:
Game titles
Platforms
Genres
Publishers
Regional and global sales
The dataset was cleaned, structured, and analyzed in Microsoft Excel to explore patterns in the global video game market. It can be used to:
Practice data cleaning and pivot tables
Build interactive dashboards
Perform sales comparisons across regions and genres
Develop business insights from entertainment data
🧩 File Information
Format: .xlsx (Excel Workbook)
Columns: Name, Platform, Year, Genre, Publisher, NA_Sales, EU_Sales, JP_Sales, Other_Sales, Global_Sales
💡 Use Cases
Excel dashboard and chart creation
Data visualization and storytelling
Business and market analysis practice
Portfolio or learning projects
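As a small illustration of the pivot-table and sales-comparison use cases above, here is a pandas equivalent using the column names listed under File Information (a sketch only; the filename is a placeholder and Excel remains the intended tool):

```python
import pandas as pd  # reading .xlsx workbooks additionally requires openpyxl

# Placeholder filename; use the actual workbook from this dataset.
df = pd.read_excel("video_game_sales.xlsx")

# Total sales per genre across regions, similar to an Excel pivot table.
pivot = df.pivot_table(
    index="Genre",
    values=["NA_Sales", "EU_Sales", "JP_Sales", "Global_Sales"],
    aggfunc="sum",
).sort_values("Global_Sales", ascending=False)
print(pivot.head(10))
```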
👤 Prepared by
Adewale Lateef W — for data analysis and Excel dashboard learning purposes.
This report from the GLA Intelligence Unit compares 2011 census estimates of the population aged 0-18 to the following alternative data sources:
• ONS 2010 based sub-national population projections (SNPP);
• GLA 2011 round population projections;
• General Practitioner registrations; and
• Child benefit claims.
The report is available to download here.
An Excel file containing the data behind the charts and tables in the report is available to download here.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The open repository consists of two folders: Dataset and Picture. The Dataset folder contains the file "AWS Dataset Pangandaraan.xlsx". There are 10 columns, with the first three columns as time attributes and the other six as atmosphere parameters. Each parameter has 8085 data points, and each parameter has an index we added at the bottom of its column, giving the minimum, maximum, and average values.
For further use, the user can choose one or more parameters for calculation or analysis. For example, wind data (speed and direction) can be used to calculate waves using the hindcast method. Furthermore, the user can filter the data using Excel's filter feature to extract an exact time range for analyzing various phenomena considered correlated with atmospheric conditions around Pangandaran, Indonesia.
The second folder, named "Picture", contains three figures: the monthly distribution of the datasets, the temporal data, and a wind rose.
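The time-range filtering described above can also be done outside Excel. A minimal pandas sketch, assuming a single combined datetime column (the actual three time-attribute column names are not given here, so the "Datetime" name below is hypothetical):

```python
import pandas as pd  # reading .xlsx workbooks additionally requires openpyxl

df = pd.read_excel("AWS Dataset Pangandaraan.xlsx")

# Hypothetical: assume a combined "Datetime" column exists (or build one from
# the three time-attribute columns, whose exact names are not specified here).
df["Datetime"] = pd.to_datetime(df["Datetime"])

# Extract an exact time range, e.g. one month of wind data for a hindcast.
mask = (df["Datetime"] >= "2020-01-01") & (df["Datetime"] < "2020-02-01")
january = df.loc[mask]
print(january.describe())  # min/max/mean per parameter, like the index rows in the sheet
```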
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data archive accompanying the peer-reviewed journal article "Comparison of co-located rBC and EC mass concentration measurements during field campaigns at several European sites". In January 2021 this article was accepted for publication in the journal Atmospheric Measurement Techniques. Data are uploaded in the form of Igor 8.0 graphics source files (.pxp) and data exported to Excel spreadsheets (.xlsx).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Excel file contains individual data from the two cohorts presented in our publication.
Each Excel sheet presents a separate portion of the data (one sheet per figure).
This dataset contains Normalized Difference Vegetation Index (NDVI) images from the 1999 growing season at the Toolik Lake Field Station, documenting differences on the study site between control and treatment plots. For more information, please see the readme file. NOTE: This dataset contains the data in Excel format.