25 datasets found
  1. Sales Dashboard in Microsoft Excel

    • kaggle.com
    zip
    Updated Apr 14, 2023
    Cite
    Bhavana Joshi (2023). Sales Dashboard in Microsoft Excel [Dataset]. https://www.kaggle.com/datasets/bhavanajoshij/sales-dashboard-in-microsoft-excel/discussion
    Explore at:
    zip (253363 bytes)
    Available download formats
    Dataset updated
    Apr 14, 2023
    Authors
    Bhavana Joshi
    Description

    This interactive sales dashboard is designed in Excel for B2C businesses such as Dmart, Walmart, Amazon, shops, and supermarkets, using slicers, pivot tables, and pivot charts.

    Dashboard Overview

    1. Sales Dashboard ==> designed for B2C businesses such as Dmart, Walmart, Amazon, shops, and supermarkets.
    2. Slicers ==> used to drill down into the data by year, month, sales type, and mode of payment.
    3. Total Sales/Total Profit ==> total sales, total profit, and profit percentage are combined into a monthly view; each series can be hidden or shown to view it individually or comparatively.
    4. Product Visual ==> shows product-wise sales for the selected period. Ten products are visible at a glance, and you can scroll up and down to view the other products in the list.
    5. Daily Sales ==> shows day-wise sales (area chart).
    6. Sales Type/Payment Mode ==> shows the percentage contribution of sales by type of selling and mode of payment.
    7. Top Product & Category ==> highlights the top-selling product and product category.
    8. Category ==> the final visual shows the category-wise sales contribution.

    Datasheets Overview

    1. The workbook includes a master data sheet, or catalog, stored as a table.
    2. The first column is the product ID; the entries in this column are unique.
    3. The second column is the product name. These two columns could be merged into one, but they are kept separate because products can share the same name while differing in parameters such as price or supplier.
    4. The next column is the category column, i.e. the product category, such as cosmetics, foods, drinks, or electronics.
    5. The fourth column is the unit of measure (UOM), which you can update based on the products you have.
    6. The last two columns are the buying price and the selling price, i.e. the unit purchase price and the unit selling price.

    Input Sheet

    The first column is the date of sale. The second column is the product ID. The third column is the quantity. The fourth column is the sales type, such as direct selling, purchase by a wholesaler, or online order. The fifth column is the mode of payment, either online or cash; you can update these two columns as required. The last column is the discount percentage: if you want to offer a discount, add it here.

    Analysis Sheet: where all backend calculations are performed.
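    The backend calculations can be mirrored outside Excel. Below is a minimal Python/pandas sketch based on the sheet layout described above; the sheet and column names ("Master", "Input", "Product ID", "Quantity", "Selling Price", "Buying Price", "Discount %", "Date") are illustrative assumptions, not taken from the actual workbook.

        import pandas as pd

        # Read the catalog (master data) and the input sheet; sheet and column names are assumed.
        master = pd.read_excel("sales_dashboard.xlsx", sheet_name="Master")
        sales = pd.read_excel("sales_dashboard.xlsx", sheet_name="Input", parse_dates=["Date"])

        # Join each sale to its catalog entry via the product ID.
        df = sales.merge(master, on="Product ID", how="left")

        # Per-line revenue, cost, and profit, applying the discount percentage.
        df["Revenue"] = df["Quantity"] * df["Selling Price"] * (1 - df["Discount %"] / 100)
        df["Cost"] = df["Quantity"] * df["Buying Price"]
        df["Profit"] = df["Revenue"] - df["Cost"]

        # Monthly totals and profit percentage, as shown on the dashboard.
        monthly = df.groupby(df["Date"].dt.to_period("M"))[["Revenue", "Profit"]].sum()
        monthly["Profit %"] = 100 * monthly["Profit"] / monthly["Revenue"]
        print(monthly)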

    So, basically, these are the four sheets mentioned above, each with its own task.

    A sales dashboard like this enables organizations to visualize their real-time sales data and boost productivity.

    A dashboard is a very useful tool that brings all the data together in the form of charts, graphs, statistics, and other visualizations, which supports data-driven decision making.

    Questions & Answers

    1. What profit ratio on sales is shown for 2021 and 2022? ==> The total profit ratio on sales in 2021 is 19%, with PRODUCT42 as the largest seller, whereas the profit ratio for 2022 is 22%, with PRODUCT30 as the largest seller.
    2. Which product had the largest sales in 2021 and 2022? ==> The top product in 2021 is PRODUCT42, with total sales of $12,798, whereas in 2022 the top product is PRODUCT30, with total sales of $13,888.
    3. In the area chart, which product sold the most on 28 April 2022? ==> The largest sales on 28 April 2022 were for PRODUCT14, with a profit ratio of 24%.
    4. What sales types and payment modes are present? ==> The sales type and payment mode visuals show the percentage contribution of sales by type of selling and mode of payment. The sales types are direct sales with 52%, online sales with 33%, and wholesaler with 15%; the payment modes are online and cash, equally split at 50% each.
    5. In which month were direct sales highest in 2022? ==> The monthly layout makes this easy to identify: direct sales were highest in November, at 28%, compared with the other months.
    6. Which payment mode was received most in 2021 and 2022? ==> In 2021, cash payments led with 52% compared with 48% for online transactions; cash payments were highest in March, July, and October, with direct sales at 42%, online at 45%, and wholesaler at 13%, and with PRODUCT24 as the largest seller. In 2022, online payments led with 52% compared with 48% for cash; online payments were highest in January, September, and December, with direct sales at 45%, online at 37%, and whole...
  2. Graph Input Data Example.xlsx

    • figshare.com
    xlsx
    Updated Dec 26, 2018
    Cite
    Dr Corynen (2018). Graph Input Data Example.xlsx [Dataset]. http://doi.org/10.6084/m9.figshare.7506209.v1
    Explore at:
    xlsx
    Available download formats
    Dataset updated
    Dec 26, 2018
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Dr Corynen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The various performance criteria applied in this analysis include the probability of reaching the ultimate target, the costs, elapsed times and system vulnerability resulting from any intrusion. This Excel file contains all the logical, probabilistic and statistical data entered by a user, and required for the evaluation of the criteria. It also reports the results of all the computations.

  3. Employee Analysis In Excel

    • kaggle.com
    zip
    Updated Mar 20, 2024
    Cite
    Afolabi Raymond (2024). Employee Analysis In Excel [Dataset]. https://www.kaggle.com/datasets/afolabiraymond/employee-analysis-in-excel
    Explore at:
    zip (190258 bytes)
    Available download formats
    Dataset updated
    Mar 20, 2024
    Authors
    Afolabi Raymond
    Description

    In this project, I analysed the employees of an organization located in two distinct countries using Excel. This project covers:

    1) How to approach a data analysis project
    2) How to systematically clean data
    3) Doing EDA with Excel formulas & tables
    4) How to use Power Query to combine two datasets
    5) Statistical Analysis of data
    6) Using formulas like COUNTIFS, SUMIFS, XLOOKUP
    7) Making an information finder with your data
    8) Male vs. Female Analysis with Pivot tables
    9) Calculating Bonuses based on business rules
    10) Visual analytics of data with 4 topics
    11) Analysing the salary spread (Histograms & Box plots)
    12) Relationship between Salary & Rating
    13) Staff growth over time - trend analysis
    14) Regional Scorecard to compare NZ with India

    Including various Excel features such as:
    1) Using Tables
    2) Working with Power Query
    3) Formulas
    4) Pivot Tables
    5) Conditional formatting
    6) Charts
    7) Data Validation
    8) Keyboard Shortcuts & tricks
    9) Dashboard Design
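    For readers reproducing parts of this analysis outside Excel, the aggregation formulas mentioned above (COUNTIFS, SUMIFS, XLOOKUP) map naturally onto pandas. A minimal Python sketch with hypothetical column names ("Country", "Gender", "Salary", "Employee ID"), which are not the dataset's actual headers:

        import pandas as pd

        df = pd.read_csv("employees.csv")  # hypothetical export of the employee table

        # COUNTIFS equivalent: count female employees in New Zealand.
        count_nz_female = ((df["Country"] == "New Zealand") & (df["Gender"] == "Female")).sum()

        # SUMIFS equivalent: total salary of employees in India.
        total_salary_india = df.loc[df["Country"] == "India", "Salary"].sum()

        # XLOOKUP equivalent: look up one employee's salary by ID (an "information finder").
        salary = df.set_index("Employee ID").loc[1042, "Salary"]   # 1042 is a made-up ID

        # Male vs. female comparison, similar to the pivot-table analysis.
        pivot = df.pivot_table(index="Country", columns="Gender", values="Salary", aggfunc="mean")
        print(count_nz_female, total_salary_india, salary)
        print(pivot)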

  4. Z

    A study on real graphs of fake news spreading on Twitter

    • data.niaid.nih.gov
    Updated Aug 20, 2021
    Cite
    Amirhosein Bodaghi (2021). A study on real graphs of fake news spreading on Twitter [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3711599
    Explore at:
    Dataset updated
    Aug 20, 2021
    Dataset provided by
    Federal University of Rio de Janeiro
    Authors
    Amirhosein Bodaghi
    Description

    *** Fake News on Twitter ***

    These 5 datasets are the results of an empirical study on the spreading process of newly emerged fake news on Twitter. In particular, we focused on fake news stories that gave rise to a truth spreading simultaneously against them. The story behind each fake news item is as follows:

    1- FN1: A Muslim waitress refused to seat a church group at a restaurant, claiming "religious freedom" allowed her to do so.

    2- FN2: Actor Denzel Washington said electing President Trump saved the U.S. from becoming an "Orwellian police state."

    3- FN3: Joy Behar of "The View" sent a crass tweet about a fatal fire in Trump Tower.

    4- FN4: The animated children's program 'VeggieTales' introduced a cannabis character in August 2018.

    5- FN5: In September 2018, the University of Alabama football program ended its uniform contract with Nike, in response to Nike's endorsement deal with Colin Kaepernick.

    The data collection was done in two stages, each of which produced a dataset: 1- obtaining the Dataset of Diffusion (DD), which includes information on fake news/truth tweets and retweets; 2- querying the neighbors of the spreaders of those tweets, which provides the Dataset of Graph (DG).

    DD

    DD for each fake news story is an Excel file, named FNx_DD where x is the number of the fake news story, and has the following structure:

    Each row corresponds to one captured tweet/retweet related to the rumor, and each column presents a specific piece of information about that tweet/retweet. From left to right, the columns contain the following information:

    User ID (user who has posted the current tweet/retweet)

    The description sentence in the profile of the user who has published the tweet/retweet

    The number of tweets/retweets published by the user at the time of posting the current tweet/retweet

    Date and time of creation of the account by which the current tweet/retweet has been posted

    Language of the tweet/retweet

    Number of followers

    Number of followings (friends)

    Date and time of posting the current tweet/retweet

    Number of likes (favorites) the current tweet had acquired before it was crawled

    Number of times the current tweet had been retweeted before crawling it

    Whether another tweet is embedded in the current tweet/retweet (for example, when the current tweet is a quote, reply, or retweet)

    The source (device/OS) from which the current tweet/retweet was posted

    Tweet/Retweet ID

    Retweet ID (if the post is a retweet then this feature gives the ID of the tweet that is retweeted by the current post)

    Quote ID (if the post is a quote then this feature gives the ID of the tweet that is quoted by the current post)

    Reply ID (if the post is a reply then this feature gives the ID of the tweet that is replied by the current post)

    Frequency of tweet occurrences, i.e. the number of times the current tweet is repeated in the dataset (for example, the number of times a tweet appears in the dataset as a retweet posted by others)

    State of the tweet, which can be one of the following (agreed upon by the annotators):

    r : The tweet/retweet is a fake news post

    a : The tweet/retweet is a truth post

    q : The tweet/retweet is a question about the fake news that neither confirms nor denies it

    n : The tweet/retweet is not related to the fake news (it matches the queries related to the rumor but does not refer to the given fake news)

    DG

    DG for each fake news story contains two files:

    A file in graph format (.graph) that contains the graph information, i.e. who is linked to whom. (This file is named FNx_DG.graph, where x is the number of the fake news story.)

    A file in JSON Lines format (.jsonl) that contains the real user IDs of the nodes in the graph file. (This file is named FNx_Labels.jsonl, where x is the number of the fake news story.)

    In the graph file, the label of each node is the order in which it entered the graph. For example, if the node with user ID 12345637 is the first node entered into the graph file, its label in the graph is 0 and its real ID (12345637) is at row number 1 of the jsonl file (row number 0 holds the column labels); the other node IDs follow in the subsequent rows (each row corresponds to one user ID). Therefore, to find the user ID of, say, node 200 (labeled 200 in the graph), look at row number 202 in the jsonl file.

    The user IDs of spreaders in DG (those who have a post in DD) are also available in DD, so extra information about them and their tweets/retweets can be retrieved there. The other user IDs in DG are the neighbors of these spreaders and might not appear in DD.
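    A minimal Python sketch of reading FNx_Labels.jsonl and mapping graph node labels back to real user IDs. The exact JSON structure of each line is an assumption, and the label-to-row offset should be verified against the description above and the data itself.

        import json

        def load_label_to_user_id(jsonl_path, skip_header_rows=1):
            """Map graph node labels (0, 1, 2, ...) to real Twitter user IDs.

            Assumes the first row(s) of the .jsonl file hold column labels, as
            described above, and that each remaining line contains the user ID
            (either as a bare value or inside a small JSON object).
            """
            label_to_id = {}
            with open(jsonl_path, "r", encoding="utf-8") as fh:
                rows = [json.loads(line) for line in fh if line.strip()]
            for label, row in enumerate(rows[skip_header_rows:]):
                # If a line is an object such as {"user_id": 12345637}, pull out
                # the single value; otherwise use the value directly.
                label_to_id[label] = next(iter(row.values())) if isinstance(row, dict) else row
            return label_to_id

        # Example (file name as given in the dataset description):
        # mapping = load_label_to_user_id("FN1_Labels.jsonl")
        # print(mapping.get(200))  # real user ID of the node labeled 200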

  5. National Teacher and Principal Survey: Tables Library Data

    • datalumos.org
    delimited
    Updated Jun 27, 2025
    Cite
    United States Department of Education (2025). National Teacher and Principal Survey: Tables Library Data [Dataset]. http://doi.org/10.3886/E234604V1
    Explore at:
    delimited
    Available download formats
    Dataset updated
    Jun 27, 2025
    Dataset authored and provided by
    United States Department of Education (https://ed.gov/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2015 - 2021
    Area covered
    United States
    Description

    About NTPS

    The National Teacher and Principal Survey (NTPS) is a system of related questionnaires that provide descriptive data on the context of elementary and secondary education, while also giving policymakers a variety of statistics on the condition of education in the United States. The NTPS is a redesign of the Schools and Staffing Survey (SASS), which the National Center for Education Statistics (NCES) conducted from 1987 to 2011. The design of the NTPS is a product of three key goals coming out of the SASS program: flexibility, timeliness, and integration with other Department of Education collections. The NTPS collects data on core topics, including teacher and principal preparation, classes taught, school characteristics, and demographics of the teacher and principal labor force, every two to three years. In addition, each administration of the NTPS contains rotating modules on important education topics such as professional development, working conditions, and evaluation. This approach allows policymakers and researchers to assess trends on both stable and dynamic topics.

    Data Organization

    Each table has an associated Excel file and Excel SE file, which are grouped together in a folder in the dataset (one folder per table). The folders are named based on the Excel file names as they were when downloaded from the National Center for Education Statistics (NCES) website. In the NTPS folder, there is a catalog CSV that provides a crosswalk between the folder names and the table titles. The documentation folder contains (1) codebooks for NTPS generated in NCES datalabs, (2) questionnaires for NTPS downloaded from the study website, and (3) reports related to NTPS found in the NCES resource library.

  6. Petre_Slide_CategoricalScatterplotFigShare.pptx

    • figshare.com
    pptx
    Updated Sep 19, 2016
    Cite
    Benj Petre; Aurore Coince; Sophien Kamoun (2016). Petre_Slide_CategoricalScatterplotFigShare.pptx [Dataset]. http://doi.org/10.6084/m9.figshare.3840102.v1
    Explore at:
    pptx
    Available download formats
    Dataset updated
    Sep 19, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Benj Petre; Aurore Coince; Sophien Kamoun
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Categorical scatterplots with R for biologists: a step-by-step guide

    Benjamin Petre1, Aurore Coince2, Sophien Kamoun1

    1 The Sainsbury Laboratory, Norwich, UK; 2 Earlham Institute, Norwich, UK

    Weissgerber and colleagues (2015) recently stated that ‘as scientists, we urgently need to change our practices for presenting continuous data in small sample size studies’. They called for more scatterplot and boxplot representations in scientific papers, which ‘allow readers to critically evaluate continuous data’ (Weissgerber et al., 2015). In the Kamoun Lab at The Sainsbury Laboratory, we recently implemented a protocol to generate categorical scatterplots (Petre et al., 2016; Dagdas et al., 2016). Here we describe the three steps of this protocol: 1) formatting of the data set in a .csv file, 2) execution of the R script to generate the graph, and 3) export of the graph as a .pdf file.

    Protocol

    • Step 1: format the data set as a .csv file. Store the data in a three-column Excel file as shown in the PowerPoint slide. The first column ‘Replicate’ indicates the biological replicates. In the example, the month and year during which the replicate was performed is indicated. The second column ‘Condition’ indicates the conditions of the experiment (in the example, a wild type and two mutants called A and B). The third column ‘Value’ contains continuous values. Save the Excel file as a .csv file (File -> Save as -> in ‘File Format’, select .csv). This .csv file is the input file to import in R.

    • Step 2: execute the R script (see Notes 1 and 2). Copy the script shown in the PowerPoint slide and paste it into the R console. Execute the script. In the dialog box, select the input .csv file from step 1. The categorical scatterplot will appear in a separate window. Dots represent the values for each sample; colors indicate replicates. Boxplots are superimposed; black dots indicate outliers.

    • Step 3: save the graph as a .pdf file. Shape the window at your convenience and save the graph as a .pdf file (File -> Save as). See the PowerPoint slide for an example.

    Notes

    • Note 1: install the ggplot2 package. The R script requires the package ‘ggplot2’ to be installed. To install it, Packages & Data -> Package Installer -> enter ‘ggplot2’ in the Package Search space and click on ‘Get List’. Select ‘ggplot2’ in the Package column and click on ‘Install Selected’. Install all dependencies as well.

    • Note 2: use a log scale for the y-axis. To use a log scale for the y-axis of the graph, use the command line below in place of command line #7 in the script.

    # 7 Display the graph in a separate window. Dot colors indicate replicates
    graph + geom_boxplot(outlier.colour='black', colour='black') + geom_jitter(aes(col=Replicate)) + scale_y_log10() + theme_bw()

    References

    Dagdas YF, Belhaj K, Maqbool A, Chaparro-Garcia A, Pandey P, Petre B, et al. (2016) An effector of the Irish potato famine pathogen antagonizes a host autophagy cargo receptor. eLife 5:e10856.

    Petre B, Saunders DGO, Sklenar J, Lorrain C, Krasileva KV, Win J, et al. (2016) Heterologous Expression Screens in Nicotiana benthamiana Identify a Candidate Effector of the Wheat Yellow Rust Pathogen that Associates with Processing Bodies. PLoS ONE 11(2):e0149035

    Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128

    https://cran.r-project.org/

    http://ggplot2.org/

  7. Data from: From CAS to EAS – Calculating and Plotting the Compressibility...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Apr 10, 2024
    Cite
    Danny Steeven Sarmiento Beltran (2024). From CAS to EAS – Calculating and Plotting the Compressibility Correction Chart [Dataset]. http://doi.org/10.7910/DVN/6QWEX1
    Explore at:
    Croissant (a format for machine-learning datasets; learn more about this at mlcommons.org/croissant)
    Dataset updated
    Apr 10, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Danny Steeven Sarmiento Beltran
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/6QWEX1

    Description

    Purpose – The conversion between calibrated airspeed (CAS) and equivalent airspeed (EAS) is relatively cumbersome because it involves compressible-flow calculations, for which the equations are quite long. If the calculations are to be done on a computer, conversion via the equations is necessary. In contrast, this project produces a CAS to EAS Compressibility Correction Chart, which allows CAS to be converted to EAS very quickly by reading the correction off a graph. --- Methodology – In Excel, the compressibility correction is computed from flight mechanics formulas. The correction is calculated with two distinct functions, one based on Mach number and the other on pressure altitude. These functions are graphed individually and then combined to produce the Compressibility Correction Chart. --- Findings – The Compressibility Correction Chart was successfully recreated as a 2-D graph. Upon comparison with other correction charts, the EAS-CAS results show a mere 0% deviation, supporting the accuracy of the findings and their near-perfect alignment. --- Research Limitations – Due to a limitation in Excel, which allows at most 255 series per chart, the range of input parameters had to be adjusted accordingly. The iterations of altitude span 1000 ft intervals, while those for Mach number span 0.05 intervals. --- Practical Implications – Pilots can easily use the Compressibility Correction Chart for quick and highly accurate conversions when needed. --- Originality – CAS-EAS compressibility correction charts are available in other sources. This paper recreates the 2-D correction chart by combining two sets of plots, one as a function of Mach number and the other of pressure altitude, using Excel.
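    For readers who want to reproduce the conversion numerically rather than read it off the chart, here is a minimal Python sketch using the standard subsonic ISA/pitot-static relations. This is a generic illustration under those standard assumptions, not the author's Excel implementation, and the example numbers are made up.

        import math

        P0 = 101325.0   # sea-level standard pressure, Pa
        A0 = 340.294    # sea-level standard speed of sound, m/s
        KT = 0.514444   # knots -> m/s

        def static_pressure(alt_ft):
            """ISA static pressure in the troposphere (valid below ~36,089 ft)."""
            h_m = alt_ft * 0.3048
            return P0 * (1.0 - 2.25577e-5 * h_m) ** 5.25588

        def cas_to_eas(cas_kt, alt_ft):
            """Convert CAS (knots) to EAS (knots) at a given pressure altitude (ft)."""
            v_c = cas_kt * KT
            # Impact pressure corresponding to the calibrated airspeed (subsonic pitot formula).
            q_c = P0 * ((1.0 + 0.2 * (v_c / A0) ** 2) ** 3.5 - 1.0)
            p = static_pressure(alt_ft)
            # Mach number from impact pressure and local static pressure.
            mach = math.sqrt(5.0 * ((q_c / p + 1.0) ** (2.0 / 7.0) - 1.0))
            # Equivalent airspeed: EAS = a0 * M * sqrt(p / p0).
            return (A0 * mach * math.sqrt(p / P0)) / KT

        # Compressibility correction dVc = EAS - CAS, e.g. at 250 kt CAS and 30,000 ft:
        # print(cas_to_eas(250.0, 30000.0) - 250.0)   # a small negative correction, in knots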

  8. Data from: National Diabetes Audit

    • digital.nhs.uk
    pdf, ppt, pptx, xlsm +1
    Updated Jan 31, 2017
    + more versions
    Cite
    (2017). National Diabetes Audit [Dataset]. https://digital.nhs.uk/data-and-information/publications/statistical/national-diabetes-audit
    Explore at:
    pdf (339.3 kB), xlsm (379.1 kB), pdf (553.6 kB), xlsx (1.1 MB), xlsm (1.8 MB), xlsx (1.3 MB), pptx (2.1 MB), pdf (567.3 kB), pdf (1.1 MB), pptx (1.5 MB), ppt (618.0 kB), xlsm (23.4 MB)
    Available download formats
    Dataset updated
    Jan 31, 2017
    License

    https://digital.nhs.uk/about-nhs-digital/terms-and-conditions

    Time period covered
    Jan 1, 2015 - Mar 31, 2016
    Area covered
    England, Wales
    Description

    The National Diabetes Audit (NDA) continues to provide a comprehensive view of diabetes care in England and Wales, measuring the effectiveness of diabetes healthcare against NICE Clinical Guidelines and NICE Quality Standards. This national report presents the key findings and recommendations on care processes and treatment target achievement rates for 2015-2016 in all age groups in England and Wales, along with information on offers of, and attendance at, structured education places. This year, for the first time, information is reported on the number of people with diabetes who also have a learning disability and their completion of care processes and treatment target achievement; a separate national report presents the key findings and recommendations, and the Learning Disability - Supplementary Information report has also been developed as a PowerPoint presentation. As with last year's publication, the main report contains information on the national key findings and recommendations and has also been developed as a PowerPoint presentation; alongside the slides highlighting the national findings, there is space to incorporate locally produced slides using the tables and charts from the interactive spreadsheets. We hope users will find this helpful for disseminating the results of the audit locally.

    Supplementary data for England and Wales are contained in the Excel spreadsheets. There are six Excel spreadsheets: two contain the tables and charts in the national report and the learning disability report, along with some supplementary national figures; a further spreadsheet provides completion of all 8 care processes and achievement of all 3 treatment targets for CCGs/LHBs by age group; and three interactive Excel spreadsheets allow users to select a CCG/GP practice (England only), Local Health Board (Wales only) or Secondary Care Service (England only), with information for the chosen site then displayed in tables and charts. Please note that the interactive Excel spreadsheets are large files (approximately 12MB) and may take some time to open.

    This report was updated on 09/02/17. The following amendments have been made to the report:
    • The CCG/GP spreadsheet was updated, as some CCGs/general practices were not available in the interactive aspect. A reference table for practice codes and names has also been added. All the data for care processes and treatment targets was correct in the supporting data tables.
    • The spreadsheet report for Wales and LHBs has been amended. A practice wrongly appeared in an LHB; this practice has now been assigned to the correct LHB, which has changed the results for LHBs 7A2 and 7A3.
    • The specialist service spreadsheet has been updated, as the interactive aspect was not working for all hospitals. This does not change the results for specialist services.
    • Both the CCG/GP and LHB spreadsheets have been updated for structured education offered and attended. This has changed the results for individual CCGs/practices and LHBs but not the national results.
    • The methodology documentation for structured education has been updated to explain more fully how the structured education data for the 2015-16 audit report was analysed and reported.
    • A link to the interactive dashboard for the 2015-16 report has been added (see resources below). This dashboard provides CCGs, LHBs and GPs (England only) with an alternative way to view their data for completion of all 8 care processes and achievement of all 3 treatment targets, as well as their data on registrations by age, sex, deprivation and ethnicity.

  9. European Mountain Territory and Value Chains: Knowledge Graphs, CSV, HTML,...

    • figshare.com
    txt
    Updated Jul 29, 2024
    Cite
    aimhdhgroup (2024). European Mountain Territory and Value Chains: Knowledge Graphs, CSV, HTML, and Excel Data [Dataset]. http://doi.org/10.6084/m9.figshare.25243009.v8
    Explore at:
    txt
    Available download formats
    Dataset updated
    Jul 29, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    aimhdhgroup
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains a collection of data about 454 value chains from 23 rural European areas in 16 countries. The data was obtained through a semi-automatic workflow that transforms raw textual data from an unstructured MS Excel sheet into semantic knowledge graphs. In particular, the repository contains:
    • an MS Excel sheet containing the value chain details provided by the MOuntain Valorisation through INterconnectedness and Green growth (MOVING) European project;
    • 454 CSV files containing the events, titles, entities and coordinates of the narratives of each value chain, obtained by pre-processing the MS Excel sheet;
    • 454 Web Ontology Language (OWL) files; this collection of files is the result of the semi-automatic workflow and is organized as a semantic knowledge graph of narratives, where each narrative is a sub-graph explaining one of the 454 value chains and its territorial aspects. The knowledge graph is based on the Narrative Ontology, an ontology developed by the Institute of Information Science and Technologies (ISTI-CNR) as an extension of CIDOC CRM, FRBRoo, and OWL Time;
    • two CSV files that compile all the available information extracted from the 454 OWL files;
    • GeoPackage files with the geographic coordinates related to the narratives;
    • HTML files that show the different SPARQL and GeoSPARQL queries;
    • HTML files that show the story maps of the 454 value chains;
    • an image showing how the various components of the dataset interact with each other.
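    The OWL narrative graphs can also be queried programmatically; here is a minimal Python sketch using rdflib, assuming the files are serialized as RDF/XML (the file name below is illustrative, not an actual name from the repository).

        from rdflib import Graph

        g = Graph()
        g.parse("value_chain_001.owl", format="xml")   # assumes RDF/XML serialization

        print(f"{len(g)} triples loaded")

        # A generic SPARQL query listing a few statements from the narrative graph.
        results = g.query("""
            SELECT ?s ?p ?o
            WHERE { ?s ?p ?o }
            LIMIT 10
        """)
        for row in results:
            print(row.s, row.p, row.o)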

  10. ckanext-excelforms

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-excelforms [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-excelforms
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    The excelforms extension for CKAN provides a mechanism for users to input data into Table Designer tables using Excel-based forms, enhancing data entry efficiency. The extension focuses on streamlining the process of adding data rows to tables within CKAN's Table Designer. A key part of the functionality is the ability to import multiple rows in a single operation, which significantly reduces the overhead associated with entering multiple data points.

    Key features:
    • Excel-based forms: users can enter data using familiar Excel spreadsheets, leveraging their existing skills and software.
    • Table Designer integration: designed to work seamlessly with CKAN's Table Designer, extending its functionality to include Excel-based data entry.
    • Multiple-row import: supports importing multiple rows of data at once, improving data entry efficiency, especially when dealing with large datasets.
    • Data mapping: simplifies the process of aligning Excel column headers with their corresponding data fields in tables.
    • Improved data entry speed: provides an alternative to manual data entry, resulting in faster population of tables and easier updates.

    Technical integration: the excelforms extension integrates with CKAN by introducing new functionalities and workflows around the Table Designer plugin. The installation instructions specify that this plugin be added before the tabledesigner plugin.

    Benefits and impact: by enabling Excel-based data entry, the excelforms extension improves the user experience for those familiar with spreadsheet software. The ability to import multiple rows simultaneously significantly reduces the time and effort required to populate tables, particularly when dealing with large amounts of data, and streamlines data population workflows, improving data accessibility.

  11. UC_vs_US Statistic Analysis.xlsx

    • figshare.com
    xlsx
    Updated Jul 9, 2020
    Cite
    F. (Fabiano) Dalpiaz (2020). UC_vs_US Statistic Analysis.xlsx [Dataset]. http://doi.org/10.23644/uu.12631628.v1
    Explore at:
    xlsx
    Available download formats
    Dataset updated
    Jul 9, 2020
    Dataset provided by
    Utrecht University
    Authors
    F. (Fabiano) Dalpiaz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes the following columns:
    A. a sequential student ID
    B. an ID that defines a random group label and the notation
    C. the notation used: user stories or use cases
    D. the case they were assigned to: IFA, Sim, or Hos
    E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
    F. a categorical representation of the grade (L/M/H), where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L otherwise
    G. the total number of classes in the student's conceptual model
    H. the total number of relationships in the student's conceptual model
    I. the total number of classes in the expert's conceptual model
    J. the total number of relationships in the expert's conceptual model
    K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
    P. the researchers' judgement of how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present

    Tagging scheme:
    Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
    Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
    System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
    Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
    Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.

    All the calculations and information provided in the following sheets originate from that raw data.

    Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection, including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.

    Sheet 3 (Size-Ratio): The number of classes in the student model divided by the number of classes in the expert model is calculated (the size ratio). Box plots allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, the size ratio for the number of relationships between the student and expert models is also provided.

    Sheet 4 (Overall): Provides an overview of all subjects regarding the encountered situations, completeness, and correctness. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model; it is calculated by dividing the number of aligned concepts (AL) by the sum of the aligned (AL), omitted (OM), system-oriented (SO), and wrongly represented (WR) concepts. Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model; it is calculated by dividing the sum of the aligned (AL) and wrongly represented (WR) concepts by the sum of the aligned (AL), wrongly represented (WR), and omitted (OM) concepts. The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
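    These two ratios are straightforward to compute from the per-subject counts; a small Python sketch implementing exactly the definitions above:

        def correctness(al, wr, so, om):
            # Aligned classes over all classes tagged AL, OM, SO or WR.
            return al / (al + om + so + wr)

        def completeness(al, wr, om):
            # Correctly or incorrectly represented classes over the expert-model classes (AL + WR + OM).
            return (al + wr) / (al + wr + om)

        # Example with made-up counts for one subject:
        print(correctness(al=12, wr=3, so=2, om=4))   # 12 / 21 = 0.571...
        print(completeness(al=12, wr=3, om=4))        # 15 / 19 = 0.789...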

    For sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided. Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html. The independent and moderating variables can be found as follows:

    Sheet 5 (By-Notation): Model correctness and model completeness are compared by notation - UC, US.

    Sheet 6 (By-Case): Model correctness and model completeness are compared by case - SIM, HOS, IFA.

    Sheet 7 (By-Process): Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.

    Sheet 8 (By-Grade): Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.

  12. Store Data Analysis using MS excel

    • kaggle.com
    zip
    Updated Mar 10, 2024
    Cite
    NisshaaChoudhary (2024). Store Data Analysis using MS excel [Dataset]. https://www.kaggle.com/datasets/nisshaachoudhary/store-data-analysis-using-ms-excel/discussion
    Explore at:
    zip (13048217 bytes)
    Available download formats
    Dataset updated
    Mar 10, 2024
    Authors
    NisshaaChoudhary
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Vrinda Store: Interactive MS Excel dashboard (Feb 2024 - Mar 2024). The owner of the Vrinda store wants to create an annual sales report for 2022, so that the employees can understand their customers and grow sales further. The questions asked by the owner of the Vrinda store are as follows:
    1) Compare the sales and orders using a single chart.
    2) Which month got the highest sales and orders?
    3) Who purchased more in 2022 - women or men?
    4) What are the different order statuses in 2022?

    ...along with some other business-related questions. The owner of the Vrinda store wanted a visual story of their data that depicts the real-time progress and sales insights of the store. This project is an MS Excel dashboard which presents an interactive visual story to help the owner and employees increase their sales. Tasks performed: data cleaning, data processing, data analysis, data visualization, reporting. Tool used: MS Excel. Skills: Data Analysis · Data Analytics · MS Excel · Pivot Tables.
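    The owner's first two questions are simple monthly aggregations; a minimal pandas sketch, assuming an orders table with hypothetical "Date", "Amount" and "Order ID" columns (not necessarily the workbook's real headers):

        import pandas as pd

        # Sheet index and column names are illustrative assumptions.
        orders = pd.read_excel("vrinda_store.xlsx", sheet_name=0, parse_dates=["Date"])

        monthly = orders.groupby(orders["Date"].dt.to_period("M")).agg(
            sales=("Amount", "sum"),
            orders=("Order ID", "count"),
        )
        print(monthly)                      # Q1: sales and orders side by side, month by month
        print(monthly["sales"].idxmax())    # Q2: month with the highest sales
        print(monthly["orders"].idxmax())   #     month with the most orders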

  13. Meteorological data from datalogger and sensors near 21 plots at Thule Air...

    • arcticdata.io
    • search.dataone.org
    Updated Oct 8, 2020
    Cite
    Steven F. Oberbauer (2020). Meteorological data from datalogger and sensors near 21 plots at Thule Air Base, Greenland, 2013 [Dataset]. http://doi.org/10.18739/A2K931726
    Explore at:
    Dataset updated
    Oct 8, 2020
    Dataset provided by
    Arctic Data Center
    Authors
    Steven F. Oberbauer
    Area covered
    Description

    Data from a small meteorological station set up near 21 plots in 2013: a Campbell Scientific CR10 datalogger, a Campbell 215 temperature/humidity sensor, two Apogee PAR sensors (one facing up, one facing down), soil temperature via a type T thermocouple, and a Campbell CS616 soil reflectometer for soil water content. Data were collected between DOY 153 and DOY 224. The logger took a measurement every 60 seconds and averaged it to a 5-minute data table; post-processing produced 60-minute averages and daily mean, max, and min values. The data are provided as an MS Excel (.xls) workbook with three worksheets.

    Worksheet 5_min columns: year, day of year, hour, minute, fractional day of year, incoming PAR (umol m-2 s-1), reflected PAR (umol m-2 s-1), albedo calculated as (par_out/par_in)*100, air temperature (C), relative humidity (%), soil temperature (C), raw reflectance time reported by the CS616, calculated volumetric water content corrected for soil temperature (v/v), battery voltage.

    Worksheet 60_min columns (units as above): day of year, hour, fractional day of year, week of year, air temperature, relative humidity, incoming PAR, outgoing PAR, albedo, soil temperature, and volumetric water content.

    Worksheet daily columns (units as above unless indicated): date, day of year, air temperature min, air temperature max, air temperature mean, relative humidity min, relative humidity max, relative humidity mean, soil temperature mean, soil water content mean, total incoming PAR (mol m-2 d-1), outgoing PAR (mol m-2 d-1), albedo, minimum battery voltage.

    Missing values are -6999 or 6999. Soil temperature and VWC are not valid until the instruments could be installed in the soil on DOY 163. The RH sensor failed on DOY 177 and did not function again. There was a battery issue on DOY 183.
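    The post-processing described above (5-minute records averaged to 60-minute and daily values) can be reproduced with pandas; the worksheet and column names below are assumptions based on the description, not the workbook's actual headers, and the -6999/6999 missing-value codes are masked first.

        import pandas as pd

        # Worksheet and column names are assumed from the description above.
        df = pd.read_excel("thule_met_2013.xls", sheet_name="5_min")
        df = df.replace([-6999, 6999], float("nan"))   # mask missing-value codes

        # Build a timestamp from the year, day-of-year, hour and minute columns.
        df["time"] = (pd.to_datetime(df["year"].astype(str), format="%Y")
                      + pd.to_timedelta(df["day_of_year"] - 1, unit="D")
                      + pd.to_timedelta(df["hour"], unit="h")
                      + pd.to_timedelta(df["minute"], unit="m"))
        df = df.set_index("time")

        # 60-minute averages and daily mean/max/min, mirroring the other worksheets.
        hourly = df[["air_temp", "rh", "par_in", "par_out"]].resample("60min").mean()
        daily = df["air_temp"].resample("1D").agg(["mean", "max", "min"])
        print(hourly.head())
        print(daily.head())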

  14. ICSE 2025 - Artifact

    • figshare.com
    pdf
    Updated Jan 24, 2025
    + more versions
    Cite
    FARIDAH AKINOTCHO (2025). ICSE 2025 - Artifact [Dataset]. http://doi.org/10.6084/m9.figshare.28194605.v1
    Explore at:
    pdf
    Available download formats
    Dataset updated
    Jan 24, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    FARIDAH AKINOTCHO
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Mobile Application Coverage: The 30% Curse and Ways Forward

    ## Purpose
    In this artifact, we provide information about our benchmarks used for manual and tool exploration. We include coverage results achieved by tools and human analysts as well as plots of the coverage progression over time for analysts. We further provide manual analysis results for our case study, more specifically extracted reasons for unreachability for the case study apps and extracted code-level properties, which constitute a ground truth for future work in coverage explainability. Finally, we identify a list of beyond-GUI exploration tools and categorize them for future work to take inspiration from. We are claiming available and reusable badges; the artifact is fully aligned with the results described in our paper and comprehensively documented.

    ## Provenance
    The paper preprint is available here: https://people.ece.ubc.ca/mjulia/publications/Mobile_Application_Coverage_ICSE2025.pdf

    ## Data
    The artifact submission is organized into five parts:
    - 'BenchInfo' excel sheet describing our experiment dataset
    - 'Coverage' folder containing coverage results for tools and analysts (RQ1)
    - 'Reasons' excel sheet describing our manually extracted reasons for unreachability (RQ2)
    - 'ActivationProperties' excel sheet describing our manually extracted code properties of unreached activities (RQ3)
    - 'ActivationProperties-Graph' pdf which presents combinations of the extracted code properties in a graph format
    - 'BeyondGUI' folder containing information about identified techniques which go beyond GUI exploration
    The artifact requires about 15MB of storage.

    ### Dataset: 'BenchInfo.xlsx'
    This file lists the full application dataset used for experiments in three tabs: 'BenchNotGP' (apps from the AndroTest dataset which are not on Google Play), 'BenchGP' (apps from AndroTest which are also on Google Play) and 'TopGP' (top-ranked free apps from Google Play). Each tab contains the following information:
    - Application Name
    - Package Name
    - Version Used (Latest)
    - Original Version
    - # Activities
    - Minimum SDK
    - Target SDK
    - # Permissions (in Manifest)
    - List of Permissions (in Manifest)
    - # Features (in Manifest)
    - List of Features (in Manifest)
    The 'TopGP' sheet also includes Google-Play-specific information, namely:
    - Category (one of 32 app categories)
    - Downloads
    - Popularity Rank
    The 'BenchGP' and 'BenchNotGP' sheets also include the original version (included in the AndroTest benchmark) and the source (one of F-Droid, Github or Google Code Archives).

    ### RQ1: 'Coverage'
    The 'Coverage' folder includes coverage results for tools and analysts, and is structured as follows:
    - 'CoverageResults.xlsx': An excel sheet containing the coverage results achieved by each human analyst and tool.
      - The first tab describes the results over all apps for analysts combined, tools combined, and analysts + tools, which map to Table II in the paper.
      - Each of the following 42 tabs, one per app in TopGP, marks the activities reached by Analyst 1, Analyst 2, Tool 1 (ape) and Tool 2 (fastbot), with an 'x' in the corresponding column to indicate that the activity was reached by the given agent.
    - 'Plots': A folder containing plots of the progressive coverage over time of analysts, split into one folder for 'Analyst1' and one for 'Analyst2'.
      - Each of the analysts' folders includes a subfolder per benchmark ('BenchNotGP', 'BenchGP' and 'TopGP'), containing as many png files as applications in the benchmark (respectively 47, 14 and 42 image files), named 'ANALYST_[X]_[APP_PACKAGE_NAME]'.png.

    ### RQ2: 'Reasons.xlsx'
    This file contains the extracted reasons for unreachability for the 11 apps manually analyzed.
    - The 'Summary' tab provides an overview of unreached activities per reason over all apps and per app, which corresponds to Table III in the paper.
    - The following 11 tabs, each corresponding to and named after a single application, describe the reasons associated with each activity of that application. Each column corresponds to a single reason and 'x' indicates that the activity is unreached due to the reason in that column. The top row sums up the total number of activities unreached due to a given reason in each column.
    - The activities at the bottom which are greyed out correspond to activities that were reached during exploration, and are thus excluded from the reason extraction.

    ### RQ3: 'ActivationProperties.xlsx'
    This file contains the full list of activation properties extracted for each of the 185 activities analyzed for RQ2. The first half of the columns (columns C-M) correspond to the reasons (excluding Transitive, Inconclusive and No Caller) and the second half (columns N-AD) correspond to properties described in Figure 5 in the paper, namely:
    - Exported
    - Activation Location:
      - Code: GUI/lifecycle, Other Android or App-specific
      - Manifest
    - Activation Guards:
      - Enforcement: In Code or In Resources
      - Restriction: Mandatory or Discretionary
    - Data:
      - Type: Parameters, Execution Dependencies
      - Format: Primitive, Strings, Objects
    The rows are grouped by application, and each row corresponds to an activity of that application. 'x' in a given column indicates the presence of the property in that column within the analyzed path to the activity. The third and fourth rows sum up the numbers and percentages for each property, as reported in Figure 5.

    ### RQ3: 'ActivationProperties-Graph.pdf'
    This file shows combinations of the individual properties listed in 'ActivationProperties.xlsx' in a graph format, extending the combinations described in Table IV with data (types and format) and reasons for unreachability.

    ### BeyondGUI
    This folder includes:
    - 'ToolInfo.xlsx': an excel sheet listing the identified 22 beyond-GUI papers, the date of publication, availability, invasiveness (Source code, Bytecode, Framework, OS) and their targeting strategy (None, Manual or Automated).
    - 'ToolClassification.pdf': a pdf file describing our paper selection methodology as well as a classification of the techniques in terms of Invocation Strategy, Navigation Strategy, Value Generation Strategy, and Value Generation Types. We fully introduce these categories in the pdf file.

    ## Requirements & technology skills assumed by the reviewer evaluating the artifact
    The artifact entirely consists of Excel sheets, which can be opened with common spreadsheet software (i.e., Microsoft Excel), coverage plots as PNG files, and PDF files. It requires about 15MB of storage in total. No other specific technology skills are required of the reviewer evaluating the artifact.
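    As an example of how the per-app tabs in 'CoverageResults.xlsx' can be aggregated, here is a small pandas sketch in Python. The tab layout (one row per activity, one 'x'-marked column per agent) follows the description above, but the sheet name and the exact column headers ('Analyst 1', 'Tool 1 (ape)', etc.) are assumptions.

        import pandas as pd

        # Load one per-app tab; the sheet name (app package) and column headers are assumed.
        tab = pd.read_excel("CoverageResults.xlsx", sheet_name="com.example.app")

        agents = ["Analyst 1", "Analyst 2", "Tool 1 (ape)", "Tool 2 (fastbot)"]
        reached = tab[agents].astype(str).eq("x")   # True where the agent reached the activity

        total = len(tab)
        analysts = reached[["Analyst 1", "Analyst 2"]].any(axis=1).sum()
        tools = reached[["Tool 1 (ape)", "Tool 2 (fastbot)"]].any(axis=1).sum()
        combined = reached.any(axis=1).sum()

        print(f"analysts: {analysts / total:.1%}, tools: {tools / total:.1%}, "
              f"analysts + tools: {combined / total:.1%}")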

  15. Graph Database For Telecom Networks Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Graph Database For Telecom Networks Market Research Report 2033 [Dataset]. https://dataintelo.com/report/graph-database-for-telecom-networks-market
    Explore at:
    pdf, csv, pptx
    Available download formats
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Graph Database for Telecom Networks Market Outlook



    According to our latest research, the global market size for Graph Database for Telecom Networks in 2024 stands at USD 1.47 billion, with a robust compound annual growth rate (CAGR) of 22.1% projected from 2025 to 2033. By the end of 2033, the market is expected to reach USD 7.02 billion. This remarkable growth is primarily fueled by the increasing complexity of telecom networks, the proliferation of connected devices, and the urgent need for real-time data processing and analytics to drive operational efficiency and competitive differentiation. As per our latest research, the adoption of graph database technologies is accelerating in the telecom sector, enabling organizations to address challenges related to data interconnectivity, fraud detection, and network optimization.




    One of the most significant growth factors in the Graph Database for Telecom Networks market is the exponential rise in data generated by telecom networks, driven by the widespread adoption of 5G technology, IoT devices, and digital transformation initiatives. Telecom operators are increasingly leveraging graph databases to model and manage complex relationships between network elements, subscribers, and services. These databases enable organizations to gain a holistic view of their networks, streamline network management processes, and quickly identify and resolve issues. The ability of graph databases to handle dynamic, highly connected data structures gives telecom operators a strategic advantage in managing network topologies, optimizing routing, and delivering superior customer experiences. As the volume and complexity of telecom data continue to surge, the demand for advanced graph database solutions is expected to grow at a rapid pace, underpinning the market's impressive CAGR.




    Another critical driver for the Graph Database for Telecom Networks market is the increasing emphasis on fraud detection and prevention. Telecom networks are frequent targets for sophisticated fraud schemes, including subscription fraud, SIM card cloning, and international revenue share fraud. Traditional relational databases often fall short in detecting complex fraud patterns that span multiple entities and relationships. In contrast, graph databases excel at uncovering hidden connections and suspicious activity in real-time, enabling telecom operators to proactively mitigate risks and reduce financial losses. By integrating graph analytics with machine learning algorithms, telecom companies can enhance their ability to detect anomalies, improve security, and comply with regulatory requirements. This growing need for advanced fraud detection capabilities is a key factor propelling the adoption of graph database technologies in the telecom industry.




    The evolution of customer analytics and personalized service offerings is also playing a pivotal role in driving the Graph Database for Telecom Networks market. Telecom operators are increasingly focused on delivering tailored services and experiences to retain customers and increase revenue. Graph databases empower organizations to analyze customer interactions, preferences, and behavior across multiple touchpoints, enabling hyper-personalized marketing, targeted upselling, and improved customer support. The ability to map and analyze complex customer journeys in real-time allows telecom companies to identify high-value segments, predict churn, and design effective retention strategies. As customer expectations continue to rise, the adoption of graph database solutions for advanced analytics and personalized service delivery is expected to accelerate, further fueling market expansion.




    Regionally, the Graph Database for Telecom Networks market is witnessing significant growth in Asia Pacific, North America, and Europe, with emerging economies in Latin America and the Middle East & Africa also showing considerable potential. North America currently leads the market, driven by the presence of major telecom operators, advanced network infrastructure, and early adoption of cutting-edge technologies. Asia Pacific is projected to exhibit the highest CAGR during the forecast period, supported by rapid digitalization, expanding mobile subscriber base, and substantial investments in 5G and IoT deployments. Europe remains a key market, benefiting from regulatory initiatives, strong R&D capabilities, and a mature telecom ecosystem. As telecom operators across regions strive to modernize their netw

  16. Input-Output Tables, 1998 [Canada] [Excel]

    • borealisdata.ca
    • dataone.org
    Updated Sep 28, 2023
    + more versions
    Cite
    Statistics Canada (2023). Input-Output Tables, 1998 [Canada] [Excel] [Dataset]. http://doi.org/10.5683/SP/YEYAFX
    Explore at:
    Croissant (a format for machine-learning datasets; learn more about this at mlcommons.org/croissant)
    Dataset updated
    Sep 28, 2023
    Dataset provided by
    Borealis
    Authors
    Statistics Canada
    License

    https://borealisdata.ca/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.5683/SP/YEYAFX

    Area covered
    Canada
    Description

    Interprovincial Trade Flows (15F0002XDB): The interprovincial and international trade flows for goods and services by province and territory are available at the S level of commodity aggregation in Excel files.

    National Input-Output Tables (15F0041XDB): The Input-Output accounting system consists of three tables. The input tables (Use tables) detail the commodities that are consumed by various industries. The output tables (Make tables) detail the commodities that are produced by various industries. The final demand tables detail the commodities bought by many categories of buyers (consumers, industries and government) for both consumption and investment purposes. These tables allow users to track exchanges of goods and services between industries and final demand categories such as personal expenditures, capital expenditures and public sector expenditures. There are four levels of detail: the "W" or Worksheet level with 303 industries, 727 commodities and 170 final demand categories; the "L" or Link level (the most detailed level that allows the construction of consistent time series of annual data from 1961 to 2002) with 117 industries, 469 commodities and 123 final demand categories; the "M" or Medium level with 62 industries, 111 commodities and 39 final demand categories; and the "S" or Small level with 25 industries, 59 commodities and 16 final demand categories. In 2009, several changes were made to the accounting system: a new "D" or Detailed level was added, the "M" and "W" level tables were discontinued, and there are two "L" level tables representing the 1961 and 1997 aggregations.

    Provincial Input-Output Tables (15F0042XDB): The provincial input-output tables are constructed every year. The tables are available at the "S" level only.

    National and Provincial Multipliers (15F0046XDB): These are a series of Input-Output multipliers and ratios that allow users to quickly estimate the direct, indirect and total impacts of increases in industrial output or increases in an industry's labour force. They cover the GDP, labour income, employment and gross output multipliers and ratios. Capital income multipliers and ratios can be calculated by subtracting the labour income figures from the GDP figures (see the short sketch following this description).

    National Symmetric Input-Output Tables - Aggregation Level S (15-207-XCB): The Industry Accounts Division of Statistics Canada publishes annual supply and use input-output (I-O) tables. While these rectangular, industry-by-commodity tables closely reflect actual economic transactions, certain analytical and modelling purposes require symmetric industry-by-industry I-O tables. The symmetric industry-by-industry table shows the inter-industry transactions, that is, all purchases of an industry from all other industries, including expenditures on imports and inventory withdrawals as well as all expenditures on primary inputs. Similarly, the symmetric final demand table shows all purchases by a final demand category from all other industries, including expenditures on imports and inventory withdrawals as well as all expenditures on indirect taxes.

    National Symmetric Input-Output Tables - Aggregation Level L (15-208-XCB): The Industry Accounts Division of Statistics Canada publishes annual symmetric industry-by-industry I-O tables at the L level. These tables show the same inter-industry transactions and final demand purchases as the S-level product, including expenditures on imports, inventory withdrawals, primary inputs and indirect taxes.

    Provincial GDP by Industry and Sector, at Basic Prices (15-209-XCB): This product presents estimates of Gross Domestic Product (GDP) by industry, in current dollars, evaluated at basic prices for all provinces and territories. These estimates are derived from the provincial Input-Output tables. GDP measures the unduplicated value of production. The GDP by industry estimates are derived using a "value added" approach, that is, the value that a producer adds to its intermediate inputs before generating its own output. This allows the computation not only of total economic production but also of the industrial composition and origin of that production. When evaluated at basic prices, an industry's GDP is the sum of its factor incomes (wages and salaries, supplementary labour income, mixed income and other operating surplus) plus taxes less subsidies on production (labour and capital).

    Provincial Gross Output by Industry and Sector (15-210-XCB): This product presents estimates of gross output by industry, in current dollars, evaluated at modified basic prices for all provinces and territories. These estimates are derived from the provincial Input-Output tables. Gross output...
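
    The capital-income calculation mentioned under 15F0046XDB is simple arithmetic, so a minimal, hedged Python sketch is given below. The table layout, column names and numeric values are assumptions for illustration only; they are not taken from the Statistics Canada files.

    ```python
    # Hedged illustration only: derives capital-income multipliers as described in
    # 15F0046XDB (capital income = GDP figure minus labour income figure).
    # Column names and values below are assumed for the example, not taken from
    # the actual Statistics Canada tables.
    import pandas as pd

    def add_capital_income_multiplier(df: pd.DataFrame) -> pd.DataFrame:
        """Add a capital-income multiplier column computed from GDP and labour income."""
        out = df.copy()
        out["capital_income_multiplier"] = (
            out["gdp_multiplier"] - out["labour_income_multiplier"]
        )
        return out

    if __name__ == "__main__":
        # Assumed layout: one row per industry with GDP and labour-income multipliers.
        multipliers = pd.DataFrame(
            {
                "industry": ["Agriculture", "Manufacturing", "Services"],
                "gdp_multiplier": [0.85, 0.72, 0.90],           # illustrative values
                "labour_income_multiplier": [0.40, 0.35, 0.55],  # illustrative values
            }
        )
        print(add_capital_income_multiplier(multipliers))
    ```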

  17. Adventure Works 2022 CSVs

    • kaggle.com
    zip
    Updated Nov 2, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Algorismus (2022). Adventure Works 2022 CSVs [Dataset]. https://www.kaggle.com/datasets/algorismus/adventure-works-in-excel-tables
    Explore at:
    zip(567646 bytes)Available download formats
    Dataset updated
    Nov 2, 2022
    Authors
    Algorismus
    License

    http://www.gnu.org/licenses/lgpl-3.0.html

    Description

    Adventure Works 2022 dataset

    How was this Dataset created?

    On the official website, the dataset is available through a SQL Server (localhost) instance and as CSVs to be used via Power BI Desktop running in a Virtual Lab (virtual machine). The first two data-import steps were executed in the virtual lab, and the resulting Power BI tables were then copied into CSVs. Records were added up to the year 2022 as required.

    How may this Dataset help you?

    This dataset is helpful if you want to work offline with the Adventure Works data in Power BI Desktop in order to carry out the lab instructions from the training material on the official website. It is also useful if you want to work through the Power BI Desktop Sales Analysis example from Microsoft's PL-300 learning path.

    How to use this Dataset?

    Download the CSV file(s) and import them into Power BI Desktop as tables. The CSVs are named after the tables created in the first two data-import steps described in the PL-300 Microsoft Power BI Data Analyst exam lab.
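
    If you want to sanity-check the CSVs before loading them into Power BI Desktop, a minimal Python sketch along the following lines can read each file into a DataFrame. The folder name is an assumption for illustration; the loader simply globs whatever CSVs are present, so it does not depend on the exact table names.

    ```python
    # Minimal sketch for inspecting the CSVs before importing them into Power BI Desktop.
    # The folder name below is an assumption; point it at wherever you extracted the files.
    from pathlib import Path
    import pandas as pd

    DATA_DIR = Path("adventure-works-2022-csvs")  # assumed extraction folder

    def load_tables(data_dir: Path) -> dict[str, pd.DataFrame]:
        """Load every CSV in the folder into a DataFrame keyed by file stem."""
        return {csv.stem: pd.read_csv(csv) for csv in sorted(data_dir.glob("*.csv"))}

    if __name__ == "__main__":
        tables = load_tables(DATA_DIR)
        for name, df in tables.items():
            print(f"{name}: {len(df)} rows, columns: {list(df.columns)}")
    ```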

  18. Numerical data (Excel spreadsheet) that underly the graphs in Figs 2G, 2H,...

    • figshare.com
    xlsx
    Updated Jun 10, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Chinkyu Lee; Ewa Joachimiak; Wolfgang Maier; Yu-Yang Jiang; Mireya Parra; Karl F. Lechtreck; Eric S. Cole; Jacek Gaertig (2025). Numerical data (Excel spreadsheet) that underly the graphs in Figs 2G, 2H, 4E, 5E, 6I, 6J, 7C, [Dataset]. http://doi.org/10.1371/journal.pgen.1011735.s007
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Jun 10, 2025
    Dataset provided by
    PLOS Genetics
    Authors
    Chinkyu Lee; Ewa Joachimiak; Wolfgang Maier; Yu-Yang Jiang; Mireya Parra; Karl F. Lechtreck; Eric S. Cole; Jacek Gaertig
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Numerical data (Excel spreadsheet) that underly the graphs in Figs 2G, 2H, 4E, 5E, 6I, 6J, 7C,

  19. Statistical Abstract of the United States 1998

    • dataverse-staging.rdmc.unc.edu
    Updated Nov 30, 2007
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    UNC Dataverse (2007). Statistical Abstract of the United States 1998 [Dataset]. https://dataverse-staging.rdmc.unc.edu/dataset.xhtml?persistentId=hdl:1902.29/CD-0013
    Explore at:
    Dataset updated
    Nov 30, 2007
    Dataset provided by
    UNC Dataverse
    License

    https://dataverse-staging.rdmc.unc.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=hdl:1902.29/CD-0013

    Description

    The Statistical Abstract is the Nation's best known and most popular single source of statistics on the social, political, and economic organization of the country. The print version of this reference source has been published since 1878, while the compact disc version first appeared in 1993. This disc is designed to serve as a convenient, easy-to-use statistical reference source and guide to statistical publications and sources. The disc contains over 1,400 tables from over 250 different governmental, private, and international organizations.

    The 1998 Statistical Abstract on CD-ROM, like the book, is a statistical reference and guide to over 250 statistical publications and sources from government and private organizations. This compact disc (CD) has 1,500 tables and charts from over 250 sources. Text and tables can be viewed or searched with the software. Tables and charts cover these subjects in 31 sections and 2 appendices: Population; Vital Statistics; Health and Nutrition; Education; Law Enforcement, Courts and Prisons; Geography and Environment; Parks, Recreation and Travel; Elections; State and Local Government Finances and Employment; Federal Government Finances and Employment; National Defense and Veterans Affairs; Social Insurance and Human Services; Labor Force, Employment and Earnings; Income, Expenditure and Wealth; Prices; Banking, Finance and Insurance; Business Enterprise; Communications; Energy; Science; Transportation -- Land; Transportation -- Air and Water; Agriculture; Forests and Fisheries; Mining and Mineral Products; Construction and Housing; Manufactures; Domestic Trade and Services; Foreign Commerce and Aid; Outlying Areas; Comparative International Statistics; State Rankings; Population of MSAs; and Congressional District Profiles.

    There are changes this year in both the content of the information on the disc and the software used for accessing and installing it. As usual, updates have been made to most of the more than 1,500 tables and charts that were on the previous disc, with new or more recent data. The spreadsheet files, which are available in both Excel and Lotus formats for these tables, will usually have more information than is displayed in the book or the Adobe Acrobat files. There are also 93 new tables on such subjects as family planning, women's health, persons with disabilities, health insurance coverage, ambulatory surgery, school violence, household use of public libraries, public library use of the Internet, toxic chemical releases, leisure activity, NCAA sports and high school athletic programs, voter registration, licensed child care centers, foster care, home-based businesses, employee benefits, home equity debt, use of debit and credit cards, alcohol-related fatal accidents, computer shipments, and foreign stock market indices. See Appendix V on the disc for a complete list of the new tables presented.

    In the software area, a new opening screen using the DemoShield software has been added. This provides better access to the electronic version of the booklet, which is available from the opening screen, and a new tutorial steps the user through the principal ways to search for information on this disc, along with other related helpful information. It also facilitates the installation process for the Adobe Acrobat Reader, the new Microsoft Excel Viewer, and QuickTime for viewing movies. The Adobe Acrobat Reader and Search engine, version 3.01, is on the disc. The Acrobat Reader allows users to view, navigate, search, and print on demand any of the pages from the book.

    Note to Users: This CD is part of a collection located in the Data Archive of the Odum Institute for Research in Social Science at the University of North Carolina at Chapel Hill. The collection is located in Room 10, Manning Hall. Users may check the CDs out under the honor system. Items can be checked out for a period of two weeks. Loan forms are located adjacent to the collection.

  20. A dataset of knowledge graph construction for patents, sci-tech achievements...

    • scidb.cn
    Updated Oct 22, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    hu hui ling; Zhai Jun; Li Mei; Li Xin; Shen Lixin (2025). A dataset of knowledge graph construction for patents, sci-tech achievements and papers in agriculture, industry and service industry [Dataset]. http://doi.org/10.57760/sciencedb.j00001.01576
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Oct 22, 2025
    Dataset provided by
    Science Data Bank
    Authors
    hu hui ling; Zhai Jun; Li Mei; Li Xin; Shen Lixin
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    As important carriers of innovation activities, patents, sci-tech achievements and papers play an increasingly prominent role in national political and economic development under the background of a new round of technological revolution and industrial transformation. However, in a distributed and heterogeneous environment, the integration and systematic description of patents, sci-tech achievements and papers data are still insufficient, which limits the in-depth analysis and utilization of related data resources. The dataset of knowledge graph construction for patents, sci-tech achievements and papers is an important means to promote innovation network research, and is of great significance for strengthening the development, utilization, and knowledge mining of innovation data. This work collected data on patents, sci-tech achievements and papers from China's authoritative websites spanning the three major industries (agriculture, industry, and services) during the period 2022-2025. After processes of cleaning, organizing, and normalization, a patents-sci-tech achievements-papers knowledge graph dataset was formed, containing 10 entity types and 8 types of entity relationships. To ensure quality and accuracy of data, the entire process involved strict preprocessing, semantic extraction and verification, with the ontology model introduced as the schema layer of the knowledge graph. The dataset establishes direct correlations among patents, sci-tech achievements and papers through inventors/contributors/authors, and utilizes the Neo4j graph database for storage and visualization. The open dataset constructed in this study can serve as important foundational data for building knowledge graphs in the field of innovation, providing structured data support for innovation activity analysis, scientific research collaboration network analysis and knowledge discovery.

    The dataset consists of two parts. The first part includes three Excel tables: 1,794 patent records with 10 fields, 181 paper records with 7 fields, and 1,156 scientific and technological achievement records with 11 fields. The second part is a knowledge graph dataset in CSV format that can be imported into Neo4j, comprising 10 entity files and 8 relationship files.
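
    Because the second part of the dataset ships as Neo4j-ready entity and relationship CSVs, one possible import path is sketched below using the official Neo4j Python driver and a Cypher LOAD CSV statement. The connection details, node label, file name and column names are assumptions for illustration; check the dataset's own files for the actual schema.

    ```python
    # Hedged sketch: importing one assumed entity CSV from this dataset into Neo4j.
    # The URI, credentials, node label, file name, and column names are assumptions
    # for illustration; consult the dataset's entity/relationship files for the
    # actual schema before importing.
    from neo4j import GraphDatabase

    URI = "bolt://localhost:7687"   # assumed local Neo4j instance
    AUTH = ("neo4j", "password")    # assumed credentials

    # LOAD CSV reads files placed in Neo4j's import directory.
    IMPORT_PATENTS = """
    LOAD CSV WITH HEADERS FROM 'file:///patents.csv' AS row
    MERGE (p:Patent {patent_id: row.patent_id})
    SET p.title = row.title
    """

    def import_patents() -> None:
        with GraphDatabase.driver(URI, auth=AUTH) as driver:
            driver.verify_connectivity()
            with driver.session() as session:
                session.run(IMPORT_PATENTS)

    if __name__ == "__main__":
        import_patents()
    ```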
