26 datasets found
  1. Enhancing UNCDF Operations: Power BI Dashboard Development and Data Mapping

    • figshare.com
    Updated Jan 6, 2025
    Cite
    Maryam Binti Haji Abdul Halim (2025). Enhancing UNCDF Operations: Power BI Dashboard Development and Data Mapping [Dataset]. http://doi.org/10.6084/m9.figshare.28147451.v1
    Explore at:
    Dataset updated
    Jan 6, 2025
    Dataset provided by
    figshare
    Authors
    Maryam Binti Haji Abdul Halim
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This project focuses on data mapping, integration, and analysis to support the development and enhancement of six UNCDF operational applications: OrgTraveler, Comms Central, Internal Support Hub, Partnership 360, SmartHR, and TimeTrack. These apps streamline workflows for travel claims, internal support, partnership management, and time tracking within UNCDF.

    Key features and tools:
    • Data mapping for Salesforce CRM migration: structured and mapped data flows to ensure compatibility and a seamless migration to Salesforce CRM.
    • Python for data cleaning and transformation: used pandas, NumPy, and APIs to clean, preprocess, and transform raw datasets into standardized formats.
    • Power BI dashboards: designed interactive dashboards to visualize workflows and monitor performance metrics for decision-making.
    • Collaboration across platforms: integrated Google Colab for code collaboration and Microsoft Excel for data validation and analysis.

  2. [Superseded] Intellectual Property Government Open Data 2019

    • data.gov.au
    • researchdata.edu.au
    csv-geo-au, pdf
    Updated Jan 26, 2022
    + more versions
    Cite
    IP Australia (2022). [Superseded] Intellectual Property Government Open Data 2019 [Dataset]. https://data.gov.au/data/dataset/activity/intellectual-property-government-open-data-2019
    Explore at:
    csv-geo-au(59281977), csv-geo-au(680030), csv-geo-au(39873883), csv-geo-au(37247273), csv-geo-au(25433945), csv-geo-au(92768371), pdf(702054), csv-geo-au(208449), csv-geo-au(166844), csv-geo-au(517357734), csv-geo-au(32100526), csv-geo-au(33981694), csv-geo-au(21315), csv-geo-au(6828919), csv-geo-au(86824299), csv-geo-au(359763), csv-geo-au(567412), csv-geo-au(153175), csv-geo-au(165051861), csv-geo-au(115749297), csv-geo-au(79743393), csv-geo-au(55504675), csv-geo-au(221026), csv-geo-au(50760305), csv-geo-au(2867571), csv-geo-au(212907250), csv-geo-au(4352457), csv-geo-au(4843670), csv-geo-au(1032589), csv-geo-au(1163830), csv-geo-au(278689420), csv-geo-au(28585330), csv-geo-au(130674), csv-geo-au(13968748), csv-geo-au(11926959), csv-geo-au(4802733), csv-geo-au(243729054), csv-geo-au(64511181), csv-geo-au(592774239), csv-geo-au(149948862)
    Dataset updated
    Jan 26, 2022
    Dataset authored and provided by
    IP Australia (http://ipaustralia.gov.au/)
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    What is IPGOD?

    The Intellectual Property Government Open Data (IPGOD) includes over 100 years of registry data on all intellectual property (IP) rights administered by IP Australia. It also has derived information about the applicants who filed these IP rights, to allow for research and analysis at the regional, business and individual level. This is the 2019 release of IPGOD.

    How do I use IPGOD?

    IPGOD is large, with millions of data points across up to 40 tables, making many of the tables too large to open in Microsoft Excel. Furthermore, analysis often requires information from separate tables, which would need specialised software to merge. We recommend that advanced users interact with the IPGOD data using the right tools, with enough memory and compute power. This includes a wide range of programming and statistical software such as Tableau, Power BI, Stata, SAS, R, Python, and Scala.
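    As a rough illustration of the "specialised software for merging" point, the sketch below joins two IPGOD tables with pandas. The file names and the join key are placeholders; consult the IPGOD data dictionary for the actual table names and columns.

    ```python
    import pandas as pd

    # Hypothetical IPGOD file names and join key, for illustration only;
    # check the data dictionary for the real table names and columns.
    applications = pd.read_csv("ipgod_applications.csv", low_memory=False)
    applicants = pd.read_csv("ipgod_applicants.csv", low_memory=False)

    # Join application records to applicant details on a shared identifier.
    merged = applications.merge(applicants, on="application_id", how="left")
    print(merged.shape)
    ```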

    IP Data Platform

    IP Australia also provides free trials of a cloud-based analytics platform, the IP Data Platform, which enables working with large intellectual property datasets such as IPGOD through the web browser, without installing any software.

    References

    The following pages can help you understand intellectual property administration and processes in Australia, to support your analysis of the dataset.

    Updates

    Tables and columns

    Due to the changes in our systems, some tables have been affected.

    • We have added IPGOD 225 and IPGOD 325 to the dataset!
    • The IPGOD 206 table is not available this year.
    • Many tables have been re-built, and as a result may have different columns or different possible values. Please check the data dictionary for each table before use.

    Data quality improvements

    Data quality has been improved across all tables.

    • Null values are simply empty rather than '31/12/9999'.
    • All date columns are now in ISO format 'yyyy-mm-dd'.
    • All indicator columns have been converted to Boolean data type (True/False) rather than Yes/No, Y/N, or 1/0.
    • All tables are encoded in UTF-8.
    • All tables use the backslash \ as the escape character.
    • The applicant name cleaning and matching algorithms have been updated. We believe that this year's method improves the accuracy of the matches. Please note that the "ipa_id" generated in IPGOD 2019 will not match with those in previous releases of IPGOD.
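    A minimal pandas loading sketch that follows the conventions listed above (UTF-8 encoding, backslash escape character, empty nulls, ISO dates); the file name and the date-column naming pattern are assumptions.

    ```python
    import pandas as pd

    # Read an IPGOD table using the stated conventions; the file name is a placeholder.
    df = pd.read_csv(
        "ipgod_table.csv",
        encoding="utf-8",       # all tables are UTF-8
        escapechar="\\",        # backslash is the escape character
        keep_default_na=True,   # empty fields become NaN
        low_memory=False,
    )

    # Parse ISO-format ('yyyy-mm-dd') date columns; the '_date' suffix is an assumption.
    date_cols = [c for c in df.columns if c.endswith("_date")]
    df[date_cols] = df[date_cols].apply(pd.to_datetime, format="%Y-%m-%d", errors="coerce")
    ```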
  3. Superstore Sales Analysis

    • kaggle.com
    Updated Oct 21, 2023
    Cite
    Ali Reda Elblgihy (2023). Superstore Sales Analysis [Dataset]. https://www.kaggle.com/datasets/aliredaelblgihy/superstore-sales-analysis/code
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Oct 21, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ali Reda Elblgihy
    Description

    Analyzing sales data is essential for any business looking to make informed decisions and optimize its operations. In this project, we will utilize Microsoft Excel and Power Query to conduct a comprehensive analysis of Superstore sales data. Our primary objectives will be to establish meaningful connections between various data sheets, ensure data quality, and calculate critical metrics such as the Cost of Goods Sold (COGS) and discount values. Below are the key steps and elements of this analysis:

    1- Data Import and Transformation:

    • Gather and import relevant sales data from various sources into Excel.
    • Utilize Power Query to clean, transform, and structure the data for analysis.
    • Merge and link different data sheets to create a cohesive dataset, ensuring that all data fields are connected logically.

    2- Data Quality Assessment:

    • Perform data quality checks to identify and address issues like missing values, duplicates, outliers, and data inconsistencies.
    • Standardize data formats and ensure that all data is in a consistent, usable state.

    3- Calculating COGS:

    • Determine the Cost of Goods Sold (COGS) for each product sold by considering factors like purchase price, shipping costs, and any additional expenses.
    • Apply appropriate formulas and calculations to determine COGS accurately.

    4- Discount Analysis:

    • Analyze the discount values offered on products to understand their impact on sales and profitability.
    • Calculate the average discount percentage, identify trends, and visualize the data using charts or graphs.

    5- Sales Metrics:

    • Calculate and analyze various sales metrics, such as total revenue, profit margins, and sales growth.
    • Utilize Excel functions to compute these metrics and create visuals for better insights.

    6- Visualization:

    • Create visualizations, such as charts, graphs, and pivot tables, to present the data in an understandable and actionable format.
    • Visual representations can help identify trends, outliers, and patterns in the data.

    7- Report Generation:

    • Compile the findings and insights into a well-structured report or dashboard, making it easy for stakeholders to understand and make informed decisions.

    Throughout this analysis, the goal is to provide a clear and comprehensive understanding of the Superstore's sales performance. By using Excel and Power Query, we can efficiently manage and analyze the data, ensuring that the insights gained contribute to the store's growth and success.
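    For readers who prefer to prototype the same calculations outside Excel, here is a hedged pandas sketch of the COGS, discount, and margin steps described above; the workbook name, sheet name, and column names (Quantity, Unit Cost, Shipping Cost, Sales, Discount, Category) are assumptions about the Superstore layout.

    ```python
    import pandas as pd

    # Assumed file, sheet, and column names; adjust to the actual Superstore workbook.
    orders = pd.read_excel("superstore.xlsx", sheet_name="Orders")

    orders["COGS"] = orders["Quantity"] * orders["Unit Cost"] + orders["Shipping Cost"]
    orders["Discount Value"] = orders["Sales"] * orders["Discount"]
    orders["Profit Margin %"] = (orders["Sales"] - orders["COGS"]) / orders["Sales"] * 100

    # Aggregate the key metrics, e.g. by product category.
    summary = orders.groupby("Category")[["Sales", "COGS", "Discount Value"]].sum()
    print(summary)
    ```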

  4. GP Practice Prescribing Presentation-level Data - July 2014

    • digital.nhs.uk
    csv, zip
    Updated Oct 31, 2014
    + more versions
    Cite
    (2014). GP Practice Prescribing Presentation-level Data - July 2014 [Dataset]. https://digital.nhs.uk/data-and-information/publications/statistical/practice-level-prescribing-data
    Explore at:
    csv(1.4 GB), zip(257.7 MB), csv(1.7 MB), csv(275.8 kB)
    Dataset updated
    Oct 31, 2014
    License

    https://digital.nhs.uk/about-nhs-digital/terms-and-conditions

    Time period covered
    Jul 1, 2014 - Jul 31, 2014
    Area covered
    United Kingdom
    Description

    Warning: Large file size (over 1 GB). Each monthly data set is large (over 4 million rows), but can be viewed in standard software such as Microsoft WordPad (save by right-clicking on the file name and selecting 'Save Target As', or equivalent on Mac OS X). It is then possible to select the required rows of data and copy and paste the information into another software application, such as a spreadsheet. Alternatively, add-ons to existing software, such as the Microsoft PowerPivot add-on for Excel, can be used to handle larger data sets. The Microsoft PowerPivot add-on for Excel is available from Microsoft: http://office.microsoft.com/en-gb/excel/download-power-pivot-HA101959985.aspx

    Once PowerPivot has been installed, follow the instructions below to load the large files. Note that it may take at least 20 to 30 minutes to load one monthly file.

    1. Start Excel as normal
    2. Click on the PowerPivot tab
    3. Click on the PowerPivot Window icon (top left)
    4. In the PowerPivot Window, click on the "From Other Sources" icon
    5. In the Table Import Wizard, scroll to the bottom and select Text File
    6. Browse to the file you want to open and choose the file extension you require, e.g. CSV

    Once the data has been imported you can view it in a spreadsheet.

    What does the data cover?

    General practice prescribing data is a list of all medicines, dressings and appliances that are prescribed and dispensed each month. A record will only be produced when this has occurred, and there is no record for a zero total. For each practice in England, the following information is presented at presentation level for each medicine, dressing and appliance (by presentation name):

    • the total number of items prescribed and dispensed
    • the total net ingredient cost
    • the total actual cost
    • the total quantity

    The data covers NHS prescriptions written in England and dispensed in the community in the UK. Prescriptions written in England but dispensed outside England are included. The data includes prescriptions written by GPs and other non-medical prescribers (such as nurses and pharmacists) who are attached to GP practices. GP practices are identified only by their national code, so an additional data file - linked to the first by the practice code - provides further detail in relation to the practice. Presentations are identified only by their BNF code, so an additional data file - linked to the first by the BNF code - provides the chemical name for that presentation.
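    As an alternative to the PowerPivot route above, the monthly file can also be streamed in chunks with pandas. The file name is an example, and the column names (BNF CODE, ITEMS, ACT COST) should be verified against the file header before running.

    ```python
    import pandas as pd

    totals = []
    # Stream the >1 GB monthly CSV in manageable chunks instead of loading it whole.
    for chunk in pd.read_csv("prescribing_july_2014.csv", chunksize=500_000):
        chunk.columns = chunk.columns.str.strip()
        totals.append(chunk.groupby("BNF CODE")[["ITEMS", "ACT COST"]].sum())

    # Combine the per-chunk aggregates into presentation-level totals.
    by_presentation = pd.concat(totals).groupby(level=0).sum()
    print(by_presentation.sort_values("ITEMS", ascending=False).head())
    ```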

  5. Enhanced Pizza Sales Data (2024–2025)

    • kaggle.com
    Updated May 12, 2025
    Cite
    akshay gaikwad (2025). Enhanced Pizza Sales Data (2024–2025) [Dataset]. https://www.kaggle.com/datasets/akshaygaikwad448/pizza-delivery-data-with-enhanced-features
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    May 12, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    akshay gaikwad
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This is a realistic and structured pizza sales dataset covering the time span from **2024 to 2025**. Whether you're a beginner in data science, a student working on a machine learning project, or an experienced analyst looking to test out time series forecasting and dashboard building, this dataset is for you.

    📁 What’s Inside? The dataset contains rich details from a pizza business including:

    ✅ Order Dates & Times ✅ Pizza Names & Categories (Veg, Non-Veg, Classic, Gourmet, etc.) ✅ Sizes (Small, Medium, Large, XL) ✅ Prices ✅ Order Quantities ✅ Customer Preferences & Trends

    It is neatly organized in Excel format and easy to use with tools like Python (Pandas), Power BI, Excel, or Tableau.
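    A small sketch of loading the workbook with pandas and building a monthly revenue series for forecasting; the file name and the order_date / total_price column names are assumptions about the Excel layout.

    ```python
    import pandas as pd

    # Assumed file and column names; adjust to the actual workbook.
    df = pd.read_excel("pizza_sales_2024_2025.xlsx", parse_dates=["order_date"])

    # Monthly revenue series, a convenient starting point for time series forecasting.
    monthly_revenue = df.set_index("order_date")["total_price"].resample("MS").sum()
    print(monthly_revenue.tail())
    ```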

    💡 **Why Use This Dataset?** This dataset is ideal for:

    📈 Sales Analysis & Reporting 🧠 Machine Learning Models (demand forecasting, recommendations) 📅 Time Series Forecasting 📊 Data Visualization Projects đŸœïž Customer Behavior Analysis 🛒 Market Basket Analysis 📩 Inventory Management Simulations

    🧠 Perfect For: Data Science Beginners & Learners BI Developers & Dashboard Designers MBA Students (Marketing, Retail, Operations) Hackathons & Case Study Competitions

    pizza, sales data, excel dataset, retail analysis, data visualization, business intelligence, forecasting, time series, customer insights, machine learning, pandas, beginner friendly

  6. Enhancing Healthcare Transparency: Leveraging Machine Learning, GIS Mapping...

    • figshare.com
    Updated Jan 6, 2025
    Cite
    Maryam Binti Haji Abdul Halim (2025). Enhancing Healthcare Transparency: Leveraging Machine Learning, GIS Mapping and Power BI for Private Hospital Insurance Claims Analysis [Dataset]. http://doi.org/10.6084/m9.figshare.28147421.v1
    Explore at:
    Dataset updated
    Jan 6, 2025
    Dataset provided by
    figshare
    Authors
    Maryam Binti Haji Abdul Halim
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This project focuses on developing a machine learning-driven system to classify hospital claims and treatment outcomes, offering a second opinion on healthcare costs and decision-making for insurance claims and treatment efficacy.

    Key features and tools:
    • Machine learning algorithms: leveraging Python (pandas, numpy, scikit-learn) for predictive modeling to assess claim validity and treatment outcomes.
    • APIs integration: used the Google Maps API to retrieve and map the locations of private hospitals in Malaysia.
    • GIS mapping dashboard: created a GIS-enabled dashboard in Microsoft Power BI to visualize private hospital distribution across Malaysia, aiding healthcare planning and analysis.
    • Advanced analytics tools: integrated Microsoft Excel, Python, and Google Colab for data processing and automation workflows.
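    The sketch below illustrates the claim-classification idea in scikit-learn; the file name, feature columns, and label column are assumptions for illustration, not the project's actual schema.

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Placeholder file and column names.
    claims = pd.read_csv("hospital_claims.csv")
    features = pd.get_dummies(
        claims[["hospital", "treatment_type", "claim_amount", "length_of_stay"]]
    )
    labels = claims["claim_approved"]

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))
    ```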

  7. Spreadsheets Software Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 20, 2025
    Cite
    Market Research Forecast (2025). Spreadsheets Software Report [Dataset]. https://www.marketresearchforecast.com/reports/spreadsheets-software-42585
    Explore at:
    doc, ppt, pdf
    Dataset updated
    Mar 20, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global spreadsheets software market is experiencing robust growth, driven by increasing digitalization across industries and the rising adoption of cloud-based solutions. The market, estimated at $20 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 8% from 2025 to 2033, reaching approximately $35 billion by 2033. This growth is fueled by several factors, including the expanding need for data analysis and visualization across SMEs and large enterprises, the increasing availability of user-friendly and feature-rich spreadsheet software, and the growing preference for collaborative tools that facilitate seamless teamwork. The market is segmented by operating system (Windows, Macintosh, Linux, Others) and user type (SMEs, Large Enterprises). While Microsoft Excel maintains a dominant market share, the rise of cloud-based alternatives like Google Sheets and collaborative platforms is challenging this dominance, fostering competition and innovation. Furthermore, the increasing integration of spreadsheets with other business intelligence tools further enhances their utility and fuels demand. Geographic expansion, particularly in developing economies with rising internet penetration, also contributes significantly to market expansion. However, factors such as the high initial investment in software licenses and the need for specialized training can restrain market growth, particularly for smaller businesses with limited budgets and technical expertise. The increasing complexity of data analysis necessitates enhanced security features and data protection measures, which add cost and require ongoing investment. Moreover, the emergence of advanced analytical tools and specialized data visualization software presents a competitive challenge, demanding continuous innovation and adaptation from existing spreadsheet software providers. Nevertheless, the overall market outlook remains positive, driven by sustained demand from diverse industries and technological advancements within the software landscape.

  8. Data from: PUMFs and Pivot Tables: Using Excel to Create Cross-Tabulations...

    • dataone.org
    Updated Dec 28, 2023
    Cite
    Peter Peller (2023). PUMFs and Pivot Tables: Using Excel to Create Cross-Tabulations from Public Use Microdata Files [Dataset]. http://doi.org/10.5683/SP3/1P1FPR
    Explore at:
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Peter Peller
    Description

    This step-by-step exercise demonstrates how to use Excel pivot tables to create cross-tabulations from public use microdata files.
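    The exercise itself uses Excel, but the same cross-tabulation can be sketched in pandas; the file and the age_group / sex / weight column names are placeholders for whichever PUMF variables you choose.

    ```python
    import pandas as pd

    # Placeholder PUMF file and variable names.
    pumf = pd.read_csv("pumf.csv")

    # Weighted cross-tabulation: summing survey weights estimates population counts.
    crosstab = pd.pivot_table(
        pumf, index="age_group", columns="sex", values="weight",
        aggfunc="sum", margins=True,
    )
    print(crosstab.round(0))
    ```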

  9. IP Australia - [Superseded] Intellectual Property Government Open Data 2019...

    • gimi9.com
    Updated Jul 20, 2018
    Cite
    (2018). IP Australia - [Superseded] Intellectual Property Government Open Data 2019 | gimi9.com [Dataset]. https://gimi9.com/dataset/au_intellectual-property-government-open-data-2019
    Explore at:
    Dataset updated
    Jul 20, 2018
    Area covered
    Australia
    Description

    What is IPGOD?

    The Intellectual Property Government Open Data (IPGOD) includes over 100 years of registry data on all intellectual property (IP) rights administered by IP Australia. It also has derived information about the applicants who filed these IP rights, to allow for research and analysis at the regional, business and individual level. This is the 2019 release of IPGOD.

    How do I use IPGOD?

    IPGOD is large, with millions of data points across up to 40 tables, making many of the tables too large to open in Microsoft Excel. Furthermore, analysis often requires information from separate tables, which would need specialised software to merge. We recommend that advanced users interact with the IPGOD data using the right tools, with enough memory and compute power. This includes a wide range of programming and statistical software such as Tableau, Power BI, Stata, SAS, R, Python, and Scala.

    IP Data Platform

    IP Australia also provides free trials of a cloud-based analytics platform, the IP Data Platform, which enables working with large intellectual property datasets such as IPGOD through the web browser, without installing any software.

    References

    The following pages can help you understand intellectual property administration and processes in Australia, to support your analysis of the dataset.

    • Patents
    • Trade Marks
    • Designs
    • Plant Breeder’s Rights

    Updates

    Tables and columns

    Due to changes in our systems, some tables have been affected.

    • We have added IPGOD 225 and IPGOD 325 to the dataset!
    • The IPGOD 206 table is not available this year.
    • Many tables have been re-built, and as a result may have different columns or different possible values. Please check the data dictionary for each table before use.

    Data quality improvements

    Data quality has been improved across all tables.

    • Null values are simply empty rather than '31/12/9999'.
    • All date columns are now in ISO format 'yyyy-mm-dd'.
    • All indicator columns have been converted to Boolean data type (True/False) rather than Yes/No, Y/N, or 1/0.
    • All tables are encoded in UTF-8.
    • All tables use the backslash \ as the escape character.
    • The applicant name cleaning and matching algorithms have been updated. We believe that this year's method improves the accuracy of the matches. Please note that the "ipa_id" generated in IPGOD 2019 will not match with those in previous releases of IPGOD.

  10. March Madness Historical DataSet (2002 to 2025)

    • kaggle.com
    Updated Apr 22, 2025
    Cite
    Jonathan Pilafas (2025). March Madness Historical DataSet (2002 to 2025) [Dataset]. https://www.kaggle.com/datasets/jonathanpilafas/2024-march-madness-statistical-analysis/discussion?sort=undefined
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Apr 22, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jonathan Pilafas
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This Kaggle dataset comes from an output dataset that powers my March Madness Data Analysis dashboard in Domo.
    • Click here to view this dashboard: Dashboard Link
    • Click here to view this dashboard's features in a Domo blog post: Hoops, Data, and Madness: Unveiling the Ultimate NCAA Dashboard

    This dataset offers one of the most robust resources you will find to discover key insights through data science and data analytics using historical NCAA Division 1 men's basketball data. This data, sourced from KenPom, goes as far back as 2002 and is updated with the latest 2025 data. This dataset is meticulously structured to provide every piece of information that I could pull from this site as an open-source tool for March Madness analysis.

    Key features of the dataset include:
    • Historical Data: Provides all historical KenPom data from 2002 to 2025 from the Efficiency, Four Factors (Offense & Defense), Point Distribution, Height/Experience, and Misc. Team Stats endpoints from KenPom's website. Please note that the Height/Experience data only goes as far back as 2007, but every other source contains data from 2002 onward.
    • Data Granularity: This dataset features an individual line item for every NCAA Division 1 men's basketball team in every season that contains every KenPom metric that you can possibly think of. This dataset has the ability to serve as a single source of truth for your March Madness analysis and provide you with the granularity necessary to perform any type of analysis you can think of.
    • 2025 Tournament Insights: Contains all seed and region information for the 2025 NCAA March Madness tournament. Please note that I will continually update this dataset with the seed and region information for previous tournaments as I continue to work on this dataset.

    These datasets were created by downloading the raw CSV files for each season for the various sections on KenPom's website (Efficiency, Offense, Defense, Point Distribution, Summary, Miscellaneous Team Stats, and Height). All of these raw files were uploaded to Domo and imported into a dataflow using Domo's Magic ETL. In these dataflows, all of the column headers for each of the previous seasons are standardized to the current 2025 naming structure so all of the historical data can be viewed under the exact same field names. All of these cleaned datasets are then appended together, and some additional clean up takes place before ultimately creating the intermediate (INT) datasets that are uploaded to this Kaggle dataset. Once all of the INT datasets were created, I joined all of the tables together on the team name and season so all of these different metrics can be viewed under one single view. From there, I joined an NCAAM Conference & ESPN Team Name Mapping table to add a conference field in its full length and respective acronyms they are known by as well as the team name that ESPN currently uses. Please note that this reference table is an aggregated view of all of the different conferences a team has been a part of since 2002 and the different team names that KenPom has used historically, so this mapping table is necessary to map all of the teams properly and differentiate the historical conferences from their current conferences. From there, I join a reference table that includes all of the current NCAAM coaches and their active coaching lengths because the active current coaching length typically correlates to a team's success in the March Madness tournament. I also join another reference table to include the historical post-season tournament teams in the March Madness, NIT, CBI, and CIT tournaments, and I join another reference table to differentiate the teams who were ranked in the top 12 in the AP Top 25 during week 6 of the respective NCAA season. After some additional data clean-up, all of this cleaned data exports into the "DEV _ March Madness" file that contains the consolidated view of all of this data.
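    A rough pandas equivalent of the joins described above, assuming the INT datasets and the mapping table are exported as CSVs; the file and column names are placeholders.

    ```python
    import pandas as pd

    # Placeholder exports of the INT datasets and the conference mapping table.
    efficiency = pd.read_csv("INT_efficiency.csv")            # team, season, KenPom metrics
    four_factors = pd.read_csv("INT_four_factors.csv")
    conference_map = pd.read_csv("conference_team_mapping.csv")

    # Join season-level tables on team and season, then attach conference details.
    merged = (
        efficiency
        .merge(four_factors, on=["team", "season"], how="left")
        .merge(conference_map, on=["team", "season"], how="left")
    )
    print(merged.head())
    ```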

    This dataset provides users with the flexibility to export data for further analysis in platforms such as Domo, Power BI, Tableau, Excel, and more. This dataset is designed for users who wish to conduct their own analysis, develop predictive models, or simply gain a deeper understanding of the intricacies that result in the excitement that Division 1 men's college basketball provides every year in March. Whether you are using this dataset for academic research, personal interest, or professional interest, I hope this dataset serves as a foundational tool for exploring the vast landscape of college basketball's most riveting and anticipated event of its season.

  11. WILIAM Task 7.4

    • zenodo.org
    bin
    Updated Jan 5, 2022
    + more versions
    Cite
    Gonzalo Parrado-Hernando; Antun Pfeifer; Luka Herc; Vladimir Gjorgievski; Ilija Batas Bjelic; Neven Duić; Fernando Frechoso Escudero; Luis Javier Miguel González; Iñigo Capellán-Perez (2022). WILIAM Task 7.4 [Dataset]. http://doi.org/10.5281/zenodo.5820401
    Explore at:
    bin
    Dataset updated
    Jan 5, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Gonzalo Parrado-Hernando; Antun Pfeifer; Luka Herc; Vladimir Gjorgievski; Ilija Batas Bjelic; Neven Duić; Fernando Frechoso Escudero; Luis Javier Miguel González; Iñigo Capellán-Perez
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the official repository of Task 7.4 of the H2020 Locomotion project. Feel free to use our data by citing this work, and to comment on our work by referencing its main authors. The article explaining this work is under revision; it will be referenced as soon as possible.

    Text files ("create_inputs.txt" and "run_simulations.txt") --> The first Python script creates the input files for EnergyPLAN. The second one runs iteratively EnergyPLAN to generate the outputs of combinations. Hourly distributions required to run the energy model are contained in the RAR file ("EUdist.rar").

    The PowerPoint file ("EnergyPLAN_instructions.pptx") explains the procedure to carry out the runs of combinations in Python/Excel.

    If you cannot generate the combinations yourself, the Excel file ("EU.xlsx") stores this information, so the remaining steps of the approach can be followed from this point using the Excel file. We used Power Query (Excel) to prepare the data for the next step of building the regression models.

    The Matlab file ("CreateRegressionModels.m") automatically generates the regression models for the European region of WILIAM (official model of the Locomotion project).
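    For readers who want a feel for the "run EnergyPLAN iteratively" step handled by run_simulations.txt, here is a minimal sketch; the executable path and command-line flags are assumptions and must be adapted to your local EnergyPLAN installation.

    ```python
    import subprocess
    from pathlib import Path

    ENERGYPLAN_EXE = r"C:\EnergyPLAN\EnergyPLAN.exe"  # placeholder path
    input_dir, output_dir = Path("inputs"), Path("outputs")
    output_dir.mkdir(exist_ok=True)

    # Run the model once per generated input file; flags shown are illustrative only.
    for input_file in sorted(input_dir.glob("*.txt")):
        out_file = output_dir / f"{input_file.stem}_result.txt"
        subprocess.run(
            [ENERGYPLAN_EXE, "-i", str(input_file), "-ascii", str(out_file)],
            check=True,
        )
    ```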

  12. Australian Public Holidays Dates Machine Readable Dataset

    • data.gov.au
    • cloud.csiss.gmu.edu
    • +1more
    .csv, csv
    Updated Nov 7, 2024
    + more versions
    Cite
    Department of the Prime Minister and Cabinet (2024). Australian Public Holidays Dates Machine Readable Dataset [Dataset]. https://data.gov.au/data/dataset/australian-holidays-machine-readable-dataset
    Explore at:
    csv(20311), csv, csv(18054), csv(8924), csv(9354), .csv(19689), csv(18328), csv(18277), csv(13191), csv(16432), csv(88999)
    Dataset updated
    Nov 7, 2024
    Dataset authored and provided by
    Department of the Prime Minister and Cabinet
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Area covered
    Australia
    Description

    The Department of the Prime Minister and Cabinet is no longer maintaining this dataset. If you would like to take ownership of this dataset for ongoing maintenance please contact us.

    PLEASE READ BEFORE USING

    The data format has been updated to align with a tidy data style (http://vita.had.co.nz/papers/tidy-data.html).

    The data in this dataset is manually collected and combined in a csv format from the following state and territory portals:

    The data API by default returns only the first 100 records. The JSON response will contain a key that shows the link for the next page of records. Alternatively you can view all records by updating the limit on the endpoint or using a query to select all records, i.e. /api/3/action/datastore_search_sql?sql=SELECT * from "{{resource_id}}".
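    For example, the full set of records can be paged through with the limit/offset parameters of the datastore API; the resource_id below is a placeholder for the holidays resource you want.

    ```python
    import requests

    BASE = "https://data.gov.au/data/api/3/action/datastore_search"
    params = {"resource_id": "RESOURCE_ID_GOES_HERE", "limit": 100, "offset": 0}

    records = []
    while True:
        result = requests.get(BASE, params=params, timeout=30).json()["result"]
        records.extend(result["records"])
        if not result["records"] or len(records) >= result.get("total", 0):
            break
        params["offset"] += params["limit"]

    print(len(records), "public holiday records retrieved")
    ```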

  13. Loughborough University East Midlands campus meteorological data, 2008-2021

    • repository.lboro.ac.uk
    xlsx
    Updated Apr 1, 2025
    Cite
    Richard Hodgkins (2025). Loughborough University East Midlands campus meteorological data, 2008-2021 [Dataset]. http://doi.org/10.17028/rd.lboro.28704884.v1
    Explore at:
    xlsx
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    Loughborough University
    Authors
    Richard Hodgkins
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Loughborough, East Midlands
    Description

    The weather station on the campus of Loughborough University, in the East Midlands of the UK, had fallen into disuse and disrepair by the mid-2000s, but in 2007 the availability of infrastructure funding made it possible to re-establish regular weather observation with new equipment. The meteorological dataset subsequently collected at this facility between 2008 and 2021 is archived here. The dataset comes as fourteen Excel (.xlsx) files of annual data, with explanatory notes in each.

    Site description

    The campus weather station is located at latitude 52.7632°, longitude -1.235° and 68 m a.s.l., in a dedicated paddock on a green space near the centre-east boundary of the campus. A cabin, which houses power and network points, sits 10 m to the northeast of the main meteorological instrument tower. The paddock is otherwise mostly open on an arc from the northwest to the northeast, but on the other sides there are fruit trees (mainly varieties of prunus domestica) at distances of 13–16 m, forming part of the university's "Fruit Routes" biodiversity initiative.

    Data collection

    Instruments were fixed to a 3 m lattice mast which is concreted into the ground in the centre of the paddock described above. Up to late July 2013, the instruments were controlled by a solar-charged, battery-powered Campbell Scientific CR1000 data logger, and periodically manually downloaded. From early November 2013, this logger was replaced with a Campbell Scientific CR3000, run from the mains power supply from the cabin and connected to the campus network by ethernet. At the same time, the station's Young 01503 Wind Monitor was replaced by a Gill WindSonic ultrasonic anemometer. This combination remained in place for the rest of the measurement period described here. Frustratingly, the CS215 temperature/relative humidity sensor failed shortly before the peak of the 2018 heatwave, and had to be replaced with another CS215. Likewise, the ARG100 rain gauge was replaced in 2011 and 2016. The main cause of data gaps is the unreliable power supply from the cabin, particularly in 2013 and 2021 (the latter leading to the complete replacement of the cabin and all other equipment). Furthermore, even though the post-2013 CR3000 logger had a backup battery, it sometimes failed to restart after mains power was lost, yielding data gaps until it was manually restarted. Nevertheless, out of 136 instrument-years of deployment, only 36 are less than 90% complete, and 21 less than 75% complete.

    Data processing

    Data retrieved manually or downloaded remotely were filtered for invalid measurements. The 15-minute data were then processed to daily and monthly values, using the pivot table function in Microsoft Excel. Most variables could be output simply as midnight-to-midnight daily means (e.g. solar and net radiation, wind speed). However, certain variables needed to be referred to the UK and Ireland standard ‘Climatological Day’ (Burt, 2012:272), 0900-0900: namely, air temperature minimum and maximum, plus rainfall total. The procedure for this follows Burt (2012; https://www.measuringtheweather.net/) and requires the insertion of additional date columns into the spreadsheet, to define two further, separate ‘Climate Dates’ for maximum temperature and rainfall total (the 24 hours commencing at 0900 on the date given, ‘ClimateDateMax’), and for minimum temperatures (24 hours ending at 0900 on the date given, ‘ClimateDateMin’).

    For the archived data, in the spreadsheet tabs labelled ‘Output - Daily 09-09 minima’, the pivot table function derives daily minimum temperatures by the correct 0900-0900 date, given by the ClimateDateMin variable. Similarly, in the tabs labelled ‘Output - Daily 09-09 maxima’, the pivot table function derives daily maximum temperatures and daily rainfall totals by the correct 0900-0900 date, given by the ClimateDateMax variable. Then in the tabs labelled ‘Output - Daily 00-00 means’, variables with midnight-to-midnight means use the unmodified date variable. To take into account the effect of missing data, the tab ‘Completeness’ again uses a pivot table to count the numbers of daily and monthly observations where the 15-minute data are not at least 99.99% complete. Values are only entered into the ‘Daily data’ tab of the archived spreadsheets where 15-minute data are at least 75% complete; values are only entered into ‘Monthly data’ tabs where daily data are at least 75% complete.

    Wind directions are particularly important in UK meteorology because they indicate the origin of air masses with potentially contrasting characteristics. But wind directions are not averaged in the same way as other variables, as they are measured on a circular scale. Instead, 15-minute wind direction data in degrees are converted to 16 compass points (the formula is included in the spreadsheets), and a pivot table is used to summarise these into wind speed categories, giving the frequency and strength of winds by compass point.

    In order to evaluate the reliability of the collected dataset, it was compared to equivalent variables from the HadUK-Grid dataset (Hollis et al., 2019). HadUK-Grid is a collection of gridded climate variables derived from the network of UK land surface observations, which have been interpolated from meteorological station data onto a uniform grid to provide coherent coverage across the UK at 1 km x 1 km resolution. Daily and monthly air temperature and rainfall variables from the HadUK-Grid v1.1.0.0 Met Office (2022) were downloaded from the Centre for Environmental Data Analysis (CEDA) archive (https://catalogue.ceda.ac.uk/uuid/bbca3267dc7d4219af484976734c9527/). Then the grid square containing the campus weather station was identified using the Point Subset Tool of the NOAA Weather and Climate Toolkit (https://www.ncdc.noaa.gov/wct/index.php) in order to retrieve data from that specific location. Daily and monthly HadUK-Grid data are included in the spreadsheets for convenience.

    Campus temperatures are slightly, but consistently, higher than those indicated by HadUK-Grid, while HadUK-Grid rainfall is on average almost 10% higher than that recorded on the campus. Trend-free statistical relationships between campus and HadUK-Grid data imply that there is unlikely to be any significant temporal bias in the campus dataset.

    References

    Burt, S. (2012). The Weather Observer's Handbook. Cambridge University Press, https://doi.org/10.1017/CBO9781139152167.
    Hollis, D., McCarthy, M., Kendon, M., Legg, T., Simpson, I. (2019). HadUK-Grid: A new UK dataset of gridded climate observations. Geoscience Data Journal 6, 151–159, https://doi.org/10.1002/gdj3.78.
    Met Office; Hollis, D.; McCarthy, M.; Kendon, M.; Legg, T. (2022). HadUK-Grid Gridded Climate Observations on a 1km grid over the UK, v1.1.0.0 (1836-2021). NERC EDS Centre for Environmental Data Analysis, https://dx.doi.org/10.5285/bbca3267dc7d4219af484976734c9527.
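    A minimal pandas sketch of the 0900-0900 'climatological day' aggregation described above, as an alternative to the Excel pivot tables; the file name and the datetime / air_temp / rainfall column names are assumptions about the 15-minute sheets.

    ```python
    import pandas as pd

    # Placeholder file and column names for one year of 15-minute observations.
    df = pd.read_excel("campus_weather_2019.xlsx", parse_dates=["datetime"])

    # ClimateDateMax: the 24 hours commencing 0900 on the stated date (max temp, rainfall total).
    df["climate_date_max"] = (df["datetime"] - pd.Timedelta(hours=9)).dt.normalize()
    # ClimateDateMin: the 24 hours ending 0900 on the stated date (min temp).
    df["climate_date_min"] = df["climate_date_max"] + pd.Timedelta(days=1)

    daily_max = df.groupby("climate_date_max")["air_temp"].max()
    daily_rain = df.groupby("climate_date_max")["rainfall"].sum()
    daily_min = df.groupby("climate_date_min")["air_temp"].min()
    ```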

  14. Joint Need Assessment Palu Earthquake & Tsunami, 28 September 2018

    • cloud.csiss.gmu.edu
    xlsx
    Updated Jun 18, 2019
    Cite
    UN Humanitarian Data Exchange (2019). Joint Need Assessment Palu Earthquake & Tsunami, 28 September 2018 [Dataset]. https://cloud.csiss.gmu.edu/uddi/mn_MN/dataset/raw-data-of-joint-need-assessment-palu-earthquake-tsunami-28-september-2018
    Explore at:
    xlsx(279895)
    Dataset updated
    Jun 18, 2019
    Dataset provided by
    UN Humanitarian Data Exchange
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Palu City
    Description

    Excel data. Link to the Power BI dashboard, with the data live online: https://bit.ly/2RnUd7x

  15. Bank Loan Analysis Project in Excel

    • kaggle.com
    Updated May 4, 2024
    Cite
    Sanjana Murthy (2024). Bank Loan Analysis Project in Excel [Dataset]. https://www.kaggle.com/datasets/sanjanamurthy392/bank-loan-analysis-project/discussion?sort=undefined
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    May 4, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Sanjana Murthy
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    About the datasets:
    • Domain: Finance
    • Project: Bank loan of customers
    • Datasets: Finance_1.xlsx & Finance_2.xlsx
    • Dataset type: Excel data
    • Dataset size: each Excel file has 39k+ records

    KPIs:
    1. Year-wise loan amount stats
    2. Grade and sub-grade wise revol_bal
    3. Total payment for verified status vs. total payment for non-verified status
    4. State-wise loan status
    5. Month-wise loan status
    6. Get more insights based on your understanding of the data

    Process:
    1. Understanding the problem
    2. Data collection
    3. Data cleaning
    4. Exploring and analyzing the data
    5. Interpreting the results

    The project uses Power Query, Power Pivot, merged data, clustered bar and column charts, line charts, a 3D pie chart, a dashboard, slicers, a timeline, and formatting techniques.
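    A small pandas sketch of KPI 1 (year-wise loan amount stats) outside Excel; the issue_date and loan_amnt column names are assumptions about the workbook layout.

    ```python
    import pandas as pd

    # Combine both workbooks; column names are placeholders.
    loans = pd.concat([pd.read_excel("Finance_1.xlsx"), pd.read_excel("Finance_2.xlsx")])
    loans["year"] = pd.to_datetime(loans["issue_date"]).dt.year

    yearly = loans.groupby("year")["loan_amnt"].agg(["count", "sum", "mean"])
    print(yearly)
    ```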

  16. McKinsey Solve Assessment Data (2018–2025)

    • kaggle.com
    Updated May 7, 2025
    Cite
    Oluwademilade Adeniyi (2025). McKinsey Solve Assessment Data (2018–2025) [Dataset]. http://doi.org/10.34740/kaggle/dsv/11720554
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    May 7, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Oluwademilade Adeniyi
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    McKinsey Solve Global Assessment Dataset (2018–2025)

    🧠 Context

    McKinsey's Solve is a gamified problem-solving assessment used globally in the consulting firm’s recruitment process. This dataset simulates assessment results across geographies, education levels, and roles over a 7-year period. It aims to provide deep insights into performance trends, candidate readiness, resume quality, and cognitive task outcomes.

    📌 Inspiration & Purpose

    Inspired by McKinsey’s real-world assessment framework, this dataset was designed to enable:
    • Exploratory Data Analysis (EDA)
    • Recruitment trend analysis
    • Gamified performance modelling
    • Dashboard development in Excel / Power BI
    • Resume and education impact evaluation
    • Regional performance benchmarking
    • Data storytelling for portfolio projects

    Whether you're building dashboards or training models, this dataset offers practical and relatable data for HR analytics and consulting use cases.

    🔍 Dataset Source

    • Data generated by Oluwademilade Adeniyi (Demibolt) with the assistance of ChatGPT by OpenAI
    • Structure and logic inspired by McKinsey’s public-facing Solve information, including role categories, game types (Ecosystem, Redrock, Seawolf), education tiers, and global office locations
    • The entire dataset is synthetic and designed for analytical learning, ethical use, and professional development

    đŸ§Ÿ Dataset Structure

    This dataset includes 4,000 rows and the following columns:
    • Testtaker ID: Unique identifier
    • Country / Region: Geographic segmentation
    • Gender / Age: Demographics
    • Year: Assessment year (2018–2025)
    • Highest Level of Education: From high school to PhD / MBA
    • School or University Attended: Mapped to country and education level
    • First-generation University Student: Yes/No
    • Employment Status: Student, Employed, Unemployed
    • Role Applied For and Department / Interest: Business/tech disciplines
    • Past Test Taker: Indicates repeat attempts
    • Prepared with Online Materials: Indicates test prep involvement
    • Desired Office Location: Mapped to McKinsey's international offices
    • Ecosystem / Redrock / Seawolf (%): Game performance scores
    • Time Spent on Each Game (mins)
    • Total Product Score: Average of the 3 game scores
    • Process Score: A secondary assessment component
    • Resume Score: Scored based on education prestige, role fit, and clarity
    • Total Assessment Score (%): Final decision metric
    • Status (Pass/Fail): Based on total score ≄ 75%
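    A short sketch of how the Total Product Score and the pass threshold above could be reproduced in pandas; the column names mirror the list but may differ from the actual CSV header.

    ```python
    import pandas as pd

    df = pd.read_csv("mckinsey_solve_assessment.csv")  # placeholder file name

    # Total Product Score is the average of the three game scores.
    df["Total Product Score"] = df[["Ecosystem (%)", "Redrock (%)", "Seawolf (%)"]].mean(axis=1)
    # Status is Pass when the Total Assessment Score is at least 75%.
    df["Status (Pass/Fail)"] = (df["Total Assessment Score (%)"] >= 75).map(
        {True: "Pass", False: "Fail"}
    )
    print(df["Status (Pass/Fail)"].value_counts())
    ```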

    ✅ Why Use This Dataset

    • Benchmark educational and regional trends in global assessments
    • Build KPI cards, donut charts, histograms, or speedometer visuals
    • Train pass/fail classifiers or regression models
    • Segment job applicants by role, location, or game behaviour
    • Showcase portfolio skills across Excel, SQL, Power BI, Python, or R
    • Test dashboards or predictive logic in a business-relevant scenario

    💡 Credit & Collaboration

    • Data Creator: Oluwademilade Adeniyi (Me) (LinkedIn, Twitter, GitHub, Medium)
    • Collaborator: ChatGPT by OpenAI
    • Inspired by: McKinsey & Company’s Solve Assessment
  17. Candidatos - 1947

    • data.amerigeoss.org
    txt
    Updated Oct 31, 2022
    + more versions
    Cite
    Brazil (2022). Candidatos - 1947 [Dataset]. https://data.amerigeoss.org/tl/dataset/showcases/candidatos-1947
    Explore at:
    txt
    Dataset updated
    Oct 31, 2022
    Dataset provided by
    Brazil
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Legendas (party tickets) - Vagas (vacancies)

    ATTENTION!

    USE APPROPRIATE SOFTWARE TO VIEW THE DATA IN FULL!

    Data files with a large number of rows (particularly those with .csv and .txt extensions) may not be displayed in their entirety, depending on the software used. For example, Microsoft Excel has a limit of 1,048,576 rows. To avoid loading the file incompletely, use software that supports large volumes of data or apply specific techniques for processing it.

    Here are some alternatives:

    • Use statistical analysis, Business Intelligence (BI), database, or data analysis tools;
    • Open the file in Excel using the Get Data option: use Power Query to load the complete dataset and analyse it with pivot tables;
    • Use text editors with support for large volumes of data;
    • Use any other tool that supports large data files.
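    One way to work past the Excel row limit mentioned above is to stream the file in chunks with pandas; the file name, separator, and encoding below are illustrative and should be checked against the actual download.

    ```python
    import pandas as pd

    counts = {}
    # Process the candidate file in chunks so it never has to fit in memory at once.
    for chunk in pd.read_csv("candidatos_1947.txt", sep=";", encoding="latin-1",
                             header=None, chunksize=200_000):
        for value, n in chunk[0].value_counts().items():
            counts[value] = counts.get(value, 0) + n

    print(len(counts), "distinct values in the first column")
    ```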
  18. List of all countries with their 2 digit codes (ISO 3166-1)

    • datahub.io
    Updated Aug 29, 2017
    Cite
    (2017). List of all countries with their 2 digit codes (ISO 3166-1) [Dataset]. https://datahub.io/core/country-list
    Explore at:
    Dataset updated
    Aug 29, 2017
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0, http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Description

    ISO 3166-1-alpha-2 English country names and code elements. This list states the country names (official short names in English) in alphabetical order as given in ISO 3166-1 and the corresponding ISO 3166-1-alpha-2 code elements.

  19. Timac Fuel Distribution & Sales Dataset –

    • kaggle.com
    Updated May 31, 2025
    Cite
    Fatolu Peter (2025). Timac Fuel Distribution & Sales Dataset – [Dataset]. https://www.kaggle.com/datasets/olagokeblissman/timac-fuel-distribution-and-sales-dataset/suggestions
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    May 31, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Fatolu Peter
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    📝 Dataset Overview: This dataset represents real-world, enhanced transactional data from Timac Global Concept, one of Nigeria’s prominent players in fuel and petroleum distribution. It includes comprehensive sales records across multiple stations and product categories (AGO, PMS, Diesel, Lubricants, LPG), along with revenue and shift-based operational tracking.

    The dataset is ideal for analysts, BI professionals, and data science students aiming to explore fuel economy trends, pricing dynamics, and operational analytics.

    🔍 Dataset Features:
    • Date: Transaction date
    • Station_Name: Name of the fuel station
    • AGO_Sales (L): Automotive Gas Oil sold in liters
    • PMS_Sales (L): Premium Motor Spirit sold in liters
    • Lubricant_Sales (L): Lubricant sales in liters
    • Diesel_Sales (L): Diesel sold in liters
    • LPG_Sales (kg): Liquefied Petroleum Gas sold in kilograms
    • Total_Revenue (₩): Total revenue generated in Nigerian Naira
    • AGO_Price: Price per liter of AGO
    • PMS_Price: Price per liter of PMS
    • Lubricant_Price: Unit price of lubricants
    • Diesel_Price: Price per liter of diesel
    • LPG_Price: Price per kg of LPG
    • Product_Category: Fuel product type
    • Shift: Work shift (e.g., Morning, Night)
    • Supervisor: Supervisor in charge during shift
    • Weekday: Day of the week for each transaction
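    A quick pandas sketch of a station-level revenue view using the columns listed above; the file name is a placeholder for your export of the dataset, and the header spellings should be checked.

    ```python
    import pandas as pd

    sales = pd.read_csv("timac_fuel_sales.csv", parse_dates=["Date"])  # placeholder file name

    # Total revenue by station and weekday, as a starting point for a dashboard.
    station_revenue = (
        sales.groupby(["Station_Name", "Weekday"])["Total_Revenue (₩)"]
             .sum()
             .unstack("Weekday")
    )
    print(station_revenue)
    ```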

    🎯 Use Cases: Build Power BI dashboards to track fuel sales trends and shifts

    Perform revenue forecasting using time series models

    Analyze price dynamics vs sales volume

    Visualize station-wise performance and weekday sales patterns

    Conduct operational audits per supervisor or shift

    🧰 Best Tools for Analysis: Power BI, Tableau

    Python (Pandas, Matplotlib, Plotly)

    Excel for pivot tables and summaries

    SQL for fuel category insights

    đŸ‘€ Created By: Fatolu Peter (Emperor Analytics), a data analyst focused on real-life data transformation in Nigeria’s petroleum, healthcare, and retail sectors. This is Project 11 in my growing portfolio of end-to-end analytics challenges.

    ✅ LinkedIn Post: ⛜ New Dataset Alert – Fuel Economy & Sales Data Now on Kaggle! 📊 Timac Fuel Distribution & Revenue Dataset (Nigeria – 500 Records) 🔗 Explore the data here

    Looking to practice business analytics, revenue forecasting, or operational dashboards?

    This dataset contains:

    Daily sales of AGO, PMS, Diesel, LPG & Lubricants

    Revenue breakdowns by station

    Shift & supervisor tracking

    Fuel prices across product categories

    You can use this to: ✅ Build Power BI sales dashboards ✅ Create fuel trend visualizations ✅ Analyze shift-level profitability ✅ Forecast revenue using Python or Excel

    Let’s put real Nigerian data to real analytical work. Tag me when you build with it—I’d love to celebrate your work!

    FuelAnalytics #KaggleDatasets #PowerBI #PetroleumIndustry #NigeriaData #RevenueForecasting #EmperorAnalytics #FatoluPeter #Project11 #TimacGlobal #RealWorldData

  20. Data from: Resultados - 2020

    • gimi9.com
    Updated Oct 25, 2022
    + more versions
    Cite
    (2022). Resultados - 2020 [Dataset]. https://gimi9.com/dataset/br_resultados-2020/
    Explore at:
    Dataset updated
    Oct 25, 2022
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Details of the count by municipality and zone/section - Nominal/party votes by municipality and zone - Votes by polling section

    ATTENTION! USE APPROPRIATE SOFTWARE TO VIEW THE DATA IN FULL!

    Data files with a large number of rows (particularly those with .csv and .txt extensions) may not be displayed in their entirety, depending on the software used. For example, Microsoft Excel has a limit of 1,048,576 rows. To avoid loading the file incompletely, use software that supports large volumes of data or apply specific techniques for processing it. Here are some alternatives:

    • Use statistical analysis, Business Intelligence (BI), database, or data analysis tools;
    • Open the file in Excel using the Get Data option: use Power Query to load the complete dataset and analyse it with pivot tables;
    • Use text editors with support for large volumes of data;
    • Use any other tool that supports large data files.
