36 datasets found
  1. Data Warehousing and Data Mining (Old), 7th Semester, Computer Science and...

    • paper.erudition.co.in
    html
    Updated Nov 23, 2025
    + more versions
    Cite
    Einetic (2025). Data Warehousing and Data Mining (Old), 7th Semester, Computer Science and Engineering, MAKAUT | Erudition Paper [Dataset]. https://paper.erudition.co.in/makaut/btech-in-computer-science-and-engineering/7/data-warehousing-and-data-mining
    Explore at:
    Available download formats: html
    Dataset updated
    Nov 23, 2025
    Dataset authored and provided by
    Einetic
    License

    https://paper.erudition.co.in/terms

    Description

    Question Paper Solutions of Data Warehousing and Data Mining (Old), 7th Semester, Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology

  2. MedSynora DW - Medical Data Warehouse

    • kaggle.com
    zip
    Updated Mar 14, 2025
    Cite
    BenMebrar (2025). MedSynora DW - Medical Data Warehouse [Dataset]. https://www.kaggle.com/datasets/mebrar21/medsynora-dw
    Explore at:
    Available download formats: zip (89253728 bytes)
    Dataset updated
    Mar 14, 2025
    Authors
    BenMebrar
    License

    http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    MedSynora DW – A Comprehensive Synthetic Hospital Patient Data Warehouse

    Overview
    MedSynora DW is a large synthetic dataset that simulates the operational flow of a large hospital from a patient-centred perspective. The dataset covers patient encounters, treatments, lab tests, vital signs, cost details, and more over the full year of 2024. It was developed to support data science, machine learning, and business intelligence projects in the healthcare domain.

    Project Highlights
    • Realistic Simulation: Generated using advanced Python scripts and statistical models, the dataset reflects realistic hospital operations and patient flows without using any real patient data.
    • Comprehensive Schema: The data warehouse includes multiple fact and dimension tables:
      o Fact Tables: Encounter, Treatment, Lab Tests, Special Tests, Vitals, and Cost.
      o Dimension Tables: Patient, Doctor, Disease, Insurance, Room, Date, Chronic Diseases, Allergies, and Additional Services.
      o Bridge Tables: For managing many-to-many relationships (e.g., doctors per encounter) and some others…
    • Synthetic & Scalable: The dataset is entirely synthetic, ensuring privacy and compliance. It is designed to be scalable; the current version simulates around 145,000 encounter records.

    Data Generation
    • Data Sources & Methods: Data is generated using a collection of Python libraries. Highly customized algorithms simulate realistic patient demographics, doctor assignments, treatment choices, lab test results, cost breakdowns, and more.
    • Diverse Scenarios: With over 300 diseases, thousands of treatment variations, and dozens of lab and special tests, the dataset offers rich variability to support complex analytical projects.

    How to Use This Dataset
    • For Data Modeling & ETL Testing: Import the CSV files into your favorite database system (e.g., PostgreSQL, MySQL, or directly into a BI tool like Power BI) and set up relationships as described in the accompanying documentation. A minimal loading sketch is shown below.
    • For Machine Learning Projects: Use the dataset to build predictive models related to patient outcomes, cost analysis, or treatment efficacy.
    • For Educational Purposes: Ideal for learning about data warehousing, star schema design, and advanced analytics in healthcare.
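
    As a rough illustration of the ETL-testing use case, the sketch below loads a few of the CSVs with pandas and joins a fact table to two dimensions. The file and key names (FactEncounter.csv, patient_id, doctor_id, and so on) are placeholders rather than the dataset's actual schema; check the accompanying documentation for the real names.

      # Hypothetical star-schema join; file and key names are placeholders.
      import pandas as pd

      encounters = pd.read_csv("FactEncounter.csv")   # fact table
      patients = pd.read_csv("DimPatient.csv")        # dimension table
      doctors = pd.read_csv("DimDoctor.csv")          # dimension table

      # Enrich the fact rows with dimension attributes before loading into a database or BI tool.
      enriched = (
          encounters
          .merge(patients, on="patient_id", how="left")
          .merge(doctors, on="doctor_id", how="left")
      )

      # Quick sanity checks: row counts and share of missing values after the joins.
      print(enriched.shape)
      print(enriched.isna().mean().sort_values(ascending=False).head())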

    Final Note
    MedSynora DW offers a unique opportunity to experiment with a comprehensive, realistic hospital data warehouse without compromising real patient information. Enjoy exploring, analyzing, and building with this dataset, and feel free to reach out if you have any questions or suggestions. In particular, feedback from experts in the field about inconsistencies, deficiencies, or possible improvements will help shape future versions of the dataset.

  3. Module II

    • paper.erudition.co.in
    html
    Updated Nov 23, 2025
    + more versions
    Cite
    Einetic (2025). Module II [Dataset]. https://paper.erudition.co.in/makaut/btech-in-computer-science-and-engineering/7/data-warehousing-and-data-mining
    Explore at:
    Available download formats: html
    Dataset updated
    Nov 23, 2025
    Dataset authored and provided by
    Einetic
    License

    https://paper.erudition.co.in/terms

    Description

    Question Paper Solutions of chapter Module II of Data Warehousing and Data Mining, 7th Semester, Computer Science and Engineering

  4. Data from: The benefit of augmenting open data with clinical data-warehouse...

    • search.dataone.org
    • data.niaid.nih.gov
    • +1more
    Updated Jul 22, 2025
    Cite
    Thomas Ferté; Vianney Jouhet; Romain Griffier; Boris Hejblum; Rodolphe Thiébaut; , Bordeaux University Hospital Covid-19 Crisis Task Force (2025). The benefit of augmenting open data with clinical data-warehouse EHR for forecasting SARS-CoV-2 hospitalizations in Bordeaux area, France [Dataset]. http://doi.org/10.5061/dryad.hhmgqnkkx
    Explore at:
    Dataset updated
    Jul 22, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Thomas Ferté; Vianney Jouhet; Romain Griffier; Boris Hejblum; Rodolphe Thiébaut; , Bordeaux University Hospital Covid-19 Crisis Task Force
    Time period covered
    Jan 1, 2022
    Area covered
    Bordeaux
    Description

    Objective: The aim of this study was to develop an accurate regional forecast algorithm to predict the number of hospitalized patients and to assess the benefit of Electronic Health Record (EHR) information for those predictions.

    Materials and Methods: Aggregated data from public SARS-CoV-2 and weather databases and from the data warehouse of the Bordeaux hospital were extracted from May 16, 2020, to January 17, 2022. The outcomes were the number of hospitalized patients at the Bordeaux Hospital at 7 and 14 days. We compared the performance of different data sources, feature engineering approaches, and machine learning models.

    Results: During the 88-week period, 2561 hospitalizations due to COVID-19 were recorded at the Bordeaux Hospital. The model achieving the best performance was an elastic-net penalized linear regression using all available data, with a median relative error at 7 and 14 days of 0.136 [0.063; 0.223] and 0.198 [0.105; 0.302] hospitalizations, respectively. Electronic health r...

    Aggregated data from 2020-05-16 to 2022-01-17 regarding the Bordeaux Hospital EHR. The Bordeaux hospital data warehouse was used during the pandemic to describe the current state of the epidemic at the hospital level on a daily basis. Those data were then used in the forecast model, including: hospitalizations, hospital and ICU admissions and discharges, ambulance service notes, and emergency unit notes. Concepts related to COVID-19 were extracted from notes by dictionary-based approaches (e.g., cough, dyspnoea, covid-19). Dictionaries were manually created based on manual chart review to identify terms used by practitioners. Then, the number and proportion of ambulance service calls or emergency unit hospitalizations mentioning concepts related to COVID-19 were extracted. Due to different data acquisition mechanisms, there was a delay between the occurrence of events and data acquisition: 1 day for EHR data, 5 days for department hospitalizations and RT-PCR, 4 days for weather, 2...

    Data are stored in a .rdata file. Please use R (https://www.r-project.org/) software to open the data.
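
    The authors recommend R's load() for the .rdata file. As a hedged alternative for Python users, the pyreadr package can usually read .rdata files that contain plain data frames (it cannot parse every R object type); the file name below is a placeholder.

      # Assumes the pyreadr package (pip install pyreadr); R's load() remains the
      # authors' recommended way to open the data. The file name is a placeholder.
      import pyreadr

      result = pyreadr.read_r("dryad_forecast_data.rdata")  # dict-like: object name -> DataFrame
      print(list(result.keys()))                            # names of the R objects found in the file
      df = next(iter(result.values()))                      # first object as a pandas DataFrame
      print(df.head())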

  5. Veterans Affairs Corporate Data Warehouse

    • atlaslongitudinaldatasets.ac.uk
    url
    Updated Oct 21, 2024
    Cite
    United States Department of Veterans Affairs (VA) (2024). Veterans Affairs Corporate Data Warehouse [Dataset]. https://atlaslongitudinaldatasets.ac.uk/datasets/va-cdw
    Explore at:
    Available download formats: url
    Dataset updated
    Oct 21, 2024
    Dataset provided by
    Atlas of Longitudinal Datasets
    Authors
    United States Department of Veterans Affairs (VA)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States of America
    Variables measured
    None
    Measurement technique
    Healthcare records, Secondary data, Registry, None
    Dataset funded by
    National Institutes of Health (NIH)
    Description

    VA CDW is a repository comprising data from multiple Veterans Health Administration (VHA) clinical and administrative systems. VHA is one of the largest integrated healthcare systems in the United States, with data from over 20 years of sustained electronic health record (EHR) use. VA CDW was developed in 2006 to accommodate the massive amounts of data being generated and to streamline the process from knowledge discovery to application. The registry consists of approximately 7,500 databases hosted across 86 servers. Information that appears in the VA CDW includes demographic information, information on medication dispensing from VA pharmacies, laboratory test result information, free text from progress notes and radiology reports, as well as billing and claims-related data.

  6. Area Resource File (ARF)

    • dataverse.harvard.edu
    Updated May 30, 2013
    Cite
    Anthony Damico (2013). Area Resource File (ARF) [Dataset]. http://doi.org/10.7910/DVN/8NMSFV
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 30, 2013
    Dataset provided by
    Harvard Dataverse
    Authors
    Anthony Damico
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    analyze the area resource file (arf) with r

    the arf is fun to say out loud. it's also a single county-level data table with about 6,000 variables, produced by the united states health resources and services administration (hrsa). the file contains health information and statistics for over 3,000 us counties. like many government agencies, hrsa provides only a sas importation script and an ascii file. this new github repository contains two scripts:

    2011-2012 arf - download.R
    • download the zipped area resource file directly onto your local computer
    • load the entire table into a temporary sql database
    • save the condensed file as an R data file (.rda), comma-separated value file (.csv), and/or stata-readable file (.dta)

    2011-2012 arf - analysis examples.R
    • limit the arf to the variables necessary for your analysis
    • sum up a few county-level statistics
    • merge the arf onto other data sets, using both fips and ssa county codes
    • create a sweet county-level map

    click here to view these two scripts. for more detail about the area resource file (arf), visit the arf home page and the hrsa data warehouse.

    notes: the arf may not be a survey data set itself, but it's particularly useful to merge onto other survey data. confidential to sas, spss, stata, and sudaan users: time to put down the abacus. time to transition to r. :D
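
    a rough python sketch of the download-and-extract step described above, with a placeholder url (the repository's own R scripts remain the canonical workflow):

      # Placeholder URL -- not the real HRSA download location.
      import io
      import zipfile

      import requests

      ARF_ZIP_URL = "https://example.org/arf/2011-2012-arf.zip"

      resp = requests.get(ARF_ZIP_URL, timeout=120)
      resp.raise_for_status()
      with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
          print(zf.namelist())      # inspect the shipped ascii and documentation files
          zf.extractall("arf_raw")  # unpack locally before loading into a database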

  7. Adventure Works DW 2008

    • kaggle.com
    zip
    Updated Oct 5, 2024
    Cite
    James Vasanth (2024). Adventure Works DW 2008 [Dataset]. https://www.kaggle.com/datasets/jamesvasanth/adventure-works-dw-2008
    Explore at:
    Available download formats: zip (9400055 bytes)
    Dataset updated
    Oct 5, 2024
    Authors
    James Vasanth
    Description

    The AdventureWorks DW 2008 dataset, originally provided by Microsoft, has been converted into CSV files for easier use, making it accessible for data exploration on platforms like Kaggle. The dataset is licensed under the Microsoft Public License (MS-PL), which is a permissive open-source license. This means you are free to use, modify, and share the dataset, whether for personal or commercial purposes, provided that you include the original license terms. However, it's important to note that the dataset is provided "as-is" without any warranty or guarantee from Microsoft.

    I really enjoy working with the AdventureWorks DW 2008 dataset. It offers a rich and well-structured environment that's perfect for writing and learning SQL queries. The data warehouse includes a variety of tables, such as facts and dimensions, making it an excellent resource for both beginners and experienced SQL users to practice querying and exploring relational databases.

    Now, with the dataset available in CSV format, it can be easily used with Python for exploratory data analysis (EDA), and it’s also well-suited for applying machine learning techniques such as regression, classification, and clustering.
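
    As a minimal starting point for EDA, the sketch below assumes the Kaggle CSVs keep the AdventureWorks table names (e.g., FactInternetSales) and standard columns such as SalesAmount; adjust to the actual files in the download.

      # Minimal EDA sketch; file and column names are assumptions about this CSV export.
      import pandas as pd

      sales = pd.read_csv("FactInternetSales.csv")
      print(sales.shape)
      print(sales.dtypes)

      # Example aggregate, guarded in case the column names differ in this export.
      if {"SalesAmount", "CurrencyKey"}.issubset(sales.columns):
          print(sales.groupby("CurrencyKey")["SalesAmount"].sum().sort_values(ascending=False))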

    If you’re planning to dive into the data, all the best! It's a fantastic resource to learn from and experiment with. Cheers!

  8. Data storage locations in global organizations 2020-2022

    • statista.com
    Updated Jul 16, 2020
    Cite
    Statista (2020). Data storage locations in global organizations 2020-2022 [Dataset]. https://www.statista.com/statistics/1186391/worldwide-data-storage-location/
    Explore at:
    Dataset updated
    Jul 16, 2020
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2020
    Area covered
    Worldwide
    Description

    In 2022, respondents forecast that internally managed enterprise data centers will continue to be the main location for data storage within enterprises, with ** percent of respondents answering as such. However, it is important to note that most locations for data storage will remain similar from 2020 to 2022, according to respondents.

  9. Chronic Conditions Data Warehouse

    • stanford.redivis.com
    application/jsonl +7
    Updated Jan 23, 2025
    Cite
    Stanford Center for Population Health Sciences (2025). Chronic Conditions Data Warehouse [Dataset]. http://doi.org/10.57761/a8rd-2v95
    Explore at:
    Available download formats: sas, parquet, stata, csv, application/jsonl, avro, arrow, spss
    Dataset updated
    Jan 23, 2025
    Dataset provided by
    Redivis Inc.
    Authors
    Stanford Center for Population Health Sciences
    Description

    Abstract

    This dataset includes data that is publicly accessible and obtained from the CMS Chronic Conditions Data Warehouse: https://www2.ccwdata.org/web/guest/home

    This data is created by the Centers for Medicare and Medicaid Services for research purposes, and more detail about the data, conditions, and algorithms can be found on the website. Some of the documentation from this website is included here, such as references for the algorithms; however, these documents are not exhaustive of the resources on the website.

    Note that CMS periodically updates the data, and users should check the website for the most recent updates. The methodology section below explains when the data was accessed for each table.

    Methodology

    Data for the tables "Chronic Conditions Long" and "Chronic Conditions Wide" come from the url: https://www2.ccwdata.org/web/guest/condition-categories-chronic

    Data for the tables "Chronic Conditions (Other) Long", "Chronic Conditions (Other) Wide ICD-9", "Chronic Conditions (Other) Wide ICD-10" come from the url: https://www2.ccwdata.org/web/guest/condition-categories-other

    There is an R script which scrapes the codes from this website listed in the documentation. This code was last run on January 23, 2025 and data reflects what was posted on the webpages as of this date.

    The "Long" versions of the data most closely mimic the original tables on the websites and include the reference period as well as the number/type of claims used for implementing the algorithm for each chronic condition. The "Wide" versions of the data are included as an alternative data format which some users may find easier to use. Note that the following conditions were removed from these datasets because the nuances and complexity of the algorithms for these conditions did not fit within the structure of the dataset: Benign Prostatic, Hyperplasia, Alcohol Use Disorder, Drug Use Disorder, HIV/AIDS, Liver Disease, Opioid Use Disorder, and Tobacco Use Disorder. The algorithms for these conditions can be found on the webpages.

  10. Traffic Signal Detector Count

    • data-waikatolass.opendata.arcgis.com
    • hub.arcgis.com
    Updated Apr 14, 2021
    Cite
    Hamilton City Council (2021). Traffic Signal Detector Count [Dataset]. https://data-waikatolass.opendata.arcgis.com/maps/363fa953fc014ea393b2d5b01ec1710f
    Explore at:
    Dataset updated
    Apr 14, 2021
    Dataset authored and provided by
    Hamilton City Council
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recorded volume data at SCATS intersections or pedestrian crossings in Hamilton. To get data for this dataset, please call the API directly, which talks to the HCC Data Warehouse: https://api.hcc.govt.nz/OpenData/get_traffic_signal_detector_count?Page=1&Start_Date=2020-10-01&End_Date=2020-10-02. For this API there are three mandatory parameters: Page, Start_Date, and End_Date. Sample values for these parameters are in the link above. When calling the API for the first time, please always start with Page 1. Then, from the returned JSON, you can see more information such as the total page count and page size (a minimal example request is shown below the column list). For help on using the API in your preferred data analysis software, please contact dale.townsend@hcc.govt.nz. NOTE: Anomalies and missing data may be present in the dataset.

    Column Info
    • Site_Number, int: SCATS ID - unique identifier
    • Detector_Number, int: detector number that the count is recorded to
    • Date, datetime: start of the 15-minute time interval that the count was recorded for
    • Count, int: number of vehicles that passed over the detector
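
    A minimal request against the endpoint documented above might look like the sketch below (assuming the requests library; the exact shape of the returned JSON beyond the documented fields is an assumption, so inspect the first response before parsing further).

      # Page, Start_Date and End_Date are the three mandatory parameters documented above.
      import requests

      BASE_URL = "https://api.hcc.govt.nz/OpenData/get_traffic_signal_detector_count"
      params = {"Page": 1, "Start_Date": "2020-10-01", "End_Date": "2020-10-02"}

      resp = requests.get(BASE_URL, params=params, timeout=60)
      resp.raise_for_status()
      payload = resp.json()
      print(type(payload))  # check for paging metadata (total page count, page size) before iterating
      # Expected fields per record (per the column info): Site_Number, Detector_Number, Date, Count.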

    Relationship
    

    This table references the table Traffic_Signal_Detector.

    Analytics
    

    For convenience Hamilton City Council has also built a Quick Analytics Dashboard over this dataset that you can access here.

    Disclaimer
    
    Hamilton City Council does not make any representation or give any warranty as to the accuracy or exhaustiveness of the data released for public download. Levels, locations and dimensions of works depicted in the data may not be accurate due to circumstances not notified to Council. A physical check should be made on all levels, locations and dimensions before starting design or works.
    
    Hamilton City Council shall not be liable for any loss, damage, cost or expense (whether direct or indirect) arising from reliance upon or use of any data provided, or Council's failure to provide this data.
    
    While you are free to crop, export and re-purpose the data, we ask that you attribute the Hamilton City Council and clearly state that your work is a derivative and not the authoritative data source. Please include the following statement when distributing any work derived from this data:
    
    ‘This work is derived entirely or in part from Hamilton City Council data; the provided information may be updated at any time, and may at times be out of date, inaccurate, and/or incomplete.'
    
  11. Storage and Transit Time Data and Code

    • zenodo.org
    zip
    Updated Nov 15, 2024
    + more versions
    Cite
    Andrew Felton; Andrew Felton (2024). Storage and Transit Time Data and Code [Dataset]. http://doi.org/10.5281/zenodo.14171251
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 15, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Andrew Felton; Andrew Felton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Author: Andrew J. Felton
    Date: 11/15/2024

    This R project contains the primary code and data (following pre-processing in Python) used for data production, manipulation, visualization, analysis, and figure production for the study entitled:

    "Global estimates of the storage and transit time of water through vegetation"

    Please note that 'turnover' and 'transit' are used interchangeably. Also please note that this R project has been updated multiple times as the analysis has been revised throughout the peer review process.

    #Data information:

    The data folder contains key data sets used for analysis. In particular:

    "data/turnover_from_python/updated/august_2024_lc/" contains the core datasets used in this study including global arrays summarizing five year (2016-2020) averages of mean (annual) and minimum (monthly) transit time, storage, canopy transpiration, and number of months of data able as both an array (.nc) or data table (.csv). These data were produced in python using the python scripts found in the "supporting_code" folder. The remaining files in the "data" and "data/supporting_data" folder primarily contain ground-based estimates of storage and transit found in public databases or through a literature search, but have been extensively processed and filtered here. The "supporting_data"" folder also contains annual (2016-2020) MODIS land cover data used in the analysis and contains separate filters containing the original data (.hdf) and then the final process (filtered) data in .nc format. The resulting annual land cover distributions were used in the pre-processing of data in python.

    #Code information

    Python scripts can be found in the "supporting_code" folder.

    Each R script in this project has a role:

    "01_start.R": This script sets the working directory, loads in the tidyverse package (the remaining packages in this project are called using the `::` operator), and can run two other scripts: one that loads the customized functions (02_functions.R) and one for importing and processing the key dataset for this analysis (03_import_data.R).

    "02_functions.R": This script contains custom functions. Load this using the `source()` function in the 01_start.R script.

    "03_import_data.R": This script imports and processes the .csv transit data. It joins the mean (annual) transit time data with the minimum (monthly) transit data to generate one dataset for analysis: annual_turnover_2. Load this using the
    `source()` function in the 01_start.R script.

    "04_figures_tables.R": This is the main workhouse for figure/table production and supporting analyses. This script generates the key figures and summary statistics used in the study that then get saved in the "manuscript_figures" folder. Note that all maps were produced using Python code found in the "supporting_code"" folder. Also note that within the "manuscript_figures" folder there is an "extended_data" folder, which contains tables of the summary statistics (e.g., quartiles and sample sizes) behind figures containing box plots or depicting regression coefficients.

    "supporting_generate_data.R": This script processes supporting data used in the analysis, primarily the varying ground-based datasets of leaf water content.

    "supporting_process_land_cover.R": This takes annual MODIS land cover distributions and processes them through a multi-step filtering process so that they can be used in preprocessing of datasets in python.

  12. ECMR Solutions For Warehousing Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). ECMR Solutions For Warehousing Market Research Report 2033 [Dataset]. https://dataintelo.com/report/ecmr-solutions-for-warehousing-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    eCMR Solutions for Warehousing Market Outlook



    According to our latest research, the global eCMR solutions for warehousing market size reached USD 1.32 billion in 2024, with a robust year-on-year expansion driven by rapid digitization across supply chains. The market is expected to grow at a CAGR of 14.8% from 2025 to 2033, reaching a forecasted value of USD 4.16 billion by 2033. The primary growth factor fueling this surge is the increasing adoption of digital documentation and compliance solutions in warehousing and logistics, as businesses strive for operational efficiency, transparency, and regulatory adherence in a competitive global landscape.



    The eCMR (electronic consignment note) solutions for warehousing market is experiencing significant momentum due to the escalating need for streamlined logistics operations and the elimination of paper-based processes. As governments and regulatory bodies across the globe enforce stricter compliance mandates for transportation documentation, companies are rapidly adopting eCMR solutions to ensure error-free, real-time data exchange and to reduce administrative burdens. The integration of eCMR with existing warehouse management systems (WMS) and transportation management systems (TMS) is enhancing overall supply chain visibility and traceability, which is a critical factor for both manufacturers and logistics providers seeking to optimize their operations and reduce costs.



    Another key growth driver is the increasing prevalence of cross-border trade, which necessitates efficient and standardized documentation practices. eCMR solutions enable seamless information flow between shippers, carriers, and receivers, thereby reducing transit delays and minimizing the risk of document loss or tampering. The COVID-19 pandemic has further accelerated the shift toward digital solutions in warehousing, as organizations recognize the importance of contactless, automated processes to safeguard employee health and ensure business continuity. This has prompted significant investments in digital infrastructure and cloud-based eCMR platforms, further catalyzing market growth.



    Additionally, the proliferation of advanced technologies such as artificial intelligence, IoT, and blockchain within the warehousing sector is amplifying the value proposition of eCMR solutions. These technologies facilitate real-time monitoring, predictive analytics, and enhanced security for electronic documents, making eCMR a cornerstone of next-generation warehouse management. As sustainability initiatives gain traction, reducing paper consumption through digital documentation aligns with corporate social responsibility goals, thereby attracting environmentally conscious enterprises to adopt eCMR solutions. The convergence of these factors is expected to sustain the market’s upward trajectory over the forecast period.



    From a regional perspective, Europe currently leads the eCMR solutions for warehousing market, owing to early regulatory adoption and strong government support for digital freight documentation. North America is rapidly catching up, driven by the presence of major logistics players and increasing investments in digital transformation initiatives. The Asia Pacific region is poised for the fastest growth, with emerging economies like China and India embracing automation to support their expanding e-commerce and manufacturing sectors. Latin America and the Middle East & Africa are also witnessing gradual adoption, fueled by modernization efforts and growing awareness of the benefits of eCMR solutions.



    Solution Type Analysis



    The solution type segment in the eCMR solutions for warehousing market comprises software, hardware, and services, each playing a pivotal role in the digital transformation of logistics documentation. Software solutions form the backbone of eCMR adoption, providing the platforms and applications necessary for creating, managing, and sharing electronic consignment notes. These software platforms are designed for seamless integration with existing WMS and TMS, offering features such as real-time tracking, automated compliance checks, and secure data storage. The growing demand for scalable, user-friendly, and interoperable software solutions is propelling this segment, as businesses seek to enhance operational efficiency and minimize errors associated with manual documentation.



    Hardware solutions encompass devices such as barcode scanners, RFID readers, mobile terminals, and IoT sensors that enable t

  13. Guidelines for the standardized collection of predictor variables in studies...

    • search.dataone.org
    • borealisdata.ca
    Updated Mar 16, 2024
    Cite
    Li, Edmond; Mawji, Alishah; Akech, Samuel; Chandna, Arjun; Kissoon, Niranjan; Kortz, Teresa; Lubell, Yoel; Turner, Paul; Wiens, Matthew; Ansermino, Mark (2024). Guidelines for the standardized collection of predictor variables in studies for pediatric sepsis (guidelines)~Pediatric Sepsis Predictors Standardization (PS2) Working Group [Dataset]. http://doi.org/10.5683/SP2/02LVVT
    Explore at:
    Dataset updated
    Mar 16, 2024
    Dataset provided by
    Borealis
    Authors
    Li, Edmond; Mawji, Alishah; Akech, Samuel; Chandna, Arjun; Kissoon, Niranjan; Kortz, Teresa; Lubell, Yoel; Turner, Paul; Wiens, Matthew; Ansermino, Mark
    Time period covered
    Jan 9, 2020
    Description

    These guidelines aim to maximize the efficiency of data-sharing collaborations in pediatric sepsis research by facilitating the standardization of data collection for predictors captured in future studies. Feedback may be submitted via a secure survey here: https://rc.bcchr.ca/redcap/surveys/?s=T849WRXYT8. NOTE for restricted files: If you are not yet a CoLab member, please complete our membership application survey to gain access to restricted files within 2 business days. Some files may remain restricted to CoLab members. These files are deemed more sensitive by the file owner and are meant to be shared on a case-by-case basis. Please contact the CoLab coordinator on this page under "collaborate with the pediatric sepsis colab."

  14. Warehouse Inventory Movement Log

    • gomask.ai
    csv, json
    Updated Nov 21, 2025
    Cite
    GoMask.ai (2025). Warehouse Inventory Movement Log [Dataset]. https://gomask.ai/marketplace/datasets/warehouse-inventory-movement-log
    Explore at:
    Available download formats: json, csv (10 MB)
    Dataset updated
    Nov 21, 2025
    Dataset provided by
    GoMask.ai
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    2024 - 2025
    Area covered
    Global
    Variables measured
    sku, uom, item_id, comments, quantity, item_name, timestamp, movement_id, operator_id, batch_number, and 8 more
    Description

    This dataset provides a granular log of all inventory movements within a warehouse, including item details, movement types, source and destination locations, operator information, and contextual notes. It enables precise tracking of stock flow, supports inventory accuracy analysis, and facilitates workflow optimization for warehouse operations. The dataset is ideal for supply chain analytics, process improvement, and operational auditing.
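
    A tiny aggregation sketch using only a few of the listed fields (item_id, quantity, timestamp); the CSV file name is a placeholder for whichever export you download.

      # Placeholder file name; uses only fields listed under "Variables measured".
      import pandas as pd

      log = pd.read_csv("warehouse_inventory_movement_log.csv", parse_dates=["timestamp"])
      daily = (
          log.groupby(["item_id", log["timestamp"].dt.date])["quantity"]
             .sum()
             .rename("total_quantity_moved")
      )
      print(daily.head())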

  15. eCMR Solutions for Warehousing Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). eCMR Solutions for Warehousing Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/ecmr-solutions-for-warehousing-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    eCMR Solutions for Warehousing Market Outlook



    According to our latest research, the global eCMR Solutions for Warehousing market size reached USD 2.37 billion in 2024, reflecting robust adoption across multiple industry verticals. The market is projected to expand at a CAGR of 12.4% from 2025 to 2033, reaching an estimated value of USD 6.67 billion by the end of the forecast period. This remarkable growth trajectory is primarily driven by increasing regulatory mandates for electronic consignment note adoption, the rapid digitalization of supply chain processes, and the growing need for real-time visibility and compliance in warehousing operations.




    The accelerating shift towards digital documentation within logistics and warehousing ecosystems is a key growth driver for the eCMR Solutions for Warehousing market. Traditional paper-based consignment notes are increasingly being replaced by electronic CMR (eCMR) solutions, which streamline processes, reduce human error, and enhance data accuracy. As global supply chains become more complex, the demand for seamless, secure, and compliant data exchange between shippers, carriers, and consignees has surged. This has prompted both regulatory bodies and industry stakeholders to advocate for widespread eCMR adoption, fueling market expansion. Furthermore, the integration of eCMR solutions with existing warehouse management systems (WMS) and transportation management systems (TMS) is enabling organizations to achieve end-to-end visibility and operational efficiency, further boosting market demand.




    Another significant factor contributing to market growth is the increasing emphasis on sustainability and cost reduction across the logistics and warehousing sector. eCMR solutions help organizations minimize paper usage, lower administrative costs, and reduce the environmental footprint of their operations. By digitizing consignment documentation, companies can expedite workflows, enhance traceability, and support green logistics initiatives, which have become central to corporate social responsibility (CSR) agendas. Additionally, the ability of eCMR platforms to provide real-time updates and analytics empowers organizations to make data-driven decisions, optimize route planning, and improve customer service levels, thereby driving competitive advantage and long-term growth.




    The rapid advancement of enabling technologies such as cloud computing, IoT, and artificial intelligence is also fostering innovation within the eCMR Solutions for Warehousing market. Cloud-based deployment models offer scalability, flexibility, and remote accessibility, making them particularly attractive to enterprises with distributed operations. IoT integration allows for real-time tracking and monitoring of consignments, while AI-driven analytics deliver actionable insights for process optimization. As these technologies become more accessible and affordable, their convergence with eCMR platforms is expected to unlock new opportunities for value creation, further accelerating market growth.




    Regionally, Europe remains at the forefront of eCMR adoption, driven by stringent regulatory frameworks and proactive government initiatives. North America is witnessing rapid uptake due to the digitization of logistics networks and the presence of major industry players, while Asia Pacific is emerging as a high-growth market, propelled by booming e-commerce, expanding manufacturing bases, and ongoing supply chain modernization. Latin America and the Middle East & Africa are also showing promising potential, albeit at a relatively nascent stage, as awareness and infrastructure development continue to progress.





    Component Analysis



    The Component segment of the eCMR Solutions for Warehousing market is categorized into software, hardware, and services, each playing a critical role in the overall ecosystem. Software components, including eCMR management platforms and integration modules, represent the largest share of the market. These solutions enab

  16. Practice-Based Evidence: Profiling the Safety of Cilostazol by Text-Mining...

    • plos.figshare.com
    xlsx
    Updated Jun 3, 2023
    Cite
    Nicholas J. Leeper; Anna Bauer-Mehren; Srinivasan V. Iyer; Paea LePendu; Cliff Olson; Nigam H. Shah (2023). Practice-Based Evidence: Profiling the Safety of Cilostazol by Text-Mining of Clinical Notes [Dataset]. http://doi.org/10.1371/journal.pone.0063499
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Nicholas J. Leeper; Anna Bauer-Mehren; Srinivasan V. Iyer; Paea LePendu; Cliff Olson; Nigam H. Shah
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Peripheral arterial disease (PAD) is a growing problem with few available therapies. Cilostazol is the only FDA-approved medication with a class I indication for intermittent claudication, but carries a black box warning due to concerns for increased cardiovascular mortality. To assess the validity of this black box warning, we employed a novel text-analytics pipeline to quantify the adverse events associated with Cilostazol use in a clinical setting, including patients with congestive heart failure (CHF).

    Methods and Results: We analyzed the electronic medical records of 1.8 million subjects from the Stanford clinical data warehouse spanning 18 years using a novel text-mining/statistical analytics pipeline. We identified 232 PAD patients taking Cilostazol and created a control group of 1,160 PAD patients not taking this drug using 1:5 propensity-score matching. Over a mean follow-up of 4.2 years, we observed no association between Cilostazol use and any major adverse cardiovascular event, including stroke (OR = 1.13, CI [0.82, 1.55]), myocardial infarction (OR = 1.00, CI [0.71, 1.39]), or death (OR = 0.86, CI [0.63, 1.18]). Cilostazol was not associated with an increase in any arrhythmic complication. We also identified a subset of CHF patients who were prescribed Cilostazol despite its black box warning, and found that it did not increase mortality in this high-risk group of patients.

    Conclusions: This proof-of-principle study shows the potential of text analytics to mine clinical data warehouses to uncover ‘natural experiments’ such as the use of Cilostazol in CHF patients. We envision this method will have broad applications for examining difficult-to-test clinical hypotheses and to aid in post-marketing drug safety surveillance. Moreover, our observations argue for a prospective study to examine the validity of a drug safety warning that may be unnecessarily limiting the use of an efficacious therapy.

  17. CDPH Environmental Storage Tanks

    • catalog.data.gov
    • data.cityofchicago.org
    Updated Apr 5, 2025
    Cite
    data.cityofchicago.org (2025). CDPH Environmental Storage Tanks [Dataset]. https://catalog.data.gov/dataset/cdph-storage-tanks
    Explore at:
    Dataset updated
    Apr 5, 2025
    Dataset provided by
    data.cityofchicago.org
    Description

    Note: This dataset is no longer being updated. As of 2020, Aboveground Storage Tank data is captured in the CDPH Environmental Permits dataset and Underground Storage Tank data is available from the Office of the Illinois State Fire Marshal. This dataset contains Aboveground Storage Tank (AST) and Underground Storage Tank (UST) information from the Department of Public Health’s (CDPH) Tank Asset Database. The Tank Asset Database contains tank information from CDPH AST and UST permit applications as well as UST records imported from the historic Department of Environment (DOE) database. This dataset also includes AST records from the historic DOE and pre-1992 UST records from the Building Department.

  18. Supply Chain Data

    • kaggle.com
    zip
    Updated Feb 23, 2022
    Cite
    Laurin Brechter (2022). Supply Chain Data [Dataset]. https://www.kaggle.com/datasets/laurinbrechter/supply-chain-data
    Explore at:
    Available download formats: zip (717681 bytes)
    Dataset updated
    Feb 23, 2022
    Authors
    Laurin Brechter
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    There are 7 tables in total. The task is to assign routes to the orders in the "Order List" table, given the restrictions (e.g., weight restrictions). The order list already contains historical data on how the orders were assigned in the past.

    Please refer to https://brunel.figshare.com/articles/dataset/Supply_Chain_Logistics_Problem_Dataset/7558679 for further clarification.

    The other 6 tables describe the restrictions imposed on the system:
    • Some customers can only be serviced by a specific plant.
    • Plants and ports have to be physically connected.
    • Plants can only handle specific items.

    Notes:

    • The terms "Warehouse" and "Plant" are used interchangeably, essentially a warehouse is a plant.
    • This is a (deterministic) optimization problem, there is only one order date since we are only looking at orders from one specific day and trying to assign them to routes/factories.

    • We have to ship all the orders to PORT09

    • The goal is to schedule routes while minimizing freight and warehousing costs (a toy optimization sketch follows after these notes).

    • I am also just working on understanding the Dataset, maybe we can have a discussion in the comment section for clarifications.
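
    A toy sketch of how the assignment could be framed as an integer program, assuming the PuLP library and placeholder orders, routes, costs, and weight limits (the real tables define the actual values and additional restrictions such as plant-customer and plant-port compatibility):

      # Toy route-assignment model with PuLP (pip install pulp); all numbers are placeholders.
      import pulp

      orders = {"ORD1": 300, "ORD2": 450}                # order -> weight
      routes = {"R1": 600, "R2": 1000}                   # route -> weight capacity
      cost = {("ORD1", "R1"): 10, ("ORD1", "R2"): 14,    # (order, route) -> freight cost
              ("ORD2", "R1"): 12, ("ORD2", "R2"): 9}

      prob = pulp.LpProblem("route_assignment", pulp.LpMinimize)
      x = pulp.LpVariable.dicts("assign", list(cost.keys()), cat="Binary")

      prob += pulp.lpSum(cost[k] * x[k] for k in cost)   # minimize total freight cost

      for o in orders:                                   # each order goes on exactly one route
          prob += pulp.lpSum(x[(o, r)] for r in routes) == 1
      for r in routes:                                   # respect each route's weight restriction
          prob += pulp.lpSum(orders[o] * x[(o, r)] for o in orders) <= routes[r]

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print({k: int(x[k].value()) for k in cost})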

    Acknowledgements:

    This dataset was taken from the Brunel University of London Website

  19. Germany DE: Time Required to Build a Warehouse

    • ceicdata.com
    Cite
    CEICdata.com, Germany DE: Time Required to Build a Warehouse [Dataset]. https://www.ceicdata.com/en/germany/company-statistics/de-time-required-to-build-a-warehouse
    Explore at:
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 1, 2008 - Dec 1, 2019
    Area covered
    Germany
    Variables measured
    Enterprises Statistics
    Description

    Germany DE: Time Required to Build a Warehouse data was reported at 126.000 Day in 2019. This stayed constant from the previous number of 126.000 Day for 2018. Germany DE: Time Required to Build a Warehouse data is updated yearly, averaging 126.000 Day from Dec 2005 (Median) to 2019, with 15 observations. The data reached an all-time high of 158.000 Day in 2005 and a record low of 126.000 Day in 2019. Germany DE: Time Required to Build a Warehouse data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Germany – Table DE.World Bank.WDI: Company Statistics. Time required to build a warehouse is the number of calendar days needed to complete the required procedures for building a warehouse. If a procedure can be speeded up at additional cost, the fastest procedure, independent of cost, is chosen.;World Bank, Doing Business project (http://www.doingbusiness.org/). NOTE: Doing Business has been discontinued as of 9/16/2021. For more information: https://bit.ly/3CLCbme;Unweighted average;Data are presented for the survey year instead of publication year.

  20. Cameroon CM: Time Required to Build a Warehouse

    • ceicdata.com
    Cite
    CEICdata.com, Cameroon CM: Time Required to Build a Warehouse [Dataset]. https://www.ceicdata.com/en/cameroon/company-statistics/cm-time-required-to-build-a-warehouse
    Explore at:
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 1, 2008 - Dec 1, 2019
    Area covered
    Cameroon
    Variables measured
    Enterprises Statistics
    Description

    Cameroon CM: Time Required to Build a Warehouse data was reported at 126.000 Day in 2019. This records a decrease from the previous number of 135.000 Day for 2018. Cameroon CM: Time Required to Build a Warehouse data is updated yearly, averaging 184.000 Day from Dec 2005 (Median) to 2019, with 15 observations. The data reached an all-time high of 227.000 Day in 2009 and a record low of 126.000 Day in 2019. Cameroon CM: Time Required to Build a Warehouse data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Cameroon – Table CM.World Bank.WDI: Company Statistics. Time required to build a warehouse is the number of calendar days needed to complete the required procedures for building a warehouse. If a procedure can be speeded up at additional cost, the fastest procedure, independent of cost, is chosen.;World Bank, Doing Business project (http://www.doingbusiness.org/). NOTE: Doing Business has been discontinued as of 9/16/2021. For more information: https://bit.ly/3CLCbme;Unweighted average;Data are presented for the survey year instead of publication year.
