73 datasets found
  1. Temperature and precipitation gridded data for global and regional domains...

    • cds.climate.copernicus.eu
    netcdf
    Updated Apr 9, 2025
    Cite
    ECMWF (2025). Temperature and precipitation gridded data for global and regional domains derived from in-situ and satellite observations [Dataset]. http://doi.org/10.24381/cds.11dedf0c
    Explore at:
    netcdf (available download formats)
    Dataset updated
    Apr 9, 2025
    Dataset provided by
    European Centre for Medium-Range Weather Forecasts (http://ecmwf.int/)
    Authors
    ECMWF
    License

    https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/insitu-gridded-observations-global-and-regional/insitu-gridded-observations-global-and-regional_15437b363f02bf5e6f41fc2995e3d19a590eb4daff5a7ce67d1ef6c269d81d68.pdf

    Time period covered
    Jan 1, 1750 - Jan 1, 2021
    Description

    This dataset provides high-resolution gridded temperature and precipitation observations from a selection of sources. Additionally, the dataset contains daily global-average near-surface temperature anomalies. All fields are defined at either daily or monthly frequency. The datasets are regularly updated to incorporate recent observations. The included data sources are commonly known as GISTEMP, Berkeley Earth, CPC and CPC-CONUS, CHIRPS, IMERG, CMORPH, GPCC and CRU; the abbreviations are explained below. These data have been constructed from high-quality analyses of meteorological station series and rain gauges around the world, and as such provide a reliable source for the analysis of weather extremes and climate trends. The regular update cycle makes these data suitable for rapid study of recent phenomena or events.

    The NASA Goddard Institute for Space Studies temperature analysis dataset (GISTEMP-v4) combines station data of the Global Historical Climatology Network (GHCN) with the Extended Reconstructed Sea Surface Temperature (ERSST) to construct a global temperature change estimate. The Berkeley Earth Foundation dataset (BERKEARTH) merges temperature records from 16 archives into a single coherent dataset. The NOAA Climate Prediction Center datasets (CPC and CPC-CONUS) define a suite of unified precipitation products with consistent quantity and improved quality by combining all information sources available at CPC and by taking advantage of the optimal interpolation (OI) objective analysis technique. The Climate Hazards Group InfraRed Precipitation with Station dataset (CHIRPS-v2) incorporates 0.05° resolution satellite imagery and in-situ station data to create gridded rainfall time series over the African continent, suitable for trend analysis and seasonal drought monitoring.

    The Integrated Multi-satellitE Retrievals dataset (IMERG) by NASA uses an algorithm to intercalibrate, merge, and interpolate "all" satellite microwave precipitation estimates, together with microwave-calibrated infrared (IR) satellite estimates, precipitation gauge analyses, and potentially other precipitation estimators, over the entire globe at fine time and space scales for the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM) satellite-based precipitation products. The Climate Prediction Center morphing technique dataset (CMORPH) by NOAA has been created using precipitation estimates derived exclusively from low-orbiter satellite microwave observations; geostationary IR data are then used to transport the microwave-derived precipitation features during periods when microwave data are not available at a location. The Global Precipitation Climatology Centre dataset (GPCC) is a centennial product of monthly global land-surface precipitation based on the ~80,000 stations worldwide that feature record durations of 10 years or longer. The data coverage per month varies from ~6,000 stations (before 1900) to more than 50,000. The Climatic Research Unit dataset (CRU v4) features an improved interpolation process that delivers full traceability back to station measurements. The station measurements of temperature and precipitation are public, as are the gridded dataset and national averages for each country. Cross-validation was performed at the station level, and the results have been published as a guide to the accuracy of the interpolation.

    This catalogue entry complements the E-OBS record in many respects, as it intends to provide high-resolution gridded meteorological observations at a global rather than continental scale. These data may be suitable as a baseline for model comparisons or extreme event analysis with the CMIP5 and CMIP6 datasets.
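    A typical first step with products like these is computing anomalies against a calendar-month climatology. The sketch below uses a synthetic numpy array as a stand-in for one of the gridded monthly fields (the real CDS files are netCDF and would normally be opened with xarray or netCDF4); all numbers here are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for a monthly gridded temperature product,
# shaped (time, lat, lon). Not real data from this catalogue entry.
rng = np.random.default_rng(0)
years, nlat, nlon = 30, 4, 8
monthly = 14.0 + rng.normal(0.0, 0.5, size=(years * 12, nlat, nlon))

# Climatology: per-grid-cell mean for each calendar month.
clim = monthly.reshape(years, 12, nlat, nlon).mean(axis=0)

# Anomalies: subtract the matching calendar-month climatology.
anom = monthly.reshape(years, 12, nlat, nlon) - clim

# Spatial-mean anomaly series (a real analysis would area-weight
# each cell by cos(latitude) before averaging).
series = anom.reshape(years * 12, -1).mean(axis=1)
```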

  2. MEG-SCANS - a high quality magneto-encephalography speech dataset with...

    • openneuro.org
    Updated Jul 16, 2025
    Cite
    Till Habersetzer; Bernd T. Meyer (2025). MEG-SCANS - a high quality magneto-encephalography speech dataset with Stories, Chirps And Noisy Sentences. [Dataset]. http://doi.org/10.18112/openneuro.ds006468.v1.0.0
    Explore at:
    Dataset updated
    Jul 16, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Till Habersetzer; Bernd T. Meyer
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The MEG-SCANS (Stories, Chirps, And Noisy Sentences) dataset provides raw and MaxFiltered magnetoencephalography (MEG) recordings from 24 German-speaking participants, collected over three months. Each participant engaged in an auditory experiment, listening to approximately one hour of stimuli, including two audiobooks (approx. 20 minutes each), 120 sentences from the Oldenburger Matrix Sentence Test (OLSA) presented at varying speech intelligibility levels (20% to 95%) for Speech Reception Threshold (SRT) assessment, and short up-chirps used for MEG signal quality assessment. For each participant, the dataset comprises raw MEG data, corresponding MaxFiltered data, two empty-room MEG recordings (pre- and post-session), a structural MRI scan of the head, behavioral audiogram and SRT results from hearing screenings, and the corresponding audio stimulus material (audiobooks, envelopes, and chirp stimuli). Auxiliary channels recorded include the left audio channel (MISC001), right audio channel (MISC002), and the instructor's microphone (MISC007), all sampled at 1000 Hz. Organized according to the Brain Imaging Data Structure (BIDS), this dataset offers a robust benchmark for large-scale encoding/decoding analyses of temporally-resolved brain responses to speech. Note that sub-01 served as a pilot, so its data follows a slightly different experimental design, specifically lacking chirp stimuli and featuring different audiobooks; this variation is accounted for in the provided analysis pipelines. Comprehensive Matlab and Python code is included alongside the entire analysis pipeline [Github DOI] to replicate key data validations, ensuring transparency and reproducibility.

    References

    Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896

    Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J., Oostenveld, R., Schoffelen, J., Tadel, F., Wexler, J., Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5, 180110. https://doi.org/10.1038/sdata.2018.110
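    The chirp stimuli mentioned above are short frequency up-sweeps used for signal-quality checks. As an illustration only (the duration and frequency bounds below are assumptions, not values taken from the dataset), a linear up-chirp can be synthesized with numpy:

```python
import numpy as np

# Linear up-chirp sketch. fs matches the 1000 Hz auxiliary-channel rate
# noted above; dur, f0 and f1 are illustrative assumptions.
fs = 1000                        # sampling rate, Hz
dur, f0, f1 = 0.5, 100.0, 400.0  # seconds; start/end frequency, Hz
t = np.arange(0.0, dur, 1.0 / fs)
k = (f1 - f0) / dur              # linear sweep rate, Hz/s
stim = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))
```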

  3. Environment - Air, Noise & Water Quality Enquiry - Dataset - PSB Data...

    • datacatalogue.gov.ie
    Updated Apr 4, 2021
    Cite
    (2021). Environment - Air, Noise & Water Quality Enquiry - Dataset - PSB Data Catalogue [Dataset]. https://datacatalogue.gov.ie/dataset/environment-air-noise-and-water-quality-enquiry
    Explore at:
    Dataset updated
    Apr 4, 2021
    Description

    Data gathered through questions asked via the online "ask a question" service.

  4. Shot-point calibrated trackline navigation for chirp seismic data collected...

    • catalog.data.gov
    • datadiscoverystudio.org
    • +3 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Shot-point calibrated trackline navigation for chirp seismic data collected in Indian River Bay, Delaware, on April 13, 2010, on U.S. Geological Survey Field Activity 2010-006-FA (IR_ROUTES_CALIB.SHP, Geographic, WGS 84) [Dataset]. https://catalog.data.gov/dataset/shot-point-calibrated-trackline-navigation-for-chirp-seismic-data-collected-in-indian-rive
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Indian River Bay, Delaware
    Description

    A geophysical survey to delineate the fresh-saline groundwater interface and associated sub-bottom sedimentary structures beneath Indian River Bay, Delaware, was carried out in April 2010. This included surveying at higher spatial resolution in the vicinity of a study site at Holts Landing, where intensive onshore and offshore studies were subsequently completed. The total length of continuous resistivity profiling (CRP) survey lines was 145 kilometers (km), with 36 km of chirp seismic lines surveyed around the perimeter of the bay. Medium-resolution CRP surveying was performed using a 50-meter streamer in a bay-wide grid. Results of the surveying and data inversion showed the presence of many buried paleochannels beneath Indian River Bay that generally extended perpendicular from the shoreline in areas of modern tributaries, tidal creeks, and marshes. An especially wide and deep paleochannel system was imaged in the southeastern part of the bay near White Creek. Many paleochannels also had high-resistivity anomalies corresponding to low-salinity groundwater plumes associated with them, likely due to the presence of fine-grained estuarine mud and peats in the channel fills that act as submarine confining units. Where present, these units allow plumes of low-salinity groundwater that was recharged onshore to move beyond the shoreline, creating a complex fresh-saline groundwater interface in the subsurface. The properties of this interface are important considerations in construction of accurate coastal groundwater flow models. These models are required to help predict how nutrient-rich groundwater, recharged in agricultural watersheds such as this one, makes its way into coastal bays and impacts surface water quality and estuarine ecosystems. For more information on the survey conducted for this project, see https://cmgds.marine.usgs.gov/fan_info.php?fan=2010-006-FA.

  5. Wolfset: A High-Quality Underwater Acoustic Dataset for Algorithm...

    • figshare.com
    xlsx
    Updated Jun 9, 2025
    Cite
    Nuno Pessanha Santos; Ricardo Moura; Victor Lobo (2025). Wolfset: A High-Quality Underwater Acoustic Dataset for Algorithm Development and Analysis [Dataset]. http://doi.org/10.6084/m9.figshare.25791978.v1
    Explore at:
    xlsx (available download formats)
    Dataset updated
    Jun 9, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Nuno Pessanha Santos; Ricardo Moura; Victor Lobo
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    As data becomes increasingly available, relying on high-quality datasets for algorithm analysis and development is essential. However, data gathering can be expensive and time-consuming, so the process must be optimized to allow others to reuse data simply and accurately. The Wolfset is an acoustic dataset gathered using a Brüel & Kjær type 8104 hydrophone in an anechoic tank normally used for calibrating ships' sonars. The name Wolfset is inspired by the Seawolf submarine class, renowned for its advanced sound-source detection and classification capabilities. Using an anechoic tank, we can obtain a high-quality dataset representing acoustic sources without undesired external perturbations. Several outboard motors and an electric motor from a basic remotely controlled ship model, operated in many conditions, were used as sound sources, usually called targets. External transients and noise sources were then added to bring the dataset closer to the sounds present in real-world conditions. This dataset uses a systematic approach to provide the diversity and accuracy needed for effective algorithm development.

  6. Noise and Air Quality Monitoring API DCC - Dataset - data.smartdublin.ie

    • data.smartdublin.ie
    Updated Jun 14, 2022
    Cite
    (2022). Noise and Air Quality Monitoring API DCC - Dataset - data.smartdublin.ie [Dataset]. https://data.smartdublin.ie/dataset/sonitus
    Explore at:
    Dataset updated
    Jun 14, 2022
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This is an API that provides continuous real-time as well as historic data from the network of air quality monitoring stations that are part of the national air quality monitoring network, managed in cooperation between the Environmental Protection Agency and Dublin City Council, as well as from other stations set up by Dublin City Council to monitor local air quality conditions. The API also provides access to Dublin City Council's network of environmental sound level monitors. For more information, visit https://dublincityairandnoise.ie/

  7. Test dataset for separation of speech, traffic sounds, wind noise, and...

    • live.european-language-grid.eu
    audio wav
    Updated Apr 24, 2024
    Cite
    (2024). Test dataset for separation of speech, traffic sounds, wind noise, and general sounds [Dataset]. https://live.european-language-grid.eu/catalogue/corpus/7681
    Explore at:
    audio wav (available download formats)
    Dataset updated
    Apr 24, 2024
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The dataset was generated as part of the paper: "DCUnet-Based Multi-Model Approach for Universal Sound Separation," K. Arendt, A. Szumaczuk, B. Jasik, K. Piaskowski, P. Masztalski, M. Matuszewski, K. Nowicki, P. Zborowski. It contains various sounds from the Audio Set [1] and spoken utterances from the VCTK [2] and DNS [3] datasets.

    Contents:
    sr_8k/ mix_clean/ s1/ s2/ s3/ s4/
    sr_16k/ mix_clean/ s1/ s2/ s3/ s4/
    sr_48k/ mix_clean/ s1/ s2/ s3/ s4/

    Each directory contains 512 audio samples at a different sampling rate (sr_8k - 8 kHz, sr_16k - 16 kHz, sr_48k - 48 kHz). The audio samples for each sampling rate are different, as they were generated randomly and separately. Each directory contains 5 subdirectories:
    - mix_clean - mixed sources,
    - s1 - source #1 (general sounds),
    - s2 - source #2 (speech),
    - s3 - source #3 (traffic sounds),
    - s4 - source #4 (wind noise).
    The sound mixtures were generated by adding s2, s3, and s4 to s1 with SNR ranging from -10 to 10 dB w.r.t. s1.

    REFERENCES:
    [1] Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter, "Audio Set: An ontology and human-labeled dataset for audio events," in Proc. IEEE ICASSP 2017, New Orleans, LA, 2017.
    [2] Christophe Veaux, Junichi Yamagishi, and Kirsten MacDonald, "CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit [sound]," https://doi.org/10.7488/ds/1994, University of Edinburgh, The Centre for Speech Technology Research (CSTR), 2017.
    [3] Chandan K. A. Reddy, Ebrahim Beyrami, Harishchandra Dubey, Vishak Gopal, Roger Cheng, Ross Cutler, Sergiy Matusevych, Robert Aichner, Ashkan Aazami, Sebastian Braun, Puneet Rana, Sriram Srinivasan, and Johannes Gehrke, "The INTERSPEECH 2020 deep noise suppression challenge: Datasets, subjective speech quality and testing framework," 2020.
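    The mixing rule described above (adding the interfering sources to s1 at a target SNR relative to s1) can be sketched as follows; the signals here are random stand-ins, not files from the dataset:

```python
import numpy as np

def mix_at_snr(s1, si, snr_db):
    """Scale interferer `si` so the SNR of s1 relative to si is snr_db, then add."""
    p1 = float(np.mean(s1 ** 2))
    pi = float(np.mean(si ** 2))
    gain = np.sqrt(p1 / (pi * 10 ** (snr_db / 10.0)))
    return s1 + gain * si

rng = np.random.default_rng(1)
s1 = rng.normal(size=16000)  # stand-in for source #1 (general sounds)
s2 = rng.normal(size=16000)  # stand-in for source #2 (speech)
mix = mix_at_snr(s1, s2, snr_db=0.0)
```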

  8. SDU-Haier-ND: A Dataset for Noise Detection

    • ieee-dataport.org
    Updated Feb 24, 2022
    Cite
    Mingqiang Zhang (2022). SDU-Haier-ND: A Dataset for Noise Detection [Dataset]. https://ieee-dataport.org/documents/sdu-haier-nd-dataset-noise-detection
    Explore at:
    Dataset updated
    Feb 24, 2022
    Authors
    Mingqiang Zhang
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The dataset includes normal sound samples and abnormal sound samples.

  9. Low-dose Computed Tomography Perceptual Image Quality Assessment Grand...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 9, 2023
    Cite
    Jang-Hwan Choi (2023). Low-dose Computed Tomography Perceptual Image Quality Assessment Grand Challenge Dataset (MICCAI 2023) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7833095
    Explore at:
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    Wonkyeong Lee
    Jang-Hwan Choi
    Fabian Wagner
    Andreas Maier
    Jongduk Baek
    Adam Wang
    Scott S. Hsieh
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Image quality assessment (IQA) is extremely important in computed tomography (CT) imaging, since it facilitates the optimization of radiation dose and the development of novel algorithms in medical imaging, such as restoration. In addition, since an excessive dose of radiation can cause harmful effects in patients, generating high-quality images from low-dose images is a popular topic in the medical domain. However, even though peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are the most widely used evaluation metrics for these algorithms, their correlation with radiologists' opinion of the image quality has been proven to be insufficient in previous studies, since they calculate the image score based on numeric pixel values (1-3). In addition, the need for pristine reference images to calculate these metrics makes them ineffective in real clinical environments, considering that pristine, high-quality images are often impossible to obtain due to the risk posed to patients as a result of radiation dosage. To overcome these limitations, several studies have aimed to develop a no-reference novel image quality metric that correlates well with radiologists' opinion on image quality without any reference images (2, 4, 5).

    Nevertheless, due to the lack of open-source datasets specifically for CT IQA, experiments have been conducted with datasets that differ from each other, rendering their results incomparable and introducing difficulties in determining a standard image quality metric for CT imaging. Besides, unlike real low-dose CT images with quality degradation due to various combinations of artifacts, most studies are conducted with only one type of artifact (e.g., low-dose noise (6-11), view aliasing (12), metal artifacts (13), scattering (14-16), motion artifacts (17-22), etc.). Therefore, this challenge aims 1) to evaluate various NR-IQA models on CT images containing complex noise/artifacts, 2) to compare their correlations with scores produced by radiologists, and 3) to provide insights into determining the best-performing metric for CT imaging in terms of correlation with radiologists' perception.

    Furthermore, considering that low-dose CT images are achieved by reducing the number of projections per rotation and by reducing the X-ray current, the combination of two major artifacts, namely the sparse view streak and noise generated by these methods, is dealt with in this challenge so that the best-performing IQA model applicable in real clinical environments can be verified.
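    For context, PSNR, one of the full-reference metrics discussed above, is mathematically very simple, which is part of why it can diverge from radiologist opinion. A minimal numpy version on a toy example (SSIM is more involved and omitted here):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = float(np.mean((ref - img) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a uniform error of 0.1 on a [0, 1] image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((64, 64))
noisy = ref + 0.1
```

    Note that computing this at all requires the pristine reference `ref`, which is exactly what the no-reference (NR-IQA) models in this challenge avoid.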

    Funding Declaration:

    This research was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development (Ewha Womans University)), and by the National Research Foundation of Korea (NRF-2022R1A2C1092072), and by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 1711174276, RS-2020-KD000016).

    References:

    Lee W, Cho E, Kim W, Choi J-H. Performance evaluation of image quality metrics for perceptual assessment of low-dose computed tomography images. Medical Imaging 2022: Image Perception, Observer Performance, and Technology Assessment: SPIE, 2022.

    Lee W, Cho E, Kim W, Choi H, Beck KS, Yoon HJ, Baek J, Choi J-H. No-reference perceptual CT image quality assessment based on a self-supervised learning framework. Machine Learning: Science and Technology 2022.

    Choi D, Kim W, Lee J, Han M, Baek J, Choi J-H. Integration of 2D iteration and a 3D CNN-based model for multi-type artifact suppression in C-arm cone-beam CT. Machine Vision and Applications 2021;32(116):1-14.

    Pal D, Patel B, Wang A. SSIQA: Multi-task learning for non-reference CT image quality assessment with self-supervised noise level prediction. 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI): IEEE, 2021; p. 1962-1965.

    Mittal A, Moorthy AK, Bovik AC. No-reference image quality assessment in the spatial domain. IEEE Trans Image Process 2012;21(12):4695-4708. doi: 10.1109/TIP.2012.2214050

    Lee J-Y, Kim W, Lee Y, Lee J-Y, Ko E, Choi J-H. Unsupervised Domain Adaptation for Low-dose Computed Tomography Denoising. IEEE Access 2022.

    Jeon S-Y, Kim W, Choi J-H. MM-Net: Multi-frame and Multi-mask-based Unsupervised Deep Denoising for Low-dose Computed Tomography. IEEE Transactions on Radiation and Plasma Medical Sciences 2022.

    Kim W, Lee J, Kang M, Kim JS, Choi J-H. Wavelet subband-specific learning for low-dose computed tomography denoising. PloS one 2022;17(9):e0274308.

    Han M, Shim H, Baek J. Low-dose CT denoising via convolutional neural network with an observer loss function. Med Phys 2021;48(10):5727-5742. doi: 10.1002/mp.15161

    Kim B, Shim H, Baek J. Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021;71:102065. doi: 10.1016/j.media.2021.102065

    Wagner F, Thies M, Gu M, Huang Y, Pechmann S, Patwari M, Ploner S, Aust O, Uderhardt S, Schett G, Christiansen S, Maier A. Ultralow-parameter denoising: Trainable bilateral filter layers in computed tomography. Med Phys 2022;49(8):5107-5120. doi: 10.1002/mp.15718

    Kim B, Shim H, Baek J. A streak artifact reduction algorithm in sparse-view CT using a self-supervised neural representation. Med Phys 2022. doi: 10.1002/mp.15885

    Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022;49(9):6253-6277. doi: 10.1002/mp.15884

    Bier B, Berger M, Maier A, Kachelrieß M, Ritschl L, Müller K, Choi JH, Fahrig R. Scatter correction using a primary modulator on a clinical angiography C-arm CT system. Med Phys 2017;44(9):e125-e137.

    Maul N, Roser P, Birkhold A, Kowarschik M, Zhong X, Strobel N, Maier A. Learning-based occupational x-ray scatter estimation. Phys Med Biol 2022;67(7). doi: 10.1088/1361-6560/ac58dc

    Roser P, Birkhold A, Preuhs A, Syben C, Felsner L, Hoppe E, Strobel N, Kowarschik M, Fahrig R, Maier A. X-Ray Scatter Estimation Using Deep Splines. IEEE Trans Med Imaging 2021;40(9):2272-2283. doi: 10.1109/TMI.2021.3074712

    Maier J, Nitschke M, Choi JH, Gold G, Fahrig R, Eskofier BM, Maier A. Rigid and Non-Rigid Motion Compensation in Weight-Bearing CBCT of the Knee Using Simulated Inertial Measurements. IEEE Trans Biomed Eng 2022;69(5):1608-1619. doi: 10.1109/TBME.2021.3123673

    Choi JH, Maier A, Keil A, Pal S, McWalter EJ, Beaupré GS, Gold GE, Fahrig R. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. II. Experiment. Med Phys 2014;41(6 Part 1):061902.

    Choi JH, Fahrig R, Keil A, Besier TF, Pal S, McWalter EJ, Beaupré GS, Maier A. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization. Med Phys 2013;40(9):091905.

    Berger M, Muller K, Aichert A, Unberath M, Thies J, Choi JH, Fahrig R, Maier A. Marker-free motion correction in weight-bearing cone-beam CT of the knee joint. Med Phys 2016;43(3):1235-1248. doi: 10.1118/1.4941012

    Ko Y, Moon S, Baek J, Shim H. Rigid and non-rigid motion artifact reduction in X-ray CT using attention module. Med Image Anal 2021;67:101883. doi: 10.1016/j.media.2020.101883

    Preuhs A, Manhart M, Roser P, Hoppe E, Huang Y, Psychogios M, Kowarschik M, Maier A. Appearance Learning for Image-Based Motion Estimation in Tomography. IEEE Trans Med Imaging 2020;39(11):3667-3678. doi: 10.1109/TMI.2020.3002695

  10. Noise Round 4 Industry Agglomeration (Lden) - Dataset - data.gov.ie

    • data.gov.ie
    Updated Apr 18, 2024
    + more versions
    Cite
    data.gov.ie (2024). Noise Round 4 Industry Agglomeration (Lden) - Dataset - data.gov.ie [Dataset]. https://data.gov.ie/dataset/noise-round-4-industry-agglomeration-lden
    Explore at:
    Dataset updated
    Apr 18, 2024
    Dataset provided by
    data.gov.ie
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This is a polygon dataset of the strategic noise mapping of agglomeration industry for Round 4 (2022), representing the situation during 2021, in the form of noise contours for the Lden (day-evening-night) period. The dB value represents the annual average Lden indicator value in decibels over 24 hours. The values are calculated at a height of 4.0 m above local terrain, not measured, and should be treated with caution when looking at specific locations. The strategic noise mapping of industry was undertaken by Noise Consultants Limited inside the three noise agglomerations, under contract to the agglomeration local authorities. The outputs of the Round 4 noise mapping exercise were generated using a new common noise assessment method for Europe (CNOSSOS-EU), as set out in the revised Annex II of Directive 2002/49/EC; they are not directly comparable to any strategic noise maps previously generated under Rounds 1 to 3, as the revised methods calculate noise emissions, propagation, and residential population exposure differently from the methods used in previous rounds. The noise maps are the product of assimilating a collection of digital datasets, and over the last 15 years there have been significant ongoing improvements to the quality of the digital datasets describing the natural and built environment in Ireland; the Round 4 strategic noise mapping therefore includes changes to the model input datasets compared to previous rounds, particularly related to the industrial areas modelled, the terrain model, building heights, and ground cover. The strategic noise maps should not be relied upon in the context of planning applications for noise-sensitive developments in the vicinity of the mapped sources.
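    The Lden indicator combines day, evening, and night levels with penalties, per Annex I of Directive 2002/49/EC. The sketch below implements that standard definition (it is the generic formula, not something specific to this dataset):

```python
import math

def lden(l_day, l_evening, l_night):
    """L_den per Directive 2002/49/EC Annex I: 12 h day, 4 h evening
    (+5 dB penalty), 8 h night (+10 dB penalty), energy-averaged over 24 h."""
    return 10.0 * math.log10(
        (12 * 10 ** (l_day / 10.0)
         + 4 * 10 ** ((l_evening + 5) / 10.0)
         + 8 * 10 ** ((l_night + 10) / 10.0)) / 24.0
    )

# Equal 60 dB levels in all three periods give Lden of about 66.4 dB,
# reflecting the evening and night penalties.
print(round(lden(60, 60, 60), 1))
```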

  11. Radio Frequency Noise Dataset – 20 Hours Indoor Microphone Audio

    • nexdata.ai
    Updated Dec 20, 2023
    Cite
    Nexdata (2023). Radio Frequency Noise Dataset – 20 Hours Indoor Microphone Audio [Dataset]. https://www.nexdata.ai/datasets/speechrecog/34
    Explore at:
    Dataset updated
    Dec 20, 2023
    Dataset authored and provided by
    Nexdata
    Variables measured
    Format, Content category, Recording device, Recording condition
    Description

    This dataset contains 20 hours of radio frequency noise audio recorded via high-quality microphones in 66 different rooms, with 2–4 recording points per room and multiple recording angles per point. The setup simulates real-world RF interference and ambient indoor noise scenarios, supporting tasks like sound source localization, acoustic modeling, and noise classification. The dataset has been validated by leading AI firms and complies with all major privacy regulations, including GDPR, CCPA, and PIPL.

  12. Music Instrument Sounds for Classification

    • kaggle.com
    Updated Aug 15, 2024
    Cite
    Abdulvahap (2024). Music Instrument Sounds for Classification [Dataset]. https://www.kaggle.com/datasets/abdulvahap/music-instrunment-sounds-for-classification
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Abdulvahap
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Data Description

    This dataset comprises high-quality 3-second audio clips of various musical instruments, meticulously curated to facilitate research and development in audio processing, machine learning, deep learning and music analysis. Each audio file captures the essence of a specific instrument, ensuring a clear and distinct sound that is ideal for training and testing models in tasks such as instrument recognition, sound classification, and audio synthesis.

    • Audio Length: All audio clips are uniformly trimmed to 3 seconds, providing consistency across the dataset.
    • Silent Files: Any audio clips that contained silence have been removed to maintain the quality and relevance of the dataset.
    • Instrument Diversity: The dataset includes a wide range of musical instruments: Accordion, Acoustic Guitar, Banjo, Bass Guitar, Clarinet, Cowbell, Cymbals, Dobro, Drum set, Electro Guitar, Floor Tom, Flute, Harmonica, Harmonium, Hi-Hats, Horn, Keyboard, Mandolin, Organ, Piano, Saxophone, Shakers, Tambourine, Trombone, Trumpet, Ukulele, Vibraphone and Violin.
    • Format: The audio files are provided in a standard format that is compatible with various audio processing tools and libraries.
    • Quantity: Accordion: 3581, Acoustic Guitar: 3654, Banjo: 2998, Bass Guitar: 3613, Clarinet: 634, Cowbell: 621, Cymbals: 208, Dobro: 487, Drum set: 3648, Electro Guitar: 1316, Floor Tom: 406, Flute: 3719, Harmonica: 131, Harmonium: 1314, Hi-Hats: 444, Horn: 1258, Keyboard: 2041, Mandolin: 2458, Organ: 1442, Piano: 575, Saxophone: 454, Shakers: 1357, Tambourine: 558, Trombone: 2965, Trumpet: 503, Ukulele: 790, Vibraphone: 506, Violin: 630.
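    The per-class counts above are strongly imbalanced, which matters when training a classifier on this data. A minimal sketch (plain Python, counts copied from the listing above) of how one might quantify that before choosing a sampling or weighting strategy:

```python
# Per-class clip counts as listed in the dataset description.
counts = {
    "Accordion": 3581, "Acoustic Guitar": 3654, "Banjo": 2998,
    "Bass Guitar": 3613, "Clarinet": 634, "Cowbell": 621,
    "Cymbals": 208, "Dobro": 487, "Drum set": 3648,
    "Electro Guitar": 1316, "Floor Tom": 406, "Flute": 3719,
    "Harmonica": 131, "Harmonium": 1314, "Hi-Hats": 444,
    "Horn": 1258, "Keyboard": 2041, "Mandolin": 2458,
    "Organ": 1442, "Piano": 575, "Saxophone": 454,
    "Shakers": 1357, "Tambourine": 558, "Trombone": 2965,
    "Trumpet": 503, "Ukulele": 790, "Vibraphone": 506, "Violin": 630,
}

total = sum(counts.values())                 # 42,311 clips in total
most_common = max(counts, key=counts.get)    # Flute (3719 clips)
rarest = min(counts, key=counts.get)         # Harmonica (131 clips)
imbalance = counts[most_common] / counts[rarest]
print(f"{total} clips; imbalance {imbalance:.1f}x ({most_common} vs {rarest})")
```

    A ratio of roughly 28x between the largest and smallest classes suggests class weighting or oversampling of the rare instruments during training.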
  13. Far-Field In-Home Noise Dataset – 10 Hours from Microphone Arrays

    • nexdata.ai
    Updated Oct 16, 2023
    Cite
    Nexdata (2023). Far-Field In-Home Noise Dataset – 10 Hours from Microphone Arrays [Dataset]. https://www.nexdata.ai/datasets/speechrecog/255
    Explore at:
    Dataset updated
    Oct 16, 2023
    Dataset provided by
    Nexdata Technology Inc.
    Nexdata
    Authors
    Nexdata
    Variables measured
    Format, Content category, Recording device, Recording condition
    Description

    This 10-hour Far-Field In-Home Noise Dataset was collected using multiple types of microphone arrays installed in real family home environments. Each mic array setup offers varied spatial capture perspectives, making the dataset ideal for AI tasks such as far-field automatic speech recognition (ASR), voice enhancement, smart speaker training, and multi-microphone signal processing. All data has undergone rigorous quality validation and complies with global privacy regulations including GDPR, CCPA, and PIPL.

  14. The impact of objective and subjective measures of air quality and noise on...

    • b2find.eudat.eu
    Updated Dec 31, 2007
    Cite
    (2007). The impact of objective and subjective measures of air quality and noise on house prices: A multilevel approach for downtown Madrid [Data set] - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/b576d6a5-d0ae-5138-b858-1487a946b2c9
    Explore at:
    Dataset updated
    Dec 31, 2007
    Area covered
    Madrid
    Description

    Dataset accompanying the publication "The Impact of Objective and Subjective Measures of Air Quality and Noise on House Prices: A Multilevel Approach for Downtown Madrid" (Geographical Analysis 2013, 89-2, 127-148). Air quality and urban noise are major concerns in big cities. This paper aims to evaluate how they impact transaction prices in downtown Madrid. For that purpose, we incorporate both objective and subjective measures of air quality and noise, and we use multilevel models since our sample is hierarchically organized into three levels: 5,080 houses (level 1) in 759 census tracts (level 2) and 43 neighborhoods (level 3). Variables are available for each level: individual characteristics at the first level and various socio-economic data at the other levels. First, we combine a set of noise and air pollutants measured at a number of monitoring stations available for each census tract. Second, we apply kriging to match the monitoring station records to the census data. We also use subjective measures of air quality and noise based on a survey. Third, we estimate hedonic models in order to measure the marginal willingness to pay for air quality and reduced noise in downtown Madrid. We show that housing prices are better explained by subjective evaluation factors than by objective measurements.

  15. Data from: Chirp seismic reflection data- shotpoints, tracklines, profile...

    • catalog.data.gov
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Chirp seismic reflection data- shotpoints, tracklines, profile images, and SEG-Y traces for EdgeTech 3400 chirp data collected during USGS field activity 2022-001-FA (point and polyline shapefiles, CSV text, PNG Images, and SEGY data, GCS WGS 84) [Dataset]. https://catalog.data.gov/dataset/chirp-seismic-reflection-data-shotpoints-tracklines-profile-images-and-seg-y-traces-for-ed
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    In June 2022, the U.S. Geological Survey, in collaboration with the Massachusetts Office of Coastal Zone Management, collected high-resolution geophysical data in Nantucket Sound to understand the regional geology in the vicinity of Horseshoe Shoal. This effort is part of a long-term collaboration between the USGS and the Commonwealth of Massachusetts to map the State’s waters, support research on the Quaternary evolution of coastal Massachusetts, resolve the influence of sea-level change and sediment supply on coastal evolution, and strengthen efforts to understand the type, distribution, and quality of subtidal marine habitats. This collaboration produces high-resolution geologic data that serve the needs of research, management, and the public. Data collected as part of this mapping cooperative continue to be released in a series of USGS Open-File Reports and Data Releases: https://www.usgs.gov/centers/whcmsc/science/geologic-mapping-massachusetts-seafloor.

  16. DeLTA (Deep Learning Techniques for noise Annoyance detection) Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated Nov 11, 2022
    Cite
    Aletta, Francesco (2022). DeLTA (Deep Learning Techniques for noise Annoyance detection) Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7158056
    Explore at:
    Dataset updated
    Nov 11, 2022
    Dataset provided by
    Oberman, Tin
    Soelitsyo, Christopher
    Mitchell, Andrew
    Erfanian, Mercede
    Aletta, Francesco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Deep Learning Techniques for noise Annoyance detection (DeLTA) dataset comprises 2,980 15-second binaural audio recordings collected in urban public spaces across London, Venice, Granada, and Groningen (sourced from the International Soundscape Database). A remote listening experiment was designed and hosted on Gorilla Experiment Builder, a professional online platform for studying complex behaviours. The survey was then distributed via Prolific to a pool of pre-registered participants (N=1,221), and data were collected between July 5th and July 23rd, 2021.

    During the listening experiment, participants listened to ten 15-second-long binaural recordings of urban environments and were instructed to select all the sound sources they could identify within the recording and then to provide an annoyance rating (from 1 to 10). For the sound source recognition task, participants were provided with a list of 24 labels they could select from. To collapse these into a single set of sound sources per recording, a “consensus” approach was considered, i.e., if two or more participants identified a source as being present in a recording, this source was considered to be effectively present. This resulted in a 2890 by 24 data frame (2890 recordings, each with up to 23 possible labels present and an average annoyance rating). On average, each recording has 3.2 identified sound sources present.
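    The "consensus" rule described above (a source counts as present when two or more participants selected it) can be sketched in a few lines; the sound-source names and votes below are invented for illustration:

```python
from collections import Counter

def consensus_labels(selections, threshold=2):
    """Collapse per-participant label selections for one recording into a
    consensus set: a source is considered present if at least `threshold`
    participants selected it."""
    votes = Counter(label for participant in selections for label in participant)
    return {label for label, n in votes.items() if n >= threshold}

# Hypothetical example: three participants label the same recording.
participant_picks = [
    {"traffic", "birds"},
    {"traffic", "voices"},
    {"traffic", "birds", "sirens"},
]
print(sorted(consensus_labels(participant_picks)))  # ['birds', 'traffic']
```

    Stacking one consensus set plus the mean annoyance rating per recording, row by row, yields a recordings-by-labels frame like the 2890-by-24 one described above.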

    Due to the constraints of the online survey software, MP3 files were used for the listening experiment. Higher-quality 24- or 32-bit 48 kHz WAV files can be made available from the authors upon request. Each binaural audio recording is a two-channel MP3 file.

  17. Data from: Quality of life, perception and knowledge of dentists on noise

    • scielo.figshare.com
    xls
    Updated Jun 5, 2023
    Cite
    Sonia Regina Lazarotto Schettini; Cláudia Giglio de Oliveira Gonçalves (2023). Quality of life, perception and knowledge of dentists on noise [Dataset]. http://doi.org/10.6084/m9.figshare.5718853.v1
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    SciELO journals
    Authors
    Sonia Regina Lazarotto Schettini; Cláudia Giglio de Oliveira Gonçalves
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Purpose: to analyze dentists' perception and knowledge of occupational noise, its prevention, and its effects on their health and quality of life. Methods: a cross-sectional study carried out with 54 dentists of both genders. Two questionnaires were applied: one addressing perception and knowledge of noise and its effects, and another on quality of life (SF-36). Results: workplace noise was considered of medium intensity and a health risk. Some professionals (59.2%) reported knowing noise prevention methods, although they do not use them. The most frequently reported complaints and symptoms were irritability, difficulty understanding speech, and tinnitus. The perception of quality of life was worse among men. There was an association between pain and the perception of noise intensity. Conclusion: noise was considered, regardless of gender, harmful to health and associated with the perception of musculoskeletal pain. Symptoms and complaints caused by noise were reported to negatively impact dentists' professional activity; however, most do not adopt preventive measures.

  18. Noise Round 4 Airport National (Lnight) - Dataset - data.gov.ie

    • data.gov.ie
    Updated Oct 12, 2023
    + more versions
    Cite
    data.gov.ie (2023). Noise Round 4 Airport National (Lnight) - Dataset - data.gov.ie [Dataset]. https://data.gov.ie/dataset/noise-round-4-airport-national-lnight
    Explore at:
    Dataset updated
    Oct 12, 2023
    Dataset provided by
    data.gov.ie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a polygon dataset from the strategic noise mapping of major airports for Round 4 (2022), representing the situation during 2021, in the form of noise contours for the Lnight period. Major airports were identified under the Regulations as those exceeding the threshold of 50,000 aircraft movements per year during 2021. The dB value represents the annual average Lnight indicator value in decibels over the night-time period. The values are calculated at a height of 4.0 m above local terrain, not measured, and should be treated with caution when looking at specific locations. The strategic noise mapping of the major airports was undertaken by Dublin Airport Authority. The outputs of the Round 4 noise mapping exercise were generated using a new common noise assessment method for Europe (CNOSSOS-EU), as set out in the revised Annex II of Directive 2002/49/EC, and they are not directly comparable to any strategic noise maps generated under Rounds 1 to 3, as the revised methods calculate noise emissions, propagation, and residential population exposure differently from the methods used in previous rounds. The noise maps are the product of assimilating a collection of digital datasets, and over the last 15 years there have been significant ongoing improvements to the quality of the digital datasets describing the natural and built environment in Ireland; the Round 4 strategic noise mapping therefore includes changes to the model input datasets, compared to previous rounds, particularly the aircraft modelled and the terrain model. The strategic noise maps should not be relied upon in the context of planning applications for noise-sensitive developments in the vicinity of the mapped sources.
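    A note on working with contour values like these: annual average levels in decibels combine energetically (on the underlying sound energy), not arithmetically. A small stdlib sketch of the standard energetic mean, with hypothetical nightly levels; this illustrates the general acoustics formula, not the CNOSSOS-EU calculation itself:

```python
import math

def energetic_mean_db(levels_db):
    """Average sound levels in dB by converting to linear energy, averaging,
    and converting back: L = 10*log10(mean(10^(Li/10)))."""
    energies = [10 ** (level / 10) for level in levels_db]
    return 10 * math.log10(sum(energies) / len(energies))

nightly = [48.0, 52.0, 55.0, 47.0]           # hypothetical Lnight values, dB
print(round(energetic_mean_db(nightly), 1))  # 51.7 -- louder nights dominate
```

    The arithmetic mean of these values would be 50.5 dB, so naive averaging understates the annual indicator.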

  19. Noise and Air Quality Monitoring API DCC

    • datasalsa.com
    .txt, api
    Updated Apr 16, 2025
    Cite
    Dublin City Council (2025). Noise and Air Quality Monitoring API DCC [Dataset]. https://datasalsa.com/dataset/?catalogue=data.gov.ie&name=sonitus
    Explore at:
    .txt, apiAvailable download formats
    Dataset updated
    Apr 16, 2025
    Dataset authored and provided by
    Dublin City Council
    Time period covered
    Apr 16, 2025
    Description

    Noise and Air Quality Monitoring API DCC. Published by Dublin City Council. Available under the cc-by (CC-BY-4.0) license. This is an API that provides continuous real-time as well as historic data from the network of air quality monitoring stations that are part of the national air quality monitoring network, managed in cooperation between the Environmental Protection Agency and Dublin City Council, as well as from other stations set up by Dublin City Council to monitor local air quality conditions. The API also provides access to Dublin City Council's network of environmental sound level monitors. For more information, visit https://dublincityairandnoise.ie/

    To convert datetime to unix timestamp, you can use this converter: https://wtools.io/convert-date-time-to-unix-time...
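    The converter step can also be done directly with Python's standard library; this sketch assumes the API expects UTC seconds since the epoch (check the API documentation for the exact format it requires):

```python
from datetime import datetime, timezone

def to_unix(dt_string, fmt="%Y-%m-%d %H:%M:%S"):
    """Parse a datetime string, treat it as UTC, and return the unix
    timestamp in whole seconds."""
    dt = datetime.strptime(dt_string, fmt).replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

print(to_unix("2025-04-16 00:00:00"))  # 1744761600
```

    If the API expects milliseconds, multiply the result by 1000.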

  20. EUROPEAN CITIES Environmental Data | Street Noise Levels | GDPR Compliant |...

    • datarade.ai
    Updated May 5, 2025
    + more versions
    Cite
    Silencio Network (2025). EUROPEAN CITIES Environmental Data | Street Noise Levels | GDPR Compliant | 100% Traceable Consent [Dataset]. https://datarade.ai/data-products/european-cities-environmental-data-street-noise-levels-gd-silencio-network
    Explore at:
    .json, .xml, .csv, .xlsAvailable download formats
    Dataset updated
    May 5, 2025
    Dataset provided by
    Quickkonnect UG
    Authors
    Silencio Network
    Area covered
    Slovakia, Ireland, Albania, Bosnia and Herzegovina, Czech Republic, Gibraltar, Estonia, Norway, France, Sweden, Europe
    Description

    Silencio’s Street Noise-Level Dataset provides unmatched value to the environmental data industry, delivering highly granular noise data to researchers, developers, and governments. Built from over 35 billion datapoints collected globally via our mobile app and refined through AI-driven interpolation, this dataset offers hyper-local average noise levels (dBA) covering streets, neighborhoods, and venues across the covered European cities.

    Our data helps assess the environmental quality of any location, supporting residential and commercial property valuations, site selection, and urban development. By integrating real-world noise measurements with AI-powered models, we enable real estate professionals to evaluate how noise exposure impacts property value, livability, and buyer perception — factors often overlooked by traditional market analyses.

    Silencio also operates the largest global database of noise complaints, providing additional context for understanding neighborhood soundscapes from both objective measurements and subjective community feedback.

    We offer on-demand visual delivery for mapped cities, regions, or even specific streets and districts, allowing clients to access exactly the data they need. Data is available both as historical and up-to-date records, ready to be integrated into valuation models, investment reports, and location intelligence platforms. Delivery options include CSV exports, S3 buckets, PDF, PNG, JPEG, and we are currently developing a full-featured API, with flexibility to adapt to client needs. We are open to discussion for API early access, custom projects, or unique delivery formats.

    Fully anonymized and fully compliant, Silencio’s data ensures ethical sourcing while providing real estate professionals with actionable insights for smarter, more transparent valuations.


Temperature and precipitation gridded data for global and regional domains derived from in-situ and satellite observations

Explore at:
16 scholarly articles cite this dataset (View in Google Scholar)
Description

The Integrated Multi-satellitE Retrievals dataset (IMERG) by NASA uses an algorithm to intercalibrate, merge, and interpolate "all" satellite microwave precipitation estimates, together with microwave-calibrated infrared (IR) satellite estimates, precipitation gauge analyses, and potentially other precipitation estimators over the entire globe at fine time and space scales, for the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM) satellite-based precipitation products. The Climate Prediction Center morphing technique dataset (CMORPH) by NOAA has been created using precipitation estimates derived exclusively from low-orbiter satellite microwave observations; geostationary IR data are then used to transport the microwave-derived precipitation features during periods when microwave data are not available at a location. The Global Precipitation Climatology Centre dataset (GPCC) is a centennial product of monthly global land-surface precipitation based on the ~80,000 stations worldwide that feature record durations of 10 years or longer. The data coverage per month varies from ~6,000 stations (before 1900) to more than 50,000. The Climatic Research Unit dataset (CRU v4) features an improved interpolation process, which delivers full traceability back to station measurements. The station measurements of temperature and precipitation are public, as are the gridded dataset and national averages for each country. Cross-validation was performed at station level, and the results have been published as a guide to the accuracy of the interpolation. This catalogue entry complements the E-OBS record in many aspects, as it intends to provide high-resolution gridded meteorological observations at a global rather than continental scale. These data may be suitable as a baseline for model comparisons or extreme event analysis in the CMIP5 and CMIP6 datasets.
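As a small illustration of the anomaly products mentioned above: an anomaly is simply an observation minus the mean of a reference climatology (GISTEMP, for instance, reports anomalies relative to a 1951–1980 baseline). The temperatures below are invented for illustration:

```python
def anomalies(series, baseline):
    """Anomaly = observation minus the mean of the reference (baseline) period."""
    base_mean = sum(baseline) / len(baseline)
    return [round(x - base_mean, 2) for x in series]

# Hypothetical near-surface temperatures in degrees C.
baseline = [14.0, 14.2, 13.8, 14.0]   # reference climatology; mean = 14.0
recent = [14.5, 14.9, 15.1]
print(anomalies(recent, baseline))    # [0.5, 0.9, 1.1]
```

Because the baseline is subtracted out, anomalies from stations with different absolute climates can be compared and averaged into regional or global series.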
