34 datasets found
  1. Data from: DSCOVR EPIC Level 4 Tropospheric Ozone

    • data.nasa.gov
    • s.cnmilf.com
    • +2 more
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). DSCOVR EPIC Level 4 Tropospheric Ozone [Dataset]. https://data.nasa.gov/dataset/dscovr-epic-level-4-tropospheric-ozone-a1ed2
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    EPIC Tropospheric Ozone Data Product

    The Earth Polychromatic Imaging Camera (EPIC) on the Deep Space Climate Observatory (DSCOVR) spacecraft provides measurements of Earth-reflected radiances from the entire sunlit portion of the Earth. The measurements from four EPIC UV (ultraviolet) channels reconstruct global distributions of total ozone. The tropospheric ozone columns (TCO) are then derived by subtracting independently measured stratospheric ozone columns from the EPIC total ozone. TCO data product files report gridded synoptic maps of TCO measured over the sunlit portion of the Earth disk on a 1-2 hour basis. Sampling times for these hourly TCO data files are the same as for the EPIC L2 total ozone product. Version 1.0 of the TCO product is based on Version 3 of the EPIC L1 product and the Version 3 Total Ozone Column Product. The stratospheric columns were derived from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) ozone fields (Gelaro et al., 2017). In contrast to the EPIC total ozone maps, which are reported at a high spatial resolution of 18 × 18 km² near the center of the image, the TCO maps are spatially averaged over several EPIC pixels and written on a regular spatial grid (1° latitude × 1° longitude). Kramarova et al. (2021) describe the EPIC TCO product and its evaluation against independent sonde and satellite measurements. Table 1 lists all of the variables included in the TCO product files. Ozone arrays in the product files are integrated vertical columns in Dobson Units (DU; 1 DU = 2.69×10^20 molecules m^-2).

    Filename Convention

    The TCO product files are formatted as HDF5 and represent a Level-4 (L4) product. The filenames follow the naming convention "DSCOVR_EPIC_L4_TrO3_01_YYYYMMDDHHMMSS_03.h5", where "TrO3" means tropospheric column ozone, "01" means that this is version 01 of this product, "YYYYMMDDHHMMSS" is the UTC measurement time with "YYYY" for year (2015-present), "MM" for month (01-12), "DD" for day of the month (01-31), and "HHMMSS" for hours-minutes-seconds, and "03" signifies that v3 L1b measurements were used to derive the EPIC total ozone and consequently TCO.

    Column Weighting Function Adjustment

    There are two TCO gridded arrays in each hourly data file for the user to choose from: one is denoted TroposphericColumnOzone, and the other TroposphericColumnOzoneAdjusted. The latter TCO array includes an adjustment to correct for the reduced sensitivity of the EPIC UV measurements in detecting ozone in the low troposphere/boundary layer. The adjustment depends on latitude and season and was derived using simulated tropospheric ozone from the GEOS-Replay model (Strode et al., 2020) constrained by the MERRA-2 meteorology through the replay method. Our analysis (Kramarova et al., 2021) indicated that the adjusted TCO array is more accurate and precise.

    Flagging Bad Data

    Kramarova et al. (2021) note that the preferred EPIC total ozone measurements for scientific study are those where the L2 "AlgorithmFlag" parameter equals 1, 101, or 111. In this TCO product, we have included only L2 total ozone pixels with these algorithm flag values. The TCO product files provide a gridded version of the AlgorithmFlag parameter as a comparison reference, but the user does not need it to apply data quality filtering. Another parameter in the EPIC L2 total ozone files for filtering questionable data is the "ErrorFlag." The TCO product files include a gridded version of this ErrorFlag parameter that the user should apply: only TCO-gridded pixels with an ErrorFlag value of zero should be used. TCO measurements at high satellite-look angles and/or high solar zenith angles should also be filtered out for analysis. The TCO files include gridded versions of the satellite look angle and the solar zenith angle, denoted "SatelliteLookAngle" and "SolarZenithAngle," respectively. For scientific applications, users should filter the TCO arrays and use only pixels with SatelliteLookAngle and SolarZenithAngle below 70°.

    Summary of the Derivation of the Tropospheric Column Ozone Product

    We briefly summarize the derivation of EPIC TCO, stratospheric column ozone, and tropopause pressure. An independent measure of the stratospheric column ozone is needed to derive EPIC TCO. We use MERRA-2 ozone fields (Gelaro et al., 2017) to derive stratospheric ozone columns, which are subtracted from EPIC total ozone (TOZ) to obtain TCO. The MERRA-2 data assimilation system ingests Aura OMI (Ozone Monitoring Instrument) v8.5 total ozone and MLS (Microwave Limb Sounder) v4.2 stratospheric ozone profiles to produce global synoptic maps of profile ozone from the surface to the top of the atmosphere. For our analyses, we use MERRA-2 ozone profiles reported every three hours (0, 3, 6, ..., 21 UTC) at a resolution of 0.625° longitude × 0.5° latitude. MERRA-2 ozone profiles were integrated vertically from the top of the atmosphere down to the tropopause pressure to derive maps of stratospheric column ozone. Tropopause pressure was determined from MERRA-2 re-analyses using the standard PV-θ definition (2.5 PVU and 380 K). The resulting maps of stratospheric column ozone at 3-hour intervals from MERRA-2 were then space-time collocated with EPIC footprints and subtracted from the EPIC total ozone, producing daily global maps of residual TCO sampled at the precise EPIC pixel times. These tropospheric ozone measurements were further binned to 1° latitude × 1° longitude resolution.

    References

    Gelaro, R., W. McCarty, M.J. Suárez, R. Todling, A. Molod, L. Takacs, C.A. Randles, A. Darmenov, M.G. Bosilovich, R. Reichle, K. Wargan, L. Coy, R. Cullather, C. Draper, S. Akella, V. Buchard, A. Conaty, A.M. da Silva, W. Gu, G. Kim, R. Koster, R. Lucchesi, D. Merkova, J.E. Nielsen, G. Partyka, S. Pawson, W. Putman, M. Rienecker, S.D. Schubert, M. Sienkiewicz, and B. Zhao, The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2), J. Climate, 30, 5419–5454, https://doi.org/10.1175/JCLI-D-16-0758.1, 2017.

    Kramarova, N. A., J. R. Ziemke, L.-K. Huang, J. R. Herman, K. Wargan, C. J. Seftor, G. J. Labow, and L. D. Oman, Evaluation of Version 3 total and tropospheric ozone columns from EPIC on DSCOVR for studying regional-scale ozone variations, Front. Rem. Sens., in review, 2021.

    Table 1. List of parameters and data arrays in the EPIC tropospheric ozone hourly product files. The left column lists the variable name, the second column the variable description and units, and the third column the variable data type and dimensions.

    Product Variable Name | Description and units | Data Type and Dimensions
    NadirLatitude | Nadir latitude in degrees | Real4 number
    NadirLongitude | Nadir longitude in degrees | Real4 number
    Latitude | Center latitude of grid point in degrees | Real4 array with 180 elements
    Longitude | Center longitude of grid point in degrees | Real4 array with 360 elements
    TroposphericColumnOzone | Tropospheric column ozone in Dobson Units | Real4 array with dimensions 360 × 180
    TroposphericColumnOzoneAdjusted | Tropospheric column ozone with BL adjustment in Dobson Units | Real4 array with dimensions 360 × 180
    StratosphericColumnOzone | Stratospheric column ozone in Dobson Units | Real4 array with dimensions 360 × 180
    TotalColumnOzone | Total column ozone in Dobson Units | Real4 array with dimensions 360 × 180
    Reflectivity | Reflectivity (no units) | Real4 array with dimensions 360 × 180
    RadiativeCloudFraction | Radiative cloud fraction (no units) | Real4 array with dimensions 360 × 180
    TropopausePressure | Tropopause pressure in hPa | Real4 array with dimensions 360 × 180
    CWF1 | Column weighting function for layer 1 (506.6-1013.3 hPa) | Real4 array with dimensions 360 × 180
    ErrorFlag | Error flag for TCO data | Real4 array with dimensions 360 × 180
    AlgorithmFlag | Algorithm flag for TCO data | Real4 array with dimensions 360 × 180
    SatelliteLookAngle | Satellite look angle in degrees | Real4 array with dimensions 360 × 180
    SolarZenithAngle | Solar zenith angle in degrees | Real4 array with dimensions 360 × 180
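    As a minimal sketch (not part of the product distribution), the filename convention and the recommended quality filtering can be applied in Python. The filename and the synthetic arrays below are fabricated stand-ins for a real L4 file, whose datasets would instead be read with a library such as h5py:

    ```python
    from datetime import datetime
    import numpy as np

    # Parse the UTC measurement time out of a TCO filename
    # (convention from the product description; this name is illustrative).
    fname = "DSCOVR_EPIC_L4_TrO3_01_20210615123000_03.h5"
    timestamp = datetime.strptime(fname.split("_")[5], "%Y%m%d%H%M%S")

    def filter_tco(tco, error_flag, look_angle, sza):
        """Apply the recommended quality filters: ErrorFlag == 0 and both
        viewing angles below 70 degrees; rejected pixels become NaN."""
        good = (error_flag == 0) & (look_angle < 70.0) & (sza < 70.0)
        return np.where(good, tco, np.nan)

    # Synthetic lat x lon grids stand in for the real HDF5 arrays; with h5py
    # one would read f["TroposphericColumnOzoneAdjusted"][...], f["ErrorFlag"][...],
    # f["SatelliteLookAngle"][...] and f["SolarZenithAngle"][...] instead.
    rng = np.random.default_rng(0)
    tco = rng.uniform(10.0, 60.0, (180, 360))
    error_flag = np.zeros((180, 360)); error_flag[0, 0] = 1.0   # one bad pixel
    look_angle = np.full((180, 360), 45.0); look_angle[0, 1] = 75.0  # too oblique
    sza = np.full((180, 360), 30.0)

    clean = filter_tco(tco, error_flag, look_angle, sza)
    ```

    Masking with NaN (rather than dropping pixels) keeps the regular 1° × 1° grid intact for later averaging.
    
    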

  2. Replication data for: Ohmic response in BiFeO3 domain walls by...

    • dataverse.nl
    zip
    Updated Feb 24, 2025
    Cite
    Jan Rieck; Marcel Kolster; Romar Angelo Avila; Mian Li; Guus Rijnders; Gertjan Koster; Thomas Palstra; Roeland Huijink; Beatriz Noheda (2025). Replication data for: Ohmic response in BiFeO3 domain walls by submicron-scale four-point probe resistance measurements [Dataset]. http://doi.org/10.34894/XB8T9U
    Explore at:
    zip(16105), zip(611596), zip(136600), zip(5467464), zip(2800605), zip(136528), zip(17368150), zip(245417), zip(7994289), zip(92434), zip(6878950), zip(24332), zip(201007)
    Available download formats
    Dataset updated
    Feb 24, 2025
    Dataset provided by
    DataverseNL
    Authors
    Jan Rieck; Marcel Kolster; Romar Angelo Avila; Mian Li; Guus Rijnders; Gertjan Koster; Thomas Palstra; Roeland Huijink; Beatriz Noheda
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset contains the raw data needed to reproduce the figures of the journal article "Ohmic response in BiFeO3 domain walls by submicron-scale four-point probe resistance measurements". It consists of the following data types: 1) Image data (.png and .tif). 2) Scanning probe microscopy data (.spm and .ibw), which can be opened with the freeware Gwyddion; data that has been processed with a denoising algorithm ends with .gsf. 3) Program code (.m), which can be opened with MATLAB, for the image denoising. To run the denoising code: use MATLAB to run 'AFM_Image_Denoising_GUI.m'; load data by selecting the .ibw file, entering the channel number, and clicking 'Load & Show' to display the image to process (changing 'Approx. Min Size' is not necessary, but you can do so to check other results); click the 'Denoise' button to see the denoising result; click the 'Save as .gsf' button to save it. 4) Electrical data (.xls), measured with a Keithley 4200 parameter analyzer; the files can be opened with Excel. The measurements are taken by multiple internal source-measure units (SMUs). Each file consists of several data columns such as time (s), voltage (V) or current (A) of the respective SMU and the respective error (the SMU number is noted in the header: AV is the voltage from SMU1, BI is the current from SMU2, etc.). From the current and voltage data, the 2-point and 4-point resistances can be calculated. Each .xls file is named after the figure in which the raw data was used. Where data was fitted, the current range of each fit is given in separate .txt files. 5) Finite element simulation files (.mph), which can be opened with the COMSOL software. 6) .dat files, which can be opened with any text editor. 7) .kicad_sch files, which can be opened with the freeware KiCad. The data is ordered according to the order of appearance in the manuscript.
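    As an illustration of the resistance calculation described above, here is a minimal Python sketch. The I-V sweep is fabricated (an ideal ohmic response); in practice the voltage and current columns (e.g. AV from SMU1, BI from SMU2 per the description) would be read from one of the .xls files:

    ```python
    import numpy as np

    # Synthetic I-V sweep standing in for one Keithley data file: in an
    # ohmic regime V = R * I, so the resistance is the slope of a linear fit.
    true_resistance = 1.0e6                      # 1 MOhm, illustrative value
    current = np.linspace(-1e-6, 1e-6, 21)       # sourced current (A), "BI"-style column
    voltage = true_resistance * current          # measured voltage (V), "AV"-style column

    # Least-squares slope gives the resistance in Ohm; whether it is the
    # 2-point or 4-point value depends on which SMU pair the columns came from.
    slope, offset = np.polyfit(current, voltage, 1)
    ```

    Fitting over a restricted current range, as the accompanying .txt files specify, simply means slicing `current` and `voltage` before the fit.
    
    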

  3. Measurement of kT splitting scales in W->lv events at sqrt(s)=7 TeV with the...

    • hepdata.net
    • osti.gov
    Updated Feb 11, 2013
    Cite
    (2013). Measurement of kT splitting scales in W->lv events at sqrt(s)=7 TeV with the ATLAS detector [Dataset]. http://doi.org/10.17182/hepdata.60309.v1
    Explore at:
    Dataset updated
    Feb 11, 2013
    Description

    CERN-LHC. A measurement of the splitting scales, as defined by the kT clustering algorithm, for final states containing a W boson produced in proton-proton collisions at a centre-of-mass energy of 7 TeV. The measurement uses the full 2010 data sample, with a total integrated luminosity of 36 pb^-1. The measurements are made separately for W bosons decaying into electron and muon final states. Details of the splitting scale variable d_k are described in the paper. Data on the four hardest splitting scales, d0, d1, d2 and d3, are presented together with their ratios.

  4. Factors manipulated in the simulation study and values studied in the four...

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Alexandra Rouquette; Jean-Benoit Hardouin; Alexis Vanhaesebrouck; Véronique Sébille; Joël Coste (2023). Factors manipulated in the simulation study and values studied in the four kinds of Differential Item Functioning (DIF). [Dataset]. http://doi.org/10.1371/journal.pone.0215073.t002
    Explore at:
    xls
    Available download formats
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Alexandra Rouquette; Jean-Benoit Hardouin; Alexis Vanhaesebrouck; Véronique Sébille; Joël Coste
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Factors manipulated in the simulation study and values studied in the four kinds of Differential Item Functioning (DIF).

  5. HAC Provider Level Measure Rates for Four Conditions

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    Cite
    John Snow Labs (2021). HAC Provider Level Measure Rates for Four Conditions [Dataset]. https://www.johnsnowlabs.com/marketplace/hac-provider-level-measure-rates-for-four-conditions/
    Explore at:
    csv
    Available download formats
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Area covered
    United States
    Description

    This dataset presents hospital-level measures rates of four conditions included in the Deficit Reduction Act (DRA) Hospital-Acquired Condition (HAC) payment provision – foreign object retained after surgery, blood incompatibility, air embolism, and falls and trauma – for Medicare fee-for-service discharges.

  6. Data from: Measurement Scales of Reactions to the Assessment of Graduate...

    • scielo.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Kelly Rocha de Queiroz; Amalia Raquel Pérez-Nebra; Fabiana Queiroga (2023). Measurement Scales of Reactions to the Assessment of Graduate Programs: Evidences of Factorial Validity [Dataset]. http://doi.org/10.6084/m9.figshare.14284643.v1
    Explore at:
    xls
    Available download formats
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    SciELO journals
    Authors
    Kelly Rocha de Queiroz; Amalia Raquel Pérez-Nebra; Fabiana Queiroga
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: The purpose was to seek validity evidence for scales based on the model of reactions of higher education professors to the evaluation of graduate programs conducted by the Brazilian Federal Agency for Support and Evaluation of Graduate Education (Capes). The scales of satisfaction, justice perception, utility perception, and accuracy perception were administered to 814 higher education professors, 50.36% of whom were male, with a mean age of 47.66 years (SD = 9.34). Exploratory analysis indicated reliability of the four scales (alphas ranged from .69 to .97 and omegas from .70). These and other psychometric indicators show that the measures are reliable, and the reaction model was confirmed by the strong correlation between the scales.

  7. Leaf-level gas exchange and leaf-area index measurements collected at four...

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Leaf-level gas exchange and leaf-area index measurements collected at four study sites on the island of Maui, Hawaii, September 2017 - August 2018 [Dataset]. https://catalog.data.gov/dataset/leaf-level-gas-exchange-and-leaf-area-index-measurements-collected-at-four-study-sites-on-
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Maui, Hawaii
    Description

    The University of Hawaii at Manoa and the U.S. Geological Survey, in cooperation with the County of Maui Department of Water Supply and the State of Hawaii Commission on Water Resource Management, initiated a field data-collection program to measure the transpiration characteristics of native and non-native invasive plants within forested areas on the island of Maui. The field data collection is part of a study to quantify the impacts of high-priority non-native and dominant native plant species on freshwater resources throughout the State of Hawaii (https://archive.usgs.gov/archive/sites/hi.water.usgs.gov/studies/maui_eco/index.html). The overall objective of the study is to provide needed information for (1) assessing species-specific impacts on freshwater resources and (2) reducing uncertainty in regional recharge estimates associated with forested areas. This dataset includes a summary of measurements of leaf-level gas exchange characteristics and leaf-area index at four study sites on the island of Maui, Hawaii, between September 2017 and August 2018.

  8. Measurement Data from MARKlab: A Case Study of Key QoS Metrics Across...

    • zenodo.org
    bin, csv
    Updated May 16, 2025
    Cite
    Viktoria Vomhoff; Stefan Geißler; Steffen Gebert; Tobias Hoßfeld (2025). Measurement Data from MARKlab: A Case Study of Key QoS Metrics Across Multiple RATs in Four Countries [Dataset]. http://doi.org/10.5281/zenodo.15420422
    Explore at:
    csv, bin
    Available download formats
    Dataset updated
    May 16, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Viktoria Vomhoff; Stefan Geißler; Steffen Gebert; Tobias Hoßfeld
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    MARKlab: Distributed Measurements in Public Mobile Networks at International Scale

    This repository contains a description and sample datasets associated with the paper MARKlab: Distributed Measurements in Public Mobile Networks at International Scale published at the Network Traffic Measurement and Analysis (TMA) Conference 2025.
    The data is collected with the platform MARKlab, a distributed mobile measurement platform with a focus on network QoS and sustainability across different radio access technologies and application layer protocols for the IoT use case. More information can be found at https://lsinfo3.github.io/marklab-website.

    The datasets are organized by measurement type, with each CSV file corresponding to a specific figure from the study. Each file contains a subset of the available fields, selected to match the focus of the respective measurement.
    In the provided README.md file, we present the datasets, including an explanation of all contained fields.

    Contact

    For questions regarding the dataset, contact Viktoria Vomhoff (viktoria.vomhoff@uni-wuerzburg.de)

  9. Correlations among the Connection During Conversations Scale (CDCS), its...

    • plos.figshare.com
    xls
    Updated Jan 18, 2024
    Cite
    Karynna Okabe-Miyamoto; Lisa C. Walsh; Daniel J. Ozer; Sonja Lyubomirsky (2024). Correlations among the Connection During Conversations Scale (CDCS), its four subscales, and other relevant connection scales (Study 2). [Dataset]. http://doi.org/10.1371/journal.pone.0286408.t003
    Explore at:
    xls
    Available download formats
    Dataset updated
    Jan 18, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Karynna Okabe-Miyamoto; Lisa C. Walsh; Daniel J. Ozer; Sonja Lyubomirsky
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Correlations among the Connection During Conversations Scale (CDCS), its four subscales, and other relevant connection scales (Study 2).

  10. Data from: T1DiabetesGranada: a longitudinal multi-modal dataset of type 1...

    • produccioncientifica.ugr.es
    Updated 2023
    Cite
    Rodriguez-Leon, Ciro; Aviles Perez, Maria Dolores; Banos, Oresti; Quesada-Charneco, Miguel; Lopez-Ibarra, Pablo J; Villalonga, Claudia; Munoz-Torres, Manuel (2023). T1DiabetesGranada: a longitudinal multi-modal dataset of type 1 diabetes mellitus [Dataset]. https://produccioncientifica.ugr.es/documentos/668fc429b9e7c03b01bd53b7
    Explore at:
    Dataset updated
    2023
    Authors
    Rodriguez-Leon, Ciro; Aviles Perez, Maria Dolores; Banos, Oresti; Quesada-Charneco, Miguel; Lopez-Ibarra, Pablo J; Villalonga, Claudia; Munoz-Torres, Manuel
    Description

    T1DiabetesGranada

    A longitudinal multi-modal dataset of type 1 diabetes mellitus

    Documented by:

    Rodriguez-Leon, C., Aviles-Perez, M. D., Banos, O., Quesada-Charneco, M., Lopez-Ibarra, P. J., Villalonga, C., & Munoz-Torres, M. (2023). T1DiabetesGranada: a longitudinal multi-modal dataset of type 1 diabetes mellitus. Scientific Data, 10(1), 916. https://doi.org/10.1038/s41597-023-02737-4

    Background

    Type 1 diabetes mellitus (T1D) patients face daily difficulties in keeping their blood glucose levels within appropriate ranges. Several techniques and devices, such as flash glucose meters, have been developed to help T1D patients improve their quality of life. Most recently, the data collected via these devices is being used to train advanced artificial intelligence models to characterize the evolution of the disease and support its management. The main problem for the generation of these models is the scarcity of data, as most published works use private or artificially generated datasets. For this reason, this work presents T1DiabetesGranada, an open longitudinal dataset (available under specific permission) that provides not only continuous glucose levels but also patient demographic and clinical information. The dataset includes 257780 days of measurements over four years from 736 T1D patients from the province of Granada, Spain. This dataset progresses significantly beyond the state of the art as one of the longest and largest open datasets of continuous glucose measurements, thus boosting the development of new artificial intelligence models for glucose level characterization and prediction.

    Data Records

    The data are stored in four comma-separated values (CSV) files which are available in T1DiabetesGranada.zip. These files are described in detail below.

    Patient_info.csv

    Patient_info.csv is the file containing information about the patients, such as demographic data, start and end dates of blood glucose level measurements and biochemical parameters, number of biochemical parameters or number of diagnostics. This file is composed of 736 records, one for each patient in the dataset, and includes the following variables:

    Patient_ID – Unique identifier of the patient. Format: LIB19XXXX.

    Sex – Sex of the patient. Values: F (for female), M (for male).

    Birth_year – Year of birth of the patient. Format: YYYY.

    Initial_measurement_date – Date of the first blood glucose level measurement of the patient in the Glucose_measurements.csv file. Format: YYYY-MM-DD.

    Final_measurement_date – Date of the last blood glucose level measurement of the patient in the Glucose_measurements.csv file. Format: YYYY-MM-DD.

    Number_of_days_with_measures – Number of days with blood glucose level measurements of the patient, extracted from the Glucose_measurements.csv file. Values: ranging from 8 to 1463.

    Number_of_measurements – Number of blood glucose level measurements of the patient, extracted from the Glucose_measurements.csv file. Values: ranging from 400 to 137292.

    Initial_biochemical_parameters_date – Date of the first biochemical test to measure some biochemical parameter of the patient, extracted from the Biochemical_parameters.csv file. Format: YYYY-MM-DD.

    Final_biochemical_parameters_date – Date of the last biochemical test to measure some biochemical parameter of the patient, extracted from the Biochemical_parameters.csv file. Format: YYYY-MM-DD.

    Number_of_biochemical_parameters – Number of biochemical parameters measured on the patient, extracted from the Biochemical_parameters.csv file. Values: ranging from 4 to 846.

    Number_of_diagnostics – Number of diagnoses recorded for the patient, extracted from the Diagnostics.csv file. Values: ranging from 1 to 24.

    Glucose_measurements.csv

    Glucose_measurements.csv is the file containing the continuous blood glucose level measurements of the patients. The file is composed of more than 22.6 million records that constitute the time series of continuous blood glucose level measurements. It includes the following variables:

    Patient_ID – Unique identifier of the patient. Format: LIB19XXXX.

    Measurement_date – Date of the blood glucose level measurement. Format: YYYY-MM-DD.

    Measurement_time – Time of the blood glucose level measurement. Format: HH:MM:SS.

    Measurement – Value of the blood glucose level measurement in mg/dL. Values: ranging from 40 to 500.
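    A minimal sketch of parsing these records, assuming only the four-column layout documented above (the two sample rows are fabricated): the separate date and time columns are rebuilt into a full timestamp, and each value is checked against the documented 40-500 mg/dL range.

    ```python
    import csv
    import io
    from datetime import datetime

    # Two fabricated rows standing in for Glucose_measurements.csv;
    # real files use the same four-column layout.
    sample = io.StringIO(
        "Patient_ID,Measurement_date,Measurement_time,Measurement\n"
        "LIB190001,2020-05-01,08:00:00,110\n"
        "LIB190001,2020-05-01,08:15:00,118\n"
    )

    rows = []
    for row in csv.DictReader(sample):
        # Rebuild one timestamp from the two documented columns.
        stamp = datetime.strptime(
            f"{row['Measurement_date']} {row['Measurement_time']}",
            "%Y-%m-%d %H:%M:%S",
        )
        value = float(row["Measurement"])
        assert 40 <= value <= 500  # documented value range in mg/dL
        rows.append((row["Patient_ID"], stamp, value))
    ```

    With the timestamp reconstructed, the records of one patient form an ordinary time series suitable for resampling or model training.
    
    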

    Biochemical_parameters.csv

    Biochemical_parameters.csv is the file containing data of the biochemical tests performed on patients to measure their biochemical parameters. This file is composed of 87482 records and includes the following variables:

    Patient_ID – Unique identifier of the patient. Format: LIB19XXXX.

    Reception_date – Date of receipt in the laboratory of the sample to measure the biochemical parameter. Format: YYYY-MM-DD.

    Name – Name of the measured biochemical parameter. Values: 'Potassium', 'HDL cholesterol', 'Gammaglutamyl Transferase (GGT)', 'Creatinine', 'Glucose', 'Uric acid', 'Triglycerides', 'Alanine transaminase (GPT)', 'Chlorine', 'Thyrotropin (TSH)', 'Sodium', 'Glycated hemoglobin (Ac)', 'Total cholesterol', 'Albumin (urine)', 'Creatinine (urine)', 'Insulin', 'IA ANTIBODIES'.

    Value – Value of the biochemical parameter. Values: ranging from -4.0 to 6446.74.

    Diagnostics.csv

    Diagnostics.csv is the file containing diagnoses of diabetes mellitus complications or other diseases that patients have in addition to type 1 diabetes mellitus. This file is composed of 1757 records and includes the following variables:

    Patient_ID – Unique identifier of the patient. Format: LIB19XXXX.

    Code – ICD-9-CM diagnosis code. Values: a subset of 594 ICD-9-CM codes (https://www.cms.gov/Medicare/Coding/ICD9ProviderDiagnosticCodes/codes).

    Description – ICD-9-CM long description. Values: the long descriptions corresponding to the same subset of 594 ICD-9-CM codes (https://www.cms.gov/Medicare/Coding/ICD9ProviderDiagnosticCodes/codes).

    Technical Validation

    Blood glucose level measurements are collected using FreeStyle Libre devices, which are widely used in the care of patients with T1D. Abbott Diabetes Care, Inc. (Alameda, CA, USA), the manufacturer, has conducted validation studies of these devices, concluding that measurements made by their sensors compare well with those of YSI analyzer devices (Xylem Inc.), the gold standard, with results falling within zones A and B of the consensus error grid 99.9% of the time. In addition, studies external to the company have concluded that the accuracy of the measurements is adequate.

    Moreover, it was also checked that, in most cases, the blood glucose level measurements per patient in the Glucose_measurements.csv file were continuous (i.e., a sample at least every 15 minutes), as they should be.
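    The continuity check described above can be sketched as follows (the timestamps are fabricated for illustration; real ones would come from a patient's rows in Glucose_measurements.csv):

    ```python
    from datetime import datetime, timedelta

    # Flag any gap between consecutive glucose samples of one patient
    # larger than the expected 15-minute sampling interval.
    stamps = [
        datetime(2020, 5, 1, 8, 0),
        datetime(2020, 5, 1, 8, 15),
        datetime(2020, 5, 1, 9, 0),   # 45-minute gap -> not continuous
    ]
    gaps = [later - earlier for earlier, later in zip(stamps, stamps[1:])]
    continuous = all(gap <= timedelta(minutes=15) for gap in gaps)
    ```
    
    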

    Usage Notes

    For data downloading, it is necessary to be authenticated on the Zenodo platform, accept the Data Usage Agreement, and send a request specifying your full name, email, and the justification for the data use. The request will be processed by the Secretary of the Department of Computer Engineering, Automatics, and Robotics of the University of Granada, and access to the dataset will be granted.

    The files that compose the dataset are comma-delimited CSV files and are available in T1DiabetesGranada.zip. A Jupyter Notebook (Python v. 3.8) with code, graphics, and statistics that may help users better understand the dataset is available in UsageNotes.zip.

    Graphs_and_stats.ipynb

    The Jupyter Notebook generates tables, graphs and statistics for a better understanding of the dataset. It has four main sections, one dedicated to each file in the dataset. In addition, it provides useful functions such as calculating a patient's age, removing a list of patients from a dataset file, or keeping only a given list of patients in a dataset file.

    Code Availability

    The dataset was generated using custom code located in CodeAvailability.zip. The code is provided as Jupyter Notebooks created with Python v. 3.8 and was used for tasks such as data curation, transformation, and variable extraction.

    Original_patient_info_curation.ipynb

    This Jupyter Notebook preprocesses the original file with patient data. Mainly, irrelevant rows and columns are removed, and the sex variable is recoded.

    Glucose_measurements_curation.ipynb

    This Jupyter Notebook preprocesses the original file with the continuous glucose level measurements of the patients. Principally, rows without information or duplicated rows are removed, and the variable with the timestamp is split into two new variables: measurement date and measurement time.

    Biochemical_parameters_curation.ipynb

    This Jupyter Notebook preprocesses the original file with data from the biochemical tests performed on patients to measure their biochemical parameters. Mainly, irrelevant rows and columns are removed, and the variable with the name of the measured biochemical parameter is translated.

    Diagnostic_curation.ipynb

    This Jupyter Notebook preprocesses the original file with the diagnoses of diabetes mellitus complications or other diseases that patients have in addition to T1D.

    Get_patient_info_variables.ipynb

    This Jupyter Notebook codes the feature extraction process from the files Glucose_measurements.csv, Biochemical_parameters.csv and Diagnostics.csv to complete the file Patient_info.csv. It is divided into six sections: the first three extract the features from each of the mentioned files, and the last three add the extracted features to the resulting new file.
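    The extract-then-merge pattern behind this kind of feature extraction can be sketched as follows; the mean-glucose feature and all column names are illustrative assumptions, not the notebook's actual variables.

    ```python
    import pandas as pd

    # Hypothetical sketch: aggregate a per-patient feature from a measurements
    # file and add it to the patient-info file. Names are assumptions.
    patient_info = pd.DataFrame({"Patient_ID": [1, 2]})
    glucose = pd.DataFrame({"Patient_ID": [1, 1, 2], "Glucose": [100, 110, 90]})

    # Extract one feature per patient, then merge it into the patient table.
    features = glucose.groupby("Patient_ID")["Glucose"].mean().rename("Mean_glucose")
    patient_info = patient_info.merge(features, on="Patient_ID", how="left")
    ```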

    Data Usage Agreement

    The conditions for use are as follows:

    You confirm that you will not attempt to re-identify research participants for any reason, including for re-identification theory research.

    You commit to keeping the T1DiabetesGranada dataset confidential and secure and will not redistribute data or Zenodo account credentials.

    You will require

  11. f

    Measurement equivalence of the Four-Dimensional Symptom Questionnaire (4DSQ)...

    • figshare.com
    pdf
    Updated Jun 4, 2023
    Cite
    Berend Terluin; Johannes C. van der Wouden; Henrica C. W. de Vet (2023). Measurement equivalence of the Four-Dimensional Symptom Questionnaire (4DSQ) in adolescents and emerging adults [Dataset]. http://doi.org/10.1371/journal.pone.0221904
    Available download formats: pdf
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Berend Terluin; Johannes C. van der Wouden; Henrica C. W. de Vet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Four-Dimensional Symptom Questionnaire (4DSQ) is a self-report instrument measuring distress, depression, anxiety and somatization. The questionnaire has been developed and validated in adult samples. It is unknown whether adolescents and emerging adults respond to the 4DSQ items in the same way as adults do. The objective of the study was to examine measurement equivalence of the 4DSQ across adolescents, emerging adults and adults. 4DSQ data were collected in a primary care psychotherapy practice (N = 1349). Measurement equivalence was assessed using differential item and test functioning (DIF and DTF) analysis in an item response theory framework. DIF was compared across the following groups: adolescents (age 10–17), emerging adults (age 18–25), and adults (age 26–40). DIF was found in 9 items (out of 50) across adolescents and adults, and in 4 items across emerging adults and adults. The item with the largest DIF was ‘difficulty getting to sleep’, which was less severe for adolescents compared to adults. A likely explanation is that adolescents have a high base rate for problems with sleep initiation. The effect of DIF on the scale scores (DTF) was negligible. Adolescents and emerging adults score some 4DSQ items differently compared to adults but this had practically no effect on 4DSQ scale scores. 4DSQ scale scores from adolescents and emerging adults can be interpreted in the same way as 4DSQ scores from adults.

  12. f

    Table_1_Factorial Validity and Measurement Invariance of the Slovene Version...

    • frontiersin.figshare.com
    docx
    Updated May 30, 2023
    Cite
    Eva Boštjančič; Luka Komidar; Richard B. Johnson (2023). Table_1_Factorial Validity and Measurement Invariance of the Slovene Version of the Cultural Intelligence Scale.docx [Dataset]. http://doi.org/10.3389/fpsyg.2018.01499.s001
    Available download formats: docx
    Dataset updated
    May 30, 2023
    Dataset provided by
    Frontiers
    Authors
    Eva Boštjančič; Luka Komidar; Richard B. Johnson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This study examined the factorial validity of the Slovene version of the cultural intelligence scale (CQS) in a representative sample of 1,000 Slovenian participants (49% were female). The results of confirmatory factor analysis supported the factorial validity of the Slovene CQS and the existence of a general (second-order) cultural intelligence factor. The four scales and the overall (general) CQS scale showed satisfactory internal consistency. The results of multiple-group confirmatory factor analyses supported the hypotheses of partial measurement invariance across gender, and full measurement invariance across type of settlement (urban vs. rural).

  13. f

    pone.0280035.t001 - Measurement, data analysis and modeling of...

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    Cite
    Chaoyi Zhang; Zhangchao Ma; Jianquan Wang; Yan Yao; Xiangna Han; Xiang He (2023). pone.0280035.t001 - Measurement, data analysis and modeling of electromagnetic wave propagation gain in a typical vegetation environment [Dataset]. http://doi.org/10.1371/journal.pone.0280035.t001
    Available download formats: xls
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Chaoyi Zhang; Zhangchao Ma; Jianquan Wang; Yan Yao; Xiangna Han; Xiang He
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    pone.0280035.t001 - Measurement, data analysis and modeling of electromagnetic wave propagation gain in a typical vegetation environment

  14. i

    Utrecht-Management of Identity Committments Scale (U-MICS) - School identity...

    • data.individualdevelopment.nl
    Updated Oct 17, 2024
    Cite
    (2024). Utrecht-Management of Identity Committments Scale (U-MICS) - School identity - Measure - CD² [Dataset]. https://data.individualdevelopment.nl/dataset/5949ca752a594062fd06acca8694cf54
    Dataset updated
    Oct 17, 2024
    Area covered
    Utrecht
    Description

    The Utrecht-Management of Identity Commitments Scale - Educational Identity (IO) is a youth self-report scale that assesses four processes: 1) commitment to one's educational identity, 2) in-depth exploration of educational identity, 3) reconsideration of educational identity, and 4) in-breadth exploration of educational identity. Domain: Educational Identity. In RADAR Old Waves 6, 7, 12 and 13 and RADAR Young Waves 2, 3, 9 and 10, the subscale School In-breadth Exploration was excluded. The scale was not available from RADAR Old Wave 14 and RADAR Young Wave 11 onward, since most RADAR participants had reached adulthood and no longer attended school.

  15. f

    Cuff-less Blood Pressure Measurement based on Four-wavelength PPG Signals

    • figshare.com
    application/x-rar
    Updated Sep 14, 2023
    Cite
    Liang yongbo (2023). Cuff-less Blood Pressure Measurement based on Four-wavelength PPG Signals [Dataset]. http://doi.org/10.6084/m9.figshare.23283518.v2
    Available download formats: application/x-rar
    Dataset updated
    Sep 14, 2023
    Dataset provided by
    figshare
    Authors
    Liang yongbo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cuff-less Blood Pressure Measurement based on Four-wavelength PPG Signals

    This dataset was collected primarily to explore the role of PPG signals with different wavelengths in the prediction of cuffless blood pressure. The PPG signals are collected at the human index finger and are of the reflective type. The dataset can be used to study data mining of PPG signals with different wavelengths, or to build novel cuffless blood pressure measurement models using single or multiple PPG signals.

    The dataset contains four-wavelength PPG signals and blood pressure values measured by an OMRON HEM-7201. It comprises the data files "ppg_data" and physiological information files; the latter are saved as an Excel document named "Subject Information.xlsx". SBP and DBP represent systolic and diastolic blood pressure. SBP and DBP were measured before PPG signal collection, and PPG signal collection began immediately after the blood pressure measurement.

    The dataset contains a total of 180 subjects, each with a 60-second signal length and a 200 Hz sampling frequency. The wavelengths of the signals in the dataset are as follows:

    channel1: 660 nm
    channel2: 730 nm
    channel3: 850 nm
    channel4: 940 nm

    The above dataset is collected and managed by the CardioWorks Team. If you have any questions about the data or related research, please contact us by email: liangyongbo@guet.edu.cn or liangyongbo001@gmail.com. The CardioWorks Team focuses on PPG-based portable or wearable cardiovascular health detection and disease assessment.
    For more research datasets and published papers, please see the following:

    Datasets:
    PPG-BP Database: https://doi.org/10.6084/m9.figshare.5459299.v5
    Non-invasive Hemoglobin Detection based on Four-wavelength PPG Signal: https://doi.org/10.6084/m9.figshare.22256143.v1
    Cuff-less Blood Pressure Measurement based on Four-wavelength PPG Signals: https://doi.org/10.6084/m9.figshare.23283518.v1

    Published articles:
    [1] Mohamed Elgendi, Richard Fletcher, Yongbo Liang, et al. The use of photoplethysmography for assessing hypertension. npj Digital Medicine, 2019, 2(1): 1-11.
    [2] Xudong Hu, Shimin Yin, Xizhuang Zhang, Carlo Menon, Cheng Fang, Zhencheng Chen, Mohamed Elgendi and Yongbo Liang. Blood pressure stratification using photoplethysmography and light gradient boosting machine. Frontiers in Physiology, 2023, 14(1072273): 1-11.
    [3] Yongbo Liang, Shimin Yin, Qunfeng Tang, Zhenyu Zheng, Mohamed Elgendi and Zhencheng Chen. Deep Learning Algorithm Classifies Heartbeat Events Based on Electrocardiogram Signals. Frontiers in Physiology, 2020. doi:10.3389/fphys.2020.569050.
    [4] Peng Cheng, Zhencheng Chen, Quanzhong Li, Qiong Gong, Jianming Zhu, Yongbo Liang. Atrial Fibrillation Identification With PPG Signals Using a Combination of Time-Frequency Analysis and Deep Learning. IEEE Access, 2020, 8: 172692-172706.
    [5] Zhenyu Zheng, Zhencheng Chen, Fangrong Hu, Jianming Zhu, Qunfeng Tang, Yongbo Liang. An Automatic Diagnosis of Arrhythmias Using a Combination of CNN and LSTM Technology. Electronics, 2020, 9(1): 1-15.
    [6] Yongbo Liang, Derek Abbott, Newton Howard, Kenneth Lim, Rabab Ward and Mohamed Elgendi. How Effective Is Pulse Arrival Time for Evaluating Blood Pressure? Challenges and Recommendations from a Study Using the MIMIC Database. Journal of Clinical Medicine, 2019, 8: 1-14. doi:10.3390/jcm8030337.
    [7] Yongbo Liang, Zhencheng Chen, Guiyong Liu, Mohamed Elgendi. A new, short-recorded photoplethysmogram dataset for blood pressure monitoring in China. Scientific Data, 2018. doi:10.1038/sdata.2018.20.
    [8] Yongbo Liang, Mohamed Elgendi, Zhencheng Chen and Rabab Ward. An optimal filter for short photoplethysmogram signals. Scientific Data, 2018, 5, 180076. doi:10.1038/sdata.2018.76.
    [9] Yongbo Liang, Zhencheng Chen, Rabab Ward and Mohamed Elgendi. Hypertension Assessment Using Photoplethysmography: A Risk Stratification Approach. Journal of Clinical Medicine, 2018, 8. doi:10.3390/jcm8010012.
    [10] Yongbo Liang, Zhencheng Chen, Rabab Ward and Mohamed Elgendi. Hypertension Assessment via ECG and PPG Signals: An Evaluation Using MIMIC Database. Diagnostics, 2018, 8. doi:10.3390/diagnostics8030065.
    [11] Yongbo Liang, Zhencheng Chen, Rabab Ward and Mohamed Elgendi. Photoplethysmography and Deep Learning: Enhancing Hypertension Risk Stratification. Biosensors, 2018, 8. doi:10.3390/bios8040101.
    [12] Xuhao Dong, Ziyi Wang, Liangli Cao, Zhencheng Chen, Yongbo Liang. Whale Optimization Algorithm with a Hybrid Relation Vector Machine: A Highly Robust Respiratory Rate Prediction Model Using Photoplethysmography Signals. Diagnostics, 2023, 13(5): 1-14.
    [13] Zhencheng Chen, Huishan Qin, Wenjun Ge, Shiyong Li, Yongbo Liang. Research on a Non-Invasive Hemoglobin Measurement System Based on Four-Wavelength Photoplethysmography. Electronics, 2023, 12(6): 1-12.
    [14] Yang Zhang, Jianming Zhu, Yongbo Liang, Hongbo Chen, Shimin Yin and Zhencheng Chen. Non-invasive blood glucose detection system based on conservation of energy method. Physiological Measurement, 2017, 38: 325-342.
    [15] Yongbo Liang, Ahmed Hussain, Derek Abbott, Carlo Menon, Rabab Ward and Mohamed Elgendi. Impact of Data Transformation: An ECG Heartbeat Classification Approach. Frontiers in Digital Health, 2020. doi:10.3389/fdgth.2020.610956.
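    Given the acquisition parameters stated above (180 subjects, 60-second recordings at 200 Hz, four wavelength channels), a shape sanity check on a loaded recording might look like the sketch below. The (channels, samples) array layout is purely an assumption for illustration, since the exact format of the "ppg_data" files is not specified here.

    ```python
    import numpy as np

    # Synthetic placeholder for one subject's four-channel recording with the
    # stated parameters. The (channels, samples) layout is an assumption.
    fs = 200                               # sampling frequency in Hz
    duration_s = 60                        # signal length in seconds
    wavelengths_nm = [660, 730, 850, 940]  # channel1..channel4

    ppg = np.zeros((len(wavelengths_nm), fs * duration_s))
    n_samples = ppg.shape[1]  # 60 s at 200 Hz gives 12000 samples per channel
    ```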

  16. f

    Dimensionality statistics of the final bifactor models, by 4DSQ scale and...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    + more versions
    Cite
    Berend Terluin; Johannes C. van der Wouden; Henrica C. W. de Vet (2023). Dimensionality statistics of the final bifactor models, by 4DSQ scale and age group. [Dataset]. http://doi.org/10.1371/journal.pone.0221904.t003
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Berend Terluin; Johannes C. van der Wouden; Henrica C. W. de Vet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dimensionality statistics of the final bifactor models, by 4DSQ scale and age group.

  17. f

    Datasheet1_Operationalization of the social cognitive theory to explain and...

    • figshare.com
    docx
    Updated Nov 26, 2024
    + more versions
    Cite
    Viktoria S. Egele; Robin Stark (2024). Datasheet1_Operationalization of the social cognitive theory to explain and predict physical activity in Germany: a scale development.docx [Dataset]. http://doi.org/10.3389/fspor.2024.1508602.s001
    Available download formats: docx
    Dataset updated
    Nov 26, 2024
    Dataset provided by
    Frontiers
    Authors
    Viktoria S. Egele; Robin Stark
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Germany
    Description

    IntroductionSocial cognitive theory is one of the most prominent psychological theories regarding human behavior. Previous research tested and confirmed parts of the theory concerning the explanatory and predictive value of the theory, both in specific populations and in selected domains of physical activity. However, the value of this research is limited as researchers often use their own item sets rather than validated scales. Therefore, comparability of the studies is restricted and the quality of the individual findings can often not be conclusively assessed as psychometric properties of the measurement are unclear. The goal of this research was to develop a parsimonious, reliable, and valid questionnaire to assess the elements of SCT in the context of physical activity.MethodsIn total, 90 items were developed for the four factors of SCT, which were then examined by exploratory factor analysis and reduced to 18 items in total.ResultsCross-validation was successful. Internal consistency was good for the four subscales, test-retest reliability was satisfactory, as were indicators for convergent and divergent validity.DiscussionA short, reliable, and valid instrument was developed intended for use in the general adult population in Germany for research on theoretical assumptions and interventions based on social cognitive theory.

  18. f

    Data_Sheet_1_Validation of the Awareness Atlas—a new measure of the...

    • figshare.com
    pdf
    Updated Mar 21, 2024
    Cite
    Yuane Jia; Margaret Schenkman; Hester O Connor; Krishnmurthy Jayanna; Rosalind Pearmain; Annelies Van’t Westeinde; Kamlesh D. Patel (2024). Data_Sheet_1_Validation of the Awareness Atlas—a new measure of the manifestation of consciousness.PDF [Dataset]. http://doi.org/10.3389/fpsyg.2024.1283980.s001
    Available download formats: pdf
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    Frontiers
    Authors
    Yuane Jia; Margaret Schenkman; Hester O Connor; Krishnmurthy Jayanna; Rosalind Pearmain; Annelies Van’t Westeinde; Kamlesh D. Patel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Consciousness has intrigued philosophers and scholars for millennia and has been the topic of considerable scientific investigation in recent decades. Despite its importance, there is no unifying definition of the term, nor are there widely accepted measures of consciousness. Indeed, it is likely that consciousness—by its very nature—eludes measurement. It is, however, possible to measure how consciousness manifests as a lived experience. Yet here, too, holistic measures are lacking. This investigation describes the development and validation of the Awareness Atlas, a measure of the manifestation of consciousness. The scale was informed by heart-based contemplative practices and the resulting lived experience with a focus on the impacts of manifestation of consciousness on daily life. Four hundred forty-nine individuals from the USA, Canada, India, and Europe participated in psychometric testing of the scale. Exploratory and confirmatory factor analyses were used for validation, demonstrating excellent validity in measuring manifestation of consciousness. The final model fit exceeded all required thresholds, indicating an excellent fitted model with a single dimensionality to measure the manifestation of consciousness comprised of four subscales: Relationship to Others; Listening to the Heart; Connection with Higher Self; and Acceptance and Letting Go. Number of years meditating and practicing Heartfulness meditation were positively related to the total and subscale scores. Test–retest reliability was excellent for the total scale, and good to excellent for the four subscales. Findings demonstrate that the Awareness Atlas is a well-constructed tool that will be useful in examining changes in manifestation of consciousness with various experiences (e.g., meditation, life-altering conditions).

  19. f

    Study 2_Experiment 3.

    • plos.figshare.com
    • figshare.com
    xlsx
    Updated May 23, 2025
    + more versions
    Cite
    Chen-Yueh Chen; Yi-Hsiu Lin; Che-Yi Yang; Ya-Lun Chou; Tzu-Yun Yeh (2025). Study 2_Experiment 3. [Dataset]. http://doi.org/10.1371/journal.pone.0320927.s006
    Available download formats: xlsx
    Dataset updated
    May 23, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Chen-Yueh Chen; Yi-Hsiu Lin; Che-Yi Yang; Ya-Lun Chou; Tzu-Yun Yeh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The purposes of this paper are to develop measurement scale of value co-creation among spectators in professional spectating sporting events by means of Study I and to examine the effects of social media strategies on value co-creation among spectators through Study II. In Study I, a five-phase framework for developing a measurement scale is adopted including items generation, items refinement and edition, exploration of the latent factor structure of the scale and confirmation of reliability, examination of the structure of the factors, and scale validation. In Study II, four quasi-experiments are conducted to investigate the effects of electronic word of mouth, second screen, social media influencer promotion and online donation on value co-creation among spectators. The sample size for Study I and Study II is 830 and 993, respectively. Results obtained from Study I reveal three dimensions of value co-creation among spectators in the context of professional spectating sporting events: event atmosphere, word of mouth, and spectator interaction. Findings from Analysis of Covariance (ANCOVA) in Study II indicate that in Experiment I, positive electronic word of mouth does not help improve value co-creation among spectators while negative electronic word of mouth does decrease value co-creation among spectators. In Experiment II, the second screen under the condition of either positive or negative electronic word of mouth does not exert an influence on value co-creation among spectators. In Experiment III, the effect of social media influencer promotion on value co-creation among spectators is partially supported. In Experiment IV, under the condition of positive electronic word of mouth, the group of donation reveals greater mean score of event atmosphere than that of the counterparts for the group without donation. 
The findings of this study not only enrich sport management literature in terms of value co-creation, but also provide empirical evidence and practical implications for decision makers of professional spectating sporting events in terms of social media strategies.

  20. f

    Study two tests for measurement invariance for a four factor solution for 17...

    • plos.figshare.com
    • figshare.com
    xls
    Updated May 31, 2023
    Cite
    Brian K. Miller; Kay Nicols; Robert Konopaske (2023). Study two tests for measurement invariance for a four factor solution for 17 of the 20 items on the Mach IV scale. [Dataset]. http://doi.org/10.1371/journal.pone.0223504.t005
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Brian K. Miller; Kay Nicols; Robert Konopaske
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Study two tests for measurement invariance for a four factor solution for 17 of the 20 items on the Mach IV scale.

Cite
nasa.gov (2025). DSCOVR EPIC Level 4 Tropospheric Ozone [Dataset]. https://data.nasa.gov/dataset/dscovr-epic-level-4-tropospheric-ozone-a1ed2

Data from: DSCOVR EPIC Level 4 Tropospheric Ozone

Dataset updated
Apr 1, 2025
Dataset provided by
NASA (http://nasa.gov/)
Description

EPIC Tropospheric Ozone Data Product

The Earth Polychromatic Imaging Camera (EPIC) on the Deep Space Climate Observatory (DSCOVR) spacecraft provides measurements of Earth-reflected radiances from the entire sunlit portion of the Earth. The measurements from four EPIC UV (ultraviolet) channels reconstruct global distributions of total ozone. The tropospheric ozone columns (TCO) are then derived by subtracting independently measured stratospheric ozone columns from the EPIC total ozone. TCO data product files report gridded synoptic maps of TCO measured over the sunlit portion of the Earth disk on a 1-2 hour basis. Sampling times for these hourly TCO data files are the same as for the EPIC L2 total ozone product. Version 1.0 of the TCO product is based on Version 3 of the EPIC L1 product and the Version 3 Total Ozone Column Product. The stratospheric columns were derived from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) ozone fields (Gelaro et al., 2017).

In contrast to the EPIC total ozone maps, which are reported at a high spatial resolution of 18 × 18 km² near the center of the image, the TCO maps are spatially averaged over several EPIC pixels and written on a regular spatial grid (1° latitude × 1° longitude). Kramarova et al. (2021) describe the EPIC TCO product and its evaluation against independent sonde and satellite measurements. Table 1 lists all of the variables included in the TCO product files. Ozone arrays in the product files are integrated vertical columns in Dobson Units (DU; 1 DU = 2.69×10^20 molecules m^-2).

Filename Convention

The TCO product files are formatted HDF5 and represent a Level-4 (L4) product. The filenames have the following naming convention: "DSCOVR_EPIC_L4_TrO3_01_YYYYMMDDHHMMSS_03.h5", where "TrO3" means tropospheric column ozone, "01" means that this is version 01 of this product, "YYYYMMDDHHMMSS" is the UTC measurement time with "YYYY" for year (2015-present), "MM" for month (01-12), "DD" for day of the month (01-31), and "HHMMSS" for hours-minutes-seconds, and "03" signifies that v3 L1b measurements were used to derive the EPIC total ozone and consequently TCO.

Column Weighting Function Adjustment

There are two TCO gridded arrays in each hourly data file for the user to choose from: one is denoted TroposphericColumnOzone, and the other is TroposphericColumnOzoneAdjusted. The latter TCO array includes an adjustment to correct for the reduced sensitivity of the EPIC UV measurements in detecting ozone in the low troposphere/boundary layer. The adjustment depends on latitude and season and was derived using simulated tropospheric ozone from the GEOS-Replay model (Strode et al., 2020) constrained by the MERRA-2 meteorology through the replay method. Our analysis (Kramarova et al., 2021) indicated that the adjusted TCO array is more accurate and precise.

Flagging Bad Data

Kramarova et al. (2021) note that the preferred EPIC total ozone measurements for scientific study are those where the L2 "AlgorithmFlag" parameter equals 1, 101, or 111. In this TCO product, we have included only L2 total ozone pixels with these algorithm flag values. The TCO product files provide a gridded version of the AlgorithmFlag parameter as a comparison reference, but the user does not need it for data quality filtering.

Another parameter in the EPIC L2 total ozone files for filtering questionable data is the "ErrorFlag." The TCO product files include a gridded version of this ErrorFlag parameter that the user should apply: only TCO-gridded pixels with an ErrorFlag value of zero should be used. TCO measurements at high satellite-look angles and/or high solar zenith angles should also be filtered out. The TCO files include gridded versions of the satellite look angle and the solar zenith angle, denoted "SatelliteLookAngle" and "SolarZenithAngle," respectively. For scientific applications, users should filter the TCO arrays and use only pixels with SatelliteLookAngle and SolarZenithAngle below 70°.

Summary of the Derivation of the Tropospheric Column Ozone Product

We briefly summarize the derivation of EPIC TCO, stratospheric column ozone, and tropopause pressure. An independent measure of the stratospheric column ozone is needed to derive EPIC TCO. We use MERRA-2 ozone fields (Gelaro et al., 2017) to derive stratospheric ozone columns, which are subtracted from EPIC total ozone (TOZ) to obtain TCO. The MERRA-2 data assimilation system ingests Aura OMI (Ozone Monitoring Instrument) v8.5 total ozone and MLS (Microwave Limb Sounder) v4.2 stratospheric ozone profiles to produce global synoptic maps of profile ozone from the surface to the top of the atmosphere; for our analyses, we use MERRA-2 ozone profiles reported every three hours (0, 3, 6, ..., 21 UTC) at a resolution of 0.625° longitude × 0.5° latitude. MERRA-2 ozone profiles were integrated vertically from the top of the atmosphere down to the tropopause pressure to derive maps of stratospheric column ozone. Tropopause pressure was determined from MERRA-2 re-analyses using the standard PV-θ definition (2.5 PVU and 380 K). The resulting maps of stratospheric column ozone at 3-hour intervals from MERRA-2 were then space-time collocated with EPIC footprints and subtracted from the EPIC total ozone, thus producing daily global maps of residual TCO sampled at the precise EPIC pixel times. These tropospheric ozone measurements were further binned to 1° latitude × 1° longitude resolution.

References

Gelaro, R., W. McCarty, M.J. Suárez, R. Todling, A. Molod, L. Takacs, C.A. Randles, A. Darmenov, M.G. Bosilovich, R. Reichle, K. Wargan, L. Coy, R. Cullather, C. Draper, S. Akella, V. Buchard, A. Conaty, A.M. da Silva, W. Gu, G. Kim, R. Koster, R. Lucchesi, D. Merkova, J.E. Nielsen, G. Partyka, S. Pawson, W. Putman, M. Rienecker, S.D. Schubert, M. Sienkiewicz, and B. Zhao, The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2), J. Climate, 30, 5419-5454, https://doi.org/10.1175/JCLI-D-16-0758.1, 2017.

Kramarova, N. A., J. R. Ziemke, L.-K. Huang, J. R. Herman, K. Wargan, C. J. Seftor, G. J. Labow, and L. D. Oman, Evaluation of Version 3 total and tropospheric ozone columns from EPIC on DSCOVR for studying regional-scale ozone variations, Front. Rem. Sens., in review, 2021.

Table 1. List of parameters and data arrays in the EPIC tropospheric ozone hourly product files. The first column lists the variable name, the second the variable description and units, and the third the variable data type and dimensions.

NadirLatitude | Nadir latitude in degrees | Real4 number
NadirLongitude | Nadir longitude in degrees | Real4 number
Latitude | Center latitude of grid point in degrees | Real4 array with 180 elements
Longitude | Center longitude of grid point in degrees | Real4 array with 360 elements
TroposphericColumnOzone | Tropospheric column ozone in Dobson Units | Real4 array with dimensions 360 × 180
TroposphericColumnOzoneAdjusted | Tropospheric column ozone with BL adjustment in Dobson Units | Real4 array with dimensions 360 × 180
StratosphericColumnOzone | Stratospheric column ozone in Dobson Units | Real4 array with dimensions 360 × 180
TotalColumnOzone | Total column ozone in Dobson Units | Real4 array with dimensions 360 × 180
Reflectivity | Reflectivity (no units) | Real4 array with dimensions 360 × 180
RadiativeCloudFraction | Radiative cloud fraction (no units) | Real4 array with dimensions 360 × 180
TropopausePressure | Tropopause pressure in hPa | Real4 array with dimensions 360 × 180
CWF1 | Column weighting function for layer 1 (506.6-1013.3 hPa) | Real4 array with dimensions 360 × 180
ErrorFlag | Error flag for TCO data | Real4 array with dimensions 360 × 180
AlgorithmFlag | Algorithm flag for TCO data | Real4 array with dimensions 360 × 180
SatelliteLookAngle | Satellite look angle in degrees | Real4 array with dimensions 360 × 180
SolarZenithAngle | Solar zenith angle in degrees | Real4 array with dimensions 360 × 180
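Putting the recommended screening together (ErrorFlag equal to zero, both viewing angles below 70°), a minimal NumPy sketch of the masking step might look like this. In practice the gridded 360 × 180 arrays would be read from a product file with h5py; whether the named datasets sit at the HDF5 root is an assumption, and small synthetic arrays are used here so the logic is self-contained.

```python
import numpy as np

# Minimal sketch of the recommended quality screening for the TCO grids.
# Real product arrays would come from h5py (e.g. h5py.File(path)["ErrorFlag"]);
# synthetic 2x2 arrays stand in for the 360 x 180 grids here.

def quality_mask(error_flag, look_angle, solar_zenith, max_angle=70.0):
    """True where a TCO grid cell passes the recommended screening."""
    return (error_flag == 0) & (look_angle < max_angle) & (solar_zenith < max_angle)

tco = np.array([[280.0, 300.0], [310.0, 295.0]])      # Dobson Units
error_flag = np.array([[0, 1], [0, 0]])
look_angle = np.array([[30.0, 40.0], [75.0, 50.0]])   # degrees
solar_zenith = np.array([[45.0, 20.0], [10.0, 69.0]]) # degrees

mask = quality_mask(error_flag, look_angle, solar_zenith)
tco_screened = np.where(mask, tco, np.nan)  # reject flagged or high-angle pixels
```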
