32 datasets found
  1. Example subjects for Mobilise-D data standardization

    • data.niaid.nih.gov
    Updated Oct 11, 2022
    Cite
    Gazit, Eran (2022). Example subjects for Mobilise-D data standardization [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7185428
    Explore at:
    Dataset updated
    Oct 11, 2022
    Dataset provided by
    Cereatti, Andrea
    Gazit, Eran
    Micó-Amigo, Encarna
    Soltani, Abolfazl
    Salis, Francesca
    Hansen, Clint
    D'Ascanio, Ilaria
    Palmerini, Luca
    on behalf of the Mobilise-D consortium
    Paraschiv-Ionescu, Anisoara
    Mazzà, Claudia
    Kluge, Felix
    Caruso, Marco
    Del Din, Silvia
    Kirk, Cameron
    Küderle, Arne
    Hiden, Hugo
    Chiari, Lorenzo
    Bertuletti, Stefano
    Rochester, Lynn
    Ullrich, Martin
    Reggi, Luca
    Bonci, Tecla
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Standardized data from Mobilise-D participants (YAR dataset) and pre-existing datasets (ICICLE, MSIPC2, Gait in Lab and real-life settings, MS project, UNISS-UNIGE) are provided in the shared folder as an example of the procedures proposed in the publication "Mobility recorded by wearable devices and gold standards: the Mobilise-D procedure for data standardization", currently under review at Scientific Data. Please refer to that publication for further information, and please cite it if using these data.

    The code to standardize an example subject (for the ICICLE dataset) and to open the standardized Matlab files in other languages (Python, R) is available on GitHub (https://github.com/luca-palmerini/Procedure-wearable-data-standardization-Mobilise-D).
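
    A minimal sketch of loading one of the standardized Matlab files into R (the file name here is an assumption; the repository above provides the reference implementation):

      # install.packages("R.matlab")
      library(R.matlab)
      # readMat() returns the standardized structure as nested R lists
      subject <- readMat("example_subject.mat")   # hypothetical file name
      # Inspect the top levels of the standardized format
      str(subject, max.level = 2)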

  2. Data_Sheet_2_NormExpression: An R Package to Normalize Gene Expression Data...

    • frontiersin.figshare.com
    zip
    Updated Jun 1, 2023
    + more versions
    Cite
    Zhenfeng Wu; Weixiang Liu; Xiufeng Jin; Haishuo Ji; Hua Wang; Gustavo Glusman; Max Robinson; Lin Liu; Jishou Ruan; Shan Gao (2023). Data_Sheet_2_NormExpression: An R Package to Normalize Gene Expression Data Using Evaluated Methods.zip [Dataset]. http://doi.org/10.3389/fgene.2019.00400.s002
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Frontiers
    Authors
    Zhenfeng Wu; Weixiang Liu; Xiufeng Jin; Haishuo Ji; Hua Wang; Gustavo Glusman; Max Robinson; Lin Liu; Jishou Ruan; Shan Gao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data normalization is a crucial step in gene expression analysis, as it ensures the validity of downstream analyses. Although many metrics have been designed to evaluate existing normalization methods, different metrics, or the same metric applied to different datasets, yield inconsistent results, particularly for single-cell RNA sequencing (scRNA-seq) data. In the worst case, a method evaluated as the best by one metric is evaluated as the poorest by another, or a method evaluated as the best on one dataset is evaluated as the poorest on another. This raises an open question: principles need to be established to guide the evaluation of normalization methods. In this study, we propose a principle that a normalization method evaluated as the best by one metric should also be evaluated as the best by another metric (the consistency of metrics), and a method evaluated as the best using scRNA-seq data should also be evaluated as the best using bulk RNA-seq data or microarray data (the consistency of datasets). We then designed a new metric named Area Under normalized CV threshold Curve (AUCVC) and applied it, together with another metric, mSCC, to evaluate 14 commonly used normalization methods using both scRNA-seq and bulk RNA-seq data, satisfying the consistency of metrics and the consistency of datasets. Our findings pave the way for future studies on the normalization of gene expression data and its evaluation. The raw gene expression data, normalization methods, and evaluation metrics used in this study have been included in an R package named NormExpression. NormExpression provides a framework and a fast and simple way for researchers to select the best method for the normalization of their gene expression data, based on the evaluation of different methods (particularly some data-driven methods or their own methods) under the principle of the consistency of metrics and the consistency of datasets.
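
    As an illustration of the CV-threshold idea behind AUCVC, here is a toy sketch in base R (this is not the NormExpression API; the simulated matrix stands in for normalized expression data):

      set.seed(1)
      # Fake normalized expression: 200 genes x 10 samples
      counts <- matrix(rpois(2000, lambda = 10), nrow = 200)
      # Per-gene coefficient of variation (CV)
      cv <- apply(counts, 1, function(g) sd(g) / mean(g))
      # Fraction of genes whose CV falls below each threshold
      thresholds <- seq(0, max(cv), length.out = 100)
      frac_below <- sapply(thresholds, function(t) mean(cv <= t))
      # Area under the normalized CV-threshold curve (left Riemann sum);
      # a better normalization yields more low-CV genes, hence a larger area
      sum(diff(thresholds / max(thresholds)) * head(frac_below, -1))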

  3. Standardized Precipitation Index (SPI) 1981 - Present

    • community-climatesolutions.hub.arcgis.com
    • resilience.climate.gov
    • +9more
    Updated Aug 16, 2022
    + more versions
    Cite
    Esri (2022). Standardized Precipitation Index (SPI) 1981 - Present [Dataset]. https://community-climatesolutions.hub.arcgis.com/maps/8aec7dfe18d244d9bfca141de611e934
    Explore at:
    Dataset updated
    Aug 16, 2022
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Droughts are naturally occurring events in which dry conditions persist over time. Droughts are complex to characterize because they depend on water and energy balances at different temporal and spatial scales. The Standardized Precipitation Index (SPI) is used to analyze meteorological droughts. SPI estimates the deviation of precipitation from the long-term probability function at different time scales (e.g. 1, 3, 6, 9, or 12 months). SPI uses only monthly precipitation as an input, which makes it well suited to characterizing meteorological droughts; other variables (e.g. temperature or evapotranspiration) should be included when characterizing other types of droughts (e.g. agricultural droughts). This layer shows the SPI at different temporal periods, calculated using the SPEI library in R and precipitation data from the CHIRPS dataset. Sources: Climate Hazards Center InfraRed Precipitation with Station data (CHIRPS); SPEI R library.
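
    A minimal sketch of the SPI calculation with the SPEI R library mentioned above (the precipitation series is simulated; in practice the CHIRPS monthly data would be substituted):

      # install.packages("SPEI")
      library(SPEI)
      # Simulated monthly precipitation from January 1981 onwards
      set.seed(42)
      precip <- ts(rgamma(12 * 40, shape = 2, scale = 50),
                   start = c(1981, 1), frequency = 12)
      # 3-month SPI; set scale to 1, 6, 9, or 12 for the other time scales
      spi3 <- spi(precip, scale = 3)
      plot(spi3)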

  4. Data applied to automatic method to transform routine otolith images for a...

    • b2find.eudat.eu
    Updated Jan 24, 2023
    + more versions
    Cite
    (2023). Data applied to automatic method to transform routine otolith images for a standardized otolith database using R - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/c2aba870-6c60-5b01-8514-245640b5ff64
    Explore at:
    Dataset updated
    Jan 24, 2023
    Description

    Fisheries management is generally based on age structure models. Thus, fish ageing data are collected by experts who analyze and interpret calcified structures (scales, vertebrae, fin rays, otoliths, etc.) according to a visual process. The otolith, in the inner ear of the fish, is the most commonly used calcified structure because it is metabolically inert and historically one of the first proxies developed. It contains information throughout the whole life of the fish and provides age structure data for stock assessments of all commercial species. The traditional human reading method for age determination is very time-consuming. Automated image analysis can be a low-cost alternative, but the first step is the transformation of routinely taken otolith images into standardized images within a database, so that machine learning techniques can be applied to the ageing data. Otolith shape, resulting from the synthesis of genetic heritage and environmental effects, is a useful tool to identify stock units, so a database of standardized images could also serve this aim. Using the routinely measured otolith data of plaice (Pleuronectes platessa; Linnaeus, 1758) and striped red mullet (Mullus surmuletus; Linnaeus, 1758) in the eastern English Channel and north-east Arctic cod (Gadus morhua; Linnaeus, 1758), a greyscale image matrix was generated from the raw images, which came in different formats. Contour detection was then applied to identify broken otoliths, the orientation of each otolith, and the number of otoliths per image. To finalize this standardization process, all images were resized and binarized. Several mathematical morphology tools were developed from these new images to align and orient the images, placing the otoliths in the same layout in each image. For this study, we used three databases from two different laboratories covering three species (cod, plaice and striped red mullet). The method was validated on these three species and could be applied to other species for age determination and stock identification.
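
    A minimal sketch of the greyscale/resize/binarize steps described above, using the imager package (the file name and target size are assumptions; the published contour detection and alignment steps are not reproduced here):

      # install.packages("imager")
      library(imager)
      oto      <- load.image("otolith_001.jpg")  # hypothetical routine image
      oto_grey <- grayscale(oto)                 # greyscale matrix from the raw image
      oto_std  <- resize(oto_grey, 256, 256)     # resize to a common frame
      oto_bin  <- threshold(oto_std)             # automatic binarization
      plot(oto_bin)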

  5. Standardizing Clinical Trials Workflow Representation in UML for...

    • plos.figshare.com
    doc
    Updated May 30, 2023
    Cite
    Elias Cesar Araujo de Carvalho; Madhav Kishore Jayanti; Adelia Portero Batilana; Andreia M. O. Kozan; Maria J. Rodrigues; Jatin Shah; Marco R. Loures; Sunita Patil; Philip Payne; Ricardo Pietrobon (2023). Standardizing Clinical Trials Workflow Representation in UML for International Site Comparison [Dataset]. http://doi.org/10.1371/journal.pone.0013893
    Explore at:
    Available download formats: doc
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Elias Cesar Araujo de Carvalho; Madhav Kishore Jayanti; Adelia Portero Batilana; Andreia M. O. Kozan; Maria J. Rodrigues; Jatin Shah; Marco R. Loures; Sunita Patil; Philip Payne; Ricardo Pietrobon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML.

    Methods: Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software.

    Results: Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities.

    Conclusions: This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.

  6. Dataset related to article "A reference framework for standardization and...

    • zenodo.org
    Updated Sep 26, 2023
    Cite
    R.Levi; R.Levi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; M. Grimaldi; M. Grimaldi; R. Barbieri; L.S. Politi; L.S. Politi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; R. Barbieri (2023). Dataset related to article "A reference framework for standardization and harmonization of CT Radiomics features: the "CadAIver" analysis" [Dataset]. http://doi.org/10.5281/zenodo.8144241
    Explore at:
    Dataset updated
    Sep 26, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    R.Levi; R.Levi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; M. Grimaldi; M. Grimaldi; R. Barbieri; L.S. Politi; L.S. Politi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; R. Barbieri
    Description

    Abstract

    Background

    In recent years, Radiomics features (RFs) have been developed to provide quantitative, standardized information about shape, density/intensity and texture patterns on radiological images. Several studies showed limitations in the reproducibility of RFs across different acquisition settings. To date, reproducibility studies using CT images mainly rely on phantoms, due to the hazards of patient exposure to X-rays. In this study we analyze the effects of CT acquisition parameters on RFs of lumbar vertebrae in a cadaveric donor.

    Methods

    114 unique CT acquisitions of a cadaveric trunk were performed on 3 different CT scanners, varying kV, mA, field of view and reconstruction kernel settings. Lumbar vertebrae were segmented through a deep learning convolutional neural network and RFs were computed. The effects of each protocol on each RF were assessed by univariate and multivariate Generalized Linear Models (GLM). Further, we compared the GLM to the ComBat algorithm in terms of efficiency in harmonizing CT images.

    Findings

    From the GLM, mA variation was not associated with alteration of RFs, whereas kV modification was associated with exponential variation of several RFs, including First Order (94.4%), GLCM (87.5%) and NGTDM (100%).

    Upon cross-validation, the ComBat algorithm obtained a mean R² higher than 0.90 for 1 RF (0.9%), whereas the GLM obtained a high R² for 21 RFs (19.6%), showing that the proposed GLM could harmonize acquisitions more effectively than ComBat.

    Interpretation

    This study represents the first attempt at describing the effects of CT acquisition parameters on bone RFs in a cadaveric donor. Our analyses showed that RFs can differ substantially with the variation of each acquisition parameter and across datasets obtained from different CT scanners. These differences can be minimized using the proposed GLM. The publicly available dataset and GLM could foster Radiomics-based research by increasing harmonization across CT protocols and vendors.
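
    A minimal sketch of the kind of univariate GLM described above, on simulated data (the variable names and effect sizes are assumptions, not the published dataset's schema):

      set.seed(7)
      acq <- data.frame(
        kv     = sample(c(80, 100, 120, 140), 114, replace = TRUE),
        ma     = sample(c(100, 200, 300), 114, replace = TRUE),
        kernel = factor(sample(c("soft", "standard", "bone"), 114, replace = TRUE))
      )
      # Fake first-order RF that, like the reported findings, depends on kV but not mA
      acq$rf <- exp(0.02 * acq$kv) + rnorm(114, sd = 0.5)
      fit <- glm(log(rf) ~ kv + ma + kernel, data = acq, family = gaussian)
      summary(fit)  # kV should show a strong effect; mA should not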

  7. soilmap_simple: a simplified and standardized derivative of the digital soil...

    • data.niaid.nih.gov
    Updated Mar 24, 2025
    Cite
    Cools, Nathalie (2025). soilmap_simple: a simplified and standardized derivative of the digital soil map of the Flemish Region [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3732903
    Explore at:
    Dataset updated
    Mar 24, 2025
    Dataset provided by
    De Vos, Bruno
    Vanderhaeghe, Floris
    Cools, Nathalie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Flanders
    Description

    The data source soilmap_simple is a simplified and standardized derivative of the 'digital soil map of the Flemish Region' (the shapefile of which we named soilmap, for analytical workflows in R) published by 'Databank Ondergrond Vlaanderen' (DOV). It is a GeoPackage that contains a spatial polygon layer 'soilmap_simple' in the Belgian Lambert 72 coordinate reference system (EPSG code 31370), plus a non-spatial table 'explanations' with the meaning of category codes that occur in the spatial layer. Further documentation about the digital soil map of the Flemish Region is available in Van Ranst & Sys (2000) and Dudal et al. (2005).

    This version of soilmap_simple was derived from version 'soilmap_2017-06-20' (Zenodo DOI) as follows:

    all attribute variables received English names (purpose of standardization), starting with prefix bsm_ (referring to the 'Belgian soil map');

    attribute variables were reordered;

    the values of the morphogenetic substrate, texture and drainage variables (bsm_mo_substr, bsm_mo_tex and bsm_mo_drain + their _explan counterparts) were filled for most features in the 'coastal plain' area.

    To derive morphogenetic texture and drainage levels from the geomorphological soil types, a conversion table by Bruno De Vos & Carole Ampe was applied (for earlier work on this, see Ampe 2013).

    Substrate classes were copied over from bsm_ge_substr into bsm_mo_substr (bsm_ge_substr already followed the categories of bsm_mo_substr).

    These steps coincide with the approach that had been taken to construct the Unitype variable in the soilmap data source;

    only a minimal number of variables were selected: those that are most useful for analytical work.

    See R-code in the GitHub repository 'n2khab-preprocessing' at commit b3c6696 for the creation from the soilmap data source.

    A reading function that returns soilmap_simple (this data source) or soilmap in a standardized way in the R environment is provided by the R package n2khab.
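
    A minimal sketch of reading the GeoPackage directly with sf (the file path is an assumption; the n2khab reading function mentioned above is the standardized route):

      # install.packages("sf")
      library(sf)
      # Spatial polygon layer (Belgian Lambert 72, EPSG:31370)
      soilmap_simple <- st_read("soilmap_simple.gpkg", layer = "soilmap_simple")
      # Non-spatial table with the meaning of the category codes
      explanations <- st_read("soilmap_simple.gpkg", layer = "explanations")
      st_crs(soilmap_simple)  # confirm EPSG:31370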

    The attributes of the spatial polygon layer soilmap_simple can have mo_ in their name to refer to the Belgian Morphogenetic System:

    bsm_poly_id: unique polygon ID (numeric)

    bsm_region: name of the region

    bsm_converted: boolean. Were morphogenetic texture and drainage variables (bsm_mo_tex and bsm_mo_drain) derived from a conversion table (see above)? Value TRUE is largely confined to the 'coastal plain' areas.

    bsm_mo_soilunitype: code of the soil type (applying morphogenetic codes within the coastal plain areas when possible, just as for the following three variables)

    bsm_mo_substr: code of the soil substrate

    bsm_mo_tex: code of the soil texture category

    bsm_mo_drain: code of the soil drainage category

    bsm_mo_prof: code of the soil profile category

    bsm_mo_parentmat: code of a variant regarding the parent material

    bsm_mo_profvar: code of a variant regarding the soil profile

    The non-spatial table explanations has following variables:

    subject: attribute name of the spatial layer: either bsm_mo_substr, bsm_mo_tex, bsm_mo_drain, bsm_mo_prof, bsm_mo_parentmat or bsm_mo_profvar

    code: category code that occurs as value for the corresponding attribute in the spatial layer

    name: explanation of the value of code

  8. Seed Information Database: taxonomic standardization of 54,856 taxa to World...

    • zenodo.org
    Updated Mar 21, 2025
    Cite
    Roeland Kindt; Roeland Kindt (2025). Seed Information Database: taxonomic standardization of 54,856 taxa to World Flora Online or the World Checklist of Vascular Plants [Dataset]. http://doi.org/10.5281/zenodo.15055069
    Explore at:
    Dataset updated
    Mar 21, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Roeland Kindt; Roeland Kindt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Seed Information Database (SER, INSR & RBGK 2023) provides user-friendly access to information on seed weight, storage behaviour, germination requirements, and other traits for more than 50,000 plant taxa.

    Here I provide a dataset where these taxa were standardized to World Flora Online (Borsch et al. 2020; taxonomic backbone version 2023.12) or the World Checklist of Vascular Plants (Govaerts et al. 2021, version 11) by matching names with those in the Agroforestry Species Switchboard (Kindt et al. 2025; version 4). Taxa for which no matches could be found were standardized with the WorldFlora package (Kindt 2020), using similar R scripts and the same taxonomic backbone data (WFO; WCVP when no acceptable match was found) as those used to standardize species names for the Switchboard.

    Additional fields indicate whether a species was flagged as a tree in the Switchboard, and the lifeform obtained from the World Checklist of Vascular Plants (Govaerts et al. 2021, version 11).
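
    A minimal sketch of name matching with the WorldFlora package, as used for the unmatched taxa (the file path and example names are assumptions; the WFO backbone file must be downloaded from World Flora Online first):

      # install.packages(c("WorldFlora", "data.table"))
      library(WorldFlora)
      # Taxa to standardize (toy examples)
      specs <- data.frame(spec.name = c("Acacia nilotica", "Faidherbia albida"))
      # WFO taxonomic backbone, downloaded separately as classification.csv
      WFO.data <- data.table::fread("classification.csv", encoding = "UTF-8")
      matched <- WFO.match(spec.data = specs, WFO.data = WFO.data)
      WFO.one(matched)  # reduce to one accepted match per input name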

    References

    • SER, INSR, RBGK. 2023. Seed Information Database (SID). Society for Ecological Restoration, International Network for Seed Based Restoration and Royal Botanic Gardens Kew. https://ser-sid.org/
    • Borsch, T., Berendsohn, W., Dalcin, E., Delmas, M., Demissew, S., Elliott, A., Fritsch, P., Fuchs, A., Geltman, D., Güner, A., Haevermans, T., Knapp, S., le Roux, M.M., Loizeau, P.-A., Miller, C., Miller, J., Miller, J.T., Palese, R., Paton, A., Parnell, J., Pendry, C., Qin, H.-N., Sosa, V., Sosef, M., von Raab-Straube, E., Ranwashe, F., Raz, L., Salimov, R., Smets, E., Thiers, B., Thomas, W., Tulig, M., Ulate, W., Ung, V., Watson, M., Jackson, P.W. and Zamora, N. (2020), World Flora Online: Placing taxonomists at the heart of a definitive and comprehensive global resource on the world's plants. TAXON, 69: 1311-1341. https://doi.org/10.1002/tax.12373
    • Roeland Kindt, Ilyas Siddique, Ian Dawson, Innocent John, Fabio Pedercini, Jens-Peter B. Lilleso, Lars Graudal. 2025. The Agroforestry Species Switchboard, a global resource to explore information for 107,269 plant species. bioRxiv 2025.03.09.642182; doi: https://doi.org/10.1101/2025.03.09.642182
    • Govaerts, R., Nic Lughadha, E., Black, N. et al. The World Checklist of Vascular Plants, a continuously updated resource for exploring global plant diversity. Sci Data 8, 215 (2021). https://doi.org/10.1038/s41597-021-00997-6
    • Kindt, R. 2020. WorldFlora: An R package for exact and fuzzy matching of plant names against the World Flora Online taxonomic backbone data. Applications in Plant Sciences 8(9): e11388. https://doi.org/10.1002/aps3.11388

    Funding

    The development of this dataset was supported by the Darwin Initiative to project DAREX001 of Developing a Global Biodiversity Standard certification for tree-planting and restoration, by Norway’s International Climate and Forest Initiative through the Royal Norwegian Embassy in Ethiopia to the Provision of Adequate Tree Seed Portfolio project in Ethiopia, by the Bezos Earth Fund to the Quality Tree Seed for Africa in Kenya and Rwanda project and by the German International Climate Initiative (IKI) to the regional tree seed programme on The Right Tree for the Right Place for the Right Purpose in Africa.

  9. Data from: WiBB: An integrated method for quantifying the relative...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Jun 5, 2022
    Cite
    Qin Li; Qin Li; Xiaojun Kou; Xiaojun Kou (2022). WiBB: An integrated method for quantifying the relative importance of predictive variables [Dataset]. http://doi.org/10.5061/dryad.xsj3tx9g1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 5, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Qin Li; Qin Li; Xiaojun Kou; Xiaojun Kou
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset contains simulated datasets, empirical data, and R scripts described in the paper: "Li, Q. and Kou, X. (2021) WiBB: An integrated method for quantifying the relative importance of predictive variables. Ecography (DOI: 10.1111/ecog.05651)".

    A fundamental goal of scientific research is to identify the underlying variables that govern crucial processes of a system. Here we proposed a new index, WiBB, which integrates the merits of several existing methods: a model-weighting method from information theory (Wi), a standardized regression coefficient method measured by ß* (B), and the bootstrap resampling technique (B). We applied WiBB to simulated datasets with known correlation structures, for both linear models (LM) and generalized linear models (GLM), to evaluate its performance. We also applied two other methods, relative sum of weights (SWi) and standardized beta (ß*), to evaluate their performance in comparison with the WiBB method for ranking predictor importance under various scenarios. We also applied it to an empirical dataset of the plant genus Mimulus to select bioclimatic predictors of species' presence across the landscape. Results on the simulated datasets showed that the WiBB method outperformed the ß* and SWi methods in scenarios with small and large sample sizes, respectively, and that the bootstrap resampling technique significantly improved the discriminant ability. When testing WiBB on the empirical dataset with GLM, it sensibly identified four important predictors with high credibility out of six candidates in modeling the geographical distributions of 71 Mimulus species. This integrated index has great advantages in evaluating predictor importance and hence reducing the dimensionality of data, without losing interpretive power. The simplicity of calculation of the new metric, compared with more sophisticated statistical procedures, makes it a handy addition to the statistical toolbox.
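
    A toy sketch of the two ingredients WiBB combines, standardized regression coefficients and bootstrap resampling, in base R (simulated data; this illustrates the idea and is not the authors' implementation):

      set.seed(123)
      n <- 200
      d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
      d$y <- 1.5 * d$x1 + 0.5 * d$x2 + rnorm(n)
      # Standardized betas from one bootstrap resample
      boot_beta <- function() {
        i <- sample(n, replace = TRUE)
        coef(lm(scale(y) ~ scale(x1) + scale(x2), data = d[i, ]))[-1]
      }
      B <- t(replicate(999, boot_beta()))
      # Bootstrap distribution of |standardized beta| per predictor
      apply(abs(B), 2, quantile, probs = c(0.025, 0.5, 0.975))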

  10. Simulation Data Set

    • catalog.data.gov
    Updated Nov 12, 2020
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). Simulation Data Set [Dataset]. https://catalog.data.gov/dataset/simulation-data-set
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    These are simulated data without any identifying information or informative birth-level covariates. We also standardize the pollution exposures on each week by subtracting off the median exposure amount on a given week and dividing by the interquartile range (IQR) (as in the actual application to the true NC birth records data). The dataset that we provide includes weekly average pregnancy exposures that have already been standardized in this way, while the medians and IQRs are not given. This further protects identifiability of the spatial locations used in the analysis.

    This dataset is not publicly accessible because EPA cannot release personally identifiable information regarding living individuals, according to the Privacy Act and the Freedom of Information Act (FOIA). This dataset contains information about human research subjects, and because there is potential to identify individual participants and disclose personal information, either alone or in combination with other datasets, individual-level data are not appropriate to post for public access. Restricted access may be granted to authorized persons by contacting the party listed.

    File format: R workspace file ("Simulated_Dataset.RData").

    Metadata (including data dictionary):
    • y: vector of binary responses (1: adverse outcome, 0: control)
    • x: matrix of covariates; one row for each simulated individual
    • z: matrix of standardized pollution exposures
    • n: number of simulated individuals
    • m: number of exposure time periods (e.g., weeks of pregnancy)
    • p: number of columns in the covariate design matrix
    • alpha_true: vector of "true" critical window locations/magnitudes (i.e., the ground truth that we want to estimate)

    Code: We provide R statistical software code ("CWVS_LMC.txt") to fit the linear model of coregionalization (LMC) version of the Critical Window Variable Selection (CWVS) method developed in the manuscript. Once the "Simulated_Dataset.RData" workspace has been loaded into R, this code can be used to identify/estimate critical windows of susceptibility and posterior marginal inclusion probabilities. We also provide R code ("Results_Summary.txt") to summarize and plot the identified/estimated critical windows and posterior marginal inclusion probabilities (similar to the plots shown in the manuscript), once the "CWVS_LMC.txt" code has completed.

    Required R packages:
    • For "CWVS_LMC.txt": msm (sampling from the truncated normal distribution), mnormt (sampling from the multivariate normal distribution), BayesLogit (sampling from the Polya-Gamma distribution)
    • For "Results_Summary.txt": plotrix (plotting the posterior means and credible intervals)

    Reproducibility: The data and code can be used to identify/estimate critical windows from one of the actual simulated datasets generated under setting E4 from the presented simulation study. How to use the information:
    • Load the "Simulated_Dataset.RData" workspace
    • Run the code contained in "CWVS_LMC.txt"
    • Once the "CWVS_LMC.txt" code is complete, run "Results_Summary.txt"

    Data: The data used in the application section of the manuscript consist of geocoded birth records from the North Carolina State Center for Health Statistics, 2005-2008. In the simulation study section of the manuscript, we simulate synthetic data that closely match some of the key features of the birth certificate data while maintaining confidentiality of any actual pregnant women. Due to the highly sensitive and identifying information contained in the birth certificate data (including latitude/longitude and address of residence at delivery), we are unable to make the data from the application section publicly available. However, we make one of the simulated datasets available for any reader interested in applying the method to realistic simulated birth records data. This also allows the user to become familiar with the required inputs of the model, how the data should be structured, and what type of output is obtained. While we cannot provide the application data here, access to the North Carolina birth records can be requested through the North Carolina State Center for Health Statistics, and requires an appropriate data use agreement.

    This dataset is associated with the following publication: Warren, J., W. Kong, T. Luben, and H. Chang. Critical Window Variable Selection: Estimating the Impact of Air Pollution on Very Preterm Birth. Biostatistics. Oxford University Press, Oxford, UK, 1-30, (2019).
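
    A minimal sketch of the documented workflow (file names are from the description above; the working directory is an assumption):

      # Load the simulated objects: y, x, z, n, m, p, alpha_true
      load("Simulated_Dataset.RData")
      # The two code files are plain-text R scripts, so they can be sourced in order
      source("CWVS_LMC.txt")         # fit CWVS-LMC (requires msm, mnormt, BayesLogit)
      source("Results_Summary.txt")  # summarize/plot windows (requires plotrix)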

  11. confusion matrices (3 groups, z-standardized data, 1000 measurement points)

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated May 8, 2016
    Cite
    Sauer, Sebastian (2016). confusion matrices (3 groups, z-standardized data, 1000 measurement points) [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001573367
    Explore at:
    Dataset updated
    May 8, 2016
    Authors
    Sauer, Sebastian
    Description

    R object: a list of 3:
    List1: 1000 mean OOB errors of classification
    List2: 1000 confusion matrices
    List3: 1000 mtry values
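
    A minimal sketch of summarizing such an object once loaded into R (the name res is an assumption; the list structure is as described above):

      # res[[1]]: 1000 mean OOB errors; res[[2]]: 1000 confusion matrices; res[[3]]: 1000 mtry values
      mean(res[[1]])                            # average out-of-bag error across runs
      Reduce(`+`, res[[2]]) / length(res[[2]])  # element-wise mean confusion matrix
      table(res[[3]])                           # distribution of mtry values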

  12. Dataset related to article "A reference framework for standardization and...

    • zenodo.org
    Updated Jul 11, 2024
    Cite
    R.Levi; R.Levi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; M. Grimaldi; M. Grimaldi; R. Barbieri; L.S. Politi; L.S. Politi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; R. Barbieri (2024). Dataset related to article "A reference framework for standardization and harmonization of CT Radiomics features: the "CadAIver" analysis" [Dataset]. http://doi.org/10.5281/zenodo.10053247
    Explore at:
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    R.Levi; R.Levi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; M. Grimaldi; M. Grimaldi; R. Barbieri; L.S. Politi; L.S. Politi; M.Mollura; G. Savini; F. Garoli; M. Battaglia; A. Ammirabile; L.A. Cappellini; S. Superbi; R. Barbieri
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract

    Background

    In recent years, Radiomics features (RFs) have been developed to provide quantitative, standardized information about shape, density/intensity and texture patterns on radiological images. Several studies showed limitations in the reproducibility of RFs across different acquisition settings. To date, reproducibility studies using CT images mainly rely on phantoms, due to the hazards of patient exposure to X-rays. In this study we analyze the effects of CT acquisition parameters on RFs of lumbar vertebrae in a cadaveric donor.

    Methods

    112 unique CT acquisitions of a cadaveric trunk were performed on 3 different CT scanners, varying kV, mA, field of view and reconstruction kernel settings. Lumbar vertebrae were segmented through a deep learning convolutional neural network and RFs were computed. The effects of each protocol on each RF were assessed by univariate and multivariate Generalized Linear Models (GLM). Further, we compared the GLM to the ComBat algorithm in terms of efficiency in harmonizing CT images.

    Findings

    From the GLM, mA variation was not associated with alteration of RFs, whereas kV modification was associated with exponential variation of several RFs, including First Order (94.4%), GLCM (87.5%) and NGTDM (100%).

    Upon cross-validation, the ComBat algorithm obtained a mean R² higher than 0.90 for 1 RF (0.9%), whereas the GLM obtained a high R² for 21 RFs (19.6%), showing that the proposed GLM could harmonize acquisitions more effectively than ComBat.

    Interpretation

    This study represents the first attempt at describing the effects of CT acquisition parameters on bone RFs in a cadaveric donor. Our analyses showed that RFs can differ substantially with the variation of each acquisition parameter and across datasets obtained from different CT scanners. These differences can be minimized using the proposed GLM. The publicly available dataset and GLM could foster Radiomics-based research by increasing harmonization across CT protocols and vendors.

  13. Data from: cvs data file of Length-standardized surface area index and...

    • datasetcatalog.nlm.nih.gov
    • rs.figshare.com
    Updated Jan 17, 2021
    + more versions
    Cite
    Sato, Katsufumi; Akiyama, Yu; Ramp, Christian; Swift, René; Hall, Ailsa; López, Lucía Martina Martín; Narazaki, Tomoko; Aoki, Kagari; Iwata, Takashi; Pomeroy, Patrick; Kershaw, Joanna; Miller, Patrick J. O.; Bellot, Charlotte; Wensveen, Paul J.; Biuw, Martin; Isojunno, Saana (2021). cvs data file of Length-standardized surface area index and tissue body density for R script from Aerial photogrammetry and tag-derived tissue density reveal patterns of lipid-store body condition of humpback whales on their feeding grounds [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000743536
    Explore at:
    Dataset updated
    Jan 17, 2021
    Authors
    Sato, Katsufumi; Akiyama, Yu; Ramp, Christian; Swift, René; Hall, Ailsa; López, Lucía Martina Martín; Narazaki, Tomoko; Aoki, Kagari; Iwata, Takashi; Pomeroy, Patrick; Kershaw, Joanna; Miller, Patrick J. O.; Bellot, Charlotte; Wensveen, Paul J.; Biuw, Martin; Isojunno, Saana
    Description

    See electronic supplementary materials for details

  14. Naturalistic Neuroimaging Database

    • openneuro.org
    Updated Apr 20, 2021
    + more versions
    Cite
    Sarah Aliko; Jiawen Huang; Florin Gheorghiu; Stefanie Meliss; Jeremy I Skipper (2021). Naturalistic Neuroimaging Database [Dataset]. http://doi.org/10.18112/openneuro.ds002837.v2.0.0
    Explore at:
    Dataset updated
    Apr 20, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Sarah Aliko; Jiawen Huang; Florin Gheorghiu; Stefanie Meliss; Jeremy I Skipper
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    • The Naturalistic Neuroimaging Database (NNDb v2.0) contains datasets from 86 human participants doing the NIH Toolbox and then watching one of 10 full-length movies during functional magnetic resonance imaging (fMRI). The participants were all right-handed, native English speakers, with no history of neurological/psychiatric illnesses, no hearing impairments, unimpaired or corrected vision, and taking no medication. Each movie was stopped at 40-50 minute intervals or when participants asked for a break, resulting in 2-6 runs of BOLD-fMRI. A 10-minute high-resolution defaced T1-weighted anatomical MRI scan (MPRAGE) is also provided.
    • The NNDb V2.0 is now on Neuroscout, a platform for fast and flexible re-analysis of (naturalistic) fMRI studies. See: https://neuroscout.org/

    v2.0 Changes

    • Overview
      • We have replaced our own preprocessing pipeline with that implemented in AFNI’s afni_proc.py, thus changing only the derivative files. This introduces a fix for an issue with our normalization (i.e., scaling) step and modernizes and standardizes the preprocessing applied to the NNDb derivative files. We have done a bit of testing and have found that results in both pipelines are quite similar in terms of the resulting spatial patterns of activity but with the benefit that the afni_proc.py results are 'cleaner' and statistically more robust.
    • Normalization

      • Emily Finn and Clare Grall at Dartmouth and Rick Reynolds and Paul Taylor at AFNI discovered and showed us that the normalization procedure we used for the derivative files was less than ideal for timeseries runs of varying lengths. Specifically, the 3dDetrend flag -normalize makes 'the sum-of-squares equal to 1'. We had not thought through that an implication of this is that the resulting normalized timeseries amplitudes will be affected by run length, increasing as run length decreases (and maybe this should go in 3dDetrend’s help text). To demonstrate this, I wrote a version of 3dDetrend’s -normalize for R so you can see for yourselves by running the following code:
      # Generate a resting state (rs) timeseries (ts)
      # Install / load package to make fake fMRI ts
      # install.packages("neuRosim")
      library(neuRosim)
      # Generate a ts
      ts.rs <- simTSrestingstate(nscan=2000, TR=1, SNR=1)
      # 3dDetrend -normalize
      # R command version for 3dDetrend -normalize -polort 0 which normalizes by making "the sum-of-squares equal to 1"
      # Do for the full timeseries
      ts.normalised.long <- (ts.rs-mean(ts.rs))/sqrt(sum((ts.rs-mean(ts.rs))^2));
      # Do this again for a shorter version of the same timeseries
      ts.shorter.length <- length(ts.normalised.long)/4
      ts.normalised.short <- (ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))/sqrt(sum((ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))^2));
      # By looking at the summaries, it can be seen that the median values become larger
      summary(ts.normalised.long)
      summary(ts.normalised.short)
      # Plot results for the long and short ts
      # Truncate the longer ts for plotting only
      ts.normalised.long.made.shorter <- ts.normalised.long[1:ts.shorter.length]
      # Give the plot a title
      title <- "3dDetrend -normalize for long (blue) and short (red) timeseries";
      plot(x=0, y=0, main=title, xlab="", ylab="", xaxs='i', xlim=c(1,length(ts.normalised.short)), ylim=c(min(ts.normalised.short),max(ts.normalised.short)));
      # Add zero line
      lines(x=c(-1,ts.shorter.length), y=rep(0,2), col='grey');
      # 3dDetrend -normalize -polort 0 for long timeseries
      lines(ts.normalised.long.made.shorter, col='blue');
      # 3dDetrend -normalize -polort 0 for short timeseries
      lines(ts.normalised.short, col='red');
      
    • Standardization/modernization

      • The above individuals also encouraged us to implement the afni_proc.py script over our own pipeline. It introduces at least three additional improvements: First, we now use Bob’s @SSwarper to align our anatomical files with an MNI template (now MNI152_2009_template_SSW.nii.gz) and this, in turn, integrates nicely into the afni_proc.py pipeline. This seems to result in a generally better or more consistent alignment, though this is only a qualitative observation. Second, all the transformations / interpolations and detrending are now done in fewer steps compared to our pipeline. This is preferable because, e.g., there is less chance of inadvertently reintroducing noise back into the timeseries (see Lindquist, Geuter, Wager, & Caffo 2019). Finally, many groups are advocating using tools like fMRIPrep or afni_proc.py to increase standardization of analyses practices in our neuroimaging community. This presumably results in less error, less heterogeneity and more interpretability of results across studies. Along these lines, the quality control (‘QC’) html pages generated by afni_proc.py are a real help in assessing data quality and almost a joy to use.
    • New afni_proc.py command line

      • The following is the afni_proc.py command line that we used to generate blurred and censored timeseries files. The afni_proc.py tool comes with extensive help and examples. As such, you can quickly understand our preprocessing decisions by scrutinising the below. Specifically, the following command is most similar to Example 11 for ‘Resting state analysis’ in the help file (see https://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html):

        afni_proc.py \
          -subj_id "$sub_id_name_1" \
          -blocks despike tshift align tlrc volreg mask blur scale regress \
          -radial_correlate_blocks tcat volreg \
          -copy_anat anatomical_warped/anatSS.1.nii.gz \
          -anat_has_skull no \
          -anat_follower anat_w_skull anat anatomical_warped/anatU.1.nii.gz \
          -anat_follower_ROI aaseg anat freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
          -anat_follower_ROI aeseg epi freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
          -anat_follower_ROI fsvent epi freesurfer/SUMA/fs_ap_latvent.nii.gz \
          -anat_follower_ROI fswm epi freesurfer/SUMA/fs_ap_wm.nii.gz \
          -anat_follower_ROI fsgm epi freesurfer/SUMA/fs_ap_gm.nii.gz \
          -anat_follower_erode fsvent fswm \
          -dsets media_?.nii.gz \
          -tcat_remove_first_trs 8 \
          -tshift_opts_ts -tpattern alt+z2 \
          -align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
          -tlrc_base "$basedset" \
          -tlrc_NL_warp \
          -tlrc_NL_warped_dsets \
            anatomical_warped/anatQQ.1.nii.gz \
            anatomical_warped/anatQQ.1.aff12.1D \
            anatomical_warped/anatQQ.1_WARP.nii.gz \
          -volreg_align_to MIN_OUTLIER \
          -volreg_post_vr_allin yes \
          -volreg_pvra_base_index MIN_OUTLIER \
          -volreg_align_e2a \
          -volreg_tlrc_warp \
          -mask_opts_automask -clfrac 0.10 \
          -mask_epi_anat yes \
          -blur_to_fwhm -blur_size $blur \
          -regress_motion_per_run \
          -regress_ROI_PC fsvent 3 \
          -regress_ROI_PC_per_run fsvent \
          -regress_make_corr_vols aeseg fsvent \
          -regress_anaticor_fast \
          -regress_anaticor_label fswm \
          -regress_censor_motion 0.3 \
          -regress_censor_outliers 0.1 \
          -regress_apply_mot_types demean deriv \
          -regress_est_blur_epits \
          -regress_est_blur_errts \
          -regress_run_clustsim no \
          -regress_polort 2 \
          -regress_bandpass 0.01 1 \
          -html_review_style pythonic

        We used similar command lines to generate ‘blurred and not censored’ and the ‘not blurred and not censored’ timeseries files (described more fully below). We will provide the code used to make all derivative files available on our GitHub site (https://github.com/lab-lab/nndb).

      We made one choice above that is different enough from our original pipeline that it is worth mentioning here. Specifically, we have quite long runs, with the average being ~40 minutes, but this number can be variable (thus leading to the above issue with 3dDetrend’s -normalize). A discussion on the AFNI message board with one of our team (starting here: https://afni.nimh.nih.gov/afni/community/board/read.php?1,165243,165256#msg-165256) led to the suggestion that '-regress_polort 2' with '-regress_bandpass 0.01 1' be used for long runs. We had previously used only a variable polort with the suggested 1 + int(D/150) approach. Our new polort 2 + bandpass approach has the added benefit of working well with afni_proc.py.

      Which timeseries file you use is up to you, but I have been encouraged by Rick and Paul to include a sort of PSA about this. In Paul’s own words:

      * Blurred data should not be used for ROI-based analyses (and potentially not for ICA? I am not certain about standard practice).
      * Unblurred data for ISC might be pretty noisy for voxelwise analyses, since blurring should effectively boost the SNR of active regions (and even good alignment won't be perfect everywhere).
      * For uncensored data, one should be concerned about motion effects being left in the data (e.g., spikes in the data).
      * For censored data:
        * Performing ISC requires the users to unionize the censoring patterns during the correlation calculation.
        * If wanting to calculate power spectra or spectral parameters like ALFF/fALFF/RSFA etc. (which some people might do for naturalistic tasks still), then standard FT-based methods can't be used because sampling is no longer uniform. Instead, people could use something like 3dLombScargle+3dAmpToRSFC, which calculates power spectra (and RSFC params) based on a generalization of the FT that can handle non-uniform sampling, as long as the censoring pattern is mostly random and, say, only up to about 10-15% of the data.

      In sum, think very carefully about which files you use. If you find you need a file we have not provided, we can happily generate different versions of the timeseries upon request and can generally do so in a week or less.

    • Effect on results

      • From numerous tests on our own analyses, we have qualitatively found that results using our old vs the new afni_proc.py preprocessing pipeline do not change all that much in terms of general spatial patterns. There is, however, an
  15. Additional file 2 of Feature selection and causal analysis for microbiome...

    • springernature.figshare.com
    zip
    Updated Feb 8, 2024
    Cite
    Emily Goren; Chong Wang; Zhulin He; Amy M. Sheflin; Dawn Chiniquy; Jessica E. Prenni; Susannah Tringe; Daniel P. Schachtman; Peng Liu (2024). Additional file 2 of Feature selection and causal analysis for microbiome studies in the presence of confounding using standardization [Dataset]. http://doi.org/10.6084/m9.figshare.14921120.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 8, 2024
    Dataset provided by
    figshare
    Authors
    Emily Goren; Chong Wang; Zhulin He; Amy M. Sheflin; Dawn Chiniquy; Jessica E. Prenni; Susannah Tringe; Daniel P. Schachtman; Peng Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional file 2: R Code. R and R markdown code for all simulation studies and data analysis.

  16. Data from: Standardizing protocols for determining the cause of mortality in...

    • data.niaid.nih.gov
    • datasetcatalog.nlm.nih.gov
    • +1more
    zip
    Updated Jun 22, 2022
    Cite
    Bogdan Cristescu; Mark Elbroch; Tavis Forrester; Maximilian Allen; Derek Spitz; Christopher Wilmers; Heiko Wittmer (2022). Standardizing protocols for determining the cause of mortality in wildlife studies [Dataset]. http://doi.org/10.7291/D1GD50
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 22, 2022
    Dataset provided by
    Victoria University of Wellington
    Oregon Department of Fish and Wildlife
    University of Illinois Urbana-Champaign
    University of California, Santa Cruz
    Panthera Corporation
    Authors
    Bogdan Cristescu; Mark Elbroch; Tavis Forrester; Maximilian Allen; Derek Spitz; Christopher Wilmers; Heiko Wittmer
    License

    CC0 1.0 Universal (https://spdx.org/licenses/CC0-1.0.html)

    Description

    Mortality site investigations of telemetered wildlife are important for cause-specific survival analyses and understanding underlying causes of observed population dynamics. Yet eroding ecoliteracy and a lack of quality control in data collection can lead researchers to make incorrect conclusions, which may negatively impact management decisions for wildlife populations. We reviewed a random sample of 50 peer-reviewed studies published between 2000 and 2019 on survival and cause-specific mortality of ungulates monitored with telemetry devices. This concise review revealed extensive variation in reporting of field procedures, with many studies omitting critical information for cause of mortality inference. Field protocols used to investigate mortality sites and ascertain the cause of mortality are often minimally described and frequently fail to address how investigators dealt with uncertainty. We outline a step-by-step procedure for mortality site investigations of telemetered ungulates, including evidence that should be documented in the field. Specifically, we highlight data that can be useful to differentiate predation from scavenging and more conclusively identify the predator species that killed the ungulate. We also outline how uncertainty in identifying the cause of mortality could be acknowledged and reported. We demonstrate the importance of rigorous protocols and prompt site investigations using data from our 5-year study on survival and cause-specific mortality of telemetered mule deer (Odocoileus hemionus) in northern California. Over the course of our study, we visited mortality sites of neonates (n = 91) and adults (n = 23) to ascertain the cause of mortality. Rapid site visitations significantly improved the successful identification of the cause of mortality and confidence levels for neonates. We discuss the need for rigorous and standardized protocols that include measures of confidence for mortality site investigations. We invite reviewers and journal editors to encourage authors to provide supportive information associated with the identification of causes of mortality, including uncertainty.

    Methods: Three datasets on neonate and adult mule deer (Odocoileus hemionus) mortality site investigations were generated through ecological fieldwork in northern California, USA (2015-2020). The datasets in Dryad are: Does.csv (for use with R), Fawns.csv (for use with R), and Full_data.xlsx (which combines the 2 .csv files and includes additional information). Two R code files associated with the 2 .csv datasets are available in Zenodo: RScript_Does.R and RScript_Fawns.R. The data were analyzed using RStudio v.1.1.447 and a variety of packages, including broom, caret, ciTools, effects, lattice, modEvA, nnet, and tidyverse. The data are associated with the publication "Standardizing protocols for determining the cause of mortality in wildlife studies" in Ecology and Evolution.
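
    A minimal sketch of the kind of cause-of-mortality model the listed packages support, using nnet's multinomial regression (the column names are assumptions, not the archived scripts' actual variables):

      library(nnet)
      fawns <- read.csv("Fawns.csv")  # file name from the dataset description
      # Hypothetical predictors: days until site investigation, age class
      fit <- multinom(cause ~ days_to_investigation + age_class, data = fawns)
      summary(fit)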

  17. Data from: Development of PainFace software to simplify, standardize, and...

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Feb 14, 2024
    Cite
    Major, Rami M.; Zylka, Mark; Ryan, Dan F.; Snyder, Magdalyn G.; Nesbitt, Jacob J.; Pudipeddi, Samhitha S.; Park, Sang-Kyoon; Wu, Guorong; Trocinski, Abigail K.; Mullen, Zachary J.; Garris, Rosanna L.; Vanden, Kelly A.; Lopez, Josh E.; McCoy, Eric; Mogil, Jeffrey S.; Krantz, James L.; Lima, Lucas V.; Austin, Jean-Sebastien; Shah, Sanya; Hu, Wenxin; Taylor-Blake, Bonnie; Patel, Rahul P; Klein, Morgan R.; Bazick, Hannah O.; Sotocinal, Susana G.; Kashlan, Adam D. (2024). Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001448360
    Explore at:
    Dataset updated
    Feb 14, 2024
    Authors
    Major, Rami M.; Zylka, Mark; Ryan, Dan F.; Snyder, Magdalyn G.; Nesbitt, Jacob J.; Pudipeddi, Samhitha S.; Park, Sang-Kyoon; Wu, Guorong; Trocinski, Abigail K.; Mullen, Zachary J.; Garris, Rosanna L.; Vanden, Kelly A.; Lopez, Josh E.; McCoy, Eric; Mogil, Jeffrey S.; Krantz, James L.; Lima, Lucas V.; Austin, Jean-Sebastien; Shah, Sanya; Hu, Wenxin; Taylor-Blake, Bonnie; Patel, Rahul P; Klein, Morgan R.; Bazick, Hannah O.; Sotocinal, Susana G.; Kashlan, Adam D.
    Description

    These files contain the validation data for the development of the PainFace software, an automated mouse grimace platform.

  18. Data Sheet 4_Italian standardization of the BPSD-SINDEM scale for the...

    • frontiersin.figshare.com
    • datasetcatalog.nlm.nih.gov
    docx
    Updated Nov 21, 2024
    + more versions
    Cite
    Federico Emanuele Pozzi; Fabrizia D'Antonio; Marta Zuffi; Oriana Pelati; Davide Vernè; Massimiliano Panigutti; Margherita Alberoni; Maria Grazia Di Maggio; Alfredo Costa; Sindem BPSD Study Group; Lucio Tremolizzo; Elisabetta Farina (2024). Data Sheet 4_Italian standardization of the BPSD-SINDEM scale for the assessment of neuropsychiatric symptoms in persons with dementia.docx [Dataset]. http://doi.org/10.3389/fneur.2024.1455787.s005
    Explore at:
    Available download formats: docx
    Dataset updated
    Nov 21, 2024
    Dataset provided by
    Frontiers
    Authors
    Federico Emanuele Pozzi; Fabrizia D'Antonio; Marta Zuffi; Oriana Pelati; Davide Vernè; Massimiliano Panigutti; Margherita Alberoni; Maria Grazia Di Maggio; Alfredo Costa; Sindem BPSD Study Group; Lucio Tremolizzo; Elisabetta Farina
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: Behavioral and Psychological Symptoms of Dementia (BPSD) are a heterogeneous set of psychological reactions and abnormal behaviors in people with dementia (PwD). Current assessment tools, like the Neuropsychiatric Inventory (NPI), only rely on caregiver assessment of BPSD and are therefore prone to bias.

    Materials and methods: A multidisciplinary team developed the BPSD-SINDEM scale as a three-part instrument, with two questionnaires administered to the caregiver (evaluating BPSD extent and caregiver distress) and a clinician-rated observational scale. This first instrument was tested on a sample of 33 dyads of PwD and their caregivers, and the results were qualitatively appraised in order to revise the tool through a modified Delphi method. During this phase, the wording of the questions was slightly changed, and the distress scale was changed into a coping scale based on the high correlation between extent and distress (r = 0.94). The final version consisted of three 17-item subscales, evaluating BPSD extent and caregiver coping, and the unchanged clinician-rated observational scale.

    Results: This tool was quantitatively validated in a sample of 208 dyads. It demonstrated good concurrent validity, with the extent subscale correlating positively with NPI scores (r = 0.64, p

  19. Variables included in study, by domain.

    • plos.figshare.com
    xls
    Updated Jun 7, 2023
    + more versions
    Cite
    Erica S. Spatz; Susannah M. Bernheim; Leora I. Horwitz; Jeph Herrin (2023). Variables included in study, by domain. [Dataset]. http://doi.org/10.1371/journal.pone.0240222.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Erica S. Spatz; Susannah M. Bernheim; Leora I. Horwitz; Jeph Herrin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Variables included in study, by domain.

  20. Supplementary Material for: Nomenclature of extracorporeal blood...

    • datasetcatalog.nlm.nih.gov
    • karger.figshare.com
    Updated Nov 3, 2023
    Cite
    Mehta, R.; Reis, T.; Ostermann, M.; Zarbock, A.; Ankawi, G.; on behalf of the Nomenclature Standardization Faculty; Rimmelé, T.; Madarasu, R.; Prowle, J.; Kashani, K.; Husain-Syed, F.; Dolan, K.; Cantaluppi, V.; Ronco, C.; Kellum, J. A. (2023). Supplementary Material for: Nomenclature of extracorporeal blood purification therapies for acute indications – The Nomenclature Standardization Conference [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000951139
    Explore at:
    Dataset updated
    Nov 3, 2023
    Authors
    Mehta, R.; Reis, T.; Ostermann, M.; Zarbock, A.; Ankawi, G.; on behalf of the Nomenclature Standardization Faculty; Rimmelé, T.; Madarasu, R.; Prowle, J.; Kashani, K.; Husain-Syed, F.; Dolan, K.; Cantaluppi, V.; Ronco, C.; Kellum, J. A.
    Description

    The development of new extracorporeal blood purification (EBP) techniques has led to increased application in clinical practice but also inconsistencies in nomenclature and misunderstanding. In November 2022, an international consensus conference was held to establish consensus on the terminology of EBP therapies. It was agreed to define EBP therapies as techniques that use an extracorporeal circuit to remove and/or modulate circulating substances to achieve physiological homeostasis, including support of the function of specific organs and/or detoxification. Specific acute EBP techniques include renal replacement therapy, isolated ultrafiltration, hemoadsorption and plasma therapies, all of which can be applied in isolation and combination. This paper summarises the proposed nomenclature of EBP therapies and serves as a framework for clinical practice and future research.
