4 datasets found
  1. Image database to supplement "Paulus, F.M. et al. Pain empathy but not surprise in response to unexpected action explains arousal related pupil dilation." (VIPER database)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Marx, Svenja (2020). Image database to supplement "Paulus, F.M. et al. Pain empathy but not surprise in response to unexpected action explains arousal related pupil dilation." (VIPER database) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_322426
    Dataset provided by
    Einhäuser, Wolfgang
    Walper, Daniel
    Hamschmidt, Lisanne
    Rademacher, Lena
    Paulus, Frieder Michel
    Marx, Svenja
    Müller-Pinzler, Laura
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This folder contains the 282 images of the "VIPER" database (visually-induced pain empathy repository) along with ratings of 24 independent raters. Details are described in the following publication:

    Paulus, F.M., Müller-Pinzler, L., Walper, D., Marx, S., Hamschmidt, L., Rademacher, L., Krach, S., Einhäuser, W. Pain empathy but not surprise in response to unexpected action explains arousal related pupil dilation.

    The material can be used for scientific purposes, provided this reference is appropriately cited. Please check the download site to get the up-to-date reference at the time of your publication.

    Conditions are identified by the filename of the image, which consists of the scenario number (1-83) and one of the following condition identifiers:

    • pain
    • neut(ral)
    • mism(atch)
    • tool

    Note that the tool and mismatch conditions do not exist for all scenarios.

    The file ratings_viper.csv contains the ratings. Each line corresponds to one image, with the following columns:

    • Column 1: filename of the image
    • Column 2: scenario number
    • Column 3: condition
    • Columns 4 through 27: ratings of the 24 individual raters (between 0 and 4; NaN if no rating was recorded)
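
    A minimal loading sketch in Python, assuming the CSV has no header row (column meanings are taken from the description above; the column names below are illustrative, not part of the dataset):

        import pandas as pd

        # Column layout per the description: filename, scenario, condition,
        # then one column per rater (24 raters, values 0-4, NaN if missing).
        cols = ["filename", "scenario", "condition"] + [f"rater_{i:02d}" for i in range(1, 25)]
        ratings = pd.read_csv("ratings_viper.csv", header=None, names=cols)

        # Mean rating per image across raters, ignoring missing (NaN) entries.
        rater_cols = [c for c in cols if c.startswith("rater_")]
        ratings["mean_rating"] = ratings[rater_cols].mean(axis=1)

        # Average rating per condition (pain / neut / mism / tool).
        print(ratings.groupby("condition")["mean_rating"].mean())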

    The file thumbnail_viper.jpg provides an overview of all images in the database.

    For ease of download, the images are available both as a tar archive (allImages_viper.tar) and as individual files.
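
    If you fetch the archive, it can be unpacked with Python's standard library, for example:

        import tarfile

        # Extract the 282 images into a local directory.
        with tarfile.open("allImages_viper.tar") as tar:
            tar.extractall(path="viper_images")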

  2. Replication Data for: Improving Objective Wound Assessment: "Fully-automated wound tissue segmentation using Deep Learning on mobile devices"

    • borealisdata.ca
    Updated Mar 14, 2022
    Cite
    Jose Ramirez Garcia Luna; Dhanesh Ramachandram; Robert DJ Fraser; Justin Allport (2022). Replication Data for: Improving Objective Wound Assessment: "Fully-automated wound tissue segmentation using Deep Learning on mobile devices" [Dataset]. http://doi.org/10.5683/SP3/8C4FDV
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset provided by
    Borealis
    Authors
    Jose Ramirez Garcia Luna; Dhanesh Ramachandram; Robert DJ Fraser; Justin Allport
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Background: The composition of tissue types present within a wound is a useful indicator of its healing progression and could be helpful in guiding its treatment. Additionally, this measure is clinically used in wound healing tools (e.g. BWAT) to assess risk and recommend treatment. However, the identification of wound tissue and the estimation of its relative composition is highly subjective and variable. This results in incorrect assessments being reported, leading to downstream impacts including inappropriate dressing selection, failure to identify wounds at risk of not healing, or failure to make appropriate referrals to specialists.

    Objective: To measure inter- and intra-rater variability in manual tissue segmentation and quantification among a cohort of wound care clinicians. To determine if an objective assessment of tissue types (i.e., size, amount) can be achieved using a deep convolutional neural network that predicts wound tissue types. The proposed machine learning model's performance is reported in terms of mean intersection over union (mIoU) between model predictions and the ground truth labels. Finally, to compare the model's performance with wound tissue identification by a cohort of wound care clinicians.

    Methods: A dataset of 58 anonymized wound images of various types of chronic wounds from Swift Medical's Wound Database was used to conduct the inter-rater and intra-rater agreement study. The dataset was split into 3 subsets, with 50% overlap between subsets to measure intra-rater agreement. Four different tissue types (epithelial, granulation, slough and eschar) within the wound bed were independently labelled by the 5 wound clinicians using a browser-based image annotation tool. Each subset was labelled at one-week intervals. Inter-rater and intra-rater agreement were computed. Next, two separate deep convolutional neural network architectures were developed for wound segmentation and tissue segmentation, used in sequence in the proposed workflow. These models were trained using 465,187 and 17,000 wound image-label pairs respectively. This is by far the largest and most diverse reported dataset of labelled wound images used for training deep learning models for wound and wound tissue segmentation, which allows the models to be robust, unbiased towards skin tones, and able to generalize well to unseen data. The architectures were designed to be fast and nimble enough to run in near real-time on mobile devices.

    Results: We observed considerable variability when a cohort of wound clinicians was tasked to label the different tissue types within the wound using a browser-based image annotation tool. We report poor to moderate inter-rater agreement in identifying tissue types in chronic wound images. A very poor Krippendorff's alpha of 0.014 was observed for inter-rater agreement when identifying epithelization, while granulation was most consistently identified by the clinicians. The intra-rater ICC(3,1) (intra-class correlation), however, indicates that raters are relatively consistent when labelling the same image multiple times over a period of time. Our deep learning models achieved a mean intersection over union (mIoU) of 0.8644 for wound segmentation and 0.7192 for tissue segmentation. A cohort of wound clinicians, by consensus, rated 91% of the tissue segmentation results to be between fair and good in terms of tissue identification and segmentation quality.

    Conclusions: Our inter-rater agreement study validates that clinicians may exhibit considerable variability when identifying and visually estimating tissue proportion within the wound bed. The proposed deep learning model provides objective tissue identification and measurements to assist clinicians in documenting the wound more accurately. Our solution works on off-the-shelf mobile devices and was trained with the largest and most diverse chronic wound dataset ever reported, leading to a robust model when deployed. The proposed solution brings us a step closer to more accurate wound documentation and may lead to improved healing outcomes when deployed at scale.
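
    As a point of reference for the reported metric, a minimal NumPy sketch of a standard per-class mean-IoU computation over integer label masks (a generic illustration, not the authors' code) could look like this:

        import numpy as np

        def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
            """Mean IoU over the classes present in prediction or ground truth."""
            ious = []
            for c in range(num_classes):
                pred_c = pred == c
                target_c = target == c
                union = np.logical_or(pred_c, target_c).sum()
                if union == 0:  # class absent from both masks: skip it
                    continue
                intersection = np.logical_and(pred_c, target_c).sum()
                ious.append(intersection / union)
            return float(np.mean(ious))

        # Toy example with the four tissue classes named above
        # (0 = epithelial, 1 = granulation, 2 = slough, 3 = eschar):
        pred = np.array([[0, 1, 1], [2, 2, 3]])
        target = np.array([[0, 1, 2], [2, 2, 3]])
        print(mean_iou(pred, target, num_classes=4))  # ~0.792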

  3. Composition of final image database.

    • plos.figshare.com
    xls
    Updated Jun 9, 2023
    Cite
    Gabriel Carreira Lencioni; Rafael Vieira de Sousa; Edson José de Souza Sardinha; Rodrigo Romero Corrêa; Adroaldo José Zanella (2023). Composition of final image database. [Dataset]. http://doi.org/10.1371/journal.pone.0258672.t001
    Explore at:
    Available download formats: xls
    Dataset provided by
    PLOS ONE
    Authors
    Gabriel Carreira Lencioni; Rafael Vieira de Sousa; Edson José de Souza Sardinha; Rodrigo Romero Corrêa; Adroaldo José Zanella
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Composition of final image database.

  4. Montreal Pain and Affective Face Clips

    • scicrunch.org
    Updated Oct 17, 2019
    Cite
    (2019). Montreal Pain and Affective Face Clips [Dataset]. http://identifiers.org/RRID:SCR_015497
    Description

    Collection of standardized stimuli of dynamic, prototypical facial expressions of pain, the six basic emotions, and a neutral display. The set can be used in cognitive, social, clinical, and neuroscience studies on facial expressions and their social communicative functions.

