51 datasets found
  1. IGF-1 and Chondroitinase ABC Augment Nerve Regeneration after Vascularized...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    tiff
    Updated Jun 2, 2023
    Cite
    Nataliya V. Kostereva; Yong Wang; Derek R. Fletcher; Jignesh V. Unadkat; Jonas T. Schnider; Chiaki Komatsu; Yang Yang; Donna B. Stolz; Michael R. Davis; Jan A. Plock; Vijay S. Gorantla (2023). IGF-1 and Chondroitinase ABC Augment Nerve Regeneration after Vascularized Composite Limb Allotransplantation [Dataset]. http://doi.org/10.1371/journal.pone.0156149
    Explore at:
    Available download formats: tiff
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Nataliya V. Kostereva; Yong Wang; Derek R. Fletcher; Jignesh V. Unadkat; Jonas T. Schnider; Chiaki Komatsu; Yang Yang; Donna B. Stolz; Michael R. Davis; Jan A. Plock; Vijay S. Gorantla
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Impaired nerve regeneration and inadequate recovery of motor and sensory function following peripheral nerve repair remain the most significant hurdles to optimal functional and quality of life outcomes in vascularized composite allotransplantation (VCA). Neurotherapeutics such as Insulin-like Growth Factor-1 (IGF-1) and chondroitinase ABC (CH) have shown promise in augmenting or accelerating nerve regeneration in experimental models and may have potential in VCA. The aim of this study was to evaluate the efficacy of low-dose IGF-1, CH, or their combination (IGF-1+CH) on nerve regeneration following VCA. We used an allogeneic rat hind limb VCA model maintained on low-dose FK506 (tacrolimus) therapy to prevent rejection. Experimental animals received neurotherapeutics administered intra-operatively as multiple intraneural injections. The IGF-1 and IGF-1+CH groups received daily IGF-1 (intramuscular and intraneural injections). Histomorphometry and immunohistochemistry were used to evaluate outcomes at five weeks. Overall, compared to controls, all experimental groups showed improvements in nerve and muscle (gastrocnemius) histomorphometry. The IGF-1 group demonstrated superior distal regeneration as confirmed by Schwann cell (SC) immunohistochemistry, as well as some degree of extrafascicular regeneration. IGF-1 and CH effectively promote nerve regeneration after VCA as confirmed by histomorphometric and immunohistochemical outcomes.

  2. Wallhack1.8k Dataset | Data Augmentation Techniques for Cross-Domain WiFi...

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    • +2 more
    Updated Apr 4, 2025
    Cite
    Strohmayer, Julian; Kampel, Martin (2025). Wallhack1.8k Dataset | Data Augmentation Techniques for Cross-Domain WiFi CSI-Based Human Activity Recognition [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8188998
    Explore at:
    Dataset updated
    Apr 4, 2025
    Dataset provided by
    Computer Vision Lab, TU Wien
    Authors
    Strohmayer, Julian; Kampel, Martin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the Wallhack1.8k dataset for WiFi-based long-range activity recognition in Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS)/Through-Wall scenarios, as proposed in [1,2], as well as the CAD models (of 3D-printable parts) of the WiFi systems proposed in [2].

    PyTorch Dataloader

    A minimal PyTorch dataloader for the Wallhack1.8k dataset is provided at: https://github.com/StrohmayerJ/wallhack1.8k

    Dataset Description

    The Wallhack1.8k dataset comprises 1,806 CSI amplitude spectrograms (and raw WiFi packet time series) corresponding to three activity classes: "no presence," "walking," and "walking + arm-waving." WiFi packets were transmitted at a frequency of 100 Hz, and each spectrogram captures a temporal context of approximately 4 seconds (400 WiFi packets).

    To assess cross-scenario and cross-system generalization, WiFi packet sequences were collected in LoS and through-wall (NLoS) scenarios, utilizing two different WiFi systems (BQ: biquad antenna and PIFA: printed inverted-F antenna). The dataset is structured accordingly:

    LOS/BQ/ <- WiFi packets collected in the LoS scenario using the BQ system

    LOS/PIFA/ <- WiFi packets collected in the LoS scenario using the PIFA system

    NLOS/BQ/ <- WiFi packets collected in the NLoS scenario using the BQ system

    NLOS/PIFA/ <- WiFi packets collected in the NLoS scenario using the PIFA system

    These directories contain the raw WiFi packet time series (see Table 1). Each row represents a single WiFi packet with the complex CSI vector H being stored in the "data" field and the class label being stored in the "class" field. H is of the form [I, R, I, R, ..., I, R], where two consecutive entries represent imaginary and real parts of complex numbers (the Channel Frequency Responses of subcarriers). Taking the absolute value of H (e.g., via numpy.abs(H)) yields the subcarrier amplitudes A.
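    As a minimal sketch of this layout (the exact serialization of the "data" field is an assumption here; it is taken to be a bracketed list of interleaved values):

    import ast
    import numpy as np
    import pandas as pd

    # Hypothetical sketch: read one raw packet file and recover subcarrier amplitudes.
    df = pd.read_csv("LOS/BQ/w1.csv")                    # path follows the directory layout above
    iq = np.array(ast.literal_eval(df.iloc[0]["data"]))  # interleaved [I, R, I, R, ...]
    H = iq[1::2] + 1j * iq[0::2]                         # pair (imaginary, real) entries into complex H
    A = np.abs(H)                                        # subcarrier amplitudes
    label = df.iloc[0]["class"]                          # activity class label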

    To extract the 52 L-LTF subcarriers used in [1], the following indices of A are to be selected:

    52 L-LTF subcarriers

    csi_valid_subcarrier_index = []
    csi_valid_subcarrier_index += [i for i in range(6, 32)]
    csi_valid_subcarrier_index += [i for i in range(33, 59)]

    Additional 56 HT-LTF subcarriers can be selected via:

    56 HT-LTF subcarriers

    csi_valid_subcarrier_index += [i for i in range(66, 94)]
    csi_valid_subcarrier_index += [i for i in range(95, 123)]

    For more details on subcarrier selection, see ESP-IDF (Section Wi-Fi Channel State Information) and esp-csi.
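    Applying the selection is then a single indexing step, e.g. (continuing the sketch above):

    A_sel = np.asarray(A)[csi_valid_subcarrier_index]  # amplitudes of the selected subcarriers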

    Extracted amplitude spectrograms, together with the corresponding label files of the train/validation/test split ("trainLabels.csv," "validationLabels.csv," and "testLabels.csv"), can be found in the spectrograms/ directory.

    The columns in the label files correspond to the following: [Spectrogram index, Class label, Room label]

    Spectrogram index: [0, ..., n]

    Class label: [0,1,2], where 0 = "no presence", 1 = "walking", and 2 = "walking + arm-waving."

    Room label: [0,1,2,3,4,5], where labels 1-5 correspond to the room number in the NLoS scenario (see Fig. 3 in [1]). The label 0 corresponds to no room and is used for the "no presence" class.
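    A minimal sketch of reading a label file under these conventions (whether the CSVs carry a header row is an assumption; positional columns are used here):

    import pandas as pd

    # Hypothetical sketch: load the training labels and select one activity class.
    labels = pd.read_csv("spectrograms/trainLabels.csv", header=None,
                         names=["spectrogram_index", "class_label", "room_label"])
    walking = labels[labels["class_label"] == 1]  # all "walking" samples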

    Dataset Overview:

    Table 1: Raw WiFi packet sequences.

    Scenario | System | "no presence" / label 0 | "walking" / label 1 | "walking + arm-waving" / label 2 | Total
    LoS | BQ | b1.csv | w1.csv, w2.csv, w3.csv, w4.csv, w5.csv | ww1.csv, ww2.csv, ww3.csv, ww4.csv, ww5.csv | 11
    LoS | PIFA | b1.csv | w1.csv, w2.csv, w3.csv, w4.csv, w5.csv | ww1.csv, ww2.csv, ww3.csv, ww4.csv, ww5.csv | 11
    NLoS | BQ | b1.csv | w1.csv, w2.csv, w3.csv, w4.csv, w5.csv | ww1.csv, ww2.csv, ww3.csv, ww4.csv, ww5.csv | 11
    NLoS | PIFA | b1.csv | w1.csv, w2.csv, w3.csv, w4.csv, w5.csv | ww1.csv, ww2.csv, ww3.csv, ww4.csv, ww5.csv | 11
    Total | | 4 | 20 | 20 | 44

    Table 2: Sample/Spectrogram distribution across activity classes in Wallhack1.8k.

    Scenario | System | "no presence" / label 0 | "walking" / label 1 | "walking + arm-waving" / label 2 | Total
    LoS | BQ | 149 | 154 | 155 | 458
    LoS | PIFA | 149 | 160 | 152 | 461
    NLoS | BQ | 148 | 150 | 152 | 450
    NLoS | PIFA | 143 | 147 | 147 | 437
    Total | | 589 | 611 | 606 | 1,806

    Download and Use

    This data may be used for non-commercial research purposes only. If you publish material based on this data, we request that you include a reference to one of our papers [1,2].

    [1] Strohmayer, Julian, and Martin Kampel. (2024). “Data Augmentation Techniques for Cross-Domain WiFi CSI-Based Human Activity Recognition”, In IFIP International Conference on Artificial Intelligence Applications and Innovations (pp. 42-56). Cham: Springer Nature Switzerland, doi: https://doi.org/10.1007/978-3-031-63211-2_4.

    [2] Strohmayer, Julian, and Martin Kampel, “Directional Antenna Systems for Long-Range Through-Wall Human Activity Recognition,” 2024 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 2024, pp. 3594-3599, doi: https://doi.org/10.1109/ICIP51287.2024.10647666.

    BibTeX citations:

    @inproceedings{strohmayer2024data,
      title={Data Augmentation Techniques for Cross-Domain WiFi CSI-Based Human Activity Recognition},
      author={Strohmayer, Julian and Kampel, Martin},
      booktitle={IFIP International Conference on Artificial Intelligence Applications and Innovations},
      pages={42--56},
      year={2024},
      organization={Springer}
    }

    @INPROCEEDINGS{10647666,
      author={Strohmayer, Julian and Kampel, Martin},
      booktitle={2024 IEEE International Conference on Image Processing (ICIP)},
      title={Directional Antenna Systems for Long-Range Through-Wall Human Activity Recognition},
      year={2024},
      pages={3594-3599},
      keywords={Visualization;Accuracy;System performance;Directional antennas;Directive antennas;Reflector antennas;Sensors;Human Activity Recognition;WiFi;Channel State Information;Through-Wall Sensing;ESP32},
      doi={10.1109/ICIP51287.2024.10647666}
    }

  3. Data from: Fast and accurate estimation of species-specific diversification...

    • datadryad.org
    • search.dataone.org
    • +1 more
    zip
    Updated Nov 3, 2020
    Cite
    Odile Maliet; Hélène Morlon (2020). Fast and accurate estimation of species-specific diversification rates using data augmentation [Dataset]. http://doi.org/10.5061/dryad.tb2rbnzzh
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 3, 2020
    Dataset provided by
    Dryad
    Authors
    Odile Maliet; Hélène Morlon
    Time period covered
    Nov 3, 2020
    Description

    Diversification rates vary across species as a response to various factors, including environmental conditions and species-specific features. Phylogenetic models that allow accounting for and quantifying this heterogeneity in diversification rates have proven particularly useful for understanding clades' diversification. Recently, we introduced the cladogenetic diversification rate shift model (ClaDS), which allows inferring subtle rate variations across lineages. Here we present a new inference technique for this model that considerably reduces computation time through the use of data augmentation, and provide an implementation of this method in Julia. In addition to drastically reducing computation time, this new inference approach provides a posterior distribution of the augmented data, that is, the tree with extinct and unsampled lineages as well as associated diversification rates. In particular, this allows extracting the distribution through time of both the mean rate and the number...

  4. Data augmentation for Multi-Classification of Non-Functional Requirements -...

    • portalinvestigacion.udc.gal
    • investigacion.usc.es
    • +1 more
    Updated 2024
    + more versions
    Cite
    Limaylla-Lunarejo, María-Isabel; Condori-Fernandez, Nelly; R. Luaces, Miguel (2024). Data augmentation for Multi-Classification of Non-Functional Requirements - Dataset [Dataset]. https://portalinvestigacion.udc.gal/documentos/668fc40fb9e7c03b01bd38a0
    Explore at:
    Dataset updated
    2024
    Authors
    Limaylla-Lunarejo, María-Isabel; Condori-Fernandez, Nelly; R. Luaces, Miguel
    Description

    There are four datasets:

    1. Dataset_structure indicates the structure of the datasets, such as column name, type, and value.

    2. Spanish_promise_exp_nfr_train and Spanish_promise_exp_nfr_test are the non-functional requirements of the Promise_exp[1] dataset translated into Spanish.

    3. Balanced_promise_exp_nfr_train is the balanced version of Spanish_promise_exp_nfr_train, in which data augmentation with ChatGPT was applied to add requirements to under-represented categories, and random undersampling was used to remove requirements from over-represented ones.

    The labeling schema, similar to PROMISE NFR, includes the following categories: A: Availability, PO: Portability, L: Legal, FT: Fault tolerance, SC: Scalability, MN: Maintainability, LF: Look and feel, PE: Performance, O: Operational, US: Usability, and SE: Security.

  5. syntactic-augmentation-nli

    • huggingface.co
    Updated Aug 18, 2023
    Cite
    metaeval (2023). syntactic-augmentation-nli [Dataset]. https://huggingface.co/datasets/metaeval/syntactic-augmentation-nli
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 18, 2023
    Dataset authored and provided by
    metaeval
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    https://github.com/Aatlantise/syntactic-augmentation-nli/tree/master/datasets

    @inproceedings{min-etal-2020-syntactic,
      title = "Syntactic Data Augmentation Increases Robustness to Inference Heuristics",
      author = "Min, Junghyun and McCoy, R. Thomas and Das, Dipanjan and Pitler, Emily and Linzen, Tal",
      booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
      month = jul,
      year = "2020",
      address =…

    See the full description on the dataset page: https://huggingface.co/datasets/metaeval/syntactic-augmentation-nli.

  6. Data from: Partially incorrect fossil data augment analyses of discrete...

    • datadryad.org
    zip
    Updated Jun 20, 2016
    + more versions
    Cite
    Mark N. Puttick (2016). Partially incorrect fossil data augment analyses of discrete trait evolution in living species [Dataset]. http://doi.org/10.5061/dryad.v66b2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 20, 2016
    Dataset provided by
    Dryad
    Authors
    Mark N. Puttick
    Time period covered
    Jun 18, 2016
    Description

    runSimulations: This file contains R scripts to simulate the discrete traits and phylogenies used in the analyses.

  7. [Data S2] Microbiota dictate T cell clonal selection to augment...

    • zenodo.org
    Updated Apr 28, 2024
    Cite
    Albert C. Yeh; Motoko Koyama; Olivia Waltner; Simone A. Minnie; Julie R. Boiko; Tamer Shabaneh; Shuichiro Takahashi; Kathleen S. Ensbey; Christine R. Schmidt; Samuel R.W. Legg; Tomoko Sekiguchi; Ethan Nelson; Andrew R. Stevens; Shruti S. Bhise; Tracy Goodpaster; Saranya Chakka; Scott N. Furlan; Kate A. Markey; Marie E. Bleakley; Charles O. Elson; Philip H. Bradley; Geoffrey R. Hill (2024). [Data S2] Microbiota dictate T cell clonal selection to augment graft-vs-host disease after stem cell transplantation [Dataset]. http://doi.org/10.5281/zenodo.7402790
    Explore at:
    Dataset updated
    Apr 28, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Albert C. Yeh; Motoko Koyama; Olivia Waltner; Simone A. Minnie; Julie R. Boiko; Tamer Shabaneh; Shuichiro Takahashi; Kathleen S. Ensbey; Christine R. Schmidt; Samuel R.W. Legg; Tomoko Sekiguchi; Ethan Nelson; Andrew R. Stevens; Shruti S. Bhise; Tracy Goodpaster; Saranya Chakka; Scott N. Furlan; Kate A. Markey; Marie E. Bleakley; Charles O. Elson; Philip H. Bradley; Geoffrey R. Hill
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data S2. GLIPH2 hits for all recipient pairs amongst B6 to B6D2F1 transplants with or without antibiotic exposure and 900 cGy vs. 1300 cGy TBI conditioning. For all TCRs within each recipient pair (12 mice total; 66 pairs for spleen, 15 pairs for SILP analysis), GLIPH2 specificity groups are generated as described in the methods.

  8. A dataset for window and blind states detection

    • figshare.com
    bin
    Updated Aug 5, 2024
    Cite
    Seunghyeon Wang (2024). A dataset for window and blind states detection [Dataset]. http://doi.org/10.6084/m9.figshare.26403004.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Aug 5, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Seunghyeon Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data was constructed for detecting window and blind states. All images were annotated in XML format using LabelImg for object detection tasks. The results of applying the Faster R-CNN based model include detected images and loss graphs for both training and validation in this dataset. Additionally, the raw data with other annotations can be used for applications such as semantic segmentation and image captioning.
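    Since LabelImg writes Pascal VOC-style XML, a minimal parsing sketch might look like this (the annotation file name is hypothetical):

    import xml.etree.ElementTree as ET

    # Hedged sketch: read one LabelImg (Pascal VOC) annotation file.
    root = ET.parse("window_001.xml").getroot()
    for obj in root.iter("object"):
        name = obj.findtext("name")  # class label, e.g. a window or blind state
        box = obj.find("bndbox")
        xmin, ymin = int(box.findtext("xmin")), int(box.findtext("ymin"))
        xmax, ymax = int(box.findtext("xmax")), int(box.findtext("ymax"))
        print(name, (xmin, ymin, xmax, ymax))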

  9. Code and Data from: An Imputation-Based Approach for Augmenting Sparse...

    • zenodo.org
    zip
    Updated Jul 29, 2025
    Cite
    Amy Benefield; Amy Benefield; VP Nagraj; VP Nagraj; Desiree Williams; Desiree Williams (2025). Code and Data from: An Imputation-Based Approach for Augmenting Sparse Epidemiological Signals [Dataset]. http://doi.org/10.5281/zenodo.16584391
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 29, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Amy Benefield; Amy Benefield; VP Nagraj; VP Nagraj; Desiree Williams; Desiree Williams
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This directory contains R code and the data required to run the full data augmentation described in "An Imputation-Based Approach for Augmenting Sparse Epidemiological Signals." This is the updated code corresponding to the updated medRxiv manuscript; it now includes ILINet data as a predictor in the imputation.

    "aug_pipeline.R" runs through all component steps and calls individual functions and data files within the directory. "plots_for_pipeline.R" uses data created during the aug_pipeline script to visualize individual steps in the augmentation process.

  10. Ten best U.S. plastic surgeons for breast augmentation 2025

    • statista.com
    Updated Sep 11, 2025
    Cite
    Statista (2025). Ten best U.S. plastic surgeons for breast augmentation 2025 [Dataset]. https://www.statista.com/statistics/1621916/best-breast-augmentation-plastic-surgeons-us/
    Explore at:
    Dataset updated
    Sep 11, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Mar 2025 - May 2025
    Area covered
    United States
    Description

    According to a 2025 U.S. ranking by Statista R published by Newsweek, the best plastic surgeon for breast augmentation was ******************, an MD based in Texas. This was followed by doctors *************, also in Texas, and ************** in Michigan. Breast augmentation is the ****** most popular cosmetic surgery in the United States, with over *** thousand procedures in 2024.

  11. Canned Goods Dataset

    • universe.roboflow.com
    zip
    Updated Jan 22, 2023
    Cite
    Vizonix (2023). Canned Goods Dataset [Dataset]. https://universe.roboflow.com/vizonix/canned-goods
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 22, 2023
    Dataset authored and provided by
    Vizonix
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Canned Goods Bounding Boxes
    Description

    The Canned Goods Dataset

    by Vizonix

    This dataset differentiates between 4 similar object classes: 4 types of canned goods. We built this dataset with cans of olives, beans, stewed tomatoes, and refried beans.

    The dataset is pre-augmented. That is to say, all required augmentations are applied to the actual native dataset prior to inference. We have found that augmenting this way provides our users maximum visibility and flexibility in tuning their dataset (and classifier) to achieve their specific use-case goals. Augmentations are present and visible in the native dataset prior to the classifier - so it's never a mystery what augmentation tweaks produce a more positive or negative outcome during training. It also eliminates the risk of downsizing affecting annotations.

    The training images in this dataset were created in our studio in Florida from actual physical objects to the following specifications:

    • Each item was imaged using a 360-degree horizontal rotation - imaged every 9 degrees at 0 degrees of elevation.
    • Each item was imaged (per above) 3 times - using physical left lighting, right lighting, and frontal lighting.
    • Backgrounds in this dataset are completely random - they do not factor into the classifier's decision-making (nor do we ever want them to). We used 100% random backgrounds generated in-house. This eliminates background bias in the dataset. Our use of random backgrounds is a newly released feature in our datasets.

    The training images in this dataset were composited / augmented in this way (a rough code sketch follows this list):

    • Imaged objects were randomly rotated in frame from 15 to 340 degrees.
    • Imaged objects were randomly positioned in frame.
    • Imaged objects were randomly sized from .33 to 1.0 of original.
    • Image contrast was randomly adjusted from .7 to 1.25.
    • Gaussian blur was randomly introduced at a factor from 2 to 5.
    • Color channels were dropped randomly (R,G,B).
    • Grayscale images were introduced randomly.
    • Soft occlusions (noise, and others) in random transparencies were randomly introduced.
    • Hard occlusions (noise and others) in solid transparencies were randomly introduced.
    • Brightness was randomly adjusted.
    • Sharpness was randomly adjusted.
    • Color balance was randomly adjusted.
    • Images were resized to 640x640 for Roboflow's platform.
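
    A rough Python sketch of a few of the photometric steps above, using Pillow (parameter ranges are taken from the list; this is an illustration, not Vizonix's actual pipeline):

    import random
    from PIL import Image, ImageEnhance, ImageFilter

    # Hedged sketch: apply a subset of the listed augmentations to one image.
    img = Image.open("can_olives_001.jpg")                                   # hypothetical file name
    img = img.rotate(random.uniform(15, 340), expand=True)                   # random rotation, 15-340 degrees
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.25))      # contrast 0.7-1.25
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(2, 5)))  # blur factor 2-5
    if random.random() < 0.1:                                                # occasional grayscale image
        img = img.convert("L").convert("RGB")
    img = img.resize((640, 640))                                             # final 640x640 size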

    1,600+ different images were uploaded for each class (out of the 25,000 total images created for each class).

    Understanding our Dataset Insights File

    As users train their classifiers, they often wish to enhance accuracy by experimenting with or tweaking their dataset. With our Dataset Insights documents, they can easily determine which images possess which augmentations. Dataset Insights allow users to easily add or remove images with specific augmentations as they wish. This also provides a detailed profile and inventory of each file in the dataset.

    The Dataset Insights document enables the user to see exactly which source image, angle, augmentation(s), etc. were used to create each image in the dataset.

    Dataset Insight Files:

    About Vizonix

    Vizonix (vizonix.com) builds from-scratch datasets using 100% in-house photography. Our images and backgrounds are generated in our Florida studio. We typically image smaller items, deliver in 72 hours, and specialize in Manufacturer Quality Assurance (MQA) datasets.

  12. Data from: Bayesian analysis of biogeography when the number of areas is...

    • datadryad.org
    • data.niaid.nih.gov
    • +1 more
    zip
    Updated May 31, 2013
    Cite
    Michael J. Landis; Nicholas J. Matzke; Brian R. Moore; John P. Huelsenbeck (2013). Bayesian analysis of biogeography when the number of areas is large [Dataset]. http://doi.org/10.5061/dryad.8346r
    Explore at:
    Available download formats: zip
    Dataset updated
    May 31, 2013
    Dataset provided by
    Dryad
    Authors
    Michael J. Landis; Nicholas J. Matzke; Brian R. Moore; John P. Huelsenbeck
    Time period covered
    Dec 17, 2012
    Area covered
    Malesian Archipelago
    Description

    Biogeographic history of Malesian Rhododendron (supp_vireya_map.pdf): The region was parsed into 20 discrete geographic areas following Brown et al. (2006). Each circle corresponds to a discrete area whose geographic coordinates are summarized by its location on the map. Posterior probability of being present in an area is proportional to the opacity of the circle, with the data at the tips being observed with probability one. Circles are colored according to their position relative to Wallace's Line. We infer a continental Asian origin for Malesian rhododendrons. This figure complements the summarized results presented in Figure 7 in the manuscript and is best explored using zoom.

    Vireya phylogeny (malaysia.55.tree): Time-calibrated phylogeny of Rhododendron section Vireya from Webb & Ree (2012).

    Vireya biogeographical coordinates: Geographical coordinates used to represent biogeographical areas of Rhododendron section Vireya as defined by Brown et ...

  13. Bitter Gourd Images from Augmentation

    • kaggle.com
    zip
    Updated May 17, 2021
    Cite
    RAGHAV R POTDAR (2021). Bitter Gourd Images from Augmentation [Dataset]. https://www.kaggle.com/raghavrpotdar/bitter-gourd-images-from-augmentation
    Explore at:
    Available download formats: zip (4925653 bytes)
    Dataset updated
    May 17, 2021
    Authors
    RAGHAV R POTDAR
    Description

    Context

    This dataset was created for use in a notebook.

  14. Vegnet Augmented jpeg

    • kaggle.com
    zip
    Updated Aug 29, 2023
    Cite
    Adwyth Darsan R (2023). Vegnet Augmented jpeg [Dataset]. https://www.kaggle.com/datasets/adwythdarsanr/vegnet-augmented-jpeg
    Explore at:
    Available download formats: zip (308876080 bytes)
    Dataset updated
    Aug 29, 2023
    Authors
    Adwyth Darsan R
    Description

    Dataset

    This dataset was created by Adwyth Darsan R


  15. Data from: Augmentation of telemedicine post-operative follow-up after...

    • datasetcatalog.nlm.nih.gov
    • tandf.figshare.com
    Updated Aug 3, 2022
    Cite
    Vagefi, M. Reza; Grob, Seanna R.; Ahmad, Meleha; Winn, Bryan J.; Smith, Loreley D.; Ashraf, Davin C.; Kersten, Robert C.; Miller, Amanda (2022). Augmentation of telemedicine post-operative follow-up after oculofacial plastic surgery with a self-guided patient tool [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000249833
    Explore at:
    Dataset updated
    Aug 3, 2022
    Authors
    Vagefi, M. Reza; Grob, Seanna R.; Ahmad, Meleha; Winn, Bryan J.; Smith, Loreley D.; Ashraf, Davin C.; Kersten, Robert C.; Miller, Amanda
    Description

    This study evaluates a web-based tool designed to augment telemedicine post-operative visits after periocular surgery. Adult, English-speaking patients undergoing periocular surgery with telemedicine follow-up were studied prospectively in this interventional case series. Participants submitted visual acuity measurements and photographs via a web-based tool prior to routine telemedicine post-operative visits. An after-visit survey assessed patient perceptions. Surgeons rated photographs and live video for quality and blurriness; external raters also evaluated photographs. Images were analyzed for facial centration, resolution, and algorithmically detected blur. Complications were recorded and graded for severity and relation to telemedicine. Seventy-nine patients were recruited. Surgeons requested an in-person assessment for six patients (7.6%) due to inadequate evaluation by telemedicine. Surgeons rated patient-provided photographs to be of higher quality than live video at the time of the post-operative visit (p < 0.001). Image blur and resolution had moderate and weak correlation with photograph quality, respectively. A photograph blur detection algorithm demonstrated sensitivity of 85.5% and specificity of 75.1%. One patient experienced a wound dehiscence with a possible relationship to inadequate evaluation during telemedicine follow-up. Patients rated the telemedicine experience and their comfort with the structure of the visit highly. Augmented telemedicine follow-up after oculofacial plastic surgery is associated with high patient satisfaction, rare conversion to clinic evaluation, and few related post-operative complications. Automated detection of image resolution and blur may play a role in screening photographs for subsequent iterations of the web-based tool.

  16. brain tumor augmented dataset

    • kaggle.com
    zip
    Updated Oct 29, 2021
    Cite
    Dr. Mohamed R. Shoaib (2021). brain tumor augmented dataset [Dataset]. https://www.kaggle.com/mohammedredaomramn/brain-tumor-augmented-dataset
    Explore at:
    Available download formats: zip (781345037 bytes)
    Dataset updated
    Oct 29, 2021
    Authors
    Dr. Mohamed R. Shoaib
    Description

    Dataset

    This dataset was created by Dr. Mohamed R. Shoaib


  17. Appendix C. R code for likelihood-based analyses of horned lizard data and...

    • wiley.figshare.com
    html
    Updated Jun 2, 2023
    Cite
    Murray G. Efford (2023). Appendix C. R code for likelihood-based analyses of horned lizard data and simulations. [Dataset]. http://doi.org/10.6084/m9.figshare.3552177.v1
    Explore at:
    Available download formats: html
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Wiley (https://www.wiley.com/)
    Authors
    Murray G. Efford
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    R code for likelihood-based analyses of horned lizard data and simulations.

  18. The results of the augmented Dickey-Fuller test.

    • plos.figshare.com
    xls
    Updated Jul 24, 2024
    + more versions
    Cite
    Aya Salama Abdelhady; Nadia Dahmani; Lobna M. AbouEl-Magd; Ashraf Darwish; Aboul Ella Hassanien (2024). The results of the augmented dickey-fuller test. [Dataset]. http://doi.org/10.1371/journal.pone.0306874.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 24, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Aya Salama Abdelhady; Nadia Dahmani; Lobna M. AbouEl-Magd; Ashraf Darwish; Aboul Ella Hassanien
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Climate change mitigation necessitates increased investment in green sectors. This study proposes a methodology to predict green finance growth across various countries, aiming to encourage such investments. Our approach leverages time-series Conditional Generative Adversarial Networks (CT-GANs) for data augmentation and Nonlinear Autoregressive Neural Networks (NARNNs) for prediction. The green finance growth prediction model was applied to datasets collected from forty countries across five continents. The Augmented Dickey-Fuller (ADF) test confirmed the non-stationary nature of the data, supporting the use of NARNNs. CT-GANs were then employed to augment the data for improved prediction accuracy. Results demonstrate the effectiveness of the proposed model. NARNNs trained with CT-GAN-augmented data achieved superior performance across all regions, with R-squared (R2) values of 98.8%, 96.6%, and 99% for Europe, Asia, and other countries, respectively, and corresponding RMSE values of 1.26e+2, 2.16e+2, and 1.16e+2. Compared to a baseline NARNN model without augmentation, CT-GAN augmentation significantly improved both R2 and RMSE: the baseline R2 values for the Europe, Asia, and other-countries models are 96%, 73%, and 97.2%, with RMSE values of 2.24e+2, 7e+2, and 2.07e+2, respectively. The Nonlinear Autoregressive Exogenous Neural Network (NARX-NN) exhibited significantly lower performance across Europe, Asia, and other countries, with R2 values of 74%, 52%, and 86%, and RMSE values of 1.11e+2, 3.63e+2, and 1.8e+2, respectively.
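    For reference, a stationarity check of this kind can be reproduced with the ADF test in statsmodels (an illustrative sketch on synthetic data, not the study's code):

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    # Illustrative ADF test on a random walk, which is non-stationary by construction.
    series = np.cumsum(np.random.normal(size=200))
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")  # a high p-value fails to reject a unit root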

  19. Yolo tiger and lion labelled detection

    • kaggle.com
    zip
    Updated Sep 10, 2024
    Cite
    Junkie75 (2024). Yolo tiger and lion labelled detection [Dataset]. https://www.kaggle.com/datasets/junkie75/yolo-tiger-and-lion-labelled-detection/discussion
    Explore at:
    Available download formats: zip (64999035 bytes)
    Dataset updated
    Sep 10, 2024
    Authors
    Junkie75
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset contains images of lions and tigers sourced from the Open Images Dataset V6 and labeled specifically for object detection using the YOLO format. The dataset focuses on two classes: lion and tiger, with annotations provided for each image in a YOLO-compatible .txt file format. This dataset is ideal for training machine learning models for wildlife detection and classification tasks, particularly in distinguishing between these two majestic big cats.

    Key Features:

    Classes: Lion and Tiger
    Annotations: YOLO format, with bounding box coordinates and class labels provided in separate .txt files for each image (a parsing sketch follows this list).
    Source: Images sourced from Open Images Dataset V6, which is published under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
    Application: Suitable for object detection models like YOLO, SSD, or Faster R-CNN.
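
    A minimal sketch of reading one such annotation file (the file name and class-id mapping are assumptions; the line format is the standard YOLO one):

    from pathlib import Path

    # Hedged sketch: parse a YOLO .txt file, one object per line:
    # "class_id x_center y_center width height", coordinates normalized to [0, 1].
    names = {0: "lion", 1: "tiger"}
    for line in Path("lion_001.txt").read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        print(names[int(cls)], float(xc), float(yc), float(w), float(h))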
    

    Usage:

    The dataset can be used for training, validating, or testing object detection models. Each image is accompanied by a corresponding YOLO annotation file, making it easy to integrate into any YOLO-based pipeline.

    Attribution:

    This dataset is derived from the Open Images Dataset V6, and proper attribution must be given. Please credit the Open Images Dataset when using or sharing this dataset in any format.

  20. Data from: Exploring deep learning techniques for wild animal behaviour...

    • search.dataone.org
    • data.niaid.nih.gov
    • +2 more
    Updated Jul 5, 2025
    Cite
    Ryoma Otsuka; Naoya Yoshimura; Kei Tanigaki; Shiho Koyama; Yuichi Mizutani; Ken Yoda; Takuya Maekawa (2025). Exploring deep learning techniques for wild animal behaviour classification using animal-borne accelerometers [Dataset]. http://doi.org/10.5061/dryad.2ngf1vhwk
    Explore at:
    Dataset updated
    Jul 5, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Ryoma Otsuka; Naoya Yoshimura; Kei Tanigaki; Shiho Koyama; Yuichi Mizutani; Ken Yoda; Takuya Maekawa
    Time period covered
    Jan 23, 2024
    Description

    Machine learning-based behaviour classification using acceleration data is a powerful tool in bio-logging research. Deep learning architectures such as convolutional neural networks (CNN), long short-term memory (LSTM) and self-attention mechanisms as well as related training techniques have been extensively studied in human activity recognition. However, they have rarely been used in wild animal studies. The main challenges of acceleration-based wild animal behaviour classification include data shortages, class imbalance problems, various types of noise in data due to differences in individual behaviour and where the loggers were attached, and complexity in data due to complex animal-specific behaviours, which may have limited the application of deep learning techniques in this area. To overcome these challenges, we explored the effectiveness of techniques for efficient model training: data augmentation, manifold mixup and pre-training of deep learning models with unlabelled data, ...

    Data from: Exploring deep learning techniques for wild animal behaviour classification using animal-borne accelerometers

    https://doi.org/10.5061/dryad.2ngf1vhwk

    This repository contains the datasets of two seabird species (streaked shearwaters and black-tailed gulls) used in the following paper (Otsuka et al., 2024).

    Otsuka, R., Yoshimura, N., Tanigaki, K., Koyama, S., Mizutani, Y., Yoda, K., & Maekawa, T. (2024). Exploring deep learning techniques for wild animal behaviour classification using animal-borne accelerometers. Methods in Ecology and Evolution.

    The paper aimed to classify the behaviour of these two seabird species using tri-axial acceleration data and deep learning. It explored the effectiveness of deep learning models and related training techniques, such as data augmentation.

    ⚠️ WARNING (2025-07-02)
    We found that the data collected using the BMX-055 sensor was likely not sampled consistently at the intended fre...
