Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Label Pet is a dataset for object detection tasks - it contains Pet Label annotations for 1,269 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Oxford-IIIT-Pets-VL-Enriched
An enriched version of the Oxford-IIIT Pet Dataset with image captions, bounding boxes, and label issues! With this additional information, the Oxford-IIIT Pet dataset can be extended to various tasks such as image retrieval or visual question answering. The label issues help to curate a cleaner and leaner dataset.
Description
The dataset consists of 6 columns:
image_id: Unique identifier for each… See the full description on the dataset page: https://huggingface.co/datasets/visual-layer/oxford-iiit-pet-vl-enriched.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Oxford-IIIT Pet Dataset
Images from The Oxford-IIIT Pet Dataset. Only images and labels have been pushed; segmentation annotations were ignored.
Homepage: https://www.robots.ox.ac.uk/~vgg/data/pets/
License: Same as the original dataset.
The Oxford-IIIT Pet Dataset is a 37-category pet dataset with roughly 200 images for each class. The images have large variations in scale, pose, and lighting. All images have an associated ground truth annotation of breed, head ROI, and pixel-level trimap segmentation.
The Oxford-IIIT Pet Dataset is a 37-category pet image dataset with roughly 200 images for each class. The images have large variations in scale, pose, and lighting. All images have an associated ground truth annotation of breed and species. Additionally, head bounding boxes are provided for the training split, allowing this dataset to be used for simple object detection tasks. In the test split, the bounding boxes are empty.
To use this dataset:
import tensorflow_datasets as tfds

ds = tfds.load('oxford_iiit_pet', split='train')
for ex in ds.take(4):
    print(ex)
See the guide for more information on tensorflow_datasets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Yolov6 Oxford IIIT Pet Dataset is a dataset for object detection tasks - it contains Dogs annotations for 3,317 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The Oxford-IIIT Pet Dataset is a 37-class pet dataset created by the Visual Geometry Group at Oxford. Citation: O. M. Parkhi et al., 2012.
TFRecords have been made by using this Kaggle dataset of Oxford's Pets.
The data is split into train and test sets of files, where trainX-Y contains a total of Y images. The TFRecords themselves have the following features:
feature = {
    'image': _bytes_feature,
    'image_name': _bytes_feature,
    'target': _int64_feature
}
Images are resized to 224x224, and the range of the target labels is shifted from the original 1-37 to 0-36.
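A minimal sketch of reading these TFRecords back, assuming TensorFlow is available and the image bytes were stored as encoded JPEG (an assumption; the exact encoding is not stated above). The `shift_label` helper reproduces the documented 1-37 to 0-36 remapping:

```python
def shift_label(original_label: int) -> int:
    """Map the original 1-37 breed label to the stored 0-36 range."""
    return original_label - 1

def parse_example(serialized):
    # TensorFlow is imported lazily so the pure helper above stays
    # usable without the dependency installed.
    import tensorflow as tf
    # Feature spec mirroring the TFRecord schema described above.
    feature = {
        'image': tf.io.FixedLenFeature([], tf.string),
        'image_name': tf.io.FixedLenFeature([], tf.string),
        'target': tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(serialized, feature)
    # Assumes the stored bytes are JPEG; images were resized to 224x224.
    image = tf.io.decode_jpeg(parsed['image'], channels=3)
    return image, parsed['target']
```

`parse_example` would typically be mapped over a `tf.data.TFRecordDataset` of the trainX-Y files.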
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
== Introduction ==
For many years PET centres around the world have developed and optimised their own analysis pipelines, including a mixture of in-house and independent software, and have implemented different modelling choices for PET image processing and data quantification. As a result, many different methods and tools are available for PET image analysis.
== Aim of the dataset ==
This dataset aims to provide a normative tool to assess the performance and consistency of PET modelling approaches on the same data for which the ground truth is known. It was created and released for the NRM2018 PET Grand Challenge. The challenge aimed to evaluate the performance of different PET analysis tools in identifying the areas and magnitude of receptor binding changes in a PET radioligand neurotransmission study.
The present dataset refers to 5 simulated human subjects scanned twice. For each subject the first PET scan (ses-baseline) represents baseline conditions; the second scan (ses-displaced) represents the scan after a pharmacological challenge in which the tracer binding has been displaced in certain regions of interest. A total of 10 dynamic scans are provided in the current dataset.
The nature of the neuroreceptor tracer used for the simulation (hereafter referred to as [11C]LondonPride) is intended to be as general as possible. Any similarity to real PET tracer uptake is purely coincidental. Each simulated scan consists of a 90-minute dynamic PET acquisition after bolus tracer injection, as obtained with a Siemens Biograph mMR PET/MR scanner. The data were simulated including attenuation, randoms and scatter effects, and the decay of the radiotracer, and considering the geometry and resolution of the scanner. PET data can be considered motion-free, as no motion or motion-related artifacts are included in the simulated dataset. The data were binned into 23 frames: 4×15 s, 4×60 s, 2×150 s, 10×300 s and 3×600 s. Each frame was reconstructed with the MLEM algorithm with 100 iterations. The reconstructed images available in the dataset are already decay corrected.
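The 23-frame binning above can be sanity-checked to cover the full 90-minute acquisition:

```python
# Frame durations from the text: 4x15 s, 4x60 s, 2x150 s, 10x300 s, 3x600 s.
frames = [15] * 4 + [60] * 4 + [150] * 2 + [300] * 10 + [600] * 3

assert len(frames) == 23          # 23 frames, as stated
total_seconds = sum(frames)
print(total_seconds / 60)         # -> 90.0 (minutes)
```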
All provided PET images are already normalised to standard MNI space (182×218×182 voxels, 1 mm).
== Data simulation process ==
For the simulation of each of the 10 scans (5 subjects, 2 scans each), time activity curves (TACs) for each voxel of the phantom were generated from the kinetic parameters using the 2TCM equations. The TACs had a resolution of 1 s and included the effect of radiotracer decay, which was simulated with a half-life of 20.34 min (the 11C half-life). Each voxel TAC was binned with the following framing: 4×15 s, 4×60 s, 2×150 s, 10×300 s and 3×600 s, using the mean activity value for each time frame. After this process, the dynamic phantom for each scan is ready to be used in its simulation. The phantoms had the same resolution as the parametric maps (1×1×1 mm^3).
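The decay included in the TACs follows the usual exponential law; a small sketch using the stated 20.34 min half-life:

```python
import math

C11_HALF_LIFE_MIN = 20.34  # 11C half-life used in the simulation

def decay_factor(t_min: float) -> float:
    """Fraction of activity remaining after t_min minutes of 11C decay."""
    return math.exp(-math.log(2) * t_min / C11_HALF_LIFE_MIN)

# After exactly one half-life, half the activity remains:
print(round(decay_factor(C11_HALF_LIFE_MIN), 3))  # -> 0.5
```

Decay correction of a reconstructed frame amounts to dividing its activity by this factor evaluated at the frame's reference time.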
Each scan was simulated with a total of 3×10^8 counts and by modelling the different physical effects of a PET acquisition. For each frame of a scan, the phantom was smoothed with a 2.5 mm FWHM kernel (lower than the spatial resolution of the mMR scanner, since the phantom was already low resolution) and projected into a span-11 sinogram using the mMR scanner geometry. Then the resulting sinograms were multiplied by the attenuation factors, obtained from an attenuation map generated from the CT image of the patient, and by the normalization factors of the mMR scanner. Next, Poisson noise was introduced by simulating a random process for every sinogram bin, obtaining the sinogram of true events. A uniform sinogram multiplied by the normalization factors was used for the randoms, and a smoothed version of the emission sinogram for the scatters; these were scaled so that randoms made up 20% and scatters 25% of the total counts. Poisson noise was introduced to the randoms and scatters, which were then added to the trues sinogram. Finally, each frame was individually reconstructed using the MLEM algorithm with 100 iterations, a 2.5 mm PSF and the standard mMR voxel size (2.09×2.09×2.03 mm^3). The reconstructed images were corrected for activity decay and resampled into the original MNI space. For the simulation and reconstruction, an in-house reconstruction framework was used (Belzunce and Reader 2017).
== Simulated Drug ==
The pharmacological challenge given to the subjects before the second scan (ses-displaced) is based, like the tracer, on a simulated drug. Any similarity to existing drugs is purely coincidental. The drug binds competitively at the radiotracer target and has no secondary affinities. The drug is simulated as given as a single oral bolus 30 min prior to the scan.
== Additional data in the folder ==
Along with the raw data, some additional derivative data are provided: six regions of displacement, helpful for quantification and analysis. The six regions of displacement were manually generated (using ITK-SNAP) and applied consistently to all subjects to generate displaced k3 parametric maps. Based on neuroreceptor theory (Innis, Cunningham et al. 2007), any change in k3 produces an equivalent change in BPnd. The volumes of the regions ranged from 343 mm^3 to 2275 mm^3, and the regions were selected to lie in areas of higher tracer uptake at baseline. None of the displacement ROIs has a purely geometrical (e.g. cube or sphere) or anatomical shape. The regions were created to represent different sizes and different levels of tracer displacement, according to the following values:
+------+---------------+------------------+
| ROI  | Volume (mm^3) | Displacement (%) |
+------+---------------+------------------+
| ROI1 | 2555          | 27               |
| ROI2 | 2275          | 27               |
| ROI3 | 1152          | 21               |
| ROI4 | 493           | 18               |
| ROI5 | 343           | 18               |
| ROI6 | 418           | 18               |
+------+---------------+------------------+
The ROIs are not symmetrically distributed across the brain. A definition of each ROI name can be found in the accompanying dseg.tsv file.
== References ==
- Belzunce, M. A. and A. J. Reader (2017). "Assessment of the impact of modeling axial compression on PET image reconstruction." Medical Physics 44(10): 5172-5186.
- Innis, R. B., V. J. Cunningham, J. Delforge, M. Fujita, A. Gjedde, R. N. Gunn, J. Holden, S. Houle, S. C. Huang, M. Ichise, H. Iida, H. Ito, Y. Kimura, R. A. Koeppe, G. M. Knudsen, J. Knuuti, A. A. Lammertsma, M. Laruelle, J. Logan, R. P. Maguire, M. A. Mintun, E. D. Morris, R. Parsey, J. C. Price, M. Slifstein, V. Sossi, T. Suhara, J. R. Votaw, D. F. Wong and R. E. Carson (2007). "Consensus nomenclature for in vivo imaging of reversibly binding radioligands." J Cereb Blood Flow Metab 27(9): 1533-1539.
== Appendix: Current Folder Contents ==
├── CHANGES
├── LICENSE
├── README
├── dataset_description.json
├── derivatives
│   └── masks
│       ├── dseg.tsv
│       ├── sub-000101
│       │   ├── ses-baseline
│       │   │   └── sub-000101_ses-baseline_label-displacementROI_dseg.nii.gz
│       │   └── ses-displaced
│       │       └── sub-000101_ses-displaced_label-displacementROI_dseg.nii.gz
│       ├── sub-000102
│       │   ├── ses-baseline
│       │   │   └── sub-000102_ses-baseline_label-displacementROI_dseg.nii.gz
│       │   └── ses-displaced
│       │       └── sub-000102_ses-displaced_label-displacementROI_dseg.nii.gz
│       ├── sub-000103
│       │   ├── ses-baseline
│       │   │   └── sub-000103_ses-baseline_label-displacementROI_dseg.nii.gz
│       │   └── ses-displaced
│       │       └── sub-000103_ses-displaced_label-displacementROI_dseg.nii.gz
│       ├── sub-000104
│       │   ├── ses-baseline
│       │   │   └── sub-000104_ses-baseline_label-displacementROI_dseg.nii.gz
│       │   └── ses-displaced
│       │       └── sub-000104_ses-displaced_label-displacementROI_dseg.nii.gz
│       └── sub-000105
│           ├── ses-baseline
│           │   └── sub-000105_ses-baseline_label-displacementROI_dseg.nii.gz
│           └── ses-displaced
│               └── sub-000105_ses-displaced_label-displacementROI_dseg.nii.gz
├── participants.json
├── participants.tsv
├── sub-000101
│   ├── ses-baseline
│   │   ├── anat
│   │   │   ├── sub-000101_ses-baseline_acq-T1w.json
│   │   │   └── sub-000101_ses-baseline_acq-T1w.nii.gz
│   │   └── pet
│   │       ├── sub-000101_ses-baseline_rec-MLEM_pet.json
│   │       └── sub-000101_ses-baseline_rec-MLEM_pet.nii.gz
│   └── ses-displaced
│       ├── anat
│       │   ├── sub-000101_ses-displaced_acq-T1w.json
│       │   └── sub-000101_ses-displaced_acq-T1w.nii.gz
│       └── pet
│           ├── sub-000101_ses-displaced_rec-MLEM_pet.json
│           └── sub-000101_ses-displaced_rec-MLEM_pet.nii.gz
└── sub-000102
    ├── ses-baseline
    │   ├── anat
    │   │   ├── sub-000102_ses-baseline_acq-T1w.json
    │   │   └── sub-000102_ses-baseline_acq-T1w.nii.gz
    │   └── pet
    │       ├── sub-000102_ses-baseline_rec-MLEM_pet.json
    │       └── sub-000102_ses-baseline_rec-MLEM_pet.nii.gz
    └── ses-displaced
        ├── anat
        │   ├── sub-000102_ses-displaced_acq-T1w.json
        │   └── …
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Cat Vs Dog Single Label Classification is a dataset for classification tasks - it contains Items annotations for 818 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
5 classes: cats, dogs, ferrets, hamsters, and rabbits. 100 images each, divided into 70 for training, 15 for evaluation, and 15 for testing.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Muhammad Zain Faraz
Released under MIT
The dataset contains 45 documents with narrative descriptions of business processes and their annotations, which cover activities, gateways, actors, and flow information.
Each document is composed of three files:
- Doc_name.txt: the process description, in CoNLL format.
- Doc_name.process-elements.IOB2.txt: process elements annotated with the IOB2 scheme, in CoNLL format.
- Doc_name.relations.tsv: relations between process elements. Each line is a triplet (source, relation tag, target); source and target are in the form "n_sent_x words range".
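A small sketch of reading a relations file, assuming plain tab-separated lines as described; the sample row, including its "flow" relation tag, is hypothetical:

```python
import csv
import io

def read_relations(tsv_text):
    """Parse a Doc_name.relations.tsv: one (source, relation tag, target) triplet per line."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter='\t')
    return [tuple(row) for row in reader if row]

# Hypothetical line following the "n_sent_x words range" form described above:
sample = "n_sent_1 0-2\tflow\tn_sent_2 0-3\n"
print(read_relations(sample))  # -> [('n_sent_1 0-2', 'flow', 'n_sent_2 0-3')]
```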
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Animal Categories: Masters of Survival, Speed, and Strategy
This dataset categorizes animals into unique groups based on their survival traits, physical abilities, and behavioral strategies. From stealthy nocturnal hunters to fast-paced predators and intelligent problem-solvers, this collection highlights the diversity and adaptability of the animal kingdom. Each category is carefully curated to showcase the unique characteristics that make these animals stand out in their respective environments.
Dataset Structure:
In the labels.csv file:
- image_name: contains the file paths or names of the images. The names are structured to indicate the type of animal or object in the image (e.g., beetle/687486f1cb.jpg, parrot/5affc48d37.jpg).
- category: contains the label or category associated with each image. These categories describe the type of animal or object in the image, such as Tiny Survivors, Survival Geniuses, Apex Predators, etc.

Categories Included:
- Stealth & Shadows: masters of camouflage and nocturnal survival (e.g., bat, leopard, owl).
- Speed Demons: the fastest animals on land, in the air, or in the water (e.g., cheetah, falcon, dolphin).
- Tough Defenders: hard-shelled or armored animals (e.g., turtle, armadillo, hedgehog).
- Apex Predators: top of the food chain (e.g., lion, shark, tiger).
- Survival Geniuses: highly intelligent and skilled problem-solvers (e.g., chimpanzee, octopus, crow).
- Flight Masters: birds and insects that dominate the skies (e.g., eagle, hummingbird, butterfly).
- Underwater Specialists: ocean-based creatures (e.g., whale, jellyfish, seahorse).
- Cold-Climate Survivors: adapted to harsh winters (e.g., penguin, polar bear, arctic fox).
- Pack Hunters & Social Strategists: animals that work in groups to hunt or survive (e.g., wolf, lion, meerkat).
- Tiny Survivors: small but resilient creatures (e.g., rat, cockroach, ladybug).
- Other: miscellaneous animals that don't fit into the above categories.

Acknowledgments: This dataset was inspired by the incredible diversity of the animal kingdom. Special thanks to the dataset Animal Image Dataset (90 Different Animals) by Sourav Banerjee for providing a rich collection of animal images that can complement this categorical dataset. Combining these datasets could enable exciting projects, such as image classification or trait-based animal recognition.
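A small sketch of tallying images per category, assuming labels.csv has image_name and category as its header columns (the column order is an assumption); the sample rows reuse the example names above:

```python
import csv
import io
from collections import Counter

def category_counts(csv_text):
    """Count images per category in a labels.csv with image_name,category columns."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row['category'] for row in reader)

# Hypothetical rows following the structure described above:
sample = (
    "image_name,category\n"
    "beetle/687486f1cb.jpg,Tiny Survivors\n"
    "parrot/5affc48d37.jpg,Flight Masters\n"
)
print(category_counts(sample))  # counts per category label
```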
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Animal Labelling is a dataset for object detection tasks - it contains Animals annotations for 812 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) is currently one of the most powerful tools for the clinical diagnosis of dementias such as Alzheimer's Disease (AD). Meanwhile, MR imaging, being non-radioactive and offering high contrast resolution, is highly accessible in clinical settings. This dataset therefore uses FDG-PET images as the ground truth for evaluating AD, to support the development of methods for predicting AD in patients using MR images. The dataset includes an AD group and a control group (healthy group). The image-diagnosis group assignment is made by neurology specialists based on comprehensive judgment using clinically relevant information. Each set of data contains one set of MRI T1 images and one set of FDG-PET images. The image format is DICOM, and all images have been anonymized. To obtain the clinical information and related documentation, please contact the administrator.
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This dataset consists of CT and PET-CT DICOM images of lung cancer subjects with XML Annotation files that indicate tumor location with bounding boxes. The images were retrospectively acquired from patients with suspicion of lung cancer, and who underwent standard-of-care lung biopsy and PET/CT. Subjects were grouped according to a tissue histopathological diagnosis. Patients with Names/IDs containing the letter 'A' were diagnosed with Adenocarcinoma, 'B' with Small Cell Carcinoma, 'E' with Large Cell Carcinoma, and 'G' with Squamous Cell Carcinoma.
The images were analyzed on the mediastinum (window width, 350 HU; level, 40 HU) and lung (window width, 1,400 HU; level, -700 HU) settings. Reconstructions were made with a 2 mm slice thickness in lung settings. The CT slice interval varies from 0.625 mm to 5 mm. Scanning modes include plain, contrast, and 3D reconstruction.
Before the examination, the patient underwent fasting for at least 6 hours, and the blood glucose of each patient was less than 11 mmol/L. Whole-body emission scans were acquired 60 minutes after the intravenous injection of 18F-FDG (4.44 MBq/kg, 0.12 mCi/kg), with patients in the supine position in the PET scanner. FDG doses and uptake times were 168.72-468.79 MBq (295.8±64.8 MBq) and 27-171 min (70.4±24.9 minutes), respectively. 18F-FDG with a radiochemical purity of 95% was provided. Patients were allowed to breathe normally during PET and CT acquisitions. Attenuation correction of PET images was performed using CT data with the hybrid segmentation method. Attenuation corrections were performed using a CT protocol (180 mAs, 120 kV, 1.0 pitch). Each study comprised one CT volume, one PET volume and fused PET and CT images: the CT resolution was 512 × 512 pixels at 1 mm × 1 mm, and the PET resolution was 200 × 200 pixels at 4.07 mm × 4.07 mm, with a slice thickness and an interslice distance of 1 mm. Both volumes were reconstructed with the same number of slices. Three-dimensional (3D) emission and transmission scans were acquired from the base of the skull to mid femur. The PET images were reconstructed via the TrueX TOF method with a slice thickness of 1 mm.
The location of each tumor was annotated by five academic thoracic radiologists with expertise in lung cancer to make this dataset a useful tool and resource for developing algorithms for medical diagnosis. Two of the radiologists had more than 15 years of experience and the others had more than 5 years of experience. After one of the radiologists labeled each subject, the other four radiologists performed a verification, so that all five radiologists reviewed each annotation file in the dataset. Annotations were captured using LabelImg. The image annotations are saved as XML files in PASCAL VOC format, which can be parsed using the PASCAL Development Toolkit: https://pypi.org/project/pascal-voc-tools/. Python code to visualize the annotation boxes on top of the DICOM images can be downloaded here.
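Since the annotations are standard PASCAL VOC XML, the bounding boxes can also be read with the Python standard library alone; a minimal sketch, where the "tumor" label and the coordinates in the sample are hypothetical:

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    """Extract (label, xmin, ymin, xmax, ymax) tuples from a PASCAL VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter('object'):
        name = obj.findtext('name')
        bb = obj.find('bndbox')
        boxes.append((name,
                      int(bb.findtext('xmin')), int(bb.findtext('ymin')),
                      int(bb.findtext('xmax')), int(bb.findtext('ymax'))))
    return boxes

# Minimal hypothetical annotation in the standard VOC layout:
sample = """<annotation><object><name>tumor</name>
<bndbox><xmin>10</xmin><ymin>20</ymin><xmax>50</xmax><ymax>60</ymax></bndbox>
</object></annotation>"""
print(read_voc_boxes(sample))  # -> [('tumor', 10, 20, 50, 60)]
```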
Two deep learning researchers used the images and the corresponding annotation files to train several well-known detection models, which resulted in a mean average precision (mAP) of around 0.87 on the validation set.
Locations that do NOT have physical license tags for sale, just mailers to obtain tags
https://creativecommons.org/publicdomain/zero/1.0/
This dataset is intended for training and testing CNN models. It contains 10,000 images of cats at different angles, positions, and places, and 10,000 images of dogs at different angles, positions, and locations.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Cat & Dog Segmentation Dataset is crafted for the media & entertainment and tourism industries, featuring a broad collection of internet-collected images with resolutions varying from 367 x 288 to 3456 x 4608 pixels. This dataset focuses on contour segmentation and includes diverse annotations such as humans, cats, dogs, and environmental elements like walls, tables, grass, and water surfaces, among others.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Can And Pet is a dataset for object detection tasks - it contains Can Pet Detection annotations for 12,127 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).