Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The WB/WoB-ReID dataset consists of four sets: “with_bag”, “without_bag”, “both_small”, and “both_large”. The “with_bag” set comprises images of 164 identities, with three distinct types of images for each unique identity P: (i) P with no bag, (ii) P with a first bag, and (iii) P with a second bag. The “without_bag” set contains 336 identities of persons without distinct backpacks, captured with different poses, lighting, views, backgrounds, and resolutions. The number of images per identity in the “with_bag” and “without_bag” sets ranges from 4 to 15. The “both_small” and “both_large” sets are formed by combining “with_bag” and “without_bag”, with “both_large” comprising more images per identity.
Original paper: https://doi.org/10.1016/j.jvcir.2023.103931
No license specified (source: https://academictorrents.com/)
The DukeMTMC-reID (Duke Multi-Target, Multi-Camera Re-Identification) dataset is a subset of DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets in which images are cropped from hand-drawn bounding boxes. The dataset consists of 16,522 training images of 702 identities, 2,228 query images of the other 702 identities, and 17,661 gallery images.
Published: 2016 · Images: 2,000,000 · Identities: 2,700 · Purpose: person re-identification, multi-camera tracking
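The query/gallery split above follows the standard image-based ReID evaluation protocol: each query image is matched against the gallery by feature distance, and rank-1 accuracy counts how often the nearest gallery image shares the query's identity. A minimal sketch (function and variable names are my own; the real DukeMTMC-reID protocol additionally excludes same-camera matches, which is omitted here):

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose nearest gallery neighbour shares their identity."""
    # L2-normalise features so Euclidean distance behaves like cosine distance.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    # Pairwise distance matrix of shape (num_queries, num_gallery).
    dist = np.linalg.norm(q[:, None, :] - g[None, :, :], axis=2)
    nearest = dist.argmin(axis=1)
    return float(np.mean(gallery_ids[nearest] == query_ids))

# Toy example: two query identities, four gallery images.
gallery = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
g_ids = np.array([1, 1, 2, 2])
queries = np.array([[0.95, 0.05], [0.05, 0.95]])
q_ids = np.array([1, 2])
print(rank1_accuracy(queries, q_ids, gallery, g_ids))  # → 1.0
```

The full CMC curve generalises this by checking whether a correct match appears within the top-k gallery neighbours for each k.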
This dataset was created by wudidabaozha
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
## Overview
Reid Analysis is a dataset for object detection tasks - it contains Person annotations for 1,303 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
We present the Multi-view Extended Videos with Identities (MEVID) dataset for large-scale, video person re-identification (ReID) in the wild. To our knowledge, MEVID represents the most-varied video person ReID dataset, spanning an extensive indoor and outdoor environment across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, we label the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen in 33 camera views from the very-large-scale MEVA person activities dataset.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID
Publicly hosted repository for the generated data presented in "DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID", under review in WACV 2025 Algorithms Track. We generate and release over 2.1M synthetic images across 4 CC-ReID datasets, namely PRCC, CCVID, VC-Clothes, and LaST. We use diffusion inpainting to change the subject's clothing in an image… See the full description on the dataset page: https://huggingface.co/datasets/ihaveamoose/DLCR.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Office Surveillance ReID is a dataset for object detection tasks - it contains Person annotations for 1,558 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Fisheye Person Detection is a dataset for object detection tasks - it contains Person annotations for 2,101 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The CHIRLA dataset (Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis) is designed for long-term person re-identification (Re-ID) in real-world scenarios. The dataset consists of multi-camera video recordings captured over seven months in an indoor office environment. It aims to facilitate the development and evaluation of Re-ID algorithms capable of handling significant variations in individuals’ appearances, including changes in clothing and physical characteristics. The dataset includes 22 individuals with 963,554 bounding box annotations across 596,345 frames. For more details refer to the CHIRLA paper: https://arxiv.org/pdf/2502.06681

Data Generation Procedures
The dataset was recorded at the Robotics, Vision, and Intelligent Systems Research Group headquarters at the University of Alicante, Spain. Seven strategically placed Reolink RLC-410W cameras were used to capture videos in a typical office setting, covering areas such as laboratories, hallways, and shared workspaces. Each camera features a 1/2.7" CMOS image sensor with a 5.0-megapixel resolution and an 80° horizontal field of view. The cameras were connected via Ethernet and WiFi to ensure stable streaming and synchronization. A ROS-based interconnection framework was used to synchronize and retrieve images from all cameras. The dataset includes video recordings at a resolution of 1080×720 pixels, with a consistent frame rate of 30 fps, stored in AVI format with DivX MPEG-4 encoding.

Data Processing Methods and Steps
Data processing involved a semi-automatic labeling procedure:
- Detection: YOLOv8x was used to detect individuals in video frames and extract bounding boxes.
- Tracking: The Deep SORT algorithm was employed to generate tracklets and assign unique IDs to detected individuals.
- Manual Verification: A custom graphical user interface (GUI) was developed to facilitate manual verification and correction of the automatically generated labels.
Bounding boxes and IDs were assigned consistently across different cameras and sequences to maintain identity coherence.

Data Structure and Format
The dataset comprises:
- Video Files: 70 videos, each corresponding to a specific camera view in a sequence, stored in AVI format.
- Annotation Files: JSON files containing frame-wise annotations, including bounding box coordinates and identity labels.
- Benchmark Data: Processed image crops organized for Re-ID and tracking evaluation.

The dataset is structured as follows:
- videos/seq_XXX/camera_Y.avi: Video files for each camera view.
- annotations/seq_XXX/camera_Y.json: Annotation files providing labeled bounding boxes and IDs.
- benchmark: Train and test data for the two benchmarks proposed for tracking and Re-ID tasks in different scenarios.

Detailed data directory structure:

```
CHIRLA_dataset/
├── videos/                         # Raw video files
│   └── seq_XXX/
│       └── camera_Y.avi            # Video files for each camera view
├── annotations/                    # Frame-level annotations
│   └── seq_XXX/
│       └── camera_Y.json           # Bounding boxes and IDs
└── benchmark/                      # Processed benchmark data
    ├── reid/                       # Person Re-Identification
    │   ├── long_term/              # Long-term ReID scenario
    │   │   ├── train/
    │   │   │   ├── train_0/
    │   │   │   │   └── seq_XXX/
    │   │   │   └── train_1/
    │   │   └── test/
    │   │       ├── test_0/         # Validation subset
    │   │       └── test_1/         # Test subset
    │   ├── multi_camera/           # Multi-camera ReID
    │   ├── multi_camera_long_term/ # Combined scenario
    │   └── reappearance/           # Reappearance detection
    └── tracking/                   # Person Tracking
        ├── brief_occlusions/       # Short-term occlusions
        └── multiple_people_occlusions/ # Multi-person scenarios
```

For more information on how to use the benchmark data refer to the CHIRLA GitHub repository: https://github.com/bdager/CHIRLA and the paper: https://arxiv.org/pdf/2502.06681

Use Cases and Reusability
The CHIRLA dataset is suitable for:
- Long-term person re-identification
- Multi-camera tracking and re-identification
- Single-camera tracking and re-identification

Citation
If you use the CHIRLA dataset and benchmark, please cite the work as:

```
@article{bdager2025chirla,
  title={CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis},
  author={Dominguez-Dager, Bessie and Escalona, Felix and Gomez-Donoso, Fran and Cazorla, Miguel},
  journal={arXiv preprint arXiv:2502.06681},
  year={2025}
}
```
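Since the per-camera JSON files carry frame-wise bounding boxes and identity labels, a typical first step is to group detections into per-identity tracklets. A minimal sketch, assuming a hypothetical annotation schema (the real CHIRLA JSON layout may differ; consult the GitHub repository for the actual keys):

```python
import json

# Hypothetical schema: one record per frame, each listing bounding boxes
# as [x, y, w, h] together with a persistent identity label.
sample = json.loads("""
{
  "frames": [
    {"frame": 0, "detections": [{"id": 3, "bbox": [120, 80, 60, 170]}]},
    {"frame": 1, "detections": [{"id": 3, "bbox": [124, 81, 60, 169]}]}
  ]
}
""")

def tracklets(annotation):
    """Group per-frame detections into per-identity tracklets."""
    tracks = {}
    for frame in annotation["frames"]:
        for det in frame["detections"]:
            tracks.setdefault(det["id"], []).append((frame["frame"], det["bbox"]))
    return tracks

print(tracklets(sample)[3])  # two (frame, bbox) observations of identity 3
```

Because IDs are consistent across cameras and sequences, the same grouping applied over several camera files yields the cross-camera trajectories the Re-ID benchmarks are built on.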
Video-based person re-identification (ReID) aims to identify a given pedestrian video sequence across multiple non-overlapping cameras.
MARS (Motion Analysis and Re-identification Set) is a large-scale video-based person re-identification dataset and an extension of the Market-1501 dataset. It was collected from six near-synchronized cameras and consists of 1,261 different pedestrians, each captured by at least two cameras. The variations in poses, colors, and illumination of pedestrians, as well as the poor image quality, make it very difficult to achieve high matching accuracy. Moreover, the dataset contains 3,248 distractors to make it more realistic. A Deformable Part Model detector and the GMMCP tracker were used to automatically generate the tracklets (mostly 25-50 frames long).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison with state-of-the-art methods for person ReID problem.
Dataset Card for "Market1501-Background-Modified"
Dataset Summary
The Market1501-Background-Modified dataset is a variation of the original Market1501 dataset. It focuses on reducing the influence of background information by replacing the backgrounds in the images with solid colors, noise patterns, or other simplified alternatives. This dataset is designed for person re-identification (ReID) tasks, ensuring models learn person-specific features while ignoring background… See the full description on the dataset page: https://huggingface.co/datasets/ideepankarsharma2003/Market1501-Background-Modified.
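The transformation described above amounts to masking out the person and overwriting every background pixel. A minimal sketch of that idea, assuming a binary person mask is available (this illustrates the kind of operation the description implies, not the authors' exact pipeline):

```python
import numpy as np

def replace_background(image, person_mask, mode="solid", seed=0):
    """Swap every non-person pixel for a solid colour or random noise.

    `person_mask` is a boolean (H, W) array that is True on the subject.
    """
    out = image.copy()
    bg = ~person_mask
    if mode == "solid":
        out[bg] = np.array([128, 128, 128], dtype=image.dtype)  # mid grey
    else:  # "noise"
        rng = np.random.default_rng(seed)
        out[bg] = rng.integers(0, 256, size=(int(bg.sum()), 3), dtype=image.dtype)
    return out

# Toy 4x4 "image" with a 2x2 person region in the centre.
img = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(replace_background(img, mask)[0, 0])  # background pixel → [128 128 128]
```

Training on such images forces a ReID model to rely on the person region, since the background carries no identity-correlated signal.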
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The COCO Irregular Occlusion Dataset (CIOD) is a derived dataset created from the COCO dataset, focusing on irregularly-shaped occluders extracted from object masks. These occlusion samples are intended to simulate real-world complex occlusions and are specifically designed for use in occluded person re-identification (ReID) tasks. CIOD provides diverse and realistic occlusion patterns to enhance model robustness in challenging visual scenarios.
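A common way to use such occluder crops is as a paste-on augmentation: the irregular silhouette (not its rectangular crop) overwrites part of a person image during training. A minimal sketch of that augmentation; the function name and the exact compositing rule are my own, not CIOD's documented API:

```python
import numpy as np

def paste_occluder(image, occluder, occluder_mask, top, left):
    """Paste an irregularly-shaped occluder onto a person image.

    `occluder_mask` is boolean and True where the occluder is opaque, so
    only the irregular shape overwrites the image, not its bounding box.
    """
    out = image.copy()
    h, w = occluder_mask.shape
    region = out[top:top + h, left:left + w]  # view into the copy
    region[occluder_mask] = occluder[occluder_mask]
    return out

# Toy example: paste a 2x2 diagonal occluder into a blank 6x6 image.
img = np.zeros((6, 6, 3), dtype=np.uint8)
occ = np.full((2, 2, 3), 255, dtype=np.uint8)
m = np.array([[True, False], [False, True]])
result = paste_occluder(img, occ, m, 2, 2)
print(result[2, 2], result[2, 3])  # → [255 255 255] [0 0 0]
```

Randomising the occluder choice and placement per training sample yields the diverse occlusion patterns the dataset is designed to provide.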
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Detailed statistics of Market-1501 and DukeMTMC-reID.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset presents median household incomes for various household sizes in Reid, Wisconsin, as reported by the U.S. Census Bureau. The dataset highlights the variation in median household income with the size of the family unit, offering valuable insights into economic trends and disparities within different household sizes, aiding in data analysis and decision-making.
Key observations
Figure: Reid, Wisconsin median household income, by household size (in 2022 inflation-adjusted dollars).
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Household Sizes:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to assess the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights is made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Reid town median household income.
Person Re-Identification (ReID) aims to recognize a person-of-interest across different places and times. RF-ReID uses radio frequency (RF) signals for long-term person ReID.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of experimental results of dehazed images on the PKU-Reid dataset in terms of ACC, using five feature subsets (100, 250, 500, 750, and 1000 features) with SVM and KNN classifiers.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An Open Context "persons" dataset item. Open Context publishes structured data as granular, URL-identified Web resources. This "Person" record is part of the "Petra Great Temple Excavations" data publication.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the data for the Reid, Wisconsin population pyramid, which represents the Reid town population distribution across age and gender, using estimates from the U.S. Census Bureau American Community Survey 5-Year estimates. It lists the male and female population for each age group, along with the total population for those age groups. Higher numbers at the bottom of the table suggest population growth, whereas higher numbers at the top indicate declining birth rates. Furthermore, the dataset can be utilized to understand the youth dependency ratio, old-age dependency ratio, total dependency ratio, and potential support ratio.
Key observations
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Age groups:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to assess the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights is made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Reid town Population by Age.