The Market-1501 dataset was collected in front of a supermarket at Tsinghua University. A total of six cameras were used, including five high-resolution cameras and one low-resolution camera. Field-of-view overlap exists among different cameras. Overall, this dataset contains 32,668 annotated bounding boxes of 1,501 identities. In this open system, images of each identity are captured by at most six cameras. We make sure that each annotated identity is present in at least two cameras, so that cross-camera search can be performed. The Market-1501 dataset has three featured properties:
First, our dataset uses the Deformable Part Model (DPM) as the pedestrian detector. Second, in addition to the true positive bounding boxes, we also provide false alarm detection results. Third, each identity may have multiple images under each camera, so during cross-camera search there are multiple queries and multiple ground truths for each identity.
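For illustration, the cross-camera property can be checked directly from the standard Market-1501 filename convention (e.g. `0002_c1s1_000451_03.jpg` encodes identity, camera, sequence, frame, and bounding-box index); the following is a minimal sketch with a placeholder directory path:

```python
import os
import re
from collections import defaultdict

# Standard Market-1501 naming, e.g. 0002_c1s1_000451_03.jpg:
# <identity>_c<camera>s<sequence>_<frame>_<bbox index>.jpg
# Identity 0000 marks distractors; -1 marks junk (false-alarm) detections.
PATTERN = re.compile(r"^(-?\d+)_c(\d)s(\d+)_(\d+)_(\d+)\.jpe?g$")

def cameras_per_identity(image_dir):
    """Map each real identity to the set of cameras that captured it."""
    cams = defaultdict(set)
    for name in os.listdir(image_dir):
        m = PATTERN.match(name)
        if m and int(m.group(1)) > 0:  # skip distractors and junk detections
            cams[int(m.group(1))].add(int(m.group(2)))
    return cams

cams = cameras_per_identity("Market-1501/bounding_box_train")  # placeholder path
print(sum(len(c) >= 2 for c in cams.values()), "identities seen by 2+ cameras")
```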
This dataset was created by 27Wilson
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
## Overview
Market1501 is a dataset for object detection tasks - it contains Person annotations for 1,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
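Downloading an export with the official `roboflow` pip package could look like the sketch below; the API key, workspace, project slug, version, and export format are placeholders:

```python
from roboflow import Roboflow

# Placeholder credentials and slugs; substitute your own from the Roboflow UI.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("market1501")
dataset = project.version(1).download("coco")  # export format, e.g. COCO

print(dataset.location)  # local folder containing images and annotations
```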
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The Market1501-Attributes dataset is built from the Market1501 dataset. It augments the original data with 28 hand-annotated attributes, such as gender, age, sleeve length, flags for carried items, and the colors of upper and lower clothing.
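The attribute annotations are commonly distributed as a MATLAB file (for instance, `market_attribute.mat` in the Market-1501_Attribute repository); a hedged sketch for inspecting it with SciPy, assuming that file name, top-level key, and split names, could look like:

```python
import scipy.io as sio

# Load the annotation file; squeeze_me/struct_as_record give plain Python objects.
mat = sio.loadmat("market_attribute.mat", squeeze_me=True, struct_as_record=False)
print([k for k in mat if not k.startswith("__")])  # inspect top-level keys

attrs = mat["market_attribute"]        # assumed top-level key
for split in ("train", "test"):        # assumed split names
    s = getattr(attrs, split)
    print(split, "->", s._fieldnames)  # attribute names plus image index
```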
This dataset was created by Igor Krashenyi
Released under CC0: Public Domain (https://creativecommons.org/publicdomain/zero/1.0/)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Performance comparison of our method with baselines on the Market1501, DukeMTMC-reID and MSMT17 datasets.
FiftyOne-Compatible Multiview Person ReID with Visual Attributes
A curated, attribute-rich person re-identification dataset based on Market-1501, enhanced with:
- Multi-view images per person
- Detailed physical and clothing attributes
- Natural language descriptions
- Global attribute consolidation
Dataset Statistics

| Subset  | Samples |
|---------|---------|
| Train   | 3,181   |
| Query   | 1,726   |
| Gallery | 1,548   |
| Total   | 6,455   |
Installation

Install the required… See the full description on the dataset page: https://huggingface.co/datasets/adonaivera/fiftyone-multiview-reid-attributes.
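Loading the dataset into FiftyOne could look like the following minimal sketch; it assumes a FiftyOne version that ships the Hugging Face Hub integration (`fiftyone.utils.huggingface`):

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Pull the dataset from the Hugging Face Hub into a local FiftyOne dataset
dataset = fouh.load_from_hub("adonaivera/fiftyone-multiview-reid-attributes")
print(dataset)

# Browse samples, attributes, and descriptions in the FiftyOne App
session = fo.launch_app(dataset)
session.wait()
```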
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Ablation experiments of our method on the Market1501, DukeMTMC-reID and MSMT17 datasets.
MIT License: https://opensource.org/licenses/MIT
Market-1501-C is an evaluation set that consists of algorithmically generated corruptions applied to the Market-1501 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital: contrast, elastic, pixel, JPEG compression, and saturate. Each corruption has five severity levels, resulting in 100 distinct corruptions.
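For illustration, a corruption in the Noise family could be generated as below; the severity scales follow the common ImageNet-C recipe and are assumptions, not the official Market-1501-C code:

```python
import numpy as np
from PIL import Image

def gaussian_noise(img, severity=1):
    """Additive Gaussian noise; severity 1..5 selects the noise scale."""
    scale = [0.08, 0.12, 0.18, 0.26, 0.38][severity - 1]  # assumed levels
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = np.clip(x + np.random.normal(size=x.shape, scale=scale), 0.0, 1.0)
    return Image.fromarray((x * 255).astype(np.uint8))

# Example: corrupt one Market-1501 test image at mid severity
corrupted = gaussian_noise(Image.open("0002_c1s1_000451_03.jpg"), severity=3)
corrupted.save("0002_c1s1_000451_03_gauss3.jpg")
```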
MARS is an extension of the Market-1501 dataset [51]. It has been collected from six near-synchronized cameras. It consists of 1,261 different pedestrians, who are captured by at least 2 cameras.
MIT License: https://opensource.org/licenses/MIT
MARS (Motion Analysis and Re-identification Set) is a large-scale video-based person re-identification dataset and an extension of the Market-1501 dataset. It was collected from six near-synchronized cameras and consists of 1,261 different pedestrians, each captured by at least two cameras. The variations in pose, color, and illumination of pedestrians, as well as the poor image quality, make it very difficult to achieve high matching accuracy. Moreover, the dataset contains 3,248 distractors to make it more realistic. The Deformable Part Model detector and the GMMCP tracker were used to generate the tracklets automatically (mostly 25-50 frames long).
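Given the usual MARS filename convention (e.g. `0001C1T0001F001.jpg` encodes identity, camera, tracklet, and frame), frames can be grouped back into tracklets with a small sketch like the following; the directory layout is an assumption:

```python
import os
import re
from collections import defaultdict

# Usual MARS naming: 0001C1T0001F001.jpg =
# identity 0001, camera 1, tracklet 0001, frame 001 (assumed convention).
PATTERN = re.compile(r"^(\d{4})C(\d)T(\d{4})F(\d+)\.jpg$")

def group_tracklets(image_dir):
    """Group frame paths into tracklets keyed by (identity, camera, tracklet)."""
    tracklets = defaultdict(list)
    for name in sorted(os.listdir(image_dir)):
        m = PATTERN.match(name)
        if m:
            pid, cam, tid = m.group(1), int(m.group(2)), m.group(3)
            tracklets[(pid, cam, tid)].append(os.path.join(image_dir, name))
    return tracklets

tracks = group_tracklets("MARS/bbox_train/0001")  # placeholder path
print(f"{len(tracks)} tracklets found")
```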
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Learning from limited exemplars (few-shot learning) is a fundamental, unsolved problem that has been laboriously explored in the machine learning community. However, current few-shot learners are mostly supervised and rely heavily on a large amount of labeled examples. Unsupervised learning is a more natural procedure for cognitive mammals and has produced promising results in many machine learning tasks. In this paper, we propose an unsupervised feature learning method for few-shot learning. The proposed model consists of two alternating processes: progressive clustering and episodic training. The former generates pseudo-labeled training examples for constructing episodic tasks, and the latter trains the few-shot learner on the generated episodic tasks, which further optimizes the feature representations of the data. The two processes facilitate each other and eventually produce a high-quality few-shot learner. In our experiments, our model achieves good generalization performance in a variety of downstream few-shot learning tasks on Omniglot and MiniImageNet. We also construct a new few-shot person re-identification dataset, FS-Market1501, to demonstrate the feasibility of our model in a real-world application.
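The alternating loop described in the abstract could be sketched as follows; this is an illustrative outline, not the authors' implementation, and the clustering schedule, episode sizes, and helper functions (`extract_features`, `train_on_episode`) are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_episode(labels, n_way=5, k_shot=1, n_query=5, rng=None):
    """Build one N-way K-shot episode from pseudo-labels (returns index arrays)."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:k_shot + n_query])
    return np.asarray(support), np.asarray(query)

def train_loop(extract_features, train_on_episode, data, rounds=10, episodes=100):
    for r in range(rounds):
        feats = extract_features(data)                    # current embeddings
        k = max(10, 100 - 10 * r)                         # placeholder schedule
        pseudo = KMeans(n_clusters=k).fit_predict(feats)  # progressive clustering
        for _ in range(episodes):                         # episodic training
            support, query = sample_episode(pseudo)
            train_on_episode(data, support, query)        # update the learner
```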
Market-1203 dataset: This dataset contains 1,203 individuals captured from two disjoint camera views. For each person, one to twelve images are captured from one to six different orientations under one camera view and are normalized to 128x64 pixels. This dataset is constructed from the Market-1501 benchmark data, and we manually annotate the orientation label for each image. We randomly select 601 individuals for training and the rest for testing.
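Reproducing such an identity-level split could look like this small sketch (the seed is a placeholder, not the authors' split):

```python
import random

ids = list(range(1, 1204))                  # the 1,203 annotated identities
random.Random(42).shuffle(ids)              # placeholder seed
train_ids, test_ids = ids[:601], ids[601:]  # 601 train / 602 test identities
```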