39 datasets found
  1. PASCAL Context Dataset

    • datasetninja.com
    • opendatalab.com
    Cite
    Roozbeh Mottaghi; Xianjie Chen; Xiaobai Liu, PASCAL Context Dataset [Dataset]. https://datasetninja.com/pascal-context
    Explore at:
    Dataset provided by
    Dataset Ninja
    Authors
    Roozbeh Mottaghi; Xianjie Chen; Xiaobai Liu
    License

    http://host.robots.ox.ac.uk/pascal/VOC/voc2010/index.html#rights

    Description

    The authors of the PASCAL Context dataset conduct a comprehensive investigation into the significance of context within existing state-of-the-art detection and segmentation methodologies. Their approach involves the meticulous labeling of every pixel encompassed within the PASCAL VOC 2010 detection challenge, associating each pixel with a semantic category. This dataset is envisioned to present a considerable challenge to the research community, as it incorporates an impressive 520 additional classes that cater to both semantic segmentation and object detection.

  2. PASCAL-Context Dataset

    • academictorrents.com
    bittorrent
    Updated Nov 26, 2015
    Cite
    UCLA CCVL (2015). PASCAL-Context Dataset [Dataset]. https://academictorrents.com/details/eec6177ad62f4c47086e4cbec93ac4c08857ddbe
    Explore at:
    bittorrent (82931861)
    Dataset updated
    Nov 26, 2015
    Dataset authored and provided by
    UCLA CCVL
    License

    No license specified (https://academictorrents.com/nolicensespecified)

    Description

    This dataset is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL semantic segmentation task by providing annotations for the whole scene. The statistics section has a full list of 400+ labels. Every pixel has a unique class label. Instance information (i.e., different masks to separate different instances of the same class in the same image) is currently provided for the 20 PASCAL objects.

    Statistics: Since the dataset is an annotation of PASCAL VOC 2010, it has the same statistics as the original dataset. Training and validation contain 10,103 images, while testing contains 9,637 images.

    Usage Considerations: The classes are not drawn from a fixed pool. Instead, labelers were free to either select or type in what they believed to be the appropriate class, and to determine the appropriate object granularity. We decided to merge/split some of the categories, so the current number of categories is different from what we mentioned in the C…
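    Since every pixel carries exactly one class label, per-class pixel statistics fall out of a simple count over the label map. A minimal sketch, assuming the annotation has already been loaded as a 2-D array of integer class indices (the tiny label map below is hand-made for illustration, not taken from the dataset):

```python
from collections import Counter

def class_pixel_counts(label_map):
    """Count pixels per class index in a 2-D label map (list of rows)."""
    counts = Counter()
    for row in label_map:
        counts.update(row)
    return dict(counts)

# Hand-made 3x4 label map: 0 = a background-like class, 1 and 2 = object classes.
label_map = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 0],
]
print(class_pixel_counts(label_map))
# → {0: 5, 1: 3, 2: 4}
```

    The same counting works unchanged on real annotations once they are decoded to integer class indices.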

  3. COCO, ADE20K, PASCAL Context, and LVIS datasets

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    (2024). COCO, ADE20K, PASCAL Context, and LVIS datasets [Dataset]. https://service.tib.eu/ldmservice/dataset/coco--ade20k--pascal-context--and-lvis-datasets
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    COCO dataset, ADE20K dataset, PASCAL Context dataset, LVIS dataset

  4. pascal_context

    • kaggle.com
    zip
    Updated Nov 20, 2021
    Cite
    Adel Samigullin (2021). pascal_context [Dataset]. https://www.kaggle.com/samadel/pascal-context
    Explore at:
    zip (1208907941 bytes)
    Dataset updated
    Nov 20, 2021
    Authors
    Adel Samigullin
    Description

    Dataset

    This dataset was created by Adel Samigullin


  5. Pascal VOC 2010

    • kaggle.com
    zip
    Updated Aug 2, 2024
    Cite
    Fatemeh Boloori (2024). Pascal VOC 2010 [Dataset]. https://www.kaggle.com/datasets/fatemehboloori/pascal-context-voc-2010/suggestions
    Explore at:
    zip (1311606689 bytes)
    Dataset updated
    Aug 2, 2024
    Authors
    Fatemeh Boloori
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset

    This dataset was created by Fatemeh Boloori

    Released under Apache 2.0


  6. PASCAL VOC 2007

    • kaggle.com
    zip
    Updated Mar 25, 2018
    Cite
    zarak (2018). PASCAL VOC 2007 [Dataset]. https://www.kaggle.com/zaraks/pascal-voc-2007
    Explore at:
    zip (1774851628 bytes)
    Dataset updated
    Mar 25, 2018
    Authors
    zarak
    Description

    The PASCAL VOC project:

    • Provides standardised image data sets for object class recognition
    • Provides a common set of tools for accessing the data sets and annotations
    • Enables evaluation and comparison of different methods
    • Ran challenges evaluating performance on object class recognition (from 2005-2012, now finished)

    Context

    The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are:

    • Person: person
    • Animal: bird, cat, cow, dog, horse, sheep
    • Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
    • Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor

    There will be two main competitions, and two smaller scale "taster" competitions.

    Content

    The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image.
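    The annotation files described above follow the standard PASCAL VOC XML layout, one file per image, with a `<bndbox>` (xmin, ymin, xmax, ymax, in pixels) and a class `<name>` per object. As an illustrative sketch using only the Python standard library (the sample annotation below is hand-written, not taken from the dataset):

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Extract (class label, bounding box) pairs from a PASCAL VOC XML annotation."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        # VOC boxes are pixel coordinates: xmin, ymin, xmax, ymax.
        coords = tuple(int(box.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, coords))
    return objects

# A minimal hand-written annotation in the VOC layout (illustrative only).
sample = """
<annotation>
  <filename>000001.jpg</filename>
  <object>
    <name>dog</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
  <object>
    <name>person</name>
    <bndbox><xmin>8</xmin><ymin>12</ymin><xmax>352</xmax><ymax>498</ymax></bndbox>
  </object>
</annotation>
"""
print(parse_voc_annotation(sample))
# → [('dog', (48, 240, 195, 371)), ('person', (8, 12, 352, 498))]
```

    Real annotation files contain additional fields (size, truncation, difficulty flags) that the sketch ignores; the loop above simply returns every labelled object in the image, so multiple objects from multiple classes come back together.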

    Acknowledgements

    @misc{pascal-voc-2007,
      author = "Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.",
      title = "The {PASCAL} {V}isual {O}bject {C}lasses {C}hallenge 2007 {(VOC2007)} {R}esults",
      howpublished = "http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html"}

  7. Comparison of lightweight network performance.

    • plos.figshare.com
    xls
    Updated Nov 29, 2023
    Cite
    Yuping Yin; Zheyu Zhang; Lin Wei; Chao Geng; Haoxiang Ran; Haodong Zhu (2023). Comparison of lightweight network performance. [Dataset]. http://doi.org/10.1371/journal.pone.0294865.t004
    Explore at:
    xls
    Dataset updated
    Nov 29, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Yuping Yin; Zheyu Zhang; Lin Wei; Chao Geng; Haoxiang Ran; Haodong Zhu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In the context of intelligent driving, pedestrian detection faces challenges related to low accuracy in target recognition and positioning. To address this issue, a pedestrian detection algorithm is proposed that integrates a large kernel attention mechanism with the YOLOV5 lightweight model. The algorithm aims to enhance long-term attention and dependence during image processing by fusing the large kernel attention module with the C3 module. Furthermore, it addresses the lack of long-distance relationship information in channel and spatial feature extraction and representation by introducing the Coordinate Attention mechanism, which effectively extracts local information and focused location details, thereby improving detection accuracy. To improve the positioning accuracy of obscured targets, the alpha CIOU bounding box regression loss function is employed; it helps mitigate the impact of occlusions and enhances the algorithm's ability to precisely localize pedestrians. To evaluate the effectiveness of the trained model, experiments are conducted on the BDD100K pedestrian dataset as well as the Pascal VOC dataset. Experimental results demonstrate that the improved attention fusion YOLOV5 lightweight model achieves an average accuracy of 60.3%. Specifically, the detection accuracy improves by 1.1% compared to the original YOLOV5 algorithm, and the accuracy performance index reaches 73.0%. These findings strongly indicate that the proposed algorithm significantly enhances the accuracy of pedestrian detection in road scenes.

  8. Colorful Fashion Dataset For Object Detection

    • kaggle.com
    zip
    Updated Feb 18, 2022
    Cite
    Nguyễn Gia Bảo Lê (2022). Colorful Fashion Dataset For Object Detection [Dataset]. https://www.kaggle.com/datasets/nguyngiabol/colorful-fashion-dataset-for-object-detection/discussion
    Explore at:
    zip (109441424 bytes)
    Dataset updated
    Feb 18, 2022
    Authors
    Nguyễn Gia Bảo Lê
    Description

    Context

    The original dataset is used in the paper (S. Liu, J. Feng, C. Domokos, H. Xu, J. Huang, Z. Hu, & S. Yan, 2014), CFPD: Fashion Parsing with Weak Color-Category Labels, for object detection and segmentation tasks (https://sites.google.com/site/fashionparsing).

    This dataset is customized for the object detection task: skin, face, and background information have been removed, and annotations follow the PASCAL VOC format. The classes of this dataset are: sunglass, hat, jacket, shirt, pants, shorts, skirt, dress, bag, shoe.

    Note: If you want .txt file with YOLO format, you can use Annotations_txt directory.

  9. PAsCAL WP6 Pilot 2 Autonomous Driving Training

    • data.niaid.nih.gov
    • nde-dev.biothings.io
    • +2more
    Updated Jan 6, 2023
    Cite
    Fedel, Nuccia; Vecere, Lucia; Vella, Valerio; Scotti, Marco (2023). PAsCAL WP6 Pilot 2 Autonomous Driving Training [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7505845
    Explore at:
    Dataset updated
    Jan 6, 2023
    Dataset provided by
    Automobile Club d'Italia
    Authors
    Fedel, Nuccia; Vecere, Lucia; Vella, Valerio; Scotti, Marco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was collected within the context of the PAsCAL research project between January 2022 and February 2022 at the ACI Vallelunga test circuit and premises in Rome, Italy. Subject of the pilot was a driving training for advanced ADAS systems and test driving of a Level-2+ autonomous vehicle on a test track, performing several different manoeuvres to test the capability of the ADAS systems.

    Some of the participants underwent driving training for autonomous vehicles before the test drive, in which they had to perform several difficult driving manoeuvres (such as on slippery ground or emergency braking). The purpose of this pilot was to observe whether driving training improves the driver's capability to use ADAS systems and therefore operate the vehicle in a safer way. Depending on the pilot, it is recommended to adapt existing driving training for beginners, professionals and experienced drivers.

    In order to analyse the answers given to the questions, it is recommended to consult also the "PAsCAL WP6 Pilots Surveys" dataset, which contains all questions and possible answers.

  10. PAsCAL WP6 Pilot 3 Autonomous Bus Line Datasets (Passengers and Co-Road Users)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 5, 2023
    Cite
    Kühl, Friederike L.; Papí, José F.; De La Peña, Elena (2023). PAsCAL WP6 Pilot 3 Autonomous Bus Line Datasets (Passengers and Co-Road Users) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7343005
    Explore at:
    Dataset updated
    Jan 5, 2023
    Dataset provided by
    Etelätär Innovation
    Asociación Española De La Carretera
    Authors
    Kühl, Friederike L.; Papí, José F.; De La Peña, Elena
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These two datasets were collected within the context of the PAsCAL research project between September 2021 and March 2022 on the campus of the UAM University in Madrid, Spain. Subject of the pilot was a Level 4 autonomous bus shuttle, which is to date one of the only shuttles in Europe to run in open traffic. Because of this, and because only a steward is on board the vehicle in case of incidents or for passenger support, two surveys were designed:

    Survey for Shuttle Users: Passengers experienced the ride on the autonomous shuttle within the context of the multi-modal trip, connecting them to an interurban train station and an interurban (long-distance) bus station on the other side. Purpose of the survey was to capture the participant's overall acceptance and attitude towards the vehicle after using it and comparing it directly to available traditional modes of transport.

    Survey for Shuttle Co-Road Users: Since the shuttle is operating in open traffic, co-road users were also stopped randomly and asked to complete the survey to map the acceptance of the autonomous shared and public vehicle they were sharing the road with. This included not just car drivers, but also pedestrians and cyclists on-site.

    In order to analyse the answers given to the questions, it is recommended to consult also the "PAsCAL WP6 Pilots Surveys" dataset, which contains all questions and possible answers.

  11. Contrast of detection results of different algorithms in PASCAL VOC2007...

    • plos.figshare.com
    xls
    Updated Sep 24, 2025
    Cite
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li (2025). Contrast of detection results of different algorithms in PASCAL VOC2007 dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0332714.t008
    Explore at:
    xls
    Dataset updated
    Sep 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contrast of detection results of different algorithms in PASCAL VOC2007 dataset.

  12. Sheep detection

    • kaggle.com
    zip
    Updated Sep 22, 2019
    Cite
    intelec.ai (2019). Sheep detection [Dataset]. https://www.kaggle.com/intelecai/sheep-detection
    Explore at:
    zip (28953294 bytes)
    Dataset updated
    Sep 22, 2019
    Authors
    intelec.ai
    Description

    Context

    This dataset was created to show how one can create an object detector from scratch without writing any code. More about that here: https://youtu.be/wuVh1X-HbJ8

    Content

    The dataset contains sheep images, which were collected from the Internet and annotated with the VoTT visual object tagging tool. More about that here: https://youtu.be/uDWgWJ5Gpwc

    Acknowledgement

    This dataset was created by Intelec AI team.

  13. PAsCAL WP6 Pilot 1 High-Capacity Bus Operations

    • data.europa.eu
    • data.niaid.nih.gov
    • +1more
    unknown
    Updated Jul 3, 2025
    Cite
    Zenodo (2025). PAsCAL WP6 Pilot 1 High-Capacity Bus Operations [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-7505804?locale=da
    Explore at:
    unknown (18289)
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was collected within the context of the PAsCAL research project between June 2021 and September 2021 at the E-Bus Competence Centre premises in Livange, Luxembourg. Subject of the pilot was a Wizard of Oz experiment: a modified high-capacity bus was used to simulate a Level-5 automated vehicle to the passengers, although the bus was operated and driven by a human driver who was not visible to the participants.

    The participants experienced several malfunctions of the bus and were offered a Human-Machine Interface (HMI), which connected them to a traffic control centre for troubleshooting. The purpose of the pilot was to evaluate whether available HMIs are able to bridge the gap to human driver support for passengers. Some of the participants were blind or partially sighted, in order to also observe the adequacy of the solution for vulnerable travellers.

    In order to analyse the answers given to the questions, it is recommended to also consult the "PAsCAL WP6 Pilots Surveys" dataset, which contains all questions and possible answers.

  14. The comparison experimental results (pixel accuracy, mean accuracy, and mean...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    ZeYu Wang; YanXia Wu; ShuHui Bu; PengCheng Han; GuoYin Zhang (2023). The comparison experimental results (pixel accuracy, mean accuracy, and mean IoU) of the SIEANs with different methods on Stanford Background dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0195114.t007
    Explore at:
    xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    ZeYu Wang; YanXia Wu; ShuHui Bu; PengCheng Han; GuoYin Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The comparison experimental results (pixel accuracy, mean accuracy, and mean IoU) of the SIEANs with different methods on Stanford Background dataset.

  15. PAsCAL WP6 Pilot 4 Shared Connected Transport

    • data.niaid.nih.gov
    • data.europa.eu
    Updated Jan 6, 2023
    Cite
    Wirtz, Joanne; Van Egmond, Patrick; Berthelot, Sébastien; Kaeding, Daniel (2023). PAsCAL WP6 Pilot 4 Shared Connected Transport [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7509351
    Explore at:
    Dataset updated
    Jan 6, 2023
    Dataset provided by
    Moovee
    LuxMobility
    Sales-Lentz
    Authors
    Wirtz, Joanne; Van Egmond, Patrick; Berthelot, Sébastien; Kaeding, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These two datasets were collected within the context of the PAsCAL research project between November 2021 and June 2022. Each of the surveys were dedicated to two different pilot scenarios, settings and vehicles:

    Shared Connected Vehicle Fleet: An existing rental service for a vehicle fleet for employees and students of the University of Luxembourg was enhanced by adding an advanced Level-2+ vehicle to the fleet. The users were already familiar with the functionality of the service (booking process, etc.). They were asked to take a realistic trip covering urban areas as well as a strip of highway, and were invited to test the autonomous features of the vehicle (removing hands from the steering wheel, automatic parking and more). The pilot took place in the Belval area of Luxembourg;

    Bus shuttle: An autonomous bus shuttle with Level-4 autonomy was piloted, which connects a train station to a business park. Participants were workers or visitors of the business park and the objective of this pilot was to observe the adequacy of the shuttle in the commuting context.

    In order to analyse the answers given to the questions, it is recommended to consult also the "PAsCAL WP6 Pilots Surveys" dataset, which contains all questions and possible answers.

  16. AHOD: Adaptive Hybrid Object Detector for Context-Awareed Item

    • figshare.com
    json
    Updated May 14, 2025
    Cite
    Serge AMAN (2025). AHOD: Adaptive Hybrid Object Detector for Context-Awareed Item [Dataset]. http://doi.org/10.6084/m9.figshare.29064287.v2
    Explore at:
    json
    Dataset updated
    May 14, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Serge AMAN
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We evaluated our AHOD model using two well-known datasets in the field of object detection:

    • COCO (Common Objects in Context): one of the most widely used benchmarks for object detection. Contains over 200,000 images and more than 80 object categories. Includes objects in varied and sometimes cluttered contexts, allowing the robustness of detectors to be evaluated.
    • Pascal VOC: another reference dataset, often used for classification, detection and segmentation tasks. Includes 20 object categories, with precise bounding box annotations. Less complex than COCO, but useful for comparing performance on more conventional objects.

    Tools, techniques and innovations used

    The AHOD architecture is based on three main modules:

    • Feature Pyramid Enhancement (FPE): a multi-scale feature processing tool. Improves the representation of objects of various sizes in the same image. Inspired by architectures such as FPN (Feature Pyramid Networks), but optimised for better performance.
    • Dynamic Context Module (DCM): an intelligent contextual module. Capable of dynamically adjusting the extracted features according to the context (e.g. by adapting the features according to urban or rural areas in a road image). Enhances the model's ability to understand the overall context of the scene.
    • Fast and Accurate Detection Head (FADH): an optimised detection head. Seeks a compromise between the speed of YOLO and the accuracy of Faster R-CNN. Probably uses lightweight convolution layers or optimisations such as MobileNet/depthwise convolutions.

    Probable technologies used

    Although the summary does not specify this, we can reasonably assume that the following tools are used:

    • Deep learning frameworks: PyTorch or TensorFlow, which are standard in object detection research.
    • GPUs for training and inference, particularly for measuring inference times (essential in real-time applications).
    • Standard evaluation techniques: mAP (mean Average Precision), a measure of average precision; FPS (Frames Per Second) or inference time for real-time performance.
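    The mAP evaluation mentioned above bottoms out in box overlap: detections are matched to ground truth by intersection-over-union (IoU). A minimal IoU computation for (xmin, ymin, xmax, ymax) boxes, shown as an illustrative sketch rather than the AHOD evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
# → 0.14285714285714285  (i.e. 1/7)
```

    mAP then averages precision over recall levels per class, counting a detection as correct when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5).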

  17. The lightweight methods based on DETR.

    • plos.figshare.com
    xls
    Updated Sep 24, 2025
    Cite
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li (2025). The lightweight methods based on DETR. [Dataset]. http://doi.org/10.1371/journal.pone.0332714.t002
    Explore at:
    xls
    Dataset updated
    Sep 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Small object detection is an essential but challenging task in computer vision. Transformer-based algorithms have demonstrated remarkable performance on computer vision tasks. Nevertheless, they suffer from inadequate feature extraction for small objects, and they are difficult to deploy on resource-constrained platforms due to their heavy computational burden. To tackle these problems, an efficient local-global fusion Transformer (ELFT) is proposed for small object detection, based on attention and a grouping strategy. Specifically, we first design an efficient local-global fusion attention (ELGFA) mechanism to extract sufficient location features and integrate detailed information from feature maps, thereby improving accuracy. Besides, we present a grouped feature update module (GFUM) to reduce computational complexity by alternately updating high-level and low-level features within each group. Furthermore, a broadcast context module (CB) is introduced to obtain richer context information, further enhancing the ability to detect small objects. Extensive experiments are conducted on three benchmarks, i.e., Remote Sensing Object Detection (RSOD), NWPU VHR-10 and PASCAL VOC2007, achieving 95.8%, 94.3% and 85.2% in mean average precision (mAP), respectively. Compared to DINO, the number of parameters is reduced by 10.4%, and the floating point operations (FLOPs) are reduced by 22.7%. The experimental results demonstrate the efficacy of ELFT in small object detection tasks, while maintaining an attractive level of computational complexity.

  18. Stanford Background (Standford Background Dataset)

    • opendatalab.com
    zip
    Updated Apr 7, 2023
    Cite
    Stanford University (2023). Stanford Background (Standford Background Dataset) [Dataset]. https://opendatalab.com/OpenDataLab/Stanford_Background
    Explore at:
    zip (888537298 bytes)
    Dataset updated
    Apr 7, 2023
    Dataset provided by
    Stanford University
    Area covered
    Stanford
    Description

    The Stanford Background Dataset is a new dataset introduced in Gould et al. (ICCV 2009) for evaluating methods for geometric and semantic scene understanding. The dataset contains 715 images chosen from existing public datasets: LabelMe, MSRC, PASCAL VOC and Geometric Context. Our selection criteria were for the images to be of outdoor scenes, have approximately 320-by-240 pixels, contain at least one foreground object, and have the horizon position within the image (it need not be visible).

  19. Comparison of detection results of different algorithms in NWPU VHR-10...

    • plos.figshare.com
    xls
    Updated Sep 24, 2025
    Cite
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li (2025). Comparison of detection results of different algorithms in NWPU VHR-10 dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0332714.t006
    Explore at:
    xls
    Dataset updated
    Sep 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of detection results of different algorithms in NWPU VHR-10 dataset.

  20. The results of ablation studies on the RSOD dataset.

    • plos.figshare.com
    xls
    Updated Sep 24, 2025
    Cite
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li (2025). The results of ablation studies on the RSOD dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0332714.t010
    Explore at:
    xls
    Dataset updated
    Sep 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The results of ablation studies on the RSOD dataset.
