19 datasets found
  1. cityscapes dataset

    Well Maintained Train and Val data with Separated Image and MASK Label (96×256)

    • kaggle.com
    • datasetninja.com
    zip
    Updated Sep 13, 2023
    Cite
    Dev-ShuvoAlok (2023). cityscapes dataset [Dataset]. https://www.kaggle.com/datasets/shuvoalok/cityscapes/code
    Available download formats: zip (209001313 bytes)
    Authors
    Dev-ShuvoAlok
    Description

    Context

    Cityscapes data (dataset home page) contains labeled videos taken from vehicles driven in Germany. This version is a processed subsample created as part of the Pix2Pix paper. The dataset has still images from the original videos, and the semantic segmentation labels are shown in images alongside the original image. This is one of the best datasets around for semantic segmentation tasks.
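
    For orientation, a minimal sketch of splitting one of these paired images back into its photo and label halves; the file path is hypothetical, and the half order (photo left, label right) is an assumption to verify against the actual files.

    from PIL import Image

    def split_pair(path):
        """Split a side-by-side pix2pix-style image into its two halves."""
        pair = Image.open(path)
        w, h = pair.size
        photo = pair.crop((0, 0, w // 2, h))   # assumed: photo on the left
        label = pair.crop((w // 2, 0, w, h))   # assumed: label map on the right
        return photo, label

    photo, label = split_pair("cityscapes/train/1.jpg")  # hypothetical path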

    Acknowledgements

    This dataset is the same as what is available here from the Berkeley AI Research group.

    License

    The Cityscapes data available from cityscapes-dataset.com has the following license:

    This dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:

    • That the dataset comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we (Daimler AG, MPI Informatics, TU Darmstadt) do not accept any responsibility for errors or omissions.
    • That you include a reference to the Cityscapes Dataset in any work that makes use of the dataset. For research papers, cite our preferred publication as listed on our website; for other media cite our preferred publication as listed on our website or link to the Cityscapes website.
    • That you do not distribute this dataset or modified versions. It is permissible to distribute derivative works in as far as they are abstract representations of this dataset (such as models trained on it or additional annotations that do not directly include any of our data) and do not allow to recover the dataset or something similar in character.
    • That you may not use the dataset or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.
    • That all rights not expressly granted to you are reserved by (Daimler AG, MPI Informatics, TU Darmstadt).

    Inspiration

    Can you identify what objects are where in these images taken from a vehicle?

  2. CityScapes - Depth and Segmentation

    • kaggle.com
    zip
    Updated Dec 4, 2021
    Cite
    Sakshay Mahna (2021). CityScapes - Depth and Segmentation [Dataset]. https://www.kaggle.com/datasets/sakshaymahna/cityscapes-depth-and-segmentation/discussion
    Available download formats: zip (673063205 bytes)
    Authors
    Sakshay Mahna
    Description

    Context

    This is a preprocessed version of the Cityscapes dataset, intended for two tasks: depth estimation and semantic segmentation.

    Content

    The dataset contains 128 × 256 images, their 19-class semantic segmentation labels, and inverse depth labels.
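
    As a sketch of how these triplets might be consumed, a small PyTorch Dataset is shown below; the .npy file layout is an assumption, so adapt the paths and array shapes to however the archive actually unpacks.

    import numpy as np
    import torch
    from torch.utils.data import Dataset

    class CityscapesDepthSeg(Dataset):
        """Yields (image, segmentation label, inverse depth) triplets."""

        def __init__(self, image_path, label_path, depth_path):
            self.images = np.load(image_path)  # assumed shape: (N, 128, 256, 3)
            self.labels = np.load(label_path)  # assumed shape: (N, 128, 256), classes 0..18
            self.depths = np.load(depth_path)  # assumed shape: (N, 128, 256), inverse depth

        def __len__(self):
            return len(self.images)

        def __getitem__(self, i):
            image = torch.from_numpy(self.images[i]).permute(2, 0, 1).float() / 255.0
            label = torch.from_numpy(self.labels[i]).long()
            depth = torch.from_numpy(self.depths[i]).float()
            return image, label, depth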

    Acknowledgements

    The original dataset is taken from this website and the preprocessed ones are taken from this website.

    License

    The Cityscapes data available from cityscapes-dataset.com has the following license:

    This dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:

    • That the dataset comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we (Daimler AG, MPI Informatics, TU Darmstadt) do not accept any responsibility for errors or omissions.
    • That you include a reference to the Cityscapes Dataset in any work that makes use of the dataset. For research papers, cite our preferred publication as listed on our website; for other media cite our preferred publication as listed on our website or link to the Cityscapes website.
    • That you do not distribute this dataset or modified versions. It is permissible to distribute derivative works in as far as they are abstract representations of this dataset (such as models trained on it or additional annotations that do not directly include any of our data) and do not allow to recover the dataset or something similar in character.
    • That you may not use the dataset or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.
    • That all rights not expressly granted to you are reserved by (Daimler AG, MPI Informatics, TU Darmstadt).

    Citations

    • M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

    • S. Liu, E. Johns, and A. J. Davison, “End-to-End Multi-task Learning with Attention,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  3. Effect of additional modules on segmentation performance: Ablation study results in Cityscapes dataset.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Effat Sahragard; Hassan Farsi; Sajad Mohamadzadeh (2025). Effect of additional modules on segmentation performance: Ablation study results in Cityscapes dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0305561.t005
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Effat Sahragard; Hassan Farsi; Sajad Mohamadzadeh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Effect of additional modules on segmentation performance: Ablation study results in Cityscapes dataset.

  4. Performance comparison of semantic segmentation methods on Cityscapes.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Effat Sahragard; Hassan Farsi; Sajad Mohamadzadeh (2025). Performance comparison of semantic segmentation methods on Cityscapes. [Dataset]. http://doi.org/10.1371/journal.pone.0305561.t007
    Available download formats: xls
    Dataset provided by
    PLOS ONE
    Authors
    Effat Sahragard; Hassan Farsi; Sajad Mohamadzadeh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Performance comparison of semantic segmentation methods on Cityscapes.

  5. The impact of pre-trained weight.

    • figshare.com
    xls
    Updated Sep 11, 2023
    Cite
    Jian Wei; Qinzhao Wang; Zixu Zhao (2023). The impact of pre-trained weight. [Dataset]. http://doi.org/10.1371/journal.pone.0291241.t008
    Available download formats: xls
    Dataset provided by
    PLOS ONE
    Authors
    Jian Wei; Qinzhao Wang; Zixu Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cross-domain object detection is a key problem in the research of intelligent detection models. Unlike many improved algorithms based on two-stage detection models, we take a different route: this paper introduces a simple and efficient one-stage model that jointly considers inference efficiency and detection precision, and widens the range of cross-domain object detection problems that can be tackled. We name this gradient reverse layer-based model YOLO-G; it greatly improves object detection precision in cross-domain scenarios. Specifically, we add a feature alignment branch after the backbone, to which a gradient reverse layer and a classifier are attached. With only a small increase in computation, performance is substantially enhanced. Experiments such as Cityscapes→Foggy Cityscapes, SIM10k→Cityscapes, and PASCAL VOC→Clipart indicate that, compared with most state-of-the-art (SOTA) algorithms, the proposed model achieves much better mean Average Precision (mAP). Furthermore, ablation experiments on 4 components confirm the reliability of the model. The project is available at https://github.com/airy975924806/yolo-G.
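
    For readers unfamiliar with the mechanism, here is a minimal sketch of a gradient reverse layer in its standard PyTorch formulation; it is illustrative only, not the authors' code (see their repository for the actual implementation).

    import torch

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, alpha):
            ctx.alpha = alpha
            return x.view_as(x)  # identity in the forward pass

        @staticmethod
        def backward(ctx, grad_output):
            # Reverse (and scale) gradients flowing back into the backbone,
            # so the backbone learns domain-invariant features.
            return grad_output.neg() * ctx.alpha, None

    def grad_reverse(x, alpha=1.0):
        return GradReverse.apply(x, alpha)

    # Usage sketch: features = backbone(images)
    #               domain_logits = domain_classifier(grad_reverse(features))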

  6. Cityscape Source Tables

    • ega-archive.org
    Updated Mar 26, 2024
    Cite
    (2024). Cityscape Source Tables [Dataset]. https://ega-archive.org/datasets/EGAD50000000366
    License

    https://ega-archive.org/dacs/EGAC00001002525

    Description

    Source data of clinical study data corresponding to figures reported in the paper titled: Anti-TIGIT antibody improves PD-L1 blockade through myeloid and Treg cells. PMID: 38418879 DOI: 10.1038/s41586-024-07121-9

  7. pix2pix dataset

    • kaggle.com
    zip
    Updated Jul 4, 2018
    Cite
    Vikram Tiwari (2018). pix2pix dataset [Dataset]. https://www.kaggle.com/vikramtiwari/pix2pix-dataset
    Available download formats: zip (2574957257 bytes)
    Authors
    Vikram Tiwari
    Description

    Introduction

    This is the dataset for the pix2pix model, which aims to work as a general-purpose solution for image-to-image translation problems.

    Due to Kaggle's size limitations, only 4 datasets are available here.

    • Facades
    • Cityscapes
    • Maps
    • Edges to shoes

    One more dataset (Edges to handbags) can be downloaded from the link provided in the Sources section.

    Common tasks

    More details on the model, its implementations, and the community contributions can be found on the author's GitHub project page: https://phillipi.github.io/pix2pix/

    Sources

  8. The impact of different hyper-parameter α.

    • plos.figshare.com
    xls
    Updated Sep 11, 2023
    Cite
    Jian Wei; Qinzhao Wang; Zixu Zhao (2023). The impact of different hyper-parameter α. [Dataset]. http://doi.org/10.1371/journal.pone.0291241.t009
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jian Wei; Qinzhao Wang; Zixu Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cross-domain object detection is a key problem in the research of intelligent detection models. Unlike many improved algorithms based on two-stage detection models, we take a different route: this paper introduces a simple and efficient one-stage model that jointly considers inference efficiency and detection precision, and widens the range of cross-domain object detection problems that can be tackled. We name this gradient reverse layer-based model YOLO-G; it greatly improves object detection precision in cross-domain scenarios. Specifically, we add a feature alignment branch after the backbone, to which a gradient reverse layer and a classifier are attached. With only a small increase in computation, performance is substantially enhanced. Experiments such as Cityscapes→Foggy Cityscapes, SIM10k→Cityscapes, and PASCAL VOC→Clipart indicate that, compared with most state-of-the-art (SOTA) algorithms, the proposed model achieves much better mean Average Precision (mAP). Furthermore, ablation experiments on 4 components confirm the reliability of the model. The project is available at https://github.com/airy975924806/yolo-G.

  9. The Coralscapes Dataset: Semantic Scene Understanding in Coral Reefs

    • zenodo.org
    bin
    Updated Mar 24, 2025
    Cite
    Jonathan Sauder; Viktor Domazetoski; Guilhem Banc-Prandi; Gabriela Perna; Anders Meibom; Devis Tuia (2025). The Coralscapes Dataset: Semantic Scene Understanding in Coral Reefs [Dataset]. http://doi.org/10.5281/zenodo.15061505
    Available download formats: bin
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jonathan Sauder; Viktor Domazetoski; Guilhem Banc-Prandi; Gabriela Perna; Anders Meibom; Devis Tuia
    License

    Apache License 2.0: http://www.apache.org/licenses/LICENSE-2.0

    Description

    The Coralscapes dataset is the first general-purpose dense semantic segmentation dataset for coral reefs. Similar in scope and with the same structure as the widely used Cityscapes dataset for urban scene understanding, Coralscapes allows for the benchmarking of semantic segmentation models in a new challenging domain. The Coralscapes dataset spans 2075 images at 1024×2048px resolution gathered from 35 dive sites in 5 countries in the Red Sea, labeled in a consistent and speculation-free manner containing 174k polygons over 39 benthic classes.

    This repository provides a collection of scripts and instructions for working with the Coralscapes dataset. It includes the full codebase necessary for training and evaluating models on this dataset, making it possible to reproduce the results in the paper. Additionally, it contains scripts and step-by-step guidance on how to use the trained models for inference and how to fine-tune them on external datasets.

    Dataset Structure

    The dataset structure of the Coralscapes dataset follows the structure of the Cityscapes dataset:

    {root}/{type}/{split}/{site}/{site}_{seq:0>6}_{frame:0>6}_{type}{ext}
    

    The meaning of the individual elements is:

    • root the root folder of the Coralscapes dataset.
    • type the type/modality of data, gtFine for fine ground truth, leftImg8bit for left 8-bit images, leftImg8bit_1080p (gtFine_1080p) for the images (ground truth) in 1080p resolution, leftImg8bit_videoframes for the 19 preceding and 10 trailing video frames.
    • split the split, i.e. train/val/test. Note that not all kinds of data exist for all splits. Thus, do not be surprised to occasionally find empty folders.
    • site ID of the site in which this part of the dataset was recorded.
    • seq the sequence number using 6 digits.
    • frame the frame number using 6 digits.
    • ext .png
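
    A small sketch of assembling a file path from the template above; the site ID, sequence, and frame numbers are illustrative only.

    import os

    def coralscapes_path(root, type_, split, site, seq, frame, ext=".png"):
        """Build {root}/{type}/{split}/{site}/{site}_{seq:0>6}_{frame:0>6}_{type}{ext}."""
        filename = f"{site}_{seq:0>6}_{frame:0>6}_{type_}{ext}"
        return os.path.join(root, type_, split, site, filename)

    # coralscapes_path("coralscapes", "leftImg8bit", "train", "site01", 3, 42)
    # -> coralscapes/leftImg8bit/train/site01/site01_000003_000042_leftImg8bit.png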

    File Structure

    The files provided in the Zenodo repository are the following:

    • coralscapes.7z contains the Coralscapes dataset which includes the 2075 images and corresponding ground truth semantic segmentation masks at 1024x2048px resolution.
    • coralscapes_1080p.7z contains the Coralscapes images and masks in their native 1080x1920px resolution.
    • model_checkpoints.7z contains the checkpoints of the semantic segmentation models that have been fine-tuned on the Coralscapes dataset. This includes the following models: SegFormer (with a B2 and B5 backbone, trained with and without LoRA), DPT (with a DINOv2-Base and DINOv2-Giant backbone, trained with and without LoRA), a Linear segmentation model with a DINOv2-Base backbone, a UNet++ with a ResNet50 backbone and DeepLabV3+ with a ResNet50 backbone.
    • coralscapes_videoframes.7z contains the 19 preceding and 10 trailing video frames of each image in the Coralscapes dataset.
  10. Infrastructure-scale sustainable energy planning in the cityscape: Transforming urban energy metabolism in East Asia

    • service.tib.eu
    Updated Nov 17, 2025
    Cite
    (2025). Infrastructure-scale sustainable energy planning in the cityscape: Transforming urban energy metabolism in East Asia - Vdataset - LDM in NFDI4Energy [Dataset]. https://service.tib.eu/ldm_nfdi4energy/ldmservice/dataset/openaire_16a34c31-4d3b-4dd0-88cf-2780b1865bed
    Description

    {"Datasets and Jupyter notebook corresponding to the paper "Infrastructure-scale sustainable energy planning in the cityscape: Transforming urban energy metabolism in East Asia" published in WIREs Energy and Environment. See the Jupyter Notebook for additional explanations about the datasets."}

  11. Performance metrics of our Sim2Real transfer model in different datasets.

    • plos.figshare.com
    xls
    Updated Nov 30, 2023
    Cite
    Balaji Ganesh Rajagopal; Manish Kumar; Abdulaziz H. Alshehri; Fayez Alanazi; Ahmed farouk Deifalla; Ahmed M. Yosri; Abdelhalim Azam (2023). Performance metrics of our Sim2Real transfer model in different datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0293978.t002
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Balaji Ganesh Rajagopal; Manish Kumar; Abdulaziz H. Alshehri; Fayez Alanazi; Ahmed farouk Deifalla; Ahmed M. Yosri; Abdelhalim Azam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Performance metrics of our Sim2Real transfer model in different datasets.

  12. Details of large kernel attention modules.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Effat Sahragard; Hassan Farsi; Sajad Mohamadzadeh (2025). Details of large kernel attention modules. [Dataset]. http://doi.org/10.1371/journal.pone.0305561.t001
    Available download formats: xls
    Dataset provided by
    PLOS ONE
    Authors
    Effat Sahragard; Hassan Farsi; Sajad Mohamadzadeh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This paper presents a novel method for improving semantic segmentation performance in computer vision tasks. Our approach utilizes an enhanced UNet architecture that leverages an improved ResNet50 backbone. We replace the last layer of ResNet50 with deformable convolution to enhance feature representation. Additionally, we incorporate an attention mechanism, specifically ECA-ASPP (Attention Spatial Pyramid Pooling), in the encoding path of UNet to capture multi-scale contextual information effectively. In the decoding path of UNet, we explore the use of attention mechanisms after concatenating low-level features with high-level features, investigating two types: ECA (Efficient Channel Attention) and LKA (Large Kernel Attention). Our experiments demonstrate that incorporating attention after concatenation improves segmentation accuracy, and that the LKA module outperforms the ECA module in the decoder path. This finding highlights the importance of exploring different attention mechanisms and their impact on segmentation performance. To evaluate the effectiveness of the proposed method, we conduct experiments on benchmark datasets, including Stanford and Cityscapes, as well as the newly introduced WildPASS and DensPASS datasets. The proposed method achieves state-of-the-art results, including mIoU scores of 85.79 and 82.25 on the Stanford and Cityscapes datasets, respectively.
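
    For illustration, a minimal sketch of an ECA (Efficient Channel Attention) block in its standard formulation; this is the commonly used channel-attention pattern the abstract refers to, not the authors' code.

    import torch.nn as nn

    class ECA(nn.Module):
        """Channel attention via a 1D convolution over pooled channel descriptors."""

        def __init__(self, k_size=3):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                                  padding=k_size // 2, bias=False)
            self.sigmoid = nn.Sigmoid()

        def forward(self, x):                                # x: (B, C, H, W)
            y = self.pool(x)                                 # (B, C, 1, 1)
            y = self.conv(y.squeeze(-1).transpose(-1, -2))   # 1D conv across channels
            y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
            return x * y                                     # channel-wise reweighting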

  13. Semantic Segmentation - BEV

    • kaggle.com
    zip
    Updated Dec 10, 2024
    Cite
    Sakshay Mahna (2024). Semantic Segmentation - BEV [Dataset]. https://www.kaggle.com/datasets/sakshaymahna/semantic-segmentation-bev/versions/922
    Available download formats: zip (2155380825 bytes)
    Authors
    Sakshay Mahna
    Description

    Context

    This dataset has been created as part of the Cam2BEV project. There, the datasets are used for the computation of a semantically segmented bird's eye view (BEV) image given the images of multiple vehicle-mounted cameras as presented in the paper:

    A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird’s Eye View (arXiv)

    Lennart Reiher, Bastian Lampe, and Lutz Eckstein
    Institute for Automotive Engineering (ika), RWTH Aachen University

    Content

    360° Surround Cameras

    • front camera
    • rear camera
    • left camera
    • right camera
    • bird's eye view
    • bird's eye view incl. occlusion
    • homography view

    [Example images of each view (front.png, rear.png, left.png, right.png, bev.png, bev+occlusion.png, homography.png) are available under https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/]

    Characteristics

# Training Samples: 33199
    # Validation Samples: 3731
    # Vehicle Cameras: 4 (front, rear, left, right)
    # Semantic Classes: 30 (CityScapes)

    Note: The CityScapes colors for semantic classes Pedestrian and Rider are switched due to technical reasons.

    Front Camera

Resolution (x, y): 964, 604
    Focal Length (x, y): 278.283, 408.1295
    Principal Point (x, y): 482, 302
    Position (X, Y, Z): 1.7, 0.0, 1.4
    Rotation (H, P, R): 0.0, 0.0, 0.0

    Rear Camera

Resolution (x, y): 964, 604
    Focal Length (x, y): 278.283, 408.1295
    Principal Point (x, y): 482, 302
    Position (X, Y, Z): -0.6, 0.0, 1.4
    Rotation (H, P, R): 3.1415, 0.0, 0.0

    Left Camera

Resolution (x, y): 964, 604
    Focal Length (x, y): 278.283, 408.1295
    Principal Point (x, y): 482, 302
    Position (X, Y, Z): 0.5, 0.5, 1.5
    Rotation (H, P, R): 1.5708, 0.0, 0.0

    Right Camera

Resolution (x, y): 964, 604
    Focal Length (x, y): 278.283, 408.1295
    Principal Point (x, y): 482, 302
    Position (X, Y, Z): 0.5, -0.5, 1.5
    Rotation (H, P, R): -1.5708, 0.0, 0.0

    Drone Camera

Resolution (x, y): 964, 604
    Focal Length (x, y): 682.578, 682.578
    Principal Point (x, y): 482, 302
    Position (X, Y, Z): 0.0, 0.0, 50.0
    Rotation (H, P, R): 0.0, 1.5708, -1.5708
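
    As a usage sketch, the front-camera values above can be assembled into a pinhole intrinsic matrix and used to project a point expressed in the camera frame; this assumes a plain pinhole model with no distortion, and the 3D point is hypothetical.

    import numpy as np

    fx, fy = 278.283, 408.1295  # focal length (x, y), front camera
    cx, cy = 482.0, 302.0       # principal point (x, y)

    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

    point_cam = np.array([1.0, 0.5, 10.0])  # hypothetical point in the camera frame
    u, v, w = K @ point_cam
    print(u / w, v / w)                     # pixel coordinates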

    Acknowledgements

    The original dataset is taken from this website.

    License

    The Cam2BEV data available from the corresponding website has the following license:

    This dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching or scientific publications. Permission is granted to use the data given that you agree:

    1. That the dataset comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we (ika) do not accept any responsibility for errors or omissions.
    2. That you include a reference to the Cam2BEV dataset in any work that makes use of the dataset. For research papers, cite our preferred publication as listed in the readme of this repository; for other media cite our preferred publication as listed in the readme of this repository or link to the github page of the Cam2BEV project.
    3. That you do not distribute this dataset or modified versions. It is permissible to distribute derivative works in as far as they are abstract representations of this dataset (such as models trained on it or additional annotations that do not directly include any of our data) and do not allow to recover the dataset or something similar in character.
    4. That you may not use the dataset or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.
    5. That all rights not expressly granted to you are reserved by (ika).

  14. Machine software and hardware configuration.

    • plos.figshare.com
    xls
    Updated Jun 13, 2023
    Cite
    Shusheng Li; Liang Wan; Lu Tang; Zhining Zhang (2023). Machine software and hardware configuration. [Dataset]. http://doi.org/10.1371/journal.pone.0274249.t001
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Shusheng Li; Liang Wan; Lu Tang; Zhining Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Machine software and hardware configuration.

  15. Ablation results on the PASCAL VOC 2012 valuation set.

    • plos.figshare.com
    xls
    Updated Jun 13, 2023
    Cite
    Shusheng Li; Liang Wan; Lu Tang; Zhining Zhang (2023). Ablation results on the PASCAL VOC 2012 valuation set. [Dataset]. http://doi.org/10.1371/journal.pone.0274249.t003
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Shusheng Li; Liang Wan; Lu Tang; Zhining Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Concat refers to the use of concatenation operations to fuse the output of DAPM. SFM: Selective Fuse Module. SSFM is the addition of the Spatial Attention Mechanism to each of the two branches of SFM. FSFM refers to the Focusing Selective Fuse Module.

  16. Ablation results on the PASCAL VOC 2012 valuation set.

    • plos.figshare.com
    xls
    Updated Jun 13, 2023
    Cite
    Shusheng Li; Liang Wan; Lu Tang; Zhining Zhang (2023). Ablation results on the PASCAL VOC 2012 valuation set. [Dataset]. http://doi.org/10.1371/journal.pone.0274249.t002
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Shusheng Li; Liang Wan; Lu Tang; Zhining Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ASPP is Atrous Spatial Pyramid Pooling. MASPP refers to replacing the 1 × 1 convolution layer of ASPP with a 3 × 3 depthwise separable convolution layer. DAPM is our Double Attention Pyramid Module.
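
    For illustration, a minimal sketch of the 3 × 3 depthwise separable convolution that MASPP substitutes for ASPP's 1 × 1 convolution; this is the standard factorization, not the authors' exact module.

    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        def __init__(self, in_ch, out_ch, dilation=1):
            super().__init__()
            # Depthwise: one 3x3 filter per input channel (groups=in_ch).
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                       padding=dilation, dilation=dilation,
                                       groups=in_ch, bias=False)
            # Pointwise: 1x1 convolution to mix channels.
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))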

  17. Experimental results of different methods on “Cityscape to Foggy” data set.

    • plos.figshare.com
    xls
    Updated Jun 6, 2023
    Cite
    Zhengyun Fang; Hongbin Wang; Shilin Li; Yi Hu; Xingbo Han (2023). Experimental results of different methods on “Cityscape to Foggy” data set. [Dataset]. http://doi.org/10.1371/journal.pone.0270356.t001
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zhengyun Fang; Hongbin Wang; Shilin Li; Yi Hu; Xingbo Han
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Experimental results of different methods on “Cityscape to Foggy” data set.

  18. Ablation experimental results of the model on the “Cityscape to Foggy” dataset.

    • figshare.com
    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Zhengyun Fang; Hongbin Wang; Shilin Li; Yi Hu; Xingbo Han (2023). Ablation experimental results of the model on the “Cityscape to Foggy” dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0270356.t003
    Available download formats: xls
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zhengyun Fang; Hongbin Wang; Shilin Li; Yi Hu; Xingbo Han
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Ablation experimental results of the model on the “Cityscape to Foggy” dataset.

  19. Sustainable Regional Aesthetics in Tropical Environments data_figshare.docx

    • figshare.com
    docx
    Updated Aug 25, 2025
    Cite
    Amidu Ayeni (2025). Sustainable Regional Aesthetics in Tropical Environments data_figshare.docx [Dataset]. http://doi.org/10.6084/m9.figshare.29978683.v1
    Available download formats: docx
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Amidu Ayeni
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data used in a paper that undertakes a comparative analysis of Kingston, Jamaica and Lagos, Nigeria, two historically rich, culturally vibrant, and environmentally challenged cities that embody the evolving tensions between indigenous aesthetics, colonial legacies, rapid urbanisation, and contemporary demands for sustainability.
