9 datasets found
  1. NYUv2 Dataset

    • paperswithcode.com
    Updated Apr 13, 2023
    Cite
    Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus (2023). NYUv2 Dataset [Dataset]. https://paperswithcode.com/dataset/nyuv2
    Dataset updated
    Apr 13, 2023
    Authors
    Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus
    Description

    The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect. It features:

    • 1449 densely labeled pairs of aligned RGB and depth images
    • 464 new scenes taken from 3 cities
    • 407,024 new unlabeled frames

    Each object is labeled with a class and an instance number. The dataset has several components:

    • Labeled: a subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels.
    • Raw: the raw RGB, depth, and accelerometer data as provided by the Kinect.
    • Toolbox: useful functions for manipulating the data and labels.
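
    The labeled subset is typically distributed as a single MATLAB .mat file saved in the v7.3 (HDF5) format, so it can be read in Python without MATLAB. Below is a minimal sketch; the file name nyu_depth_v2_labeled.mat and the key names 'images' and 'depths' reflect the commonly distributed release and should be verified against your local copy.

    import h5py
    import numpy as np

    # Assumed file and key names for the labeled subset; adjust if your copy differs.
    with h5py.File("nyu_depth_v2_labeled.mat", "r") as f:
        rgb = np.array(f["images"][0])    # (3, 640, 480) uint8, MATLAB-transposed layout
        depth = np.array(f["depths"][0])  # (640, 480) float, depth in metres

    rgb = rgb.transpose(2, 1, 0)  # -> (480, 640, 3)
    depth = depth.T               # -> (480, 640)
    print(rgb.shape, depth.shape)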

  2. nyu_depth_v2

    • tensorflow.org
    • huggingface.co
    Updated Nov 23, 2022
    Cite
    (2022). nyu_depth_v2 [Dataset]. https://www.tensorflow.org/datasets/catalog/nyu_depth_v2
    Dataset updated
    Nov 23, 2022
    Description

    The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.

    To use this dataset:

    import tensorflow_datasets as tfds

    # Load the TFDS version of NYU Depth V2 and print a few training examples.
    ds = tfds.load('nyu_depth_v2', split='train')
    for ex in ds.take(4):
        print(ex)
    

    See the guide for more information on tensorflow_datasets.
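
    If the feature keys follow the TFDS catalog entry ('image' and 'depth'), each example can be unpacked into NumPy arrays roughly as follows; treat the key names as an assumption and check ds.element_spec for your installed version.

    import tensorflow_datasets as tfds

    # Sketch: convert one example to NumPy, assuming the feature keys are
    # 'image' (RGB, uint8) and 'depth' (per-pixel depth map).
    ds = tfds.load('nyu_depth_v2', split='train')
    for ex in tfds.as_numpy(ds.take(1)):
        image, depth = ex['image'], ex['depth']
        print(image.shape, image.dtype, depth.shape, depth.dtype)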

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/nyu_depth_v2-0.0.1.png

  3. NYU-Depth-V2 Dataset

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    (2024). NYU-Depth-V2 Dataset [Dataset]. https://service.tib.eu/ldmservice/dataset/nyu-depth-v2-dataset
    Dataset updated
    Dec 2, 2024
    Description

    The NYU-Depth-V2 dataset is a large-scale dataset for indoor depth estimation.

  4. NYUv2: Official Split Dataset

    • kaggle.com
    Updated Jan 27, 2023
    Cite
    Awsaf (2023). NYUv2: Official Split Dataset [Dataset]. https://www.kaggle.com/datasets/awsaf49/nyuv2-official-split-dataset/data
    Available formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 27, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Awsaf
    License

    CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Overview

    Source: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2 (origin: fast-depth)

    Train: 48k
    Test: 654
    Image dtype: uint8
    Depth dtype: uint16

    Image to Depth Conversion

    import cv2

    def image2depth(path):
        # Read the 16-bit depth image without any automatic conversion.
        depth = cv2.imread(path, cv2.IMREAD_UNCHANGED)
        depth = depth.astype('float32')
        depth /= (2**16 - 1)  # scale the uint16 range to [0, 1]
        depth *= 10.0         # map to metres (maximum depth of 10 m)
        return depth
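
    A quick usage sketch follows; the path is hypothetical, and it assumes the depth maps are stored as 16-bit image files, as the uint16 dtype above suggests.

    # Hypothetical path; point this at an actual depth file from the split.
    depth = image2depth('path/to/some_depth.png')
    print(depth.shape, depth.min(), depth.max())  # values in metres, roughly 0-10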
    
  5. nyuv2

    • huggingface.co
    Updated May 29, 2025
    Cite
    Jagennath Hari (2025). nyuv2 [Dataset]. https://huggingface.co/datasets/jagennath-hari/nyuv2
    Dataset updated
    May 29, 2025
    Authors
    Jagennath Hari
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    NYUv2

    This is an unofficial and preprocessed version of NYU Depth Dataset V2 made available for easier integration with modern ML workflows. The dataset was converted from the original .mat format into a split structure with embedded RGB images, depth maps, semantic masks, and instance masks in Hugging Face-compatible format.

    📸 Sample Visualization (panels: RGB, Depth in a Jet colormap, Semantic Mask)

    See the full description on the dataset page: https://huggingface.co/datasets/jagennath-hari/nyuv2.
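
    For quick experimentation, a minimal loading sketch using the standard datasets library; the split and column names are not assumed here, so inspect the returned object or the dataset card.

    from datasets import load_dataset

    # Sketch: load the preprocessed NYUv2 release and inspect its splits and columns.
    ds = load_dataset("jagennath-hari/nyuv2")
    print(ds)  # shows available splits and features
    first_split = next(iter(ds.values()))
    print(first_split[0])  # per the description: RGB, depth map, semantic and instance masks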
    
  6. Quantitative comparison on NYU Depth v2 dataset.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Tao Li; Songning Luo; Zhiwei Fan; Qunbing Zhou; Ting Hu (2023). Quantitative comparison on NYU Depth v2 dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0280886.t001
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tao Li; Songning Luo; Zhiwei Fan; Qunbing Zhou; Ting Hu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Quantitative comparison on NYU Depth v2 dataset.

  7. RMRC 2014 Dataset

    • paperswithcode.com
    Updated May 31, 2015
    Cite
    Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus (2015). RMRC 2014 Dataset [Dataset]. https://paperswithcode.com/dataset/rmrc-2014
    Dataset updated
    May 31, 2015
    Authors
    Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus
    Description

    The RMRC 2014 indoor dataset is a dataset for indoor semantic segmentation. It employs the NYU Depth V2 and Sun3D datasets to define the training set. The test data consists of newly acquired images.

  8. nyuv2

    • huggingface.co
    Updated Jun 11, 2024
    Cite
    Anke Tang (2024). nyuv2 [Dataset]. https://huggingface.co/datasets/tanganke/nyuv2
    Available formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 11, 2024
    Authors
    Anke Tang
    Description

    This is the NYUv2 dataset for scene understanding tasks. I downloaded the original data from the Tsinghua Cloud and transformed it into a Hugging Face Dataset. Credit to ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning.

      Dataset Information
    

    This data contains two splits: 'train' and 'val' (used as test dataset). Each sample in the dataset has 5 items: 'image', 'segmentation', 'depth', 'normal', and 'noise'. The noise is generated using torch.rand().

      Usage… See the full description on the dataset page: https://huggingface.co/datasets/tanganke/nyuv2.
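
    A minimal loading sketch based on the description above ('train' and 'val' splits; per-sample fields 'image', 'segmentation', 'depth', 'normal', 'noise'); treat the exact field names as an assumption until checked against the dataset card.

    from datasets import load_dataset

    # Sketch: load both splits; field names follow the description above.
    ds = load_dataset("tanganke/nyuv2")
    train, val = ds["train"], ds["val"]
    sample = train[0]
    print(sample.keys())  # expected: image, segmentation, depth, normal, noise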
    
  9. monocular-geometry-evaluation

    • huggingface.co
    Updated Mar 23, 2025
    Cite
    Ruicheng Wang (2025). monocular-geometry-evaluation [Dataset]. https://huggingface.co/datasets/Ruicheng/monocular-geometry-evaluation
    Dataset updated
    Mar 23, 2025
    Authors
    Ruicheng Wang
    Description

    Processed versions of some open-source datasets for evaluation of monocular geometry estimation.

    Dataset | Source Publication | Num images | Storage Size | Note

    NYUv2 | NYU Depth Dataset V2 [1] | 654 | 243 MB | Official test split. Mirror, glass and window manually removed. Depth beyond 5 m truncated.

    KITTI | KITTI Vision Benchmark Suite [2, 3] | 652 | 246 MB | Eigen's test split.

    ETH3D | ETH3D SLAM & Stereo Benchmarks [4] | 454 | 1.3 GB | Downsized from 6202×4135 to 2048×1365.

    iBims-1 | iBims-1 (independent… See the full description on the dataset page: https://huggingface.co/datasets/Ruicheng/monocular-geometry-evaluation.
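
    To fetch the processed evaluation sets locally, here is a download sketch using the standard huggingface_hub client; the repository's internal directory layout is not assumed.

    from huggingface_hub import snapshot_download

    # Sketch: download the full evaluation dataset repository to a local directory.
    local_dir = snapshot_download(
        repo_id="Ruicheng/monocular-geometry-evaluation",
        repo_type="dataset",
    )
    print(local_dir)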

  10. Not seeing a result you expected?
    Learn how you can add new datasets to our index.
