9 datasets found
  1. Data from: LLFF Dataset

    • paperswithcode.com
    • library.toponeai.link
    Updated Jan 12, 2025
    Cite
    Ben Mildenhall; Pratul P. Srinivasan; Rodrigo Ortiz-Cayon; Nima Khademi Kalantari; Ravi Ramamoorthi; Ren Ng; Abhishek Kar (2025). LLFF Dataset [Dataset]. https://paperswithcode.com/dataset/llff
    Authors
    Ben Mildenhall; Pratul P. Srinivasan; Rodrigo Ortiz-Cayon; Nima Khademi Kalantari; Ravi Ramamoorthi; Ren Ng; Abhishek Kar
    Description

    Local Light Field Fusion (LLFF) is a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. The dataset consists of both renderings and real images of natural scenes. The synthetic images are rendered from SUNCG and UnrealCV: SUNCG contains 45,000 simplistic house and room environments with texture-mapped surfaces and low geometric complexity, while UnrealCV contains a few large-scale environments modeled and rendered with extreme detail. The real images are 24 scenes captured with a handheld cellphone.

  2. bilarf_data

    • huggingface.co
    Updated May 20, 2024
    Cite
    Yuehao Wang (2024). bilarf_data [Dataset]. https://huggingface.co/datasets/Yuehao/bilarf_data
    Format: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Authors
    Yuehao Wang
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    BilaRF Dataset

    Project Page | arXiv | Code

    This dataset contains our own captured nighttime scenes, synthetic data generated from the RawNeRF dataset, and editing samples. To use the data, please go to 'Files and versions' and download 'bilarf_data.zip'. The source images with EXIF metadata are available for download at this Google Drive link. The dataset follows the file structure of NeRF LLFF data (forward-facing scenes). In addition, editing samples are stored in the 'edits/'… See the full description on the dataset page: https://huggingface.co/datasets/Yuehao/bilarf_data.

  3. Shiny Dataset

    • paperswithcode.com
    Updated Mar 8, 2021
    Cite
    Suttisak Wizadwongsa; Pakkapon Phongthawee; Jiraphon Yenphraphai; Supasorn Suwajanakorn (2021). Shiny dataset Dataset [Dataset]. https://paperswithcode.com/dataset/shiny-dataset
    Authors
    Suttisak Wizadwongsa; Pakkapon Phongthawee; Jiraphon Yenphraphai; Supasorn Suwajanakorn
    Description

    The shiny folder contains 8 scenes with challenging view-dependent effects used in our paper. We also provide additional scenes in the shiny_extended folder. The test images for each scene used in our paper consist of one of every eight images in alphabetical order.

    Each scene contains the following directory structure:

      scene/
        dense/
          cameras.bin
          images.bin
          points3D.bin
          project.ini
        images/
          image_name1.png
          image_name2.png
          ...
          image_nameN.png
        images_distort/
          image_name1.png
          image_name2.png
          ...
          image_nameN.png
        sparse/
          cameras.bin
          images.bin
          points3D.bin
          project.ini
        database.db
        hwf_cxcy.npy
        planes.txt
        poses_bounds.npy

    dense/ contains COLMAP's output [1] after the input images are undistorted.
    images/ contains the undistorted images. (We use these images in our experiments.)
    images_distort/ contains the raw images taken from a smartphone.
    sparse/ contains COLMAP's sparse reconstruction output [1].

    Our poses_bounds.npy is similar to the LLFF [2] file format with a slight modification. This file stores an Nx14 numpy array, where N is the number of cameras. Each row is split into two parts of sizes 12 and 2. The first part, when reshaped into 3x4, represents the camera extrinsics (camera-to-world transformation); the second part stores the distances from that viewpoint to the first and last planes (near, far). These distances are computed automatically from the scene's statistics using LLFF's code.
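
    As a rough illustration of that layout, here is a minimal Python sketch (assuming numpy and a local copy of a scene's poses_bounds.npy; the path is hypothetical, not part of the dataset description):

      import numpy as np

      # Hypothetical path; point this at a downloaded scene's poses_bounds.npy.
      data = np.load("scene/poses_bounds.npy")   # shape (N, 14), one row per camera

      poses = data[:, :12].reshape(-1, 3, 4)     # 3x4 camera-to-world extrinsics per camera
      bounds = data[:, 12:]                      # (near, far) plane distances per camera

      print(poses.shape, bounds.shape)           # (N, 3, 4) and (N, 2)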

    hwf_cxcy.npy stores the camera intrinsics (height, width, focal length, principal point x, principal point y) in a 1x5 numpy array.

    planes.txt stores information about the MPI planes. The first two numbers are the distances from a reference camera to the first and last planes (near, far). The third number tells whether the planes are placed equidistantly in the depth space (0) or inverse depth space (1). The last number is the padding size in pixels on all four sides of each of the MPI planes. I.e., the total dimension of each plane is (H + 2 * padding, W + 2 * padding).
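
    Along the same lines, a minimal sketch for reading the intrinsics and plane information (again assuming numpy and local copies of the files; paths and variable names are illustrative):

      import numpy as np

      # hwf_cxcy.npy: 1x5 array of (height, width, focal, principal point x, principal point y)
      h, w, focal, cx, cy = np.load("scene/hwf_cxcy.npy").reshape(5)

      # planes.txt: near and far distances, depth (0) vs. inverse-depth (1) flag, padding in pixels
      near, far, inverse_depth, padding = np.loadtxt("scene/planes.txt").reshape(4)

      # Total dimension of each MPI plane, per the description above.
      plane_h, plane_w = h + 2 * padding, w + 2 * padding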

    References:

    [1] COLMAP structure from motion (Schönberger and Frahm, 2016).
    [2] Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines (Mildenhall et al., 2019).

  4. Quantitative Comparison on LLFF

    • figshare.com
    • plos.figshare.com
    Format: xls
    Updated May 13, 2025
    Cite
    Yongshuo Zhang; Guangyuan Zhang; Kefeng Li; Zhenfang Zhu; Peng Wang; Zhenfei Wang; Chen Fu; Xiaotong Li; Zhiming Fan; Yongpeng Zhao (2025). Quantitative Comparison on LLFF. Our proposed method outperforms other methods on real-world forward-facing scenes, ft indicates the results fine-tuned on each scene individually. [Dataset]. http://doi.org/10.1371/journal.pone.0321878.t001
    Dataset provided by
    PLOS ONE
    Authors
    Yongshuo Zhang; Guangyuan Zhang; Kefeng Li; Zhenfang Zhu; Peng Wang; Zhenfei Wang; Chen Fu; Xiaotong Li; Zhiming Fan; Yongpeng Zhao
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Quantitative Comparison on LLFF. Our proposed method outperforms other methods on real-world forward-facing scenes; "ft" indicates results fine-tuned on each scene individually.

  5. IReNe: Instant Recoloring of Radiance Fields dataset

    • ieee-dataport.org
    Updated Oct 15, 2024
    Cite
    Alessio Mazzucchelli (2024). IReNe: Instant Recoloring of Radiance Fields dataset [Dataset]. https://ieee-dataport.org/documents/irene-instant-recoloring-radiance-fields-dataset
    Authors
    Alessio Mazzucchelli
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    LLFF

  6. Ablation studies on LLFF with 3 input views

    • plos.figshare.com
    Format: xls
    Updated May 13, 2025
    Cite
    Yongshuo Zhang; Guangyuan Zhang; Kefeng Li; Zhenfang Zhu; Peng Wang; Zhenfei Wang; Chen Fu; Xiaotong Li; Zhiming Fan; Yongpeng Zhao (2025). Ablation studies. We perform ablation studies on LLFF with 3 input views, where DPT(V2) means more advanced depth priors, DOSL means dynamic optimal sampling layer and PII means per-Layer inputs incorporation. [Dataset]. http://doi.org/10.1371/journal.pone.0321878.t003
    Dataset provided by
    PLOS ONE
    Authors
    Yongshuo Zhang; Guangyuan Zhang; Kefeng Li; Zhenfang Zhu; Peng Wang; Zhenfei Wang; Chen Fu; Xiaotong Li; Zhiming Fan; Yongpeng Zhao
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Ablation studies. We perform ablation studies on LLFF with 3 input views, where DPT (V2) denotes more advanced depth priors, DOSL denotes the dynamic optimal sampling layer, and PII denotes per-layer inputs incorporation.

  7. Mip-NeRF 360 Dataset

    • paperswithcode.com
    Updated Jan 12, 2025
    Cite
    Jonathan T. Barron; Ben Mildenhall; Dor Verbin; Pratul P. Srinivasan; Peter Hedman (2025). Mip-NeRF 360 Dataset [Dataset]. https://paperswithcode.com/dataset/mip-nerf-360
    Authors
    Jonathan T. Barron; Ben Mildenhall; Dor Verbin; Pratul P. Srinivasan; Peter Hedman
    Description

    Mip-NeRF 360 is an extension of Mip-NeRF that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges posed by unbounded scenes. The dataset consists of 9 scenes (5 outdoor and 4 indoor), each containing a complex central object or area with a detailed background.

  8. NeRF Editing and SVG Editing Datasets

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    Juil Koo; Chanho Park; Minhyuk Sung (2024). NeRF Editing and SVG Editing Datasets [Dataset]. https://doi.org/10.57702/lwws045h. https://service.tib.eu/ldmservice/dataset/nerf-editing-and-svg-editing-datasets
    Description

    The dataset used in the paper is not described explicitly, but the authors state that they used real scenes as well as scenes from IN2N [9] and LLFF [29].

  9. NeRF Dataset

    • paperswithcode.com
    Updated Aug 5, 2022
    Cite
    Ben Mildenhall; Pratul P. Srinivasan; Matthew Tancik; Jonathan T. Barron; Ravi Ramamoorthi; Ren Ng (2022). NeRF Dataset [Dataset]. https://paperswithcode.com/dataset/nerf
    Authors
    Ben Mildenhall; Pratul P. Srinivasan; Matthew Tancik; Jonathan T. Barron; Ravi Ramamoorthi; Ren Ng
    Description

    Neural Radiance Fields (NeRF) is a method for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. The dataset contains three parts: the first two are synthetic renderings of objects, called Diffuse Synthetic 360° and Realistic Synthetic 360°, while the third consists of real images of complex scenes. Diffuse Synthetic 360° consists of four Lambertian objects with simple geometry, each rendered at 512x512 pixels from viewpoints sampled on the upper hemisphere. Realistic Synthetic 360° consists of eight objects with complicated geometry and realistic non-Lambertian materials; six are rendered from viewpoints sampled on the upper hemisphere and the remaining two from viewpoints sampled on a full sphere, all at 800x800 pixels. The real images of complex scenes consist of 8 forward-facing scenes captured with a cellphone at 1008x756 pixels.
