23 datasets found
  1. Test Blender Synth Data Dataset

    • universe.roboflow.com
    zip
    Updated Dec 14, 2023
    Cite
    Łukasz Kowalczyk (2023). Test Blender Synth Data Dataset [Dataset]. https://universe.roboflow.com/ukasz-kowalczyk/test-blender-synth-data/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 14, 2023
    Dataset authored and provided by
    Łukasz Kowalczyk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Geometric Shape Bounding Boxes
    Description

    Test Blender Synth Data

    ## Overview
    
    Test Blender Synth Data is a dataset for object detection tasks - it contains Geometric Shape annotations for 329 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  2. OB3D Dataset

    • kaggle.com
    Updated May 14, 2025
    Cite
    shintacs (2025). OB3D Dataset [Dataset]. https://www.kaggle.com/datasets/shintacs/ob3d-dataset
    Explore at:
    Available download formats: Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    May 14, 2025
    Dataset provided by
    Kaggle: http://kaggle.com/
    Authors
    shintacs
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Overview

    This repository provides the Omnidirectional Blender 3D (OB3D) Dataset , a dataset designed for 3D reconstruction from multi-view equirectangular images. In addition to 3D reconstruction, it also supports novel view synthesis and camera pose estimation for equirectangular images. This dataset consists of 12 scenes, each of which contains RGB images, depth maps, normal maps, camera parameters, and sparse 3D point clouds.

    Camera Trajectory Examples

    Dataset Structure

    OB3D contains 12 different scenes: archiviz-flat, barbershop, bistro, classroom, emerald-square, fisher-hut, lone-monk, pavillion, restroom, san-miguel, sponza, sun-temple. Each scene has the following directory structure:

    OB3D
    |-- archiviz-flat
    |  |-- Egocentric
    |    |--cameras
    |     |--00000_cam.json
    |     |--...
    |    |--depths
    |     |--00000_depth.exr
    |     |--...
    |    |--images
    |     |--00000_rgb.png
    |     |--...
    |    |--normals
    |     |--00000_normal.exr
    |     |--...
    |    |--sparse
    |     |--sparse.ply
    |    |--train.txt
    |    |--test.txt
    |
    |  |-- Non-Egocentric
    |    |-...
    
    • train.txt: Contains the indices of the viewpoints used for training (see the loading sketch below)
    • test.txt: Contains the indices of the viewpoints used for testing
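
    As a rough illustration, a split file could be consumed as follows. This is a sketch, not official OB3D code; it assumes one integer index per line in train.txt/test.txt and the zero-padded image naming shown in the tree above.

    import os

    def load_split(scene_dir, split="train"):
        # Read viewpoint indices (assumed one integer per line) and map them
        # to the corresponding RGB image paths, e.g. images/00000_rgb.png.
        with open(os.path.join(scene_dir, f"{split}.txt")) as f:
            indices = [int(line) for line in f if line.strip()]
        return [os.path.join(scene_dir, "images", f"{i:05d}_rgb.png") for i in indices]

    train_images = load_split("OB3D/archiviz-flat/Egocentric", "train")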

    Image Capture Type

    In OB3D, the images were captured along two different camera trajectories: Egocentric and Non-Egocentric.
    • Egocentric: images captured while moving the camera along a spiral path
    • Non-Egocentric: images captured while moving the camera freely through the environment

    Camera Format

    cameras/00000_cam.json contains both the camera extrinsic and intrinsic parameters; each entry is stored in the following format:

    [
     {
      "id": 0,
      "width": 1600,
      "height": 800,
      "intrinsics": {
       "focal": 800,
       "cx": 800,
       "cy": 400
      },
      "extrinsics": {
       "rotation": [...],
       "translation": [...]
      }
     }
    ]
    
    • rotation: 3x3 rotation matrix (world to camera)
    • translation: 3x1 translation vector (world to camera)
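
    A minimal loading sketch, assuming only the JSON layout shown above (field names taken from the example); this is not official OB3D code.

    import json
    import numpy as np

    def load_camera(path):
        # Each *_cam.json file holds a list of camera records; take the first one.
        with open(path) as f:
            cam = json.load(f)[0]
        intr = cam["intrinsics"]                                      # focal, cx, cy
        R = np.asarray(cam["extrinsics"]["rotation"])                 # 3x3, world -> camera
        t = np.asarray(cam["extrinsics"]["translation"]).reshape(3)   # world -> camera
        return intr, R, t

    intr, R, t = load_camera("OB3D/archiviz-flat/Egocentric/cameras/00000_cam.json")
    # A world point X maps to camera coordinates as R @ X + t.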

    Sparse Point Cloud

    OB3D provides sparse 3D point clouds (sparse.ply) obtained from OpenMVG for each scene. These point clouds are reconstructed using ground-truth camera parameters and can be used for initializing Gaussian Splatting or as preprocessing for SDF-based methods.
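
    For example, the sparse cloud can be inspected with Open3D (one option among many; this sketch is not part of the dataset's own tooling):

    import numpy as np
    import open3d as o3d

    # Load the OpenMVG sparse reconstruction as an (N, 3) array, e.g. to seed
    # Gaussian Splatting with initial point positions.
    pcd = o3d.io.read_point_cloud("OB3D/archiviz-flat/Egocentric/sparse/sparse.ply")
    points = np.asarray(pcd.points)
    print(points.shape)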

    Evaluation of reconstructed mesh

    We evaluate the reconstructed mesh using ground-truth depth maps, which are rendered from ground truth 3D mesh. A straightforward way to evaluate the quality of a reconstructed mesh is to measure the distance between the reconstructed mesh and the ground-truth mesh. However, 3D models created using software like Blender are often designed to appear plausible only from specific viewpoints, and as a result, the geometry of occluded or unseen regions may be inaccurate or incomplete. Moreover, certain areas may not be visible in the images at all, making accurate reconstruction in those regions inherently impossible. To deal with these problems and ensure a fair evaluation, we compare depth maps rendered from the reconstructed mesh with those rendered from the ground-truth model.

    Once the mesh is reconstructed in the same scale and coordinate system as the ground truth, our evaluation code can be used to quantitatively assess its quality. The evaluation code takes the reconstructed mesh and the ground-truth camera parameters from the OB3D dataset as input, renders depth maps from the mesh, and then compares these rendered depth maps to the corresponding ground-truth depth maps to compute quantitative metrics. The evaluation code is available at GitHub Page of OB3D.
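
    The exact metrics are defined in the official evaluation code linked above; the sketch below only illustrates the general idea of comparing a depth map rendered from the reconstructed mesh against the ground-truth depth map over valid pixels, using mean absolute error as a stand-in metric.

    import numpy as np

    def depth_mae(rendered, ground_truth):
        # Compare depth maps only where both are valid (finite and positive).
        valid = np.isfinite(rendered) & np.isfinite(ground_truth)
        valid &= (rendered > 0) & (ground_truth > 0)
        return float(np.mean(np.abs(rendered[valid] - ground_truth[valid])))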

    Additional Information

    About scale and coordinate system

    When using SDF-based methods like NeuS, it may be necessary to transform the scene into a normalized space, such as fitting it into a unit sphere, which alters the scale and coordinate system. In such cases, we recommend saving the transformation parameters so that the mesh can be converted back to the original coordinate system and scale for evaluation.
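
    A sketch of that recommendation, assuming a simple center-and-scale normalization into a unit sphere; the point is to keep (center, scale) so the reconstructed mesh can be mapped back before evaluation.

    import numpy as np

    def normalize_points(points):
        # Fit the scene into a unit sphere and remember the transform.
        center = points.mean(axis=0)
        scale = np.linalg.norm(points - center, axis=1).max()
        return (points - center) / scale, center, scale

    def denormalize_points(points, center, scale):
        # Map reconstructed geometry back to the original coordinate system.
        return points * scale + center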

    Blender Projects

    The following are the sources of the Blender projects used for dataset creation. We would like to thank the creators of each project.

    | Scene | URL |
    | --- | --- |
    | archiviz-flat | https://www.blender.org/download/demo-files/ |
    | barbershop | https://www.blender.org/download/demo-files/ |
    | classroom | https://www.blender.org/download/demo-files/ |
    | restroom | https://blendswap.com/blend/14216 |
    | sun-temple | https://developer.nvidia.com/ue4-sun-temple |
    | bistro | https://developer.nvidia.com/orca/amazon-lumberyard-bistro |
    | emerald-square | https://developer.nvidia.com/orca/nvidia-emerald-square |
    | fisher-hut | https://www.blendswap.com/blend/3... |

  3. MatSim Dataset and benchmark for one-shot visual materials and textures...

    • zenodo.org
    • data.niaid.nih.gov
    pdf, zip
    Updated Jun 25, 2025
    Cite
    Manuel S. Drehwald; Sagi Eppel; Jolina Li; Han Hao; Alan Aspuru-Guzik (2025). MatSim Dataset and benchmark for one-shot visual materials and textures recognition [Dataset]. http://doi.org/10.5281/zenodo.7390166
    Explore at:
    Available download formats: zip, pdf
    Dataset updated
    Jun 25, 2025
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Manuel S. Drehwald; Sagi Eppel; Jolina Li; Han Hao; Alan Aspuru-Guzik
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The MatSim Dataset and benchmark

    Latest version

    Synthetic dataset and real images benchmark for visual similarity recognition of materials and textures.

    MatSim: a synthetic dataset, a benchmark, and a method for computer vision-based recognition of similarities and transitions between materials and textures focusing on identifying any material under any conditions using one or a few examples (one-shot learning).

    Based on the paper: One-shot recognition of any material anywhere using contrastive learning with physics-based rendering

    Benchmark_MATSIM.zip: contains the benchmark made of real-world images, as described in the paper.



    MatSim_object_train_split_1,2,3.zip: Contains a subset of the synthetic dataset: CGI images of materials on random objects, as described in the paper.

    MatSim_Vessels_Train_1,2,3.zip: Contains a subset of the synthetic dataset: CGI images of materials inside transparent containers, as described in the paper.

    *Note: these are subsets of the dataset; the full dataset can be found at:
    https://e1.pcloud.link/publink/show?code=kZIiSQZCYU5M4HOvnQykql9jxF4h0KiC5MX

    or
    https://icedrive.net/s/A13FWzZ8V2aP9T4ufGQ1N3fBZxDF

    Code:

    Up-to-date code for generating the dataset, reading and evaluating it, and trained nets can be found at this URL: https://github.com/sagieppel/MatSim-Dataset-Generator-Scripts-And-Neural-net

    Dataset Generation Scripts.zip: Contains the Blender (3.1) Python scripts used for generating the dataset. This code might be old; up-to-date code can be found in the repository linked above.
    Net_Code_And_Trained_Model.zip: Contains reference neural network code, including loaders, trained models, and evaluation scripts that can be used to read and train with the synthetic dataset or to test the model with the benchmark. Note that the code in the ZIP file is not up to date and contains some bugs; for the latest version of this code, see the repository linked above.

    Further documentation can be found inside the zip files or in the paper.

  4. mix-instruct

    • huggingface.co
    • opendatalab.com
    Updated Nov 13, 2024
    Cite
    LLM Blender (2024). mix-instruct [Dataset]. https://huggingface.co/datasets/llm-blender/mix-instruct
    Explore at:
    Available download formats: Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Nov 13, 2024
    Dataset authored and provided by
    LLM Blender
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    MixInstruct

      Introduction
    

    This is the official release of the MixInstruct dataset for the LLM-Blender project. This dataset contains 11 responses from currently popular instruction-following LLMs, including:

    Stanford Alpaca, FastChat Vicuna, Dolly V2, StableLM, Open Assistant, Koala, Baize, Flan-T5, ChatGLM, MOSS, and Mosaic MPT.

    We evaluate each response with automatic metrics including BLEU, ROUGE, BERTScore, and BARTScore, and provide pairwise comparison results by prompting ChatGPT for the… See the full description on the dataset page: https://huggingface.co/datasets/llm-blender/mix-instruct.

  5. Data from: Blenderproc Dataset

    • universe.roboflow.com
    zip
    Updated Jun 22, 2024
    Cite
    blender (2024). Blenderproc Dataset [Dataset]. https://universe.roboflow.com/blender-1oo0s/blenderproc-pdfuc
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 22, 2024
    Dataset provided by
    Blender Foundation: https://blender.org/foundation/
    Authors
    blender
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Conector Tapa Bounding Boxes
    Description

    Blenderproc

    ## Overview
    
    Blenderproc is a dataset for object detection tasks - it contains Conector Tapa annotations for 230 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. RailEnV-PASMVS: a dataset for multi-view stereopsis training and...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 18, 2024
    + more versions
    Cite
    Petrus Johannes Gräbe (2024). RailEnV-PASMVS: a dataset for multi-view stereopsis training and reconstruction applications [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5202742
    Explore at:
    Dataset updated
    Jul 18, 2024
    Dataset provided by
    André Broekman
    Petrus Johannes Gräbe
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A Perfectly Accurate, Synthetic dataset featuring a virtual railway EnVironment for Multi-View Stereopsis (RailEnV-PASMVS) is presented, consisting of 40 scenes and 79,800 renderings together with ground truth depth maps, extrinsic and intrinsic camera parameters and binary segmentation masks of all the track components and surrounding environment. Every scene is rendered from a set of 3 cameras, each positioned relative to the track for optimal 3D reconstruction of the rail profile. The set of cameras is translated across the 100-meter length of tangent (straight) track to yield a total of 1,995 camera views. Photorealistic lighting of each of the 40 scenes is achieved with the implementation of high-definition, high dynamic range (HDR) environmental textures. Additional variation is introduced in the form of camera focal lengths, random noise for the camera location and rotation parameters and shader modifications of the rail profile. Representative track geometry data is used to generate random and unique vertical alignment data for the rail profile for every scene. This primary, synthetic dataset is augmented by a smaller image collection consisting of 320 manually annotated photographs for improved segmentation performance. The specular rail profile represents the most challenging component for MVS reconstruction algorithms, pipelines and neural network architectures, increasing the ambiguity and complexity of the data distribution. RailEnV-PASMVS represents an application specific dataset for railway engineering, against the backdrop of existing datasets available in the field of computer vision, providing the precision required for novel research applications in the field of transportation engineering.

    File descriptions

    RailEnV-PASMVS.blend (227 Mb) - Blender file (developed using Blender version 2.8.1) used to generate the dataset. The Blender file packs only one of the HDR environmental textures to use as an example, along with all the other asset textures.

    RailEnV-PASMVS_sample.png (28 Mb) - A visual collage of 30 scenes, illustrating the variability introduced by using different models, illumination, material properties and camera focal lengths.

    geometry.zip (2 Mb) - Geometry CSV files used for scenes 01 to 20. The Bezier curve defines the geometry of the rail profile (10 mm intervals).

    PhysicalDataset.7z (2.0 Gb) - A smaller, secondary dataset of 320 manually annotated photographs of railway environments; only the railway profiles are annotated.

    01.7z-20.7z (2.0 Gb each) - Archive of each scene (01 through 20).

    all_list.txt, training_list.txt, validation_list.txt - Text files containing all the scene names, together with those used for validation (validation_list.txt) and training (training_list.txt), as used by MVSNet.

    index.csv - CSV file providing a convenient reference for all the sample files, linking each file to its relative data path.

    NOTE: Only 20 of the original 40 scenes are made available owing to size limitations of the data repository. This is still adequate for the purposes of training MVS neural networks. The Blender file is made available specifically to render out the scenes for different applications or adapt the camera framework altogether for different applications. Please refer to the corresponding manuscript for additional details.

    Steps to reproduce

    The open source Blender software suite (https://www.blender.org/) was used to generate the dataset, with the entire pipeline developed using the exposed Python API interface. The camera trajectory is kept fixed for all 40 scenes, except for small perturbations introduced in the form of random noise to increase the camera variation. The camera intrinsic information was initially exported as a single CSV file (scene.csv) for every scene, from which the camera information files were generated; this includes the focal length (focalLengthmm), image sensor dimensions (pixelDimensionX, pixelDimensionY), position coordinate vector (vectC) and rotation vector (vectR). The STL model files, as provided in this data repository, were exported directly from Blender, such that the geometry/scenes can be reproduced. The data processing below is written for a Python implementation, transforming the information from Blender's coordinate system into universal rotation (R_world2cv) and translation (T_world2cv) matrices.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    The intrinsic matrix K is constructed using the following formulation:

    focalLengthPixel = focalLengthmm * pixelDimensionX / sensorWidthmm
    K = [[focalLengthPixel, 0, dimX/2],
         [0, focalLengthPixel, dimY/2],
         [0, 0, 1]]

    The rotation vector as provided by Blender was first transformed to a rotation matrix:

    r = R.from_euler('xyz', vectR, degrees=True)
    matR = r.as_matrix()

    Transpose the rotation matrix to find the matrix from the WORLD to the BLENDER camera coordinate system:

    R_world2bcam = np.transpose(matR)

    The matrix describing the transformation from BLENDER to CV/STANDARD coordinates is:

    R_bcam2cv = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])

    Thus the representation from WORLD to CV/STANDARD coordinates is:

    R_world2cv = R_bcam2cv.dot(R_world2bcam)

    The camera coordinate vector requires a similar transformation moving from BLENDER to WORLD coordinates:

    T_world2bcam = -1 * R_world2bcam.dot(vectC)
    T_world2cv = R_bcam2cv.dot(T_world2bcam)

    The resulting R_world2cv and T_world2cv matrices are written to the camera information file using exactly the same format as that of BlendedMVS developed by Dr. Yao. The original rotation and translation information can be found by following the process in reverse. Note that additional steps were required to convert from Blender's unique coordinate system to that of OpenCV; this ensures universal compatibility in the way that the camera intrinsic and extrinsic information is provided.
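
    Putting the steps above together, a minimal sketch (not the authors' pipeline) might look as follows; the argument names mirror the scene.csv fields described in the text (focalLengthmm, pixelDimensionX, pixelDimensionY, vectC, vectR), and sensorWidthmm is assumed to be taken from the Blender camera settings.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def blender_to_cv(focalLengthmm, sensorWidthmm, pixelDimensionX, pixelDimensionY, vectC, vectR):
        # Intrinsic matrix K, with the focal length converted from mm to pixels.
        focalLengthPixel = focalLengthmm * pixelDimensionX / sensorWidthmm
        K = np.array([[focalLengthPixel, 0, pixelDimensionX / 2],
                      [0, focalLengthPixel, pixelDimensionY / 2],
                      [0, 0, 1]])

        # Euler angles (degrees) -> rotation matrix, then WORLD -> BLENDER camera.
        R_world2bcam = R.from_euler('xyz', vectR, degrees=True).as_matrix().T

        # BLENDER camera axes -> CV/STANDARD camera axes.
        R_bcam2cv = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])
        R_world2cv = R_bcam2cv.dot(R_world2bcam)

        # Camera position -> translation vector in CV coordinates.
        T_world2bcam = -1 * R_world2bcam.dot(np.asarray(vectC))
        T_world2cv = R_bcam2cv.dot(T_world2bcam)
        return K, R_world2cv, T_world2cv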

    Equivalent GPS information is provided (gps.csv), whereby the local coordinate frame is transformed into equivalent GPS information, centered around the Engineering 4.0 campus, University of Pretoria, South Africa. This information is embedded within the JPG files as EXIF data.

  7. 6DOF pose estimation - synthetically generated dataset using BlenderProc

    • search.dataone.org
    • data.niaid.nih.gov
    • +1more
    Updated Jul 11, 2025
    Cite
    Divyam Sheth (2025). 6DOF pose estimation - synthetically generated dataset using BlenderProc [Dataset]. http://doi.org/10.5061/dryad.rbnzs7hj5
    Explore at:
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Divyam Sheth
    Time period covered
    Jan 1, 2023
    Description

    Accurate and robust 6DOF (Six Degrees of Freedom) pose estimation is a critical task in various fields, including computer vision, robotics, and augmented reality. This research paper presents a novel approach to enhance the accuracy and reliability of 6DOF pose estimation by introducing a robust method for generating synthetic data and leveraging the ease of multi-class training using the generated dataset. The proposed method tackles the challenge of insufficient real-world annotated data by creating a large and diverse synthetic dataset that accurately mimics real-world scenarios. The proposed method only requires a CAD model of the object and there is no limit to the number of unique data that can be generated. Furthermore, a multi-class training strategy that harnesses the synthetic dataset's diversity is proposed and presented. This approach mitigates class imbalance issues and significantly boosts accuracy across varied object classes and poses. Experimental results underscore th...

    This dataset has been synthetically generated using 3D software like Blender and APIs like BlenderProc.

    # Data Repository README

    This repository contains data organized into a structured format. The data consists of three main folders and two files, each serving a specific purpose. The data contains two folders - Cat and Hand.

    Cat Dataset: 63,492 labeled samples with images, masks, and poses.

    Hand Dataset: 42,418 labeled samples with images, masks, and poses.

    Usage: The dataset is ready for use by simply extracting the contents of the zip file, whether for training in a segmentation task or a pose estimation task.

    To view .npy files you will need to use Python with the numpy package installed. In Python use the following commands.

    import numpy
    data = numpy.load('file.npy')
    print(data)

    What free/open software is appropriate for viewing the .ply files?
    These files can be opened using any 3D modeling software like Blender, Meshlab, etc.

    Camera intrinsics matrix format:

    Fx  0   px
    0   Fy  py
    0   0   1
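
    As a small illustration, the intrinsics can be assembled into a 3x3 NumPy array; the numeric values below are placeholders, not values from the dataset.

    import numpy as np

    Fx, Fy = 800.0, 800.0   # focal lengths in pixels (placeholder values)
    px, py = 320.0, 240.0   # principal point (placeholder values)
    K = np.array([[Fx, 0.0, px],
                  [0.0, Fy, py],
                  [0.0, 0.0, 1.0]])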

    Below is an overview of the data organization:

    Folder Structure

    1. Rgb:
      • This ...
  8. XSLT Blender

    • catalog.data.gov
    • data.nist.gov
    Updated Jul 9, 2025
    Cite
    National Institute of Standards and Technology (2025). XSLT Blender [Dataset]. https://catalog.data.gov/dataset/xslt-blender
    Explore at:
    Dataset updated
    Jul 9, 2025
    Dataset provided by
    National Institute of Standards and Technology: http://www.nist.gov/
    Description

    Demonstrations and utilities using XSLT in the web browser. XSLT 1.0 is supported with no dependencies on external libraries. Source code is in XML, XSLT, JavaScript and TypeScript.

  9. PairRM-2.7B-data

    • huggingface.co
    Updated Mar 22, 2024
    Cite
    LLM Blender (2024). PairRM-2.7B-data [Dataset]. https://huggingface.co/datasets/llm-blender/PairRM-2.7B-data
    Explore at:
    Dataset updated
    Mar 22, 2024
    Dataset provided by
    Blender Foundation: https://blender.org/foundation/
    Authors
    LLM Blender
    Description

    The llm-blender/PairRM-2.7B-data dataset is hosted on Hugging Face and contributed by the HF Datasets community.

  10. Replication Data for ReViBE: protocol for Refit Visualisation of lithic...

    • b2find.eudat.eu
    Updated Oct 25, 2024
    + more versions
    Cite
    (2024). Replication Data for ReViBE: protocol for Refit Visualisation of lithic reduction sequences using the Blender Engine - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/cba90160-d708-5d83-a951-1d8aab67f204
    Explore at:
    Dataset updated
    Oct 25, 2024
    Description

    This dataset contains the raw data generated by the technological study and registration of the photogrammetry and 3D modelling of the refit set nº 41 of the 497D level of the Cova Gran de Santa Linya (south-eastern Pyrenees, Iberian Peninsula). The refit set is composed of 9 pieces that make up 4 morphometrically distinct artefacts. The physical characteristics of the pieces are tabulated in the database, the relationship between the pieces can be consulted in the flowchart. The Materials to reproduce the process folder contains the files that make it possible to reproduce the process described in the protocol. An example of the files obtained in photogrammetry is included (example piece). The other subfolders contain the 3D data files (.obj, .mtl; .jpg) that allow the virtual models and the animation sequence to be reconstructed. In addition, two videos show the synthesis of the process with the main steps of the process. Universitat Autònoma de Barcelona. Centre d'Estudis del Patrimoni Arqueològic de la Prehistòria (CEPAP-UAB)

  11. Unimelb Corridor Synthetic dataset

    • figshare.unimelb.edu.au
    png
    Updated May 30, 2023
    Cite
    Debaditya Acharya; KOUROSH KHOSHELHAM; STEPHAN WINTER (2023). Unimelb Corridor Synthetic dataset [Dataset]. http://doi.org/10.26188/5dd8b8085b191
    Explore at:
    Available download formats: png
    Dataset updated
    May 30, 2023
    Dataset provided by
    The University of Melbourne
    Authors
    Debaditya Acharya; KOUROSH KHOSHELHAM; STEPHAN WINTER
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data-set is supplementary material related to the generation of synthetic images of a corridor at the University of Melbourne, Australia, from a building information model (BIM). The data-set was generated to check the ability of deep learning algorithms to learn the task of indoor localisation from synthetic images when being tested on real images.

    The following is the naming convention used for the data-sets. The brackets show the number of images in each data-set.

    REAL DATA
    • Real ---------------------> Real images (949 images)
    • Gradmag-Real -------> Gradmag of real data (949 images)

    SYNTHETIC DATA
    • Syn-Car ----------------> Cartoonish images (2500 images)
    • Syn-pho-real ----------> Synthetic photo-realistic images (2500 images)
    • Syn-pho-real-tex -----> Synthetic photo-realistic textured images (2500 images)
    • Syn-Edge --------------> Edge render images (2500 images)
    • Gradmag-Syn-Car ---> Gradmag of Cartoonish images (2500 images)

    Each folder contains the images and their respective groundtruth poses in the following format: [ImageName X Y Z w p q r] (a parsing sketch is given after this description).

    To generate the synthetic data-set, we define a trajectory in the 3D indoor model. The points in the trajectory serve as the ground truth poses of the synthetic images. The height of the trajectory was kept in the range of 1.5–1.8 m from the floor, which is the usual height of holding a camera in hand. Artificial point light sources were placed to illuminate the corridor (except for Edge render images). The length of the trajectory was approximately 30 m. A virtual camera was moved along the trajectory to render four different sets of synthetic images in Blender*. The intrinsic parameters of the virtual camera were kept identical to the real camera (VGA resolution, focal length of 3.5 mm, no distortion modelled). We have rendered images along the trajectory at 0.05 m intervals and ± 10° tilt.

    The main difference between the cartoonish (Syn-Car) and photo-realistic images (Syn-pho-real) is the model of rendering. Photo-realistic rendering is a physics-based model that traces the path of light rays in the scene, which is similar to the real world, whereas the cartoonish rendering only roughly traces the path of light rays. The photo-realistic textured images (Syn-pho-real-tex) were rendered by adding repeating synthetic textures to the 3D indoor model, such as the textures of brick, carpet and wooden ceiling. The realism of the photo-realistic rendering comes at the cost of rendering times; however, the rendering times of the photo-realistic data-sets were considerably reduced with the help of a GPU. Note that the naming convention used for the data-sets (e.g. Cartoonish) follows Blender terminology.

    An additional data-set (Gradmag-Syn-Car) was derived from the cartoonish images by taking the edge gradient magnitude of the images and suppressing weak edges below a threshold. The edge rendered images (Syn-Edge) were generated by rendering only the edges of the 3D indoor model, without taking into account the lighting conditions. This data-set is similar to the Gradmag-Syn-Car data-set; however, it does not contain the effects of illumination of the scene, such as reflections and shadows.

    *Blender is an open-source 3D computer graphics software and finds its applications in video games, animated films, simulation and visual art. For more information please visit: http://www.blender.org

    Please cite the following papers if you use the data-set:

    1) Acharya, D., Khoshelham, K., and Winter, S., 2019. BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images. ISPRS Journal of Photogrammetry and Remote Sensing, 150: 245-258.

    2) Acharya, D., Singha Roy, S., Khoshelham, K. and Winter, S., 2019. Modelling uncertainty of single image indoor localisation using a 3D model and deep learning. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, IV-2/W5, pages 247-254.
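
    A minimal parsing sketch for the pose files, assuming one whitespace-separated [ImageName X Y Z w p q r] entry per line and that (w, p, q, r) are quaternion orientation components; the file name below is hypothetical.

    def parse_pose_line(line):
        fields = line.split()
        name = fields[0]
        position = tuple(map(float, fields[1:4]))      # X, Y, Z
        orientation = tuple(map(float, fields[4:8]))   # w, p, q, r (assumed quaternion)
        return name, position, orientation

    with open("Syn-pho-real/poses.txt") as f:          # hypothetical file name
        poses = [parse_pose_line(line) for line in f if line.strip()]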

  12. BL30K

    • databank.illinois.edu
    Updated Nov 15, 2024
    + more versions
    Cite
    Ho Kei Cheng (2024). BL30K [Dataset]. http://doi.org/10.13012/B2IDB-1702934_V1
    Explore at:
    Dataset updated
    Nov 15, 2024
    Authors
    Ho Kei Cheng
    Description

    BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in a similar format to DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768×512. There are 3-5 objects per video, and each object has a random smooth trajectory; we tried to optimize the trajectories in a greedy fashion to minimize object intersection (not guaranteed), with occlusions still possible (they happen a lot in reality). See [Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS), CVPR 2022] for details.

  13. Multimodal3DIdent

    • data.niaid.nih.gov
    Updated Mar 29, 2023
    Cite
    Alice Bizeul (2023). Multimodal3DIdent [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7678230
    Explore at:
    Dataset updated
    Mar 29, 2023
    Dataset provided by
    Alexander Marx
    Julia E. Vogt
    Imant Daunhawer
    Alice Bizeul
    Emanuele Palumbo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This upload contains the Multimodal3DIdent dataset introduced in the paper Identifiability Results for Multimodal Contrastive Learning presented at ICLR 2023. The dataset provides an identifiability benchmark with image/text pairs generated from controllable ground truth factors, some of which are shared between image and text modalities. The training, validation, and test sets contain 125000, 10000, and 10000 image/text pairs and ground truth factors, respectively. The code for the data generation is publicly available: https://github.com/imantdaunhawer/Multimodal3DIdent.

    Description

    The generated dataset contains image and text data as well as the ground truth factors of variation for each modality. Each split (train/val/test) of the dataset is structured as follows:

    .
    ├── images
    │   ├── 000000.png
    │   ├── 000001.png
    │   └── etc.
    ├── text
    │   └── text_raw.txt
    ├── latents_image.csv
    └── latents_text.csv

    The directories images and text contain the generated image and text data, whereas the CSV files latents_image.csv and latents_text.csv contain the values of the respective latent factors. There is an index-wise correspondence between images, sentences, and latent factors. For example, the first line in the file text_raw.txt is the sentence that corresponds to the first image in the images directory.
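
    A minimal pairing sketch, assuming only the split layout shown above (the split directory name is hypothetical); pandas reads the latent CSV files and pairing is done by row/line index.

    import pandas as pd

    split = "train"  # hypothetical split directory
    latents_image = pd.read_csv(f"{split}/latents_image.csv")
    latents_text = pd.read_csv(f"{split}/latents_text.csv")
    with open(f"{split}/text/text_raw.txt") as f:
        sentences = [line.rstrip("\n") for line in f]

    idx = 0  # index-wise correspondence between images, sentences, and latents
    sample = {
        "image_path": f"{split}/images/{idx:06d}.png",
        "text": sentences[idx],
        "image_latents": latents_image.iloc[idx].to_dict(),
        "text_latents": latents_text.iloc[idx].to_dict(),
    }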

    Latent factors: We use the following ground truth latent factors to generate image and text data. Each factor is sampled from a uniform distribution defined on the specified set of values for the respective factor.

    | Modality | Latent Factor | Values | Details |
    | --- | --- | --- | --- |
    | Image | Object shape | {0, 1, ..., 6} | Mapped to Blender shapes like "Teapot", "Hare", etc. |
    | Image | Object x-position | {0, 1, 2} | Mapped to {-3, 0, 3} for Blender |
    | Image | Object y-position | {0, 1, 2} | Mapped to {-3, 0, 3} for Blender |
    | Image | Object z-position | {0} | Constant |
    | Image | Object alpha-rotation | [0, 1]-interval | Linearly transformed to [-pi/2, pi/2] for Blender |
    | Image | Object beta-rotation | [0, 1]-interval | Linearly transformed to [-pi/2, pi/2] for Blender |
    | Image | Object gamma-rotation | [0, 1]-interval | Linearly transformed to [-pi/2, pi/2] for Blender |
    | Image | Object color | [0, 1]-interval | Hue value in HSV transformed to RGB for Blender |
    | Image | Spotlight position | [0, 1]-interval | Transformed to a unique position on a semicircle |
    | Image | Spotlight color | [0, 1]-interval | Hue value in HSV transformed to RGB for Blender |
    | Image | Background color | [0, 1]-interval | Hue value in HSV transformed to RGB for Blender |
    | Text | Object shape | {0, 1, ..., 6} | Mapped to strings like "teapot", "hare", etc. |
    | Text | Object x-position | {0, 1, 2} | Mapped to strings "left", "center", "right" |
    | Text | Object y-position | {0, 1, 2} | Mapped to strings "top", "mid", "bottom" |
    | Text | Object color | string values | Color names from 3 different color palettes |
    | Text | Text phrasing | {0, 1, ..., 4} | Mapped to 5 different English sentences |

    Image rendering: We use the Blender rendering engine to create visually complex images depicting a 3D scene. Each image in the dataset shows a colored 3D object of a certain shape or class (i.e., teapot, hare, cow, armadillo, dragon, horse, or head) in front of a colored background and illuminated by a colored spotlight that is focused on the object and located on a semicircle above the scene. The resulting RGB images are of size 224 x 224 x 3.

    Text generation: We generate a short sentence describing the respective scene. Each sentence describes the object's shape or class (e.g., teapot), position (e.g., bottom-left), and color. The color is represented in a human-readable form (e.g., "lawngreen", "xkcd:bright aqua", etc.) as the name of the color (from a randomly sampled palette) that is closest to the sampled color value in RGB space. The sentence is constructed from one of five pre-configured phrases with placeholders for the respective ground truth factors.

    Relation between modalities: Three latent factors (object shape, x-position, y-position) are shared between image/text pairs. The object color also exhibits a dependence between modalities; however, it is not a 1-to-1 correspondence because the color palette is sampled randomly from a set of multiple palettes. Additionally, there is a causal dependence of object color on object x-position since the range of hue values [0, 1] is split into three equally sized intervals, each of which is associated with a fixed x-position of the object. For instance, if x-position is “left”, we sample the hue value from the interval [0, 1/3]. Consequently, the color of the object can be predicted to some degree from the object's position.
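
    A sketch of that dependence (not the official generation code; the saturation and value of 1.0 are arbitrary choices here): the hue range [0, 1] is split into three equal intervals, one per x-position, before converting HSV to RGB.

    import random
    import colorsys

    def sample_object_color(x_position):
        # x_position in {0, 1, 2} selects the hue interval, e.g. "left" -> [0, 1/3].
        hue = random.uniform(x_position / 3.0, (x_position + 1) / 3.0)
        return colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # RGB triple for Blender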

    Acknowledgements

    The Multimodal3DIdent dataset builds on the following resources:
    • 3DIdent dataset
    • Causal3DIdent dataset
    • CLEVR dataset
    • Blender open-source 3D creation suite

  14. Dataset of book subjects that contain Procedural 3D modeling using geometry...

    • workwithdata.com
    Updated Nov 7, 2024
    Cite
    Work With Data (2024). Dataset of book subjects that contain Procedural 3D modeling using geometry nodes in blender : discover the professional usage of geometry nodes and develop a creative approach to a node-based workflow [Dataset]. https://www.workwithdata.com/datasets/book-subjects?f=1&fcol0=j0-book&fop0=%3D&fval0=Procedural+3D+modeling+using+geometry+nodes+in+blender+%3A+discover+the+professional+usage+of+geometry+nodes+and+develop+a+creative+approach+to+a+node-based+workflow&j=1&j0=books
    Explore at:
    Dataset updated
    Nov 7, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about book subjects. It has 2 rows and is filtered where the book is Procedural 3D modeling using geometry nodes in blender : discover the professional usage of geometry nodes and develop a creative approach to a node-based workflow. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.

  15. pickup-blender

    • huggingface.co
    Updated May 14, 2025
    Cite
    AutoBio Benchmark (2025). pickup-blender [Dataset]. https://huggingface.co/datasets/autobio-bench/pickup-blender
    Explore at:
    Dataset updated
    May 14, 2025
    Dataset authored and provided by
    AutoBio Benchmark
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset was created using LeRobot.

      Dataset Structure
    

    meta/info.json: { "codebase_version": "v2.0", "robot_type": null, "total_episodes": 100, "total_frames": 49959, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:100"}, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/autobio-bench/pickup-blender.
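
    A small sketch of how the path template in meta/info.json resolves to a data file; the chunk arithmetic (episode_index // chunks_size) is an assumption based on the fields shown above, not taken from the LeRobot documentation.

    import json

    with open("meta/info.json") as f:
        info = json.load(f)

    episode_index = 0
    episode_chunk = episode_index // info["chunks_size"]  # assumed chunking rule
    parquet_path = info["data_path"].format(episode_chunk=episode_chunk,
                                            episode_index=episode_index)
    print(parquet_path)  # data/chunk-000/episode_000000.parquet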

  16. screw_loose-blender

    • huggingface.co
    Updated May 14, 2025
    + more versions
    Cite
    AutoBio Benchmark (2025). screw_loose-blender [Dataset]. https://huggingface.co/datasets/autobio-bench/screw_loose-blender
    Explore at:
    Dataset updated
    May 14, 2025
    Dataset authored and provided by
    AutoBio Benchmark
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset was created using LeRobot.

      Dataset Structure
    

    meta/info.json: { "codebase_version": "v2.0", "robot_type": null, "total_episodes": 100, "total_frames": 136834, "total_tasks": 1, "total_videos": 300, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:100"}, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/autobio-bench/screw_loose-blender.

  17. Data from: The walking dead: blender as a tool for palaeontologists with a...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    avi, bin, pdf
    Updated Jul 19, 2024
    Cite
    Russell Garwood; Jason Dunlop (2024). Data from: The walking dead: blender as a tool for palaeontologists with a case study on extinct arachnids [Dataset]. http://doi.org/10.5061/dryad.1v2s7
    Explore at:
    Available download formats: pdf, avi, bin
    Dataset updated
    Jul 19, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Russell Garwood; Jason Dunlop
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This paper serves two roles. First, it acts as an introduction to Blender, an open-source computer graphics program, which can be of utility to paleontologists. To lessen the software's otherwise steep learning curve, a step-by-step guide to create an idealized reconstruction of a fossil in the form of a three-dimensional model in Blender, or to use the software to render results from 'virtual paleontology' techniques, is provided as an online supplemental data file. Second, here we demonstrate the use of Blender with a case study on the extinct trigonotarbid arachnids. We report the limb articulations of members of the Devonian genus Palaeocharinus on the basis of exceptionally preserved fossils from the Rhynie Cherts of Scotland. We use these newly reported articulations to create a Blender model, and draw comparisons with the gait of extant arachnids to produce as accurate a representation of the trigonotarbid flexing its limbs and walking as possible, presented in additional online supplemental data files. Knowledge of the limb articulations of trigonotarbid arachnids also allows us to discuss their functional morphology: trigonotarbids' limbs and gait were likely comparable to extant cursorial spiders, but lacked some innovations seen in more derived arachnids.

  18. Blender

    • neuinfo.org
    • scicrunch.org
    • +2more
    Updated Oct 16, 2019
    Cite
    (2019). Blender [Dataset]. http://identifiers.org/RRID:SCR_008606
    Explore at:
    Dataset updated
    Oct 16, 2019
    Description

    Blender is a free, open-source 3D content creation suite, available for all major operating systems under the GNU General Public License. Because of the overwhelming success of the first open movie project, Ton Roosendaal, the Blender Foundation's chairman, established the Blender Institute. This is now the permanent office and studio used to more efficiently organize the Blender Foundation's goals, and especially to coordinate and facilitate Open Projects related to 3D movies, games or visual effects.

  19. thermal_cycler_close-blender

    • huggingface.co
    Updated May 14, 2025
    + more versions
    Cite
    AutoBio Benchmark (2025). thermal_cycler_close-blender [Dataset]. https://huggingface.co/datasets/autobio-bench/thermal_cycler_close-blender
    Explore at:
    Dataset updated
    May 14, 2025
    Dataset authored and provided by
    AutoBio Benchmark
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset was created using LeRobot.

      Dataset Structure
    

    meta/info.json: { "codebase_version": "v2.0", "robot_type": null, "total_episodes": 100, "total_frames": 106292, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:100" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/autobio-bench/thermal_cycler_close-blender.

  20. S1 Data

    • plos.figshare.com
    xlsx
    Updated Sep 13, 2023
    Cite
    Mikael T. Ekvall; Isabella Gimskog; Egle Kelpsiene; Alice Mellring; Alma MĂĄnsson; Martin Lundqvist; Tommy Cedervall (2023). S1 Data - [Dataset]. http://doi.org/10.1371/journal.pone.0289377.s002
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Sep 13, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Mikael T. Ekvall; Isabella Gimskog; Egle Kelpsiene; Alice Mellring; Alma MĂĄnsson; Martin Lundqvist; Tommy Cedervall
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Waste of polymer products, especially plastics, in nature has become a problem that caught the awareness of the general public during the last decade. The macro- and micro polymers in nature will be broken down by naturally occurring events such as mechanical wear and ultra-violet (UV) radiation which will result in the generation of polymeric particles in the nano-size range. We have recently shown that polystyrene and high-density polyethylene macroplastic can be broken down into nano-sized particles by applying mechanical force from an immersion blender. In this article, we show that particles in the nano-size range are released from silicone and latex pacifiers after the same treatment. Additionally, boiling the pacifiers prior to the mechanical breakdown process results in an increased number of particles released from the silicone but not the latex pacifier. Particles from the latex pacifier are acutely toxic to the freshwater filter feeding zooplankter Daphnia magna.
