Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Test Blender Synth Data is a dataset for object detection tasks - it contains Geometric Shape annotations for 329 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This repository provides the Omnidirectional Blender 3D (OB3D) Dataset, a dataset designed for 3D reconstruction from multi-view equirectangular images. In addition to 3D reconstruction, it also supports novel view synthesis and camera pose estimation for equirectangular images. This dataset consists of 12 scenes, each of which contains RGB images, depth maps, normal maps, camera parameters, and sparse 3D point clouds.
OB3D contains 12 different scenes: archiviz-flat, barbershop, bistro, classroom, emerald-square, fisher-hut, lone-monk, pavillion, restroom, san-miguel, sponza, sun-temple. Each scene has the following directory structure:
OB3D
|-- archiviz-flat
|   |-- Egocentric
|   |   |-- cameras
|   |   |   |-- 00000_cam.json
|   |   |   |-- ...
|   |   |-- depths
|   |   |   |-- 00000_depth.exr
|   |   |   |-- ...
|   |   |-- images
|   |   |   |-- 00000_rgb.png
|   |   |   |-- ...
|   |   |-- normals
|   |   |   |-- 00000_normal.exr
|   |   |   |-- ...
|   |   |-- sparse
|   |   |   |-- sparse.ply
|   |   |-- train.txt
|   |   |-- test.txt
|   |-- Non-Egocentric
|   |   |-- ...
In OB3D, the images were captured along two different camera trajectories: Egocentric and Non-Egocentric.
- Egocentric: images captured while moving the camera along a spiral path
- Non-Egocentric: images captured while moving the camera along a free path through the environment
Each cameras/00000_cam.json file contains both the camera extrinsic and intrinsic parameters, stored in the following format:
[
  {
    "id": 0,
    "width": 1600,
    "height": 800,
    "intrinsics": {
      "focal": 800,
      "cx": 800,
      "cy": 400
    },
    "extrinsics": {
      "rotation": [...],
      "translation": [...]
    }
  }
]
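A minimal loading sketch in Python; the exact file path, and the interpretation of "rotation" as a flattened 3x3 matrix and "translation" as a 3-vector, are assumptions based on the format shown above:

```python
import json
import numpy as np

# Hedged sketch: read one OB3D camera file and assemble intrinsics/extrinsics.
# Path and the 3x3 / 3-vector interpretation of the extrinsics are assumptions.
with open("OB3D/archiviz-flat/Egocentric/cameras/00000_cam.json") as f:
    cam = json.load(f)[0]

intr = cam["intrinsics"]
K = np.array([[intr["focal"], 0, intr["cx"]],
              [0, intr["focal"], intr["cy"]],
              [0, 0, 1]], dtype=float)
R = np.array(cam["extrinsics"]["rotation"], dtype=float).reshape(3, 3)
t = np.array(cam["extrinsics"]["translation"], dtype=float).reshape(3)
print(cam["width"], cam["height"], K, R, t, sep="\n")
```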
OB3D provides sparse 3D point clouds (sparse.ply) obtained from OpenMVG for each scene. These point clouds are reconstructed using ground-truth camera parameters and can be used for initializing Gaussian Splatting or as preprocessing for SDF-based methods.
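For example, the point cloud can be read with the plyfile package (a sketch only; standard x/y/z vertex properties are assumed):

```python
import numpy as np
from plyfile import PlyData

# Hedged sketch: load the OpenMVG sparse point cloud of one scene, e.g. as an
# initialization for Gaussian Splatting. Standard x/y/z vertex fields assumed.
ply = PlyData.read("OB3D/archiviz-flat/Egocentric/sparse/sparse.ply")
vertex = ply["vertex"]
points = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=1)
print(points.shape)
```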
We evaluate the reconstructed mesh using ground-truth depth maps, which are rendered from the ground-truth 3D mesh. A straightforward way to evaluate the quality of a reconstructed mesh is to measure the distance between the reconstructed mesh and the ground-truth mesh. However, 3D models created using software like Blender are often designed to appear plausible only from specific viewpoints, and as a result, the geometry of occluded or unseen regions may be inaccurate or incomplete. Moreover, certain areas may not be visible in the images at all, making accurate reconstruction in those regions inherently impossible. To deal with these problems and ensure a fair evaluation, we compare depth maps rendered from the reconstructed mesh with those rendered from the ground-truth model.
Once the mesh is reconstructed in the same scale and coordinate system as the ground truth, our evaluation code can be used to quantitatively assess its quality. The evaluation code takes the reconstructed mesh and the ground-truth camera parameters from the OB3D dataset as input, renders depth maps from the mesh, and then compares these rendered depth maps to the corresponding ground-truth depth maps to compute quantitative metrics. The evaluation code is available at GitHub Page of OB3D.
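The comparison itself reduces to per-pixel depth errors over valid pixels. A minimal sketch (the validity mask, threshold, and metric names are illustrative, not the official evaluation code):

```python
import numpy as np

# Hedged sketch: compare a depth map rendered from the reconstructed mesh with
# the ground-truth depth map of the same view, restricted to valid pixels.
def depth_errors(rendered, gt, max_depth=100.0):
    valid = (gt > 0) & (gt < max_depth) & (rendered > 0)
    diff = rendered[valid] - gt[valid]
    return {"mae": np.abs(diff).mean(), "rmse": np.sqrt((diff ** 2).mean())}
```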
When using SDF-based methods like NeuS, it may be necessary to transform the scene into a normalized space, such as by fitting it into a unit sphere, which alters the scale and coordinate system. In such cases, we recommend saving the transformation parameters so that the mesh can be converted back to the original coordinate system and scale for evaluation.
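A minimal sketch of such a workflow, assuming a unit-sphere normalization:

```python
import numpy as np

# Hedged sketch: normalize scene points into a unit sphere for an SDF-based
# method, keep (center, scale), and map reconstructed mesh vertices back to
# the original coordinate system before evaluation.
def normalize_to_unit_sphere(points):
    center = points.mean(axis=0)
    scale = np.linalg.norm(points - center, axis=1).max()
    return (points - center) / scale, center, scale

def denormalize(vertices, center, scale):
    return vertices * scale + center
```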
The following are the sources of the Blender projects used for dataset creation. We would like to thank the creators of each project.
Scene | URL |
---|---|
archiviz-flat | https://www.blender.org/download/demo-files/ |
barbershop | https://www.blender.org/download/demo-files/ |
classroom | https://www.blender.org/download/demo-files/ |
restroom | https://blendswap.com/blend/14216 |
sun-temple | https://developer.nvidia.com/ue4-sun-temple |
bistro | https://developer.nvidia.com/orca/amazon-lumberyard-bistro |
emerald-square | https://developer.nvidia.com/orca/nvidia-emerald-square |
fisher-hut | https://www.blendswap.com/blend/3... |
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
The MatSim Dataset and benchmark
Synthetic dataset and real images benchmark for visual similarity recognition of materials and textures.
MatSim: a synthetic dataset, a benchmark, and a method for computer vision-based recognition of similarities and transitions between materials and textures focusing on identifying any material under any conditions using one or a few examples (one-shot learning).
Based on the paper: One-shot recognition of any material anywhere using contrastive learning with physics-based rendering
Benchmark_MATSIM.zip: contains the benchmark made of real-world images, as described in the paper.
MatSim_object_train_split_1,2,3.zip: contain a subset of the synthetic dataset with CGI images of materials on random objects, as described in the paper.
MatSim_Vessels_Train_1,2,3.zip: contain a subset of the synthetic dataset with CGI images of materials inside transparent containers, as described in the paper.
*Note: these are subsets of the dataset; the full dataset can be found at:
https://e1.pcloud.link/publink/show?code=kZIiSQZCYU5M4HOvnQykql9jxF4h0KiC5MX
or
https://icedrive.net/s/A13FWzZ8V2aP9T4ufGQ1N3fBZxDF
Code:
Up-to-date code for generating the dataset, reading and evaluation scripts, and trained nets can be found at this URL: https://github.com/sagieppel/MatSim-Dataset-Generator-Scripts-And-Neural-net
Dataset Generation Scripts.zip: contains the Blender (3.1) Python scripts used for generating the dataset. This code might be old; up-to-date code can be found in the GitHub repository linked above.
Net_Code_And_Trained_Model.zip: contains reference neural-net code, including loaders, trained models, and evaluator scripts that can be used to read and train with the synthetic dataset or test the model with the benchmark. Note that the code in the ZIP file is not up to date and contains some bugs; for the latest version of this code, see the GitHub repository linked above.
Further documentation can be found inside the zip files or in the paper.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
MixInstruct
Introduction
This is the official release of the MixInstruct dataset for the LLM-Blender project. This dataset contains 11 responses from currently popular instruction-following LLMs, including:
Stanford Alpaca, FastChat Vicuna, Dolly V2, StableLM, Open Assistant, Koala, Baize, Flan-T5, ChatGLM, MOSS, Mosaic MPT.
We evaluate each response with automatic metrics including BLEU, ROUGE, BERTScore, and BARTScore, and provide pairwise comparison results by prompting ChatGPT for the… See the full description on the dataset page: https://huggingface.co/datasets/llm-blender/mix-instruct.
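For reference, the automatic metrics named above can be approximated with the Hugging Face evaluate library; this is a sketch only, not the project's official evaluation code, and BARTScore is not bundled in evaluate:

```python
import evaluate

# Hedged sketch: score one candidate response against a reference with BLEU,
# ROUGE, and BERTScore via the `evaluate` library (BARTScore omitted).
prediction = ["Paris is the capital of France."]
reference = ["The capital of France is Paris."]

bleu = evaluate.load("bleu").compute(predictions=prediction, references=[reference])
rouge = evaluate.load("rouge").compute(predictions=prediction, references=reference)
bertscore = evaluate.load("bertscore").compute(predictions=prediction, references=reference, lang="en")
print(bleu["bleu"], rouge["rougeL"], bertscore["f1"][0])
```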
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Blenderproc is a dataset for object detection tasks - it contains Conector Tapa annotations for 230 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A Perfectly Accurate, Synthetic dataset featuring a virtual railway EnVironment for Multi-View Stereopsis (RailEnV-PASMVS) is presented, consisting of 40 scenes and 79,800 renderings together with ground truth depth maps, extrinsic and intrinsic camera parameters and binary segmentation masks of all the track components and surrounding environment. Every scene is rendered from a set of 3 cameras, each positioned relative to the track for optimal 3D reconstruction of the rail profile. The set of cameras is translated across the 100-meter length of tangent (straight) track to yield a total of 1,995 camera views. Photorealistic lighting of each of the 40 scenes is achieved with the implementation of high-definition, high dynamic range (HDR) environmental textures. Additional variation is introduced in the form of camera focal lengths, random noise for the camera location and rotation parameters and shader modifications of the rail profile. Representative track geometry data is used to generate random and unique vertical alignment data for the rail profile for every scene. This primary, synthetic dataset is augmented by a smaller image collection consisting of 320 manually annotated photographs for improved segmentation performance. The specular rail profile represents the most challenging component for MVS reconstruction algorithms, pipelines and neural network architectures, increasing the ambiguity and complexity of the data distribution. RailEnV-PASMVS represents an application specific dataset for railway engineering, against the backdrop of existing datasets available in the field of computer vision, providing the precision required for novel research applications in the field of transportation engineering.
File descriptions
RailEnV-PASMVS.blend (227 MB) - Blender file (developed using Blender version 2.8.1) used to generate the dataset. The Blender file packs only one of the HDR environmental textures to use as an example, along with all the other asset textures.
RailEnV-PASMVS_sample.png (28 MB) - A visual collage of 30 scenes, illustrating the variability introduced by using different models, illumination, material properties and camera focal lengths.
geometry.zip (2 MB) - Geometry CSV files used for scenes 01 to 20. The Bezier curve defines the geometry of the rail profile (10 mm intervals).
PhysicalDataset.7z (2.0 GB) - A smaller, secondary dataset of 320 manually annotated photographs of railway environments; only the railway profiles are annotated.
01.7z-20.7z (2.0 GB each) - Archive of each scene (01 through 20).
all_list.txt, training_list.txt, validation_list.txt - Text files containing all the scene names, together with those used for validation (validation_list.txt) and training (training_list.txt), as used by MVSNet.
index.csv - CSV file that provides a convenient reference for all the sample files, linking each corresponding file and relative data path.
NOTE: Only 20 of the original 40 scenes are made available owing to size limitations of the data repository. This is still adequate for the purposes of training MVS neural networks. The Blender file is made available specifically to render out the scenes for different applications or adapt the camera framework altogether for different applications. Please refer to the corresponding manuscript for additional details.
Steps to reproduce
The open source Blender software suite (https://www.blender.org/) was used to generate the dataset, with the entire pipeline developed using the exposed Python API interface. The camera trajectory is kept fixed for all 40 scenes, except for small perturbations introduced in the form of random noise to increase the camera variation. The camera intrinsic information was initially exported as a single CSV file (scene.csv) for every scene, from which the camera information files were generated; this includes the focal length (focalLengthmm), image sensor dimensions (pixelDimensionX, pixelDimensionY), position coordinate vector (vectC) and rotation vector (vectR). The STL model files, as provided in this data repository, were exported directly from Blender, such that the geometry/scenes can be reproduced. The data processing below is written for a Python implementation, transforming the information from Blender's coordinate system into universal rotation (R_world2cv) and translation (T_world2cv) matrices.
import numpy as np
from scipy.spatial.transform import Rotation as R
# Intrinsic matrix from the focal length (mm), image dimensions (px) and sensor width (mm)
focalLengthPixel = focalLengthmm * pixelDimensionX / sensorWidthmm
K = [[focalLengthPixel, 0, pixelDimensionX / 2], [0, focalLengthPixel, pixelDimensionY / 2], [0, 0, 1]]
# Euler angles (vectR, in degrees) to a world-to-camera rotation in Blender's convention
r = R.from_euler('xyz', vectR, degrees=True)
matR = r.as_matrix()
R_world2bcam = np.transpose(matR)
# Change of basis from Blender's camera convention to OpenCV's, applied to rotation and translation
R_bcam2cv = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])
R_world2cv = R_bcam2cv.dot(R_world2bcam)
T_world2bcam = -1 * R_world2bcam.dot(vectC)
T_world2cv = R_bcam2cv.dot(T_world2bcam)
The resulting R_world2cv and T_world2cv matrices are written to the camera information file using exactly the same format as that of BlendedMVS developed by Dr. Yao. The original rotation and translation information can be found by following the process in reverse. Note that additional steps were required to convert from Blender's unique coordinate system to that of OpenCV; this ensures universal compatibility in the way that the camera intrinsic and extrinsic information is provided.
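A sketch of that reverse step, using the same conventions and free variables as the code above (not part of the original pipeline):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hedged sketch: invert the Blender-to-OpenCV conversion to recover the original
# Blender rotation (Euler angles in degrees) and camera position vectC.
R_bcam2cv = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])  # orthogonal and its own inverse
R_world2bcam = R_bcam2cv.dot(R_world2cv)
T_world2bcam = R_bcam2cv.dot(T_world2cv)
matR = np.transpose(R_world2bcam)
vectC = -np.transpose(R_world2bcam).dot(T_world2bcam)
vectR = R.from_matrix(matR).as_euler('xyz', degrees=True)
```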
Equivalent GPS information is provided (gps.csv), whereby the local coordinate frame is transformed into equivalent GPS information, centered around the Engineering 4.0 campus, University of Pretoria, South Africa. This information is embedded within the JPG files as EXIF data.
Accurate and robust 6DOF (Six Degrees of Freedom) pose estimation is a critical task in various fields, including computer vision, robotics, and augmented reality. This research paper presents a novel approach to enhance the accuracy and reliability of 6DOF pose estimation by introducing a robust method for generating synthetic data and leveraging the ease of multi-class training using the generated dataset. The proposed method tackles the challenge of insufficient real-world annotated data by creating a large and diverse synthetic dataset that accurately mimics real-world scenarios. The proposed method only requires a CAD model of the object and there is no limit to the number of unique data that can be generated. Furthermore, a multi-class training strategy that harnesses the synthetic dataset's diversity is proposed and presented. This approach mitigates class imbalance issues and significantly boosts accuracy across varied object classes and poses. Experimental results underscore th…
This dataset has been synthetically generated using 3D software like Blender and APIs like BlenderProc.
# Data Repository README
This repository contains data organized into a structured format. The data consists of three main folders and two files, each serving a specific purpose; the object data is contained in two folders, Cat and Hand.
Cat Dataset: 63,492 labeled samples with images, masks, and poses.
Hand Dataset: 42,418 labeled samples with images, masks, and poses.
Usage: The dataset is ready for use by simply extracting the contents of the zip file, whether for training in a segmentation task or a pose estimation task.
To view .npy files, you will need Python with the numpy package installed. In Python, use the following commands:
import numpy
data = numpy.load('file.npy')
print(data)
What free/open software is appropriate for viewing the .ply files?
These files can be opened using any 3D modeling software like Blender, Meshlab, etc.
Camera Intrinsic Matrix Format:
Fx 0 px
0 Fy py
0 0 1
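As a sketch (all numbers hypothetical), the intrinsic matrix described above can be assembled and used to project a camera-space point to pixel coordinates:

```python
import numpy as np

# Hedged sketch: build the 3x3 intrinsic matrix and project a 3D point given in
# camera coordinates to pixel coordinates. The values are made up for illustration.
Fx, Fy, px, py = 600.0, 600.0, 320.0, 240.0
K = np.array([[Fx, 0.0, px],
              [0.0, Fy, py],
              [0.0, 0.0, 1.0]])
X_cam = np.array([0.1, -0.2, 1.5])
uvw = K @ X_cam
u, v = uvw[:2] / uvw[2]
print(u, v)
```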
Below is an overview of the data organization:
Demonstrations and utilities using XSLT in the web browser. XSLT 1.0 is supported with no dependencies on external libraries. Source code is in XML, XSLT, JavaScript and TypeScript.
llm-blender/PairRM-2.7B-data dataset hosted on Hugging Face and contributed by the HF Datasets community
This dataset contains the raw data generated by the technological study and registration of the photogrammetry and 3D modelling of the refit set nº 41 of the 497D level of the Cova Gran de Santa Linya (south-eastern Pyrenees, Iberian Peninsula). The refit set is composed of 9 pieces that make up 4 morphometrically distinct artefacts. The physical characteristics of the pieces are tabulated in the database, the relationship between the pieces can be consulted in the flowchart. The Materials to reproduce the process folder contains the files that make it possible to reproduce the process described in the protocol. An example of the files obtained in photogrammetry is included (example piece). The other subfolders contain the 3D data files (.obj, .mtl; .jpg) that allow the virtual models and the animation sequence to be reconstructed. In addition, two videos show the synthesis of the process with the main steps of the process. Universitat Autònoma de Barcelona. Centre d'Estudis del Patrimoni Arqueològic de la Prehistòria (CEPAP-UAB)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data-set is a supplementary material related to the generation of synthetic images of a corridor in the University of Melbourne, Australia from a building information model (BIM). This data-set was generated to check the ability of deep learning algorithms to learn the task of indoor localisation from synthetic images, when being tested on real images.

The following is the name convention used for the data-sets. The brackets show the number of images in the data-set.

REAL DATA
Real ---------------------> Real images (949 images)
Gradmag-Real -------> Gradmag of real data (949 images)

SYNTHETIC DATA
Syn-Car ----------------> Cartoonish images (2500 images)
Syn-pho-real ----------> Synthetic photo-realistic images (2500 images)
Syn-pho-real-tex -----> Synthetic photo-realistic textured (2500 images)
Syn-Edge --------------> Edge render images (2500 images)
Gradmag-Syn-Car ---> Gradmag of Cartoonish images (2500 images)

Each folder contains the images and their respective groundtruth poses in the following format [ImageName X Y Z w p q r] (a parsing sketch is given at the end of this entry).

To generate the synthetic data-set, we define a trajectory in the 3D indoor model. The points in the trajectory serve as the ground truth poses of the synthetic images. The height of the trajectory was kept in the range of 1.5–1.8 m from the floor, which is the usual height of holding a camera in hand. Artificial point light sources were placed to illuminate the corridor (except for Edge render images). The length of the trajectory was approximately 30 m. A virtual camera was moved along the trajectory to render four different sets of synthetic images in Blender*. The intrinsic parameters of the virtual camera were kept identical to the real camera (VGA resolution, focal length of 3.5 mm, no distortion modeled). We have rendered images along the trajectory at 0.05 m intervals and ± 10° tilt.

The main difference between the cartoonish (Syn-car) and photo-realistic images (Syn-pho-real) is the model of rendering. Photo-realistic rendering is a physics-based model that traces the path of light rays in the scene, which is similar to the real world, whereas the cartoonish rendering roughly traces the path of light rays. The photo-realistic textured images (Syn-pho-real-tex) were rendered by adding repeating synthetic textures to the 3D indoor model, such as the textures of brick, carpet and wooden ceiling. The realism of the photo-realistic rendering comes at the cost of rendering times. However, the rendering times of the photo-realistic data-sets were considerably reduced with the help of a GPU. Note that the naming convention used for the data-sets (e.g. Cartoonish) is according to Blender terminology.

An additional data-set (Gradmag-Syn-car) was derived from the cartoonish images by taking the edge gradient magnitude of the images and suppressing weak edges below a threshold. The edge rendered images (Syn-edge) were generated by rendering only the edges of the 3D indoor model, without taking into account the lighting conditions. This data-set is similar to the Gradmag-Syn-car data-set; however, it does not contain the effects of illumination of the scene, such as reflections and shadows.

*Blender is an open-source 3D computer graphics software and finds its applications in video games, animated films, simulation and visual art. For more information please visit: http://www.blender.org

Please cite the papers if you use the data-set:
1) Acharya, D., Khoshelham, K., and Winter, S., 2019. BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images. ISPRS Journal of Photogrammetry and Remote Sensing, 150: 245-258.
2) Acharya, D., Singha Roy, S., Khoshelham, K. and Winter, S., 2019. Modelling uncertainty of single image indoor localisation using a 3D model and deep learning. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, IV-2/W5, pages 247-254.
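As referenced above, a minimal sketch for parsing the [ImageName X Y Z w p q r] ground-truth pose format; the pose file name is hypothetical, and the quaternion component order (w, p, q, r) is taken as listed:

```python
import numpy as np

# Hedged sketch: read a ground-truth pose file with one "ImageName X Y Z w p q r"
# entry per line. The file name "poses.txt" is an assumption.
poses = {}
with open("poses.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 8:
            continue
        poses[parts[0]] = {
            "position": np.array([float(v) for v in parts[1:4]]),
            "quaternion": np.array([float(v) for v in parts[4:8]]),  # (w, p, q, r) as listed
        }
```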
BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in a similar format as DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768*512. There are 3-5 objects per video, and each object has a random smooth trajectory -- we tried to optimize the trajectories in a greedy fashion to minimize object intersection (not guaranteed), with occlusions still possible (happen a lot in reality). See [Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS), CVPR 2022] for details.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This upload contains the Multimodal3DIdent dataset introduced in the paper Identifiability Results for Multimodal Contrastive Learning presented at ICLR 2023. The dataset provides an identifiability benchmark with image/text pairs generated from controllable ground truth factors, some of which are shared between image and text modalities. The training, validation, and test sets contain 125000, 10000, and 10000 image/text pairs and ground truth factors, respectively. The code for the data generation is publicly available: https://github.com/imantdaunhawer/Multimodal3DIdent.
The generated dataset contains image and text data as well as the ground truth factors of variation for each modality. Each split (train/val/test) of the dataset is structured as follows:
.
├── images
│   ├── 000000.png
│   ├── 000001.png
│   └── etc.
├── text
│   └── text_raw.txt
├── latents_image.csv
└── latents_text.csv
The directories images and text contain the generated image and text data, whereas the CSV files latents_image.csv and latents_text.csv contain the values of the respective latent factors. There is an index-wise correspondence between images, sentences, and latent factors. For example, the first line in the file text_raw.txt is the sentence that corresponds to the first image in the images directory.
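A minimal pairing sketch that relies on this index-wise correspondence; the per-split directory name is an assumption:

```python
import pandas as pd

# Hedged sketch: load the i-th image path, sentence, and latent factors of one split.
split_dir = "train"  # assumed directory name for the training split
latents_image = pd.read_csv(f"{split_dir}/latents_image.csv")
latents_text = pd.read_csv(f"{split_dir}/latents_text.csv")
with open(f"{split_dir}/text/text_raw.txt") as f:
    sentences = [line.rstrip("\n") for line in f]

i = 0
print(f"{split_dir}/images/{i:06d}.png")
print(sentences[i])
print(latents_image.iloc[i].to_dict())
print(latents_text.iloc[i].to_dict())
```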
Latent factors: We use the following ground truth latent factors to generate image and text data. Each factor is sampled from a uniform distribution defined on the specified set of values for the respective factor.
Modality | Latent Factor | Values | Details |
---|---|---|---|
Image | Object shape | {0, 1, ..., 6} | Mapped to Blender shapes like "Teapot", "Hare", etc. |
Image | Object x-position | {0, 1, 2} | Mapped to {-3, 0, 3} for Blender |
Image | Object y-position | {0, 1, 2} | Mapped to {-3, 0, 3} for Blender |
Image | Object z-position | {0} | Constant |
Image | Object alpha-rotation | [0, 1]-interval | Linearly transformed to [-pi/2, pi/2] for Blender |
Image | Object beta-rotation | [0, 1]-interval | Linearly transformed to [-pi/2, pi/2] for Blender |
Image | Object gamma-rotation | [0, 1]-interval | Linearly transformed to [-pi/2, pi/2] for Blender |
Image | Object color | [0, 1]-interval | Hue value in HSV transformed to RGB for Blender |
Image | Spotlight position | [0, 1]-interval | Transformed to a unique position on a semicircle |
Image | Spotlight color | [0, 1]-interval | Hue value in HSV transformed to RGB for Blender |
Image | Background color | [0, 1]-interval | Hue value in HSV transformed to RGB for Blender |
Text | Object shape | {0, 1, ..., 6} | Mapped to strings like "teapot", "hare", etc. |
Text | Object x-position | {0, 1, 2} | Mapped to strings "left", "center", "right" |
Text | Object y-position | {0, 1, 2} | Mapped to strings "top", "mid", "bottom" |
Text | Object color | string values | Color names from 3 different color palettes |
Text | Text phrasing | {0, 1, ..., 4} | Mapped to 5 different English sentences |
Image rendering: We use the Blender rendering engine to create visually complex images depicting a 3D scene. Each image in the dataset shows a colored 3D object of a certain shape or class (i.e., teapot, hare, cow, armadillo, dragon, horse, or head) in front of a colored background and illuminated by a colored spotlight that is focused on the object and located on a semicircle above the scene. The resulting RGB images are of size 224 x 224 x 3.
Text generation: We generate a short sentence describing the respective scene. Each sentence describes the object's shape or class (e.g., teapot), position (e.g., bottom-left), and color. The color is represented in a human-readable form (e.g., "lawngreen", "xkcd:bright aqua", etc.) as the name of the color (from a randomly sampled palette) that is closest to the sampled color value in RGB space. The sentence is constructed from one of five pre-configured phrases with placeholders for the respective ground truth factors.
Relation between modalities: Three latent factors (object shape, x-position, y-position) are shared between image/text pairs. The object color also exhibits a dependence between modalities; however, it is not a 1-to-1 correspondence because the color palette is sampled randomly from a set of multiple palettes. Additionally, there is a causal dependence of object color on object x-position since the range of hue values [0, 1] is split into three equally sized intervals, each of which is associated with a fixed x-position of the object. For instance, if x-position is “left”, we sample the hue value from the interval [0, 1/3]. Consequently, the color of the object can be predicted to some degree from the object's position.
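A sketch of this sampling scheme (illustrative only, not the original generation code):

```python
import colorsys
import numpy as np

# Hedged sketch: the hue range [0, 1] is split into three equal intervals, one
# per object x-position, and the object hue is drawn from the matching interval.
rng = np.random.default_rng(0)
x_position = rng.integers(0, 3)                        # 0 = "left", 1 = "center", 2 = "right"
hue = rng.uniform(x_position / 3, (x_position + 1) / 3)
rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)               # hue in HSV transformed to RGB for Blender
print(x_position, hue, rgb)
```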
The Multimodal3DIdent dataset builds on the following resources:
- 3DIdent dataset
- Causal3DIdent dataset
- CLEVR dataset
- Blender open-source 3D creation suite
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about book subjects. It has 2 rows and is filtered where the book is Procedural 3D modeling using geometry nodes in blender : discover the professional usage of geometry nodes and develop a creative approach to a node-based workflow. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.0", "robot_type": null, "total_episodes": 100, "total_frames": 49959, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:100"}, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/autobio-bench/pickup-blender.
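As a sketch, an episode can be loaded by resolving the data_path template from meta/info.json (pandas with a parquet engine such as pyarrow is assumed):

```python
import pandas as pd

# Hedged sketch: format the data_path template shown above and load one episode.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
episode = pd.read_parquet(data_path.format(episode_chunk=0, episode_index=0))
print(len(episode), episode.columns.tolist())
```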
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.0", "robot_type": null, "total_episodes": 100, "total_frames": 136834, "total_tasks": 1, "total_videos": 300, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:100"}, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/autobio-bench/screw_loose-blender.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This paper serves two roles. First, it acts as an introduction to Blender, an open-source computer graphics program, which can be of utility to paleontologists. To lessen the software's otherwise steep learning curve, a step-by-step guide to create an idealized reconstruction of a fossil in the form of a three-dimensional model in Blender, or to use the software to render results from 'virtual paleontology' techniques, is provided as an online supplemental data file. Second, here we demonstrate the use of Blender with a case study on the extinct trigonotarbid arachnids. We report the limb articulations of members of the Devonian genus Palaeocharinus on the basis of exceptionally preserved fossils from the Rhynie Cherts of Scotland. We use these newly reported articulations to create a Blender model, and draw comparisons with the gait of extant arachnids to produce as accurate a representation of the trigonotarbid flexing its limbs and walking as possible, presented in additional online supplemental data files. Knowledge of the limb articulations of trigonotarbid arachnids also allows us to discuss their functional morphology: trigonotarbids' limbs and gait were likely comparable to extant cursorial spiders, but lacked some innovations seen in more derived arachnids.
Blender is the free open source 3D content creation suite, available for all major operating systems under the GNU General Public License. Because of the overwhelming success of the first open movie project, Ton Roosendaal, the Blender Foundation's chairman, has established the Blender Institute. This is now the permanent office and studio to more efficiently organize the Blender Foundation goals, but especially to coordinate and facilitate Open Projects related to 3D movies, games or visual effects.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.0", "robot_type": null, "total_episodes": 100, "total_frames": 106292, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 50, "splits": { "train": "0:100" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/autobio-bench/thermal_cycler_close-blender.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Waste of polymer products, especially plastics, in nature has become a problem that caught the awareness of the general public during the last decade. The macro- and micro polymers in nature will be broken down by naturally occurring events such as mechanical wear and ultra-violet (UV) radiation which will result in the generation of polymeric particles in the nano-size range. We have recently shown that polystyrene and high-density polyethylene macroplastic can be broken down into nano-sized particles by applying mechanical force from an immersion blender. In this article, we show that particles in the nano-size range are released from silicone and latex pacifiers after the same treatment. Additionally, boiling the pacifiers prior to the mechanical breakdown process results in an increased number of particles released from the silicone but not the latex pacifier. Particles from the latex pacifier are acutely toxic to the freshwater filter feeding zooplankter Daphnia magna.