The Robomimic proficient-human datasets were collected by a single proficient operator using the RoboTurk platform (with the exception of Transport, which was collected by two proficient operators working together). Each dataset consists of 200 successful trajectories.
Each task has two versions: one with low-dimensional observations (low_dim) and one with images (image).
The datasets follow the RLDS format to represent steps and episodes.
To use this dataset:
import tensorflow_datasets as tfds

ds = tfds.load('robomimic_ph', split='train')
for ex in ds.take(4):
    print(ex)
See the guide for more information on tensorflow_datasets.
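To make the RLDS layout concrete, here is a minimal, self-contained sketch of how an episode nests its steps. The field names (`observation`, `action`, `reward`, `is_first`, `is_last`, `is_terminal`) follow the standard RLDS step schema; the values are placeholders, not real data from these datasets.

```python
# Minimal illustration of the RLDS episode/step layout: a dataset is a
# sequence of episodes, and each episode holds a sequence of steps with
# per-step fields plus boundary flags. Values below are placeholders.
episode = {
    "steps": [
        {
            "observation": [0.0, 0.0],  # placeholder low_dim observation
            "action": [0.1],            # placeholder action
            "reward": 0.0,
            "is_first": True,           # first step of the episode
            "is_last": False,
            "is_terminal": False,
        },
        {
            "observation": [0.1, 0.2],
            "action": [0.0],
            "reward": 1.0,
            "is_first": False,
            "is_last": True,            # final step of the episode
            "is_terminal": True,        # environment reached a terminal state
        },
    ]
}

# An episode's return is the sum of its per-step rewards.
episode_return = sum(step["reward"] for step in episode["steps"])
print(episode_return)  # 1.0
```

In the actual TFDS datasets, `steps` is a nested `tf.data.Dataset` rather than a Python list, so you iterate it the same way you iterate the outer dataset.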
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.1", "robot_type": "robomimic", "total_episodes": 10, "total_frames": 1500, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 20, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/ankile/robomimic-mg-lift-image.
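The `data_path` template in `meta/info.json` maps a chunk index and an episode index to a per-episode parquet file, with `chunks_size` episodes per chunk. A small sketch of how those paths resolve (the helper function is illustrative, not part of LeRobot's API):

```python
# Resolve on-disk parquet paths from the info.json template.
# data_path and chunks_size are taken from the meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
chunks_size = 1000

def episode_file(episode_index: int) -> str:
    """Return the relative parquet path for a given episode index."""
    return data_path.format(
        episode_chunk=episode_index // chunks_size,  # which chunk folder
        episode_index=episode_index,                 # zero-padded episode id
    )

print(episode_file(7))     # data/chunk-000/episode_000007.parquet
print(episode_file(1500))  # data/chunk-001/episode_001500.parquet
```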
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.1", "robot_type": "robomimic", "total_episodes": 300, "total_frames": 80731, "total_tasks": 1, "total_videos": 600, "total_chunks": 1, "chunks_size": 1000, "fps": 20, "splits": { "train": "0:300" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/ankile/robomimic-mh-square-image.
The Robomimic machine-generated datasets were collected using a Soft Actor-Critic agent trained with a dense reward. Each dataset consists of the agent's replay buffer.
Each task has two versions: one with low-dimensional observations (low_dim) and one with images (image).
The datasets follow the RLDS format to represent steps and episodes.
To use this dataset:
import tensorflow_datasets as tfds

ds = tfds.load('robomimic_mg', split='train')
for ex in ds.take(4):
    print(ex)
See the guide for more information on tensorflow_datasets.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.1", "robot_type": "robomimic", "total_episodes": 200, "total_frames": 93752, "total_tasks": 1, "total_videos": 600, "total_chunks": 1, "chunks_size": 1000, "fps": 20, "splits": { "train": "0:200" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/ankile/robomimic-ph-transport-image.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.1", "robot_type": "robomimic", "total_episodes": 200, "total_frames": 95962, "total_tasks": 1, "total_videos": 400, "total_chunks": 1, "chunks_size": 1000, "fps": 20, "splits": { "train": "0:200" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/ankile/robomimic-ph-tool-hang-image.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created using LeRobot.
Dataset Structure
meta/info.json: { "codebase_version": "v2.1", "robot_type": "robomimic", "total_episodes": 300, "total_frames": 31127, "total_tasks": 1, "total_videos": 600, "total_chunks": 1, "chunks_size": 1000, "fps": 20, "splits": { "train": "0:300" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path":… See the full description on the dataset page: https://huggingface.co/datasets/ankile/robomimic-mh-lift-image.