MPOSE2021 is a dataset for real-time, short-time human action recognition (HAR), suitable for both pose-based and RGB-based methodologies. It includes 15,429 sequences from 100 actors across different scenarios, with a limited number of frames per scene (between 20 and 30). In contrast to other publicly available datasets, this constrained number of time steps encourages the development of real-time methodologies that perform HAR with low latency and high throughput.
A large, annotated video dataset of mice performing a sequence of actions. The dataset was collected and labeled by experts for the purpose of neuroscience research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The ReMouse dataset was collected in a guided environment.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
Mouse Tracking is a dataset for object detection tasks - it contains Mouse annotations for 204 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
https://spdx.org/licenses/CC0-1.0.html
This is a series of datasets related to the Anipose paper. We provide these to allow others to reproduce our tracking results and build upon them.
Anipose is an open-source toolkit for robust markerless 3D pose estimation. Anipose is built on the 2D tracking method DeepLabCut, so users can expand their existing experimental setups to obtain accurate 3D tracking. It consists of four components: (1) a 3D calibration module, (2) filters to resolve 2D tracking errors, (3) a triangulation module that integrates temporal and spatial regularization, and (4) a pipeline to structure the processing of large numbers of videos.
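To make component (3) concrete, below is a minimal sketch of textbook linear (DLT) two-view triangulation, the classical core that a triangulation module builds on. This is not Anipose's actual implementation (Anipose adds temporal and spatial regularization and handles more than two cameras); the projection matrices `P1`/`P2` are hypothetical stand-ins for the output of a calibration step.

```python
# Minimal DLT two-view triangulation sketch (assumed inputs, not Anipose's API).
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Triangulate one 3D point from two 2D observations.

    P1, P2   : (3, 4) camera projection matrices from calibration
    pt1, pt2 : (2,) pixel coordinates of the same keypoint in each view
    """
    # Each observation contributes two linear constraints on the homogeneous
    # 3D point X, derived from x × (P X) = 0.
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```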
Applying 3D tracking to estimate joint angles of walking Drosophila, we found that flies move their middle legs primarily by rotating their coxa and femur, whereas the front and rear legs are driven primarily by femur-tibia flexion. We then show how Anipose can be used to quantify differences between successful and unsuccessful trajectories in a mouse reaching task.
We share these fly and mouse datasets and tracking models here to allow others to reproduce our findings and to reuse the training data and models in their own research.
Methods: Please refer to the Anipose paper for detailed information on the methods used to collect the videos and to track the 3D joint positions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
"MouseBehaviourDataset" contains a mouse behaviour recognition dataset with 7-frame sequences sampled at 1-frame intervals. "MouseKeypointDataset" contains the mouse keypoint detection dataset. "BehaviourWeights" contains the weights of the behaviour recognition algorithms LSTM, Bi-LSTM, ConvLSTM, and 3DCNN. "KeypointWeights" contains the weights of the keypoint detection algorithms CPM, Hourglass, DeepLabCut, and improved DeepLabCut. "Results" contains the results of the related experiments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A dataset of 11,897 grayscale eye images from humans (4,285) and mice (7,612), acquired under different experimental conditions: head-fixation sessions (HF: 5,061), 2-photon Ca2+ imaging (2P: 2,551), and human recordings (H: 4,285). The dataset contains 1,596 eye blinks: 841 in the mouse images and 755 in the human images. Five human raters segmented the pupil (one rater per image) by manually placing an ellipse or polygon over the pupil area, and flagged blinks using the same annotation code. All images are illuminated with infrared (IR, 850 nm) light sources.
The dataset contains 2 folders:
'fullFrames': contains all the grayscale images in png format.
'annotation': contains a folder called 'png' with the pupil masks in the red channel, plus a file called 'annotations.csv' describing each file in the dataset.
Description of the fields in annotations.csv (a loading sketch follows the list):
filename: [string] with the file name
eye: [0,1] if true an eye is present in the picture
blink: [0,1] if true the subject is blinking
exp: [string] experiment type (HF, 2P, or H)
w: [int] resolution width
h: [int] resolution height
roi_x: [int] roi x coordinate
roi_y: [int] roi y coordinate
roi_w: [int] roi width and height (ROIs are square, 128x128)
sub: [int] subject's label
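As a quick illustration of how these fields fit together, here is a hedged sketch that loads annotations.csv and selects blink frames from head-fixed mouse sessions. The column names follow the list above, but the exact CSV layout should be checked against the file itself.

```python
# Sketch of loading annotations.csv; column names assumed from the field list.
import pandas as pd

df = pd.read_csv("annotation/annotations.csv")

# Frames flagged as blinks during head-fixed mouse sessions ("HF").
blinks_hf = df[(df["blink"] == 1) & (df["exp"] == "HF")]
print(len(blinks_hf), "blink frames in head-fixed sessions")
```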
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
EPM Mouse is a dataset for computer vision tasks - it contains Mouse annotations for 200 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Custom Kraken2/Bracken database built using the representative genomes of 1,021 microbial species from the mouse gut microbiota. Genomes include both isolates and MAGs, but all are near-complete (>90% completeness; <5% contamination; genome size ≤ 8 Mb; ≤ 500 contigs; N50 ≥ 10 kb; mean contig length ≥ 5 kb). This database achieved a mean read classification rate of 87.7% when benchmarked on 1,785 independent (i.e. non-contributory) mouse gut shotgun metagenome samples; an equivalent human database (UHGG) attained a classification rate of only 36.6%.
This database is made publicly available to facilitate more efficient and deeper analyses of mouse gut shotgun metagenomes.
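For illustration, the quality thresholds above could be applied to a per-genome statistics table as in the sketch below. The file name and column names are assumptions for this example, not the collection's actual schema.

```python
# Illustrative genome QC filter; "genome_stats.csv" and its columns are
# hypothetical stand-ins for a per-genome quality summary.
import pandas as pd

stats = pd.read_csv("genome_stats.csv")

near_complete = stats[
    (stats["completeness"] > 90)              # >90% completeness
    & (stats["contamination"] < 5)            # <5% contamination
    & (stats["genome_size_bp"] <= 8e6)        # genome size <= 8 Mb
    & (stats["n_contigs"] <= 500)             # at most 500 contigs
    & (stats["n50_bp"] >= 10_000)             # N50 >= 10 kb
    & (stats["mean_contig_len_bp"] >= 5_000)  # mean contig length >= 5 kb
]
print(len(near_complete), "genomes pass the near-complete criteria")
```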
Find out more about the Mouse Microbial Genome Collection at our GitHub repository.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A collection of publicly available, preprocessed reference data for analyzing ATAC-seq samples against the GRCm38 (mm10) assembly of the mouse genome with the Ultimate ATAC-seq Data Processing & Analysis Pipeline (details in the documentation on GitHub).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The reference database was built as described here: https://github.com/nasa/GeneLab_Data_Processing/blob/master/Metagenomics/Estimate_host_reads_in_raw_data/Workflow_Documentation/SW_MGEstHostReads/reference-database-info.md
The test fastq files hold 4 read pairs: 1 phage, 1 E. coli, 1 human, and 1 mouse.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Caltech Resident-Intruder Mouse dataset (CRIM13) (Burgos-Artizzu et al., CVPR 2012) consists of two mice interacting in an enclosed arena, captured by top- and side-view cameras at 30 Hz. We only use the top view. Seven keypoints are labeled on each mouse, for a total of 14 keypoints (Segalin et al., eLife 2021). Each keypoint in the original CRIM13 dataset (https://data.caltech.edu/records/4emt5-b0t10) is labeled by five different annotators. To create the final set of labels, we take the median across all annotators for each keypoint. Additionally, we remove all frames where one or both mice were absent.

The labeled data are partitioned into disjoint "in-distribution" (InD) and "out-of-distribution" (OOD) sets. Each set contains different sessions and animals. We use the train/test split provided in the original dataset: the four resident mice are present in both the InD and OOD splits, but the intruder mouse is different for each session. The InD data contain 3,986 labeled frames and 37 unlabeled videos; the OOD data contain 1,274 labeled frames and 19 unlabeled videos.

Many thanks to the authors of the CRIM13 paper, who collected and analyzed the original video dataset: Xavier P. Burgos-Artizzu, Piotr Dollár, Dayu Lin, David J. Anderson, and Pietro Perona. We also thank the authors of the MARS paper, who collected keypoint annotations for the CRIM13 dataset: Cristina Segalin, Jalani Williams, Tomomi Karigo, May Hui, Moriel Zelikowsky, Jennifer J. Sun, Pietro Perona, David J. Anderson, and Ann Kennedy.
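A minimal sketch of the label-consolidation step described above: taking the per-keypoint median across the five annotators. The array name, file, and shapes are assumptions for illustration, not the dataset's actual on-disk format.

```python
# Consensus labels via per-keypoint median across annotators (assumed layout).
import numpy as np

# labels: (n_frames, n_annotators=5, n_keypoints=14, 2) array of (x, y) pixels
labels = np.load("crim13_annotator_labels.npy")  # hypothetical file

# Median over the annotator axis gives one (x, y) per keypoint per frame.
consensus = np.median(labels, axis=1)  # shape: (n_frames, 14, 2)
```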