Video Action Recognition dataset. Contains 5% of the balanced Kinetics-400 and Kinetics-600 (Kinetics) training data as a zipped folder of MP4 files.
The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 400/600 human action classes with at least 400/600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube.
More than 10,000 videos in each dataset, with 10-40 videos per class.
A dataset by DeepMind.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube.
https://academictorrents.com/nolicensespecified
MD5 checksums:
kinetics400.zip: 33224b5b77c634aa6717da686efce2d4
kinetics400_validation.zip: 013358d458477d7ac10cebb9e84df354
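A minimal verification sketch, assuming the two archives have been downloaded to the current directory under the names listed above:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests taken from the listing above.
expected = {
    "kinetics400.zip": "33224b5b77c634aa6717da686efce2d4",
    "kinetics400_validation.zip": "013358d458477d7ac10cebb9e84df354",
}

for name, digest in expected.items():
    status = "OK" if md5sum(name) == digest else "MISMATCH"
    print(f"{name}: {status}")
```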
The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human-focused and cover a broad range of classes, including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands.
The Kinetics dataset is licensed by Google Inc. under a Creative Commons Attribution 4.0 International License. Published May 22, 2017.
The Kinetics-400 and Kinetics-600 datasets are video understanding datasets used for learning rich and multi-scale spatiotemporal semantics from high-dimensional videos.
chereddysaivreddy/kinetics-400 dataset hosted on Hugging Face and contributed by the HF Datasets community
Three large-scale video datasets for action recognition: Something-Something V1 & V2 and Kinetics-400.
kiyoonkim/kinetics-400-splits dataset hosted on Hugging Face and contributed by the HF Datasets community
The Kinetics-400, UCF101, HMDB51, Something-Something V1, and Something-Something V2 datasets are used for evaluating the performance of the Bi-Calibration Networks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The Kinetics-600 is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, and 60K clips for the training, validation, and test sets, respectively. Each video in the dataset is a 10-second clip of an action moment annotated from a raw YouTube video. It is an extension of the Kinetics-400 dataset.
The Kinetics dataset is a large-scale human action dataset, which consists of 400 action classes where each category has more than 400 videos.
The dataset used in the paper is Kinetics-400 and Something-Something-V2.
This dataset contains both 8 and 16 sampled frames of the "eating-spaghetti" video of the Kinetics-400 dataset, with the following frame indices being used:
8 frames (eating_spaghetti_8_frames.npy): [97, 98, 99, 100, 101, 102, 103, 104]
16 frames (eating_spaghetti.npy): [164, 168, 172, 176, 181, 185, 189, 193, 198, 202, 206, 210, 215, 219, 223, 227]
32 frames (eating_spaghetti_32_frames.npy): array([ 47, 51, 55… See the full description on the dataset page: https://huggingface.co/datasets/hf-internal-testing/spaghetti-video.
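A minimal sketch for loading one of the frame arrays listed above from the dataset repository; the array shape noted in the comment is an assumption, as it is not stated in the listing:

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Download one of the pre-sampled frame arrays listed above.
path = hf_hub_download(
    repo_id="hf-internal-testing/spaghetti-video",
    filename="eating_spaghetti_8_frames.npy",
    repo_type="dataset",
)
frames = np.load(path)
# Expected to be a stack of 8 RGB frames, e.g. (8, height, width, 3);
# the exact resolution is not documented in the listing above.
print(frames.shape, frames.dtype)
```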
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
In the push to shape the energy future through materials innovation, lithium–sulfur batteries (LSBs) are a top-of-the-line energy storage system owing to their high theoretical energy density and specific capacity along with low material costs. Despite these strengths, LSBs suffer from the cross-over of soluble polysulfide redox species to the anode, leading to fast capacity fading and inferior cycling stability. Adding to the concern, the insulating character of polysulfides leads to sluggish reaction kinetics. To address these challenges, we construct optimized polysulfide blockers-cum-conversion catalysts by equipping the battery separator with covalent organic framework@graphene (COF@G) composites. We select a crystalline TAPP-ETTB COF for its nitrogen-enriched scaffold with a regular pore geometry, which provides ample lithiophilic sites for strong chemisorption of polysulfides and a catalytic effect on their conversion. In addition, graphene enables high electron mobility, boosting the sulfur redox kinetics. Consequently, a lithium–sulfur battery with a TAPP-ETTB COF@G-based separator demonstrates a high reversible capacity of 1489.8 mA h g⁻¹ at 0.2 A g⁻¹ after the first cycle and good cycling performance (920 mA h g⁻¹ after 400 cycles) together with excellent rate performance (827.7 mA h g⁻¹ at 2 A g⁻¹). The scope and opportunity to harness the designability and synthetic structural control of crystalline organic materials make this a promising domain at the interface of sustainable materials, energy storage, and Li–S chemistry.
This repository contains the mapping from integer id's to actual label names (in HuggingFace Transformers typically called id2label) for several datasets. Current datasets include:
ImageNet-1k
ImageNet-22k (also called ImageNet-21k as there are 21,843 classes)
COCO detection 2017
COCO panoptic 2017
ADE20k (actually, the MIT Scene Parsing benchmark, which is a subset of ADE20k)
Cityscapes
VQAv2
Kinetics-700
RVL-CDIP
PASCAL VOC
Kinetics-400
...
You can read in a label file as follows (using… See the full description on the dataset page: https://huggingface.co/datasets/huggingface/label-files.
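The snippet above is truncated on the listing page; a minimal sketch of one way to read such a label file, assuming the files are stored as JSON in the huggingface/label-files dataset repository and that a file named kinetics400-id2label.json exists there (check the dataset page for the exact file names):

```python
import json
from huggingface_hub import hf_hub_download

# File name is an assumption; the repository hosts one JSON mapping per dataset.
path = hf_hub_download(
    repo_id="huggingface/label-files",
    filename="kinetics400-id2label.json",
    repo_type="dataset",
)
with open(path) as f:
    # JSON keys are strings; convert them to integer class ids.
    id2label = {int(k): v for k, v in json.load(f).items()}

label2id = {v: k for k, v in id2label.items()}
print(id2label[0])
```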
Oxidation kinetic experiments with various crude oil types show two reaction peaks at about 250 °C (482 °F) and 400 °C (752 °F). These experiments lead to the conclusion that the fuel during high-temperature oxidation is an oxygenated hydrocarbon. A new oxidation reaction model has been developed which includes two partially overlapping reactions: low-temperature oxidation followed by high-temperature oxidation. For the fuel oxidation reaction, the new model includes the effects of sand grain size and the atomic hydrogen-carbon (H/C) and oxygen-carbon (O/C) ratios of the fuel. Results based on the new model are in good agreement with the experimental data. Methods have been developed to calculate the atomic H/C and O/C ratios. These methods account for the oxygen in the oxygenated fuel and enable a direct comparison of the atomic H/C ratios obtained from kinetic and combustion tube experiments. The finding that the fuel in kinetic tube experiments is an oxygenated hydrocarbon indicates that oxidation reactions differ between kinetic and combustion tube experiments. A new experimental technique or method of analysis will be required to obtain kinetic parameters for the oxidation reactions encountered in combustion tube experiments and field operations.
The reaction kinetics between CO2 and residual carbon from Colorado oil shale (Mahogany Zone) have been investigated using both isothermal and nonisothermal methods. It was found that oil-shale residual carbon is approximately an order of magnitude more reactive than subbituminous coal char, although the surface areas are similar. The reactivity of the residual carbon was found to vary by a factor of two for samples prepared by retorting the shale at heating rates between 0.033 and 12 °C/min. Since the surface area of the residual carbon is approximately independent of the amount of oil coking, the heating-rate effect cannot be explained by pore filling. Surface areas of the residual organic carbon in shale were estimated by comparing the surface area of retorted shale with that of retorted shale that had been decarbonized by oxidation at 400 °C. Surface areas of 250-400 m²/g and 100-200 m²/g were obtained using CO2 and N2, respectively, as the adsorbed gases. Mercury porosimetry results are also presented.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
I3D Video Features, Labels and Splits for Multicamera Overlapping Datasets Pets-2009, HQFS and Up-Fall
The Inflated 3D (I3D) video features, ground truths, and train/test splits for the multicamera datasets Pets-2009, HQFS, and Up-Fall are available here. We relabeled two datasets (HQFS and Pets-2009) for the task of VAD-MIL under multiple cameras. Three feature dispositions of I3D data are available: I3D-RGB, I3D-OF, and the linear concatenation of these features. These datasets can be used as benchmarks for the video anomaly detection task under multiple instance learning and multiple overlapping cameras.
Preprocessed Datasets
PETS-2009 is a benchmark dataset (https://cs.binghamton.edu/~mrldata/pets2009) aggregating different scene sets with multiple overlapping camera views and distinct events involving crowds. We labeled the scenes at the frame level as anomalous or normal events. Scenes with background, people walking individually or in a crowd, and regular passing of cars are considered normal patterns. Frames with occurrences of people running (individually or in a crowd), crowding of people in the middle of the traffic intersection, and people walking against the flow were considered anomalous patterns. Videos of scenes containing anomalous frames are labeled as anomalous, while videos without any anomalies are marked as normal. The High-Quality Fall Simulation Data (HQFS) dataset (https://iiw.kuleuven.be/onderzoek/advise/datasets/fall-and-adl-meta-data) is an indoor scenario with five overlapping cameras and occurrences of fall incidents. We consider a person falling on the floor an uncommon event. We also relabeled the frame annotations to include the intervals where the person remains lying on the ground after the fall. The multi-class Up-Fall detection dataset (https://sites.google.com/up.edu.mx/har-up/) contains two overlapping camera views and infrared sensors in a laboratory scenario.
Video Feature Extraction
We use Inflated 3D (I3D) features to represent video clips of 16 frames. We use the Video Features library (https://github.com/v-iashin/video_features), which relies on a model pre-trained on the Kinetics-400 dataset. For this procedure, the frame sequence length from which each clip feature is extracted (the window size) and the number of frames to step before extracting the next feature were both set to 16 frames. After feature extraction, each video from each camera corresponds to a matrix of dimension n x 1024, where n is the (variable) number of segments and 1024 is the feature dimension (I3D features from either RGB appearance or optical flow). It is important to note that the videos (bags) are divided into clips with a fixed number of frames; consequently, each video bag contains a variable number of clips. A clip can be completely normal, completely anomalous, or a mix of normal and anomalous frames. Three deep feature dispositions are considered: I3D features from RGB only (1024 features), from optical flow only (1024 features), and the combination of both by simple linear concatenation, as sketched below. We also make 10-crop features available (https://pytorch.org/vision/main/generated/torchvision.transforms.TenCrop.html), yielding 10 crops for each video clip.
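A minimal sketch of the linear concatenation of the two modalities described above; the .npy file names are hypothetical placeholders, since only the per-modality n x 1024 shape is stated here (the archives in the next section define the real on-disk layout):

```python
import numpy as np

# Hypothetical per-video feature files; see the released archives for actual names.
rgb = np.load("pets2009_cam1_video01_rgb.npy")    # shape (n, 1024), I3D RGB features
flow = np.load("pets2009_cam1_video01_flow.npy")  # shape (n, 1024), I3D optical-flow features

assert rgb.shape == flow.shape, "both modalities should have one row per 16-frame clip"

# Linear concatenation of the two modalities: one (n, 2048) matrix per video.
combined = np.concatenate([rgb, flow], axis=1)
print(combined.shape)
```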
File Description
center-crop.zip: Folder with I3D features of Pets-2009, HQFS and Up-Fall datasets;
10-crop.zip: Folder with I3D features (10-crop) of Pets-2009, HQFS and Up-Fall datasets;
gts.zip: Folder with ground truths at frame-level and video-level of Pets-2009, HQFS and Up-Fall datasets;
splits.zip: Folder with lists of training and test splits of Pets-2009, HQFS and Up-Fall datasets.
A portion of the preprocessed I3D feature sets was leveraged in the studies outlined in these publications:
Pereira, S. S., & Maia, J. E. B. (2024). MC-MIL: video surveillance anomaly detection with multi-instance learning and multiple overlapped cameras. Neural Computing and Applications, 36(18), 10527-10543. Available at https://link.springer.com/article/10.1007/s00521-024-09611-3.
Pereira, S. S. L., Maia, J. E. B., & Proença, H. (2024, September). Video Anomaly Detection in Overlapping Data: The More Cameras, the Better?. In 2024 IEEE International Joint Conference on Biometrics (IJCB) (pp. 1-10). IEEE. Available at https://ieeexplore.ieee.org/document/10744502.
Raman spectroscopic data and information on the determination of the polymerisation kinetics, as well as NMR spectroscopic data and information on the kinetics of the depolymerisation experiments, were deposited. Instruments: Bruker Avance II (400 MHz), Bruker Avance III (400 MHz), and an RXN1 spectrometer from Kaiser Optical Systems with a 785 nm laser.
MIT License: https://opensource.org/licenses/MIT
Video training data of LongVU downloaded from https://huggingface.co/datasets/OpenGVLab/VideoChat2-IT
Video
Please download the original videos from the provided links:
BDD100K: bdd.zip
ShareGPTVideo: https://huggingface.co/datasets/ShareGPTVideo/train_video_and_instruction/tree/main/train_300k
CLEVRER: clevrer_qa.zip
DiDeMo: didemo.zip
EgoQA: https://huggingface.co/datasets/ynhe/videochat2_data/resolve/main/egoqa_split_videos.zip
Kinetics-710: k400.zip
MovieChat: moviechat.zip
… See the full description on the dataset page: https://huggingface.co/datasets/shenxq/VideoChat2.