MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
*This is the Train part of the dataset. You can access the Test part via this link.*
The ShanghaiTech Campus dataset is designed for anomaly detection in surveillance videos, aiming to promote generalization across diverse real-world environments. Unlike most existing datasets that contain videos captured from a single, fixed-angle camera, this dataset covers multiple scenes and view angles, making it suitable for developing models that can operate across varied contexts.
It consists of videos recorded from 13 different scenes around the ShanghaiTech University campus, encompassing a range of lighting conditions, backgrounds, and camera perspectives. The dataset includes various types of anomalies, including those caused by sudden or abnormal motion (e.g., chasing, brawling), which are rarely represented in other datasets.
In addition, the dataset provides pixel-level annotations for abnormal regions, enabling both weakly- and fully-supervised approaches to anomaly detection.
This combination of multi-scene diversity, rich motion anomalies, and fine-grained labeling makes the ShanghaiTech Campus dataset a comprehensive and realistic benchmark for surveillance anomaly detection research.
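Because the dataset ships pixel-level masks, frame-level ground truth for evaluation can be derived by marking a frame anomalous whenever any pixel in its mask is flagged. A minimal sketch of that reduction, using plain nested lists as stand-ins for the per-frame annotation maps (the toy masks below are hypothetical, not taken from the dataset):

```python
# Derive frame-level labels from pixel-level anomaly masks:
# a frame counts as anomalous if any pixel in its mask is non-zero.

def frame_labels_from_masks(masks):
    """Return a 0/1 label per frame from per-frame pixel masks."""
    return [1 if any(any(px for px in row) for row in m) else 0
            for m in masks]

# Three toy 2x3 masks: normal, anomalous region, normal.
masks = [
    [[0, 0, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, 0]],
]
print(frame_labels_from_masks(masks))  # [0, 1, 0]
```

In practice the masks would be loaded from the dataset's annotation files (e.g. as NumPy arrays), but the reduction itself is the same any-pixel test.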
If you use this dataset in your research, please cite the following paper:
@INPROCEEDINGS{liu2018ano_pred,
author = {W. Liu and W. Luo and D. Lian and S. Gao},
booktitle = {2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
title = {Future Frame Prediction for Anomaly Detection -- A New Baseline},
year = {2018}
}
For more information and dataset access, please refer to the original publication and its associated resources.
SAIVT-Campus Dataset
Overview
The SAIVT-Campus Database is an abnormal event detection database captured on a university campus, where the abnormal events are caused by the onset of a storm. Contact Dr Simon Denman or Dr Jingxin Xu for more information.
Licensing
The SAIVT-Campus database is © 2012 QUT and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Australia License.
Attribution
To attribute this database, please include the following citation: Xu, Jingxin, Denman, Simon, Fookes, Clinton B., & Sridharan, Sridha (2012) Activity analysis in complicated scenes using DFT coefficients of particle trajectories. In 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2012), 18-21 September 2012, Beijing, China. Available at eprints.
Acknowledging the Database in your Publications
In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications: We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT-Campus database for our research.
Installing the SAIVT-Campus database
After downloading and unpacking the archive, you should have the following structure:
SAIVT-Campus
+-- LICENCE.txt
+-- README.txt
+-- test_dataset.avi
+-- training_dataset.avi
+-- Xu2012 - Activity analysis in complicated scenes using DFT coefficients of particle trajectories.pdf
Notes
The SAIVT-Campus dataset is captured at the Queensland University of Technology, Australia.
It contains two video files from real-world surveillance footage without any actors:
training_dataset.avi (the training dataset)
test_dataset.avi (the test dataset).
This dataset contains a mixture of crowd densities and it has been used in the following paper for abnormal event detection:
Xu, Jingxin, Denman, Simon, Fookes, Clinton B., & Sridharan, Sridha (2012) Activity analysis in complicated scenes using DFT coefficients of particle trajectories. In 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2012), 18-21 September 2012, Beijing, China. Available at eprints.
This paper is also included with the database (Xu2012 - Activity analysis in complicated scenes using DFT coefficients of particle trajectories.pdf). Both video files are one hour in duration.
The normal activities include pedestrians entering or exiting the building, entering or exiting a lecture theatre (yellow door), and going to the counter at the bottom right. The abnormal events are caused by heavy rain outside, and include people running in from the rain, people walking towards the door to exit and then turning back, people wearing raincoats, loitering and standing near the door, and overcrowded scenes. The rain occurs only in the later part of the test dataset.
As a result, we assume that the training dataset contains only normal activities. We have manually annotated the data as follows:
the training dataset does not contain any abnormal scenes
the test dataset separates into two parts: only normal activities occur from 00:00:00 to 00:47:16, and abnormalities are present from 00:47:17 to 01:00:00.
We annotate 00:47:17 as the start time for the abnormal events, as from this point on we begin to observe people stopping or turning back while walking towards the door to exit, which indicates that the rain outside the building has influenced the activities inside the building. Should you have any questions, please do not hesitate to contact Dr Jingxin Xu.
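When working frame by frame, the annotated boundary time has to be converted into a frame index. A small sketch of that conversion, assuming a frame rate of 25 fps (an assumption for illustration; the actual rate should be read from the .avi files themselves, e.g. via OpenCV's CAP_PROP_FPS):

```python
# Map an HH:MM:SS timestamp to the first frame of that second,
# given the video's frame rate.

def timestamp_to_frame(ts, fps):
    """Convert "HH:MM:SS" to a frame index at the given fps."""
    h, m, s = (int(x) for x in ts.split(":"))
    return (h * 3600 + m * 60 + s) * fps

FPS = 25  # assumed; verify against the actual video files
anomaly_start = timestamp_to_frame("00:47:17", FPS)
print(anomaly_start)  # 70925 at 25 fps
```

Frames before `anomaly_start` would then be treated as normal and frames from it onward as abnormal when building frame-level labels for the test video.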
MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
This dataset is designed for traffic surveillance anomaly detection, originally from the WSAL (Weakly-Supervised Anomaly Localization) repository. It consists of 500 short video clips totaling approximately 25 hours of footage. Each clip averages around 1,075 frames, and anomalies, when present, typically span around 80 frames.
Each video is labeled to indicate whether it contains an anomaly or not, enabling both supervised training and evaluation. You can use the labels to develop or compare different anomaly detection methods.
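With one binary label per video, a detector that outputs a per-video anomaly score can be summarised with ROC-AUC. A dependency-free sketch via the rank statistic (equivalent to the Mann-Whitney U); the labels and scores below are toy values for hypothetical clips, not taken from the dataset:

```python
# ROC-AUC as the probability that a randomly chosen anomalous video
# is scored higher than a randomly chosen normal one (ties count 0.5).

def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]          # toy video-level anomaly labels
scores = [0.1, 0.4, 0.35, 0.8]  # toy detector scores
print(roc_auc(labels, scores))  # 0.75
```

For real experiments a library routine such as scikit-learn's `roc_auc_score` would typically be used instead; the point here is only how the video-level labels support evaluation.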
If you use this dataset for your research, please cite the following paper:
@article{wsal_tip21,
author = {Hui Lv and
Chuanwei Zhou and
Zhen Cui and
Chunyan Xu and
Yong Li and
Jian Yang},
title = {Localizing Anomalies from Weakly-Labeled Videos},
journal = {IEEE Transactions on Image Processing (TIP)},
year = {2021}
}
For more details about how the dataset was created and used, see the original WSAL GitHub repository.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
This dataset was collected from smartphone sensors and can be used to analyse the behaviour of a crowd, for example to detect anomalies.
Dataset Characteristics: Time-Series
Subject Area: Computer Science
Associated Tasks: Classification
Instances: 14221
For what purpose was the dataset created?
The key purpose of donating this dataset is to give the research community an opportunity to use it for further research.
Who funded the creation of the dataset? Muhammad Irfan
What do the instances in this dataset represent? One instance represents a movement pattern for a group-based activity.
Are there recommended data splits? No.
Has Missing Values? No
Title: Anomaly Detection in Crowds using Multi Sensory Information
Author: M. Irfan, L. Marcenaro, L. Tokarchuk, and C. Regazzoni, 2018
Published in: 5th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS), Auckland, New Zealand
Link: https://ieeexplore.ieee.org/document/8639151
This paper presents a system capable of detecting unusual activities in crowds from real-world data captured by multiple sensors. Detection is achieved by classifying the distinct movement patterns of people in crowds, which can be categorised as normal or abnormal activities. Statistical features are extracted from the collected data by applying sliding-time-window operations. A model for classifying movements is trained using the Random Forest technique. The system was tested on two datasets collected from mobile phones during social gatherings. Results show that mobile data can be used to detect anomalies in crowds as an alternative to video sensors, with significant performance. Our approach is the first to detect unusual behaviour in crowds from non-visual data, and it is simple to train and easy to deploy. We also release our dataset for public research, as no such dataset was previously available for experiments on detecting unusual crowd behaviours.
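The sliding-time-window step described above can be sketched as follows: slide a fixed-size window over a 1-D sensor stream and emit simple statistics (here mean and variance) per window. The window size, step, and sample values below are illustrative assumptions, not the settings or data used in the paper:

```python
# Extract per-window statistical features from a 1-D signal.

def sliding_window_features(signal, win, step):
    """Return (mean, variance) for each window of length `win`,
    advancing `step` samples at a time."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        feats.append((mean, var))
    return feats

# Toy accelerometer magnitudes (hypothetical values): a calm segment
# followed by an agitated one.
sig = [1.0, 1.2, 0.8, 1.1, 3.0, 3.2, 2.8, 3.1]
feats = sliding_window_features(sig, win=4, step=4)
print(feats)
```

Each feature tuple would then be one training instance for the classifier (a Random Forest in the paper); richer statistics per window (min, max, spectral features, etc.) fit the same loop.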
Citation: Irfan, Muhammad. (2021). Smartphone Dataset for Anomaly Detection in Crowds. UCI Machine Learning Repository. https://doi.org/10.24432/C5Q90H.
BibTeX: @misc{misc_smartphone_dataset_for_anomaly_detection_in_crowds_613,
author = {Irfan, Muhammad},
title = {{Smartphone Dataset for Anomaly Detection in Crowds}},
year = {2021},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: https://doi.org/10.24432/C5Q90H}
}