4 datasets found
  1. ShanghaiTech Campus Dataset (Train-Part)

    • kaggle.com
    Updated Oct 7, 2025
    Cite
    nikan vasei (2025). ShanghaiTech Campus Dataset (Train-Part) [Dataset]. https://www.kaggle.com/datasets/nikanvasei/shanghaitech-campus-dataset
    Available download formats: zip (11,718,494,278 bytes)
    Dataset updated
    Oct 7, 2025
    Authors
    nikan vasei
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    *This is the Train part of the dataset. You can access the Test part via this link.*

    The ShanghaiTech Campus dataset is designed for anomaly detection in surveillance videos, aiming to promote generalization across diverse real-world environments. Unlike most existing datasets that contain videos captured from a single, fixed-angle camera, this dataset covers multiple scenes and view angles, making it suitable for developing models that can operate across varied contexts.

    It consists of videos recorded from 13 different scenes around the ShanghaiTech University campus, encompassing a range of lighting conditions, backgrounds, and camera perspectives. The dataset covers various types of anomalies, including those caused by sudden or abnormal motion (e.g., chasing, brawling), which are rarely represented in other datasets.

    • Number of scenes: 13
    • Number of abnormal events: 130
    • Total frames: 317,398
      • Training frames: 274,515
      • Testing frames: 42,883
    • Regular frames: 300,308
    • Irregular frames: 17,090
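    The frame counts above are internally consistent, which a quick sanity check confirms. The following is a minimal sketch using only the numbers listed in this entry:

```python
# Sanity-check the ShanghaiTech frame statistics: the train/test split
# and the regular/irregular split should each sum to the total count.
total_frames = 317_398
training_frames, testing_frames = 274_515, 42_883
regular_frames, irregular_frames = 300_308, 17_090

assert training_frames + testing_frames == total_frames
assert regular_frames + irregular_frames == total_frames
```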

    In addition, the dataset provides pixel-level annotations for abnormal regions, enabling both weakly- and fully-supervised approaches to anomaly detection.
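    One common use of such pixel-level annotations is to derive weaker frame-level labels from them. The sketch below assumes one binary mask array per frame; the actual mask file format in the release may differ.

```python
import numpy as np

def frame_labels(masks):
    """Derive frame-level (weak) labels from pixel-level anomaly masks:
    a frame is abnormal if any pixel in its mask is flagged.
    The mask format (one binary array per frame) is an assumption."""
    return [int(m.any()) for m in masks]

# Hypothetical example: three frames, the middle one containing an anomaly.
masks = [np.zeros((4, 4)), np.ones((4, 4)), np.zeros((4, 4))]
labels = frame_labels(masks)
```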

    This combination of multi-scene diversity, rich motion anomalies, and fine-grained labeling makes the ShanghaiTech Campus dataset a comprehensive and realistic benchmark for surveillance anomaly detection research.

    Citation

    If you use this dataset in your research, please cite the following paper:

    @INPROCEEDINGS{liu2018ano_pred, 
     author  = {W. Liu and W. Luo and D. Lian and S. Gao}, 
     booktitle = {2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, 
     title   = {Future Frame Prediction for Anomaly Detection -- A New Baseline}, 
     year   = {2018}
    }
    

    For more information and dataset access, please refer to the original publication and its associated resources.

  2. SAIVT-Campus Dataset

    • researchdatafinder.qut.edu.au
    Updated Jun 30, 2016
    Cite
    Dr Simon Denman (2016). SAIVT-Campus Dataset [Dataset]. https://researchdatafinder.qut.edu.au/individual/n2531
    Dataset updated
    Jun 30, 2016
    Dataset provided by
    Queensland University of Technology (QUT)
    Authors
    Dr Simon Denman
    Description

    SAIVT-Campus Dataset

    Overview

    The SAIVT-Campus Database is an abnormal event detection database captured on a university campus, where the abnormal events are caused by the onset of a storm. Contact Dr Simon Denman or Dr Jingxin Xu for more information.

    Licensing

    The SAIVT-Campus database is © 2012 QUT and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Australia License.

    Attribution

    To attribute this database, please include the following citation: Xu, Jingxin, Denman, Simon, Fookes, Clinton B., & Sridharan, Sridha (2012) Activity analysis in complicated scenes using DFT coefficients of particle trajectories. In 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2012), 18-21 September 2012, Beijing, China. available at eprints.

    Acknowledging the Database in your Publications

    In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications: We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT-Campus database for our research.

    Installing the SAIVT-Campus database

    After downloading and unpacking the archive, you should have the following structure:

    SAIVT-Campus
    +-- LICENCE.txt
    +-- README.txt
    +-- test_dataset.avi
    +-- training_dataset.avi
    +-- Xu2012 - Activity analysis in complicated scenes using DFT coefficients of particle trajectories.pdf

    Notes

    The SAIVT-Campus dataset is captured at the Queensland University of Technology, Australia.

    It contains two video files from real-world surveillance footage without any actors:

    • training_dataset.avi (the training dataset)
    • test_dataset.avi (the test dataset)
    

    This dataset contains a mixture of crowd densities and has been used in the following paper for abnormal event detection:

    Xu, Jingxin, Denman, Simon, Fookes, Clinton B., & Sridharan, Sridha (2012) Activity analysis in complicated scenes using DFT coefficients of particle trajectories. In 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2012), 18-21 September 2012, Beijing, China. Available at eprints.

    This paper is also included with the database (Xu2012 - Activity analysis in complicated scenes using DFT coefficients of particle trajectories.pdf). Both video files are one hour in duration.
    

    The normal activities include pedestrians entering or exiting the building, entering or exiting a lecture theatre (yellow door), and going to the counter at the bottom right. The abnormal events are caused by heavy rain outside, and include people running in from the rain, people walking towards the door to exit and turning back, people wearing raincoats, people loitering and standing near the door, and overcrowded scenes. The rain occurs only in the later part of the test dataset.

    As a result, we assume that the training dataset contains only normal activities. We have manually annotated the data as follows:

    • the training dataset contains no abnormal scenes
    • the test dataset separates into two parts: only normal activities occur from 00:00:00 to 00:47:16, and abnormalities are present from 00:47:17 to 01:00:00

    We annotate 00:47:17 as the start time for the abnormal events, as from this time on we begin to observe people stopping or turning back from walking towards the door to exit, which indicates that the rain outside the building has influenced the activities inside the building. Should you have any questions, please do not hesitate to contact Dr Jingxin Xu.
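    The annotated times can be turned into per-frame labels for the one-hour test video. This is a sketch only: the 25 fps frame rate is an assumption (the dataset notes do not state it), so check the actual video file before using it.

```python
# Assumed frame rate; not stated in the dataset notes.
FPS = 25

def to_frame(hh, mm, ss, fps=FPS):
    """Convert a hh:mm:ss timestamp into a frame index."""
    return (hh * 3600 + mm * 60 + ss) * fps

anomaly_start = to_frame(0, 47, 17)   # first abnormal frame (00:47:17)
total_frames = to_frame(1, 0, 0)      # one-hour test video

# 0 = normal, 1 = abnormal
labels = [0] * anomaly_start + [1] * (total_frames - anomaly_start)
```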
    
  3. Traffic Anomaly Dataset (TAD)

    • kaggle.com
    Updated Oct 15, 2025
    Cite
    nikan vasei (2025). Traffic Anomaly Dataset (TAD) [Dataset]. https://www.kaggle.com/datasets/nikanvasei/traffic-anomaly-dataset-tad
    Available download formats: zip (13,379,994,751 bytes)
    Dataset updated
    Oct 15, 2025
    Authors
    nikan vasei
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    This dataset is designed for traffic surveillance anomaly detection, originally from the WSAL (Weakly-Supervised Anomaly Localization) repository. It consists of 500 short video clips totaling approximately 25 hours of footage. Each clip averages around 1,075 frames, and anomalies, when present, typically span around 80 frames.

    • Number of videos: 500
      • Abnormal videos: 250
      • Normal videos: 250
    • Average duration (frames) per clip: ~1,075
    • Average anomaly length (frames): ~80
    • Total duration: ~25 hours
    • Partition:
      • Training set: 400 videos
      • Test set: 100 videos

    Each video is labeled to indicate whether it contains an anomaly or not, enabling both supervised training and evaluation. You can use the labels to develop or compare different anomaly detection methods.
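    The 400/100 partition above can be sketched as follows. This is a hedged illustration, not the official split: the clip names and the normal/abnormal ordering are hypothetical, and the real TAD release defines its own split files.

```python
import random

# Hypothetical clip list: 250 abnormal and 250 normal videos.
clips = [{"name": f"clip_{i:03d}", "anomalous": i < 250} for i in range(500)]

random.seed(0)
random.shuffle(clips)

# 400 training videos, 100 test videos, as described above.
train_set, test_set = clips[:400], clips[400:]
```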

    Citation

    If you use this dataset for your research, please cite the following paper:

    @article{wsal_tip21,
     author  = {Hui Lv and
            Chuanwei Zhou and
            Zhen Cui and
            Chunyan Xu and
            Yong Li and
            Jian Yang},
     title   = {Localizing Anomalies from Weakly-Labeled Videos},
     journal  = {IEEE Transactions on Image Processing (TIP)},
     year   = {2021}
    }
    

    For more details about how the dataset was created and used, see the original WSAL GitHub repository.

  4. Smartphone Dataset for Anomaly Detection in Crowds

    • kaggle.com
    Updated Apr 24, 2024
    Cite
    Rabie El Kharoua (2024). Smartphone Dataset for Anomaly Detection in Crowds [Dataset]. https://www.kaggle.com/datasets/rabieelkharoua/smartphone-dataset-for-anomaly-detection-in-crowds
    Available download formats: zip (271,703 bytes)
    Dataset updated
    Apr 24, 2024
    Authors
    Rabie El Kharoua
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset was collected from smartphone sensors and can be used to analyse crowd behaviour, for example to detect anomalies.

    Dataset Characteristics: Time-Series

    Subject Area: Computer Science

    Associated Tasks: Classification

    Instances: 14,221

    Dataset Information

    For what purpose was the dataset created?

    The dataset was donated to give the research community an opportunity to use it for further research.

    Who funded the creation of the dataset? Muhammad Irfan

    What do the instances in this dataset represent? Each instance represents a movement pattern for a group-based activity.

    Are there recommended data splits? No.

    Has Missing Values? No

    Introductory Paper

    Title: Anomaly Detection in Crowds using Multi Sensory Information

    Authors: M. Irfan, L. Marcenaro, L. Tokarchuk, and C. Regazzoni (2018)

    Venue: 15th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS), Auckland, New Zealand

    Link: https://ieeexplore.ieee.org/document/8639151

    Abstract of Introductory Paper

    This paper presents a system capable of detecting unusual activities in crowds from real-world data captured from multiple sensors. The detection is achieved by classifying the distinct movements of people in crowds; those patterns can differ and can be classified as normal or abnormal activities. Statistical features are extracted from the dataset by applying sliding-time-window operations. A model for classifying movements is trained using the Random Forest technique. The system was tested using two datasets collected from mobile phones during social gatherings. Results show that mobile data can be used to detect anomalies in crowds as an alternative to video sensors, with significant performance. Our approach is the first to detect unusual behaviour in crowds with non-visual data, and it is simple to train and easy to deploy. We also present our dataset for public research, as there is no such dataset available for performing experiments on crowds to detect unusual behaviours.
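    The sliding-time-window feature extraction the abstract describes can be sketched as below. The specific window and step sizes, and the choice of statistics, are assumptions for illustration; the paper's exact configuration may differ.

```python
import statistics

def window_features(signal, win=50, step=25):
    """Compute simple statistical features (mean, std, min, max) over
    fixed-length sliding windows of a 1-D sensor stream.
    Window/step sizes are assumed, not taken from the paper."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append({
            "mean": statistics.mean(w),
            "std": statistics.pstdev(w),
            "min": min(w),
            "max": max(w),
        })
    return feats

# Hypothetical example: a ramp signal of 100 samples yields 3 windows.
feats = window_features(list(range(100)))
```

These per-window feature vectors would then be fed to a classifier such as a Random Forest, as the paper describes.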

    Cite

    Citation: Irfan, Muhammad (2021). Smartphone Dataset for Anomaly Detection in Crowds. UCI Machine Learning Repository. https://doi.org/10.24432/C5Q90H

    BibTeX:

    @misc{misc_smartphone_dataset_for_anomaly_detection_in_crowds_613,
     author    = {Irfan, Muhammad},
     title     = {{Smartphone Dataset for Anomaly Detection in Crowds}},
     year      = {2021},
     howpublished = {UCI Machine Learning Repository},
     note      = {{DOI}: https://doi.org/10.24432/C5Q90H}
    }
