100+ datasets found
  1. Movies Fight Detection Dataset

    • academictorrents.com
    bittorrent
    Updated Feb 16, 2016
    Cite
    Nievas, Enrique Bermejo and Suarez, Oscar Deniz and Garcia, Gloria Bueno and Sukthankar, Rahul (2016). Movies Fight Detection Dataset [Dataset]. https://academictorrents.com/details/70e0794e2292fc051a13f05ea6f5b6c16f3d3635
    Explore at:
    bittorrent (409966355 bytes)
    Dataset updated
    Feb 16, 2016
    Dataset authored and provided by
    Nievas, Enrique Bermejo and Suarez, Oscar Deniz and Garcia, Gloria Bueno and Sukthankar, Rahul
    License

    No license specified: https://academictorrents.com/nolicensespecified

    Description

    Whereas the action recognition community has focused mostly on detecting simple actions like clapping, walking or jogging, the detection of fights or, in general, aggressive behaviors has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios, such as prisons, psychiatric or elderly care centers, or even camera phones. After an analysis of previous approaches, we test the well-known Bag-of-Words framework used for action recognition on the specific problem of fight detection, along with two of the best action descriptors currently available: STIP and MoSIFT. For the purpose of evaluation, and to foster research on violence detection in video, we introduce a new video database containing 1000 sequences divided into two groups: fights and non-fights. Experiments on this database and another one with fights from action movies show that fights can be detected with near 90% accuracy.
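    As a concrete illustration of the Bag-of-Words framework mentioned above, the sketch below builds a visual vocabulary from pre-extracted STIP/MoSIFT descriptors, quantizes each video into a word histogram, and trains an SVM. It assumes scikit-learn; the vocabulary size and classifier settings are placeholders, not the authors' exact configuration.

```python
# Minimal Bag-of-Words sketch for fight detection, assuming local
# spatio-temporal descriptors (STIP/MoSIFT) have already been extracted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(train_descriptors, k=500):
    # train_descriptors: (N, D) array pooled from all training videos.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

def bow_histogram(video_descriptors, vocab):
    # Assign each local descriptor to its nearest visual word and build
    # a normalized occurrence histogram for the whole video.
    words = vocab.predict(video_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical usage (train_descriptor_sets / y_train are placeholders):
# vocab = build_vocabulary(np.vstack(train_descriptor_sets))
# X_train = np.stack([bow_histogram(d, vocab) for d in train_descriptor_sets])
# clf = SVC(kernel="rbf").fit(X_train, y_train)  # y_train: 1 = fight, 0 = non-fight
```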

  2. Honey Bee Fighting Detection Dataset

    • universe.roboflow.com
    zip
    Updated Jan 21, 2025
    Cite
    Machine Learning project (2025). Honey Bee Fighting Detection Dataset [Dataset]. https://universe.roboflow.com/machine-learning-project-qy1nr/honey-bee-fighting-detection
    Explore at:
    zip
    Dataset updated
    Jan 21, 2025
    Dataset authored and provided by
    Machine Learning project
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Honeybees Bounding Boxes
    Description

    Honey Bee Fighting Detection

    ## Overview
    
    Honey Bee Fighting Detection is a dataset for object detection tasks - it contains Honeybees annotations for 1,279 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
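    For example, the dataset can be pulled programmatically with the `roboflow` pip package. The sketch below is a hedged example: the workspace/project slugs come from the citation URL above, while the API key, version number, and export format are placeholders.

```python
# Hedged sketch: download this Roboflow Universe dataset locally.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder key
project = rf.workspace("machine-learning-project-qy1nr").project("honey-bee-fighting-detection")
dataset = project.version(1).download("coco")  # version/format are assumptions
```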
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Hockey Fight Detection Dataset

    • opendatalab.com
    • academictorrents.com
    zip
    Updated Sep 22, 2022
    Cite
    Universidad de Castilla-La Mancha (2022). Hockey Fight Detection Dataset [Dataset]. https://opendatalab.com/OpenDataLab/Hockey_Fight_Detection_Dataset
    Explore at:
    zip (13593 bytes)
    Dataset updated
    Sep 22, 2022
    Dataset provided by
    Intel Labs Pittsburgh and Robotics Institute
    Universidad de Castilla-La Mancha
    Description

    Whereas the action recognition community has focused mostly on detecting simple actions like clapping, walking or jogging, the detection of fights or, in general, aggressive behaviors has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios, such as prisons, psychiatric or elderly care centers, or even camera phones. After an analysis of previous approaches, we test the well-known Bag-of-Words framework used for action recognition on the specific problem of fight detection, along with two of the best action descriptors currently available: STIP and MoSIFT. For the purpose of evaluation, and to foster research on violence detection in video, we introduce a new video database containing 1000 sequences divided into two groups: fights and non-fights. Experiments on this database and another one with fights from action movies show that fights can be detected with near 90% accuracy.

  4. Fighting Detection Dataset

    • universe.roboflow.com
    zip
    Updated Nov 23, 2023
    + more versions
    Cite
    fighting (2023). Fighting Detection Dataset [Dataset]. https://universe.roboflow.com/fighting-m17lw/fighting-detection-4htur/dataset/1
    Explore at:
    zip
    Dataset updated
    Nov 23, 2023
    Dataset authored and provided by
    fighting
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fighting Bounding Boxes
    Description

    Fighting Detection

    ## Overview
    
    Fighting Detection is a dataset for object detection tasks - it contains Fighting annotations for 820 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. ucf_crime

    • huggingface.co
    Updated Jul 3, 2023
    Cite
    MyungHoonJin (2023). ucf_crime [Dataset]. https://huggingface.co/datasets/jinmang2/ucf_crime
    Explore at:
    Dataset updated
    Jul 3, 2023
    Authors
    MyungHoonJin
    License

    CC0 1.0: https://choosealicense.com/licenses/cc0-1.0/

    Description

    Real-world Anomaly Detection in Surveillance Videos

    Surveillance videos are able to capture a variety of realistic anomalies. In this paper, we propose to learn anomalies by exploiting both normal and anomalous videos. To avoid annotating the anomalous segments or clips in training videos, which is very time consuming, we propose to learn anomaly through the deep multiple instance ranking framework by leveraging weakly labeled training videos, i.e. the training labels (anomalous or normal) are at video-level instead of clip-level. In our approach, we consider normal and anomalous videos as bags and video segments as instances in multiple instance learning (MIL), and automatically learn a deep anomaly ranking model that predicts high anomaly scores for anomalous video segments. Furthermore, we introduce sparsity and temporal smoothness constraints in the ranking loss function to better localize anomaly during training. We also introduce a new large-scale first of its kind dataset of 128 hours of videos. It consists of 1900 long and untrimmed real-world surveillance videos, with 13 realistic anomalies such as fighting, road accident, burglary, robbery, etc. as well as normal activities. This dataset can be used for two tasks. First, general anomaly detection considering all anomalies in one group and all normal activities in another group. Second, for recognizing each of 13 anomalous activities. Our experimental results show that our MIL method for anomaly detection achieves significant improvement on anomaly detection performance as compared to the state-of-the-art approaches. We provide the results of several recent deep learning baselines on anomalous activity recognition. The low recognition performance of these baselines reveals that our dataset is very challenging and opens more opportunities for future work.

    Problem & Motivation

    One critical task in video surveillance is detecting anomalous events such as traffic accidents, crimes or illegal activities. Generally, anomalous events rarely occur as compared to normal activities. Therefore, to alleviate the waste of labor and time, developing intelligent computer vision algorithms for automatic video anomaly detection is a pressing need. The goal of a practical anomaly detection system is to signal in a timely manner an activity that deviates from normal patterns and to identify the time window of the occurring anomaly. Therefore, anomaly detection can be considered as coarse-level video understanding, which filters out anomalies from normal patterns. Once an anomaly is detected, it can further be categorized into one of the specific activities using classification techniques. In this work, we propose an anomaly detection algorithm using weakly labeled training videos. That is, we only know the video-level labels, i.e. a video is normal or contains an anomaly somewhere, but we do not know where. This is intriguing because we can easily annotate a large number of videos by only assigning video-level labels. To formulate a weakly-supervised learning approach, we resort to multiple instance learning. Specifically, we propose to learn anomalies through a deep MIL framework by treating normal and anomalous surveillance videos as bags and short segments/clips of each video as instances in a bag. Based on training videos, we automatically learn an anomaly ranking model that predicts high anomaly scores for anomalous segments in a video. During testing, a long untrimmed video is divided into segments and fed into our deep network, which assigns an anomaly score to each video segment so that anomalies can be detected.

    Method

    Our proposed approach (summarized in Figure 1) begins with dividing surveillance videos into a fixed number of segments during training. These segments become the instances in a bag. Using both positive (anomalous) and negative (normal) bags, we train the anomaly detection model using the proposed deep MIL ranking loss. (Figure 1: https://www.crcv.ucf.edu/projects/real-world/method.png)
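    For readers who want the gist of the ranking objective described above, here is a minimal PyTorch-style sketch: a hinge ranking term between the highest-scoring segment of an anomalous bag and the highest-scoring segment of a normal bag, plus sparsity and temporal-smoothness terms. The weighting factors are placeholders, not necessarily the paper's values.

```python
import torch

def mil_ranking_loss(anom_scores: torch.Tensor,
                     norm_scores: torch.Tensor,
                     lambda_smooth: float = 8e-5,
                     lambda_sparse: float = 8e-5) -> torch.Tensor:
    # anom_scores / norm_scores: per-segment anomaly scores in [0, 1]
    # for one anomalous bag and one normal bag (1-D tensors).
    ranking = torch.relu(1.0 - anom_scores.max() + norm_scores.max())
    # Temporal smoothness: adjacent segments of the anomalous video
    # should receive similar scores.
    smoothness = ((anom_scores[1:] - anom_scores[:-1]) ** 2).sum()
    # Sparsity: anomalies are assumed to occupy only a few segments.
    sparsity = anom_scores.sum()
    return ranking + lambda_smooth * smoothness + lambda_sparse * sparsity
```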

    UCF-Crime Dataset

    We construct a new large-scale dataset, called UCF-Crime, to evaluate our method. It consists of long untrimmed surveillance videos which cover 13 real-world anomalies, including Abuse, Arrest, Arson, Assault, Road Accident, Burglary, Explosion, Fighting, Robbery, Shooting, Stealing, Shoplifting, and Vandalism. These anomalies are selected because they have a significant impact on public safety. We compare our dataset with previous anomaly detection datasets in Table 1 (https://www.crcv.ucf.edu/projects/real-world/dataset_table.png). For more details about the UCF-Crime dataset, please refer to our paper. A short description of each anomalous event is given below.

    - Abuse: videos showing bad, cruel or violent behavior against children, old people, animals, and women.
    - Burglary: videos showing people (thieves) entering a building or house with the intention to commit theft. It does not include use of force against people.
    - Robbery: videos showing thieves taking money unlawfully by force or threat of force. These videos do not include shootings.
    - Stealing: videos showing people taking property or money without permission. They do not include shoplifting.
    - Shooting: videos showing the act of shooting someone with a gun.
    - Shoplifting: videos showing people stealing goods from a shop while posing as shoppers.
    - Assault: videos showing a sudden or violent physical attack on someone. Note that in these videos the person who is assaulted does not fight back.
    - Fighting: videos displaying two or more people attacking one another.
    - Arson: videos showing people deliberately setting fire to property.
    - Explosion: videos showing the destructive event of something blowing apart. This event does not include videos where a person intentionally sets a fire or sets off an explosion.
    - Arrest: videos showing police arresting individuals.
    - Road Accident: videos showing traffic accidents involving vehicles, pedestrians or cyclists.
    - Vandalism: videos showing deliberate destruction of or damage to public or private property. The term includes property damage, such as graffiti and defacement directed towards any property without permission of the owner.
    - Normal Event: videos where no crime occurred. These videos include both indoor (such as a shopping mall) and outdoor scenes as well as day and night-time scenes.

  6. Thai Boxing Object Detection Dataset

    • universe.roboflow.com
    zip
    Updated Jan 21, 2025
    Cite
    ai lab (2025). Thai Boxing Object Detection Dataset [Dataset]. https://universe.roboflow.com/ai-lab-homm5/thai-boxing-object-detection/dataset/3
    Explore at:
    zip
    Dataset updated
    Jan 21, 2025
    Dataset authored and provided by
    ai lab
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Fighting Bounding Boxes
    Description

    Thai Boxing Object Detection

    ## Overview
    
    Thai Boxing Object Detection is a dataset for object detection tasks - it contains Fighting annotations for 2,760 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
    
  7. Portland, Oregon Test Data Set Freeway Loop Detector Data

    • catalog.data.gov
    • data.virginia.gov
    • +2more
    Updated Jun 16, 2025
    + more versions
    Cite
    USDOT (2025). Portland, Oregon Test Data Set Freeway Loop Detector Data [Dataset]. https://catalog.data.gov/dataset/portland-oregon-test-data-set-freeway-loop-detector-data
    Explore at:
    Dataset updated
    Jun 16, 2025
    Dataset provided by
    USDOT
    Area covered
    Portland, Oregon
    Description

    This set of data files was acquired under USDOT FHWA cooperative agreement DTFH61-11-H-00025 as one of the four test data sets acquired by the USDOT Data Capture and Management program. The freeway data consists of two months of data (Sept 15, 2011 through Nov 15, 2011) from dual-loop detectors deployed in the main line and on-ramps of a Portland-area freeway. The section of I-205 NB covered by this test data set is 10.09 miles long, and the section of I-205 SB covered by this test data set is 12.01 miles long. The data includes flow, occupancy, and speed.

  8. valorant-object-detection

    • huggingface.co
    Updated Dec 22, 2022
    Cite
    Kerem (2022). valorant-object-detection [Dataset]. https://huggingface.co/datasets/keremberke/valorant-object-detection
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 22, 2022
    Authors
    Kerem
    Description

    Dataset Labels

    ['dropped spike', 'enemy', 'planted spike', 'teammate']

      Number of Images
    

    {'valid': 1983, 'train': 6927, 'test': 988}

      How to Use
    

    Install datasets:

    pip install datasets

    Load the dataset:

    from datasets import load_dataset

    ds = load_dataset("keremberke/valorant-object-detection", name="full")
    example = ds['train'][0]

      Roboflow Dataset Page
    

    https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3… See the full description on the dataset page: https://huggingface.co/datasets/keremberke/valorant-object-detection.

  9. Melee Weapon Detector Dataset

    • universe.roboflow.com
    zip
    Updated Apr 26, 2023
    Cite
    Quentin Guillory (2023). Melee Weapon Detector Dataset [Dataset]. https://universe.roboflow.com/quentin-guillory-ubdhh/melee-weapon-detector
    Explore at:
    zip
    Dataset updated
    Apr 26, 2023
    Dataset authored and provided by
    Quentin Guillory
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Melee Weapon Bounding Boxes
    Description

    Melee Weapon Detector

    ## Overview
    
    Melee Weapon Detector is a dataset for object detection tasks - it contains Melee Weapon annotations for 381 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. League of Legends Champion Mini-Map Dataset

    • kaggle.com
    Updated May 8, 2020
    Cite
    Yadola (2020). League of Legends Champion Mini-Map Dataset [Dataset]. http://doi.org/10.34740/kaggle/dsv/1140364
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 8, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Yadola
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    League of Legends is a MOBA (Multiplayer Online Battle Arena) where 2 teams (blue and red) face off. There are 3 lanes, a jungle, and 5 roles. The goal is to take down the enemy Nexus to win the game.

    Content

    This dataset contains plain images and noised images with bounding boxes drawn to identify the champions from within the mini-map throughout the course of a game of League of Legends.

    The champions currently available are (the number shows the class number for the relevant champion):

    0 - Veigar
    1 - Diana
    2 - Vladimir
    3 - Ryze
    4 - Ekko
    5 - Irelia
    6 - Master Yi
    7 - Nocturne
    8 - Pantheon
    9 - Yorick

    A YOLOv3 weights file is also included; it has been trained to identify the above-mentioned champions.
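    A small, hedged lookup for the class numbers listed above, so that YOLO predictions (integer class ids) can be mapped back to champion names:

```python
# Class-id -> champion-name mapping, taken from the list above.
CHAMPION_CLASSES = {
    0: "Veigar", 1: "Diana", 2: "Vladimir", 3: "Ryze", 4: "Ekko",
    5: "Irelia", 6: "Master Yi", 7: "Nocturne", 8: "Pantheon", 9: "Yorick",
}

def class_to_champion(class_id: int) -> str:
    # Returns "unknown" for ids outside the ten annotated champions.
    return CHAMPION_CLASSES.get(class_id, "unknown")
```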

    Acknowledgements

    I would like to thank Riot Games for developing and supporting League of Legends and Make Sense AI for enabling the creation of this dataset.

    Inspiration

    This dataset is available to help encourage and improve the information captured from the mini-map during the course of a League of Legends Game by developers.

  11. Precision, Recall, F1 Score.

    • plos.figshare.com
    xls
    Updated Oct 11, 2023
    Cite
    Sarah Kaleem; Adnan Sohail; Muhammad Usman Tariq; Muhammad Babar; Basit Qureshi (2023). Precision, Recall, F1 Score. [Dataset]. http://doi.org/10.1371/journal.pone.0292587.t004
    Explore at:
    xls
    Dataset updated
    Oct 11, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Sarah Kaleem; Adnan Sohail; Muhammad Usman Tariq; Muhammad Babar; Basit Qureshi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Coronavirus disease (COVID-19), which has caused a global pandemic, continues to have severe effects on human lives worldwide. Characterized by symptoms similar to pneumonia, its rapid spread requires innovative strategies for its early detection and management. In response to this crisis, data science and machine learning (ML) offer crucial solutions to complex problems, including those posed by COVID-19. One cost-effective approach to detect the disease is the use of chest X-rays, which is a common initial testing method. Although existing techniques are useful for detecting COVID-19 using X-rays, there is a need for further improvement in efficiency, particularly in terms of training and execution time. This article introduces an advanced architecture that leverages an ensemble learning technique for COVID-19 detection from chest X-ray images. Using a parallel and distributed framework, the proposed model integrates ensemble learning with big data analytics to facilitate parallel processing. This approach aims to enhance both execution and training times, ensuring a more effective detection process. The model’s efficacy was validated through a comprehensive analysis of predicted and actual values, and its performance was meticulously evaluated for accuracy, precision, recall, and F-measure, and compared to state-of-the-art models. The work presented here not only contributes to the ongoing fight against COVID-19 but also showcases the wider applicability and potential of ensemble learning techniques in healthcare.

  12. KEANE dataset instance structure.

    • plos.figshare.com
    xls
    Updated Jul 8, 2024
    + more versions
    Cite
    Juan R. Martinez-Rico; Lourdes Araujo; Juan Martinez-Romo (2024). KEANE dataset instance structure. [Dataset]. http://doi.org/10.1371/journal.pone.0305362.t004
    Explore at:
    xls
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Juan R. Martinez-Rico; Lourdes Araujo; Juan Martinez-Romo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Disinformation in the medical field is a growing problem that carries a significant risk. Therefore, it is crucial to detect and combat it effectively. In this article, we provide three elements to aid in this fight: 1) a new framework that collects health-related articles from verification entities and facilitates their check-worthiness and fact-checking annotation at the sentence level; 2) a corpus generated using this framework, composed of 10335 sentences annotated in these two concepts and grouped into 327 articles, which we call KEANE (faKe nEws At seNtence lEvel); and 3) a new model for verifying fake news that combines specific identifiers of the medical domain with triplets subject-predicate-object, using Transformers and feedforward neural networks at the sentence level. This model predicts the fact-checking of sentences and evaluates the veracity of the entire article. After training this model on our corpus, we achieved remarkable results in the binary classification of sentences (check-worthiness F1: 0.749, fact-checking F1: 0.698) and in the final classification of complete articles (F1: 0.703). We also tested its performance against another public dataset and found that it performed better than most systems evaluated on that dataset. Moreover, the corpus we provide differs from other existing corpora in its duality of sentence-article annotation, which can provide an additional level of justification of the prediction of truth or untruth made by the model.

  13. Gun Dataset YOLO v8

    • kaggle.com
    Updated Oct 3, 2024
    Cite
    Abuzar Khan (2024). Gun Dataset YOLO v8 [Dataset]. https://www.kaggle.com/datasets/abuzarkhaaan/helmetandguntesting
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 3, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Abuzar Khan
    License

    https://www.reddit.com/wiki/api

    Description

    This dataset contains labeled data for gun detection collected from various videos on YouTube. The dataset has been specifically curated and labeled by me to aid in training machine learning models, particularly for real-time gun detection tasks. It is formatted for easy use with YOLO (You Only Look Once), one of the most popular object detection models.

    Key Features:

    - Source: The videos were sourced from YouTube and feature diverse environments, including indoor and outdoor settings, with varying lighting conditions and backgrounds.
    - Annotations: The dataset is fully labeled with bounding boxes around guns, following the YOLO format (.txt files for annotations). Each annotation provides the class (gun) and the coordinates of the bounding box.
    - YOLO-Compatible: The dataset is ready to be used with any YOLO model (YOLOv3, YOLOv4, YOLOv5, etc.), ensuring seamless integration for object detection training.
    - Realistic Scenarios: The dataset includes footage of guns from various perspectives and angles, making it useful for training models that can generalize to real-world detection tasks.

    This dataset is ideal for researchers and developers working on gun detection systems, security applications, or surveillance systems that require fast and accurate detection of firearms.
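    For reference, the YOLO annotation format mentioned above stores one object per line as "<class_id> <x_center> <y_center> <width> <height>", with coordinates normalized to [0, 1]. The sketch below reads one such label file; the file path is a placeholder.

```python
# Hedged sketch: parse a YOLO-format .txt label file.
from pathlib import Path

def read_yolo_labels(label_path: str):
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue
        class_id, xc, yc, w, h = line.split()
        boxes.append({
            "class_id": int(class_id),            # 0 = gun in this dataset
            "x_center": float(xc), "y_center": float(yc),
            "width": float(w), "height": float(h),  # normalized to image size
        })
    return boxes

# boxes = read_yolo_labels("labels/frame_0001.txt")  # hypothetical path
```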

  14. Persian-Object-Detection-Dataset

    • huggingface.co
    Updated Feb 12, 2025
    Cite
    Ali Pouryousefi (2025). Persian-Object-Detection-Dataset [Dataset]. https://huggingface.co/datasets/AliPouryousefi86/Persian-Object-Detection-Dataset
    Explore at:
    Dataset updated
    Feb 12, 2025
    Authors
    Ali Pouryousefi
    Description

    The AliPouryousefi86/Persian-Object-Detection-Dataset dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  15. Knife vs Pistol Detection

    • gts.ai
    json
    Updated Sep 21, 2024
    Cite
    GTS (2024). Knife vs Pistol Detection [Dataset]. https://gts.ai/dataset-download/knife-vs-pistol-detection/
    Explore at:
    json
    Dataset updated
    Sep 21, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Download the Knife vs Pistol Detection Dataset for developing AI models capable of identifying weapons in images.

  16. Supervised pipeline (test data).

    • plos.figshare.com
    xls
    Updated Oct 11, 2023
    + more versions
    Cite
    Sarah Kaleem; Adnan Sohail; Muhammad Usman Tariq; Muhammad Babar; Basit Qureshi (2023). Supervised pipeline (test data). [Dataset]. http://doi.org/10.1371/journal.pone.0292587.t002
    Explore at:
    xls
    Dataset updated
    Oct 11, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Sarah Kaleem; Adnan Sohail; Muhammad Usman Tariq; Muhammad Babar; Basit Qureshi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Coronavirus disease (COVID-19), which has caused a global pandemic, continues to have severe effects on human lives worldwide. Characterized by symptoms similar to pneumonia, its rapid spread requires innovative strategies for its early detection and management. In response to this crisis, data science and machine learning (ML) offer crucial solutions to complex problems, including those posed by COVID-19. One cost-effective approach to detect the disease is the use of chest X-rays, which is a common initial testing method. Although existing techniques are useful for detecting COVID-19 using X-rays, there is a need for further improvement in efficiency, particularly in terms of training and execution time. This article introduces an advanced architecture that leverages an ensemble learning technique for COVID-19 detection from chest X-ray images. Using a parallel and distributed framework, the proposed model integrates ensemble learning with big data analytics to facilitate parallel processing. This approach aims to enhance both execution and training times, ensuring a more effective detection process. The model’s efficacy was validated through a comprehensive analysis of predicted and actual values, and its performance was meticulously evaluated for accuracy, precision, recall, and F-measure, and compared to state-of-the-art models. The work presented here not only contributes to the ongoing fight against COVID-19 but also showcases the wider applicability and potential of ensemble learning techniques in healthcare.

  17. Event Detection Dataset

    • data.mendeley.com
    • datosdeinvestigacion.conicet.gov.ar
    • +2more
    Updated Jul 11, 2020
    Cite
    Mariano Maisonnave (2020). Event Detection Dataset [Dataset]. http://doi.org/10.17632/7d54rvzxkr.1
    Explore at:
    Dataset updated
    Jul 11, 2020
    Authors
    Mariano Maisonnave
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a manually labeled data set for the task of Event Detection (ED). The task of ED consists of identifying event triggers, i.e., the word that most clearly indicates the occurrence of an event.

    The present data set consists of 2,200 news extracts from The New York Times (NYT) Annotated Corpus, separated into training (2,000) and testing (200) sets. Each news extract contains the plain text with the labels (event mentions), along with two metadata (publication date and an identifier).

    Labels description: We consider as an event any ongoing real-world event or situation reported in the news articles. It is important to distinguish those events and situations that are in progress (or are reported as fresh events) at the moment the news is delivered from past events that are simply brought back, future events, hypothetical events, or events that will not take place. In our data set, we only labeled the first type as an event. Based on this criterion, some words that are typically considered events are labeled as non-event triggers if they do not refer to ongoing events at the time the analyzed news is released. Take for instance the following news extract: "devaluation is not a realistic option to the current account deficit since it would only contribute to weakening the credibility of economic policies as it did during the last crisis." The only word labeled as an event trigger in this example is "deficit", because it is the only ongoing event referred to in the news. Note that the words "devaluation", "weakening" and "crisis" could be labeled as event triggers in other news extracts, where the context of use of these words is different, but not in the given example.

    Further information: For a more detailed description of the data set and the data collection process please visit: https://cs.uns.edu.ar/~mmaisonnave/resources/ED_data.

    Data format: The dataset is split into two folders: training and testing. The first folder contains 2,000 XML files; the second folder contains 200 XML files. Each XML file has the following format.

    <?xml version="1.0" encoding="UTF-8"?>

    The first three tags (pubdate, file-id and sent-idx) contain metadata. The first one is the publication date of the news article that contained the text extract. The next two tags form a unique identifier for the text extract: file-id uniquely identifies a news article, which can hold several text extracts, and sent-idx is the index that identifies the text extract inside the full article.

    The last tag (sentence) delimits the beginning and end of the text extract. Inside that text are the event-trigger tags; each of these tags surrounds one word that was manually labeled as an event trigger.
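    A rough parsing sketch follows, using only the metadata tag names given above (pubdate, file-id, sent-idx, sentence). The event-trigger tag is written here as <event>, which is a guess; the actual tag name is not shown in this listing.

```python
# Hedged sketch: read one labeled extract from this dataset's XML format.
import xml.etree.ElementTree as ET

def read_extract(path: str):
    root = ET.parse(path).getroot()
    sentence = root.find("sentence")
    return {
        "pubdate": root.findtext("pubdate"),
        "file_id": root.findtext("file-id"),
        "sent_idx": root.findtext("sent-idx"),
        # Full sentence text, with trigger markup flattened away.
        "text": "".join(sentence.itertext()) if sentence is not None else "",
        # Words wrapped in the (assumed) <event> trigger tag.
        "triggers": [e.text for e in sentence.iter("event")] if sentence is not None else [],
    }
```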

  18. Trucks Detection Dataset

    • gts.ai
    json
    Updated Jul 8, 2024
    Cite
    GTS (2024). Trucks Detection Dataset [Dataset]. https://gts.ai/dataset-download/trucks-detection-dataset/
    Explore at:
    json
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Explore our Trucks Detection Dataset, featuring 746 annotated images ideal for training machine learning models.

  19. tiny-object-detection-aquarium-dataset

    • huggingface.co
    Updated Oct 2, 2024
    Cite
    Hugging Face Internal Testing Organization (2024). tiny-object-detection-aquarium-dataset [Dataset]. https://huggingface.co/datasets/hf-internal-testing/tiny-object-detection-aquarium-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 2, 2024
    Dataset provided by
    Hugging Face (https://huggingface.co/)
    Authors
    Hugging Face Internal Testing Organization
    Description

    The hf-internal-testing/tiny-object-detection-aquarium-dataset dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  20. Chicken Detector Dataset

    • universe.roboflow.com
    zip
    Updated Jul 18, 2024
    + more versions
    Cite
    computer vision (2024). Chicken Detector Dataset [Dataset]. https://universe.roboflow.com/computer-vision-nhbcp/chicken-detector-opqvg/model/2
    Explore at:
    zip
    Dataset updated
    Jul 18, 2024
    Dataset authored and provided by
    computer vision
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Chicken Bounding Boxes
    Description

    Chicken Detector

    ## Overview
    
    Chicken Detector is a dataset for object detection tasks - it contains Chicken annotations for 562 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    