3 datasets found
  1.

    Accident Detection Model Dataset

    • universe.roboflow.com
    zip
    Updated Apr 8, 2024
    Cite
    Accident detection model (2024). Accident Detection Model Dataset [Dataset]. https://universe.roboflow.com/accident-detection-model/accident-detection-model/model/1
    Explore at:
    259 scholarly articles cite this dataset (View in Google Scholar)
    Available download formats: zip
    Dataset updated
    Apr 8, 2024
    Dataset authored and provided by
    Accident detection model
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Accident Bounding Boxes
    Description

    Accident-Detection-Model

    The Accident Detection Model is built with YOLOv8, Google Colab, Python, Roboflow, deep learning, OpenCV, machine learning, and artificial intelligence. It can detect an accident from a live camera feed, image, or video. The model is trained on a dataset of 3200+ images, annotated on Roboflow.

    Problem Statement

    • Road accidents are a major problem in India, with thousands of people losing their lives and many more suffering serious injuries every year.
    • According to the Ministry of Road Transport and Highways, India witnessed around 4.5 lakh road accidents in 2019, which resulted in the deaths of more than 1.5 lakh people.
    • The age range that is most severely hit by road accidents is 18 to 45 years old, which accounts for almost 67 percent of all accidental deaths.

    Accidents survey

    ![Survey](https://user-images.githubusercontent.com/78155393/233774342-287492bb-26c1-4acf-bc2c-9462e97a03ca.png)

    Literature Survey

    • Sreyan Ghosh (Mar 2019): developed a system using a deep learning convolutional neural network trained to classify video frames as accident or non-accident.
    • Deeksha Gour (Sep 2019): uses computer vision, neural networks, deep learning, and various approaches and algorithms to detect objects.

    Research Gap

    • Lack of real-world data: we trained the model on more than 3200 images.
    • Large interpretability time and space needed: we use Google Colab to reduce the time and space required.
    • Outdated versions in previous works: we are using the latest version, YOLOv8.

    Proposed methodology

    • We are using YOLOv8 to train on our custom dataset of 3200+ images, collected from different platforms.
    • After training for 25 iterations, the model is ready to detect an accident with a significant probability.

    Model Set-up

    Preparing Custom dataset

    • We collected 1200+ images from different sources such as YouTube, Google Images, and Kaggle.com.
    • We then annotated all of them individually on a tool called Roboflow.
    • During annotation we marked images with no accident as NULL, and drew a bounding box around the site of the accident in images containing one.
    • We then divided the dataset into train, val, and test splits in the ratio 8:1:1.
    • As the final step, we downloaded the dataset in YOLOv8 format.
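The annotation and split steps above can be sketched in plain Python. This is an illustrative reconstruction, not the project's actual code: `to_yolo_line` shows what a YOLOv8-format label line looks like (class id followed by a normalized center-point box), and `split_8_1_1` performs the 8:1:1 train/val/test division.

```python
import random

def to_yolo_line(cls, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to a YOLO label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

def split_8_1_1(filenames, seed=0):
    """Shuffle a list of image filenames and split it into
    train/val/test in the 8:1:1 ratio used above."""
    files = list(filenames)
    random.Random(seed).shuffle(files)
    n_train = int(len(files) * 0.8)
    n_val = int(len(files) * 0.1)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])
```

For example, a full-width accident box in a 640x480 frame becomes `0 0.500000 0.500000 1.000000 1.000000`, which is the line format Roboflow exports for YOLOv8.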
      #### Using Google Colab
    • We are using Google Colaboratory to code this model because Colab provides a GPU, which is faster than most local environments.
    • Google Colab lets you write and run Python code in Jupyter notebooks, which blend code, text, and visualisations in a single document.
    • Users can run individual code cells and quickly view the results, which is helpful for experimenting and debugging. Notebooks also support visualisations built with well-known frameworks such as Matplotlib, Seaborn, and Plotly.
    • In Google Colab, we first changed the runtime to GPU.
    • We cross-checked this by running the command `!nvidia-smi`.
      #### Coding
    • First of all, we installed YOLOv8 with the command `!pip install ultralytics==8.0.20`.
    • We then verified the install with `from ultralytics import YOLO` and `from IPython.display import display, Image`.
    • Then we mounted our Google Drive account with `from google.colab import drive` and `drive.mount('/content/drive')`.
    • Then we ran the training process: `%cd /content/drive/MyDrive/Accident Detection model` followed by `!yolo task=detect mode=train model=yolov8s.pt data=data.yaml epochs=1 imgsz=640 plots=True`.
    • After training, we validated and tested the model with `!yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml` and `!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt conf=0.25 source=data/test/images`.
    • To get results from any video or image, we ran `!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt source="/content/drive/MyDrive/Accident-Detection-model/data/testing1.jpg/mp4"`.
    • The results are stored in the runs/detect/predict folder.
      Hence our model is trained, validated, and tested, and is able to detect accidents in any video or image.
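The `conf=0.25` argument in the predict commands above keeps only detections scoring at or above that confidence. A minimal sketch of that filtering logic (the tuple layout here is hypothetical for illustration, not the Ultralytics result type):

```python
def filter_detections(detections, conf_threshold=0.25):
    """Keep only detections whose confidence meets the threshold.
    Each detection is a (label, confidence, box) tuple; this shape
    is illustrative, not the Ultralytics results object."""
    return [d for d in detections if d[1] >= conf_threshold]

raw = [
    ("accident", 0.91, (120, 80, 310, 260)),
    ("accident", 0.12, (400, 50, 450, 90)),   # below threshold, dropped
    ("accident", 0.33, (60, 200, 180, 330)),
]
kept = filter_detections(raw)
```

Raising the threshold trades missed accidents for fewer false alarms; 0.25 is a permissive default that favours recall.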

    Challenges I ran into

    I ran into three major problems while making this model:

    • I had difficulty saving the results in a folder; since YOLOv8 is the latest version and still under development, I read some blogs and referred to Stack Overflow, and learned that v8 needs an extra argument, `save=True`, to save results to a folder.
    • I was facing a problem on the CVAT website because I was not sure what
  2.

    Aquarium Shrimp Detection (caridina_neocaridina) Dataset

    • universe.roboflow.com
    zip
    Updated May 25, 2023
    Cite
    Dee Dee (2023). Aquarium Shrimp Detection (caridina_neocaridina) Dataset [Dataset]. https://universe.roboflow.com/dee-dee-b9kev/aquarium-shrimp-detection-caridina_neocaridina/model/2
    Explore at:
    Available download formats: zip
    Dataset updated
    May 25, 2023
    Dataset authored and provided by
    Dee Dee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Caridina And NeoCardina Polygons
    Description

    ![](https://drive.google.com/uc?id=1x6OsMmimLrwrYwiNm9EuIDh0-GThLik-)

    Project Overview: The Caridina and Neocaridina Shrimp Detection Project aims to develop and improve computer vision algorithms for detecting and distinguishing between different shrimp varieties. The project is centered around aquarium fish-keeping hobbyists and how computer vision can help improve the care of dwarf shrimp. It will focus on zoning a feeding area and on tracking and counting caridina shrimp in that area.
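The zoning-and-counting workflow described above hinges on one geometric primitive: testing whether a detected shrimp's centroid lies inside the feeding-zone polygon. A minimal stdlib sketch using ray casting (the zone and centroid coordinates are made up for illustration):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: cast a horizontal ray from (x, y) and toggle
    'inside' each time it crosses a polygon edge."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def count_in_zone(centroids, zone):
    """Count detection centroids that fall inside the feeding zone."""
    return sum(point_in_polygon(x, y, zone) for x, y in centroids)

# Hypothetical feeding zone and detected shrimp centroids (pixels):
zone = [(100, 100), (400, 100), (400, 300), (100, 300)]
shrimp = [(150, 200), (390, 120), (500, 250)]  # third is outside
```

The Roboflow polygon-zone notebook linked below wraps the same idea with real detections; this sketch only shows the counting core.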

    Caridina and neo-caridina shrimp are two distinct species that require different water parameters for optimal health. Neocaridina shrimp are generally hardier and easier to keep than caridina species, while caridina shrimp are known for their striking, distinctive patterns. The body structure of the two species is similar; however, there are specific features that should allow enough sensitivity to at least distinguish between caridina varieties.

    Descriptions of Each Class Type: The dataset for this project includes thirteen different class types. The neo-caridina species have been grouped together to test if the model can distinguish between caridina and neo-caridina shrimp. The remaining classes are all different types of caridina shrimp.

    The RGalaxyPinto and BGalaxyPinto varieties are caridina shrimp, with the only difference between them being color: one is wine-red while the other is dark blue-black. Both varieties have distinctive spots on the head region and stripes on their backs, making them ideal for testing the model's ability to distinguish between colors.

    ![](https://drive.google.com/uc?id=19zPYu8YbCiRHUF9K_3kCsyw0X2Tog-Ts) ![](https://drive.google.com/uc?id=1Ay728IysDP8yMCwPEi743Bp6mnq5Xrix)
    ![](https://drive.google.com/uc?id=1Asa3DwuWop5UDpBThHgGG6otBSyXgJTV)

    The CRS-CBS (Crystal Red Shrimp and Crystal Black Shrimp) have patterns similar to the Panda Bee shrimp, but the hues are different: Panda shrimp tend to be a deeper and richer color than CRS-CBS shrimp, while CRS-CBS tend to have thicker white rings.

    ![](https://drive.google.com/uc?id=1AXlBcHGGZ9VEnNuoxeEFZf0DTPQa5hTR) ![](https://drive.google.com/uc?id=1BO2DwW77AqzDrj3xP9VOEYOXSP4wgRzz)
    ![](https://drive.google.com/uc?id=19yO42UW_ai11Da3KgaEiUEHn0OnJc0As)

    The Panda Bee variety, on the other hand, is known for its panda-like pattern of white and black/red rings. Its color rings tend to be thicker and more pronounced than those of the Crystal Red/Black Shrimp.

    Within the Caridina species, there are various tiger varieties. These include Fancy Tiger, Raccoon Tiger, Tangerine Tiger, Orange Eyed Tiger (Blonde and Full Body). All of these have stripes along the sides of their bodies. Fancy Tiger shrimp have a similar color to CRS, but with a tiger stripe pattern. Raccoon Tiger and Orange Eyed Tiger Blonde look very similar, but the body of the Raccoon Tiger appears larger, and the Orange Eyed Tiger is known for its orange eyes. Tangerine Tigers vary in stripe pattern and can often be confused with certain neo-caridina, specifically yellow or orange varieties.

    ![](https://drive.google.com/uc?id=1APx9jQ5WUdPbv1US8ihOEBpVBjvhN0Z3) ![](https://drive.google.com/uc?id=1B6MbiN9FY9fomf6-P6zy-jkoGJKEiXlW) ![](https://drive.google.com/uc?id=1A3qYXbPkqjeK2oCJfSLAPwEsEZN9nw8NN)
    ![](https://drive.google.com/uc?id=19ukHly3uZ05FeGdW_hVBWwlHRFvgnMMC) ![](https://drive.google.com/uc?id=1AztJj471aIWcRYHNC1lrJse7raO2dUqm)

    The remaining classes are popular favorites for breeding, with distinct color patterns: Bluebolt, Shadow Mosura, White Bee/Golden Bee, and King Kong Bee.

    ![](https://drive.google.com/uc?id=19yEpuJ6ENmkcImu0OfCzliITP_UnCNoM) ![](https://drive.google.com/uc?id=19uglS20nyTSi-_b1ls8f09cIuJUHOpSm)
    ![](https://drive.google.com/uc?id=1AbbCVRnlIQL1MlqY3MJnX9t2WVdyq2zJ)

    Links to External Resources: Here are some resources that provide additional information on the shrimp varieties and other resources used in this project:

    • Caridina Shrimp: https://en.wikipedia.org/wiki/Bee_shrimp
    • Neo-Caridina Shrimp: https://en.wikipedia.org/wiki/Neocaridina
    • Roboflow Polygon Zoning/Tracking/Counting: https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-detect-and-count-objects-in-polygon-zone.ipynb
    • Roboflow SAM: https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb
    • Ultralytics Hub: https://github.com/ultralytics/hub
    
  3.

    Cracks In Concrete Images Dataset

    • universe.roboflow.com
    zip
    Updated Mar 20, 2025
    Cite
    GOOGLE COLAB (2025). Cracks In Concrete Images Dataset [Dataset]. https://universe.roboflow.com/google-colab-cmvo4/cracks-in-concrete-images-n50em/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 20, 2025
    Dataset authored and provided by
    GOOGLE COLAB
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    POSITIVE Polygons
    Description

    CRACKS IN CONCRETE IMAGES

    ## Overview
    
    CRACKS IN CONCRETE IMAGES is a dataset for instance segmentation tasks - it contains POSITIVE annotations for 773 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
