3 datasets found
  1. 332_150x150 Dataset

    • universe.roboflow.com
    zip
    Updated Sep 13, 2024
    Cite
    yolov10 (2024). 332_150x150 Dataset [Dataset]. https://universe.roboflow.com/yolov10-7irmg/332_150x150/model/1
    Available download formats: zip
    Dataset updated: Sep 13, 2024
    Dataset authored and provided by: yolov10
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
    Variables measured: 332_150x150 Bounding Boxes
    Description

    332_150x150

    ## Overview
    
    332_150x150 is a dataset for object detection tasks; it contains 332_150x150 bounding-box annotations for 316 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
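    
    If you prefer to pull the data programmatically, a minimal download sketch using the `roboflow` Python package is shown below; the workspace and project slugs are taken from the dataset URL in the citation above, while the API key and the `yolov8` export format are placeholders, not part of this listing.
    
    # Minimal download sketch (pip install roboflow). Slugs come from the dataset URL;
    # the API key and export format are placeholders/assumptions.
    from roboflow import Roboflow
    
    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("yolov10-7irmg").project("332_150x150")
    dataset = project.version(1).download("yolov8")  # writes the export to a local folder
    print(dataset.location)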
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  2. Cartographic Sign Detection Dataset (CaSiDD)

    • zenodo.org
    bin, txt, zip
    Updated Aug 27, 2025
    Cite
    Remi Petitpierre; Jiaming Jiang (2025). Cartographic Sign Detection Dataset (CaSiDD) [Dataset]. http://doi.org/10.5281/zenodo.16925731
    Available download formats: txt, zip, bin
    Dataset updated: Aug 27, 2025
    Dataset provided by: EPFL
    Authors: Remi Petitpierre; Jiaming Jiang
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
    Time period covered: Aug 27, 2025
    Description

    The Cartographic Sign Detection Dataset (CaSiDD) comprises 796 manually annotated historical map samples, corresponding to 18,750 cartographic signs, such as icons and symbols. Moreover, the signs are categorized into 24 distinct classes, such as tree, mill, hill, religious edifice, or grave. The original images are part of the Semap dataset [1].

    The dataset is published in the context of R. Petitpierre's PhD thesis: Studying Maps at Scale: A Digital Investigation of Cartography and the Evolution of Figuration [2]. Details on the annotation process and statistics on the annotated cartographic signs are provided in the manuscript.

    Organization of the data

    The data is organized following the YOLO dataset format, with one plain-text label file per image (see the label syntax below).

    project_root/
    ├── classes.txt
    ├── images/
    │   ├── train/
    │   │   ├── image1.png
    │   │   └── image2.png
    │   └── val/
    │       ├── image3.png
    │       └── image4.png
    └── labels/
        ├── train/
        │   ├── image1.txt
        │   └── image2.txt
        └── val/
            ├── image3.txt
            └── image4.txt

    Label syntax

    The labels are stored in separate text files, one for each image. In the text files, object classes and coordinates are stored line by line, using the following syntax:

    class_id x_center y_center width height

    Where x is the horizontal axis. The dimensions are expressed relative to the size of the labeled image. Example:

    13 0.095339 0.271003 0.061719 0.027161
    1 0.154258 0.490052 0.017370 0.019010
    8 0.317982 0.556484 0.017370 0.014063
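    
    As a sanity check, the minimal Python sketch below converts one such normalized label line back to pixel coordinates; the image size used here (1000 x 1000 px) is only an assumption for illustration.
    
    def yolo_to_pixel_box(line, img_w, img_h):
        # Parse "class_id x_center y_center width height" (relative coordinates)
        # and return (class_id, x_min, y_min, x_max, y_max) in pixels.
        class_id, xc, yc, w, h = line.split()
        xc, yc = float(xc) * img_w, float(yc) * img_h
        w, h = float(w) * img_w, float(h) * img_h
        return int(class_id), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2
    
    # First annotation from the example above, assuming a 1000 x 1000 px image
    print(yolo_to_pixel_box("13 0.095339 0.271003 0.061719 0.027161", 1000, 1000))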

    Classes

    0 battlefield
    1 tree
    2 train (e.g. wagon)
    3 mill (watermill or windmill)
    4 bridge
    5 settlement or building
    6 army
    7 grave
    8 bush
    9 marsh
    10 grass
    11 vine
    12 religious monument
    13 hill/mountain
    14 cannon
    15 rock
    16 tower
    17 signal or survey point
    18 gate (e.g. city gate)
    19 ship/boat/shipwreck
    20 station (e.g. metro/tram/train station)
    21 dam/lock
    22 harbor
    23 well/basin/reservoir
    24 miscellaneous (e.g. post office, spring, hospital, school, etc.)

    Model weights

    A YOLOv10 model yolov10_single_class_model.pt, trained as described in [2], is provided for convenience and reproducibility. The model does not support multi-class object detection. The YOLOv10 implementation used is distributed by Ultralytics [3].
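    
    A minimal inference sketch with the Ultralytics package is given below; it assumes a recent Ultralytics release with YOLOv10 support, and the image path is purely illustrative.
    
    # Inference sketch (pip install ultralytics); requires a release with YOLOv10 support.
    from ultralytics import YOLO
    
    model = YOLO("yolov10_single_class_model.pt")        # weights provided with this record
    results = model("images/val/image3.png", conf=0.25)  # illustrative image path
    for r in results:
        print(r.boxes.xyxy, r.boxes.conf)                # pixel-space boxes and confidence scores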

    Descriptive statistics

    Number of distinct classes: 24 + misc
    Number of image samples: 796
    Number of annotations: 18,750
    Study period: 1492–1948.

    Use and Citation

    For any mention of this dataset, please cite:

    @misc{casidd_petitpierre_2025,
      author    = {Petitpierre, R{\'{e}}mi and Jiang, Jiaming},
      title     = {{Cartographic Sign Detection Dataset (CaSiDD)}},
      year      = {2025},
      publisher = {EPFL},
      url       = {https://doi.org/10.5281/zenodo.16278380}
    }
    
    @phdthesis{studying_maps_petitpierre_2025,
      author = {Petitpierre, R{\'{e}}mi},
      title  = {{Studying Maps at Scale: A Digital Investigation of Cartography and the Evolution of Figuration}},
      year   = {2025},
      school = {EPFL}
    }

    Corresponding author

    Rémi Petitpierre - remi.petitpierre@epfl.ch

    Work ethics

    85% of the data were annotated by RP. The remainder was annotated by JJ, a master's student from EPFL, Switzerland.

    License

    This project is licensed under the CC BY 4.0 License. See the license_images file for details about the respective reuse policy of digitized map images.

    Liability

    We do not assume any liability for the use of this dataset.

    References

    1. Petitpierre R., Gomez Donoso D., Kriesel B. (2025) Semantic Segmentation Map Dataset (Semap). EPFL. https://doi.org/10.5281/zenodo.16164781
    2. Petitpierre R. (2025) Studying Maps at Scale: A Digital Investigation of Cartography and the Evolution of Figuration. PhD thesis. École Polytechnique Fédérale de Lausanne.
    3. Jocher G. et al. (2024) Ultralytics YOLO. v8.3.39. https://github.com/ultralytics/ultralytics
  3. Real-Time Detection of Bangla Sign Language for Shopkeeper-Customer Communication

    • data.mendeley.com
    Updated Jun 24, 2025
    Cite
    Amit Azim Amit (2025). Real-Time Detection of Bangla Sign Language for Shopkeeper-Customer Communication [Dataset]. http://doi.org/10.17632/kyb87r9x2w.1
    Dataset updated: Jun 24, 2025
    Authors: Amit Azim Amit
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)

    Description

    This dataset provides 6,480 labeled images of static hand gestures depicting 14 distinct signs of Bangla Sign Language (BdSL). It is aimed at advancing real-time communication tools for deaf and hard-of-hearing people, particularly in commercial settings such as shopkeeper-customer interactions. The images were contributed voluntarily by people of different ages and backgrounds and were captured with a variety of smartphone cameras (13 MP and above) under varied real-life conditions, including changes in lighting (natural and low light), hand position, and background. All cameras used auto-exposure and default settings, without manual adjustments or filters.

    Each image in the dataset corresponds to one of 14 commonly used Bangla words or phrases: "আমি" (I), "আপনি" (You), "স্যার" (Sir), "প্যাকেট" (Packet), "বিস্কুট" (Biscuit), "খাওয়া" (Eat), "এক" (One), "দুই" (Two), "তিন" (Three), "চার" (Four), "পাঁচ" (Five), "ওজন" (Weight), "টাকা" (Money), and "আমি তোমাকে ভালোবাসি" (I love you). The data is split into two groups to serve different machine learning purposes:
    
    • Training (Detection): images annotated with bounding boxes, intended for training object detection models such as YOLOv10.
    • Testing (Recognition): a mix of unlabeled and class-labeled images that can be used to test and train a gesture classification model.
    
    All images are in JPG format and have been filtered and compressed to reduce file size while preserving image quality, which makes the dataset more accessible to researchers with limited computational resources. The dataset is particularly useful for work on:
    
    • sign language recognition
    • human-computer interaction
    • assistive technologies for deaf and hard-of-hearing people
    
    By enabling accurate detection and recognition of Bangla Sign Language gestures, the dataset promotes inclusiveness and helps close the communication gap between hearing-impaired people and the rest of society. It can also serve as a benchmark corpus for future studies of regional sign languages, which are usually underrepresented in global datasets.
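    
    As an illustration, a hedged fine-tuning sketch with the Ultralytics package follows; it assumes the Training (Detection) split has been converted to the Ultralytics YOLO layout and described by a dataset config file, and the checkpoint and file names are placeholders rather than part of this record.
    
    # Hypothetical fine-tuning sketch (pip install ultralytics). bdsl_data.yaml would list
    # the train/val image folders and the 14 BdSL class names; paths and names are assumptions.
    from ultralytics import YOLO
    
    model = YOLO("yolov10n.pt")                                # small pretrained checkpoint (assumed name)
    model.train(data="bdsl_data.yaml", epochs=100, imgsz=640)  # train the detector
    metrics = model.val()                                      # evaluate on the validation split
    print(metrics.box.map)                                     # mean AP over IoU 0.50:0.95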
