75 datasets found
  1. Mmdetection Dataset

    • universe.roboflow.com
    zip
    Updated Mar 1, 2023
    Cite
    Mishal (2023). Mmdetection Dataset [Dataset]. https://universe.roboflow.com/mishal/mmdetection-iuyiz/model/7
    Explore at:
    zip. Available download formats
    Dataset updated
    Mar 1, 2023
    Dataset authored and provided by
    Mishal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Handwriting Bounding Boxes
    Description

    Mmdetection

    ## Overview
    
    Mmdetection is a dataset for object detection tasks - it contains Handwriting annotations for 566 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  2. mmdetection

    • kaggle.com
    zip
    Updated Apr 27, 2022
    + more versions
    Cite
    Carno Zhao (2022). mmdetection [Dataset]. https://www.kaggle.com/datasets/carnozhao/mmdetection
    Explore at:
    zip (90651227 bytes). Available download formats
    Dataset updated
    Apr 27, 2022
    Authors
    Carno Zhao
    Description

    Dataset

    This dataset was created by Carno Zhao

    Contents

  3. mmdetection_v2.18

    • kaggle.com
    zip
    Updated Dec 13, 2021
    Cite
    Neo (2021). mmdetection_v2.18 [Dataset]. https://www.kaggle.com/mlneo07/mmdetection-v217
    Explore at:
    zip (78322800 bytes). Available download formats
    Dataset updated
    Dec 13, 2021
    Authors
    Neo
    Description
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/addict-2.4.0-py3-none-any.whl' --no-deps
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/yapf-0.31.0-py2.py3-none-any.whl' --no-deps
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/terminal-0.4.0-py3-none-any.whl' --no-deps
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/terminaltables-3.1.0-py3-none-any.whl' --no-deps
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/mmcv_full-1.3.x-py2.py3-none-any/mmcv_full-1.3.16-cp37-cp37m-manylinux1_x86_64.whl' --no-deps
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/pycocotools-2.0.2/pycocotools-2.0.2' --no-deps
    !pip install '/kaggle/input/mmdetection-v217/mmdetection/mmpycocotools-12.0.3/mmpycocotools-12.0.3' --no-deps
    
    !rm -rf mmdetection
    
    !cp -r /kaggle/input/mmdetection-v217/mmdetection/mmdetection-2.18.0 /kaggle/working/
    !mv /kaggle/working/mmdetection-2.18.0 /kaggle/working/mmdetection
    %cd /kaggle/working/mmdetection
    !pip install -e .
    
    %cd ..
    
    # !rm -rf mmdetection
    
    # !git clone https://github.com/open-mmlab/mmdetection.git /kaggle/working/mmdetection
    
  4. mmDetection results

    • springernature.figshare.com
    zip
    Updated Jan 16, 2024
    Cite
    Alex Olar (2024). mmDetection results [Dataset]. http://doi.org/10.6084/m9.figshare.24306013.v1
    Explore at:
    zip. Available download formats
    Dataset updated
    Jan 16, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Alex Olar
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Contains subfolders faster_rcnn_r50_fpn_adhd/ and faster_rcnn_r50_gfap/. Both contain the training configuration, loss curves, metrics, and evaluations on each individual test set against the GT annotations. We share these results for reproducibility.

  5. mmdetection-train-baseline

    • kaggle.com
    zip
    Updated Jul 2, 2023
    Cite
    YAOYAO-BIGHEAD (2023). mmdetection-train-baseline [Dataset]. https://www.kaggle.com/datasets/jiangyunyao/mmdetection-train-baseline
    Explore at:
    zip (12963951453 bytes). Available download formats
    Dataset updated
    Jul 2, 2023
    Authors
    YAOYAO-BIGHEAD
    Description

    Dataset

    This dataset was created by YAOYAO-BIGHEAD

    Contents

  6. Mm Detection Dataset

    • universe.roboflow.com
    zip
    Updated Nov 15, 2025
    + more versions
    Cite
    MM (2025). Mm Detection Dataset [Dataset]. https://universe.roboflow.com/mm-jcrbn/mm-detection-ueltc/model/3
    Explore at:
    zip. Available download formats
    Dataset updated
    Nov 15, 2025
    Dataset authored and provided by
    MM
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    MM Detection

    ## Overview
    
    MM Detection is a dataset for object detection tasks - it contains Objects annotations for 217 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
    
  7. mmdetection-song gao.rar

    • figshare.com
    bin
    Updated Aug 8, 2022
    Cite
    Song Gao (2022). mmdetection-song gao.rar [Dataset]. http://doi.org/10.6084/m9.figshare.20449530.v1
    Explore at:
    bin. Available download formats
    Dataset updated
    Aug 8, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Song Gao
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    As an outstanding method for ocean monitoring, synthetic aperture radar (SAR) has received much attention from scholars in recent years. With rapid advances in SAR technology and image processing, significant progress has also been made in ship detection in SAR images. When dealing with large-scale ships on a wide sea surface, most existing algorithms achieve good detection results. However, small ships in SAR images contain little feature information; they are difficult to distinguish from background clutter, leading to low detection rates and high false-alarm rates. To improve detection accuracy for small-scale ships, we propose an efficient ship detection model based on YOLOX, called YOLO-SD. First, Multi-Scale Convolution (MSC) is proposed to fuse feature information at different scales, resolving the imbalance of semantic information in the lower layers and improving feature extraction. Further, the Feature Transformer Module (FTM) is designed to capture global features and link them to the context, optimizing high-layer semantic information and ultimately achieving excellent detection performance. Extensive experiments on HRSID and LS-SSDD-v1.0 show that YOLO-SD achieves better detection performance than the baseline YOLOX. Compared with other excellent object detection models, YOLO-SD still has an edge in overall performance.
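    The idea behind Multi-Scale Convolution, computing responses at several kernel sizes and fusing them into one map, can be sketched in plain NumPy. This is an illustrative toy, not the YOLO-SD implementation; the box filters and the averaging fusion below are simplified stand-ins for learned convolutions and a learned fusion.

```python
import numpy as np

def box_filter(feat, k):
    """Mean filter with a k x k kernel (odd k), edge-padded.
    A stand-in for a learned convolution at kernel scale k."""
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    out = np.zeros(feat.shape, dtype=float)
    for i in range(k):
        for j in range(k):
            out += padded[i:i + feat.shape[0], j:j + feat.shape[1]]
    return out / (k * k)

def multi_scale_fuse(feat, scales=(1, 3, 5)):
    """Compute one response per kernel scale and fuse them by averaging,
    so coarse and fine context both contribute to the output map."""
    responses = [feat.astype(float) if k == 1 else box_filter(feat, k)
                 for k in scales]
    return np.mean(responses, axis=0)
```

    A real MSC block would use trainable kernels and a learned fusion (for example, concatenation followed by a 1x1 convolution) instead of plain averaging.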

  8. mmdetection 2.3.0rc0

    • kaggle.com
    zip
    Updated Jul 28, 2020
    Cite
    unfinity (2020). mmdetection 2.3.0rc0 [Dataset]. https://www.kaggle.com/datasets/unfinity/mmdetection-230rc0
    Explore at:
    zip (20813900 bytes). Available download formats
    Dataset updated
    Jul 28, 2020
    Authors
    unfinity
    Description

    Dataset

    This dataset was created by unfinity

    Contents

  9. mmdetection-v280

    • kaggle.com
    zip
    Updated Feb 5, 2021
    Cite
    tito (2021). mmdetection-v280 [Dataset]. https://www.kaggle.com/its7171/mmdetection-v280
    Explore at:
    zip (566535761 bytes). Available download formats
    Dataset updated
    Feb 5, 2021
    Authors
    tito
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by tito

    Released under CC0: Public Domain

    Contents

  10. Configuration details of the object detection models implemented in...

    • plos.figshare.com
    xls
    Updated Sep 5, 2025
    Cite
    Parminder Kaur; Anna Grassi; Federica Bonini; Barbara Valle; Marina Serena Borgatti; Giovanni Rivieccio; Agnese Denaro; Leopoldo de Simone; Emanuele Fanfarillo; Paolo Remagnino (2025). Configuration details of the object detection models implemented in MMDetection. Here, FPN = Feature Pyramid Network, AHs = Attention Heads, TLs = Transformer Layers, SGD = Stochastic Gradient Descent. The weight decay value is 0.0001 for all the methods. [Dataset]. http://doi.org/10.1371/journal.pone.0327969.t004
    Explore at:
    xls. Available download formats
    Dataset updated
    Sep 5, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Parminder Kaur; Anna Grassi; Federica Bonini; Barbara Valle; Marina Serena Borgatti; Giovanni Rivieccio; Agnese Denaro; Leopoldo de Simone; Emanuele Fanfarillo; Paolo Remagnino
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Configuration details of the object detection models implemented in MMDetection. Here, FPN = Feature Pyramid Network, AHs = Attention Heads, TLs = Transformer Layers, SGD = Stochastic Gradient Descent. The weight decay value is 0.0001 for all the methods.

  11. Supplemental data for characterization of mixing in nanoparticle...

    • data.niaid.nih.gov
    Updated Mar 25, 2024
    Cite
    Mahr, Christoph; Stahl, Jakob; Gerken, Beeke; Baric, Valentin; Frei, Max; Krause, Florian F.; Grieb, Tim; Schowalter, Marco; Mehrtens, Thorsten; Kruis, Einar; Mädler, Lutz; Rosenauer, Andreas (2024). Supplemental data for characterization of mixing in nanoparticle hetero-aggregates using convolutional neural networks [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8199394
    Explore at:
    Dataset updated
    Mar 25, 2024
    Dataset provided by
    University of Bremen
    Institute of Technology for Nanostructures and Center for Nanointegration Duisburg-Essen
    Leibniz-Institut für Werkstofforientierte Technologien
    Authors
    Mahr, Christoph; Stahl, Jakob; Gerken, Beeke; Baric, Valentin; Frei, Max; Krause, Florian F.; Grieb, Tim; Schowalter, Marco; Mehrtens, Thorsten; Kruis, Einar; Mädler, Lutz; Rosenauer, Andreas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the supplemental data for the manuscript titled Characterization of mixing in nanoparticle hetero-aggregates using convolutional neural networks submitted to Nano Select.

    Motivation:

    Detection of nanoparticles and classification of the material type in scanning transmission electron microscopy (STEM) images can be a tedious task if it has to be done manually. Therefore, a convolutional neural network is trained to do this task for STEM images of TiO2-WO3 nanoparticle hetero-aggregates. The present dataset contains the training data and some jupyter-notebooks that can be used, after installation of the MMDetection toolbox (https://github.com/open-mmlab/mmdetection), to train the CNN. Details are provided in the manuscript submitted to Nano Select and in the comments of the jupyter-notebooks.

    Authors and funding:

    The present dataset was created by the authors. The work was funded by the Deutsche Forschungsgemeinschaft within the priority program SPP2289 under contract numbers RO2057/17-1 and MA3333/25-1.

    Dataset description:

    Four jupyter-notebooks are provided, which can be used for different tasks, according to their names. Details can be found within the comments and markdowns. These notebooks can be run after installation of MMDetection within the mmdetection folder.

    particle_detection_training.ipynb: This notebook can be used for network training.

    particle_detection_evaluation.ipynb: This notebook is for evaluation of a trained network with simulated test images.

    particle_detection_evaluation_experiment.ipynb: This notebook is for evaluation of a trained network with experimental test images.

    particle_detection_measurement_experiment.ipynb: This notebook is for application of a trained network to experimental data.

    In addition, a script titled particle_detection_functions.py is provided which contains functions required by the notebooks. Details can be found within the comments.

    The zip archive training_data.zip contains the training data. The subfolder HAADF contains the images (sorted into training, validation and test sets); the subfolder json contains the annotations (sorted the same way). Each file within the json folder provides the following information for each image:

    aggregat_no: image id, the number of the corresponding image file

    particle_position_x: list of particle position x-coordinates in nm

    particle_position_y: list of particle position y-coordinates in nm

    particle_position_z: list of particle position z-coordinates in nm

    particle_radius: list of volume equivalent particle radii in nm

    particle_type: list of material types, 1: TiO2, 2: WO3

    particle_shape: list of particle shapes: 0: sphere, 1: box, 2: icosahedron

    rotation: list of particle rotations in rad. Each particle is rotated twice by the listed angle (before and after deformation)

    deformation: list of particle deformations. After the first rotation the particle x-coordinates of the particle’s surface mesh are scaled by the factor listed in deformation, y- and z-coordinates are scaled according to 1/sqrt(deformation).

    cluster_index: list of cluster indices for each particle

    initial_cluster_index: list of initial cluster indices for each particle, before primary clusters of the same material were merged

    fractal_dimension: the intended fractal dimension of the aggregate

    fractal_dimension_true: the realized geometric fractal dimension of the aggregate (neglecting particle densities)

    fractal_dimension_weight_true: the realized fractal dimension of the aggregate (including particle densities)

    fractal_prefactor: fractal prefactor

    mixing_ratio_intended: the intended mixing ratio (fraction of WO3 particles)

    mixing_ratio_true: the realised mixing ratio (fraction of WO3 particles)

    mixing_ratio_volume: the realised mixing ratio (fraction of WO3 volume)

    mixing_ratio_weight: the realised mixing ratio (fraction of WO3 weight)

    particle_1_rho: density of TiO2 used for the calculations

    particle_1_size_mean: mean TiO2 radius

    particle_1_size_min: smallest TiO2 radius

    particle_1_size_max: largest TiO2 radius

    particle_1_size_std: standard deviation of TiO2 radii

    particle_1_clustersize: average TiO2 cluster size

    particle_1_clustersize_init: average TiO2 cluster size of primary clusters (before merging into larger clusters)

    particle_1_clustersize_init_intended: intended TiO2 cluster size of primary clusters

    particle_2_rho: density of WO3 used for the calculations

    particle_2_size_mean: mean WO3 radius

    particle_2_size_min: smallest WO3 radius

    particle_2_size_max: largest WO3 radius

    particle_2_size_std: standard deviation of WO3 radii

    particle_2_clustersize: average WO3 cluster size

    particle_2_clustersize_init: average WO3 cluster size of primary clusters (before merging into larger clusters)

    particle_2_clustersize_init_intended: intended WO3 cluster size of primary clusters

    number_of_primary_particles: number of particles within the aggregate

    gyration_radius_geometric: gyration radius of the aggregate (neglecting particle densities)

    gyration_radius_weighted: gyration radius of the aggregate (including particle densities)

    mean_coordination: mean total coordination number (particle contacts)

    mean_coordination_heterogen: mean heterogeneous coordination number (contacts with particles of the different material)

    mean_coordination_homogen: mean homogeneous coordination number (contacts with particles of the same material)

    radius_equiv: list of area equivalent particle radii (in projection)

    k_proj: projection direction of the aggregate: 0: z-direction (axis = 2), 1: x-direction (axis = 1), 2: y-direction (axis = 0)

    polygons: list of polygons that surround the particle (COCO annotation)

    bboxes: list of particle bounding boxes

    aggregate_size: projected area of the aggregate translated into the radius of a circle in nm

    n_pix: number of pixel per image in horizontal and vertical direction (squared images)

    pixel_size: pixel size in nm

    image_size: image size in nm

    add_poisson_noise: 1 if Poisson noise was added, 0 otherwise

    frame_time: simulated frame time (required for Poisson noise)

    dwell_time: dwell time per pixel (required for Poisson noise)

    beam_current: beam current (required for Poisson noise)

    electrons_per_pixel: number of electrons per pixel

    dose: electron dose in electrons per Å²

    add_scan_noise: 1 if scan noise was added, 0 otherwise

    beam misposition: parameter that describes how far the beam can be misplaced in pm (required for scan noise)

    scan_noise: parameter that describes how far the beam can be misplaced in pixel (required for scan noise)

    add_focus_dependence: 1 if a focus effect is included, 0 otherwise

    data_format: data format of the images, e.g. uint8

    There are 24000 training images, 5500 validation images, 5500 test images, and their corresponding annotations. Aggregates and STEM images were obtained with the algorithm explained in the main work. The important data for CNN training is extracted from the files of individual aggregates and consolidated in the subfolder COCO. For training, validation and test data there is a file annotation_COCO.json that includes all information required for CNN training.
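    As a minimal sketch of how the per-image JSON fields listed above might be consumed, consider the following Python snippet. The record is invented for illustration and is not taken from the dataset; real files live under training_data.zip.

```python
# Illustrative per-image record using a subset of the documented fields;
# the values are made up, not taken from the dataset.
record = {
    "aggregat_no": 1,
    "particle_type": [1, 2, 2],          # 1: TiO2, 2: WO3
    "bboxes": [[0, 0, 10, 12], [5, 5, 9, 9], [20, 4, 30, 14]],
    "pixel_size": 0.5,                   # nm per pixel
}

# Fraction of WO3 particles (cf. mixing_ratio_true above)
n_wo3 = sum(1 for t in record["particle_type"] if t == 2)
mixing_ratio = n_wo3 / len(record["particle_type"])

# Bounding-box widths converted from pixels to nm
widths_nm = [(x2 - x1) * record["pixel_size"]
             for x1, y1, x2, y2 in record["bboxes"]]
```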

    The zip archive experiment_test_data.zip includes manually annotated experimental images. All experimental images were filtered as explained in the main work. The subfolder HAADF includes thirteen images. The subfolder json includes an annotation file for each image in COCO format. A single file consolidating all annotations is stored in json/COCO/annotation_COCO.json.

    The zip archive experiment_measurement.zip includes the experimental images investigated in the manuscript. It contains four subfolders corresponding to the four investigated samples. All experimental images were filtered as explained in the manuscript.

    The zip archive particle_detection.zip includes the network that was trained, evaluated and used for the investigation in the manuscript. The network weights are stored in the file particle_detection/logs/fit/20230622-222721/iter_60000.pth. These weights can be loaded with the jupyter-notebook files. Furthermore, a configuration file, which is required by the notebooks, is stored as particle_detection/logs/fit/20230622-222721/config_file.py.

    There is no confidential data in this dataset. It is neither offensive, nor insulting or threatening.

    The dataset was generated to discriminate between TiO2 and WO3 nanoparticles in STEM-images. It might be possible that it can discriminate between different materials if the STEM contrast is similar to the contrast of TiO2 and WO3 but there is no guarantee.

  12. V3Det

    • huggingface.co
    Updated Aug 17, 2023
    Cite
    Jiaqi Wang (2023). V3Det [Dataset]. https://huggingface.co/datasets/myownskyW7/V3Det
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Aug 17, 2023
    Authors
    Jiaqi Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    V3Det: Vast Vocabulary Visual Detection Dataset

    Jiaqi Wang*,
    Pan Zhang*,
    Tao Chu*,
    Yuhang Cao*, 
    Yujie Zhou,
    Tong Wu,
    Bin Wang,
    Conghui He,
    Dahua Lin
    (* equal contribution)
    Accepted to ICCV 2023 (Oral)
    
    
    
    
    
    Links: Paper · Dataset · Codebase · Object Detection
    

    mmdetection: https://github.com/V3Det/mmdetection-V3Det/tree/main/configs/v3det Detectron2:… See the full description on the dataset page: https://huggingface.co/datasets/myownskyW7/V3Det.

  13. MMDetection20_5_13

    • kaggle.com
    zip
    Updated May 14, 2020
    Cite
    Zikai (2020). MMDetection20_5_13 [Dataset]. https://www.kaggle.com/datasets/superkevingit/mmdetection20-5-13
    Explore at:
    zip (28000898 bytes). Available download formats
    Dataset updated
    May 14, 2020
    Authors
    Zikai
    Description

    Dataset

    This dataset was created by Zikai

    Contents

  14. Sartorius: MMDetection [Train] ds

    • kaggle.com
    zip
    Updated Oct 21, 2021
    Cite
    Awsaf (2021). Sartorius: MMDetection [Train] ds [Dataset]. https://www.kaggle.com/datasets/awsaf49/sartorius-mmdetection-train-ds
    Explore at:
    zip (4203930479 bytes). Available download formats
    Dataset updated
    Oct 21, 2021
    Authors
    Awsaf
    Description

    Dataset

    This dataset was created by Awsaf

    Contents

  15. DeepScoresV2

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Jun 7, 2023
    Cite
    Lukas Tuggener; Yvan Putra Satyawan; Alexander Pacha; Jürgen Schmidhuber; Thilo Stadelmann (2023). DeepScoresV2 [Dataset]. http://doi.org/10.5281/zenodo.4012193
    Explore at:
    application/gzip. Available download formats
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Lukas Tuggener; Yvan Putra Satyawan; Alexander Pacha; Jürgen Schmidhuber; Thilo Stadelmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The DeepScoresV2 Dataset for Music Object Detection contains digitally rendered images of written sheet music, together with the corresponding ground truth to fit various types of machine learning models. A total of 151 million instances of music symbols, belonging to 135 different classes, are annotated. The full dataset contains 255,385 images. For most research, the dense version, containing 1,714 of the most diverse and interesting images, should suffice.

    The dataset contains ground truth in the form of:

    • Non-oriented bounding boxes
    • Oriented bounding boxes
    • Semantic segmentation
    • Instance segmentation
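    Since the ground truth provides both oriented and non-oriented boxes, note that the non-oriented (axis-aligned) box of an oriented box is simply the min/max over its corner points. A minimal sketch, with hypothetical corner values:

```python
import numpy as np

def obb_to_aabb(corners):
    """Convert an oriented bounding box, given as four (x, y) corner
    points, to an axis-aligned (x_min, y_min, x_max, y_max) box."""
    pts = np.asarray(corners, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max
```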

    The accompanying paper, The DeepScoresV2 Dataset and Benchmark for Music Object Detection, published at ICPR 2020, can be found here:

    https://digitalcollection.zhaw.ch/handle/11475/20647

    A toolkit for convenient loading and inspection of the data can be found here:

    https://github.com/yvan674/obb_anns

    Code to train baseline models can be found here:

    https://github.com/tuggeluk/mmdetection/tree/DSV2_Baseline_FasterRCNN

    https://github.com/tuggeluk/DeepWatershedDetection/tree/dwd_old

  16. Data archive for 'Automated river barrier detection evidences severe habitat...

    • datasetcatalog.nlm.nih.gov
    Updated Dec 8, 2022
    Cite
    Wang, Yan; Sun, Jingrui; Tao, Juan; Lucas, Martyn C.; Ding, Chengzhi; Chen, Jinnan; He, Daming; Cheng, Hiuyi; Ji, Xuan; Li, Mingbo; Ding, Liuyong (2022). Data archive for 'Automated river barrier detection evidences severe habitat fragmentation of the megadiverse Mekong River ' [Dataset]. http://doi.org/10.5281/zenodo.7397063
    Explore at:
    Dataset updated
    Dec 8, 2022
    Authors
    Wang, Yan; Sun, Jingrui; Tao, Juan; Lucas, Martyn C.; Ding, Chengzhi; Chen, Jinnan; He, Daming; Cheng, Hiuyi; Ji, Xuan; Li, Mingbo; Ding, Liuyong
    Area covered
    Mekong River
    Description

    This repository contains the data used in the paper 'Automated river barrier detection evidences severe habitat fragmentation of the megadiverse Mekong River'. The 'FCOS' folder contains the Python file of the barrier detection model (FCOS ResNext-101-2x-FPN), which was used to train and detect river barriers from remotely sensed photographic images in the MMDetection framework. The 'R_script' folder contains R files used in the paper: Coordinate.R was used to extract coordinates from bounding boxes in each TIF image, and CAFI.R was used to calculate the CAFI index in each sub-catchment. For more information on the MMDetection framework, refer to the following GitHub repository: https://github.com/open-mmlab/mmdetection
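    The coordinate-extraction step performed by Coordinate.R amounts to mapping pixel bounding boxes through the image's geotransform. A minimal Python sketch, assuming a north-up affine geotransform; all parameter values here are hypothetical, and the dataset's actual script is Coordinate.R:

```python
def pixel_to_coord(col, row, origin_x, origin_y, pixel_width, pixel_height):
    """Map a pixel (col, row) in a north-up georeferenced image to map
    coordinates using the image's geotransform parameters."""
    x = origin_x + col * pixel_width
    y = origin_y - row * pixel_height  # row index grows southward
    return x, y

def bbox_center_coord(bbox, origin_x, origin_y, pw, ph):
    """Map coordinate of a bounding-box centre;
    bbox = (x_min, y_min, x_max, y_max) in pixel units."""
    cx = (bbox[0] + bbox[2]) / 2.0
    cy = (bbox[1] + bbox[3]) / 2.0
    return pixel_to_coord(cx, cy, origin_x, origin_y, pw, ph)
```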

  17. Data archive for 'Convolutional neural networks facilitate river barrier...

    • zenodo.org
    • data.niaid.nih.gov
    bin
    Updated May 13, 2023
    Cite
    Jingrui Sun; Chengzhi Ding; Martyn C. Lucas; Juan Tao; Hiuyi Cheng; Jinnan Chen; Mingbo Li; Liuyong Ding; Xuan Ji; Yan Wang; Daming He (2023). Data archive for 'Convolutional neural networks facilitate river barrier detection and evidence severe habitat fragmentation in the Mekong River biodiversity hotspot' [Dataset]. http://doi.org/10.5281/zenodo.7928088
    Explore at:
    bin. Available download formats
    Dataset updated
    May 13, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jingrui Sun; Chengzhi Ding; Martyn C. Lucas; Juan Tao; Hiuyi Cheng; Jinnan Chen; Mingbo Li; Liuyong Ding; Xuan Ji; Yan Wang; Daming He
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Mekong River
    Description

    This repository contains the code and database used in the paper 'Convolutional neural networks facilitate river barrier detection and evidence severe habitat fragmentation in the Mekong River biodiversity hotspot'.

    The 'FCOS' folder contains Python files of the barrier detection model (FCOS ResNext-101-FPN) which was trained to detect river barriers from remotely sensed photographic images in the MMDetection framework.

    The 'R_script' folder contains R files used in the paper. Coordinate.R was used to extract coordinates from bounding boxes in each TIF image. CAFI.R was used to calculate the CAFI index in each sub-catchment.

    The 'Barrier Database' folder contains the 'Training Validation Database' used during the training process, and the 'Mekong River Barrier Database' generated in the study.

    For more information on the MMDetection framework, refer to the following GitHub repository: https://github.com/open-mmlab/mmdetection

  18. Dumpsite Detection Dataset

    • universe.roboflow.com
    zip
    Updated Dec 5, 2024
    + more versions
    Cite
    Igor Dimitrovski (2024). Dumpsite Detection Dataset [Dataset]. https://universe.roboflow.com/igor-dimitrovski-qowpq/dumpsite-detection/model/8
    Explore at:
    zip. Available download formats
    Dataset updated
    Dec 5, 2024
    Dataset authored and provided by
    Igor Dimitrovski
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Waste Bounding Boxes
    Description

    Dataset for Waste/Dumpsite Detection using drone imagery

    Contains 2115 drone images of illegal waste dumpsites

    • 1280 x 1280 px resolution
    • Nadir perspective (camera pointing straight down at a 90-degree angle to the ground)

    Annotations and Images

    • train | valid | test

      • actual images
    • COCO: _annotations_coco.json files in each split directory

    • .parquet files in the data directory with embedded images

    The dataset was collected as part of the [ Raven Scan ] project; more details below.

    [ Raven Scan ]

    Platform Overview

    The Raven Scan platform leverages advanced drone and satellite imagery to enhance waste management and environmental monitoring through cutting-edge technology.

    Utilizing high-resolution images combined with sophisticated image annotation, object detection models, and geospatial analysis, our system offers robust tools to identify illegal dump sites and effectively manage regulated landfills.

    User Guides and Documentation

    Guides

    Explore each feature through our User Guides

    Documentation Page

    Read our official Documentation

    Key Features

    • Dataset Management

      • Manage extensive datasets of drone and satellite images with tools for uploading, categorizing, and maintaining image data.
      • Features include tagging, filtering, and robust data integrity checks to ensure dataset accuracy and usability for environmental monitoring tasks.
    • Image Annotation

      • Annotate high-resolution drone and satellite imagery to help train object detection models specifically designed for precise waste detection.
    • Object Detection Model Training

      • Train sophisticated models with diverse image datasets from drone and satellite sources to enhance detection accuracy across varied environmental conditions.
    • Detection and Monitoring

      • Deploy models, both pre-trained and newly trained, to detect waste sites from aerial perspectives.
      • Results are displayed on a georeferenced map, providing a clear and actionable visual representation.
    • Landfill Management

      • Advanced tools for managing legal landfills include submitting waste forms, waste types, trucks, reports, and more.
      • Integration of 3D point cloud scans derived from drone technology for detailed, real-time monitoring.
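    As an illustration of how a detection in a nadir image might be placed on a georeferenced map, the sketch below converts a pixel position to approximate WGS84 coordinates under a flat-earth, north-aligned assumption. The function name, coordinates, and ground sampling distance are hypothetical and not part of the Raven Scan platform.

```python
import math

def pixel_to_latlon(px, py, img_w, img_h, center_lat, center_lon, gsd_m):
    """Map a pixel in a nadir image to approximate WGS84 coordinates.

    Assumes the image is north-aligned and gsd_m is the ground sampling
    distance in metres per pixel (all values here are illustrative).
    """
    # Offset from the image centre in metres (pixel y grows downward -> south).
    dx_m = (px - img_w / 2) * gsd_m
    dy_m = (img_h / 2 - py) * gsd_m
    # Small-offset approximation: metres to degrees.
    dlat = dy_m / 111_320.0
    dlon = dx_m / (111_320.0 * math.cos(math.radians(center_lat)))
    return center_lat + dlat, center_lon + dlon

# The centre pixel of a 1280 x 1280 image maps back to the image-centre coordinates.
lat, lon = pixel_to_latlon(640, 640, 1280, 1280, 46.05, 14.51, 0.05)
print(lat, lon)
```

    A production system would instead use the drone's geotags and a proper projection library, but the approximation above conveys how a bounding-box centre becomes a map marker.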

    Learn more from our detailed Feature Documentation

    This repository aims to equip researchers, environmental agencies, and policymakers with the tools needed to monitor and respond to environmental challenges efficiently.

    Join us in leveraging these capabilities to maintain ecological integrity and promote sustainable practices in waste management.

    Our complete Project Charter.

    Acknowledgments

    We would like to extend our deepest gratitude to the following organizations and platforms for their invaluable support:

    UNICEF Venture Fund

    We express our profound gratitude to the UNICEF Venture Fund for their generous support of our project. Their commitment to fostering innovation and sponsoring projects that utilize frontier technology is truly commendable and instrumental in driving positive change.

    MMDetection

    A special thanks to the open-source object detection toolbox MMDetection. Your robust tools and extensive resources have significantly accelerated our development process.

    Third Party Notices

    Our project would not have been possible without the myriad of libraries and frameworks that have empowered us along the way. We owe a great debt of gratitude to all the contributors and maintainers of these projects.

    Thank you to everyone who has made this project possible. We couldn't have done it without you!

    Raven Scan uses third-party libraries or other resources that may be distributed under licenses different than the Raven Scan software.

    In the event that we accidentally failed to list a required notice, please bring it to our attention by posting an issue on our GitHub page.

    Each team member has played a pivotal role in bringing this project to fruition, and we are immensely thankful for their hard work and dedication.

  19. MMDetection-3.2.0

    • kaggle.com
    zip
    Updated Dec 30, 2023
    PraMamba (2023). MMDetection-3.2.0 [Dataset]. https://www.kaggle.com/datasets/pramamba/mmdetection-3-2-0
    Explore at:
    zip (2098674771 bytes)
    Available download formats
    Dataset updated
    Dec 30, 2023
    Authors
    PraMamba
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset

    This dataset was created by PraMamba

    Released under Apache 2.0

    Contents

  20. MMDetection Wheel

    • kaggle.com
    zip
    Updated Mar 29, 2025
    I2nfinit3y (2025). MMDetection Wheel [Dataset]. https://www.kaggle.com/datasets/i2nfinit3y/mmdetection-wheel
    Explore at:
    zip (1994613477 bytes)
    Available download formats
    Dataset updated
    Mar 29, 2025
    Authors
    I2nfinit3y
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset

    This dataset was created by I2nfinit3y

    Released under Apache 2.0

    Contents
