6 datasets found
  1. Efficient D7 D0 Dataset

    • universe.roboflow.com
    zip
    Updated Oct 30, 2021
    Cite
    samriti dogra (2021). Efficient D7 D0 Dataset [Dataset]. https://universe.roboflow.com/samriti-dogra/efficient-d7-d0/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 30, 2021
    Dataset authored and provided by
    samriti dogra
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Books, Phones (bounding boxes)
    Description

    Efficient D7 D0

    ## Overview
    
    Efficient D7 D0 is a dataset for object detection tasks - it contains Books and Phones annotations for 555 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
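
    A minimal sketch of pulling this dataset with the `roboflow` pip package, assuming you have a Roboflow account and API key; the workspace and project slugs are taken from the dataset URL above, and the COCO export format is just one example of the formats Roboflow offers:

    ```python
    # pip install roboflow
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder; use your own key
    project = rf.workspace("samriti-dogra").project("efficient-d7-d0")
    dataset = project.version(1).download("coco")  # downloads and extracts locally
    print(dataset.location)
    ```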
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  2. Model_wh_effDet

    • kaggle.com
    Updated Jun 13, 2020
    Cite
    JamesCho (2020). Model_wh_effDet [Dataset]. https://www.kaggle.com/datasets/lov4jin/model-wh-effdet
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 13, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    JamesCho
    License

    http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html

    Description

    Context

    Efficient Det model save

    Content

    What's inside is more than just rows and columns. Make it easy for others to get started by describing how you acquired the data and what time period it represents, too.

    Acknowledgements

    We wouldn't be here without the help of others. If you owe any attributions or thanks, include them here along with any citations of past research.

    Inspiration

    Your data will be in front of the world's largest data science community. What questions do you want to see answered?

  3. Code underlying the publication: Efficient Sequential Neural Network based...

    • data.4tu.nl
    zip
    Cite
    Yongqi Dong; Sandeep Patil; Haneen Farah; Hans Hellendoorn, Code underlying the publication: Efficient Sequential Neural Network based on Spatial-Temporal Attention and Linear LSTM for Robust Lane Detection Using Multi-Frame Images [Dataset]. http://doi.org/10.4121/4619cab6-ae4a-40d5-af77-582a77f3d821.v2
    Explore at:
    Available download formats: zip
    Dataset provided by
    4TU.ResearchData
    Authors
    Yongqi Dong; Sandeep Patil; Haneen Farah; Hans Hellendoorn
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Applied and Technical Sciences (TTW), a subdomain of the Dutch Institute for Scientific Research (NWO)
    Description

    This is the source code of the paper:

    Patil, S.#, Dong, Y.#,*, Farah, H., & Hellendoorn, J. (2024). Efficient Sequential Neural Network Based on Spatial-Temporal Attention and Linear LSTM for Robust Lane Detection Using Multi-Frame Images (Under Review)


    How to use the code

    (1) Download tvtLANE Dataset:

    You can download this **dataset** from the link in the '**Dataset-Description-v1.2.pdf**' file.

    BaiduYun: https://pan.baidu.com/s/1lE2CjuFa9OQwLIbi-OomTQ (passcode: tf9x)

    Or

    Google Drive: https://drive.google.com/drive/folders/1MI5gMDspzuV44lfwzpK6PX0vKuOHUbb_?usp=sharing

    The **pretrained model** is also provided in the "/model" folder, named 98.48263448061671_RAd_lr0.001_batch70_FocalLoss_poly_alpha0.25_gamma2.0_Attention_UNet_LSTM.pth.
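
    A minimal sketch of loading this checkpoint, assuming PyTorch is installed; the model class name and whether the file stores a full state_dict or a wrapping dict are assumptions, so check the repository's test.py for the exact loading code:

    ```python
    import torch

    # Path to the provided pretrained checkpoint (from the "/model" folder).
    ckpt_path = ("./model/98.48263448061671_RAd_lr0.001_batch70_FocalLoss_poly_"
                 "alpha0.25_gamma2.0_Attention_UNet_LSTM.pth")

    checkpoint = torch.load(ckpt_path, map_location="cpu")

    # Depending on how the checkpoint was saved, it is either a state_dict or a dict
    # wrapping one; the model class itself comes from this repository's code.
    # net = SCNN_UNet_Attention(...)                  # hypothetical instantiation
    # net.load_state_dict(checkpoint)                 # or checkpoint["state_dict"]
    ```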

    (2) Set up

    ## Requirements

    PyTorch 0.4.0

    Python 3.9

    CUDA 8.0

    ## Preparation

    ### Data Preparation

    The dataset contains 19383 continuous driving-scene image sequences, of which 39460 frames are labeled. The image size is 128×256.

    The training set contains 19096 image sequences. The 13th and 20th frames of each sequence are labeled, and the images and their labels are in "clips_13(_truth)" and "clips_20(_truth)". All images are contained in "clips_all".

    Sequences in the "0313", "0531", and "0601" subfolders are built from the TuSimple lane detection dataset and contain scenes from American highways. The four "weadd" folders add images of rural roads in China.

    The test set has two parts: Testset #1 (270 sequences, with the 13th and 20th image of each labeled) for testing the overall performance of algorithms, and Testset #2 (12 kinds of hard scenes, all frames labeled) for testing the robustness of algorithms.

    To feed the data, we provide three index files (train_index, val_index, and test_index). Each row of an index represents one sequence and its label: the first 5 entries are the input images and the last is the ground truth (corresponding to the last frame of the 5 inputs).

    The dataset must be placed in a folder matching the locations in the index files (i.e., the txt files in "./data/"). The index files should also be modified according to your local settings. If you want to use your own data, please follow the format of our dataset and indexes.
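
    The following is a minimal sketch of reading such an index file, assuming each row is a whitespace-separated list of five input frame paths followed by one ground-truth path; the helper name is illustrative, not part of the repository:

    ```python
    from pathlib import Path

    def read_index(index_path):
        """Yield ([5 input frame paths], ground_truth_path) tuples from an index file."""
        for line in Path(index_path).read_text().splitlines():
            parts = line.split()
            if len(parts) < 6:
                continue  # skip blank or malformed rows
            yield parts[:5], parts[5]

    # Example usage with the provided training index:
    for frames, label in read_index("./data/train_index.txt"):
        print(frames, "->", label)
        break
    ```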

    (3) Training

    Before training, change the paths in config_Att.py, including "train_path" (for train_index.txt), "val_path" (for val_index.txt), and "pretrained_path", to suit your environment.

    Choose a model (UNet_ConvLSTM | SCNN_UNet_ConvLSTM | SCNN_UNet_Attention); UNet-ConvLSTM is already the default (default='UNet-ConvLSTM'), so no change is needed to use it. Then adjust arguments such as the class weights (currently set to fit the tvtLANE dataset), batch size, learning rate, and epochs in config_Att.py. Other settings, e.g., the optimizer, can also be adjusted; check the code for details.

    Then simply run train.py. If it runs successfully, model files will be saved in the "./model" folder, and the validation results will be printed.
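
    A minimal sketch of the kind of settings config_Att.py exposes; only the option names mentioned above come from this description, and every value below is an example placeholder rather than the repository's default:

    ```python
    # Illustrative settings only; edit config_Att.py itself, not this sketch.
    train_path = "./data/train_index.txt"       # index of training sequences
    val_path = "./data/val_index.txt"           # index of validation sequences
    pretrained_path = "./model/pretrained.pth"  # optional warm-start checkpoint

    model = "UNet-ConvLSTM"       # or SCNN_UNet_ConvLSTM | SCNN_UNet_Attention
    class_weights = [0.02, 1.02]  # example values; defaults fit tvtLANE
    batch_size = 8                # example value
    learning_rate = 0.001         # example value
    epochs = 30                   # example value
    ```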

    (4) Test

    To evaluate a trained model, select one of the provided models or put your own model into the "./model/" folder and set "pretrained_path" in test.py according to your local setup; then point "test_path" to the location of test_index.txt and "save_path" to where the results should be saved.

    Choose the model to be evaluated, then simply run test.py.

    Quantitative evaluations of Accuracy, Precision, Recall, and F1 will be printed, and the segmented lane detection results will be saved as images in the "./save/" folder.
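
    For reference, a minimal sketch of how such pixel-wise metrics are commonly computed for binary lane masks; this is a generic illustration, not the repository's test.py:

    ```python
    import numpy as np

    def segmentation_metrics(pred, target):
        """Compute accuracy, precision, recall, and F1 for binary masks (0/1 arrays)."""
        tp = np.sum((pred == 1) & (target == 1))
        fp = np.sum((pred == 1) & (target == 0))
        fn = np.sum((pred == 0) & (target == 1))
        tn = np.sum((pred == 0) & (target == 0))
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return accuracy, precision, recall, f1
    ```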

    Authors

    Yongqi Dong (yongqidong369@gmail.com), Sandeep Patil, Haneen Farah, Hans Hellendoorn


  4. Jingju a cappella singing voice test dataset for "An efficient deep learning...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Nov 5, 2020
    Cite
    Xavier Serra (2020). Jingju a cappella singing voice test dataset for "An efficient deep learning model for musical onset detection" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_1341070
    Explore at:
    Dataset updated
    Nov 5, 2020
    Dataset provided by
    Xavier Serra
    Rong Gong
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Jingju a cappella singing voice test dataset used in the paper "An efficient deep learning model for musical onset detection".

    Arxiv paper link: https://arxiv.org/abs/1806.06773

    Supplementary information and code for the paper: https://github.com/ronggong/musical-onset-efficient

    Content:

    ismir_2018_dataset_for_reviewing.zip: audio, syllable boundary and label annotation

    jingju dataset train test split filenames.xlsx: train and test split filename list

    Citation:

    @article{gong2018towards,
      title={Towards an efficient deep learning model for musical onset detection},
      author={Gong, Rong and Serra, Xavier},
      journal={arXiv preprint arXiv:1806.06773},
      year={2018}
    }

    Contact:

    Rong Gong: rong.gong@upf.edu

  5. Data from: TacoDepth: Towards Efficient Radar-Camera Depth Estimation with...

    • researchdata.ntu.edu.sg
    Updated Apr 29, 2025
    Cite
    DR-NTU (Data) (2025). TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion [Dataset]. http://doi.org/10.21979/N9/Q57ZYR
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    DR-NTU (Data)
    License

    https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/Q57ZYR

    Dataset funded by
    Industry Collaboration Projects (IAF-ICP) Funding Initiative: RIE2020
    Description

    Radar-Camera depth estimation aims to predict dense and accurate metric depth by fusing input images and Radar data. Model efficiency is crucial for this task in pursuit of real-time processing on autonomous vehicles and robotic platforms. However, due to the sparsity of Radar returns, the prevailing methods adopt multi-stage frameworks with intermediate quasi-dense depth, which are time-consuming and not robust. To address these challenges, we propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion. Specifically, the graph-based Radar structure extractor and the pyramid-based Radar fusion module are designed to capture and integrate the graph structures of Radar point clouds, delivering superior model efficiency and robustness without relying on the intermediate depth results. Moreover, TacoDepth can be flexible for different inference modes, providing a better balance of speed and accuracy. Extensive experiments are conducted to demonstrate the efficacy of our method. Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%. Our work provides a new perspective on efficient Radar-Camera depth estimation.

  6. Description of Datasets.

    • plos.figshare.com
    xls
    Updated May 13, 2025
    Cite
    Ahmed Abdelaziz; Alia Nabil Mahmoud; Vitor Santos; Mario M. Freire (2025). Description of Datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0319562.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    May 13, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Ahmed Abdelaziz; Alia Nabil Mahmoud; Vitor Santos; Mario M. Freire
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The increasing importance of deep learning in software development has greatly improved software quality by enabling the efficient identification of defects, a persistent challenge throughout the software development lifecycle. This study seeks to determine the most effective model for detecting defects in software projects. It introduces an intelligent approach that combines Temporal Convolutional Networks (TCN) with Antlion Optimization (ALO). TCN is employed for defect detection, while ALO optimizes the network’s weights. Two models are proposed to address the research problem: (a) a basic TCN without parameter optimization and (b) a hybrid model integrating TCN with ALO. The findings demonstrate that the hybrid model significantly outperforms the basic TCN in multiple performance metrics, including area under the curve, sensitivity, specificity, accuracy, and error rate. Moreover, the hybrid model surpasses state-of-the-art methods, such as Convolutional Neural Networks, Gated Recurrent Units, and Bidirectional Long Short-Term Memory, with accuracy improvements of 21.8%, 19.6%, and 31.3%, respectively. Additionally, the proposed model achieves a 13.6% higher area under the curve across all datasets compared to the Deep Forest method. These results confirm the effectiveness of the proposed hybrid model in accurately detecting defects across diverse software projects.
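
    For orientation, a minimal, generic sketch of one TCN building block (a dilated causal 1-D convolution with a residual connection); it only illustrates the kind of network the abstract refers to, is not the authors' implementation, and omits the Antlion Optimization step:

    ```python
    import torch.nn as nn
    import torch.nn.functional as F

    class TCNBlock(nn.Module):
        def __init__(self, channels, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation  # left-pad to keep the convolution causal
            self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.relu = nn.ReLU()

        def forward(self, x):                                   # x: (batch, channels, time)
            out = self.relu(self.conv1(F.pad(x, (self.pad, 0))))
            out = self.relu(self.conv2(F.pad(out, (self.pad, 0))))
            return self.relu(out + x)                           # residual connection
    ```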

