4 datasets found
  1. The effects of different modules in YOLOv7-Tiny for the cars detection dataset and traffic detection dataset

    • plos.figshare.com
    xls
    Updated Apr 24, 2024
    Cite: Cuiying Yu; Lei Zhou; Bushi Liu; Yue Zhao; Pengcheng Zhu; Liqing Chen; Bolun Chen (2024). The effects of different modules in YOLOv7-Tiny for the cars detection dataset and traffic detection dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0299959.t005
    Dataset provided by: PLOS ONE
    Authors: Cuiying Yu; Lei Zhou; Bushi Liu; Yue Zhao; Pengcheng Zhu; Liqing Chen; Bolun Chen
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)

    Description: The effects of different modules in YOLOv7-Tiny for the cars detection dataset and traffic detection dataset.
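
    For readers who want to inspect this ablation table, here is a minimal sketch of loading it with pandas, assuming the xls file has already been downloaded from the DOI landing page above; the local filename is a placeholder, not a path published by the authors.

      import pandas as pd

      # Placeholder filename: download the xls from the DOI landing page first
      # (http://doi.org/10.1371/journal.pone.0299959.t005). Reading legacy .xls
      # files may additionally require the xlrd package.
      df = pd.read_excel("pone.0299959.t005.xls")
      print(df.head())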

  2. Comparison of ablation experiment results of models on hazardous chemical vehicle dataset

    • plos.figshare.com
    xls
    Updated Apr 24, 2024
    Cite: Cuiying Yu; Lei Zhou; Bushi Liu; Yue Zhao; Pengcheng Zhu; Liqing Chen; Bolun Chen (2024). Comparison of ablation experiment results of models on hazardous chemical vehicle dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0299959.t004
    Dataset provided by: PLOS ONE
    Authors: Cuiying Yu; Lei Zhou; Bushi Liu; Yue Zhao; Pengcheng Zhu; Liqing Chen; Bolun Chen
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)

    Description: Comparison of ablation experiment results of models on hazardous chemical vehicle dataset.

  3. datasheet1_Artificial Intelligence for Prognostic Scores in Oncology: a Benchmarking Study.zip

    • frontiersin.figshare.com
    zip
    Updated May 31, 2023
    Cite: Hugo Loureiro; Tim Becker; Anna Bauer-Mehren; Narges Ahmidi; Janick Weberpals (2023). datasheet1_Artificial Intelligence for Prognostic Scores in Oncology: a Benchmarking Study.zip [Dataset]. http://doi.org/10.3389/frai.2021.625573.s001
    Dataset provided by: Frontiers
    Authors: Hugo Loureiro; Tim Becker; Anna Bauer-Mehren; Narges Ahmidi; Janick Weberpals
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)

    Description:

    Introduction: Prognostic scores are important tools in oncology that facilitate clinical decision-making based on patient characteristics. To date, classic survival analysis using Cox proportional hazards regression has been employed to develop these prognostic scores. With the advance of analytical models, this study aimed to determine whether more complex machine-learning algorithms could outperform classical survival analysis methods.

    Methods: In this benchmarking study, two datasets were used to develop and compare different prognostic models for overall survival in pan-cancer populations: a nationwide EHR-derived de-identified database for training and in-sample testing, and the OAK (phase III clinical trial) dataset for out-of-sample testing. The real-world database comprised 136K first-line treated cancer patients across multiple cancer types and was split into 90% training and 10% testing datasets. The OAK dataset comprised 1,187 patients diagnosed with non-small cell lung cancer. To assess the effect of the number of covariates on prognostic performance, we formed three feature sets with 27, 44, and 88 covariates. In terms of methods, we benchmarked ROPRO, a prognostic score based on the Cox model, against eight complex machine-learning models: regularized Cox, Random Survival Forests (RSF), Gradient Boosting (GB), DeepSurv (DS), Autoencoder (AE), and Super Learner (SL). The C-index was used as the performance metric to compare the models.

    Results: For in-sample testing on the real-world database, the resulting C-index [95% CI] values for RSF 0.720 [0.716, 0.725], GB 0.722 [0.718, 0.727], DS 0.721 [0.717, 0.726] and, lastly, SL 0.723 [0.718, 0.728] showed significantly better performance than ROPRO 0.701 [0.696, 0.706]. Similar results were obtained across all feature sets. For the out-of-sample validation on OAK, however, the stronger performance of the more complex models was no longer apparent. Likewise, increasing the number of prognostic covariates did not improve model performance.

    Discussion: The stronger performance of the more complex models did not generalize when applied to an out-of-sample dataset. We hypothesize that future research may benefit from adding multimodal data to exploit the advantages of the more complex models.
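
    The comparison described above hinges on the concordance index (C-index): each survival model produces a per-patient risk score, and the C-index is the fraction of comparable patient pairs in which the model assigns the higher risk to the patient whose event occurs earlier. The following is a minimal sketch of such a benchmark using scikit-survival; the synthetic data, model choices, and hyperparameters are illustrative assumptions standing in for the study's non-public pipeline, not the authors' actual code.

      import numpy as np
      from sksurv.util import Surv
      from sksurv.linear_model import CoxPHSurvivalAnalysis
      from sksurv.ensemble import RandomSurvivalForest
      from sksurv.metrics import concordance_index_censored

      rng = np.random.default_rng(0)
      n, p = 500, 10
      X = rng.normal(size=(n, p))

      # Exponential event times whose rate depends on a linear predictor,
      # with independent exponential censoring.
      linear_predictor = X[:, 0] + 0.5 * X[:, 1]
      event_time = rng.exponential(scale=np.exp(-linear_predictor))
      censor_time = rng.exponential(scale=2.0 * np.median(event_time), size=n)
      event = event_time <= censor_time
      observed_time = np.minimum(event_time, censor_time)
      y = Surv.from_arrays(event=event, time=observed_time)

      # 90%/10% train/test split, mirroring the study's in-sample setup.
      train, test = slice(0, 450), slice(450, None)

      for name, model in [
          ("Cox", CoxPHSurvivalAnalysis()),
          ("RSF", RandomSurvivalForest(n_estimators=100, random_state=0)),
      ]:
          model.fit(X[train], y[train])
          # predict() returns a risk score: higher means earlier expected event.
          cindex = concordance_index_censored(
              y[test]["event"], y[test]["time"], model.predict(X[test])
          )[0]
          print(f"{name}: C-index = {cindex:.3f}")

    On data like this, where the true hazard is log-linear in the covariates, the Cox model should match or beat the forest, which is consistent in spirit with the abstract's finding that added model complexity does not automatically generalize.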

  4. table2_Artificial Intelligence for Prognostic Scores in Oncology: a Benchmarking Study.xlsx

    • frontiersin.figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite: Hugo Loureiro; Tim Becker; Anna Bauer-Mehren; Narges Ahmidi; Janick Weberpals (2023). table2_Artificial Intelligence for Prognostic Scores in Oncology: a Benchmarking Study.xlsx [Dataset]. http://doi.org/10.3389/frai.2021.625573.s003
    Dataset provided by: Frontiers
    Authors: Hugo Loureiro; Tim Becker; Anna Bauer-Mehren; Narges Ahmidi; Janick Weberpals
    License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)

    Description:

    Introduction: Prognostic scores are important tools in oncology that facilitate clinical decision-making based on patient characteristics. To date, classic survival analysis using Cox proportional hazards regression has been employed to develop these prognostic scores. With the advance of analytical models, this study aimed to determine whether more complex machine-learning algorithms could outperform classical survival analysis methods.

    Methods: In this benchmarking study, two datasets were used to develop and compare different prognostic models for overall survival in pan-cancer populations: a nationwide EHR-derived de-identified database for training and in-sample testing, and the OAK (phase III clinical trial) dataset for out-of-sample testing. The real-world database comprised 136K first-line treated cancer patients across multiple cancer types and was split into 90% training and 10% testing datasets. The OAK dataset comprised 1,187 patients diagnosed with non-small cell lung cancer. To assess the effect of the number of covariates on prognostic performance, we formed three feature sets with 27, 44, and 88 covariates. In terms of methods, we benchmarked ROPRO, a prognostic score based on the Cox model, against eight complex machine-learning models: regularized Cox, Random Survival Forests (RSF), Gradient Boosting (GB), DeepSurv (DS), Autoencoder (AE), and Super Learner (SL). The C-index was used as the performance metric to compare the models.

    Results: For in-sample testing on the real-world database, the resulting C-index [95% CI] values for RSF 0.720 [0.716, 0.725], GB 0.722 [0.718, 0.727], DS 0.721 [0.717, 0.726] and, lastly, SL 0.723 [0.718, 0.728] showed significantly better performance than ROPRO 0.701 [0.696, 0.706]. Similar results were obtained across all feature sets. For the out-of-sample validation on OAK, however, the stronger performance of the more complex models was no longer apparent. Likewise, increasing the number of prognostic covariates did not improve model performance.

    Discussion: The stronger performance of the more complex models did not generalize when applied to an out-of-sample dataset. We hypothesize that future research may benefit from adding multimodal data to exploit the advantages of the more complex models.
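
    The 95% confidence intervals reported alongside each C-index can be obtained in several ways; a common, assumption-light choice is a percentile bootstrap over the test set. The helper below sketches that idea and is not the authors' code; the event, time, and risk_scores arguments are assumed to be test-set arrays and predictions from an already-fitted model, as in the sketch under dataset 3.

      import numpy as np
      from sksurv.metrics import concordance_index_censored

      def bootstrap_cindex_ci(event, time, risk_scores,
                              n_boot=1000, alpha=0.05, seed=0):
          """Percentile-bootstrap confidence interval for the censored C-index."""
          rng = np.random.default_rng(seed)
          n = len(time)
          stats = []
          for _ in range(n_boot):
              # Resample test cases with replacement.
              idx = rng.integers(0, n, size=n)
              # Skip degenerate resamples with no observed events, which would
              # leave no comparable pairs for the concordance computation.
              if not event[idx].any():
                  continue
              stats.append(
                  concordance_index_censored(event[idx], time[idx],
                                             risk_scores[idx])[0]
              )
          lower, upper = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
          return lower, upper

    For example, bootstrap_cindex_ci(y[test]["event"], y[test]["time"], model.predict(X[test])) would yield an interval comparable in spirit to the [0.716, 0.725]-style ranges quoted in the abstract.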

