100+ datasets found
  1. Leave-one-out cross-validation results.

    • figshare.com
    xls
    Updated May 30, 2023
    Cite
    Enrico Glaab; Jaume Bacardit; Jonathan M. Garibaldi; Natalio Krasnogor (2023). Leave-one-out cross-validation results. [Dataset]. http://doi.org/10.1371/journal.pone.0039932.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Enrico Glaab; Jaume Bacardit; Jonathan M. Garibaldi; Natalio Krasnogor
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Leave-one-out cross-validation results obtained with BioHEL, SVM, RF and PAM on the three microarray datasets using three feature selection methods (CFS, PLSS, RFS); AVG = average accuracy, STDDEV = standard deviation; the highest accuracies achieved with BioHEL and the best alternative are both shown in bold type for each dataset.
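
    The leave-one-out protocol behind tables like this can be sketched in a few lines of plain Python. The 1-nearest-neighbour classifier below is only a toy stand-in for BioHEL/SVM/RF/PAM, and the four samples are made up for illustration:

```python
import statistics

def loocv_accuracy(X, y, train_fn, predict_fn):
    """Leave-one-out CV: each sample is held out once as the test set."""
    hits = []
    for i in range(len(X)):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        model = train_fn(X_train, y_train)
        hits.append(1.0 if predict_fn(model, X[i]) == y[i] else 0.0)
    # mean and spread of the per-fold hit indicators
    return statistics.mean(hits), statistics.pstdev(hits)

# Toy 1-nearest-neighbour "classifier" standing in for the real learners
def train_1nn(X, y):
    return list(zip(X, y))

def predict_1nn(model, x):
    return min(model, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

X = [(0.0, 0.1), (0.2, 0.0), (1.0, 0.9), (0.9, 1.1)]  # made-up 2-D samples
y = ["A", "A", "B", "B"]
avg, stddev = loocv_accuracy(X, y, train_1nn, predict_1nn)
```

    Each of the len(X) folds retrains from scratch on the remaining samples, which is why LOOCV is attractive for small microarray cohorts but expensive for large ones.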

  2. Logistic regression leave-one-out cross-validation classification rate.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Hui Wang; Chen Chen; Hsieh Fushing (2023). Logistic regression leave-one-out cross-validation classification rate. [Dataset]. http://doi.org/10.1371/journal.pone.0045502.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Hui Wang; Chen Chen; Hsieh Fushing
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Logistic regression leave-one-out cross-validation classification rate.
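
    The classification rate reported here is simply the fraction of held-out samples that a model refitted on the remaining samples predicts correctly. A minimal sketch with a one-dimensional logistic regression fitted by plain batch gradient descent (toy data, not the study's):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logreg(xs, ys, lr=0.5, steps=2000):
    """1-D logistic regression fitted by batch gradient descent."""
    w = b = 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def loocv_classification_rate(xs, ys):
    """Refit on all-but-one, test on the held-out point, average the hits."""
    correct = 0
    for i in range(len(xs)):
        w, b = fit_logreg(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        correct += int((sigmoid(w * xs[i] + b) >= 0.5) == bool(ys[i]))
    return correct / len(xs)

xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]  # made-up 1-D feature
ys = [0, 0, 0, 1, 1, 1]
rate = loocv_classification_rate(xs, ys)
```

    On cleanly separable toy data like this the rate is 1.0; on real data it estimates out-of-sample accuracy at the cost of refitting the model once per sample.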

  3. The training dataset and the leave-one-out validation results on Baskerville...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Dong Wang; Ming Lu; Jing Miao; Tingting Li; Edwin Wang; Qinghua Cui (2023). The training dataset and the leave-one-out validation results on Baskerville et al.' data [14]. [Dataset]. http://doi.org/10.1371/journal.pone.0004421.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Dong Wang; Ming Lu; Jing Miao; Tingting Li; Edwin Wang; Qinghua Cui
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    *Once again, one sample is held out as the test sample and the remaining 28 samples form the training dataset. The four features (columns “1”, “2”, “3”, and “4”) of each miRNA are calculated based on the genomic coordinates of the miRNA, the miRNA-hosting intron, and the host gene. ER represents the experimental results and PR represents the prediction results. The symbol “+” means high co-expression and the symbol “−” means low co-expression.

  4. Appendix of "Kriging Model Averaging Based on Leave-One-Out Cross-Validation...

    • scidb.cn
    Updated Jul 8, 2024
    Cite
    Ziheng Feng; Xianpeng Zong; Tianfa Xie; Xinyu Zhang (2024). Appendix of "Kriging Model Averaging Based on Leave-One-Out Cross-Validation Method" [Dataset]. http://doi.org/10.57760/sciencedb.j00207.00009
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    Science Data Bank
    Authors
    Ziheng Feng; Xianpeng Zong; Tianfa Xie; Xinyu Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This appendix contains all supplementary materials for the accepted manuscript JSSC-2023-0150, including detailed proofs of all theorems presented in the paper as well as additional simulation results.

  5. LEAVE-ONE-OUT ELECTROMYOGRAPHY (EMG) DATA SET OF 4 GESTURES PERFORMED WITH...

    • ieee-dataport.org
    Updated Aug 14, 2023
    Cite
    Bolivar Nunez (2023). LEAVE-ONE-OUT ELECTROMYOGRAPHY (EMG) DATA SET OF 4 GESTURES PERFORMED WITH THE RIGHT HAND [Dataset]. https://ieee-dataport.org/documents/leave-one-out-electromyography-emg-data-set-4-gestures-performed-right-hand
    Explore at:
    Dataset updated
    Aug 14, 2023
    Authors
    Bolivar Nunez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A new design and implementation of a control system for an anthropomorphic robotic hand has been developed for the Bioinformatics and Autonomous Learning Laboratory (BALL) at ESPOL. Myoelectric signals were acquired using a bioelectric data acquisition board (CYTON BOARD), using six of the available eight channels. These signals had an amplitude of 200 µV and were sampled at 250 Hz.

  6. Data from: TipDatingBeast: an R package to assist the implementation of...

    • search.dataone.org
    • data.niaid.nih.gov
    • +2more
    Updated Jun 9, 2025
    Cite
    Adrien Rieux; Camilo E. Khatchikian (2025). TipDatingBeast: an R package to assist the implementation of phylogenetic tip-dating tests using BEAST [Dataset]. http://doi.org/10.5061/dryad.43q71
    Explore at:
    Dataset updated
    Jun 9, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Adrien Rieux; Camilo E. Khatchikian
    Time period covered
    Jan 1, 2016
    Description

    Molecular tip-dating of phylogenetic trees is a growing discipline that uses DNA sequences sampled at different points in time to co-estimate the timing of evolutionary events with rates of molecular evolution. In this context, BEAST, a program for Bayesian analysis of molecular sequences, is the most widely used phylogenetic tool. Here, we introduce TipDatingBeast, an R package built to assist the implementation of various phylogenetic tip-dating tests using BEAST. TipDatingBeast currently contains two main functions. The first one allows preparing date-randomization analyses, which assess the temporal signal of a dataset. The second function allows performing leave-one-out analyses, which test for the consistency between independent calibration sequences and allow pinpointing those leading to potential bias. We apply those functions to an empirical dataset and supply practical guidance for results interpretation.

  7. The leave-one-out cross-validation (Jackknife test) success rates by a...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Lele Hu; Tao Huang; Xiao-Jun Liu; Yu-Dong Cai (2023). The leave-one-out cross-validation (Jackknife test) success rates by a random guess and the network-based method. [Dataset]. http://doi.org/10.1371/journal.pone.0017668.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Lele Hu; Tao Huang; Xiao-Jun Liu; Yu-Dong Cai
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The leave-one-out cross-validation (Jackknife test) success rates by a random guess and the network-based method.

  8. Leave-one-out cross-validation results of all methods on STRING.

    • figshare.com
    xls
    Updated May 31, 2023
    Cite
    Joana P. Gonçalves; Alexandre P. Francisco; Yves Moreau; Sara C. Madeira (2023). Leave-one-out cross-validation results of all methods on STRING. [Dataset]. http://doi.org/10.1371/journal.pone.0049634.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Joana P. Gonçalves; Alexandre P. Francisco; Yves Moreau; Sara C. Madeira
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of each tested prioritization method on the STRINGv8.2 network. Mean and standard deviation of four evaluation measures (AUC, MAP, and percentage of left-out genes ranked in tops 10 and 20), obtained for 10 complete leave-one-out cross-validations on the 29 disease sets using 10 distinct previously generated candidate sets. ‘SRec’: percentage of left-out genes (from the total number of seeds in the original seed sets: 620) effectively ranked, that is, yielding a ranking score larger than zero. ‘DRec’: percentage of recovered diseases among the 29 diseases with seeds (a disease is recovered if at least one of its left-out genes obtained a ranking score larger than zero). ‘SEval’: percentage of left-out genes (from the total number of seeds originally in the seed sets: 620) in the network. All evaluation measures, AUC, MAP, TOP 10 and TOP 20, were computed taking into account only the left-out genes present in each network (SEval), rather than all the genes originally in the seed sets. Parameters: HDiffusion (, ), PRank (, ).

  9. Comparison of the leave-one-out cross-validation errors and the selected 's....

    • plos.figshare.com
    xls
    Updated Jun 8, 2023
    Cite
    Yinglei Lai (2023). Comparison of the leave-one-out cross-validation errors and the selected 's. [Dataset]. http://doi.org/10.1371/journal.pone.0019754.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Yinglei Lai
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    DP represents our dynamic programming algorithm, RC and RP represent the recursive combination and recursive partition algorithms, respectively.

  10. Data from: Identifying the best approximating model in Bayesian...

    • zenodo.org
    • dataone.org
    • +2more
    application/gzip, bin
    Updated Feb 14, 2023
    Cite
    Nicolas Lartillot; Nicolas Lartillot (2023). Identifying the best approximating model in Bayesian phylogenetics: Bayes factors, cross-validation or wAIC? [Dataset]. http://doi.org/10.5061/dryad.j9kd51cfq
    Explore at:
    Available download formats: application/gzip, bin
    Dataset updated
    Feb 14, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Nicolas Lartillot; Nicolas Lartillot
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    There is still no consensus as to how to select models in Bayesian phylogenetics, and more generally in applied Bayesian statistics. Bayes factors are often presented as the method of choice, yet other approaches have been proposed, such as cross-validation or information criteria. Each of these paradigms raises specific computational challenges, but they also differ in their statistical meaning, being motivated by different objectives: either testing hypotheses or finding the best-approximating model. These alternative goals entail different compromises, and as a result, Bayes factors, cross-validation and information criteria may be valid for addressing different questions. Here, the question of Bayesian model selection is revisited, with a focus on the problem of finding the best-approximating model. Several model selection approaches were re-implemented, numerically assessed and compared: Bayes factors, cross-validation (CV), in its different forms (k-fold or leave-one-out), and the widely applicable information criterion (wAIC), which is asymptotically equivalent to leave-one-out cross validation (LOO-CV). Using a combination of analytical results and empirical and simulation analyses, it is shown that Bayes factors are unduly conservative. In contrast, cross-validation represents a more adequate formalism for selecting the model returning the best approximation of the data-generating process and the most accurate estimates of the parameters of interest. Among alternative CV schemes, LOO-CV and its asymptotic equivalent represented by the wAIC, stand out as the best choices, conceptually and computationally, given that both can be simultaneously computed based on standard MCMC runs under the posterior distribution.
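
    The "best-approximating model" use of LOO-CV that this abstract describes can be illustrated in a setting where the leave-one-out predictive density has a closed form: a Normal mean model with known sigma and a flat prior on the mean, where the posterior predictive for a held-out point is Normal(mean of the rest, sigma^2 * (1 + 1/(n - 1))). The sketch below (not the paper's code; data simulated with sd 1) scores three candidate sigmas by their summed LOO log predictive densities:

```python
import math
import random
import statistics

def loo_log_score(data, sigma):
    """Sum of leave-one-out log predictive densities for Normal(mu, sigma^2)
    with a flat prior on mu: predictive for the held-out point is
    Normal(mean of the rest, sigma^2 * (1 + 1/(n - 1)))."""
    n = len(data)
    total = 0.0
    for i, x in enumerate(data):
        rest = data[:i] + data[i + 1:]
        m = statistics.mean(rest)
        var = sigma ** 2 * (1.0 + 1.0 / (n - 1))
        total += -0.5 * math.log(2 * math.pi * var) - (x - m) ** 2 / (2 * var)
    return total

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # simulated with sd 1
scores = {s: loo_log_score(data, s) for s in (0.5, 1.0, 2.0)}
best = max(scores, key=scores.get)  # LOO-CV should favour sigma = 1.0
```

    Because every term is a log predictive density for a genuinely held-out point, the summed score directly estimates out-of-sample fit, which is the quantity the wAIC approximates asymptotically.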

  11. Cross-validation using the leave-one-out method of regression models in...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Vincent T. van Hees; Frida Renström; Antony Wright; Anna Gradmark; Michael Catt; Kong Y. Chen; Marie Löf; Les Bluck; Jeremy Pomeroy; Nicholas J. Wareham; Ulf Ekelund; Søren Brage; Paul W. Franks (2023). Cross-validation using the leave-one-out method of regression models in which PAEE (MJ day−1) is the dependent variable. [Dataset]. http://doi.org/10.1371/journal.pone.0022922.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Vincent T. van Hees; Frida Renström; Antony Wright; Anna Gradmark; Michael Catt; Kong Y. Chen; Marie Löf; Les Bluck; Jeremy Pomeroy; Nicholas J. Wareham; Ulf Ekelund; Søren Brage; Paul W. Franks
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    [Acc2 average acceleration (g) where non-wear time was imputed by all wear-time data at similar time of the day for that participant; RMSE: Root mean square of the error; body side, monitor attachment to dominant wrist vs. non-dominant wrist;***: p

  12. The leave-one-out cross-validation results of the SVM-based enzyme “yes or...

    • plos.figshare.com
    xls
    Updated Jun 5, 2023
    Cite
    Zheng Wang; Xue-Cheng Zhang; Mi Ha Le; Dong Xu; Gary Stacey; Jianlin Cheng (2023). The leave-one-out cross-validation results of the SVM-based enzyme “yes or no” predictions. [Dataset]. http://doi.org/10.1371/journal.pone.0017906.t008
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zheng Wang; Xue-Cheng Zhang; Mi Ha Le; Dong Xu; Gary Stacey; Jianlin Cheng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    “Ratio” stands for the percentage of correctly predicted domains in the cross-validation. The only features used are the GO term frequencies obtained from radius-one neighboring domains.

  13. The leave-one-out cross-validation results for each model in the qpure...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Sarah Song; Katia Nones; David Miller; Ivon Harliwong; Karin S. Kassahn; Mark Pinese; Marina Pajic; Anthony J. Gill; Amber L. Johns; Matthew Anderson; Oliver Holmes; Conrad Leonard; Darrin Taylor; Scott Wood; Qinying Xu; Felicity Newell; Mark J. Cowley; Jianmin Wu; Peter Wilson; Lynn Fink; Andrew V. Biankin; Nic Waddell; Sean M. Grimmond; John V. Pearson (2023). The leave-one-out cross-validation results for each model in the qpure method. [Dataset]. http://doi.org/10.1371/journal.pone.0045835.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Sarah Song; Katia Nones; David Miller; Ivon Harliwong; Karin S. Kassahn; Mark Pinese; Marina Pajic; Anthony J. Gill; Amber L. Johns; Matthew Anderson; Oliver Holmes; Conrad Leonard; Darrin Taylor; Scott Wood; Qinying Xu; Felicity Newell; Mark J. Cowley; Jianmin Wu; Peter Wilson; Lynn Fink; Andrew V. Biankin; Nic Waddell; Sean M. Grimmond; John V. Pearson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In the second column, the number in brackets is the pre-defined number of components. A smaller prediction error indicates a better prediction model.

  14. AUC in the framework of leave-one-out cross validation schema under...

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    + more versions
    Cite
    Xing Chen; Ming-Xi Liu; Qing-Hua Cui; Gui-Ying Yan (2023). AUC in the framework of leave-one-out cross validation schema under different weight parameters is calculated to confirm that miREFScan is robust to the selection of parameter values. [Dataset]. http://doi.org/10.1371/journal.pone.0043425.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Xing Chen; Ming-Xi Liu; Qing-Hua Cui; Gui-Ying Yan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    AUC in the framework of leave-one-out cross validation schema under different weight parameters is calculated to confirm that miREFScan is robust to the selection of parameter values.

  15. Evaluation results for MALLET ten-fold cross validation with leave-one-out...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Tudor Groza; Jane Hunter; Andreas Zankl (2023). Evaluation results for MALLET ten-fold cross validation with leave-one-out feature. [Dataset]. http://doi.org/10.1371/journal.pone.0055656.t006
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Tudor Groza; Jane Hunter; Andreas Zankl
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This overview shows the individual importance of each feature in the overall classification model. The large majority of features have very little impact on the model, i.e., a decrease in performance of 1–2%. The only two features that make a real difference are the Prefix and the token context (Token_Bi3), which affect the overall performance by almost 15%.

  16. Leave-one-out cross-validation results of all methods on the NCBI PPI...

    • figshare.com
    xls
    Updated May 30, 2023
    Cite
    Joana P. Gonçalves; Alexandre P. Francisco; Yves Moreau; Sara C. Madeira (2023). Leave-one-out cross-validation results of all methods on the NCBI PPI network. [Dataset]. http://doi.org/10.1371/journal.pone.0049634.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Joana P. Gonçalves; Alexandre P. Francisco; Yves Moreau; Sara C. Madeira
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of each tested prioritization method on the NCBI PPI network. Mean and standard deviation of four evaluation measures (AUC, MAP, and percentage of left-out genes ranked in tops 10 and 20), obtained for 10 complete leave-one-out cross-validations on the 29 disease sets using 10 distinct previously generated candidate sets. ‘SRec’: percentage of left-out genes (from the total number of seeds in the original seed sets: 620) effectively ranked, that is, yielding a ranking score larger than zero. ‘DRec’: percentage of recovered diseases among the 29 diseases with seeds (a disease is recovered if at least one of its left-out genes obtained a ranking score larger than zero). ‘SEval’: percentage of left-out genes (from the total number of seeds originally in the seed sets: 620) in the network. All evaluation measures, AUC, MAP, TOP 10 and TOP 20, were computed taking into account only the left-out genes present in each network (SEval), rather than all the genes originally in the seed sets. Parameters: HDiffusion (, ), PRank (, ).

  17. Latitudinal and longitudinal errors of our full leave-one-out validation...

    • figshare.com
    xls
    Updated Jun 7, 2023
    Cite
    Petros Drineas; Jamey Lewis; Peristera Paschou (2023). Latitudinal and longitudinal errors of our full leave-one-out validation experiment on 1,200 samples from 11 populations. [Dataset]. http://doi.org/10.1371/journal.pone.0011892.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Petros Drineas; Jamey Lewis; Peristera Paschou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results are reported for three panel sizes (P1:500 SNPs, P2:800 SNPs, P3:1000 SNPs). For each panel size and for each population we report the average error and the standard deviation.

  18. Leave-one-out cross validation of two SVM models.

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Nan Zhao; Bin Pang; Chi-Ren Shyu; Dmitry Korkin (2023). Leave-one-out cross validation of two SVM models. [Dataset]. http://doi.org/10.1371/journal.pone.0019554.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Nan Zhao; Bin Pang; Chi-Ren Shyu; Dmitry Korkin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ModelND is trained on PositiveH, PositiveC, and NegativeND. ModelNDNN is trained using the same positive set and a negative set that includes NegativeND together with NegativeNN. Accuracy (Acc), precision (Pre), and recall (Rec) were calculated for both kernels, RBF and Polynomial.

  19. Leave-one-out cross-validation of known FEB and GEFS+ genes.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Rosario M. Piro; Ivan Molineris; Ugo Ala; Ferdinando Di Cunto (2023). Leave-one-out cross-validation of known FEB and GEFS+ genes. [Dataset]. http://doi.org/10.1371/journal.pone.0023149.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Rosario M. Piro; Ivan Molineris; Ugo Ala; Ferdinando Di Cunto
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Absolute () and relative () rankings of the known FEB and GEFS+ genes (see Table 1) for LOOCVs using different artificial locus sizes (up to 2+1 genes). Results are shown for both gene expression datasets, HBA and GEO. Ranks among the best 10% () are shown in bold face. Ranks among the top 10 () are additionally marked by a single star () and ranks among the top 3 () by three stars ().

  20. Predictive power (PP) in leave-one-out validation of the respectively...

    • plos.figshare.com
    xls
    Updated Jun 8, 2023
    Cite
    Tanja Gärtner; Matthias Steinfath; Sandra Andorf; Jan Lisec; Rhonda C. Meyer; Thomas Altmann; Lothar Willmitzer; Joachim Selbig (2023). Predictive power (PP) in leave-one-out validation of the respectively optimal selections of predictors for the relative mid-parent heterosis regarding the two different testcross set-ups. [Dataset]. http://doi.org/10.1371/journal.pone.0005220.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tanja Gärtner; Matthias Steinfath; Sandra Andorf; Jan Lisec; Rhonda C. Meyer; Thomas Altmann; Lothar Willmitzer; Joachim Selbig
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Predictive power (PP) in leave-one-out validation of the respectively optimal selections of predictors for the relative mid-parent heterosis regarding the two different testcross set-ups.
