rasdani/crux-eval_math-eval-logs dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
To achieve accurate assignment of peptide sequences to observed fragmentation spectra, a shotgun proteomics database search tool must make good use of the very high-resolution information produced by state-of-the-art mass spectrometers. However, making use of this information while also ensuring that the search engine’s scores are well calibrated, that is, that the score assigned to one spectrum can be meaningfully compared to the score assigned to a different spectrum, has proven to be challenging. Here we describe a database search score function, the “residue evidence” (res-ev) score, that achieves both of these goals simultaneously. We also demonstrate how to combine calibrated res-ev scores with calibrated XCorr scores to produce a “combined p-value” score function. We provide a benchmark consisting of four mass spectrometry data sets, which we use to compare the combined p-value to the score functions used by several existing search engines. Our results suggest that the combined p-value achieves state-of-the-art performance, generally outperforming MS Amanda and Morpheus and performing comparably to MS-GF+. The res-ev and combined p-value score functions are freely available as part of the Tide search engine in the Crux mass spectrometry toolkit (http://crux.ms).
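For intuition about the combination step: if two independent score functions are each well calibrated, their p-values are uniform on (0, 1), and the product of two independent uniforms has CDF x(1 − ln x). The sketch below applies that standard product rule to an XCorr p-value and a res-ev p-value; whether Tide's combined p-value uses exactly this rule is an assumption, so treat this as an illustration rather than the tool's implementation.

```python
import math

def combined_p_value(p_xcorr: float, p_resev: float) -> float:
    """Combine two independent, calibrated p-values via the product rule.

    If P1 and P2 are independent Uniform(0,1), then for x = p1 * p2,
    P(P1 * P2 <= x) = x * (1 - ln x). Whether this is the exact rule
    used by Tide's combined p-value is an assumption.
    """
    x = p_xcorr * p_resev
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return x * (1.0 - math.log(x))

# Example: two moderately significant calibrated scores.
print(combined_p_value(0.01, 0.02))  # ~0.0019
```

Note that the combined value is less extreme than the raw product (0.0002), because the product of two uniforms concentrates near zero even under the null.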
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
CRUXEval: Code Reasoning, Understanding, and Execution Evaluation
🏠 Home Page • 💻 GitHub Repository • 🏆 Leaderboard • 🔎 Sample Explorer
CRUXEval (Code Reasoning, Understanding, and eXecution Evaluation) is a benchmark of 800 Python functions and input-output pairs. The benchmark consists of two tasks, CRUXEval-I (input prediction) and CRUXEval-O (output prediction). The benchmark was constructed as follows: first, we use Code Llama 34B to generate a large set of… See the full description on the dataset page: https://huggingface.co/datasets/cruxeval-org/cruxeval.
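To inspect the benchmark locally, the records can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the split layout and column names are assumptions based on the description above, so check the returned object for the actual schema.

```python
from datasets import load_dataset

# Load the 800 CRUXEval records from the Hugging Face Hub.
ds = load_dataset("cruxeval-org/cruxeval")

# Split and column names are assumptions; print the DatasetDict to see
# the actual layout (e.g. which fields hold the function source and the
# input/output pairs used by CRUXEval-I and CRUXEval-O).
print(ds)
first_split = next(iter(ds.values()))
print(len(first_split), first_split.column_names)
```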
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Instructions are available here:
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains the digitized treatments in Plazi based on the original journal article Piccoli, Costanza, Belluardo, Francesco, Lobon-Rovira, Javier, Oliveira Alves, Ivo, Rasoazanany, Malalatiana, Andreone, Franco, Rosa, Goncalo M., Crottini, Angelica (2023): Another step through the crux: a new microendemic rock-dwelling Paroedura (Squamata, Gekkonidae) from south-central Madagascar. ZooKeys 1181: 125-154, DOI: http://dx.doi.org/10.3897/zookeys.1181.108134, URL: http://dx.doi.org/10.3897/zookeys.1181.108134
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Executable file for running Q-ranker from the Crux toolkit.
https://project-open-data.cio.gov/unknown-license/
CASTLE receives Time and Attendance (T&A) data from CruX; CASTLE is used by employees to enter their T&A information; CruX is used to manage schedules, T&A data, and LDR information.
CRUX-CO1: Filtered CRUX-CO1 reference database (CO1.zip)
CRUX-FITS: Filtered CRUX-FITS (fungal ITS) reference database (FITS.zip)
CRUX-PITS: Filtered CRUX-PITS (plant ITS2) reference database (PITS.zip)
CRUX-16S: Filtered CRUX-16S reference database (16S.zip)
CRUX_18S: Filtered CRUX_18S reference database (18S.zip)
CRUX_12S: Filtered CRUX_12S reference database (12S.zip)
CRUX_V8-9_18S: Filtered CRUX_V8-9_18S reference database (V8-9_18S.zip)
CRUX_V4_18S: Filtered CRUX_V4_18S reference database (V4_18S.zip)
Anacapa: https://github.com/limey-bean/Anacapa as of 5-10-2019
CRUX_Creating-Reference-libraries-Using-eXisting-tools: https://github.com/limey-bean/CRUX_Creating-Reference-libraries-Using-eXisting-tools as of 5-10-2019
R-P-H: CRUX length-restricted Porter & Hajibabaei CO1 database
R-mitofish: CRUX length-restricted mitofish reference database
R-Midori: CRUX length-restricted Midori reference database
R-CO-ARBitrator: CRUX length-restricted CO-ARBitrator reference database
R_UNITE: CRUX length-restricted UNITE reference database
R-WITS: CRUX length r...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ᵃTarget: database of 236 neuropeptide sequences; K10³: k-permuted decoy database of 236,000 peptides; K10⁴: k-permuted decoy database of 2,360,000 peptides; K10⁵: k-permuted decoy database of 23,600,000 peptides; K10⁶: k-permuted decoy database of 236,000,000 peptides.
ᵇ# ions: permutation p-values computed for the number of matched b- and y-ions. XCorr: permutation p-values computed from the XCorr scores of the matches. Sp: permutation p-values computed from the Sp scores of the matches. ΔCn: permutation p-values computed using the X! Tandem ΔCn.
ᶜSignificance threshold (t) for matches to be considered significant at p-value < 1×10⁻ᵗ.
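As background for the footnotes above, a k-permuted decoy p-value is an empirical tail probability: score the spectrum against the decoy peptides and count how many score at least as well as the target match. A minimal sketch under that reading (the +1 smoothing and the toy decoy scores are assumptions, not the study's exact procedure):

```python
import random

def permutation_p_value(target_score: float, decoy_scores: list[float]) -> float:
    """Empirical p-value from a k-permuted decoy database.

    Counts decoys scoring at least as well as the target match; the +1
    in numerator and denominator is the usual smoothing so the estimate
    is never exactly zero (an assumption, not necessarily this study's rule).
    """
    n_better = sum(1 for s in decoy_scores if s >= target_score)
    return (n_better + 1) / (len(decoy_scores) + 1)

# Toy example with random XCorr-like decoy scores, sized like the K10^3 set.
random.seed(0)
decoys = [random.gauss(1.0, 0.3) for _ in range(236_000)]
print(permutation_p_value(2.1, decoys))
```

The granularity of such a p-value is bounded below by 1/(N+1), which is why the study scales the decoy database from 236,000 up to 236,000,000 peptides.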
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ᵃSignificance threshold (t) for matches to be considered significant at an E-value or p-value < 1×10⁻ᵗ (t = 0 to ≥ 6).
ᵇCumulative number of peptides with an E-value or p-value < 1×10⁻².
ᶜNumber of peptides missed by the program.
ᵈNumber of peptides with an incorrect post-translational modification assignment.
Peptide detection significance levels using experimental spectra of the 103 peptides, with and without post-translational modifications (PTMs), searched against a standard target database across database search programs (OMSSA, X! Tandem, and Crux).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets used to compare SpeCollate’s performance against Crux and MSFragger. (XLSX)