100+ datasets found
  1. DATA -- collection of ReadMe files

    • figshare.com
    zip
    Updated Feb 14, 2020
    Cite
    Andrea Capiluppi (2020). DATA -- collection of ReadMe files [Dataset]. http://doi.org/10.6084/m9.figshare.10280558.v2
    Available download formats: zip
    Dataset updated
    Feb 14, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Andrea Capiluppi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Zip file containing the collection of ReadMe files for the 50 projects listed. The correspondence is as follows: android-gpuimage => 101.dat, ansj_seg => 102.dat, arrow => 103.dat, atmosphere => 104.dat, autorest => 105.dat, blurkit-android => 106.dat, bytecode-viewer => 107.dat, cglib => 108.dat, dagger => 109.dat, ExpectAnim => 110.dat, graal => 111.dat, graphql-java => 112.dat, halo => 113.dat, HikariCP => 114.dat, http-request => 115.dat, interviews => 116.dat, java-learning => 117.dat, Java-WebSocket => 118.dat, jeecg-boot => 119.dat, jeesite => 120.dat, JFoenix => 121.dat, jna => 122.dat, joda-time => 123.dat, jodd => 124.dat, JsonPath => 125.dat, junit4 => 126.dat, librec => 127.dat, light-task-scheduler => 128.dat, mal => 129.dat, mall => 130.dat, mosby => 131.dat, mybatis-plus => 132.dat, nanohttpd => 133.dat, NullAway => 134.dat, parceler => 135.dat, PermissionsDispatcher => 136.dat, Phoenix => 137.dat, quasar => 138.dat, requery => 139.dat, retrofit => 140.dat, retrolambda => 141.dat, Sentinel => 142.dat, simplify => 143.dat, swagger-core => 144.dat, tcc-transaction => 145.dat, symphony => 146.dat, testcontainers-java => 147.dat, UltimateRecyclerView => 148.dat, weixin-java-tools => 149.dat, wire => 150.dat.
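As an illustrative sketch only, the project-to-file correspondence described in this record can be reconstructed programmatically (the `projects` list simply repeats the 50 names in the order given; the numbering starts at 101):

```python
# Illustrative sketch: rebuild the project-name -> NNN.dat mapping
# described above (the 50 projects are numbered consecutively from 101).
projects = [
    "android-gpuimage", "ansj_seg", "arrow", "atmosphere", "autorest",
    "blurkit-android", "bytecode-viewer", "cglib", "dagger", "ExpectAnim",
    "graal", "graphql-java", "halo", "HikariCP", "http-request",
    "interviews", "java-learning", "Java-WebSocket", "jeecg-boot", "jeesite",
    "JFoenix", "jna", "joda-time", "jodd", "JsonPath",
    "junit4", "librec", "light-task-scheduler", "mal", "mall",
    "mosby", "mybatis-plus", "nanohttpd", "NullAway", "parceler",
    "PermissionsDispatcher", "Phoenix", "quasar", "requery", "retrofit",
    "retrolambda", "Sentinel", "simplify", "swagger-core", "tcc-transaction",
    "symphony", "testcontainers-java", "UltimateRecyclerView",
    "weixin-java-tools", "wire",
]

# Map each project to its .dat file, starting at 101.dat.
mapping = {name: f"{101 + i}.dat" for i, name in enumerate(projects)}
```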

  2. README

    • figshare.com
    • springernature.figshare.com
    docx
    Updated May 15, 2023
    Cite
    Paul Brown (2023). README [Dataset]. http://doi.org/10.6084/m9.figshare.21905862.v1
    Available download formats: docx
    Dataset updated
    May 15, 2023
    Dataset provided by
    figshare
    Authors
    Paul Brown
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    README file

  3. The README file for the dataset.

    • figshare.com
    • search.datacite.org
    txt
    Updated Jan 18, 2016
    + more versions
    Cite
    Xin Ye (2016). The README file for the dataset. [Dataset]. http://doi.org/10.6084/m9.figshare.951985.v2
    Available download formats: txt
    Dataset updated
    Jan 18, 2016
    Dataset provided by
    figshare
    Authors
    Xin Ye
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The README file illustrating the dataset.

  4. readme

    • figshare.com
    txt
    Updated Jan 20, 2016
    Cite
    Jun Cheng; Wei Huang; Shuangliang Cao; Ru Yang; Wei Yang; Zhaoqiang Yun; Qianjin Feng (2016). readme [Dataset]. http://doi.org/10.6084/m9.figshare.1512426.v1
    Available download formats: txt
    Dataset updated
    Jan 20, 2016
    Dataset provided by
    figshare
    Authors
    Jun Cheng; Wei Huang; Shuangliang Cao; Ru Yang; Wei Yang; Zhaoqiang Yun; Qianjin Feng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    a readme file to describe dataset features.

  5. All NanoPUZZLES ISA-TAB-Nano datasets

    • zenodo.org
    • nanocommons.github.io
    zip
    Updated Jan 21, 2020
    + more versions
    Cite
    Richard L. Marchese Robinson; Antonio Cassano; Richard L. Marchese Robinson; Antonio Cassano (2020). All NanoPUZZLES ISA-TAB-Nano datasets [Dataset]. http://doi.org/10.5281/zenodo.35493
    Available download formats: zip
    Dataset updated
    Jan 21, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Richard L. Marchese Robinson; Antonio Cassano; Richard L. Marchese Robinson; Antonio Cassano
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This file is a ZIP archive which contains ALL publicly released ISA-TAB-Nano datasets developed within the NanoPUZZLES EU project [http://www.nanopuzzles.eu]. The (meta)data in these datasets were extracted from literature references.

    These datasets are also available via FigShare (see below). Any necessary updates, e.g. to correct errors not spotted during the review of the datasets within the NanoPUZZLES project prior to their release, will be uploaded to FigShare and the changes documented in the FigShare dataset descriptions. This Zenodo entry corresponds to the original publicly released versions of these datasets.

    Before working with these datasets, you are strongly advised to read the following text, especially the "Disclaimers".


    ISA-TAB-Nano [1,2,3] has been proposed as a nanomaterial data exchange standard. As explained in the README file contained within each dataset, and in the "Investigation Description" field of the Investigation file (which documents dataset-specific deviations), the manner in which certain data and metadata were recorded within these datasets deviates from the expectations of the generic ISA-TAB-Nano specification. Marchese Robinson et al. [3], distributed within each dataset, discusses this in more detail. However, some additional new business rules, going beyond those described in Marchese Robinson et al. [3], may also have been applied to each dataset, as documented in the README file.

    Each dataset was developed using Excel-based templates developed in the NanoPUZZLES project [4]. (N.B. The latest version of the templates at the time of writing was version 4, as opposed to version 3, which was described in Marchese Robinson et al. [3]; the latest template version should be documented within the README file of each dataset.) Since these templates were iteratively updated, not all datasets may be perfectly consistent with the latest version, although efforts were made to minimise inconsistencies.

    The three copies of each dataset contained within each individual [DATASET ID]_all_copies.zip are as follows:
    (a) [DATASET ID].zip: the original dataset prepared within Excel
    (b) [DATASET ID]-txt_opt-N.zip: a tab-delimited text version of each dataset prepared using version 2.0 of the cited Python program [5], with the -N flag selected (designed to minimise inconsistencies with the latest version of the NanoPUZZLES templates)
    (c) [DATASET ID]-txt_opt-a_opt-c_opt-N.zip: a tab-delimited text version of each dataset prepared using version 2.0 of the cited Python program [5], with the -N, -a (truncate ontology IDs) and -c (remove Investigation file comments) flags selected, as required for submission to the nanoDMS online database system [3,6].
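As a minimal sketch of the naming convention for the three copies listed above (the function name and dictionary keys are hypothetical helpers, not part of the dataset):

```python
# Illustrative sketch of the archive naming convention described above.
# The function name and dictionary keys are hypothetical helpers.
def nanopuzzles_archive_names(dataset_id: str) -> dict:
    """Names of the three copies bundled in [DATASET ID]_all_copies.zip."""
    return {
        "excel": f"{dataset_id}.zip",                          # (a) original Excel dataset
        "txt-N": f"{dataset_id}-txt_opt-N.zip",                # (b) tab-delimited, -N flag
        "txt-acN": f"{dataset_id}-txt_opt-a_opt-c_opt-N.zip",  # (c) -N, -a, -c flags
    }
```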

    The original datasets prepared in Excel were prepared via manual curation. In some cases, it was necessary to extract data from graphs. In some cases, the GSYS software program was employed to facilitate estimation of the values of numerical data points reported in graphs [7,8].


    Disclaimers:

    (1) this work has not undergone peer review
    (2) no endorsement by third parties should be inferred
    (3) You are strongly advised to read the README file and the "Investigation Description" field of the Investigation file before working with any of these datasets. The latter field may document dataset-specific caveats, such as possible problems or uncertainties associated with curation from the original reference(s). Other such comments may be found in Study, Material or Assay file "Comment" fields.

    Cited references:
    [1] Thomas, D.G. et al. BMC Biotechnol. 2013, 13, 2. doi:10.1186/1472-6750-13-2
    [2] https://wiki.nci.nih.gov/display/ICR/ISA-TAB-Nano (accessed 18th of December 2015)
    [3] Marchese Robinson, R.L. et al. Beilstein J. Nanotechnol. 2015, 6, 1978–1999. doi:10.3762/bjnano.6.202
    [4] http://www.myexperiment.org/files/1356.html (accessed 18th of December 2015)
    [5] https://github.com/RichardLMR/xls2txtISA.NANO.archive (accessed 18th of December 2015)
    [6] http://biocenitc-deq.urv.cat/nanodms (accessed 18th of December 2015)

    [7] http://www.jcprg.org/gsys/2.4/ (last accessed 11th of April 2016)

    [8] R. Suzuki, "Introduction, Design and Implementation of Digitization Software GSYS", IAEA Report INDC(NDS)-0629, p. 19, IAEA, Vienna, Austria (2013)

    FigShare versions:

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Cytotoxicity_and_some_physicochemical_data_reported_by_Wang_et_al_2014_DOI_10_3109_17435390_2013_796534_/2056140

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Zebrafish_mortality_and_basic_nanomaterial_composition_data_extracted_from_Kovriznych_et_al_2013_doi_10_2478_intox_2013_0012_/2056137

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Physicochemical_and_in_vitro_cytotoxicity_data_LDH_membrane_damage_extracted_from_Sayes_and_Ivanov_2010_DOI_10_1111_j_1539_6924_2010_01438_x_/2056134

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Cytotoxicity_and_physicochemical_data_for_nanomaterials_extracted_from_Murdock_et_al_2008_DOI_10_1093_toxsci_kfm240_/2056131

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Data_reported_in_Shaw_et_al_2008_DOI_10_1073_pnas_0802878105_/2056128

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Data_extracted_from_Puzyn_et_al_2011_DOI_10_1038_NNANO_2011_10_/2056125

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Toxicity_and_physicochemical_data_extracted_from_Zhang_et_al_2012_DOI_10_1021_nn3010087_/2056122

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Curation_of_carbon_nanotubes_experimental_data_reported_by_Zhou_et_al_2008_DOI_10_1021_nl0730155_supplemented_with_carbon_nanotubes_structure_files_3D_SDF_created_according_to_the_approach_described_by_Shao_et_al_/2056110

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_C60_fullerene_nanoparticle_Ames_test_and_in_vivo_micronucleus_data_extracted_from_Shinohara_et_al_2009_DOI_10_1016_j_toxlet_2009_09_012_/2056104

    https://figshare.com/articles/NanoPUZZLES_ISA_TAB_Nano_dataset_Data_extracted_from_NanoCare_project_final_scientific_report/2056095


    Funding:

    The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/ 2007-2013) under grant agreement no. 309837 (NanoPUZZLES project).

  6. Nerwip Corpus

    • commons.datacite.org
    • data.niaid.nih.gov
    • +1 more
    Updated Jan 19, 2016
    Cite
    Vincent Labatut (2016). Nerwip Corpus [Dataset]. http://doi.org/10.6084/m9.figshare.1289791.v17
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    DataCite (https://www.datacite.org/)
    figshare
    Authors
    Vincent Labatut
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This corpus contains 408 Wikipedia articles. These are biographies, manually annotated to highlight entities of the following types: Dates, Locations, Organizations and Persons. It was designed to be used by our tool Nerwip (https://github.com/CompNet/nerwip) in order to evaluate and compare existing NER tools on biographic data. It was constituted by Burcu Küpelioglu during her end-of-study project, then cleaned and corrected by Samet Atdag during his MSc to reach a total of 250 articles (v3). Vincent Labatut then completed it further, to reach 408 articles (v4). The dataset is shared under a Creative Commons 0 license. If you use it, please cite the following article: "A Comparison of Named Entity Recognition Tools Applied to Biographical Texts", S. Atdag & V. Labatut, 2013. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6632052&tag=1 The other files are NER-tool-related data (models, dictionaries, etc.) needed by Nerwip to detect entities. If you want to use the tool, you need to unzip these files as explained in the README file associated with Nerwip on GitHub.

  7. Aggregated dataset and readme file

    • uvaauas.figshare.com
    txt
    Updated Sep 20, 2023
    Cite
    M.L. Vu; Ivan Soraperra; Margarita Leib; Joel van der Weele; Shaul Shalvi (2023). Aggregated dataset and readme file [Dataset]. http://doi.org/10.21942/uva.19506658.v1
    Available download formats: txt
    Dataset updated
    Sep 20, 2023
    Dataset provided by
    University of Amsterdam / Amsterdam University of Applied Sciences
    Authors
    M.L. Vu; Ivan Soraperra; Margarita Leib; Joel van der Weele; Shaul Shalvi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset for the paper: Willful ignorance: a Meta-analysis

  8. WS-DREAM dataset1

    • search.datacite.org
    Updated Nov 2, 2016
    Cite
    Jamie Zhu (2016). WS-DREAM dataset1 [Dataset]. http://doi.org/10.6084/m9.figshare.4040112.v4
    Dataset updated
    Nov 2, 2016
    Dataset provided by
    DataCite (https://www.datacite.org/)
    Figshare (http://figshare.com/)
    Authors
    Jamie Zhu
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    README for the 339 × 5825 Web service QoS dataset (last updated: 2016/04/27)

    This dataset describes real-world QoS evaluation results from 339 users on 5,825 Web services. Note that we have recently updated the location information (e.g., IP, AS, latitude, longitude) of users and services in the dataset.

    Reference papers. Please refer to the following papers for detailed descriptions of this dataset:
    - Zibin Zheng, Yilei Zhang, and Michael R. Lyu, "Investigating QoS of Real-World Web Services", IEEE Transactions on Services Computing, vol. 7, no. 1, pp. 32-39, 2014.
    - Zibin Zheng, Yilei Zhang, and Michael R. Lyu, "Distributed QoS Evaluation for Real-World Web Services", in Proc. of the 8th International Conference on Web Services (ICWS'10), Miami, Florida, USA, July 5-10, 2010, pp. 83-90.
    If you use this dataset in published research, please cite either of the above papers.

    Acknowledgements. We would like to thank PlanetLab (http://www.planet-lab.org/) for collecting the dataset, and IPLocation (http://www.iplocation.net/) for collecting the location information. We also thank Prof. Mingdong Tang (HNUST) for contributing the AS information of the users and services.

    License. The MIT License (MIT). Copyright (c) 2016, WS-DREAM, CUHK (https://wsdream.github.io)
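As a hedged illustration only: assuming the QoS values are distributed as a whitespace-separated user × service matrix, one row per user (the authoritative layout is described in the dataset's own README), and assuming hypothetically that missing measurements are flagged with -1, a loader might look like:

```python
# Hedged sketch (stdlib only): parse a user x service QoS matrix stored as
# whitespace-separated text, one row per user. The layout and the -1
# missing-value marker are assumptions; consult the dataset's README.
from io import StringIO

def load_qos_matrix(fh):
    """Return a list of rows, mapping the assumed -1 marker to None."""
    matrix = []
    for line in fh:
        if not line.strip():
            continue
        row = [float(v) for v in line.split()]
        matrix.append([None if v == -1 else v for v in row])
    return matrix

# Tiny in-memory example (2 users x 3 services) instead of the real
# 339 x 5825 file:
m = load_qos_matrix(StringIO("0.343 5.982 -1\n1.25 -1 0.2\n"))
```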


  9. LScD (Leicester Scientific Dictionary)

    • figshare.le.ac.uk
    docx
    Updated Apr 15, 2020
    + more versions
    Cite
    Neslihan Suzen (2020). LScD (Leicester Scientific Dictionary) [Dataset]. http://doi.org/10.25392/leicester.data.9746900.v3
    Available download formats: docx
    Dataset updated
    Apr 15, 2020
    Dataset provided by
    University of Leicester
    Authors
    Neslihan Suzen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Leicester
    Description

    LScD (Leicester Scientific Dictionary), April 2020, by Neslihan Suzen, PhD student at the University of Leicester (ns433@leicester.ac.uk / suzenneslihan@hotmail.com). Supervised by Prof Alexander Gorban and Dr Evgeny Mirkes.

    [Version 3] The third version of LScD is created from the updated LSC (Leicester Scientific Corpus), Version 2*. All pre-processing steps applied to build the new version of the dictionary are the same as in Version 2** and can be found in the description of Version 2 below; we do not repeat the explanation. After pre-processing, the total number of unique words in the new version of the dictionary is 972,060. The files provided with this description are the same as those described for LScD Version 2 below.

    * Suzen, Neslihan (2019): LSC (Leicester Scientific Corpus). figshare. Dataset. https://doi.org/10.25392/leicester.data.9449639.v2
    ** Suzen, Neslihan (2019): LScD (Leicester Scientific Dictionary). figshare. Dataset. https://doi.org/10.25392/leicester.data.9746900.v2

    [Version 2] Getting started: This document provides the pre-processing steps for creating an ordered list of words from the LSC (Leicester Scientific Corpus) [1] and the description of LScD. The dictionary is created to be used in future work on the quantification of the meaning of research texts. R code for producing the dictionary from the LSC, with usage instructions, is available in [2]; the code can also be used on lists of texts from other sources, though amendments may be required.

    LSC is a collection of abstracts of articles and proceedings papers published in 2014 and indexed by the Web of Science (WoS) database [3]. Each document contains the title, list of authors, list of categories, list of research areas, and times cited. The corpus contains only documents in English. It was collected in July 2018 and contains the number of citations from publication date to July 2018. The total number of documents in LSC is 1,673,824.

    LScD is an ordered list of words from the texts of abstracts in LSC. The dictionary stores 974,238 unique words, sorted by the number of documents containing each word in descending order. All words in the LScD are in stemmed form. The LScD contains the following information: (1) unique words in abstracts; (2) the number of documents containing each word; (3) the number of appearances of each word in the entire corpus.

    Processing the LSC:
    Step 1. Downloading the LSC: Use of the LSC is subject to acceptance of a request for the link by email. To access the LSC for research purposes, please email ns433@le.ac.uk. The data are extracted from Web of Science [3]. You may not copy or distribute these data in whole or in part without the written consent of Clarivate Analytics.
    Step 2. Importing the corpus into R: The full R code for processing the corpus can be found on GitHub [2]. All following steps can be applied to an arbitrary list of texts from any source with parameter changes; the structure of the corpus, such as file format and the names (and positions) of fields, should be taken into account. The organisation of the CSV files of LSC is described in the README file for LSC [1].
    Step 3. Extracting abstracts and saving metadata: Metadata (all fields in a document excluding abstracts) and the abstracts are separated. Metadata are saved as MetaData.R, with fields: List_of_Authors, Title, Categories, Research_Areas, Total_Times_Cited and Times_cited_in_Core_Collection.
    Step 4. Text pre-processing of the collection of abstracts:
    1. Removing punctuation and special characters: all non-alphanumeric characters are substituted by a space. The character "-" is not substituted in this step, to keep words like "z-score", "non-payment" and "pre-processing" from losing their meaning; uniting prefixes with words is performed in later steps.
    2. Lowercasing: performed to avoid treating words like "Corpus", "corpus" and "CORPUS" differently. The entire collection of texts is converted to lowercase.
    3. Uniting prefixes: words containing prefixes joined with "-" are united into a single word. The prefixes united for this research are listed in the file "list_of_prefixes.csv"; most were extracted from [4], with the commonly used prefixes 'e', 'extra', 'per', 'self' and 'ultra' added.
    4. Substitution of words: some hyphenated words require substitution to avoid losing their meaning before "-" is removed, e.g. "z-test", "well-known" and "chi-square" become "ztest", "wellknown" and "chisquare". Such words were identified by sampling abstracts from LSC; the full list and the decisions taken are presented in the file "list_of_substitution.csv".
    5. Removing "-": all remaining "-" characters are replaced by a space.
    6. Removing numbers: all digits not included in a word are replaced by a space. Words containing both digits and letters are kept, since alphanumeric tokens such as chemical formulae (e.g. "co2", "h2o", "21st") may be important for the analysis.
    7. Stemming: inflected words are converted to their word stem, uniting several forms of a word with similar meaning into one form and saving memory and time [5]. All words in the LScD are stemmed.
    8. Stop-word removal: stop words are extremely common words that provide little value in a language (e.g. 'I', 'the', 'a'). The 'tm' package in R, which lists 174 English stop words, was used to remove them [6].
    Step 5. Writing the LScD to CSV: there are 1,673,824 plain processed texts for further analysis. All unique words in the corpus are extracted and written to the file "LScD.csv".

    The organisation of the LScD: the total number of words in the file "LScD.csv" is 974,238. Each field is described below:
    Word: unique words from the corpus, in lowercase and stemmed form; the field is sorted by the number of documents containing the word, in descending order.
    Number of Documents Containing the Word: a binary count is used: if a word exists in an abstract, it counts 1 for that document, even if it occurs more than once. The total is the sum of these 1s over the entire corpus.
    Number of Appearances in Corpus: how many times a word occurs when the corpus is treated as one large document.

    Instructions for the R code: LScD_Creation.R is an R script for processing the LSC to create an ordered list of words from the corpus [2]. Outputs are saved as an RData file and in CSV format:
    Metadata file: all fields in a document excluding abstracts (List_of_Authors, Title, Categories, Research_Areas, Total_Times_Cited and Times_cited_in_Core_Collection).
    File of abstracts: all abstracts after the pre-processing steps defined in Step 4.
    DTM: the Document Term Matrix constructed from the LSC [6]; each entry is the number of times a word occurs in the corresponding document.
    LScD: an ordered list of words from LSC as defined in the previous section.
    To use the code: (1) download the folder 'LSC', 'list_of_prefixes.csv' and 'list_of_substitution.csv'; (2) open the LScD_Creation.R script; (3) change the parameters in the script, replacing them with the full paths of the source-file and output directories; (4) run the full code.

    References:
    [1] N. Suzen. (2019). LSC (Leicester Scientific Corpus) [Dataset]. Available: https://doi.org/10.25392/leicester.data.9449639.v1
    [2] N. Suzen. (2019). LScD-LEICESTER SCIENTIFIC DICTIONARY CREATION. Available: https://github.com/neslihansuzen/LScD-LEICESTER-SCIENTIFIC-DICTIONARY-CREATION
    [3] Web of Science. (15 July). Available: https://apps.webofknowledge.com/
    [4] A. Thomas, "Common Prefixes, Suffixes and Roots," Center for Development and Learning, 2013.
    [5] C. Ramasubramanian and R. Ramya, "Effective pre-processing activities in text mining using improved Porter's stemming algorithm," International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, no. 12, pp. 4536-4538, 2013.
    [6] I. Feinerer, "Introduction to the tm Package: Text Mining in R," available online: https://cran.r-project.org/web/packages/tm/vignettes/tm.pdf, 2013.
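The original pipeline is the R code in [2]; as an illustration only, steps 1-6 of the pre-processing described above, plus the two frequency counts stored in LScD, can be sketched in Python. Stemming (step 7) and stop-word removal (step 8) are omitted, and `PREFIXES` and `SUBSTITUTIONS` are tiny stand-ins for the real "list_of_prefixes.csv" and "list_of_substitution.csv":

```python
# Illustration only (the real pipeline is the R code in [2]): steps 1-6 of
# the pre-processing above, plus the two LScD counts. Stemming (step 7) and
# stop-word removal (step 8) are omitted; PREFIXES and SUBSTITUTIONS are
# tiny stand-ins for "list_of_prefixes.csv" and "list_of_substitution.csv".
import re
from collections import Counter

PREFIXES = ["pre", "non", "self"]
SUBSTITUTIONS = {"z-test": "ztest", "well-known": "wellknown"}

def preprocess(text):
    text = re.sub(r"[^\w\s-]", " ", text)        # 1. non-alphanumerics -> space, keep "-"
    text = text.lower()                          # 2. lowercase
    text = re.sub(r"\b(" + "|".join(PREFIXES) + r")-(?=\w)", r"\1", text)  # 3. unite prefixes
    for old, new in SUBSTITUTIONS.items():       # 4. fixed substitutions
        text = text.replace(old, new)
    text = text.replace("-", " ")                # 5. drop remaining "-"
    text = re.sub(r"\b\d+\b", " ", text)         # 6. drop standalone numbers ("co2" is kept)
    return text.split()

def build_dictionary(docs):
    """Return (documents-containing-word, total-appearances) counters."""
    doc_freq, corpus_freq = Counter(), Counter()
    for doc in docs:
        words = preprocess(doc)
        corpus_freq.update(words)
        doc_freq.update(set(words))              # binary count per document
    return doc_freq, corpus_freq
```

Sorting `doc_freq` in descending order would reproduce the ordering used by the "Word" field of "LScD.csv".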

  10. FST raw data

    • kilthub.cmu.edu
    • figshare.com
    zip
    Updated May 12, 2017
    Cite
    Marlene Behrmann (2017). FST raw data [Dataset]. http://doi.org/10.6084/m9.figshare.4233107.v1
    Available download formats: zip
    Dataset updated
    May 12, 2017
    Dataset provided by
    Carnegie Mellon University
    Authors
    Marlene Behrmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Magnetoencephalography was recorded while four participants viewed and discriminated among 91 face identities. For each participant, the data set includes MEG recordings for each block of the task and for an independent localizer task. It also includes MRI recordings for source modeling of MEG signals, and behavioral data from the MEG face identity task. Also included are pairwise behavioral similarity ratings of a subset of the stimuli from 7 participants who did not participate in the main MEG experiment. See readme file for full description of project, data files, and data acquisition.

  11. 4D 128 cube topological dataset

    • researchdata.edu.au
    • figshare.com
    Updated Sep 28, 2023
    Cite
    Hannouch Khalil; Chalup Stephan (2023). 4D 128 cube topological dataset [Dataset]. https://researchdata.edu.au/4d-128-cube-topological-dataset/2824536
    Dataset updated
    Sep 28, 2023
    Dataset provided by
    The University of Newcastle
    Authors
    Hannouch Khalil; Chalup Stephan
    Description

    The materials in this repository comprise a 4D topological dataset and two 4D visualisations in video format. The dataset can be downloaded directly as a series of twenty 1.5GB .zip files. More details can be found in the README.txt file. This data is also available via mediated access in the following formats: a single .zip file of approximately 30GB, or a series of ten 3GB .zip files.

    All code and data released with this supplementary material uses the following license: Creative Commons Attribution 4.0 International (CC BY 4.0) http://creativecommons.org/licenses/by/4.0 This license permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original authors & provide a link to the Creative Commons license, and indicate if changes were made.

    Please contact researchdata@newcastle.edu.au to access the data in alternative formats.

  12. Env 188B sample readme file

    • figshare.com
    txt
    Updated May 30, 2017
    Cite
    Tony Aponte (2017). Env 188B sample readme file [Dataset]. http://doi.org/10.6084/m9.figshare.5038955.v1
    Available download formats: txt
    Dataset updated
    May 30, 2017
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Tony Aponte
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Sample readme.txt for UCLA Env 188B students to use as a template for creating their own readme files.

  13. brain tumor dataset

    • figshare.com
    zip
    Updated Dec 21, 2024
    Cite
    Jun Cheng (2024). brain tumor dataset [Dataset]. http://doi.org/10.6084/m9.figshare.1512427.v8
    Available download formats: zip
    Dataset updated
    Dec 21, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jun Cheng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This brain tumor dataset contains 3,064 T1-weighted contrast-enhanced images with three kinds of brain tumor. Detailed information on the dataset can be found in the README file. The README file has been updated to add the image acquisition protocol and MATLAB code to convert the .mat files to jpg images.

  14. DepMap 19Q2 Public

    • figshare.com
    txt
    Updated May 31, 2023
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Broad DepMap (2023). DepMap 19Q2 Public [Dataset]. http://doi.org/10.6084/m9.figshare.8061398.v1
    Available download formats: txt
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Broad DepMap
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the results of Avana library CRISPR-Cas9 genome-scale knockout (prefixed with Achilles) as well as mutation, copy number and gene expression data (prefixed with CCLE) for cancer cell lines, as part of the Broad Institute's Cancer Dependency Map project. We have repackaged our fileset to include all quarterly-updating datasets produced by DepMap. The Avana CRISPR-Cas9 genome-scale knockout data has expanded to include 563 cell lines, the RNAseq data includes 1,200 cell lines, and the copy number data includes 1,626 cell lines. Please see the README files for details regarding updates to the data-processing pipeline procedures. As our screening efforts continue, we will be releasing additional cancer dependency data on a quarterly basis for unrestricted use. For the latest datasets available, further analyses, and to subscribe to our mailing list, visit https://depmap.org. Descriptions of the experimental methods and the CERES algorithm are published in http://dx.doi.org/10.1038/ng.3984. Some cell lines were processed using copy number data based on the Sanger Institute whole exome sequencing data (COSMIC: http://cancer.sanger.ac.uk.cell_lines, EGA accession number: EGAD00001001039) reprocessed using CCLE pipelines. A detailed description of the pipelines and tool versions for CCLE expression can be found here: https://github.com/broadinstitute/gtex-pipeline/blob/v9/TOPMed_RNAseq_pipeline.md.

  15. metaseq supplemental data (README file)

    • figshare.com
    txt
    Updated Jan 19, 2016
    Cite
    Ryan Dale (2016). metaseq supplemental data (README file) [Dataset]. http://doi.org/10.6084/m9.figshare.1092575.v6
    Available download formats: txt
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ryan Dale
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary data for the metaseq manuscript. This dataset is intended to be downloaded and unpacked automatically, along with the rest of the supplemental data files, using the download_metaseq_supplemental.py script found at http://files.figshare.com/1581826/download_metaseq_supplemental.py.

  16. LUMIERE dataset - Readme file

    • springernature.figshare.com
    pdf
    Updated Dec 13, 2022
    Cite
    Yannick Suter; Urspeter Knecht; Waldo Valenzuela; Michelle Notter; Ekkehard Hewer; Philippe Schucht; Roland Wiest; Mauricio Reyes (2022). LUMIERE dataset - Readme file [Dataset]. http://doi.org/10.6084/m9.figshare.21266241.v1
    Available download formats: pdf
    Dataset provided by
    Figshare
    Authors
    Yannick Suter; Urspeter Knecht; Waldo Valenzuela; Michelle Notter; Ekkehard Hewer; Philippe Schucht; Roland Wiest; Mauricio Reyes
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This readme file describes the files contained in the LUMIERE dataset.

  17. metaseq supplemental data

    • figshare.com
    application/gzip
    Updated Jun 3, 2023
    Cite
    Ryan Dale (2023). metaseq supplemental data [Dataset]. http://doi.org/10.6084/m9.figshare.1091467.v4
    Available download formats: application/gzip
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ryan Dale
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary data for the metaseq manuscript in Nucleic Acids Research (doi: 10.1093/nar/gku644, or http://doi.org/10.1093/nar/gku644). Due to size restrictions on individual files on figshare, this dataset has been split into many pieces. Use the download_metaseq_supplemental.py script found at http://files.figshare.com/1581826/download_metaseq_supplemental.py to download and unpack all data and source code into the correct directory structure in a local directory, then see the README file in that directory.
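As a minimal sketch of the download step, the local filename for the helper script can be derived from the URL quoted above; only the URL is taken from the description, and the download itself is not performed here:

```python
from urllib.parse import urlparse
import posixpath

# Helper-script URL quoted in the dataset description.
SCRIPT_URL = "http://files.figshare.com/1581826/download_metaseq_supplemental.py"

# Derive the local filename from the URL path.
script_name = posixpath.basename(urlparse(SCRIPT_URL).path)

# Intended invocation after fetching the script (not executed here):
command = ["python", script_name]
```

Once downloaded, running the script is expected to unpack the pieces into the directory structure described above.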

  18. README

    • figshare.com
    txt
    Updated Jun 8, 2021
    Cite
    Jesse Marshall (2021). README [Dataset]. http://doi.org/10.6084/m9.figshare.14746044.v1
    Available download formats: txt
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jesse Marshall
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Readme file for the PAIR-R24M dataset.

  19. Data, Code, and Readme Files

    • figshare.com
    docx
    Updated Apr 5, 2021
    Cite
    James Klarevas-Irby; Damien R Farine; Martin Wikelski (2021). Data, Code, and Readme Files [Dataset]. http://doi.org/10.6084/m9.figshare.14363600.v1
    Available download formats: docx
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    James Klarevas-Irby; Damien R Farine; Martin Wikelski
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and code in support of [Klarevas-Irby, Wikelski, and Farine (2021)], in addition to a readme file with further information. All scripts are written for R ver. 4.0 and are numbered in the order in which they should be run (order does not matter between scripts that share the same number).
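The numbering convention can be sketched as a small sorting routine that orders script filenames by their leading number; a stable sort leaves scripts sharing a number in their original relative order, which per the readme does not matter. The filenames here are hypothetical illustrations, not the actual script names:

```python
import re

# Hypothetical script names following the numbering convention described above.
scripts = ["2_clean_data.R", "1_import_data.R", "3b_plots.R", "3a_models.R"]

# Sort by the leading integer. Python's sort is stable, so scripts sharing
# a number (e.g. 3a and 3b) keep their original relative order.
ordered = sorted(scripts, key=lambda s: int(re.match(r"\d+", s).group()))
```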

  20. ReadMe

    • springernature.figshare.com
    pdf
    Updated Dec 8, 2020
    Cite
    Alessandra Sciutti; Nicoletta Noceti; Elena Nicora; Gaurvi Goyal; Alessia Vignolo; Francesca Odone (2020). ReadMe [Dataset]. http://doi.org/10.6084/m9.figshare.12049161.v1
    Available download formats: pdf
    Dataset provided by
    Figshare
    Authors
    Alessandra Sciutti; Nicoletta Noceti; Elena Nicora; Gaurvi Goyal; Alessia Vignolo; Francesca Odone
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    ReadMe file with instructions on how to use the dataset.
