Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Files and datasets in Parquet format related to molecular dynamics, retrieved from the Zenodo, Figshare, and OSF data repositories. The file 'data_model_parquet.md' is a codebook that contains the data models for the Parquet files.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Please refer to GitHub (https://doi.org/10.5281/zenodo.7683559) for further data and analysis information.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset combines multimodal biosignals and eye-tracking information gathered in a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart-rate) signals, along with demographic, clinical, and behavioral data collected from 36 individuals (18 able-bodied and 18 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. Alongside these data we also include evaluation reports from both the subjects and the experimenters concerning the experimental procedure and the collected dataset. We believe that the presented dataset will contribute to the development and evaluation of modern human-computer interaction systems that foster the integration of people with severe motor impairments back into society.

Please use the following citation: Nikolopoulos, Spiros, Georgiadis, Kostas, Kalaganis, Fotis, Liaros, Georgios, Lazarou, Ioulietta, Adam, Katerina, Papazoglou – Chalikias, Anastasios, Chatzilari, Elisavet, Oikonomou, Vangelis P., Petrantonakis, Panagiotis C., Kompatsiaris, Ioannis, Kumar, Chandan, Menges, Raphael, Staab, Steffen, Müller, Daniel, Sengupta, Korok, Bostantjopoulou, Sevasti, Katsarou, Zoe, Zeilig, Gabi, Plotnik, Meir, Gottlieb, Amihai, Fountoukidou, Sofia, Ham, Jaap, Athanasiou, Dimitrios, Mariakaki, Agnes, Comanducci, Dario, Sabatini, Edoardo, Nistico, Walter & Plank, Markus. (2017). The MAMEM Project - A dataset for multimodal human-computer interaction using biosignals and eye tracking information. Zenodo. http://doi.org/10.5281/zenodo.834154

Read/analyze using the following software: https://github.com/MAMEM/eeg-processing-toolbox
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The R package 'packagefinder' was used to query CRAN and compile this data table. The search terms ecology AND evolution were used; this approach searches all fields. Search conducted July 2019. Code published at Zenodo: https://zenodo.org/account/settings/github/repository/cjlortie/R_package_chooser_checklist
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset of three TV series with manual annotations.

Cite as:
@inproceedings{Bost2020, title = {Serial Speakers: a Dataset of TV Series}, author = {Bost, Xavier and Labatut, Vincent and Linares, Georges}, url = {https://hal.archives-ouvertes.fr/hal-02477736}, booktitle = {12th International Conference on Language Resources and Evaluation (LREC 2020)}, address = {Marseille, France}, year = {2020}}

The dataset consists of 3 TV series:
- Breaking Bad: S01--S05 (file 'bb.json')
- Game of Thrones: S01--S08 (file 'got.json')
- House of Cards: S01--S02 (file 'hoc.json')

All three files are in .json format and contain annotated TV series data. Each TV series is defined by its name and contains seasons, defined by their ids. Every season is made of episodes, defined by their ids, titles, duration, and fps. Each episode contains two basic kinds of data: scenes and speech segments. Scenes are defined by starting points and are made of shots (Seasons 1 only). A shot is defined by:
- Starting and ending positions.
- Recurring shot ids.

The speech segments are defined by their:
- Starting and ending points.
- Textual content (here encrypted for copyright reasons).
- Speaker.
- Possible interlocutors (for the following episodes only: bb: S01E04, S01E06, S02E03, S02E04; got: S01E03, S01E07, S01E08; hoc: S01E01, S01E07, S01E11).

All timestamps are expressed in seconds and are valid for the video files extracted from the commercial DVDs (PAL 25 FPS), with recaps (unannotated) included at the beginning of the House of Cards episodes. If you are interested in the textual content of the dataset, please consider using our text recovering tool on GitHub: https://github.com/bostxavier/Serial-Speakers

A comprehensive description of the dataset can be found at: https://hal.archives-ouvertes.fr/hal-02477736
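A sketch of walking the annotation structure described above, using Python's standard json module. The exact key names ("seasons", "episodes", "speech_segments", etc.) are assumptions inferred from the description, not the verified schema of 'bb.json' and friends; the mock object below stands in for a real file.

```python
import json

# Mock object mirroring the described structure (key names are assumed,
# not taken from the actual files).
mock = json.loads("""
{
  "name": "Breaking Bad",
  "seasons": [
    {"id": 1,
     "episodes": [
       {"id": 1, "title": "Pilot", "duration": 2760.0, "fps": 25,
        "speech_segments": [
          {"start": 12.4, "end": 15.1,
           "text": "<encrypted>", "speaker": "walter"}
        ]}
     ]}
  ]
}
""")

# Count speech segments per speaker across all seasons and episodes.
counts = {}
for season in mock["seasons"]:
    for episode in season["episodes"]:
        for seg in episode["speech_segments"]:
            counts[seg["speaker"]] = counts.get(seg["speaker"], 0) + 1

print(counts)  # {'walter': 1}
```

The same traversal applied to the real files would aggregate speaker statistics per series.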
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset extracted from the source code of OpenJDK 8 (http://openjdk.java.net/), generated using the CodeOntology parser. This dataset is a breakdown into 4 different files of the dataset at https://doi.org/10.5281/zenodo.579977:
- structuralInformation.nt - Structural information on source code: 1,981,108 triples
- annotations.nt - DBpedia links: 309,688 triples
- sourceCodeLiterals.nt - Actual source code as literals: 134,757 triples
- comments.nt - Literal comments: 105,881 triples

The dataset includes different kinds of triples: structural information extracted from source code, DBpedia links generated from javadoc comments, actual source code as literals, and literal comments.

Background: The associated publication describes the development of CodeOntology as a community-shared software framework supporting expressive queries over source code. This dataset is the product of the CodeOntology parser, which analyzes Java source code and serializes it into RDF triples; applied to the source code of OpenJDK 8, it yields a structured dataset consisting of more than 2 million RDF triples. CodeOntology allows the generation of Linked Data from any Java project, thereby enabling the execution of highly expressive queries over source code by means of a powerful language like SPARQL.

A tutorial video is available at https://youtu.be/bd6pvUDy8kA
More information at the CodeOntology website: http://codeontology.org/
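The .nt files above are in N-Triples format, one triple per line. In practice one would load them with a library such as rdflib and query with SPARQL; as a self-contained illustration of the format itself, here is a minimal hand-rolled parser that handles only the simple URI/URI/(URI or plain literal) case. The example line is invented for demonstration, not taken from the dataset.

```python
import re

# Matches: <subject URI> <predicate URI> (<object URI> | "plain literal") .
TRIPLE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+(<[^>]*>|"[^"]*")\s*\.')

def parse_nt_line(line):
    """Parse one simple N-Triples line into (subject, predicate, object).

    Returns None for lines that don't match the simple pattern
    (comments, blank nodes, typed literals, etc.).
    """
    m = TRIPLE.match(line.strip())
    if m is None:
        return None
    s, p, o = m.groups()
    return s, p, o.strip('<>"')

# Invented example line in the style of comments.nt:
line = ('<http://example.org/Foo> '
        '<http://www.w3.org/2000/01/rdf-schema#comment> '
        '"A literal comment" .')
print(parse_nt_line(line))
```

A real workload over ~2 million triples should still use rdflib or a triple store; this only shows what each line of the files contains.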
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These are the data required to reproduce the analysis in the cnidarian genomes paper. By adding this file to the code in the source repository at https://doi.org/10.5281/zenodo.8402538, it is possible to run the code.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional file 3: Performance comparison between the complete (450k) and restricted (21k) datasets.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
source data
Data collection
Supplemental Material
Attribution 1.0 (CC BY 1.0) https://creativecommons.org/licenses/by/1.0/
License information was derived automatically
Initial conditions
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset provides extended data supporting the claims of a systematic review (conducted following the PRISMA 2020 guidelines) about the process of integrating Media and Information Literacy (MIL) into the curriculum: a) research process; b) studies included; c) data.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Articles submitted (1st edition) to the preprint servers arXiv/bioRxiv were converted to text and analysed as follows:
This dataset describes the results of the above work.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/2.3/customlicense?persistentId=doi:10.7910/DVN/R33RS9
Harvard Dataverse => Digital Library - Projects & Theses - Prof. Dr. Scholz

Introduction and background information to "Digital Library - Projects & Theses - Prof. Dr. Scholz".
The URL of the dataverse: http://dataverse.harvard.edu/dataverse/LibraryProfScholz
The URL of this (introduction) dataset: http://doi.org/10.7910/DVN/R33RS9

You may have been directed here because the calling page has no other entry point (with DOI) into this dataverse. Click on the title of this page to reach the start page of the dataverse!

Introduction to the Data in this Dataverse
This dataverse is about:
- Aircraft Design
- Flight Mechanics
- Aircraft Systems

This dataverse contains research data and software produced by students for their projects and theses on the above topics. Get linked to all other resources from their reports using the URN from the German National Library (DNB) as given in each dataset under "Metadata": https://nbn-resolving.org/html/urn:nbn:de:gbv:18302-aeroJJJJ-MM-DD.01x

Alternative sites that store the data given in this dataverse are http://library.ProfScholz.de and https://archive.org/details/@profscholz. Open an "item". Under "DOWNLOAD OPTIONS" select the file (as far as available) called "ZIP" to download DataXxxx.zip. Alternatively, go to "SHOW ALL"; in the new window, next to DataXxxx.zip, click "View Contents", or select the URL next to "Data-list" to download a single file from DataXxxx.zip.

Data Publishing
Data publishing means publishing research data for (re)use by others. It consists of preparing single files or a dataset containing several files for access on the WWW. This practice is part of the open science movement. There is consensus about the benefits resulting from Open Data, especially in connection with Open Access publishing. It is important to link the publication (e.g. thesis) with the underlying data and vice versa.

General (not disciplinary) and free data repositories are:
- Harvard Dataverse (this one!)
- figshare (emphasis: multimedia)
- Zenodo (emphasis: results from EU research, mainly text)
- Mendeley Data (emphasis: data associated with journal articles)

To find data repositories use http://re3data.org. Read more at https://en.wikipedia.org/wiki/Data_publishing
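The custom-license URL shown earlier follows the Dataverse native API pattern: a fixed endpoint path plus the dataset's persistent identifier as a query parameter. A small sketch of composing such a URL with the standard library; the helper function name is made up for illustration, and note that urlencode percent-encodes the DOI (both the encoded and literal forms resolve).

```python
from urllib.parse import urlencode

def customlicense_url(base, version, doi):
    """Build a Dataverse native-API custom-license URL for a given DOI.

    Hypothetical helper for illustration; mirrors the URL pattern
    https://<base>/api/datasets/:persistentId/versions/<v>/customlicense
    """
    path = f"{base}/api/datasets/:persistentId/versions/{version}/customlicense"
    return path + "?" + urlencode({"persistentId": f"doi:{doi}"})

url = customlicense_url("https://dataverse.harvard.edu", "2.3",
                        "10.7910/DVN/R33RS9")
print(url)
```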
Supplementary Table 1
Liftover file to convert D. simulans version 2 genome coordinates to D. melanogaster v5 (dm3) coordinates
Liftover file for use with the UCSC liftOver utility for converting genomic coordinates of the version 2 D. simulans genome (Hu et al. 2012) to the version 5 (dm3) D. melanogaster genome.
File: ds2_to_dm3.liftOver.gz

Annotated VCF file
Annotated VCF file containing allele frequencies and read depths of the 14 samples described in Bergland et al. Annotations include p- and q-values from models of seasonality and clinality; genic annotations; average frequencies; and quality filters. This VCF file only contains SNPs with average minor allele frequency > 0.15.
File: 6d_v7.3_output.vcf.gz
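The VCF above is pre-filtered to SNPs with average minor allele frequency > 0.15. A minimal sketch of how such a filter works on VCF data lines, assuming the frequency is stored under an 'AF' key in the INFO column; the real file's INFO key names and the example records below are assumptions, not taken from 6d_v7.3_output.vcf.gz.

```python
def maf_from_info(info_field):
    """Extract allele frequency from a VCF INFO string and fold it
    to the minor-allele frequency. The 'AF' key is an assumption."""
    for entry in info_field.split(";"):
        if entry.startswith("AF="):
            af = float(entry.split("=", 1)[1])
            return min(af, 1.0 - af)
    return None

# Invented VCF data lines (CHROM POS ID REF ALT QUAL FILTER INFO):
records = [
    "2L\t1001\t.\tA\tG\t99\tPASS\tAF=0.40",
    "2L\t2002\t.\tC\tT\t99\tPASS\tAF=0.95",  # MAF = 0.05, dropped
]

kept = [r for r in records
        if (maf_from_info(r.split("\t")[7]) or 0.0) > 0.15]
print(len(kept))  # 1
```

For production use, a dedicated parser such as pysam or cyvcf2 is the better choice; this only illustrates the threshold logic.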
Dryad_PlosOne Baseline vs. Follow-Up Data
This dataset combines items from a 2009/2010 survey and a 2013/2014 survey to examine changes in data sharing/reuse perceptions and practices among research scientists. See the ReadMe file for details.

Final Dryad PLOS ONE Scientist FollowUp Only
Corrected version of the follow-up (2013/2014) study data only. Corresponds to the second half of the results in the manuscript.