Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Poster presented at the Research Data Alliance 5th Plenary Meeting, March 2015. To best encourage data publishing by scientific researchers, the burden of submission needs to be low. Data archiving at the time of and in conjunction with article publication can be an effective means, by catching authors when they’re motivated and tying data submission into an already-familiar publication process. Here we share Dryad’s experiences with integrating journals using various workflows.
The journals’ author guidelines and/or editorial policies were examined to determine whether they take a stance on the availability of the data underlying submitted articles. The mere stated possibility of providing supplementary material along with the submitted article was not considered a research data policy in the present study. Furthermore, source code and algorithms were excluded from the scope of the paper, and thus policies related to them are not included in the analysis of the present article.
To select journals within the field of neurosciences, Clarivate Analytics’ InCites Journal Citation Reports database was searched using the categories of neurosciences and neuroimaging. From the results, the 40 journals with the highest Impact Factor (for the year 2017) were extracted for scrutiny of their research data policies. The selection of journals within the field of physics was created by performing a similar search with the categories of physics, applied; physics, atomic, molecular & chemical; physics, condensed matter; physics, fluids & plasmas; physics, mathematical; physics, multidisciplinary; physics, nuclear; and physics, particles & fields, from which the 40 journals with the highest Impact Factor were again extracted. Similarly, the 40 journals representing the field of operations research were extracted using the search category of operations research and management.
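To make the selection procedure concrete, the following is a minimal sketch of extracting the 40 highest-Impact-Factor journals per field from a tabular Journal Citation Reports export; the file name and column names are assumptions for illustration, not the database's actual export format.

```python
# Illustrative sketch only: selects the 40 highest-Impact-Factor journals per field
# from a hypothetical JCR export. Column names are assumptions, not the study's files.
import pandas as pd

FIELD_CATEGORIES = {
    "neurosciences": ["NEUROSCIENCES", "NEUROIMAGING"],
    "physics": ["PHYSICS, APPLIED", "PHYSICS, ATOMIC, MOLECULAR & CHEMICAL",
                "PHYSICS, CONDENSED MATTER", "PHYSICS, FLUIDS & PLASMAS",
                "PHYSICS, MATHEMATICAL", "PHYSICS, MULTIDISCIPLINARY",
                "PHYSICS, NUCLEAR", "PHYSICS, PARTICLES & FIELDS"],
    "operations research": ["OPERATIONS RESEARCH & MANAGEMENT SCIENCE"],
}

journals = pd.read_csv("jcr_export.csv")  # assumed columns: journal, category, impact_factor_2017

selected = {}
for field, categories in FIELD_CATEGORIES.items():
    subset = journals[journals["category"].isin(categories)]
    # Drop titles that appear in several categories, then keep the top 40 by Impact Factor.
    subset = subset.drop_duplicates(subset="journal")
    selected[field] = subset.nlargest(40, "impact_factor_2017")["journal"].tolist()
```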
Journal-specific data policies were sought from the journals’ websites providing author guidelines or editorial policies. Within the present study, the examination of journal data policies was done in May 2019. The primary data source was journal-specific author guidelines. If journal guidelines explicitly linked to the publisher’s general policy with regard to research data, that policy was used in the analyses of the present article. If a journal-specific research data policy, or the lack of one, was inconsistent with the publisher’s general policies, the journal-specific policies and guidelines were prioritized and used in the present article’s data. If a journal’s author guidelines were not openly available online because, for example, it accepts submissions on an invite-only basis, the journal was not included in the data of the present article. Journals that exclusively publish review articles were also excluded and replaced with the journal having the next highest Impact Factor, so that each set representing the three fields of science consisted of 40 journals. The final data thus consisted of 120 journals in total.
‘Public deposition’ refers to a scenario where the researcher deposits data in a public repository and thus hands the administrative role over the data to the receiving repository. ‘Scientific sharing’ refers to a scenario where the researcher administers his or her data locally and provides it to interested readers on request. Note that none of the journals examined in the present article required that all data types underlying a submitted work be deposited in a public data repository. However, some journals required public deposition of data of specific types. Within the journal research data policies examined in the present article, these data types are well represented by the Springer Nature policy on “Availability of data, materials, code and protocols” (Springer Nature, 2018), that is: DNA and RNA data; protein sequences and DNA and RNA sequencing data; genetic polymorphisms data; linked phenotype and genotype data; gene expression microarray data; proteomics data; macromolecular structures and crystallographic data for small molecules. Furthermore, the registration of clinical trials in a public repository was also considered a data type in this study. The term ‘specific data types’ used in the custom coding framework of the present study thus refers to both life sciences data and the public registration of clinical trials. These data types have community-endorsed public repositories, and deposition in them was most often mandated within the journals’ research data policies.
The term ‘location’ refers to whether the journal’s data policy provides suggestions or requirements for the repositories or services used to share the data underlying submitted works. A mere general reference to ‘public repositories’ was not considered a location suggestion; only references to individual repositories and services were. The category of ‘immediate release of data’ examines whether the journal’s research data policy addresses the timing of publication of the data underlying submitted works. Note that even though a journal may only encourage public deposition of the data, its editorial processes can be set up so that they lead to publication of either the research data or its metadata in conjunction with publication of the submitted work.
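A minimal sketch of how the coding framework described above could be represented for analysis; the field names and value sets are illustrative assumptions, not the study's actual coding instrument.

```python
# Illustrative representation of the journal data-policy coding framework described above.
# Field names and allowed values are assumptions for this sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class JournalDataPolicy:
    journal: str
    field: str                         # "neurosciences", "physics", or "operations research"
    has_data_policy: bool              # takes any stance on availability of underlying data
    public_deposition: Optional[str]   # None, "encouraged", or "required" (specific data types only)
    scientific_sharing: Optional[str]  # None, "encouraged", or "required"
    location_suggested: bool           # names individual repositories/services, not just "public repositories"
    immediate_release: bool            # policy addresses timing of data publication

example = JournalDataPolicy(
    journal="Example Journal of Neuroscience",   # hypothetical journal
    field="neurosciences",
    has_data_policy=True,
    public_deposition="required",
    scientific_sharing="encouraged",
    location_suggested=True,
    immediate_release=False,
)
```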
This dataset was created by Mohamed_Yousef95
This dataset was created by tornikeo
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Abstract: Granting agencies invest millions of dollars in the generation and analysis of data, making these products extremely valuable. However, without sufficient annotation of the methods used to collect and analyze the data, the ability to reproduce and reuse those products suffers. This lack of assurance of the quality and credibility of the data at the different stages of the research process essentially wastes much of the investment of time and funding, and fails to drive research forward to the level that would be possible if everything were effectively annotated and disseminated to the wider research community. In order to address this issue for the Hawai‘i Established Program to Stimulate Competitive Research (EPSCoR) project, a water science gateway was developed at the University of Hawai‘i (UH), called the ‘Ike Wai Gateway. In Hawaiian, ‘Ike means knowledge and Wai means water. The gateway supports research in hydrology and water management by providing tools to address questions of water sustainability in Hawai‘i. The gateway provides a framework for data acquisition, analysis, model integration, and display of data products. It is intended to complement and integrate with the capabilities of the Consortium of Universities for the Advancement of Hydrologic Science’s (CUAHSI) HydroShare by providing sound data and metadata management capabilities for multi-domain field observations, analytical lab actions, and modeling outputs. Functionality provided by the gateway is supported by a subset of CUAHSI’s Observations Data Model (ODM), delivered as centralized web-based user interfaces and APIs supporting multi-domain data management, computation, analysis, and visualization tools to support reproducible science, modeling, data discovery, and decision support for the Hawai‘i EPSCoR ‘Ike Wai research team and the wider Hawai‘i hydrology community. By leveraging the Tapis platform, UH has constructed a gateway that ties data and advanced computing resources together to support diverse research domains including microbiology, geochemistry, geophysics, economics, and humanities, coupled with computational and modeling workflows delivered in a user-friendly web interface with workflows for effectively annotating the project data and products. Disseminating results for the ‘Ike Wai project through the ‘Ike Wai data gateway and HydroShare makes the research products accessible and reusable.
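As a rough illustration of the kind of programmatic access a gateway like this exposes through its APIs, the sketch below queries a metadata endpoint over HTTP. The URL, parameters, and response fields are placeholders for illustration only, not the actual 'Ike Wai Gateway or Tapis interfaces.

```python
# Hypothetical example of pulling observation metadata from a science-gateway REST API.
# The endpoint, query parameters, and response fields below are placeholders,
# not the real 'Ike Wai Gateway or Tapis API.
import requests

BASE_URL = "https://gateway.example.edu/api"   # placeholder URL

resp = requests.get(
    f"{BASE_URL}/observations",
    params={"site": "example-well-01", "variable": "specific_conductance"},  # assumed parameters
    timeout=30,
)
resp.raise_for_status()

for record in resp.json().get("results", []):
    print(record.get("timestamp"), record.get("value"), record.get("unit"))
```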
This FAIRsharing record describes: Data policy and author guidance for Discover Chemical Engineering by Springer Nature. Discover Chemical Engineering is part of the Discover journal series committed to providing a streamlined submission process, rapid review and publication, and a high level of author service at every stage. It is an open access, community-focused journal publishing research from across all fields relevant to chemical engineering. This journal recommends 1) the resources listed directly within this record, and 2) all resources listed in this record's parent policies (see the "extends" relationship).
Arabian Journal for Science and Engineering Acceptance Rate - ResearchHelpDesk. The Arabian Journal for Science and Engineering (AJSE) is a peer-reviewed journal owned by King Fahd University of Petroleum and Minerals and published by Springer. AJSE publishes twelve issues of rigorous and original contributions in the Science disciplines of Biological Sciences, Chemistry, Earth Sciences, and Physics, and in the Engineering disciplines of Chemical, Civil, Computer Science and Engineering, Electrical, Mechanical, Petroleum, and Systems Engineering. Manuscripts must be submitted in English, and authors must ensure that the article has not been published or submitted for publication elsewhere in any format and that there are no ethical concerns with the contents or data collection. The authors warrant that the information submitted is not redundant and respects general guidelines of ethics in publishing. All papers are evaluated by at least two international referees, who are known scholars in their fields.

About KFUPM: King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia is a leading educational organization for science and technology. The vast petroleum and mineral resources of the Kingdom pose a complex and exciting challenge for scientific, technical, and management education. To meet this challenge, the University has adopted advanced training in the fields of science, engineering, and management as one of its goals in order to promote leadership and service in the Kingdom’s petroleum and mineral industries. The University also furthers knowledge through research in these fields. In addition, because it derives a distinctive character from its being a technological university in the land of Islam, the University is unreservedly committed to deepening and broadening the faith of its Muslim students and to instilling in them an appreciation of the major contributions of their people to the world of mathematics and science. All areas of KFUPM - facilities, faculty, students, and programs - are directed to the attainment of these goals.

About AJSE: KFUPM partnered with Springer to publish the Arabian Journal for Science and Engineering (AJSE). AJSE, which has been published by KFUPM since 1975, is a recognized national, regional and international journal that provides a great opportunity for the dissemination of research advances from the Kingdom of Saudi Arabia, MENA and the world. AJSE publishes twelve issues in both the Engineering (AJSE-Engineering) and Science (AJSE-Science) sections, and the publication of thematic/special issues on specific topics is also considered.

AJSE-Engineering is a section of AJSE. It publishes original contributions and refereed research papers in the disciplines of Civil, Chemical, Electrical, Mechanical and Petroleum Engineering, Computer Science and Engineering, and Systems Engineering, including full-length original articles, review articles on specialized topics, technical notes, and technical reports.

AJSE-Science is a section of AJSE. It publishes original contributions and refereed research papers in the disciplines of Chemistry, Earth Sciences, Physics, and now also Biological Sciences, including full-length original articles, review articles on specialized topics, technical notes, and technical reports.

Abstracted/Indexed in: Academic Search, CSA/Proquest, Current Abstracts, Current Contents/Engineering, Computing and Technology, Current Index to Statistics, EBSCO, Google Scholar, INIS Atomindex, OCLC, Science Citation Index Expanded (SciSearch), SCOPUS, Summon by Serial Solutions, Zentralblatt Math.

RG Journal Impact: 0.93 (calculated using ResearchGate data and based on average citation counts from work published in this journal; the data used in the calculation may not be exhaustive).
RG Journal impact history: 2020 - available summer 2021; 2018/2019 - 0.93; 2017 - 1.12; 2016 - 0.99; 2015 - 1.04; 2014 - 1.17; 2013 - 0.63; 2012 - 0.55; 2011 - 0.58; 2010 - 0.36; 2009 - 0.37; 2008 - 0.15; 2007 - 0.16; 2006 - 0.12; 2005 - 0.25; 2004 - 0.12; 2003 - 0.20; 2002 - 0.10; 2001 - 0.14; 2000 - 0.06.

Additional details: Cited half-life 4.50; Immediacy index 0.09; Eigenfactor 0.00; Article influence 0.14. Website: http://www.kfupm.edu.sa/publications/ajse/ (Arabian Journal for Science and Engineering website). Other titles: Arabian Journal for Science and Engineering (online), AJSE. ISSN 1319-8025. OCLC 264802239. Material type: Periodical, Internet resource. Document type: Internet Resource, Journal / Magazine / Newspaper.
This dataset contains supplementary information for a manuscript describing the ESS-DIVE (Environmental Systems Science Data Infrastructure for a Virtual Ecosystem) data repository's community data and metadata reporting formats. The purpose of creating the ESS-DIVE reporting formats was to provide guidelines for formatting some of the diverse data types that can be found in the ESS-DIVE repository. The 6 teams of community partners who developed the reporting formats included scientists and engineers from across the Department of Energy National Lab network. Additionally, during the development process, 247 individuals representing 128 institutions provided input on the formats. The primary files in this dataset are 10 data and metadata crosswalks for ESS-DIVE’s reporting formats (all files ending in _crosswalk.csv). The crosswalks compare elements used in each of the reporting formats to other related standards and data resources (e.g., repositories, datasets, data systems). This dataset also contains additional files recommended by ESS-DIVE’s file-level metadata reporting format. Each data file has an associated data dictionary (files ending in _dd.csv), which provides a brief description of each standard or data resource consulted in the reporting format development process. The flmd.csv file describes each file contained within the dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data for numbers of new submissions to arXiv (http://arxiv.org/) broken down by subject area and by year. See ASCII data file for more details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT This paper analyzes the indexing policies of Brazilian journals on Information Science. It considers the scarce treatment of the subject in the context of scientific communication, as well as the pragmatic need to systematize the assignment of keywords by the author of the publication. It aims to analyze the online guidelines for the assignment of keywords to articles during the submission process. It is a descriptive study that follows a qualitative and quantitative methodology. It can be characterized as documentary research, as the data were collected from the publication policies and guidelines for authors made available by the journals. We also conducted a content analysis to systematize the collected data. The results reveal the existence of guidelines related to the number of terms, mostly connected to selection in indexing. This was not the case for specifications of the depth of terms or of the indexing language, despite references to the latter in a total of five journals that use a controlled language. We conclude that Brazilian journals of Information Science need to pay greater attention to the implementation of indexing policies in order to give authors greater assurance, especially when assigning keywords.
The ESS-DIVE sample identifiers and metadata reporting format primarily follows the System for Earth Sample Registration (SESAR) Global Sample Number (IGSN) guide and template, with modifications to address Environmental Systems Science (ESS) sample needs and practicalities (IGSN-ESS). IGSNs are associated with standardized metadata to characterize a variety of different sample types (e.g. object type, material) and describe sample collection details (e.g. latitude, longitude, environmental context, date, collection method). Globally unique sample identifiers, particularly IGSNs, facilitate sample discovery, tracking, and reuse; they are especially useful when sample data is shared with collaborators, sent to different laboratories or user facilities for analyses, or distributed in different data files, datasets, and/or publications. To develop recommendations for multidisciplinary ecosystem and environmental sciences, we first conducted research on related sample standards and templates. We provide a comparison of existing sample reporting conventions, which includes mapping metadata elements across existing standards and Environment Ontology (ENVO) terms for sample object types and environmental materials. We worked with eight U.S. Department of Energy (DOE) funded projects, including those from Terrestrial Ecosystem Science and Subsurface Biogeochemical Research Scientific Focus Areas. Project scientists tested the process of registering samples for IGSNs and associated metadata in workflows for multidisciplinary ecosystem sciences. We provide modified IGSN metadata guidelines to account for needs of a variety of related biological and environmental samples. While generally following the IGSN core descriptive metadata schema, we provide recommendations for extending sample type terms, and connecting to related templates geared towards biodiversity (Darwin Core) and genomic (Minimum Information about any Sequence, MIxS) samples and specimens. ESS-DIVE recommends registering samples for IGSNs through SESAR, and we include instructions for registration using the IGSN-ESS guidelines. Our resulting sample reporting guidelines, template (IGSN-ESS), and identifier approach can be used by any researcher with sample data for ecosystem sciences.
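As a rough illustration of the kind of sample metadata the IGSN-ESS guidance describes (object type, material, location, environmental context, collection details), the sketch below builds and checks a single sample record. The field names and the validation helper are assumptions for illustration, not the official SESAR/IGSN-ESS template.

```python
# Illustrative sample metadata record loosely following the IGSN-ESS guidance described above.
# Field names and the validation rule are assumptions for this sketch, not the official template.
sample_metadata = {
    "sample_name": "WATERSHED1-SOIL-0001",        # hypothetical sample identifier
    "object_type": "Core",                        # sample object type
    "material": "Soil",                           # environmental material
    "latitude": 19.8968,
    "longitude": -155.5828,
    "environmental_context": "tropical moist forest",
    "collection_date": "2021-06-15",
    "collection_method": "push core, 0-10 cm depth",
    "igsn": None,                                 # filled in after registration through SESAR
}

def missing_required_fields(record: dict) -> list:
    """Return required fields that are missing or empty (simple illustrative check)."""
    required = ["sample_name", "object_type", "material",
                "latitude", "longitude", "collection_date"]
    return [field for field in required if record.get(field) in (None, "")]

print(missing_required_fields(sample_metadata))  # [] means the minimal fields above are present
```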
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
GenBank data submission network R data frames by year from 1992-2018.
Volunteers are increasingly being recruited into citizen science projects to collect observations for scientific studies. An additional goal of these projects is to engage and educate these volunteers. Thus, there are few barriers to participation, resulting in volunteer observers with varying ability to complete the project’s tasks. To improve the quality of a citizen science project’s outcomes it would be useful to account for inter-observer variation, and to assess the rarely tested presumption that participating in a citizen science project results in volunteers becoming better observers. Here we present a method for indexing observer variability based on the data routinely submitted by observers participating in the citizen science project eBird, a broad-scale monitoring project in which observers collect and submit lists of the bird species observed while birding. Our method for indexing observer variability uses species accumulation curves, lines that describe how the total number of species reported increases with increasing time spent collecting observations. We find that differences in species accumulation curves among observers correspond to higher rates of species accumulation, particularly for harder-to-identify species, and reveal increased species accumulation rates with continued participation. We suggest that these properties of our analysis provide a measure of observer skill, and that the potential to derive post-hoc, data-derived measurements of participant ability should be more widely explored by analysts of data from citizen science projects. We see the potential for inferential results from analyses of citizen science data to be improved by accounting for observer skill.
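A minimal sketch of the basic quantity the method relies on: a per-observer species accumulation curve, i.e. cumulative unique species against cumulative observation effort. The toy checklist data and column names are assumptions for illustration; eBird's actual data schema and the paper's analysis are more involved.

```python
# Sketch: compute a per-observer species accumulation curve (cumulative number of unique
# species vs. cumulative hours of effort). Column names and data are illustrative assumptions.
import pandas as pd

checklists = pd.DataFrame({
    "observer_id": ["obs1"] * 5,
    "checklist_id": [1, 2, 3, 4, 5],
    "duration_hrs": [0.5, 1.0, 0.75, 1.5, 1.0],
    "species_list": [
        {"Northern Cardinal", "Blue Jay"},
        {"Blue Jay", "American Robin", "Song Sparrow"},
        {"Northern Cardinal", "Song Sparrow"},
        {"American Robin", "Red-tailed Hawk", "Downy Woodpecker"},
        {"Blue Jay", "Downy Woodpecker", "Carolina Wren"},
    ],
})

def accumulation_curve(df: pd.DataFrame) -> pd.DataFrame:
    """Cumulative species richness as a function of cumulative effort for one observer."""
    seen = set()
    effort = 0.0
    rows = []
    for _, row in df.sort_values("checklist_id").iterrows():
        effort += row["duration_hrs"]
        seen |= row["species_list"]
        rows.append({"cumulative_hours": effort, "cumulative_species": len(seen)})
    return pd.DataFrame(rows)

print(accumulation_curve(checklists))
```

Comparing how quickly these curves rise across observers is the intuition behind using accumulation rates as a post-hoc index of observer skill.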
Detailed description in README file. These are the R code and data files used to produce the submission.
This dataset was created by Aminat Adebiyi
GCMS Validation Data
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data and gnuplot file to generate a graph of submissions by category within computer science, by year. The data are whitespace-separated; for each category there is the count of submissions and then the fraction of all cs submissions that this represents for the year.
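A minimal sketch of reading such a whitespace-separated file with pandas; the file name and column order (year, category, count, fraction) are assumptions based on the description above, not a documented schema.

```python
# Sketch: read a whitespace-separated file of arXiv cs submissions by category and year.
# The file name and column order are assumptions based on the dataset description.
import pandas as pd

subs = pd.read_csv(
    "cs_submissions.dat",
    sep=r"\s+",
    names=["year", "category", "count", "fraction_of_cs"],
    comment="#",
)

# Total cs submissions per year, and the largest category in each year.
totals = subs.groupby("year")["count"].sum()
top_per_year = subs.loc[subs.groupby("year")["count"].idxmax(), ["year", "category", "count"]]
print(totals.tail())
print(top_per_year.tail())
```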
ESS-DIVE’s (Environmental Systems Science Data Infrastructure for a Virtual Ecosystem) dataset metadata reporting format is intended to compile information about a dataset (e.g., title, description, funding sources) that can enable reuse of data submitted to the ESS-DIVE data repository. The files contained in this dataset include instructions (dataset_metadata_guide.md and README.md) that can be used to understand the types of metadata ESS-DIVE collects. The data dictionary (dd.csv) follows ESS-DIVE’s file-level metadata reporting format and includes brief descriptions of each element of the dataset metadata reporting format. This dataset also includes a terminology crosswalk (dataset_metadata_crosswalk.csv) that shows how ESS-DIVE’s metadata reporting format maps onto other existing metadata standards and reporting formats. Data contributors to ESS-DIVE can provide this metadata by manual entry using a web form or programmatically via ESS-DIVE’s API (Application Programming Interface). A metadata template (dataset_metadata_template.docx or dataset_metadata_template.pdf) can be used to collaboratively compile metadata before providing it to ESS-DIVE. Since being incorporated into ESS-DIVE’s data submission user interface, ESS-DIVE’s dataset metadata reporting format has enabled features like automated metadata quality checks and dissemination of ESS-DIVE datasets onto other data platforms, including Google Dataset Search and DataCite.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created by Mohammed Zakir Bhuiyan
Released under CC0: Public Domain
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Researchers seeking to share their data with coordinating centers such as the National Database for Autism Research (NDAR) face numerous barriers to establishing new connections and maintaining existing ones. We sought to dramatically reduce the time and money required to establish and maintain the interoperability of data between research centers by establishing a process in which manual recoding of data is replaced by data sharing instructions in the form of extraction and transformation scripts. Over the course of seven typical (20-60 subjects, 400-1000 fields each) data submissions to NDAR, the need for duplication, retranscription, or restructuring of the source data was fully eliminated. Separating the extraction and transformation scripts from data files also eradicated the impact of additional data collection on the time required to repeat successful transmissions. Revision-controlled management of these scripts also provided a new benefit: traceability of the transformation process itself. Now, point-in-time retrieval of extraction scripts and explanations for modifications to the data sharing interface are possible. This method has proven to be successful and efficient for interfacing research data with NDAR. It presents little to no impact on transmitting investigators’ data, ensures high data integrity, trivializes the complexities of repeatedly modifying a growing dataset over time, and introduces traceability to the collaborative process of integrating two collections of data with one another.
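A minimal sketch of the kind of extraction-and-transformation script that replaces manual recoding in this approach: read the source export, rename and recode columns to the target structure, and write a submission file. The column names, value recodings, and file names are hypothetical illustrations; NDAR's actual data-dictionary templates define the real target structure.

```python
# Sketch of an extraction/transformation script: map a lab's source export onto a
# target submission structure. All column names, value recodings, and file names
# here are hypothetical illustrations, not NDAR's actual data dictionaries.
import pandas as pd

COLUMN_MAP = {               # source column -> target column (assumed)
    "participant_id": "src_subject_id",
    "visit_date": "interview_date",
    "sex_code": "sex",
}
SEX_RECODE = {1: "M", 2: "F"}   # assumed source coding

def transform(source_csv: str, target_csv: str) -> None:
    """Extract the mapped columns, recode values, and write the submission file."""
    src = pd.read_csv(source_csv)
    out = src[list(COLUMN_MAP)].rename(columns=COLUMN_MAP)
    out["sex"] = out["sex"].map(SEX_RECODE)
    out.to_csv(target_csv, index=False)

# Re-running the same version-controlled script on a grown source file repeats the
# transformation without any retranscription of previously submitted rows.
transform("lab_export.csv", "ndar_submission.csv")
```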