Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These data were generated for an investigation of research data repository (RDR) mentions in biomedical research articles.
Supplementary Table 1 is a discrete subset of SciCrunch RDRs used to study RDR mentions in biomedical literature. We generated this list by starting with the top 1,000 entries in the SciCrunch database, measured by citations; removing entries for organizations (such as universities without a corresponding RDR) and non-relevant tools (such as reference managers); updating links; and consolidating duplicates resulting from RDR mergers and name variations. The resulting list contains 737 RDRs, based on the source list of RDRs in the SciCrunch database. The file includes the Research Resource Identifier (RRID), the RDR name, and a link to the RDR record in the SciCrunch database.
Supplementary Table 2 shows the RDRs, associated journals, and article-mention pairs (records) with text snippets extracted from mined Methods text in 2020 PubMed articles. The dataset has 4 components. The first shows the list of repositories with RDR mentions, and includes the Research Resource Identifier (RRID), the RDR name, the number of articles that mention the RDR, and a link to the record in the SciCrunch database. The second shows the list of journals in the study set with at least one RDR mention, and includes the Journal ID, name, ESSN/ISSN, the total count of publications in 2020, the number of articles that had text available to mine, the number of article-mention pairs (records), the number of articles with RDR mentions, the number of unique RDRs mentioned, and the percentage of articles with minable text. The third shows the top 200 journals by RDR mention, normalized by the proportion of articles with available text to mine, with the same metadata as the second table. The fourth shows text snippets for each RDR mention, and includes the RRID, RDR name, PubMed ID (PMID), DOI, article publication date, journal name, journal ID, ESSN/ISSN, article title, and snippet.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains article metadata and information about Open Science Indicators for approximately 139,000 research articles published in PLOS journals from 1 January 2018 to 30 March 2025, and a set of approximately 28,000 comparator articles published in non-PLOS journals. This is the tenth release of this dataset, which will be updated with new versions on an annual basis. This version of the Open Science Indicators dataset shares the indicators seen in the previous versions as well as fully operationalised protocols and study registration indicators, which were previously only shared in preliminary forms. The v10 dataset focuses on detection of five Open Science practices by analysing the XML of published research articles:
- Sharing of research data, in particular data shared in data repositories
- Sharing of code
- Posting of preprints
- Sharing of protocols
- Sharing of study registrations
The dataset provides data and code generation and sharing rates, and the location of shared data and code (whether in Supporting Information or in an online repository). It also provides preprint, protocol and study registration sharing rates, as well as details of the shared output, such as publication date, URL/DOI/registration identifier and platform used. Additional data fields are also provided for each article analysed. This release has been run using an updated preprint detection method (see OSI-Methods-Statement_v10_Jul25.pdf for details). Further information on the methods used to collect and analyse the data can be found in Documentation. Further information on the principles and requirements for developing Open Science Indicators is available in https://doi.org/10.6084/m9.figshare.21640889.
Data folders/files
Data Files folder: This folder contains the main OSI dataset files PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv, which contain descriptive metadata (e.g. article title, publication date, author countries) taken from the article .xml files, plus additional information around the Open Science Indicators derived algorithmically. The OSI-Summary-statistics_v10_Jul25.xlsx file contains the summary data for both PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv.
Documentation folder: This folder contains documentation related to the main data files. The file OSI-Methods-Statement_v10_Jul25.pdf describes the methods underlying the data collection and analysis. OSI-Column-Descriptions_v10_Jul25.pdf describes the fields used in PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv. OSI-Repository-List_v1_Dec22.xlsx lists the repositories and their characteristics used to identify specific repositories in the PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv repository fields. The folder also contains documentation originally shared alongside the preliminary versions of the protocols and study registration indicators, to give fuller details of their detection methods.
Contact details for further information:
Iain Hrynaszkiewicz, Director, Open Research Solutions, PLOS, ihrynaszkiewicz@plos.org / plos@plos.org
Lauren Cadwallader, Open Research Manager, PLOS, lcadwallader@plos.org / plos@plos.org
Acknowledgements: Thanks to Allegra Pearce, Tim Vines, Asura Enkhbayar, Scott Kerr and Parth Sarin of DataSeer for contributing to data acquisition and supporting information.
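As a rough illustration of how per-article indicator fields might be aggregated into sharing rates, the sketch below computes a repository-sharing rate from OSI-style rows. The column names and values here are illustrative assumptions only; the actual field names are defined in OSI-Column-Descriptions_v10_Jul25.pdf.

```python
import csv
import io

# Hypothetical OSI-style rows; field names are assumptions for illustration.
sample = """doi,data_generated,data_shared_in_repository
10.1371/journal.pone.0000001,yes,yes
10.1371/journal.pone.0000002,yes,no
10.1371/journal.pone.0000003,no,no
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Restrict to articles that generated data, then measure repository sharing.
generated = [r for r in rows if r["data_generated"] == "yes"]
shared = [r for r in generated if r["data_shared_in_repository"] == "yes"]
rate = len(shared) / len(generated)
print(f"Repository sharing rate among data-generating articles: {rate:.0%}")  # 50%
```

The denominator choice (data-generating articles rather than all articles) is the kind of decision the methods statement documents; the real dataset's summary rates should be taken from OSI-Summary-statistics_v10_Jul25.xlsx.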
Repository to make datasets resulting from NIH-funded research more accessible, citable, shareable, and discoverable. Submitted data are reviewed to ensure there is no personally identifiable information in the data or metadata prior to publication, in line with the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. Data published on Figshare are assigned a persistent, citable DOI (Digital Object Identifier) and are discoverable in Google, Google Scholar, Google Dataset Search, and more. Completed in July 2020. Researchers can continue to share NIH-funded data and other research products on figshare.com.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spreadsheet listing data repositories that are recommended by Scientific Data (Springer Nature) as suitable for hosting data associated with peer-reviewed articles. Please see the repository list on Scientific Data's website for the most up-to-date list.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
My blog post, written in collaboration with the Research Repository Librarian at Monash University Library. The goal is to promote the use of monash.figshare among graduate research students as a way to share and promote their research.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
R markdown files for:
Free text fields are included in the markdown but have been turned off for knitting and in the HTML file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In May-June 2020 PLOS surveyed researchers from Europe and North America to rate tasks associated with data sharing on (i) their importance to researchers and (ii) researchers' satisfaction with their ability to complete those tasks. Researchers were recruited via direct email campaigns, promoted Facebook and Twitter posts, a post on the PLOS Blog, and emails to industry contacts who distributed the survey on our behalf. Participation was incentivized with 3 random prize draws, which were managed separately to maintain anonymity.
This dataset consists of:
1) The survey sent to researchers (pdf).
2) The anonymised data export of survey results (xlsx).
The data export has been processed to retain the anonymity of participants. The comments left in the final question of the survey (question 17) have been removed. Answers to questions 12 to 16 have been recoded to give each answer a numerical value (see the 'Scores' tab of the spreadsheet). The counts, means, standard deviations and confidence intervals used in the associated manuscript for each factor are given in rows 619-622.
Version 2 contains only the completed responses. Completed responses in the version 2 dataset refer to those who answered all the questions in the survey. The version 1 dataset contains a higher number of responses categorised as 'completed', but this has been reviewed for version 2. Version 1 data was used for the preprint: https://doi.org/10.31219/osf.io/njr5u.
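The recoding step described above can be sketched as follows. This is a minimal illustration only: the answer labels, score mapping, and sample responses are assumptions, not the actual mapping on the published 'Scores' tab, and the confidence interval uses a simple normal approximation.

```python
import statistics as st

# Hypothetical Likert-to-score mapping (illustrative; the real mapping is
# on the 'Scores' tab of the published spreadsheet).
scores = {"Very dissatisfied": 1, "Dissatisfied": 2, "Neutral": 3,
          "Satisfied": 4, "Very satisfied": 5}

# Hypothetical answers to one satisfaction question (questions 12-16).
answers = ["Satisfied", "Neutral", "Very satisfied", "Satisfied", "Dissatisfied"]
values = [scores[a] for a in answers]

mean = st.mean(values)   # 3.6
sd = st.stdev(values)    # sample standard deviation
# Normal-approximation 95% confidence interval around the mean.
half_width = 1.96 * sd / len(values) ** 0.5
print(f"mean={mean:.2f}, 95% CI=({mean - half_width:.2f}, {mean + half_width:.2f})")
```

Applied per question across all completed responses, this reproduces the kind of counts, means, standard deviations and confidence intervals reported in rows 619-622 of the data export.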
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Qualitative research among rare disease patients and advocates about their views on the international sharing of data and biospecimens for research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data is a subset of a larger survey of health and medical researchers about their data management practices. The subset relates to data sharing behaviours and is being used as an evidence base to design strategies and interventions to address targeted behaviours.
Repository for all data, figures, theses, publications, posters, presentations, filesets, videos, datasets, and negative data in a citable, shareable and discoverable manner with Digital Object Identifiers. Allows users to upload any file format to be visualised in the browser, so that figures, datasets, media, papers, posters, presentations and filesets can be disseminated in a way that the current scholarly publishing model does not allow. Features integration with ORCID and Symplectic Elements, can import items from GitHub, and is a source tracked by Altmetric.com. Figshare gives users unlimited public space and 1 GB of private storage space for free. Data are digitally preserved by CLOCKSS. Supported by Digital Science, a division of Macmillan Publishers Limited, as a community-based, open science project that retains its autonomy.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Survey period: 8 April - 8 May 2014. Top 10 Impact Factor journals in each of 22 categories.
Figures https://doi.org/10.6084/m9.figshare.6857273.v1
Article https://doi.org/10.20651/jslis.62.1_20 https://doi.org/10.15068/00158168
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These are the slides and speaking manuscript of a presentation given at the event Figshare Fest in Amsterdam, the Netherlands, on 28 June 2017. The presentation shows a case study of organizing research data management in a university from both the management and user perspectives. It describes how the Stockholm University Library team was, as of June 2017, working to manage research data at Stockholm University across different administrative departments. The presentation also includes an overview of the current research data management activities at Stockholm University, with a deeper insight into work on a knowledge hub for research data management based on user feedback.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To inform current debate around climate change education (CCE) in the school curriculum in England, we surveyed the views of primary and secondary teachers (N=626).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Over the last decade, there have been significant changes in data sharing policies and in the data sharing environment faced by life science researchers. Using data from a 2013 survey of over 1600 life science researchers, we analyze the effects of sharing policies of funding agencies and journals. We also examine the effects of new sharing infrastructure and tools (i.e., third party repositories and online supplements). We find that recently enacted data sharing policies and new sharing infrastructure and tools have had a sizable effect on encouraging data sharing. In particular, third party repositories and online supplements as well as data sharing requirements of funding agencies, particularly the NIH and the National Human Genome Research Institute, were perceived by scientists to have had a large effect on facilitating data sharing. In addition, we found a high degree of compliance with these new policies, although noncompliance resulted in few formal or informal sanctions. Despite the overall effectiveness of data sharing policies, some significant gaps remain: about one third of grant reviewers placed no weight on data sharing plans in their reviews, and a similar percentage ignored the requirements of material transfer agreements. These patterns suggest that although most of these new policies have been effective, there is still room for policy improvement.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The primary data collection element of this project related to observation-based fieldwork at four universities in Kenya and South Africa undertaken by Louise Bezuidenhout (hereafter ‘LB’) as the award researcher. The award team selected fieldsites through a series of strategic decisions. First, it was decided that all fieldsites would be in Africa, as this continent is largely missing from discussions about Open Science. Second, two countries were selected – one in southern Africa (South Africa) and one in eastern Africa (Kenya) – based on the existence of robust national research programs in these countries compared to elsewhere on the continent. As country background, Kenya has 22 public universities, many of which conduct research. It also has a robust history of international research collaboration – a prime example being the long-standing KEMRI-Wellcome Trust partnership. While the government encourages research, financial support for it remains limited and the focus of national universities is primarily on undergraduate teaching. South Africa has 25 public universities, all of which conduct research. As a country, South Africa has a long history of academic research, one which continues to be actively supported by the government.
Third, in order to speak to conditions of research in Africa, we sought examples of vibrant, “homegrown” research. While some of the researchers at the sites visited collaborated with others in Europe and North America, by design none of the fieldsites were formally affiliated to large internationally funded research consortia or networks. Fourth, within these two countries four departments or research groups in academic institutions were selected for inclusion based on their common discipline (chemistry/biochemistry) and research interests (medicinal chemistry). These decisions were to ensure that the differences in data sharing practices and perceptions between disciplines noted in previous studies would be minimized.
Within Kenya, Site 1 (KY1) and Site 2 (KY2) were both chemistry departments of well-established universities. Both departments had over 15 full-time faculty members; however, student-to-faculty ratios were high and the teaching loads considerable. KY1 had a large number of MSc and PhD candidates, the majority of whom were full-time and a number of whom had financial assistance. In contrast, KY2 had a very high number of MSc students, the majority of whom were self-funded and part-time (and thus conducted their laboratory work during holidays). In both departments, space in laboratories was at a premium and students shared space and equipment. Neither department had any postdoctoral researchers.
Within South Africa, site 1 (SA1) was a research group within the large chemistry department of a well-established and comparatively well-resourced university with a tradition of research. Site 2 (SA2) was the chemistry/biochemistry department of a university that had previously been designated a university for marginalized population groups under the Apartheid system. Both sites were the recipients of numerous national and international grants. SA2 had one postdoctoral researcher at the time, while SA1 had none.
Empirical data was gathered using a combination of qualitative methods including embedded laboratory observations and semi-structured interviews. Each site visit took between three and six weeks, during which time LB participated in departmental activities, interviewed faculty and postgraduate students, and observed social and physical working environments in the departments and laboratories. Data collection was undertaken over a period of five months between November 2014 and March 2015, with 56 semi-structured interviews in total conducted with faculty and graduate students. Follow-on visits to each site were made in late 2015 by LB and Brian Rappert to solicit feedback on our analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data sharing is crucial to the advancement of science because it facilitates collaboration, transparency, reproducibility, criticism, and re-analysis. Publishers are well-positioned to promote sharing of research data by implementing data sharing policies. While there is an increasing trend toward requiring data sharing, not all journals mandate that data be shared at the time of publication. In this study, we extended previous work to analyze the data sharing policies of 447 journals across several scientific disciplines, including biology, clinical sciences, mathematics, physics, and social sciences. Our results showed that only a small percentage of journals require data sharing as a condition of publication, and that this varies across disciplines and Impact Factors. Both Impact Factors and discipline are associated with the presence of a data sharing policy. Our results suggest that journals with higher Impact Factors are more likely to have data sharing policies; use shared data in peer review; require deposit of specific data types into publicly available data banks; and refer to reproducibility as a rationale for sharing data. Biological science journals are more likely than social science and mathematics journals to require data sharing.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a hands-on workshop on the management of qualitative social science data, with a focus on data sharing and transparency. While the workshop addresses data management throughout the lifecycle – from data management plan to data sharing – its focus is on the particular challenges in sharing qualitative data and in making qualitative research transparent. One set of challenges concerns the ethical and legal concerns in sharing qualitative data. We will consider obtaining permissions for sharing qualitative data from human participants, strategies for (and limits of) de-identifying qualitative data, and options for restricting access to sensitive qualitative data. We will also briefly look at copyright and licensing and how they can inhibit the public sharing of qualitative data.
A second set of challenges concerns the lack of standardized guidelines for making qualitative research processes transparent. Following on some of the themes touched on in the talk, we will jointly explore some cutting-edge approaches for making qualitative research transparent and discuss their potential as well as shortcomings for different forms of research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The validation of scientific results requires reproducible methods and data. Often, however, data sets supporting research articles are not openly accessible and interlinked. This analysis tests whether open sharing and linking of supporting data through the PANGAEA® data library measurably increases the citation rate of articles published between 1993 and 2010 in the journal Paleoceanography as reported in the Thomson Reuters Web of Science database. The 12.85% (171) of articles with publicly available supporting data sets received 19.94% (8,056) of the aggregate citations (40,409). Publicly available data were thus significantly (p=0.007, 95% confidence interval) associated with about 35% more citations per article than the average of all articles sampled over the 18-year study period (1,331), and the increase is fairly consistent over time (14 of 18 years). This relationship between openly available, curated data and increased citation rate may incentivize researchers to share their data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains Data Availability Statements from 47,593 papers published in PLOS ONE between March 2014 (when the policy went into effect) and May 2016, analyzed for type of statement.
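Typing Data Availability Statements could be approached with simple keyword rules along the lines below. This is a hypothetical sketch only: the categories, keywords, and precedence order are illustrative assumptions, not the classification scheme actually used in the study.

```python
import re

# Hypothetical statement types with illustrative keyword rules;
# first matching rule wins (precedence order is an assumption).
RULES = [
    ("repository", re.compile(r"\b(deposited|repository|accession|doi)\b", re.I)),
    ("in_paper_or_si", re.compile(r"(within the paper|supporting information)", re.I)),
    ("on_request", re.compile(r"\b(upon|on) request\b", re.I)),
]

def classify(statement: str) -> str:
    """Return the first matching statement type, or 'other'."""
    for label, pattern in RULES:
        if pattern.search(statement):
            return label
    return "other"

print(classify("All files are available from the Dryad repository."))  # repository
print(classify("Data are available upon request."))                    # on_request
```

In practice, such rules would need manual validation against a coded sample, since statements often mix types (e.g. some data in Supporting Information, the rest in a repository).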
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data from the article entitled "Data sharing policies of journals in life, health, and physical sciences indexed in Journal Citation Reports"