89 datasets found
  1. Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Sep 6, 2020
    Cite
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz (2020). Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020 [Dataset]. http://doi.org/10.5061/dryad.jdfn2z38f
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 6, 2020
    Dataset provided by
    Yale School of Public Health
    Yale University
    Yale New Haven Hospital
    Authors
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: To determine the top 100-ranked (by impact factor) clinical journals' policies toward publishing research previously published on preprint servers (preprints).

    Design: Cross-sectional. Main outcome measures: editorial guidelines toward preprints; journal rank by impact factor.

    Results: 86 (86%) of the journals examined will consider papers previously posted as preprints, 13 (13%) decide on a case-by-case basis, and 1 (1%) does not allow preprints.

    Conclusions: We found wide acceptance of publishing preprints in the clinical research community, although researchers may still face uncertainty that their preprints will be accepted by all of their target journals.

    Methods

    We examined journal policies of the 100 top-ranked clinical journals using the 2018 impact factors as reported by InCites Journal Citation Reports (JCR). First, we examined all journals with an impact factor greater than 5, and then we manually screened by title and category to identify the first 100 clinical journals. We included only those that publish original research. Next, we checked each journal's editorial policy on preprints. We examined, in order, the journal website, the publisher website, the Transpose Database, and the first 10 pages of a Google search with the journal name and the term "preprint." We classified each journal's policy, as shown in this dataset, as allowing preprints, deciding on a case-by-case basis, or not allowing any preprints. We collected data on April 23, 2020.

    (Full methods can also be found in the previously published paper.)

  2. DataSheet1_The top 100 cited studies on bacterial persisters: A bibliometric...

    • frontiersin.figshare.com
    docx
    Updated Jun 16, 2023
    Cite
    Yuan Ju; Haiyue Long; Ping Zhao; Ping Xu; Luwei Sun; Yongqing Bao; Pingjing Yu; Yu Zhang (2023). DataSheet1_The top 100 cited studies on bacterial persisters: A bibliometric analysis.docx [Dataset]. http://doi.org/10.3389/fphar.2022.1001861.s001
    Explore at:
    docx (available download formats)
    Dataset updated
    Jun 16, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Yuan Ju; Haiyue Long; Ping Zhao; Ping Xu; Luwei Sun; Yongqing Bao; Pingjing Yu; Yu Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Bacterial persisters are thought to be responsible for the recalcitrance and relapse of persistent infections, and they also lead to antibiotic treatment failure in clinics. In recent years, research on bacterial persisters has attracted worldwide attention and the number of related publications is increasing. The purpose of this study was to better understand research trends on bacterial persisters by identifying and bibliometrically analyzing the top 100 cited publications in this field.

    Methods: The Web of Science Core Collection was used to retrieve the highly cited publications on bacterial persisters, and these publications were cross-matched with Google Scholar and Scopus. The top 100 cited publications were identified after reviewing the full texts. The main information of each publication was extracted and analyzed using Excel, SPSS, and VOSviewer.

    Results: The top 100 cited papers on bacterial persisters were published between 1997 and 2019. The citation frequency of each publication ranged from 147 to 1,815 for the Web of Science Core Collection, 153 to 1,883 for Scopus, and 207 to 2,986 for Google Scholar. Among the top 100 cited list, there were 64 original articles, 35 review articles, and 1 editorial. These papers were published in 51 journals, and the Journal of Bacteriology was the most productive journal with 8 papers. A total of 14 countries contributed to the top 100 cited publications, and 64 publications were from the United States. Fifteen institutions published two or more papers, and nearly 87% of them were from the United States. Kim Lewis from Northeastern University was the most influential author with 18 publications. Furthermore, keyword co-occurrence suggested that the main topics on bacterial persisters were mechanisms of persister formation or re-growth. Finally, “Microbiology” was the most frequent category in this field.

    Conclusion: This study identified and analyzed the top 100 cited publications related to bacterial persisters. The results provide a general overview of bacterial persisters and may help researchers to better understand the classic studies, historical developments, and new findings in this field, thus providing ideas for further research.

  3. An analysis of the current overlay journals

    • data.niaid.nih.gov
    Updated Oct 18, 2022
    Cite
    Rousi, Antti M.; Laakso, Mikael (2022). An analysis of the current overlay journals [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6420517
    Explore at:
    Dataset updated
    Oct 18, 2022
    Dataset provided by
    Hanken School of Economics
    Aalto University
    Authors
    Rousi, Antti M.; Laakso, Mikael
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Research data accompanying the article "Overlay journals: a study of the current landscape" (https://doi.org/10.1177/09610006221125208).

    Identifying the sample of overlay journals was an exploratory process (carried out between April 2021 and February 2022). The sample of investigated overlay journals was identified using the websites of Episciences.org (2021), Scholastica (2021), Free Journal Network (2021), Open Journals (2021), PubPub (2022), and Wikipedia (2021). In total, this study identified 34 overlay journals. Please see the paper for more details about the excluded journal types.

    The journal ISSN numbers, manuscript source repositories, first overlay volumes, article volumes, publication languages, peer-review type, licence for published articles, author costs, publisher types, submission policy, and preprint availability policy were recorded by inspecting journal editorial policies and submission guidelines found on journal websites. The overlay journals' ISSN numbers were identified by examining journal websites and cross-checking this information with the Ulrich's periodicals database (Ulrichsweb, 2021). Journals that published review reports, either with reviewers' names or anonymously, were classified as operating with open peer review. Publisher types defined by Laakso and Björk (2013) were used to categorise the findings concerning the publishers. If the journal website did not include publisher information, the editorial board was taken to be the publisher.

    The Organisation for Economic Co-operation and Development (OECD) field of science classification was used to categorise the journals into different domains of science. The journals' primary OECD fields of science were assigned by the authors by examining the journal websites.

    Whether the journals were indexed in the Directory of Open Access Journals (DOAJ), Scopus, or Clarivate Analytics’ Web of Science Core collection’s journal master list was examined by searching the services with journal ISSN numbers and journal titles.

    The identified overlay journals were examined from the viewpoint of both qualitative and quantitative journal metrics. The qualitative metrics comprised the Nordic expert panel rankings of scientific journals, namely the Finnish Publication Forum, the Danish Bibliometric Research Indicator and the Norwegian Register for Scientific Journals, Series and Publishers. Searches were conducted from the web portals of the above services with both ISSN numbers and journal titles. Clarivate Analytics’ Journal Citation Reports database was searched with the use of both ISSN numbers and journal titles to identify whether the journals had a Journal Citation Indicator (JCI), Two-Year Impact Factor (IF) and an Impact Factor ranking (IF rank). The examined Journal Impact Factors and Impact Factor rankings were for the year 2020 (as released in 2021).

  4. Data from: Inventory of online public databases and repositories holding...

    • data.wu.ac.at
    • agdatacommons.nal.usda.gov
    • +1 more
    csv, pptx
    Updated Feb 2, 2018
    + more versions
    Cite
    Department of Agriculture (2018). Inventory of online public databases and repositories holding agricultural data in 2017 [Dataset]. https://data.wu.ac.at/schema/data_gov/YmUxNjhjYTctMDdkNS00Nzk1LWFlNjItNmM4ZThjNGExY2Rl
    Explore at:
    csv, pptx (available download formats)
    Dataset updated
    Feb 2, 2018
    Dataset provided by
    Department of Agriculture
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    United States agricultural researchers have many options for making their data available online. This dataset aggregates the primary sources of ag-related data and determines where researchers are likely to deposit their agricultural data. These data serve both as a current landscape analysis and as a baseline for future studies of ag research data.

    Purpose

    As sources of agricultural data become more numerous and disparate, and collaboration and open data become more expected if not required, this research provides a landscape inventory of online sources of open agricultural data.

    An inventory of current agricultural data sharing options will help assess how the Ag Data Commons, a platform for USDA-funded data cataloging and publication, can best support data-intensive and multi-disciplinary research. It will also help agricultural librarians assist their researchers in data management and publication. The goals of this study were to

    • establish where agricultural researchers in the United States-- land grant and USDA researchers, primarily ARS, NRCS, USFS and other agencies -- currently publish their data, including general research data repositories, domain-specific databases, and the top journals
    • compare how much data is in institutional vs. domain-specific vs. federal platforms
    • determine which repositories are recommended by top journals that require or recommend the publication of supporting data
    • ascertain where researchers not affiliated with funding or initiatives possessing a designated open data repository can publish data

    Approach

    The National Agricultural Library team focused on Agricultural Research Service (ARS), Natural Resources Conservation Service (NRCS), and United States Forest Service (USFS) style research data, rather than ag economics, statistics, and social sciences data. To find domain-specific, general, institutional, and federal agency repositories and databases that are open to US research submissions and hold some amount of ag data, resources including re3data, libguides, and ARS lists were analysed. Databases focused primarily on environmental or public health data were not included, but places where ag grantees would publish data were considered.

    Search methods

    We first compiled a list of known domain specific USDA / ARS datasets / databases that are represented in the Ag Data Commons, including ARS Image Gallery, ARS Nutrition Databases (sub-components), SoyBase, PeanutBase, National Fungus Collection, i5K Workspace @ NAL, and GRIN. We then searched using search engines such as Bing and Google for non-USDA / federal ag databases, using Boolean variations of “agricultural data” /“ag data” / “scientific data” + NOT + USDA (to filter out the federal / USDA results). Most of these results were domain specific, though some contained a mix of data subjects.
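    To make the search strategy above concrete, here is a minimal sketch (a hypothetical helper, not the Library team's actual tooling) of composing those Boolean variations into engine-ready query strings:

```python
# Sketch: compose the Boolean search variations described above into query
# strings for a web search engine. The term lists come from the text; the
# helper itself is an illustration.
TOPIC_TERMS = ['"agricultural data"', '"ag data"', '"scientific data"']
EXCLUDE = "USDA"  # used to filter out federal / USDA results

def build_queries(topic_terms, exclude):
    """Yield one query per topic term, negating the excluded keyword."""
    for term in topic_terms:
        # Most engines accept a leading minus as a NOT operator.
        yield f"{term} -{exclude}"

for query in build_queries(TOPIC_TERMS, EXCLUDE):
    print(query)  # e.g. "agricultural data" -USDA
```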

    We then used search engines such as Bing and Google to find top agricultural university repositories using variations of “agriculture”, “ag data” and “university” to find schools with agriculture programs. Using that list of universities, we searched each university web site to see if their institution had a repository for their unique, independent research data if not apparent in the initial web browser search. We found both ag specific university repositories and general university repositories that housed a portion of agricultural data. Ag specific university repositories are included in the list of domain-specific repositories. Results included Columbia University – International Research Institute for Climate and Society, UC Davis – Cover Crops Database, etc. If a general university repository existed, we determined whether that repository could filter to include only data results after our chosen ag search terms were applied. General university databases that contain ag data included Colorado State University Digital Collections, University of Michigan ICPSR (Inter-university Consortium for Political and Social Research), and University of Minnesota DRUM (Digital Repository of the University of Minnesota). We then split out NCBI (National Center for Biotechnology Information) repositories.

    Next we searched the internet for open general data repositories using a variety of search engines, and repositories containing a mix of data, journals, books, and other types of records were tested to determine whether that repository could filter for data results after search terms were applied. General subject data repositories include Figshare, Open Science Framework, PANGEA, Protein Data Bank, and Zenodo.

    Finally, we compared scholarly journal suggestions for data repositories against our list to fill in any missing repositories that might contain agricultural data. Extensive lists of journals in which USDA published in 2012 and 2016 were compiled by combining search results from ARIS, Scopus, and the Forest Service's TreeSearch, plus the USDA web sites of the Economic Research Service (ERS), National Agricultural Statistics Service (NASS), Natural Resources and Conservation Service (NRCS), Food and Nutrition Service (FNS), Rural Development (RD), and Agricultural Marketing Service (AMS). The top 50 journals' author instructions were consulted to see if they (a) ask or require submitters to provide supplemental data, or (b) require submitters to submit data to open repositories.

    Data are provided for journals based on the 2012 and 2016 study of where USDA employees publish their research studies, ranked by number of articles, including 2015/2016 Impact Factor, Author guidelines, Supplemental Data?, Supplemental Data reviewed?, Open Data (Supplemental or in Repository) Required?, and Recommended data repositories, as provided in the online author guidelines for each of the top 50 journals.

    Evaluation

    We ran a series of searches on all resulting general subject databases with the designated search terms. From the results, we noted the total number of datasets in the repository, type of resource searched (datasets, data, images, components, etc.), percentage of the total database that each term comprised, any dataset with a search term that comprised at least 1% and 5% of the total collection, and any search term that returned greater than 100 and greater than 500 results.

    We compared domain-specific databases and repositories based on parent organization, type of institution, and whether data submissions were dependent on conditions such as funding or affiliation of some kind.

    Results

    A summary of the major findings from our data review:

    • Over half of the top 50 ag-related journals from our profile require or encourage open data for their published authors.
    • There are few general repositories that are both large AND contain a significant portion of ag data in their collection. GBIF (Global Biodiversity Information Facility), ICPSR, and ORNL DAAC were among those that had over 500 datasets returned with at least one ag search term and had that result comprise at least 5% of the total collection.
    • Not even one quarter of the domain-specific repositories and datasets reviewed allow open submission by any researcher regardless of funding or affiliation.
  5. October 2023 data-update for "Updated science-wide author databases of...

    • elsevier.digitalcommonsdata.com
    Updated Oct 4, 2023
    + more versions
    Cite
    John P.A. Ioannidis (2023). October 2023 data-update for "Updated science-wide author databases of standardized citation indicators" [Dataset]. http://doi.org/10.17632/btchxktzyw.6
    Explore at:
    Dataset updated
    Oct 4, 2023
    Authors
    John P.A. Ioannidis
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    Citation metrics are widely used and misused. We have created a publicly available database of top-cited scientists that provides standardized information on citations, h-index, co-authorship-adjusted hm-index, citations to papers in different authorship positions, and a composite indicator (c-score). Data are shown for career-long impact and, separately, for single recent year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given. Scientists are classified into 22 scientific fields and 174 sub-fields according to the standard Science-Metrix classification. Field- and subfield-specific percentiles are also provided for all scientists with at least 5 papers. Career-long data are updated to end-of-2022 and single recent year data pertain to citations received during calendar year 2022. The selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field. This version (6) is based on the October 1, 2023 snapshot from Scopus, updated to end of citation year 2022. This work uses Scopus data provided by Elsevier through ICSR Lab (https://www.elsevier.com/icsr/icsrlab). Calculations were performed using all Scopus author profiles as of October 1, 2023. If an author is not on the list, it is simply because the composite indicator value was not high enough to appear on the list. It does not mean that the author does not do good work.

    PLEASE ALSO NOTE THAT THE DATABASE HAS BEEN PUBLISHED IN AN ARCHIVAL FORM AND WILL NOT BE CHANGED. The published version reflects Scopus author profiles at the time of calculation. We thus advise authors to ensure that their Scopus profiles are accurate. REQUESTS FOR CORRECTIONS OF THE SCOPUS DATA (INCLUDING CORRECTIONS IN AFFILIATIONS) SHOULD NOT BE SENT TO US. They should be sent directly to Scopus, preferably by use of the Scopus to ORCID feedback wizard (https://orcid.scopusfeedback.com/) so that the correct data can be used in any future annual updates of the citation indicator databases.

    The c-score focuses on impact (citations) rather than productivity (number of publications) and it also incorporates information on co-authorship and author positions (single, first, last author). If you have additional questions, please read the 3 associated PLoS Biology papers that explain the development, validation and use of these metrics and databases. (https://doi.org/10.1371/journal.pbio.1002501, https://doi.org/10.1371/journal.pbio.3000384 and https://doi.org/10.1371/journal.pbio.3000918).
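    For intuition only, here is a minimal sketch of a composite indicator in the spirit of the c-score. The exact component indicators and normalization are defined in the PLoS Biology papers cited above; the indicator names, scaling, and all numbers below are assumptions for illustration:

```python
import math

# Illustrative composite in the spirit of the c-score: a sum of log-scaled
# indicator values, each normalized by a field maximum. The six component
# names and every number here are hypothetical stand-ins.
INDICATORS = ["nc", "h", "hm", "ncs", "ncsf", "ncsfl"]

def composite(author, maxima):
    """Sum over indicators of log(1 + value) / log(1 + field maximum)."""
    return sum(math.log1p(author[k]) / math.log1p(maxima[k]) for k in INDICATORS)

maxima = {"nc": 300_000, "h": 200, "hm": 100,
          "ncs": 50_000, "ncsf": 100_000, "ncsfl": 150_000}
author = {"nc": 12_000, "h": 55, "hm": 30,
          "ncs": 900, "ncsf": 4_000, "ncsfl": 8_000}
print(f"composite score (illustrative): {composite(author, maxima):.3f}")
```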

    Finally, we alert users that all citation metrics have limitations and their use should be tempered and judicious. For more reading, we refer to the Leiden manifesto: https://www.nature.com/articles/520429a

  6. Data of top 50 most cited articles about COVID-19 and the complications of...

    • search.dataone.org
    • data.niaid.nih.gov
    • +2 more
    Updated Jul 25, 2025
    Cite
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati (2025). Data of top 50 most cited articles about COVID-19 and the complications of COVID-19 [Dataset]. http://doi.org/10.5061/dryad.tx95x6b4m
    Explore at:
    Dataset updated
    Jul 25, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati
    Time period covered
    Jan 1, 2023
    Description

    Background: This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.

    Methods: Identifying the top 50 articles from a pool of 5940 in Scopus, the analysis spans November 2019 to July 2021, employing terms related to COVID-19 and complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors like title, citations, publication year, journal, impact fa...

    A bibliometric analysis of the most cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier's Scopus and Thomson Reuters' Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical speciality, or electronic availability of articles or abstracts. The terms were combined as follows: ("COVID-19" OR "COVID19" OR "SARS-COV-2" OR "SARSCOV2" OR "SARS 2" OR "Novel coronavirus" OR "2019-nCov" OR "Coronavirus") AND ("Complication" OR "Long Term Complication" OR "Post-Intensive Care Syndrome" OR "Venous Thromboembolism" OR "Acute Kidney Injury" OR "Acute Liver Injury" OR "Post COVID-19 Syndrome" OR "Acute Cardiac Injury" OR "Cardiac Arrest" OR "Stroke" OR "Embolism" OR "Septic Shock" OR "Disseminated Intravascular Coagulation" OR "Secondary Infection" OR "Blood Clots" OR "Cytokine Release Syndrome" OR "Paediatric Inflammatory Multisystem Syndrome" OR "Vaccine...

    Data of top 50 most cited articles about COVID-19 and the complications of COVID-19

    This dataset contains information about the top 50 most cited articles about COVID-19 and the complications of COVID-19. We have looked into a variety of research and clinical factors for the analysis.

    Description of the data and file structure

    The data sheet offers a comprehensive analysis of the selected articles. It delves into specifics such as the publication year of the top 50 articles, the journals responsible for publishing them, and the geographical region with the highest number of citations in this elite list. Moreover, the sheet sheds light on the key players involved, including authors and their affiliated departments, in crafting the top 50 most cited articles.

    Beyond these fundamental aspects, the data sheet goes on to provide intricate details related to the study types and topics prevalent in the top 50 articles. To enrich the analysis, it incorporates clinical data, capturing...

  7. Raw data.

    • plos.figshare.com
    7z
    Updated Nov 25, 2025
    + more versions
    Cite
    Gerhard Reichmann; Christian Schlögl; Margit Sommersguter-Reichmann (2025). Raw data. [Dataset]. http://doi.org/10.1371/journal.pone.0336492.s001
    Explore at:
    7z (available download formats)
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Gerhard Reichmann; Christian Schlögl; Margit Sommersguter-Reichmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This study examines the impact of methodological variations in publication-based rankings on the evaluation of individual research performance in business administration. Drawing on a unique dataset comprising complete personal publication lists of 233 professors from Austrian public universities (2009–2018), we apply ten distinct ranking variants that differ in their treatment of data sources, co-authorship, publication languages, article lengths, and journal qualities. These variants are categorized into purely quantity-focused and predominantly quality-focused rankings. Our results demonstrate that researcher rankings are susceptible to specification choices. While quantity-focused rankings produce relatively small performance differentials and high variability, quality-focused variants consistently identify a stable group of leading researchers. These scholars publish more frequently in English, in journals indexed by Web of Science (WoS), and in top-tier outlets according to the JOURQUAL ranking. Notably, leading researchers publish over twice as many articles in high-ranking journals as their peers. The findings underscore the significant implications of ranking design for career advancement and research strategy. For early-career researchers, aligning publication efforts with the logic of quality-focused rankings—favoring English-language publications in highly ranked, peer-reviewed journals—is crucial for enhancing academic visibility and competitiveness. Moreover, our study offers a methodological stress test for ranking systems, revealing the extent to which technical design influences outcomes. By leveraging comprehensive and multilingual publication data and systematically comparing multiple ranking methodologies, this study contributes to both the academic evaluation literature and practical guidance for researchers navigating the demands of a metric-driven academic environment.
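    To make the comparison of ranking variants concrete, here is a minimal sketch (synthetic scores, not the study's data) of checking how strongly two specifications agree on the resulting researcher ranking:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins for per-researcher scores under two ranking variants.
rng = np.random.default_rng(0)
quality_score = rng.lognormal(mean=0.0, sigma=1.0, size=233)    # e.g. top-tier articles
quantity_score = quality_score * rng.lognormal(0.0, 0.8, 233)   # e.g. raw output counts

# Spearman's rho compares the induced rankings rather than the raw scores.
rho, pval = spearmanr(quantity_score, quality_score)
print(f"rank agreement between variants: rho = {rho:.2f} (p = {pval:.1e})")
```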

  8. Supplementary Material for: Searching the Footprints of Pioneers on...

    • datasetcatalog.nlm.nih.gov
    Updated Jan 19, 2017
    Cite
    D. Y. Yoon; J.-E. Kim; J. S. Bae; Y. Kim; S. E. Kim; K. M. Park (2017). Supplementary Material for: Searching the Footprints of Pioneers on Neurology: A Bibliometric Analysis [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001765996
    Explore at:
    Dataset updated
    Jan 19, 2017
    Authors
    D. Y. Yoon; J.-E. Kim; J. S. Bae; Y. Kim; S. E. Kim; K. M. Park
    Description

    Background: We identify the most cited articles that have influenced the clinical practice of neurologists. Methods: We first analyzed the top 100 cited articles published in 50 high-impact-factor neurology journals. We then collected all of the original articles on clinical neurology published in all 554 medical journals. The Institute for Scientific Information Web of Science search tools were used to identify the top 100 cited articles in the Journal Citation Reports database since 1950, which were then manually reviewed to determine their contents. Results: In the first part of the analysis, the top 100 cited articles were all published in 17 journals, with 26 articles published in Neurology. The most frequent topic, neurodegeneration, appeared in 40 articles. The second part of the analysis revealed that the top 100 cited articles were also all published in 17 journals, with 30 articles published in the New England Journal of Medicine. In contrast to the first part of the analysis, stroke was the most frequent topic (in 38 articles). Conclusions: Our bibliometric analysis yielded 2 detailed lists of the top 100 cited articles, compiled using different methods. This approach can provide information about the trends and academic achievements in the field of clinical neurology.

  9. Open access practices of selected library science journals

    • data.niaid.nih.gov
    • search.dataone.org
    • +3 more
    zip
    Updated May 7, 2025
    Cite
    Jennifer Jordan; Blair Solon; Stephanie Beene (2025). Open access practices of selected library science journals [Dataset]. http://doi.org/10.5061/dryad.pvmcvdnt3
    Explore at:
    zip (available download formats)
    Dataset updated
    May 7, 2025
    Dataset provided by
    University of New Mexico
    Authors
    Jennifer Jordan; Blair Solon; Stephanie Beene
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    The data in this set was gathered to analyze the open access practices of library journals. The data was culled from the Directory of Open Access Journals (DOAJ), the ProQuest database Library and Information Science Abstracts (LISA), and a sample of peer-reviewed scholarly journals in the field of Library Science. Starting with a batch of 377 journals, the researchers focused their dataset on journals that met the following criteria: 1) peer-reviewed, 2) written in English or abstracted in English, 3) actively published at the time of analysis, and 4) scoped to librarianship. The dataset presents an overview of the landscape of open access scholarly publishing in the LIS field during a very specific time period, spring and summer of 2023.

    Methods

    Data Collection

    The researchers gathered 377 scholarly journals whose content covered the work of librarians, archivists, and affiliated information professionals. This data encompassed 222 journals from the ProQuest database Library and Information Science Abstracts (LISA), widely regarded as an authoritative database in the field of librarianship. From the Directory of Open Access Journals, we included 144 LIS journals. We also included 11 other journals not indexed in DOAJ or LISA, based on the researchers' knowledge of existing OA library journals. The data is separated into several different sets representing the different indices and journals we searched. The first set includes journals from the database LISA. The following fields are in this dataset:

    Journal: title of the journal

    Publisher: title of the publishing company

    Publisher Type: the kind of publisher, whether association, traditional, university library, or independent

    Country of publication: country where the journal is published

    Region: geographical place of publication

    Open Data Policy: lists whether an open data policy exists and what the policy is

    Open Data Notes: descriptions of the open data policies

    Open ranking: details whether the journal is diamond, gold, and/or green

    Open peer review: specifies if the journal does open peer review

    Author retains copyright: explains copyright policy

    APCs: Details whether there is an article processing charge

    In DOAJ: details whether the journal is also published in the Directory of Open Access Journals

    The second set includes the same fields as the previous set, but it also includes two additional columns:

    Type of CC: lists the Creative Commons license applied to the journal articles

    In LISA: details whether the journal is also published in the Library and Information Science Abstracts database

    A third dataset includes eleven scholarly, peer-reviewed journals focused on Library and Information Science that were not in DOAJ or LISA. This dataset is labeled with the same fields as the first dataset. The fourth dataset is the complete list of 377 journals that we evaluated for inclusion in this dataset.

    Data Processing

    To explore the current state of OA scholarly publishing in librarianship, we developed the following criteria: journals must be actively published at the time of analysis, peer reviewed, and scoped to librarianship, and must have articles or abstracts in English so that we could determine the journal's scope. After applying the inclusion/exclusion criteria, 145 of 377 journals remained; however, the total number of journals analyzed is 133 because DOAJ and LISA shared 12 journals. The researchers explored the open data policies, open access publication options, country of origin, publisher, and peer review process of each of the remaining 133 journals. The researchers also looked for article processing charges, the type of Creative Commons licensing (open licenses that allow users to redistribute and sometimes remix intellectual property), and whether the journals were included in either the DOAJ and/or LISA index.

    References: Budapest Open Access Initiative. (2002) http://www.soros.org/openaccess/
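    A minimal sketch of the inclusion/exclusion step described above, assuming a hypothetical input file and column names:

```python
import pandas as pd

# Start from the 377 candidate journals and keep those meeting all four
# criteria; journals indexed in both DOAJ and LISA should be counted once.
# The file name and column names are assumptions for illustration.
journals = pd.read_csv("journals_377.csv")

included = journals[
    journals["peer_reviewed"]
    & journals["actively_published"]
    & journals["english_available"]
    & journals["librarianship_scope"]
].drop_duplicates(subset="journal_title")

print(len(included))  # the study arrived at 133 unique journals
```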

  10. High-Ranked Social Science Journal Articles

    • researchdata.edu.au
    Updated 2014
    Cite
    David Stern (2014). High-Ranked Social Science Journal Articles [Dataset]. http://doi.org/10.4225/13/5404052DDD22E
    Explore at:
    Dataset updated
    2014
    Dataset provided by
    The Australian National University
    The Australian National University Data Commons
    Authors
    David Stern
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Time period covered
    1999 - 2012
    Description

    Do citations accumulate too slowly in the social sciences to be used to assess the quality of recent articles? This data supports research showing that it is no more difficult to predict the future citations that social science journal articles will receive than it is to predict the future citations of articles in some natural sciences. The research uses citations accumulated in the first few years after publication and journal impact factors to predict the future citation ranks of all economics and political science articles in the Web of Science published in 2006. I find that citations in the first two years after publication explain more than half of the variation in cumulative citations received over a longer period. Journal impact factors improve the correlation between the predicted and actual future ranks of journal articles when using citation data from 2006 alone, but the effect declines sharply thereafter. Also, more than half of the papers in the top 20% in 2012 were already in the top 20% in the year of publication (2006). The data includes citation data in each year from 2006 to 2012 for the 2006 articles and the impact factors of the journals that the articles were published in. The data also includes citations in each year from 1999 to 2012 for all economics articles in the Web of Science published in 1999.
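    A minimal sketch (synthetic data, not the author's replication code) of the prediction exercise: regress long-run citations on early citations and the journal impact factor, and report the share of variation explained:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 500
early_cites = rng.poisson(5, n)      # citations in the first two years
jif = rng.gamma(2.0, 1.0, n)         # journal impact factor
# Synthetic long-run citations driven by both predictors plus noise.
total_cites = 3 * early_cites + 2 * jif + rng.normal(0, 5, n)

X = np.column_stack([early_cites, jif])
model = LinearRegression().fit(X, total_cites)
print(f"R^2 = {model.score(X, total_cites):.2f}")  # share of variation explained
```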

  11. Data from: A study of the impact of data sharing on article citations using...

    • dataverse.harvard.edu
    • search.dataone.org
    • +1 more
    application/gzip +13
    Updated Sep 4, 2020
    Cite
    Harvard Dataverse (2020). A study of the impact of data sharing on article citations using journal policies as a natural experiment [Dataset]. http://doi.org/10.7910/DVN/ORTJT5
    Explore at:
    text/x-stata-syntax(519), txt(0), png(15306), type/x-r-syntax(569), jar(21709328), pdf(65387), tsv(35864), text/markdown(125), bin(26), application/gzip(111839), text/x-python(0), application/x-stata-syntax(720), tex(3986), text/plain; charset=us-ascii(91) (available download formats)
    Dataset updated
    Sep 4, 2020
    Dataset provided by
    Harvard Dataverse
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before the change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
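    The instrumental-variables logic can be sketched as textbook two-stage least squares on synthetic data (an illustration of the method, not the study's estimation code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
policy = rng.integers(0, 2, n).astype(float)   # instrument: data policy in force
confound = rng.normal(size=n)                  # unobserved article quality
# Sharing depends on the policy and the confounder; citations depend on
# sharing and the confounder, with a true sharing effect of 97.
shares = ((0.8 * policy + 0.5 * confound + rng.normal(size=n)) > 0.5).astype(float)
citations = 97 * shares + 30 * confound + rng.normal(0, 20, n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), policy])        # first stage: predict sharing
shares_hat = Z @ ols(Z, shares)
X2 = np.column_stack([np.ones(n), shares_hat])   # second stage
beta = ols(X2, citations)
print(f"2SLS estimate of the sharing effect: {beta[1]:.1f}")  # close to 97
```

    Naive OLS of citations on shares would be biased upward by the confounder; the instrument isolates the policy-driven variation in sharing.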

  12. Replication Data for: Does Peer Review Identify the Best Papers? A...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Apr 17, 2017
    Cite
    Justin Esarey (2017). Replication Data for: Does Peer Review Identify the Best Papers? A Simulation Study of Editors, Reviewers, and the Social Scientific Publication Process [Dataset]. http://doi.org/10.7910/DVN/TT17NY
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Apr 17, 2017
    Dataset provided by
    Harvard Dataverse
    Authors
    Justin Esarey
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    How does the structure of the peer review process, which can vary from journal to journal, influence the quality of papers published in that journal? In this paper, I study multiple systems of peer review using computational simulation. I find that, under any system I study, a majority of accepted papers will be evaluated by the average reader as not meeting the standards of the journal. Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor (who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions) can mitigate some of these effects.
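    A minimal sketch of such a simulation (simplified assumptions, not the paper's replication code): papers have latent quality, reviewers apply noisy heterogeneous standards, and acceptance is by majority vote:

```python
import numpy as np

rng = np.random.default_rng(7)
n_papers, n_reviewers = 10_000, 3

quality = rng.normal(0.0, 1.0, n_papers)                  # latent paper quality
# Each reviewer's standard: journal norm plus idiosyncratic taste.
standards = rng.normal(1.0, 0.5, (n_papers, n_reviewers))
# Reviewers observe quality with noise and vote to accept if it clears
# their personal standard; the journal accepts on a 2-of-3 majority.
signals = quality[:, None] + rng.normal(0, 0.5, (n_papers, n_reviewers))
accepted = (signals > standards).sum(axis=1) >= 2

reader_standard = 1.0                                     # the "average reader"
below = (quality[accepted] < reader_standard).mean()
print(f"accepted {accepted.mean():.1%} of papers; "
      f"{below:.1%} of them fall below the reader's standard")
```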

  13. Data from: OpCitance: Citation contexts identified from the PubMed Central...

    • databank.illinois.edu
    Updated Feb 15, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Tzu-Kun Hsiao; Vetle Torvik (2023). OpCitance: Citation contexts identified from the PubMed Central open access articles [Dataset]. http://doi.org/10.13012/B2IDB-4353270_V1
    Explore at:
    Dataset updated
    Feb 15, 2023
    Authors
    Tzu-Kun Hsiao; Vetle Torvik
    Dataset funded by
    U.S. National Institutes of Health (NIH)
    Description

    Sentences and citation contexts identified from the PubMed Central open access articles

    The dataset is delivered as 24 tab-delimited text files. The files contain 720,649,608 sentences, 75,848,689 of which are citation contexts. The dataset is based on a snapshot of articles in the XML version of the PubMed Central open access subset (i.e., the PMCOA subset). The PMCOA subset was collected in May 2019. The dataset is created as described in: Hsiao, T. K., & Torvik, V. I. (manuscript) OpCitance: Citation contexts identified from the PubMed Central open access articles.

    Files:

    • A_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with A.
    • B_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with B.
    • C_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with C.
    • D_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with D.
    • E_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with E.
    • F_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with F.
    • G_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with G.
    • H_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with H.
    • I_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with I.
    • J_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with J.
    • K_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with K.
    • L_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with L.
    • M_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with M.
    • N_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with N.
    • O_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with O.
    • P_p1_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with P (part 1).
    • P_p2_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with P (part 2).
    • Q_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with Q.
    • R_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with R.
    • S_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with S.
    • T_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with T.
    • UV_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with U or V.
    • W_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with W.
    • XYZ_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with X, Y or Z.

    Each row in a file is a sentence/citation context and contains the following columns:

    • pmcid: PMCID of the article.
    • pmid: PMID of the article. If an article does not have a PMID, the value is NONE.
    • location: The article component (abstract, main text, table, figure, etc.) to which the citation context/sentence belongs.
    • IMRaD: The type of IMRaD section associated with the citation context/sentence. I, M, R, and D represent introduction/background, method, results, and conclusion/discussion, respectively; NoIMRaD indicates that the section type is not identifiable.
    • sentence_id: The ID of the citation context/sentence in the article component.
    • total_sentences: The number of sentences in the article component.
    • intxt_id: The ID of the citation.
    • intxt_pmid: PMID of the citation (as tagged in the XML file). If a citation does not have a PMID tagged in the XML file, the value is "-".
    • intxt_pmid_source: The sources where the intxt_pmid can be identified. "xml" means the PMID is identified only from the XML file; "xml,pmc" means the PMID is not only from the XML file but also appears in the citation data collected from the NCBI Entrez Programming Utilities. If a citation does not have an intxt_pmid, the value is "-".
    • intxt_mark: The citation marker associated with the inline citation.
    • best_id: The best source link ID (e.g., PMID) of the citation.
    • best_source: The sources that confirm the best ID.
    • best_id_diff: The comparison result between the best_id column and the intxt_pmid column.
    • citation: A citation context. If no citation is found in a sentence, the value is the sentence.
    • progression: Text progression of the citation context/sentence.

    Supplementary Files

    • PMC-OA-patci.tsv.gz – This file contains the best source link IDs (e.g., PMID) for the references. Patci [1] was used to identify the best source link IDs, which are mapped to the citation contexts and displayed in the *_journal_IntxtCit.tsv files as the best_id column. Each row in the PMC-OA-patci.tsv.gz file is a citation (i.e., a reference extracted from the XML file) and contains the following columns:
      • pmcid: PMCID of the citing article.
      • pos: The citation's position in the reference list.
      • fromPMID: PMID of the citing article.
      • toPMID: Source link ID (e.g., PMID) of the citation. This ID is identified by Patci.
      • SRC: The sources that confirm the toPMID.
      • MatchDB: The origin bibliographic database of the toPMID.
      • Probability: The match probability of the toPMID.
      • toPMID2: PMID of the citation (as tagged in the XML file).
      • SRC2: The sources that confirm the toPMID2.
      • intxt_id: The ID of the citation.
      • journal: The first letter of the journal title. This maps to the *_journal_IntxtCit.tsv files.
      • same_ref_string: Whether the citation string appears in the reference list more than once.
      • DIFF: The comparison result between the toPMID column and the toPMID2 column.
      • bestID: The best source link ID (e.g., PMID) of the citation.
      • bestSRC: The sources that confirm the best ID.
      • Match: Matching result produced by Patci.
    • Supplementary_File_1.zip – This file contains the code for generating the dataset.

    [1] Agarwal, S., Lincoln, M., Cai, H., & Torvik, V. (2014). Patci – a tool for identifying scientific articles cited by patents. GSLIS Research Showcase 2014. http://hdl.handle.net/2142/54885
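    A minimal sketch of loading one per-letter file with the columns listed above (whether the files ship with a header row is an assumption to verify against the data):

```python
import pandas as pd

# Column names as listed in the dataset description.
cols = ["pmcid", "pmid", "location", "IMRaD", "sentence_id", "total_sentences",
        "intxt_id", "intxt_pmid", "intxt_pmid_source", "intxt_mark",
        "best_id", "best_source", "best_id_diff", "citation", "progression"]

# Adjust header handling if the files carry a header row.
df = pd.read_csv("A_journal_IntxtCit.tsv", sep="\t", names=cols, dtype=str)

# Rows without a citation ID are plain sentences; treating empty or "-"
# values as "no citation" is an assumption based on the column descriptions.
contexts = df[df["intxt_id"].notna() & (df["intxt_id"] != "-")]
print(f"{len(contexts)} citation contexts out of {len(df)} sentences")
```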

  14. Journal Impact Factor (JIF) ranking in the dataset.

    • plos.figshare.com
    bin
    Updated Jun 16, 2023
    Cite
    Masanao Ochi; Masanori Shiro; Jun’ichiro Mori; Ichiro Sakata (2023). Journal Impact Factor (JIF) ranking in the dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0274253.t005
    Explore at:
    bin (available download formats)
    Dataset updated
    Jun 16, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Masanao Ochi; Masanori Shiro; Jun’ichiro Mori; Ichiro Sakata
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Journal Impact Factor (JIF) ranking in the dataset.

  15. Data and Code for: Publishing and Promotion in Economics: The Tyranny of the...

    • openicpsr.org
    stata
    Updated Apr 23, 2020
    + more versions
    Cite
    James J. Heckman; Sidharth Moktan (2020). Data and Code for: Publishing and Promotion in Economics: The Tyranny of the Top Five [Dataset]. http://doi.org/10.3886/E118326V3
    Explore at:
    stata (available download formats)
    Dataset updated
    Apr 23, 2020
    Dataset provided by
    American Economic Association
    Authors
    James J. Heckman; Sidharth Moktan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1990 - Dec 31, 2017
    Area covered
    United States of America
    Dataset funded by
    Institute for New Economic Thinking
    Description
    This paper examines the relationship between placement of publications in Top Five (T5) journals and receipt of tenure in academic economics departments. Analyzing the job histories of tenure-track economists hired by the top 35 U.S. economics departments, we find that T5 publications have a powerful influence on tenure decisions and rates of transition to tenure. A survey of the perceptions of young economists supports the formal statistical analysis. Pursuit of T5 publications has become the obsession of the next generation of economists. However, the T5 screen is far from reliable. A substantial share of influential publications appear in non-T5 outlets. Reliance on the T5 to screen talent incentivizes careerism over creativity.
  16. Journal of Biological Chemistry Publication fee - ResearchHelpDesk

    • researchhelpdesk.org
    Updated May 7, 2022
    + more versions
    Cite
    Research Help Desk (2022). Journal of biological chemistry Publication fee - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/publication-fee/568/journal-of-biological-chemistry
    Explore at:
    Dataset updated
    May 7, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of Biological Chemistry Publication fee - ResearchHelpDesk - The Journal of Biological Chemistry welcomes high-quality science that seeks to elucidate the molecular and cellular basis of biological processes. Papers published in JBC can therefore fall under the umbrellas of not only biological chemistry, chemical biology, or biochemistry, but also allied disciplines such as biophysics, systems biology, RNA biology, immunology, microbiology, neurobiology, epigenetics, computational biology, 'omics, and many more. The outcome of our focus on papers that contribute novel and important mechanistic insights, rather than on a particular topic area, is that JBC is truly a melting pot for scientists across disciplines. In addition, JBC welcomes papers that describe methods that will help scientists push their biochemical inquiries forward and resources that will be of use to the research community.

    Beyond the consideration of any one particular article, our mission as a journal is to bring significant, enduring research to the scientific community. We believe it is our responsibility to safeguard the research we publish by providing high-quality review and maintaining strict standards on data presentation and deposition. It is our goal to help scientists disclose their findings in the most efficient and effective way possible by keeping review times short, providing editorial feedback on the manuscript text and promoting papers after publication. It is our aspiration to facilitate scientific discovery in new ways by exploring new technologies and forging new partnerships.

    The heart of this mission is the publication of original research in the form of Articles and Accelerated Communications, a subset of JBC's articles that succinctly report particularly compelling advances across biological chemistry. Some JBC papers are also selected as Editors' Picks, which represent top content in the journal and are highlighted with additional coverage. The journal publishes JBC Reviews to keep readers up to speed with the latest advances across diverse scientific topics; Thematic Series are collections of reviews that cover multiple aspects of a particular field. JBC also publishes Editorials and eLetters to facilitate communication within the biological chemistry community, Meeting Reports discussing findings presented at conferences, Virtual Issues to help readers find collections of papers on their favourite topics, and Classics and Reflections that honour the papers and people that have shaped scientific progress. Finally, JBC administers an award program established in honor of Herbert Tabor, JBC's editor-in-chief from 1971 to 2012.

    The review process at JBC relies predominantly on editorial board members who have been vetted and appointed by the editor-in-chief and associate editors. Our editorial board members undergo a comprehensive training process to make our reviews as consistent, thoughtful and fair as possible for our authors. As of January 2021, JBC is a gold open access journal; the final versions of all articles are free to read without a subscription. The author versions of accepted research papers and JBC Reviews are also posted and freely available within 24 hours of acceptance as Papers in Press. JBC is owned and published by the American Society for Biochemistry and Molecular Biology, Inc.

    Abstract & Indexing

    Indexed in Medline, PubMed, PubMed Central, Index Medicus, The Science Citation Index, Current Contents - Life Sciences, SCOPUS, BIOSIS Previews, Clarivate Analytics Web of Science, and the Chemical Abstracts Service.

  17. Journalysis

    • scicrunch.org
    Updated Sep 20, 2018
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    (2018). Journalysis [Dataset]. http://identifiers.org/RRID:SCR_014015
    Explore at:
    Dataset updated
    Sep 20, 2018
    Description

    A database where researchers can read and post reviews of publishing experiences with scientific journals. When writing comments, reviewers are advised to provide facts where possible, be specific, be honest, check the review, and offer constructive criticism when writing a negative report. Journalysis collates submitted reviews and data for each journal and provides useful summary information (metrics). Authors wanting to find out more about specific journals can search its database to find the best journal for their next manuscript submission. Journals can use these reviews and metrics to demonstrate high publishing standards, or to improve standards where reviews are negative.

  18. UNIVERSITIES & Research INSTITUTIONS Rank -SCImago

    • kaggle.com
    zip
    Updated Apr 9, 2025
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Ali Jalaali (2025). UNIVERSITIES & Research INSTITUTIONS Rank -SCImago [Dataset]. https://www.kaggle.com/datasets/alijalali4ai/scimago-universities-and-institutions-ranking
    Explore at:
    zip (15363511 bytes). Available download formats
    Dataset updated
    Apr 9, 2025
    Authors
    Ali Jalaali
    Description

    The SCImago Institutions Rankings (SIR) is a classification of academic and research-related institutions ranked by a composite indicator that combines three different sets of indicators based on research performance, innovation outputs and societal impact measured by their web visibility.

    The aim of SIR is to provide a useful metric tool for institutions, policymakers and research managers for the analysis, evaluation and improvement of their activities, outputs and outcomes.

    💬Also have a look at
    💡 COUNTRIES Research & Science Dataset - SCImagoJR
    💡 Scientific JOURNALS Indicators & Info - SCImagoJR

    For ranking purposes, the calculation is generated each year from the results obtained over a five-year period ending two years before the edition of the ranking. For instance, if the selected year of publication is 2024, the results used are those from the five-year period 2018-2022. The only exception is the case of web indicators, which are calculated for the last year only.
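
    As a worked example of this window arithmetic, here is a minimal sketch; `ranking_window` is a hypothetical helper name for illustration, not part of SCImago's tooling:

    ```python
    def ranking_window(edition_year: int) -> range:
        """Five-year publication window ending two years before the edition year."""
        end = edition_year - 2          # e.g. 2024 -> 2022
        return range(end - 4, end + 1)  # e.g. 2018..2022 inclusive

    assert list(ranking_window(2024)) == [2018, 2019, 2020, 2021, 2022]
    ```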

    ☢️❓The entire dataset is obtained from public and open-access data of ScimagoIR (SCImago Institutions Rankings).
    ScimagoIR
    ScimagoIR Ranking Methodology

    • Document types: all document types in SCOPUS are considered except Retracted, Erratum and Preprint.
    • Multiple affiliations: in the Higher Education sector we use fractional counting to calculate research indicators when authors have more than one institutional affiliation. In these cases we distribute the weight of a document among the institutions of the authors who present multiple affiliations (see the sketch after this list).
    • Multinational institutions (MUL) which cannot be attributed to any country have also been included.
    • The institutions marked with an asterisk consist of a group of sub-institutions, identified with the abbreviated name of the parent institution. The parent institutions show the results of all of their sub-institutions.
    • The inclusion criteria are:
      1. The institution must have published at least 100 works included in the SCOPUS database during the last year of the selected time period.
      2. Citable documents (Article, Chapter, Conference Paper, Review and Short Survey) must represent at least 75% of the total documents published by the institution.
    • The SIR includes both size-dependent and size-independent indicators; that is, indicators that are and are not influenced by the size of the institution. The source of information used for the innovation indicators is the PATSTAT database. The sources of information used for the web-visibility indicators are Google and Semrush. The Unpaywall database is used to identify Open Access documents. Altmetrics from PlumX Metrics and Mendeley are used for the Societal Factor. The Overton database is used to identify documents cited in policy documents.
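
    To illustrate the fractional counting described above, here is a minimal sketch under stated assumptions: SCImago does not publish code, and the exact per-author split is an assumption. The sketch divides one document's unit weight equally among its authors, then divides each author's share equally among that author's affiliations.

    ```python
    from collections import defaultdict

    def fractional_weights(author_affiliations: list[list[str]]) -> dict[str, float]:
        """Distribute one document's unit weight across institutions.

        Assumed scheme: each author gets an equal share of the document,
        and that share is split equally among the author's affiliations.
        """
        weights: dict[str, float] = defaultdict(float)
        share_per_author = 1 / len(author_affiliations)
        for affiliations in author_affiliations:
            for institution in affiliations:
                weights[institution] += share_per_author / len(affiliations)
        return dict(weights)

    # Two authors; the second holds two affiliations.
    print(fractional_weights([["Univ A"], ["Univ A", "Univ B"]]))
    # {'Univ A': 0.75, 'Univ B': 0.25}
    ```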

    Indicators are divided into three groups:

    1. Research (50%)

    • Normalized Impact (NI), 13%: the values (in decimal numbers) show the relationship between an institution's average scientific impact and the world average, set to a score of 1. Size-independent.
    • Excellence with Leadership (EwL), 8%: amount of documents in Excellence in which the institution is the main contributor. Size-dependent.
    • Output (O), 8%: total number of documents published in scholarly journals indexed in Scopus. Size-dependent.
    • Scientific Leadership (L), 5%: amount of an institution’s output as main contributor, that is, the number of papers in which the corresponding author belongs to the institution. Size-dependent.
    • Not Own Journals (NotOJ), 3%: number of documents not published in the institution's own journals (journals published by the institution). Size-dependent.
    • Own Journals (OJ), 3%: number of journals published by the institution (publishing services). Size-dependent.
    • Excellence (Exc), 2%: amount of an institution’s scientific output that is included in the top 10% of the most cited papers in their respective scientific fields. Size-dependent.
    • High Quality Publications (Q1), 2%: number of publications that an institution publishes in the most influential scholarly journals of the world, i.e. those ranked in the first quartile (25%) of their categories as ordered by the SCImago Journal Rank (SJR) indicator. Size-dependent.
    • International Collaboration (IC), 2%: institution's output produced in collaboration with foreign i...
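
    To make the weighting concrete, the sketch below combines indicator values into the Research component as a weighted sum. This is an illustration under stated assumptions: indicator values are taken to be already normalized to a common scale, and the weights listed above are used as fractions of the overall score (they do not add up to the full 50% because the listing above is truncated).

    ```python
    # Weights from the (truncated) Research group listed above.
    RESEARCH_WEIGHTS = {
        "NI": 0.13, "EwL": 0.08, "O": 0.08, "L": 0.05, "NotOJ": 0.03,
        "OJ": 0.03, "Exc": 0.02, "Q1": 0.02, "IC": 0.02,
    }

    def research_component(indicators: dict[str, float]) -> float:
        """Weighted sum of normalized indicator values (missing ones count as 0)."""
        return sum(w * indicators.get(name, 0.0) for name, w in RESEARCH_WEIGHTS.items())

    # Example: an institution strong on impact but average elsewhere.
    print(round(research_component({"NI": 0.9, "O": 0.5, "Q1": 0.4}), 4))  # 0.165
    ```
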
  19. Food and Energy Security Impact Factor 2025-2026 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    Cite
    Research Help Desk (2022). Food and Energy Security Impact Factor 2025-2026 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/536/food-and-energy-security
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Food and Energy Security Impact Factor 2025-2026 - ResearchHelpDesk - Food and Energy Security is a high-quality and high-impact open access journal publishing original research on agricultural crop and forest productivity to improve food and energy security.

    Aims and Scope: Food and Energy Security seeks to publish high-quality and high-impact original research on agricultural crop and forest productivity to improve food and energy security. It actively seeks submissions from emerging countries with expanding agricultural research communities; papers from China, other parts of Asia, India, and South America are particularly welcome. The Editorial Board, headed by Editor-in-Chief Professor Christine Foyer, is determined to make FES the leading publication in its sector and is aiming for a top-ranking impact factor. Primary research articles should report hypothesis-driven investigations that provide new insights into mechanisms and processes that determine productivity and properties for exploitation. Review articles are welcome, but they must be critical in approach and provide particularly novel and far-reaching insights. Food and Energy Security offers authors a forum for the discussion of the most important advances in this field and promotes an integrative approach across scientific disciplines. Papers must contribute substantially to the advancement of knowledge.

    Examples of areas covered in Food and Energy Security include: Agronomy; Biotechnological Approaches; Breeding & Genetics; Climate Change; Quality and Composition; Food Crops and Bioenergy Feedstocks; Development, Physiology, and Biochemistry; Functional Genomics; Molecular Biology; Pest and Disease Management; Political, economic, and societal influences on food security and agricultural crop production; Post Harvest Biology; Soil Science; Systems Biology.

    The journal is Open Access and published online. Submission of manuscripts to Food and Energy Security is exclusively via a web-based electronic submission and tracking system, enabling rapid submission-to-first-decision times. Before submitting a paper for publication, potential authors should first read the Author Guidelines. Instructions on how to upload your manuscript can be found on ScholarOne Manuscripts.

    Keywords: Agricultural economics, Agriculture, Bioenergy, Biofuels, Biochemistry, Biotechnology, Breeding, Composition, Development, Diseases, Feedstocks, Food, Food Security, Food Safety, Forestry, Functional Genomics, Genetics, Horticulture, Pests, Phenomics, Plant Architecture, Plant Biotechnology, Plant Science, Quality Traits, Secondary Metabolites, Social policies, Weed Science.
Abstracting and Indexing Information: Abstracts on Hygiene & Communicable Diseases (CABI); AgBiotechNet (CABI); AGRICOLA Database (National Agricultural Library); Agricultural Economics Database (CABI); Animal Breeding Abstracts (CABI); Animal Production Database (CABI); Animal Science Database (CABI); CAB Abstracts® (CABI); Current Contents: Agriculture, Biology & Environmental Sciences (Clarivate Analytics); Environmental Impact (CABI); Global Health (CABI); Nutrition & Food Sciences Database (CABI); Nutrition Abstracts & Reviews Series A: Human & Experimental (CABI); Plant Breeding Abstracts (CABI); Plant Genetics and Breeding Database (CABI); Plant Protection Database (CABI); Postharvest News & Information (CABI); Science Citation Index Expanded (Clarivate Analytics); SCOPUS (Elsevier); Seed Abstracts (CABI); Soil Science Database (CABI); Soils & Fertilizers Abstracts (CABI); Web of Science (Clarivate Analytics); Weed Abstracts (CABI); Wheat, Barley & Triticale Abstracts (CABI); World Agricultural Economics & Rural Sociology Abstracts (CABI).

Society Information: The Association of Applied Biologists is a registered charity (No. 275655) founded in 1904. The Association's overall aim is: 'To promote the study and advancement of all branches of Biology and in particular (but without prejudice to the generality of the foregoing), to foster the practice, growth, and development of applied biology, including the application of biological sciences for the production and preservation of food, fiber, and other materials and for the maintenance and improvement of earth's physical environment'.

  20. Journal of biological chemistry Impact Factor 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    Cite
    Research Help Desk (2022). Journal of biological chemistry Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/568/journal-of-biological-chemistry
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of biological chemistry Impact Factor 2024-2025 - ResearchHelpDesk - The Journal of Biological Chemistry welcomes high-quality science that seeks to elucidate the molecular and cellular basis of biological processes. Papers published in JBC can therefore fall under the umbrellas of not only biological chemistry, chemical biology, or biochemistry, but also allied disciplines such as biophysics, systems biology, RNA biology, immunology, microbiology, neurobiology, epigenetics, computational biology, ’omics, and many more. The outcome of our focus on papers that contribute novel and important mechanistic insights, rather than on a particular topic area, is that JBC is truly a melting pot for scientists across disciplines. In addition, JBC welcomes papers that describe methods that will help scientists push their biochemical inquiries forward and resources that will be of use to the research community.

    Beyond the consideration of any one particular article, our mission as a journal is to bring significant, enduring research to the scientific community. We believe it is our responsibility to safeguard the research we publish by providing high-quality review and maintaining strict standards on data presentation and deposition. It is our goal to help scientists disclose their findings in the most efficient and effective way possible by keeping review times short, providing editorial feedback on the manuscript text, and promoting papers after publication. It is our aspiration to facilitate scientific discovery in new ways by exploring new technologies and forging new partnerships.

    The heart of this mission is the publication of original research in the form of Articles and Accelerated Communications, a subset of JBC’s articles that succinctly report particularly compelling advances across biological chemistry. Some JBC papers are also selected as Editors’ Picks, which represent top content in the journal and are highlighted with additional coverage. The journal publishes JBC Reviews to keep readers up to speed with the latest advances across diverse scientific topics; Thematic Series are collections of reviews that cover multiple aspects of a particular field. JBC also publishes Editorials and eLetters to facilitate communication within the biological chemistry community, Meeting Reports discussing findings presented at conferences, Virtual Issues to help readers find collections of papers on their favourite topics, and Classics and Reflections that honour the papers and people that have shaped scientific progress. More information and instructions about these content types are available on the journal's website. Finally, JBC administers an award program established in honor of Herbert Tabor, JBC’s editor-in-chief from 1971 to 2012.

    The review process at JBC relies predominantly on editorial board members who have been vetted and appointed by the editor-in-chief and associate editors. Our editorial board members undergo a comprehensive training process to make our reviews as consistent, thoughtful, and fair as possible for our authors. As of January 2021, JBC is a gold open access journal; the final versions of all articles are free to read without a subscription. The author versions of accepted research papers and JBC Reviews are also posted and freely available within 24 hours of acceptance as Papers in Press. JBC is owned and published by the American Society for Biochemistry and Molecular Biology, Inc.
Abstract & Indexing: Indexed in Medline, PubMed, PubMed Central, Index Medicus, the Science Citation Index, Current Contents - Life Sciences, SCOPUS, BIOSIS Previews, Clarivate Analytics Web of Science, and the Chemical Abstracts Service.
