77 datasets found
  1. Scimago Journal Rankings

    • scimagojr.com
    • vnufulimi.com
    • +9 more
    csv
    Updated Jun 26, 2017
    Cite
    Scimago Lab (2017). Scimago Journal Rankings [Dataset]. https://www.scimagojr.com/journalrank.php
    Explore at:
    Available download formats: csv
    Dataset updated
    Jun 26, 2017
    Dataset authored and provided by
    Scimago Lab
    Description

    Indicators for academic journals, developed from the information contained in the Scopus database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains.
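    The rankings can be downloaded as a CSV from the URL cited above. Below is a minimal sketch for loading and inspecting such an export with pandas; the local file name, the semicolon delimiter, the comma decimal mark, and the column names are assumptions about a typical SJR export and may need adjusting to the actual file.

```python
# Minimal sketch: load a downloaded SJR export and list the top journals by SJR score.
# Assumptions: the file was saved locally as "scimagojr.csv", is semicolon-delimited,
# uses ',' as the decimal mark, and has columns such as "Title" and "SJR".
import pandas as pd

df = pd.read_csv("scimagojr.csv", sep=";", decimal=",")
print(df.columns.tolist())                             # inspect the available indicator columns
top = df.sort_values("SJR", ascending=False).head(10)  # ten highest-ranked journals
print(top[["Title", "SJR"]])
```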

  2. An analysis of the current overlay journals

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 18, 2022
    Cite
    Rousi, Antti M. (2022). An analysis of the current overlay journals [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6420517
    Explore at:
    Dataset updated
    Oct 18, 2022
    Dataset provided by
    Laakso, Mikael
    Rousi, Antti M.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Research data to accommodate the article "Overlay journals: a study of the current landscape" (https://doi.org/10.1177/09610006221125208)

    Identifying the sample of overlay journals was an exploratory process (carried out between April 2021 and February 2022). The sample of investigated overlay journals was identified using the websites of Episciences.org (2021), Scholastica (2021), Free Journal Network (2021), Open Journals (2021), PubPub (2022), and Wikipedia (2021). In total, this study identified 34 overlay journals. Please see the paper for details about the excluded journal types.

    The journal ISSN numbers, manuscript source repositories, first overlay volumes, article volumes, publication languages, peer-review type, licence for published articles, author costs, publisher types, submission policy, and preprint availability policy were observed by inspecting journal editorial policies and submission guidelines found on journal websites. The overlay journals’ ISSN numbers were identified by examining journal websites and cross-checking this information with Ulrich’s periodicals database (Ulrichsweb, 2021). Journals that published review reports, either with reviewers’ names or anonymously, were classified as operating with open peer-review. Publisher types defined by Laakso and Björk (2013) were used to categorise the findings concerning the publishers. If the journal website did not include publisher information, the editorial board was interpreted to publish the journal.

    The Organisation for Economic Co-operation and Development (OECD) field of science classification was used to categorise the journals into different domains of science. The journals’ primary OECD fields of science were defined by the authors by examining the journal websites.

    Whether the journals were indexed in the Directory of Open Access Journals (DOAJ), Scopus, or Clarivate Analytics’ Web of Science Core collection’s journal master list was examined by searching the services with journal ISSN numbers and journal titles.

    The identified overlay journals were examined from the viewpoint of both qualitative and quantitative journal metrics. The qualitative metrics comprised the Nordic expert panel rankings of scientific journals, namely the Finnish Publication Forum, the Danish Bibliometric Research Indicator and the Norwegian Register for Scientific Journals, Series and Publishers. Searches were conducted from the web portals of the above services with both ISSN numbers and journal titles. Clarivate Analytics’ Journal Citation Reports database was searched with the use of both ISSN numbers and journal titles to identify whether the journals had a Journal Citation Indicator (JCI), Two-Year Impact Factor (IF) and an Impact Factor ranking (IF rank). The examined Journal Impact Factors and Impact Factor rankings were for the year 2020 (as released in 2021).
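    Indexing checks of the kind described above (DOAJ lookups by ISSN) can be scripted against DOAJ's public search API. The sketch below is one possible way to do so and is not part of the authors' published workflow; the endpoint path and response fields are assumptions based on DOAJ's public API documentation (https://doaj.org/api/docs) and should be verified there.

```python
# Sketch: check whether a journal ISSN is indexed in DOAJ via its public search API.
# Assumption: the search endpoint accepts "issn:<ISSN>" queries and returns a JSON
# object with a "results" list; confirm against the current DOAJ API docs.
import requests

def in_doaj(issn: str) -> bool:
    url = f"https://doaj.org/api/search/journals/issn:{issn}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return len(resp.json().get("results", [])) > 0

print(in_doaj("2050-084X"))  # arbitrary example ISSN; replace with a journal from the sample
```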

  3. Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020

    • zenodo.org
    • data.niaid.nih.gov
    • +1 more
    bin
    Updated Jun 3, 2022
    Cite
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz (2022). Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020 [Dataset]. http://doi.org/10.5061/dryad.jdfn2z38f
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 3, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Objective: To determine the top 100-ranked (by impact factor) clinical journals' policies toward publishing research previously published on preprint servers (preprints).

    Design: Cross sectional. Main outcome measures: Editorial guidelines toward preprints, journal rank by impact factor.

    Results: 86 (86%) of the journals examined will consider papers previously posted as preprints, 13 (13%) make their decision on a case-by-case basis, and 1 (1%) does not allow preprints.

    Conclusions: We found wide acceptance of publishing preprints in the clinical research community, although researchers may still face uncertainty that their preprints will be accepted by all of their target journals.

  4. Data of top 50 most cited articles about COVID-19 and the complications of...

    • search.dataone.org
    • data.niaid.nih.gov
    • +2 more
    Updated Jan 11, 2024
    Cite
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati (2024). Data of top 50 most cited articles about COVID-19 and the complications of COVID-19 [Dataset]. http://doi.org/10.5061/dryad.tx95x6b4m
    Explore at:
    Dataset updated
    Jan 11, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati
    Time period covered
    Jan 1, 2023
    Description

    Background: This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.

    Methods: Identifying the top 50 articles from a pool of 5940 in Scopus, the analysis spans November 2019 to July 2021, employing terms related to COVID-19 and complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors like title, citations, publication year, journal, impact fa...

    A bibliometric analysis of the most cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier's Scopus and Thomson Reuters' Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical speciality, or electronic availability of articles or abstracts. The terms were combined as follows: ("COVID-19" OR "COVID19" OR "SARS-COV-2" OR "SARSCOV2" OR "SARS 2" OR "Novel coronavirus" OR "2019-nCov" OR "Coronavirus") AND ("Complication" OR "Long Term Complication" OR "Post-Intensive Care Syndrome" OR "Venous Thromboembolism" OR "Acute Kidney Injury" OR "Acute Liver Injury" OR "Post COVID-19 Syndrome" OR "Acute Cardiac Injury" OR "Cardiac Arrest" OR "Stroke" OR "Embolism" OR "Septic Shock" OR "Disseminated Intravascular Coagulation" OR "Secondary Infection" OR "Blood Clots" OR "Cytokine Release Syndrome" OR "Paediatric Inflammatory Multisystem Syndrome" OR "Vaccine...

    Data of top 50 most cited articles about COVID-19 and the complications of COVID-19

    This dataset contains information about the top 50 most cited articles about COVID-19 and the complications of COVID-19. We have looked into a variety of research and clinical factors for the analysis.

    Description of the data and file structure

    The data sheet offers a comprehensive analysis of the selected articles. It delves into specifics such as the publication year of the top 50 articles, the journals responsible for publishing them, and the geographical region with the highest number of citations in this elite list. Moreover, the sheet sheds light on the key players involved, including authors and their affiliated departments, in crafting the top 50 most cited articles.

    Beyond these fundamental aspects, the data sheet goes on to provide intricate details related to the study types and topics prevalent in the top 50 articles. To enrich the analysis, it incorporates clinical data, capturing...

  5. Age, sex and country of survey respondents

    • plos.figshare.com
    xls
    Updated Aug 23, 2024
    Cite
    Michele Salvagno; Alessandro De Cassai; Stefano Zorzi; Mario Zaccarelli; Marco Pasetto; Elda Diletta Sterchele; Dmytro Chumachenko; Alberto Giovanni Gerli; Razvan Azamfirei; Fabio Silvio Taccone (2024). Age,sex and country of survey respondents. [Dataset]. http://doi.org/10.1371/journal.pone.0309208.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Aug 23, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Michele Salvagno; Alessandro De Cassai; Stefano Zorzi; Mario Zaccarelli; Marco Pasetto; Elda Diletta Sterchele; Dmytro Chumachenko; Alberto Giovanni Gerli; Razvan Azamfirei; Fabio Silvio Taccone
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Natural Language Processing (NLP) is a subset of artificial intelligence that enables machines to understand and respond to human language through Large Language Models (LLMs). These models have diverse applications in fields such as medical research, scientific writing, and publishing, but concerns such as hallucination, ethical issues, bias, and cybersecurity need to be addressed. To understand the scientific community’s understanding and perspective on the role of Artificial Intelligence (AI) in research and authorship, a survey was designed for corresponding authors in top medical journals. An online survey was conducted from July 13th, 2023, to September 1st, 2023, using the SurveyMonkey web instrument; the population of interest was corresponding authors who published in 2022 in the 15 highest-impact medical journals, as ranked by the Journal Citation Reports. The survey link was sent to all identified corresponding authors by mail. A total of 266 authors answered, and 236 entered the final analysis. Most of the researchers (40.6%) reported having moderate familiarity with artificial intelligence, while a minority (4.4%) had no associated knowledge. Furthermore, the vast majority (79.0%) believe that artificial intelligence will play a major role in the future of research. Of note, no correlation between academic metrics and artificial intelligence knowledge or confidence was found. The results indicate that although researchers have varying degrees of familiarity with artificial intelligence, its use in scientific research is still in its early phases. Despite lacking formal AI training, many scholars publishing in high-impact journals have started integrating such technologies into their projects, including rephrasing, translation, and proofreading tasks. Efforts should focus on providing training for their effective use, establishing guidelines by journal editors, and creating software applications that bundle multiple integrated tools into a single platform.

  6. Open access practices of selected library science journals

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Nov 25, 2024
    + more versions
    Cite
    Jennifer Jordan; Blair Solon; Stephanie Beene (2024). Open access practices of selected library science journals [Dataset]. http://doi.org/10.5061/dryad.pvmcvdnt3
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 25, 2024
    Dataset provided by
    University of New Mexico
    Authors
    Jennifer Jordan; Blair Solon; Stephanie Beene
    License

    CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html

    Description

    The data in this set were culled from the Directory of Open Access Journals (DOAJ), the ProQuest database Library and Information Science Abstracts (LISA), and a sample of peer-reviewed scholarly journals in the field of Library Science. The data include journals that are open access, as first defined by the Budapest Open Access Initiative: "By 'open access' to [scholarly] literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself." Starting with a batch of 377 journals, we focused our dataset to include journals that met the following criteria: 1) peer-reviewed, 2) written in English or abstracted in English, 3) actively published at the time of analysis, and 4) scoped to librarianship. The dataset presents an overview of the landscape of open access scholarly publishing in the LIS field during a very specific time period, spring and summer of 2023.

    Methods

    Data Collection: In the spring of 2023, researchers gathered 377 scholarly journals whose content covered the work of librarians, archivists, and affiliated information professionals. These data encompassed 221 journals from the ProQuest database Library and Information Science Abstracts (LISA), widely regarded as an authoritative database in the field of librarianship. From the Directory of Open Access Journals, we included 144 LIS journals. We also included 12 other journals not indexed in DOAJ or LISA, based on the researchers' knowledge of existing OA library journals. The data are separated into several different sets representing the different indices and journals we searched. The first set includes journals from the database LISA. The following fields are in this dataset:

    Journal: title of the journal

    Publisher: title of the publishing company

    Open Data Policy: lists whether an open data policy exists and what the policy is

    Country of publication: country where the journal is published

    Open ranking: details whether the journal is diamond, gold, and/or green

    Open peer review: specifies if the journal does open peer review

    Author retains copyright: explains copyright policy

    Charges: Details whether there is an article processing charge

    In DOAJ: details whether the journal is also published in the Directory of Open Access Journals

    The second set includes similar information, but it includes the titles of journals listed in the DOAJ.

    Journal: states the title of the journal

    Publisher: title of the publishing company

    Country: country where the journal is published

    Open Data Policy: lists whether an open data policy exists

    Open Data Notes: Details about the open data policy

    OA since: lists when the journal became open access

    Open ranking: details whether the journal is diamond, gold, and/or green

    Open peer review: specifies if the journal does open peer review

    Author Holds Copyright without Restriction: lists whether the author holds copyright without restriction

    APC: Details whether there is an article processing charge

    Type of CC: lists the Creative Commons license applied to the journal articles

    In LISA: details whether the journal is also published in the Library and Information Science Abstracts database

    A third dataset includes twelve scholarly, peer reviewed journals focused on Library and Information Science but not included in the DOAJ or LISA.

    Journal: states the title of the journal

    Publisher: title of the publishing company

    Country: country where the journal is published

    Open Data Policy: lists whether an open data policy exists

    Open Data Notes: Details about the open data policy

    Open ranking: details whether the journal is diamond, gold, and/or green

    Open peer review: specifies if the journal does open peer review

    Author Holds Copyright without Restriction: lists whether the author holds copyright without restriction

    APC: Details whether there is an article processing charge

    Type of CC: lists the Creative Commons license applied to the journal articles

    In LISA?: details whether the journal is also published in the Library and Information Science Abstracts database

    Data Processing: The researchers downloaded an Excel file from the publisher ProQuest that listed the 221 journals included in LISA. From the DOAJ, the researchers searched and scoped to build an initial list. Thus, 144 journals were identified after limiting search results to English-language-only journals and those whose scope fell under the following DOAJ search terms: librar* (to cover library, libraries, librarian, librarians, librarianship). Journals also needed to have been categorized within the DOAJ subject heading "Bibliography. Library science. Information resources." The journals that were in neither index were included based on the researchers' knowledge of current scholarly, peer-reviewed journals that would count toward tenure at their own university, an R1 university. Once the journals were identified, the researchers divided the journals amongst each other and scoped them for the following criteria: 1) peer-reviewed, 2) written in English or abstracted in English, 3) actively published at the time of analysis, and 4) scoped to librarianship. The end result was 134 journals that the researchers then explored on their individual websites to identify the following items: open data policies, open access publication options, country of origin, publisher, and peer review process. The researchers also looked for article processing costs, type of Creative Commons licensing (open licenses that allow users to redistribute and sometimes remix intellectual property), and whether the journals were included in the DOAJ and/or LISA index.

    References: Budapest Open Access Initiative. (2002) http://www.soros.org/openaccess/
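    As a quick illustration of how the tabular sets described above could be summarised once downloaded, the sketch below counts journals by APC status and open-access ranking; the file name and exact column labels are assumptions and should be matched to the actual files and headers in the deposit.

```python
# Sketch: tabulate APC status and open ranking for one of the journal lists described above.
# "lisa_journals.csv" and the column names ("Charges", "Open ranking", "In DOAJ") are
# assumptions; adjust them to the actual file and headers in the Dryad deposit.
import pandas as pd

df = pd.read_csv("lisa_journals.csv")
print(df["Charges"].value_counts(dropna=False))        # how many journals report an APC
print(df["Open ranking"].value_counts(dropna=False))   # diamond / gold / green breakdown
print(pd.crosstab(df["Open ranking"], df["In DOAJ"]))  # OA model vs. DOAJ inclusion
```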

  7. Open access publishers: the DOAJ Seal profile

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Rodrigues, Rosângela S. (2020). Open access publishers: the DOAJ Seal profile [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3598233
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Araújo, Breno K. de
    Rodrigues, Rosângela S.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The DOAJ database was created in 2003 and includes almost 14,000 peer-reviewed open access journals covering all knowledge areas, published in 130 countries. Titles must pass a selective process that assures their quality. DOAJ is maintained by Infrastructure Services for Open Access (IS4OA), and its funding is derived from donations (40% from publishers and 60% from the public sector). DOAJ introduced a quality distinction, called the DOAJ Seal, to identify the most prominent journals. There are 1354 journals (around 10% of the total) that have been awarded the Seal.

    The search strategy involved using the Seal option, then ranking the journals to identify the biggest publishers, the number of journals and the number of articles. We have extracted the following indicators from DOAJ: publisher, title, ISSN, country, number of articles, knowledge area (according to the DOAJ classification), value of article processing charges in USD, time for publication in weeks, and year of indexing in DOAJ.

  8. Data from: Inventory of online public databases and repositories holding...

    • agdatacommons.nal.usda.gov
    • datadiscoverystudio.org
    • +2 more
    txt
    Updated Feb 8, 2024
    + more versions
    Cite
    Erin Antognoli; Jonathan Sears; Cynthia Parr (2024). Inventory of online public databases and repositories holding agricultural data in 2017 [Dataset]. http://doi.org/10.15482/USDA.ADC/1389839
    Explore at:
    Available download formats: txt
    Dataset updated
    Feb 8, 2024
    Dataset provided by
    Ag Data Commons
    Authors
    Erin Antognoli; Jonathan Sears; Cynthia Parr
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    United States agricultural researchers have many options for making their data available online. This dataset aggregates the primary sources of ag-related data and determines where researchers are likely to deposit their agricultural data. These data serve both as a current landscape analysis and as a baseline for future studies of ag research data.

    Purpose: As sources of agricultural data become more numerous and disparate, and collaboration and open data become more expected if not required, this research provides a landscape inventory of online sources of open agricultural data. An inventory of current agricultural data sharing options will help assess how the Ag Data Commons, a platform for USDA-funded data cataloging and publication, can best support data-intensive and multi-disciplinary research. It will also help agricultural librarians assist their researchers in data management and publication. The goals of this study were to:

    • establish where agricultural researchers in the United States -- land grant and USDA researchers, primarily ARS, NRCS, USFS and other agencies -- currently publish their data, including general research data repositories, domain-specific databases, and the top journals
    • compare how much data is in institutional vs. domain-specific vs. federal platforms
    • determine which repositories are recommended by top journals that require or recommend the publication of supporting data
    • ascertain where researchers not affiliated with funding or initiatives possessing a designated open data repository can publish data

    Approach: The National Agricultural Library team focused on Agricultural Research Service (ARS), Natural Resources Conservation Service (NRCS), and United States Forest Service (USFS) style research data, rather than ag economics, statistics, and social sciences data. To find domain-specific, general, institutional, and federal agency repositories and databases that are open to US research submissions and have some amount of ag data, resources including re3data, libguides, and ARS lists were analysed. Primarily environmental or public health databases were not included, but places where ag grantees would publish data were considered.

    Search methods: We first compiled a list of known domain-specific USDA / ARS datasets / databases that are represented in the Ag Data Commons, including ARS Image Gallery, ARS Nutrition Databases (sub-components), SoyBase, PeanutBase, National Fungus Collection, i5K Workspace @ NAL, and GRIN. We then searched using search engines such as Bing and Google for non-USDA / federal ag databases, using Boolean variations of "agricultural data" / "ag data" / "scientific data" + NOT + USDA (to filter out the federal / USDA results). Most of these results were domain specific, though some contained a mix of data subjects. We then used search engines such as Bing and Google to find top agricultural university repositories using variations of "agriculture", "ag data" and "university" to find schools with agriculture programs. Using that list of universities, we searched each university web site to see if their institution had a repository for their unique, independent research data if not apparent in the initial web browser search. We found both ag-specific university repositories and general university repositories that housed a portion of agricultural data. Ag-specific university repositories are included in the list of domain-specific repositories. Results included Columbia University – International Research Institute for Climate and Society, UC Davis – Cover Crops Database, etc. If a general university repository existed, we determined whether that repository could filter to include only data results after our chosen ag search terms were applied. General university databases that contain ag data included Colorado State University Digital Collections, University of Michigan ICPSR (Inter-university Consortium for Political and Social Research), and University of Minnesota DRUM (Digital Repository of the University of Minnesota). We then split out NCBI (National Center for Biotechnology Information) repositories. Next we searched the internet for open general data repositories using a variety of search engines, and repositories containing a mix of data, journals, books, and other types of records were tested to determine whether that repository could filter for data results after search terms were applied. General subject data repositories include Figshare, Open Science Framework, PANGEA, Protein Data Bank, and Zenodo. Finally, we compared scholarly journal suggestions for data repositories against our list to fill in any missing repositories that might contain agricultural data. Extensive lists of journals in which USDA published in 2012 and 2016 were compiled, combining search results in ARIS, Scopus, and the Forest Service's TreeSearch, plus the USDA web sites Economic Research Service (ERS), National Agricultural Statistics Service (NASS), Natural Resources and Conservation Service (NRCS), Food and Nutrition Service (FNS), Rural Development (RD), and Agricultural Marketing Service (AMS). The top 50 journals' author instructions were consulted to see if they (a) ask or require submitters to provide supplemental data, or (b) require submitters to submit data to open repositories. Data are provided for journals based on the 2012 and 2016 study of where USDA employees publish their research, ranked by number of articles, including 2015/2016 Impact Factor, Author guidelines, Supplemental Data?, Supplemental Data reviewed?, Open Data (Supplemental or in Repository) Required?, and Recommended data repositories, as provided in the online author guidelines for each of the top 50 journals.

    Evaluation: We ran a series of searches on all resulting general subject databases with the designated search terms. From the results, we noted the total number of datasets in the repository, type of resource searched (datasets, data, images, components, etc.), percentage of the total database that each term comprised, any dataset with a search term that comprised at least 1% and 5% of the total collection, and any search term that returned greater than 100 and greater than 500 results. We compared domain-specific databases and repositories based on parent organization, type of institution, and whether data submissions were dependent on conditions such as funding or affiliation of some kind.

    Results: A summary of the major findings from our data review:

    Over half of the top 50 ag-related journals from our profile require or encourage open data for their published authors. There are few general repositories that are both large AND contain a significant portion of ag data in their collection. GBIF (Global Biodiversity Information Facility), ICPSR, and ORNL DAAC were among those that had over 500 datasets returned with at least one ag search term and had that result comprise at least 5% of the total collection.
    Not even one quarter of the domain-specific repositories and datasets reviewed allow open submission by any researcher regardless of funding or affiliation.

    See the included README file for descriptions of each individual data file in this dataset. Resources in this dataset:
    • Journals (Journals.csv)
    • Journals - Recommended repositories (Repos_from_journals.csv)
    • TDWG presentation (TDWG_Presentation.pptx)
    • Domain Specific ag data sources (domain_specific_ag_databases.csv)
    • Data Dictionary for Ag Data Repository Inventory (Ag_Data_Repo_DD.csv)
    • General repositories containing ag data (general_repos_1.csv)
    • README and file inventory (README_InventoryPublicDBandREepAgData.txt)
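    The evaluation step described above reduces to simple arithmetic: for each repository and search term, compute the share of the total collection returned and flag the 1%/5% and 100/500-result thresholds. A small sketch of that calculation, on made-up counts (not the study's actual numbers), is below.

```python
# Sketch of the threshold arithmetic from the Evaluation step, using made-up numbers:
# for each (repository, term) pair, compute the share of the total collection and flag
# the 1% / 5% and >100 / >500 cut-offs mentioned in the description.
repositories = {
    # repository name: (total datasets in repository, {search term: datasets returned})
    "ExampleRepoA": (120_000, {"agriculture": 6_500, "soil": 900}),
    "ExampleRepoB": (4_000, {"agriculture": 150, "soil": 30}),
}

for repo, (total, hits) in repositories.items():
    for term, count in hits.items():
        share = count / total
        flags = []
        if share >= 0.05:
            flags.append(">=5% of collection")
        elif share >= 0.01:
            flags.append(">=1% of collection")
        if count > 500:
            flags.append(">500 results")
        elif count > 100:
            flags.append(">100 results")
        print(f"{repo:13s} {term:12s} {count:6d} hits ({share:.1%}) {', '.join(flags)}")
```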

  9. Data from: OpCitance: Citation contexts identified from the PubMed Central...

    • databank.illinois.edu
    • aws-databank-alb.library.illinois.edu
    Updated Feb 15, 2023
    Cite
    Tzu-Kun Hsiao; Vetle Torvik (2023). OpCitance: Citation contexts identified from the PubMed Central open access articles [Dataset]. http://doi.org/10.13012/B2IDB-4353270_V1
    Explore at:
    Dataset updated
    Feb 15, 2023
    Authors
    Tzu-Kun Hsiao; Vetle Torvik
    Dataset funded by
    U.S. National Institutes of Health (NIH)
    Description

    Sentences and citation contexts identified from the PubMed Central open access articles.

    The dataset is delivered as 24 tab-delimited text files. The files contain 720,649,608 sentences, 75,848,689 of which are citation contexts. The dataset is based on a snapshot of articles in the XML version of the PubMed Central open access subset (i.e., the PMCOA subset). The PMCOA subset was collected in May 2019. The dataset is created as described in: Hsiao T. K., & Torvik V. I. (manuscript) OpCitance: Citation contexts identified from the PubMed Central open access articles.

    Files: the 24 files A_journal_IntxtCit.tsv through XYZ_journal_IntxtCit.tsv each contain the sentences and citation contexts identified from articles published in journals whose titles start with the corresponding letter(s). Journals starting with P are split across two files (P_p1_journal_IntxtCit.tsv and P_p2_journal_IntxtCit.tsv), and the letters U/V and X/Y/Z are combined into UV_journal_IntxtCit.tsv and XYZ_journal_IntxtCit.tsv, respectively.

    Each row in a file is a sentence/citation context and contains the following columns:
    • pmcid: PMCID of the article.
    • pmid: PMID of the article. If an article does not have a PMID, the value is NONE.
    • location: The article component (abstract, main text, table, figure, etc.) to which the citation context/sentence belongs.
    • IMRaD: The type of IMRaD section associated with the citation context/sentence. I, M, R, and D represent introduction/background, method, results, and conclusion/discussion, respectively; NoIMRaD indicates that the section type is not identifiable.
    • sentence_id: The ID of the citation context/sentence in the article component.
    • total_sentences: The number of sentences in the article component.
    • intxt_id: The ID of the citation.
    • intxt_pmid: PMID of the citation (as tagged in the XML file). If a citation does not have a PMID tagged in the XML file, the value is "-".
    • intxt_pmid_source: The sources where the intxt_pmid can be identified. Xml represents that the PMID is only identified from the XML file; xml,pmc represents that the PMID is not only from the XML file, but also in the citation data collected from the NCBI Entrez Programming Utilities. If a citation does not have an intxt_pmid, the value is "-".
    • intxt_mark: The citation marker associated with the inline citation.
    • best_id: The best source link ID (e.g., PMID) of the citation.
    • best_source: The sources that confirm the best ID.
    • best_id_diff: The comparison result between the best_id column and the intxt_pmid column.
    • citation: A citation context. If no citation is found in a sentence, the value is the sentence.
    • progression: Text progression of the citation context/sentence.

    Supplementary files:
    • PMC-OA-patci.tsv.gz – Contains the best source link IDs for the references (e.g., PMID). Patci [1] was used to identify the best source link IDs, which are mapped to the citation contexts and displayed in the *_journal_IntxtCit.tsv files as the best_id column. Each row is a citation (i.e., a reference extracted from the XML file) and contains the following columns: pmcid (PMCID of the citing article), pos (the citation's position in the reference list), fromPMID (PMID of the citing article), toPMID (source link ID, e.g. PMID, of the citation, identified by Patci), SRC (the sources that confirm the toPMID), MatchDB (the origin bibliographic database of the toPMID), Probability (the match probability of the toPMID), toPMID2 (PMID of the citation as tagged in the XML file), SRC2 (the sources that confirm the toPMID2), intxt_id (the ID of the citation), journal (the first letter of the journal title, which maps to the *_journal_IntxtCit.tsv files), same_ref_string (whether the citation string appears in the reference list more than once), DIFF (the comparison result between the toPMID column and the toPMID2 column), bestID (the best source link ID of the citation), bestSRC (the sources that confirm the best ID), and Match (matching result produced by Patci).
    • Supplementary_File_1.zip – Contains the code for generating the dataset.

    [1] Agarwal, S., Lincoln, M., Cai, H., & Torvik, V. (2014). Patci – a tool for identifying scientific articles cited by patents. GSLIS Research Showcase 2014. http://hdl.handle.net/2142/54885
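    A minimal sketch for working with one of the per-letter files is below. It assumes the TSVs carry a header row with the column names listed above (if not, pass them explicitly via names=), and it assumes that sentences without an inline citation have an empty or "-" intxt_id.

```python
# Sketch: load one per-letter file and keep only rows that are citation contexts.
# Assumptions: the TSV includes a header row with the columns described above, and
# rows without an inline citation have an empty or "-" intxt_id.
import pandas as pd

cols = ["pmcid", "pmid", "location", "IMRaD", "sentence_id", "total_sentences",
        "intxt_id", "intxt_pmid", "intxt_pmid_source", "intxt_mark",
        "best_id", "best_source", "best_id_diff", "citation", "progression"]

df = pd.read_csv("A_journal_IntxtCit.tsv", sep="\t", usecols=cols, dtype=str)

contexts = df[df["intxt_id"].notna() & (df["intxt_id"] != "-")]
print(len(df), "sentences,", len(contexts), "citation contexts")
print(contexts.groupby("IMRaD").size())  # where in the article the citations occur
```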

  10. Data from: A study of the impact of data sharing on article citations using...

    • dataverse.harvard.edu
    • search.dataone.org
    • +2 more
    application/gzip +13
    Updated Sep 4, 2020
    Cite
    Harvard Dataverse (2020). A study of the impact of data sharing on article citations using journal policies as a natural experiment [Dataset]. http://doi.org/10.7910/DVN/ORTJT5
    Explore at:
    Available download formats: text/x-stata-syntax(519), txt(0), png(15306), type/x-r-syntax(569), jar(21709328), pdf(65387), tsv(35864), text/markdown(125), bin(26), application/gzip(111839), text/x-python(0), application/x-stata-syntax(720), tex(3986), text/plain; charset=us-ascii(91)
    Dataset updated
    Sep 4, 2020
    Dataset provided by
    Harvard Dataverse
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before their change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (estimated standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
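    The deposit's own Stata/R/Python files contain the authors' actual analysis; purely as an illustration of the instrumental-variables step described above, a two-stage least squares estimate could be sketched with the linearmodels package as below. The file name, column names, and controls are placeholders, not the authors' specification.

```python
# Sketch of a 2SLS estimate in the spirit of the description: the journal's data-policy
# adoption instruments for whether an article actually shared its data.
# "articles.csv" and its columns are hypothetical placeholders, not the deposit's files.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("articles.csv")
df["const"] = 1.0

model = IV2SLS(
    dependent=df["citations"],          # citation count per article
    exog=df[["const", "article_age"]],  # controls (placeholder)
    endog=df["shared_data"],            # 1 if the article's data are publicly posted
    instruments=df["post_policy"],      # 1 if published after the journal's policy change
)
print(model.fit(cov_type="robust"))
```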

  11. Data from: Design of tables for the presentation and communication of data...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Aug 10, 2024
    + more versions
    Cite
    Miriam Remshard; Simon Queenborough (2024). Design of tables for the presentation and communication of data in ecological and evolutionary biology [Dataset]. http://doi.org/10.5061/dryad.jq2bvq8f3
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 10, 2024
    Dataset provided by
    Yale University
    Authors
    Miriam Remshard; Simon Queenborough
    License

    CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html

    Description

    Tables and charts have long been seen as effective ways to convey data. Much attention has been focused on improving charts, following ideas of human perception and brain function. Tables can also be viewed as two-dimensional representations of data, yet it is only fairly recently that we have begun to apply principles of design that aid the communication of information between the author and reader. In this study, we collated guidelines for the design of data and statistical tables. These guidelines fall under three principles: aiding comparisons, reducing visual clutter, and increasing readability. We surveyed tables published in recent issues of 43 journals in the fields of ecology and evolutionary biology for their adherence to these three principles, as well as author guidelines on journal publisher websites. We found that most of the over 1,000 tables we sampled had no heavy grid lines and little visual clutter. They were also easy to read, with clear headers and horizontal orientation. However, most tables did not aid the vertical comparison of numeric data. We suggest that authors could improve their tables by right-flush aligning numeric columns typeset with a tabular font, clearly identifying statistical significance, and using clear titles and captions. Journal publishers could easily implement these formatting guidelines when typesetting manuscripts.

    Methods: Once we had established the above principles of table design, we assessed their use in issues of 43 widely read ecology and evolution journals (SI 2). Between January and July 2022, we reviewed the tables in the most recent issue published by these journals. For journals without issues (such as Annual Review of Ecology, Evolution, and Systematics, or Biological Conservation), we examined the tables in issues published in a single month, or in the entire most recent volume if few papers were published in that journal on a monthly basis. We reviewed only articles in a traditionally typeset format and published as a PDF or in print. We did not examine the tables in online versions of articles. Having identified all tables for review, we assessed whether these tables followed the above-described best-practice principles for table design and, if not, we noted the way in which these tables failed to meet the outlined guidelines. We initially both reviewed the same 10 tables to ensure that we agreed in our assessment of whether these tables followed each of the principles. Having ensured agreement on how to classify tables, we proceeded to review all subsequent journals individually, while resolving any uncertainties collaboratively. These preliminary table evaluations also showed that whether tables used long format or a tabular font was hard to evaluate objectively without knowing the data or the font used. Therefore, we did not systematically review the extent to which these two guidelines were adhered to.

  12. Number of publications in India FY 2023, by periodicity

    • statista.com
    Updated Sep 26, 2024
    Cite
    Statista (2024). Number of publications in India FY 2023, by periodicity [Dataset]. https://www.statista.com/statistics/1458961/india-number-of-periodicals-by-periodicity/
    Explore at:
    Dataset updated
    Sep 26, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    India
    Description

    As of the financial year 2023, the greatest number of publications registered in India was monthly periodicals, numbering over 47,000. Comparatively, bi- and tri-weekly publications were the fewest in number. Most of the publications were published in Hindi, followed by English.

    Newspapers sustain the Indian print industry. Just under 150,000 newspapers and periodicals were registered and in circulation in India as of the financial year 2023. Of these, daily newspapers were among the most consumed publications in the country, with Hindi and Marathi dailies being the most circulated. Newspapers accounted for the lion’s share of the print industry’s revenue, while the magazine segment raked in seven billion Indian rupees in 2023.

    The Indian publishing market. The country’s publishing industry was estimated to expand in size to over 700 billion Indian rupees by 2024. Alongside newspapers and magazines, books find a large consumer base in the Indian market. Segmented into trade and non-trade books, the Indian book market, while significant, is quite fragmented, crowded with publishers and sellers of varying sizes. Some of the major publishers operating in the country include Jaico Publishing House, Penguin Random House, Rupa Publications, and Hachette India.

  13. Journal of biological chemistry Publication fee - ResearchHelpDesk

    • researchhelpdesk.org
    Updated May 7, 2022
    + more versions
    Cite
    Research Help Desk (2022). Journal of biological chemistry Publication fee - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/publication-fee/568/journal-of-biological-chemistry
    Explore at:
    Dataset updated
    May 7, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of biological chemistry Publication fee - ResearchHelpDesk - The Journal of Biological Chemistry welcomes high-quality science that seeks to elucidate the molecular and cellular basis of biological processes. Papers published in JBC can therefore fall under the umbrellas of not only biological chemistry, chemical biology, or biochemistry, but also allied disciplines such as biophysics, systems biology, RNA biology, immunology, microbiology, neurobiology, epigenetics, computational biology, ’omics, and many more. The outcome of our focus on papers that contribute novel and important mechanistic insights, rather than on a particular topic area, is that JBC is truly a melting pot for scientists across disciplines. In addition, JBC welcomes papers that describe methods that will help scientists push their biochemical inquiries forward and resources that will be of use to the research community. Beyond the consideration of any one particular article, our mission as a journal is to bring significant, enduring research to the scientific community. We believe it is our responsibility to safeguard the research we publish by providing high-quality review and maintaining strict standards on data presentation and deposition. It is our goal to help scientists disclose their findings in the most efficient and effective way possible by keeping review times short, providing editorial feedback on the manuscript text and promoting papers after publication. It is our aspiration to facilitate scientific discovery in new ways by exploring new technologies and forging new partnerships. The heart of this mission is the publication of original research in the form of Articles and Accelerated Communications, a subset of JBC’s articles that succinctly report particularly compelling advances across biological chemistry. Some JBC papers are also selected as Editors’ Picks, which represent top content in the journal and are highlighted with additional coverage. The journal publishes JBC Reviews to keep readers up to speed with the latest advances across diverse scientific topics; Thematic Series are collections of reviews that cover multiple aspects of a particular field. JBC also publishes Editorials and eLetters to facilitate communication within the biological chemistry community, Meeting Reports discussing findings presented at conferences, Virtual Issues to help readers find collections of papers on their favourite topics, and Classics and Reflections that honour the papers and people that have shaped scientific progress. More information and instructions about these content types are available on the journal website. Finally, JBC administers an award program established in honor of Herbert Tabor, JBC’s editor-in-chief from 1971 to 2012. The review process at JBC relies predominantly on editorial board members who have been vetted and appointed by the editor-in-chief and associate editors. Our editorial board members undergo a comprehensive training process to make our reviews as consistent, thoughtful and fair as possible for our authors. As of January 2021, JBC is a gold open access journal; the final versions of all articles are free to read without a subscription. The author versions of accepted research papers and JBC Reviews are also posted and freely available within 24 hours of acceptance as Papers in Press. JBC is owned and published by the American Society for Biochemistry and Molecular Biology, Inc.

    Abstract & Indexing: Indexed in Medline, PubMed, PubMed Central, Index Medicus, the Science Citation Index, Current Contents - Life Sciences, SCOPUS, BIOSIS Previews, Clarivate Analytics Web of Science, and the Chemical Abstracts Service.

  14. Canadian Student-led Academic Journals - platforms and indexing data

    • borealisdata.ca
    • search.dataone.org
    Updated May 4, 2023
    Cite
    Mariya Maistrovskaya (2023). Canadian Student-led Academic Journals - platforms and indexing data [Dataset]. http://doi.org/10.5683/SP3/QXEUVH
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 4, 2023
    Dataset provided by
    Borealis
    Authors
    Mariya Maistrovskaya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Canada
    Description

    This dataset was compiled as part of a study on Barriers and Opportunities in the Discoverability and Indexing of Student-led Academic Journals. The list of student journals and their details is compiled from public sources. This list is used to identify the presence of Canadian student journals in Google Scholar as well as in select indexes and databases: DOAJ, Scopus, Web of Science, Medline, Erudit, ProQuest, and HeinOnline. Additionally, each journal's publishing platform is recorded for use in a correlational analysis against Google Scholar indexing results. For further details, see the README.

  15. Replication Data for: Does Peer Review Identify the Best Papers? A...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Esarey, Justin (2023). Replication Data for: Does Peer Review Identify the Best Papers? A Simulation Study of Editors, Reviewers, and the Social Scientific Publication Process [Dataset]. https://search.dataone.org/view/sha256%3Ae344cd04cf14960381a58b289abcd4b2b8d545bc150ffed6e93099baa1c70b27
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Esarey, Justin
    Description

    How does the structure of the peer review process, which can vary from journal to journal, influence the quality of papers published in that journal? In this paper, I study multiple systems of peer review using computational simulation. I find that, under any system I study, a majority of accepted papers will be evaluated by the average reader as not meeting the standards of the journal. Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor (who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions) can mitigate some of these effects.
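    The replication files presumably contain the author's actual simulation code. Purely as an illustration of the kind of mechanism described (heterogeneous reviewer standards plus noisy signals of quality, with majority-rule acceptance), a toy version might look like the sketch below; every parameter here is invented and none of it reflects the paper's calibration.

```python
# Toy illustration (not the author's model): papers have a latent quality, each reviewer
# sees a noisy signal and applies their own standard, and a paper is accepted when a
# majority of its three reviewers vote to accept. We then ask how many accepted papers
# an "average reader" with their own standard would judge as below the journal's bar.
import random

random.seed(1)
N_PAPERS, N_REVIEWERS = 10_000, 3
accepted, judged_substandard = 0, 0

for _ in range(N_PAPERS):
    quality = random.gauss(0, 1)
    votes = 0
    for _ in range(N_REVIEWERS):
        standard = random.gauss(0.5, 0.5)        # heterogeneous reviewer standards
        signal = quality + random.gauss(0, 0.8)  # noisy read of the paper's quality
        votes += signal > standard
    if votes >= 2:                               # majority rule, no active editor
        accepted += 1
        reader_standard = random.gauss(0.5, 0.5) # an average reader's own bar
        judged_substandard += quality < reader_standard

print(f"accepted: {accepted}, judged below standard by a random reader: "
      f"{judged_substandard / accepted:.1%}")
```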

  16. Academic reception and public dissemination of neurological research between...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Sep 22, 2023
    + more versions
    Cite
    Markus Lauerer; Julian McGinnis (2023). Academic reception and public dissemination of neurological research between 2012 and 2021 [Dataset]. http://doi.org/10.5061/dryad.brv15dvg0
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 22, 2023
    Dataset provided by
    Technical University of Munich
    Authors
    Markus Lauerer; Julian McGinnis
    License

    CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html

    Description

    Fundamental changes in the way scientific research is disseminated have inspired the concept of altmetrics, most prominently the Altmetric Attention Score (AAS). The exact relation between the latter and traditional measures of science reception (e.g. citation count) is unknown. In this study, we determined citation counts and AAS, as well as the ratio between the two (AAS-to-citation ratio), in 138,339 original research and review articles from 86 neurological journals between 2012 and 2021. The journal impact factor was closely correlated with both citation count (rs = 0.73) and AAS (rs = 0.64), whereas it showed a negative association with the AAS-to-citation ratio (rs = −0.26). Reviews accumulated more citations and a higher AAS than original research, while their AAS-to-citation ratio was significantly lower. Citation count was the only metric significantly associated with the number of publications by country (rs = 0.65). There were notable differences between major neurological subspecialties, with Alzheimer’s disease being the article topic with the highest average citation count, AAS, and AAS-to-citation ratio. Our findings suggest that the career of a neurological paper in the academic and public sphere is determined by various and sometimes specific factors.

    Methods: To gain a representative overview of the research in neurology during the decade of interest, we took the top 20 neurology journals by h5 index listed on Google Scholar and combined them with the top 50 neurology and clinical neurology journals according to the SCImago Journal Rank (SJR) (https://www.scimagojr.com). Only journals with at least 100 citable documents (i.e. original research or reviews) over the period of interest were considered, to avoid potential bias from outliers. Any duplicates were removed. In total, 86 journals were chosen for further analyses. The Web of Science (WoS) Core Collection was used for the identification of articles to be included. Every document from the 86 journals listed as either ‘Article’ or ‘Review’ with a final publication year between 2012 and 2021 was included in the analysis. This timeframe was selected for two reasons: first, the company Altmetric (and with it the AAS) was founded in 2011, so 2012 constitutes the earliest year with full coverage; second, since the data collection took place in late 2022, the year 2021 was fully covered in the databases and article citations/online dissemination had already been given some time to accumulate. Citation counts and other metadata for each document were retrieved from the WoS Core Collection database. Impact factors for each journal were retrieved through Clarivate’s Journal Citation Reports (2021 being the most recent data available). AAS data were obtained through the Altmetric API using the documents’ unique Digital Object Identifiers. All data were collected between the 13th and 15th of November, 2022.
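    The reported rs values are Spearman rank correlations; a sketch of how such correlations could be recomputed from the deposited article-level data is below. The file name and column labels are assumptions and should be matched to the files in the zip.

```python
# Sketch: recompute Spearman rank correlations of the kind reported above.
# "articles.csv" and the column names are assumptions; match them to the deposited files.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("articles.csv")
df["aas_citation_ratio"] = df["aas"] / df["citations"].clip(lower=1)  # avoid division by zero

for metric in ["citations", "aas", "aas_citation_ratio"]:
    rs, p = spearmanr(df["journal_impact_factor"], df[metric], nan_policy="omit")
    print(f"impact factor vs {metric}: rs = {rs:.2f} (p = {p:.3g})")
```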

  17. Scimago Country Rankings

    • scimagojr.com
    • turkmath.org
    • +2more
    xlsx
    Updated Jul 1, 2017
    Cite
    Scimago Lab (2017). Scimago Country Rankings [Dataset]. https://www.scimagojr.com/countryrank.php
    Explore at:
    xlsx (available download format)
    Dataset updated
    Jul 1, 2017
    Dataset authored and provided by
    Scimago Lab
    Description

    Country scientific indicators developed from the information contained in the Scopus® database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains, and country rankings can be compared or analyzed separately. Indicators offered for each country: H Index, Documents, Citations, Citations per Document, and Citable Documents.
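
    For readers unfamiliar with the H Index indicator listed above, the following self-contained sketch shows how it can be computed from per-document citation counts; the toy numbers are invented and not taken from the Scimago data.

        def h_index(citations):
            """Largest h such that at least h documents have h or more citations each."""
            ranked = sorted(citations, reverse=True)
            return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

        # Toy example: six documents with these citation counts give an H Index of 3.
        print(h_index([10, 5, 3, 2, 1, 0]))  # -> 3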

  18. Using egocentric analysis to investigate professional networks and...

    • plos.figshare.com
    doc
    Updated Jun 2, 2023
    Cite
    Noriko Hara; Hui Chen; Marcus Antonius Ynalvez (2023). Using egocentric analysis to investigate professional networks and productivity of graduate students and faculty in life sciences in Japan, Singapore, and Taiwan [Dataset]. http://doi.org/10.1371/journal.pone.0186608
    Explore at:
    doc (available download format)
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Noriko Hara; Hui Chen; Marcus Antonius Ynalvez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Japan, Taiwan, Singapore
    Description

    Prior studies showed that scientists’ professional networks contribute to research productivity, but little work has examined what factors predict the formation of professional networks. This study sought to 1) examine what factors predict the formation of international ties between faculty and graduate students and 2) identify how these international ties affect publication productivity in three East Asian countries. Face-to-face surveys and in-depth semi-structured interviews were conducted with a sample of faculty and doctoral students in life sciences at 10 research institutions in Japan, Singapore, and Taiwan. Our final sample consisted of 290 respondents (84 faculty and 206 doctoral students) and 1,435 network members. We used egocentric social network analysis to examine the structure of international ties and how they relate to research productivity. Our findings suggest that overseas graduate training can be a key factor in graduate students’ development of international ties in these countries. Those with a higher proportion of international ties in their professional networks were likely to have published more papers and written more manuscripts. For faculty, international ties did not affect the number of manuscripts written or papers published, but did correlate with an increase in publishing in top journals. The networks we examined were identified by asking study participants with whom they discuss their research; because such relationships do not necessarily appear in explicit co-authorship networks, they are not officially recorded elsewhere. This study sheds light on the relationship between these invisible support networks and researcher productivity.
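
    Purely as an illustration of the egocentric measure described above (the proportion of international ties in each respondent's network), the sketch below computes that share from a hypothetical ego-alter tie list; the identifiers and countries are invented, and the deposited data may be structured differently.

        from collections import defaultdict

        # Hypothetical data: each tie links an ego (respondent) to one alter's country;
        # the ego's own country is kept in a separate lookup table.
        ego_country = {"ego1": "JP", "ego2": "SG"}
        ties = [("ego1", "JP"), ("ego1", "US"), ("ego1", "JP"),
                ("ego2", "SG"), ("ego2", "SG")]

        international = defaultdict(list)
        for ego, alter_country in ties:
            # A tie counts as international when the alter is based outside the ego's country.
            international[ego].append(alter_country != ego_country[ego])

        for ego, flags in international.items():
            share = sum(flags) / len(flags)
            print(f"{ego}: {share:.2f} of ties are international")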

  19. Data from: Reliability of citations of medRxiv preprints in articles...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 7, 2022
    Cite
    Gehanno (2022). Reliability of citations of medRxiv preprints in articles published on COVID-19 in the world leading medical journals [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5985959
    Explore at:
    Dataset updated
    Feb 7, 2022
    Dataset authored and provided by
    Gehanno
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    World
    Description

    Articles on COVID-19 published in 2020 in the BMJ, The Lancet, JAMA, and the NEJM were manually screened to identify all articles citing at least one preprint from medRxiv. We searched PubMed, Google, and Google Scholar to assess whether each preprint had since been published in a peer-reviewed journal, and when. Published articles were screened to assess whether their title, data, or conclusions were identical to those of the preprint version.

  20. The top 100 most influential articles in olfactory disorder: a bibliometric...

    • zenodo.org
    • data.niaid.nih.gov
    Updated Sep 22, 2022
    Cite
    Teng-yu Chen; Hai-yu Hong; Dan Li; Cai-shan Fang; Man-qing Lin; Wang Rui-zhi; Jin-xiang Zhu; Xue-cheng He; Jia-jun Zhang; Qin-dong Liu; Yan Ruan; Min Zhou (2022). The top 100 most influential articles in olfactory disorder: a bibliometric analysis [Dataset]. http://doi.org/10.5281/zenodo.6632052
    Explore at:
    Dataset updated
    Sep 22, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Teng-yu Chen; Hai-yu Hong; Dan Li; Cai-shan Fang; Man-qing Lin; Wang Rui-zhi; Jin-xiang Zhu; Xue-cheng He; Jia-jun Zhang; Qin-dong Liu; Yan Ruan; Min Zhou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: Studies on olfactory disorder have varied considerably in content and field over the past four decades, but influential publications on olfactory disorder have not been analyzed quantitatively. This study aimed to analyze the top 100 most influential articles on olfactory disorder through bibliometrics.

    Methods: The researchers searched the Web of Science Core Collection for publications from 1980 to August 31, 2021. The top 100 most highly cited articles were selected in accordance with the inclusion and exclusion criteria. The journal impact factor and the SCImago Journal Rank indicator were retrieved for each journal in which the top-cited articles were published. The researchers analyzed the data with Microsoft Excel and SPSS 24.0 and used VOSviewer software for data visualization.
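
    As a rough sketch only (the authors report using Microsoft Excel, SPSS 24.0 and VOSviewer rather than code), a top-100 ranking of this kind could be derived from a Web of Science export along the following lines; the file name and column names are assumptions.

        import pandas as pd

        # Hypothetical Web of Science export: one row per record retrieved by the search.
        records = pd.read_csv("wos_export.csv")  # columns: title, journal, year, times_cited

        # Keep documents within the search window, then rank by citation count and take the top 100.
        in_window = records[(records["year"] >= 1980) & (records["year"] <= 2021)]
        top100 = in_window.sort_values("times_cited", ascending=False).head(100)

        print(top100[["title", "journal", "times_cited"]].head())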

    Results: The total citations of the top 100 articles have increased exponentially, especially over the past 2 years. The journal LARYNGOSCOPE contributed the most influential papers (n=6). The United States contributed the most articles among the top 100 (n=52), followed by Germany and the United Kingdom. The University of Pennsylvania published the most influential studies, with a total of 3,852 citations. Richard L. Doty and Thomas Hummel contributed the most influential literature, and the top 100 studies also cited their research frequently. The most influential articles in the field of olfactory disorder mainly focused on COVID-19, Parkinson's disease, and olfactory tests.

    Conclusion: Neurodegenerative diseases and COVID-19-related olfactory disorders have been considered main topics over the past 40 years. This study identified the most influential articles for olfactory disorder researchers, providing guidance for their research.
