50 datasets found
  1. Games academics play and their consequences: how authorship, h-index, and...

    • search.dataone.org
    • data.niaid.nih.gov
    • +1more
    Updated Jun 23, 2025
    Cite
    Jan Gogarten; Colin Chapman; Julio Bicca-Marques; Sébastien Calvignac-Spencer; Pengfei Fan; Peter Fashing; Songtao Guo; Claire Hemingway; Fabian Leendertz; Baoguo Li; Ikki Matsuda; Rong Hou; Juan Carlos Serio-Silva; Nils Chr. Stenseth (2025). Games academics play and their consequences: how authorship, h-index, and journal impact factors are shaping the future of academia [Dataset]. http://doi.org/10.5061/dryad.fn2z34tpx
    Explore at:
    Dataset updated
    Jun 23, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Jan Gogarten; Colin Chapman; Julio Bicca-Marques; Sébastien Calvignac-Spencer; Pengfei Fan; Peter Fashing; Songtao Guo; Claire Hemingway; Fabian Leendertz; Baoguo Li; Ikki Matsuda; Rong Hou; Juan Carlos Serio-Silva; Nils Chr. Stenseth
    Time period covered
    Nov 29, 2019
    Description

    Research is a highly competitive profession where evaluation plays a central role; journals are ranked and individuals are evaluated based on their publication number, the number of times they are cited, and their h-index. Yet, such evaluations are often done in inappropriate ways that are damaging to individual careers, particularly for young scholars, and to the profession. Furthermore, as with all indices, people can play games to better their scores. This has resulted in the incentive structure of science increasingly mimicking economic principles, but rather than a monetary gain, the incentive is a higher score. To ensure a diversity of cultural perspectives and individual experiences, we gathered a team of academics in the fields of ecology and evolution from around the world and at different career stages. We first examine how authorship, h-index of individuals, and journal impact factors are being used and abused. Second, we speculate on the consequences of the continued use of ...
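
    For readers unfamiliar with the h-index discussed in this dataset, a minimal illustrative sketch (not part of the dataset or the authors' analysis) of how the metric is computed from an author's per-paper citation counts:

        def h_index(citations):
            """Largest h such that the author has at least h papers
            with at least h citations each."""
            h = 0
            for rank, c in enumerate(sorted(citations, reverse=True), start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        # Example: seven papers with these citation counts give an h-index of 4.
        print(h_index([25, 8, 5, 4, 3, 3, 0]))  # 4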

  2. Data from: The assessment of science: the relative merits of...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    Updated May 28, 2022
    + more versions
    Cite
    Adam Eyre-Walker; Nina Stoletzki; Adam Eyre-Walker; Nina Stoletzki (2022). Data from: The assessment of science: the relative merits of post-publication review, the impact factor and the number of citations [Dataset]. http://doi.org/10.5061/dryad.2h4j5
    Explore at:
    Dataset updated
    May 28, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Adam Eyre-Walker; Nina Stoletzki; Adam Eyre-Walker; Nina Stoletzki
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Background: The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper and the impact factor of the journal in which the article was published.

    Methodology/principal findings: We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment.

    Conclusions: We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
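
    A minimal sketch of the kind of correlation analysis this abstract describes (illustrative only; the numbers and variable names below are hypothetical and are not taken from the dataset):

        from scipy.stats import spearmanr

        # Hypothetical scores from two assessors and citation counts
        # for the same set of papers.
        assessor_a = [6, 7, 4, 9, 5, 8]
        assessor_b = [5, 8, 5, 7, 4, 9]
        citations  = [12, 40, 3, 55, 10, 33]

        rho_ab, p_ab = spearmanr(assessor_a, assessor_b)  # agreement between assessors
        rho_ac, p_ac = spearmanr(assessor_a, citations)   # assessor score vs. citations
        print(f"assessor agreement: rho={rho_ab:.2f}, p={p_ab:.3f}")
        print(f"score vs. citations: rho={rho_ac:.2f}, p={p_ac:.3f}")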

  3. Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020

    • data.niaid.nih.gov
    • datadryad.org
    • +1more
    zip
    Updated Sep 6, 2020
    Cite
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz (2020). Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020 [Dataset]. http://doi.org/10.5061/dryad.jdfn2z38f
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 6, 2020
    Dataset provided by
    Yale University
    Yale School of Public Health
    Yale New Haven Hospital
    Authors
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: To determine the top 100-ranked (by impact factor) clinical journals' policies toward publishing research previously published on preprint servers (preprints).

    Design: Cross sectional. Main outcome measures: Editorial guidelines toward preprints, journal rank by impact factor.

    Results: 86 (86%) of the journals examined will consider papers previously published as preprints, 13 (13%) determine their decision on a case-by-case basis, and 1 (1%) does not allow preprints.

    Conclusions: We found wide acceptance of publishing preprints in the clinical research community, although researchers may still face uncertainty that their preprints will be accepted by all of their target journals.

    Methods: We examined journal policies of the 100 top-ranked clinical journals using the 2018 impact factors as reported by InCites Journal Citation Reports (JCR). First, we examined all journals with an impact factor greater than 5, and then we manually screened by title and category to identify the first 100 clinical journals. We included only those that publish original research. Next, we checked each journal's editorial policy on preprints. We examined, in order, the journal website, the publisher website, the Transpose Database, and the first 10 pages of a Google search with the journal name and the term "preprint." We classified each journal's policy, as shown in this dataset, as allowing preprints, deciding on a case-by-case basis, or not allowing any preprints. We collected data on April 23, 2020.

    (Full methods can also be found in the previously published paper.)

  4. Biology top 10 journals by Impact Factor, 2013.

    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Ryan P. Womack (2023). Biology top 10 journals by Impact Factor, 2013. [Dataset]. http://doi.org/10.1371/journal.pone.0143460.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Ryan P. Womack
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Biology top 10 journals by Impact Factor, 2013.

  5. Physics top 10 journals by Impact Factor, 2013.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Ryan P. Womack (2023). Physics top 10 journals by Impact Factor, 2013. [Dataset]. http://doi.org/10.1371/journal.pone.0143460.t004
    Explore at:
    xls (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Ryan P. Womack
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Physics top 10 journals by Impact Factor, 2013.

  6. Chemistry top 10 journals by Impact Factor, 2013.

    • figshare.com
    xls
    Updated May 31, 2023
    Cite
    Ryan P. Womack (2023). Chemistry top 10 journals by Impact Factor, 2013. [Dataset]. http://doi.org/10.1371/journal.pone.0143460.t002
    Explore at:
    xls (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Ryan P. Womack
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Chemistry top 10 journals by Impact Factor, 2013.

  7. Journal impact factor and citation distributions in veterinary medicine

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Pang, Daniel (2023). Journal impact factor and citation distributions in veterinary medicine [Dataset]. http://doi.org/10.7910/DVN/FK2PWD
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Pang, Daniel
    Description

    These data were used for a paper exploring the relationship between the journal impact factor and underlying citation distributions in selected veterinary journals. Citation reports were generated according to the method of Lariviere et al. 2016 bioRxiv: http://dx.doi.org/10.1101/062109
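
    For reference, the journal impact factor underlying such citation distributions is the standard two-year ratio: citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items published in those two years. A minimal sketch with illustrative numbers (not from the dataset):

        def two_year_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
            """Citations received in year Y to items published in years Y-1 and Y-2,
            divided by the number of citable items published in Y-1 and Y-2."""
            return citations_to_prior_two_years / citable_items_prior_two_years

        # Illustrative numbers only: 480 citations to 200 citable items gives a JIF of 2.4.
        print(two_year_impact_factor(480, 200))  # 2.4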

  8. Data sharing policies in scholarly journals across 22 disciplines [dataset]

    • figshare.com
    xlsx
    Updated May 31, 2023
    Cite
    Ui Ikeuchi (2023). Data sharing policies in scholarly journals across 22 disciplines [dataset] [Dataset]. http://doi.org/10.6084/m9.figshare.3144991.v2
    Explore at:
    xlsx (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ui Ikeuchi
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Survey period: 08 April - 08 May 2014. Top 10 Impact Factor journals in each of 22 categories.

    Figures https://doi.org/10.6084/m9.figshare.6857273.v1

    Article https://doi.org/10.20651/jslis.62.1_20 https://doi.org/10.15068/00158168

  9. Impact factor data 2017-02-11

    • dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Sood, Suneet (2023). Impact factor data 2017-02-11 [Dataset]. http://doi.org/10.7910/DVN/XR6MR9
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Sood, Suneet
    Description

    Data for impact factor. Fields are descriptive. Visit https://dataone.org/datasets/sha256%3A0bcdcdcf5e67b4b376267a6df16ade1fba10942c4efe3b8cffe14363e1532fc5 for complete metadata about this dataset.

  10. Procedia computer science Impact Factor 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    + more versions
    Cite
    Research Help Desk (2022). Procedia computer science Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/356/procedia-computer-science
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Procedia computer science Impact Factor 2024-2025 - ResearchHelpDesk - Launched in 2009, Procedia Computer Science is an electronic product focusing entirely on publishing high-quality conference proceedings. Procedia Computer Science enables fast dissemination so conference delegates can publish their papers in a dedicated online issue on ScienceDirect, which is then made freely available worldwide. Conference proceedings are accepted for publication in Procedia Computer Science based on quality and are therefore required to meet certain criteria, including quality of the conference, relevance to an international audience and coverage of highly cited or timely topics. Procedia Computer Science will cover proceedings in all topics of Computer Science with the exception of certain workshops in Theoretical Computer Science, as our Electronic Notes in Theoretical Computer Science http://www.elsevier.com/locate/entcs are the primary outlet for these papers. The journal is indexed in Scopus, the largest abstract and citation database of peer-reviewed literature.

    Copyright information: For authors publishing in Procedia Computer Science, the accepted manuscript will be governed by CC BY-NC-ND. For further details see our copyright information.

    What does Procedia Computer Science offer authors and conference organizers? Procedia Computer Science offers a single, highly recognized platform where conference papers can be hosted and accessed by millions of researchers. Authors then know where to go to keep abreast of the latest developments in their field, and get maximum exposure for their own work. All proceedings appear online, on ScienceDirect, within 8 weeks of acceptance of the final manuscripts via the conference organizer and are freely available to all users. To ensure discoverability and citability, the final online papers are individually metadated, XML versions are optimized for search engines, references are linked, and DOI (Digital Object Identifier) numbers are added.

    Why should conference organizers choose Procedia Computer Science? Unlike regular printed conference publications, there is no practical limit to the number of papers a Procedia Computer Science issue can contain, and pricing is affordable, clear and transparent, offering organizers a state-of-the-art platform for their conference in a cost-effective and sustainable manner. Procedia Computer Science offers immediate access for all, with papers online within 8 weeks of acceptance, and free access providing maximum exposure for individual papers and the related conference. Conference delegates can thus access the papers from anywhere on the web during and after the conference. Organizers are credited on the actual issue and are fully responsible for quality control, the review process and the content of individual conference papers. To assist conference organizers in the publication process, templates (both LaTeX and Word) are provided.

    What is the process for submitting conference proceedings to Procedia Computer Science? Proposals should include a synopsis of why such a Procedia Computer Science issue will have an impact on its community, the (sub)fields covered, and an outline of the related conference (or conferences / satellite events) with links to the conference website and previous proceedings where possible. Please include a tentative list of the steering committee / program committee, a short description of the conference peer-review standards (acceptance rate, reviewer selection, etc.) as well as estimates of the number of conference delegates and the number of papers to be accepted for publication. Proposals should be prepared using the template here. Please note that we do not accept proposals from individual authors for Procedia titles.

    What are the costs associated with Procedia Computer Science? There is an agreed fee, based on the size of the proceedings, which includes online publication on ScienceDirect and free access after acceptance. In addition to the online version there is the possibility to purchase paper copies, CD-ROMs and USB sticks. Moreover, interactive media possibilities such as webcasts and web seminars can be acquired for extra exposure both before and after the conference.

    Can Procedia Computer Science be sponsored? We are open to discussing sponsoring possibilities at all times. Sponsors can include funding bodies, government agencies and/or industry. Benefits to sponsors include visibility amongst a scientific audience and the opportunity to communicate messages via a peer-reviewed platform. Benefits to authors and conference organizers are the further dissemination of material to an international audience and even more reduced costs to the overall conference budget.

    What more can Elsevier do to help computer science conference organizers? We are open to discussing even further collaboration, including but not limited to providing global organizational and administrative services and assisting with the setup of new multidisciplinary meetings. Contact your Elsevier Computer Science Publisher for more information.

    Abstracting and indexing: Conference Proceedings Citation Index, Scopus

  11. Data from: Public sharing of research datasets: a pilot study of...

    • data.niaid.nih.gov
    • datadryad.org
    • +1more
    zip
    Updated May 26, 2011
    Cite
    Heather A. Piwowar; Wendy W. Chapman (2011). Public sharing of research datasets: a pilot study of associations [Dataset]. http://doi.org/10.5061/dryad.3td2f
    Explore at:
    zip (available download formats)
    Dataset updated
    May 26, 2011
    Dataset provided by
    University of Pittsburgh
    Authors
    Heather A. Piwowar; Wendy W. Chapman
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    The public sharing of primary research datasets potentially benefits the research community but is not yet common practice. In this pilot study, we analyzed whether data sharing frequency was associated with funder and publisher requirements, journal impact factor, or investigator experience and impact. Across 397 recent biomedical microarray studies, we found investigators were more likely to publicly share their raw dataset when their study was published in a high-impact journal and when the first or last authors had high levels of career experience and impact. We estimate the USA's National Institutes of Health (NIH) data sharing policy applied to 19% of the studies in our cohort; being subject to the NIH data sharing plan requirement was not found to correlate with increased data sharing behavior in multivariate logistic regression analysis. Studies published in journals that required a database submission accession number as a condition of publication were more likely to share their data, but this trend was not statistically significant. These early results will inform our ongoing larger analysis, and hopefully contribute to the development of more effective data sharing initiatives. Earlier version presented at ASIS&T and ISSI Pre-Conference: Symposium on Informetrics and Scientometrics 2009

  12. Data from: Sharing detailed research data is associated with increased...

    • data.niaid.nih.gov
    • dataone.org
    • +1more
    zip
    Updated May 26, 2011
    + more versions
    Cite
    Heather A. Piwowar; Roger S. Day; Douglas B. Fridsma (2011). Sharing detailed research data is associated with increased citation rate [Dataset]. http://doi.org/10.5061/dryad.j2c4g
    Explore at:
    zip (available download formats)
    Dataset updated
    May 26, 2011
    Dataset provided by
    University of Pittsburgh School of Medicine
    Authors
    Heather A. Piwowar; Roger S. Day; Douglas B. Fridsma
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Sharing research data provides benefit to the general scientific community, but the benefit is less obvious for the investigator who makes his or her data available. We examined the citation history of 85 cancer microarray clinical trial publications with respect to the availability of their data. The 48% of trials with publicly available microarray data received 85% of the aggregate citations. Publicly available data was significantly (p = 0.006) associated with a 69% increase in citations, independently of journal impact factor, date of publication, and author country of origin using linear regression. This correlation between publicly available data and increased literature impact may further motivate investigators to share their detailed research data.
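
    A rough sketch of the kind of multivariate regression this abstract describes (not the authors' code; the column names, example values, and the log transformation of citation counts are assumptions for illustration):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical frame mirroring the variables named in the abstract.
        df = pd.DataFrame({
            "citations":     [10, 45, 3, 60, 22, 15, 80, 5],
            "data_shared":   [0, 1, 0, 1, 1, 0, 1, 0],   # publicly available data?
            "impact_factor": [2.1, 8.4, 1.5, 9.2, 4.0, 2.8, 11.3, 1.9],
            "pub_year":      [2000, 2002, 2001, 2003, 2002, 2000, 2003, 2001],
        })

        # Linear regression of log-transformed citation counts on data availability,
        # adjusting for journal impact factor and publication year.
        model = smf.ols("np.log(citations + 1) ~ data_shared + impact_factor + pub_year",
                        data=df).fit()
        print(model.params["data_shared"])  # coefficient for public data availability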

  13. Data from: Effect of impact factor and discipline on journal data sharing...

    • tandf.figshare.com
    • commons.datacite.org
    txt
    Updated Feb 6, 2024
    Cite
    David B. Resnik; Melissa Morales; Rachel Landrum; Min Shi; Jessica Minnier; Nicole A. Vasilevsky; Robin E. Champieux (2024). Effect of impact factor and discipline on journal data sharing policies [Dataset]. http://doi.org/10.6084/m9.figshare.7887080.v2
    Explore at:
    txt (available download formats)
    Dataset updated
    Feb 6, 2024
    Dataset provided by
    Taylor & Francis
    Authors
    David B. Resnik; Melissa Morales; Rachel Landrum; Min Shi; Jessica Minnier; Nicole A. Vasilevsky; Robin E. Champieux
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data sharing is crucial to the advancement of science because it facilitates collaboration, transparency, reproducibility, criticism, and re-analysis. Publishers are well-positioned to promote sharing of research data by implementing data sharing policies. While there is an increasing trend toward requiring data sharing, not all journals mandate that data be shared at the time of publication. In this study, we extended previous work to analyze the data sharing policies of 447 journals across several scientific disciplines, including biology, clinical sciences, mathematics, physics, and social sciences. Our results showed that only a small percentage of journals require data sharing as a condition of publication, and that this varies across disciplines and Impact Factors. Both Impact Factors and discipline are associated with the presence of a data sharing policy. Our results suggest that journals with higher Impact Factors are more likely to have data sharing policies; use shared data in peer review; require deposit of specific data types into publicly available data banks; and refer to reproducibility as a rationale for sharing data. Biological science journals are more likely than social science and mathematics journals to require data sharing.

  14. Data from: The effectiveness of journals as arbiters of scientific quality

    • zenodo.org
    • data.niaid.nih.gov
    • +2more
    Updated May 31, 2022
    Cite
    C.E. Timothy Paine; Charles W. Fox; C. E. Timothy Paine; C.E. Timothy Paine; Charles W. Fox; C. E. Timothy Paine (2022). Data from: The effectiveness of journals as arbiters of scientific quality [Dataset]. http://doi.org/10.5061/dryad.6nh4fc2
    Explore at:
    Dataset updated
    May 31, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    C.E. Timothy Paine; Charles W. Fox; C. E. Timothy Paine; C.E. Timothy Paine; Charles W. Fox; C. E. Timothy Paine
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Academic publishers purport to be arbiters of knowledge, aiming to publish studies that advance the frontiers of their research domain. Yet the effectiveness of journal editors at identifying novel and important research is generally unknown, in part because of the confidential nature of the editorial and peer-review process. Using questionnaires, we evaluated the degree to which journals are effective arbiters of scientific impact in the domain of Ecology, quantified by three key criteria. First, journals discriminated against low-impact manuscripts: the probability of rejection increased as the number of citations gained by the published paper decreased. Second, journals were more likely to publish high-impact manuscripts (those that obtained citations in the 90th percentile for their journal) than run-of-the-mill manuscripts; editors were only 23 and 41% as likely to reject an eventual high-impact paper (pre- versus post-review rejection) compared to a run-of-the-mill paper. Third, editors did occasionally reject papers that went on to be highly cited. Error rates were low, however: only 3.8% of rejected papers gained more citations than the median article in the journal that rejected them, and only 9.2% of rejected manuscripts went on to be high-impact papers in the (generally lower impact factor) publishing journal. The effectiveness of scientific arbitration increased with journal prominence, although some highly prominent journals were no more effective than much less prominent ones. We conclude that the academic publishing system, founded on peer review, appropriately recognises the significance of research contained in manuscripts, as measured by the number of citations that manuscripts obtain after publication, even though some errors are made. We therefore recommend that authors reduce publication delays by choosing journals appropriate to the significance of their research.

  15. Transparency and Openness Promotion (TOP) Factor scores of leading economics...

    • dataverse.harvard.edu
    Updated Feb 8, 2021
    Cite
    Aleksandar Bogdanoski; Sarah Stillman (2021). Transparency and Openness Promotion (TOP) Factor scores of leading economics journals [Dataset]. http://doi.org/10.7910/DVN/JINXLP
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 8, 2021
    Dataset provided by
    Harvard Dataverse
    Authors
    Aleksandar Bogdanoski; Sarah Stillman
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This dataset contains TOP Factor (https://www.cos.io/initiatives/top-guidelines) scores for the top 50 economics journals ranked based on their Impact Factor in October 2020.

  16. Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    + more versions
    Cite
    Research Help Desk (2022). Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/395/annals-of-medical-physiology
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk - Annals of Medical Physiology (Ann Med Physiol.) is a double-blind peer-reviewed quarterly journal, published both online and in print, that aims to provide the most complete and reliable source of information on current developments in the field of medical physiology. The emphasis is on publishing quality research papers rapidly and keeping them freely available to researchers worldwide through an open-access policy. Annals of Medical Physiology serves an important role by encouraging, fostering and promoting developments in various areas of medical physiology. The journal publishes reviews, original research articles and brief communications in all areas of medical physiology. Annals of Medical Physiology is registered with the Registrar of Newspapers for India (RNI), Ministry of Information and Broadcasting, Government of India, vide registration number TELENG/2018/75630 dated 25th June 2018.

    Abstract & indexing: International Institute of Organized Research (I2OR), PKP Index, AcademicKeys, DOI-DS, Openarchives, Google Scholar, Directory of Research Journals Indexing (DRJI), ResearchBib, International Citation Index, Scientific Indexing Services (SIS), Bielefeld Academic Search Engine (BASE), J-Gate, ICMJE, Zenodo, Root Indexing, Eurasian Scientific Journal Index (ESJI), Elektronische Zeitschriftenbibliothek (EZB), Zeitschriftendatenbank (ZDB), OpenAIRE, Directory of Open Access Scholarly Resources (ROAD), WorldCat, Scilit, EduIndex

  17. The Biodiversity Footprint Database

    • zenodo.org
    • data.niaid.nih.gov
    Updated Feb 16, 2024
    Cite
    Sami El Geneidy; Sami El Geneidy; Stefan Baumeister; Stefan Baumeister; Maiju Peura; Maiju Peura; Kotiaho, Janne, S.; Kotiaho, Janne, S. (2024). The Biodiversity Footprint Database [Dataset]. http://doi.org/10.5281/zenodo.8369650
    Explore at:
    Dataset updated
    Feb 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sami El Geneidy; Sami El Geneidy; Stefan Baumeister; Stefan Baumeister; Maiju Peura; Maiju Peura; Kotiaho, Janne, S.; Kotiaho, Janne, S.
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The Biodiversity Footprint Database contains global consumption-based, monetary biodiversity impact factors for 44 countries and five rest-of-the-world regions. The dataset has been compiled by combining information from the EXIOBASE and LC-IMPACT databases. In addition, the EXIOBASE database has been analyzed with the pymrio analysis tool to determine the geographical location of the consumption-based biodiversity impacts. The mid-point impact factors from EXIOBASE are based on 2019 data, but the regional analysis with pymrio is based on 2011 data. EXIOBASE version 3.8.2 and LC-IMPACT version 1.3 were used. The data are currently not peer-reviewed and under submission; the database will be open access after publication. The preprint of the manuscript can be found at: https://doi.org/10.48550/arXiv.2309.14186

    About the units

    The unit used in the database is the biodiversity equivalent (BDe). The biodiversity equivalent, as we call it, is more commonly known as the global potentially disappeared fraction of species (global PDF; Verones et al., 2020). Thus, the monetary biodiversity impact factors are presented in the form BDe/€.

    Prices are in basic prices and the conversion factors to transform purchaser prices (e.g. financial accounting prices) to basic prices are provided for Finland (and later for all regions), based on EXIOBASE supply and use tables (SUT).

    Content of files

    BiodiversityFootprintDatabase.xlsx

    The biodiversity impact factors, regional abbreviations and basic price conversion factors for Finland.

    BiodiversityFootprintDatabase_DetailedData.zip

    The detailed data used to combine EXIOBASE and LC-IMPACT data after the EXIOBASE data was analyzed with the pymrio tool. Contains folders for each driver of biodiversity loss according to the LC-IMPACT classification.

    20220406_Exio3stressorcode _2011.py & 20220406_Exio3StressorAggregationCode_2011.py

    The pymrio codes that were used to analyze EXIOBASE and the geographical location of the drivers of biodiversity loss (mid-point indicators).
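
    The .py files above contain the authors' actual analysis. As a rough orientation only, a minimal pymrio sketch of the general workflow (the file path and the printed account are placeholders, not the authors' settings):

        import pymrio

        # Parse a locally downloaded EXIOBASE 3 release (placeholder path).
        exio3 = pymrio.parse_exiobase3(path="IOT_2019_pxp.zip")

        # Compute the standard accounts, including consumption-based results.
        exio3.calc_all()

        # Consumption-based accounts (D_cba) per stressor, region and sector;
        # combining such results with LC-IMPACT characterization factors is what
        # yields the biodiversity impact factors described above.
        print(exio3.satellite.D_cba.head())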

  18. Data from: Evaluating the Impact of Altmetrics

    • search.dataone.org
    • dataone.org
    • +1more
    Updated Jan 3, 2013
    + more versions
    Cite
    Drew Wright (2013). Evaluating the Impact of Altmetrics [Dataset]. http://identifiers.org/ark:/90135/q17m05w4/2/mrt-eml.xml
    Explore at:
    Dataset updated
    Jan 3, 2013
    Dataset provided by
    ONEShare Repository
    Authors
    Drew Wright
    Description

    Librarians, publishers, and researchers have long placed significant emphasis on journal metrics such as the impact factor. However, these tools do not take into account the impact of research outside of citations and publications. Altmetrics seek to describe the reach of scholarly activity across the Internet and social media to paint a more vivid picture of the scholarly landscape. In order to examine the impact of altmetrics on scholarly activity, it is helpful to compare these new tools to an existing method. Citation counts are currently the standard for determining the impact of a scholarly work, and two studies were conducted to examine the correlation between citation count and altmetric data. First, a set of highly cited papers was chosen across a variety of disciplines, and their citation counts were compared with the altmetrics generated from Total-Impact.org. Second, to evaluate the hypothesized increased impact of altmetrics on recently published articles, a set of articles published in 2011 were taken from a sampling of journals with high impact factors, both subscription-based and open access, and the altmetrics were then compared to their citation counts.

  19. rchampieux/Biomedical_Journal_Data_Sharing_Policies: Data and Code Release...

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Rachel Landrum (2020). rchampieux/Biomedical_Journal_Data_Sharing_Policies: Data and Code Release for Publication [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_2595590
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Melissa Morales
    Jessica Minnier
    Rachel Landrum
    Min Shi
    David B Resnik
    Robin Champieux
    Nicole Vasilevsky
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data and code for manuscript:

    David B Resnik, Melissa Morales, Rachel Landrum, Min Shi, Jessica Minnier, Nicole A. Vasilevsky & Robin E. Champieux (2019) Effect of Impact Factor and Discipline on Journal Data Sharing Policies, Accountability in Research, DOI: 10.1080/08989621.2019.1591277

    Zenodo pre-print DOI: https://doi.org/10.5281/zenodo.2592682

    Data collection utilized three sources:

    2016 InCites Journal Citation Reports

    Directory of Open Access Journals

    Journal websites and author guidelines

    The data was collected and analyzed between May 2018 and October 2018.

    Data and Code

    Data can be found in data/if-discipline-datasharing-policy-rawdata-1.0.0.csv.

    Analysis code for tables and figures can be seen in code/analysis_report.md (author of code: Jessica Minnier, OHSU, @jminnier)
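
    A minimal sketch for loading the released raw data for inspection (the use of pandas is an assumption; the column names are whatever the CSV defines and are not listed here):

        import pandas as pd

        # Raw data file named in the release notes above.
        df = pd.read_csv("data/if-discipline-datasharing-policy-rawdata-1.0.0.csv")

        print(df.shape)    # journals (rows) x recorded variables (columns)
        print(df.columns)  # column names as defined in the CSV
        print(df.head())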

  20. Table 3 Selection of impact factors

    • dataverse.harvard.edu
    Updated May 31, 2023
    Cite
    xining wang (2023). Table 3 Selection of impact factors [Dataset]. http://doi.org/10.7910/DVN/JIXSFR
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 31, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    xining wang
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Table 3 Selection of impact factors
