18 datasets found
  1. Games academics play and their consequences: how authorship, h-index, and...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Dec 1, 2019
    Cite
    Jan Gogarten; Colin Chapman; Julio Bicca-Marques; Sébastien Calvignac-Spencer; Pengfei Fan; Peter Fashing; Songtao Guo; Claire Hemingway; Fabian Leendertz; Baoguo Li; Ikki Matsuda; Rong Hou; Juan Carlos Serio-Silva; Nils Chr. Stenseth (2019). Games academics play and their consequences: how authorship, h-index, and journal impact factors are shaping the future of academia [Dataset]. http://doi.org/10.5061/dryad.fn2z34tpx
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 1, 2019
    Dataset provided by
    Instituto de Ecología
    Robert Koch Institute
    Sun Yat-sen University
    University of Oslo
    California State University, Fullerton
    Pontifícia Universidade Católica do Rio Grande do Sul
    Northwest University
    George Washington University
    Chubu University
    Laboratoire des Sciences de l'Ingénieur, de l'Informatique et de l'Imagerie
    Authors
    Jan Gogarten; Colin Chapman; Julio Bicca-Marques; Sébastien Calvignac-Spencer; Pengfei Fan; Peter Fashing; Songtao Guo; Claire Hemingway; Fabian Leendertz; Baoguo Li; Ikki Matsuda; Rong Hou; Juan Carlos Serio-Silva; Nils Chr. Stenseth
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Research is a highly competitive profession where evaluation plays a central role; journals are ranked and individuals are evaluated based on their publication number, the number of times they are cited, and their h-index. Yet, such evaluations are often done in inappropriate ways that are damaging to individual careers, particularly for young scholars, and to the profession. Furthermore, as with all indices, people can play games to better their scores. This has resulted in the incentive structure of science increasingly mimicking economic principles, but rather than a monetary gain, the incentive is a higher score. To ensure a diversity of cultural perspectives and individual experiences, we gathered a team of academics in the fields of ecology and evolution from around the world and at different career stages. We first examine how authorship, h-index of individuals, and journal impact factors are being used and abused. Second, we speculate on the consequences of the continued use of these metrics with the hope of sparking discussions that will help our fields move in a positive direction. We would like to see changes in the incentive systems rewarding quality research and guaranteeing transparency. Senior faculty should establish the ethical standards, mentoring practices, and institutional evaluation criteria to create the needed changes.

    Methods: The number of authors on research articles in six journals through time. The area of each circle corresponds to the number of publications with that number of authors in that year. To aid in visual interpretation of the data, a generalized additive model was fitted to the data. For ease of interpretation, the number of authors is truncated at 100, meaning that publications with >100 coauthors are plotted here as including 101 coauthors.

  2. Journal impact factor and citation distributions in veterinary medicine

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Pang, Daniel (2023). Journal impact factor and citation distributions in veterinary medicine [Dataset]. http://doi.org/10.7910/DVN/FK2PWD
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Pang, Daniel
    Description

    These data were used for a paper exploring the relationship between the journal impact factor and the underlying citation distributions in selected veterinary journals. Citation reports were generated according to the method of Larivière et al. (2016), bioRxiv: http://dx.doi.org/10.1101/062109
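The contrast this dataset probes, a mean-based impact factor sitting atop a skewed citation distribution, can be made concrete with a small sketch. The function name and the citation counts below are illustrative, not drawn from the dataset:

```python
from statistics import median

def two_year_impact_factor(citations_per_article, citable_items):
    """JIF for year Y: citations received in Y by items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return sum(citations_per_article) / citable_items

# Hypothetical per-article citation counts for a journal's last two volumes.
cites = [0, 0, 1, 1, 2, 2, 3, 4, 8, 59]

print(two_year_impact_factor(cites, len(cites)))  # 8.0 (mean, pulled up by one outlier)
print(median(cites))                              # 2.0 (the typical article)
```

Because the mean is dominated by a handful of highly cited papers, the JIF can sit far above the citation count of a journal's typical article, which is the gap the citation-distribution approach makes visible.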

  3. Data of top 50 most cited articles about COVID-19 and the complications of...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Jan 10, 2024
    Cite
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati (2024). Data of top 50 most cited articles about COVID-19 and the complications of COVID-19 [Dataset]. http://doi.org/10.5061/dryad.tx95x6b4m
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 10, 2024
    Dataset provided by
    Kasturba Medical College, Mangalore
    Authors
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Background: This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.
    Methods: Identifying the top 50 articles from a pool of 5940 in Scopus, the analysis spans November 2019 to July 2021, employing terms related to COVID-19 and complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors like title, citations, publication year, journal, impact factor, authors, study details, and patient demographics.
    Results: The focus is primarily on 2020 publications (96%), with all articles being open-access. Leading journals include The Lancet, NEJM, and JAMA, with prominent contributions from Internal Medicine (46.9%) and Pulmonary Medicine (14.5%). China played a major role (34.9%), followed by France and Belgium. Clinical features were the primary study topic (68%), often utilizing retrospective designs (24%). Among 22,477 patients analyzed, 54.8% were male, with the most common age group being 26–65 years (63.2%). Complications affected 13.9% of patients, with a recovery rate of 57.8%.
    Conclusion: Analyzing these top-cited articles offers clinicians and researchers a comprehensive, timely understanding of influential COVID-19 literature. This approach uncovers attributes contributing to high citations and provides authors with valuable insights for crafting impactful research.
    As a strategic tool, this analysis facilitates staying updated and making meaningful contributions to the dynamic field of COVID-19 research.
    Methods: A bibliometric analysis of the most cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier's Scopus and Thomson Reuters' Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical speciality, or electronic availability of articles or abstracts. The terms were combined as follows: (“COVID-19” OR “COVID19” OR “SARS-COV-2” OR “SARSCOV2” OR “SARS 2” OR “Novel coronavirus” OR “2019-nCov” OR “Coronavirus”) AND (“Complication” OR “Long Term Complication” OR “Post-Intensive Care Syndrome” OR “Venous Thromboembolism” OR “Acute Kidney Injury” OR “Acute Liver Injury” OR “Post COVID-19 Syndrome” OR “Acute Cardiac Injury” OR “Cardiac Arrest” OR “Stroke” OR “Embolism” OR “Septic Shock” OR “Disseminated Intravascular Coagulation” OR “Secondary Infection” OR “Blood Clots” OR “Cytokine Release Syndrome” OR “Paediatric Inflammatory Multisystem Syndrome” OR “Vaccine Induced Thrombosis with Thrombocytopenia Syndrome” OR “Aspergillosis” OR “Mucormycosis” OR “Autoimmune Thrombocytopenia Anaemia” OR “Immune Thrombocytopenia” OR “Subacute Thyroiditis” OR “Acute Respiratory Failure” OR “Acute Respiratory Distress Syndrome” OR “Pneumonia” OR “Subcutaneous Emphysema” OR “Pneumothorax” OR “Pneumomediastinum” OR “Encephalopathy” OR “Pancreatitis” OR “Chronic Fatigue” OR “Rhabdomyolysis” OR “Neurologic Complication” OR “Cardiovascular Complications” OR “Psychiatric Complication” OR “Respiratory Complication” OR “Cardiac Complication” OR “Vascular Complication” OR “Renal Complication” OR “Gastrointestinal Complication” OR “Haematological Complication” OR “Hepatobiliary Complication” OR “Musculoskeletal Complication” OR “Genitourinary Complication” OR “Otorhinolaryngology Complication” OR “Dermatological Complication” OR “Paediatric Complication” OR “Geriatric Complication” OR “Pregnancy Complication”) in the Title, Abstract or Keyword. A total of 5940 articles were accessed, of which the top 50 most cited articles about COVID-19 and complications of COVID-19 were selected through Scopus. Each article was reviewed for its appropriateness for inclusion. The articles were independently reviewed by three researchers (JRP, MAM and TS) (Table 1). Differences of opinion with regard to article inclusion were resolved by consensus. The inclusion criteria specified articles focused on COVID-19 and complications of COVID-19. Articles were excluded if they did not relate to COVID-19 and/or complications of COVID-19; basic science research and studies using animal models or phantoms were also excluded, as were review articles, viewpoints, guidelines, perspectives, and meta-analyses (Table 1). The top 50 most-cited articles were compiled in a single database and the relevant data were extracted. The database included: Article Title, Scopus Citations, Year of Publication, Journal, Journal Impact Factor, Authors, Number of Authors, Department Affiliation, Number of Institutions, Country of Origin, Study Topic, Study Design, Sample Size, Open Access, Non-Original Articles, Patient/Participant Age, Gender, Symptoms, Signs, Co-morbidities, Complications, Imaging Modalities Used, and Outcome.
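The search strategy above joins quoted terms with OR inside each concept block and combines the two blocks with AND. A minimal sketch of assembling such a query string (the term lists here are abbreviated and the helper name is illustrative):

```python
# Build a boolean search string of the form ("t1" OR "t2" ...) AND ("u1" OR ...),
# mirroring the COVID-19 / complications strategy described above.
covid_terms = ["COVID-19", "COVID19", "SARS-COV-2"]           # abbreviated list
complication_terms = ["Complication", "Acute Kidney Injury"]  # abbreviated list

def boolean_query(group_a, group_b):
    """Quote every term, join each group with OR, and AND the two groups."""
    def block(terms):
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
    return f"{block(group_a)} AND {block(group_b)}"

print(boolean_query(covid_terms, complication_terms))
# ("COVID-19" OR "COVID19" OR "SARS-COV-2") AND ("Complication" OR "Acute Kidney Injury")
```

Generating the string programmatically keeps a long strategy like this one reproducible and easy to rerun against the Title/Abstract/Keyword fields.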

  4. Data sharing policies in scholarly journals across 22 disciplines [dataset]

    • figshare.com
    xlsx
    Updated May 31, 2023
    Cite
    Ui Ikeuchi (2023). Data sharing policies in scholarly journals across 22 disciplines [dataset] [Dataset]. http://doi.org/10.6084/m9.figshare.3144991.v2
    Explore at:
    Available download formats: xlsx
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ui Ikeuchi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Survey period: 08 April – 08 May 2014
    Top 10 Impact Factor journals in each of 22 categories

    Figures https://doi.org/10.6084/m9.figshare.6857273.v1

    Article https://doi.org/10.20651/jslis.62.1_20 https://doi.org/10.15068/00158168

  5. Procedia computer science Impact Factor 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    + more versions
    Cite
    Research Help Desk (2022). Procedia computer science Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/356/procedia-computer-science
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Procedia computer science Impact Factor 2024-2025 - ResearchHelpDesk - Launched in 2009, Procedia Computer Science is an electronic product focusing entirely on publishing high-quality conference proceedings. Procedia Computer Science enables fast dissemination so conference delegates can publish their papers in a dedicated online issue on ScienceDirect, which is then made freely available worldwide. Conference proceedings are accepted for publication in Procedia Computer Science based on quality and are therefore required to meet certain criteria, including quality of the conference, relevance to an international audience, and coverage of highly cited or timely topics. Procedia Computer Science will cover proceedings in all topics of Computer Science with the exception of certain workshops in Theoretical Computer Science, as our Electronic Notes in Theoretical Computer Science (http://www.elsevier.com/locate/entcs) are the primary outlet for these papers. The journal is indexed in Scopus, the largest abstract and citation database of peer-reviewed literature.
    Copyright information: For authors publishing in Procedia Computer Science, the accepted manuscript will be governed by CC BY-NC-ND. For further details see our copyright information.
    What does Procedia Computer Science offer authors and conference organizers? Procedia Computer Science offers a single, highly recognized platform where conference papers can be hosted and accessed by millions of researchers. Authors then know where to go to keep abreast of the latest developments in their field, and get maximum exposure for their own work. All proceedings appear online, on ScienceDirect, within 8 weeks of acceptance of the final manuscripts via the conference organizer and are freely available to all users.
    To ensure discoverability and citability the final online papers are individually metadated, XML versions are optimized for search engines, references are linked, and DOI (Digital Object Identifier) numbers are added.
    Why should conference organizers choose Procedia Computer Science? Unlike regular printed conference publications, there is no practical limit to the number of papers a Procedia Computer Science issue can contain, and pricing is affordable, clear, and transparent, offering organizers a state-of-the-art platform for their conference in a cost-effective and sustainable manner. Procedia Computer Science offers immediate access for all, with papers online within 8 weeks of acceptance, and free access providing maximum exposure for individual papers and the related conference. Conference delegates can thus access the papers from anywhere on the web during and after the conference. Organizers are credited on the actual issue and are fully responsible for quality control, the review process, and the content of individual conference papers. To assist conference organizers in the publication process, templates (both LaTeX and Word) are provided.
    What is the process for submitting conference proceedings to Procedia Computer Science? Proposals should include a synopsis of why such a Procedia Computer Science issue will have an impact on its community, the (sub)fields covered, and an outline of the related conference (or conferences / satellite events) with links to the conference website and previous proceedings where possible. Please include a tentative list of the steering committee / program committee, a short description of the conference peer-review standards (acceptance rate, reviewer selection, etc.), as well as estimates of the number of conference delegates and the number of papers to be accepted for publication. Proposals should be prepared using the template here. Please note that we do not accept proposals from individual authors for Procedia titles.
    What are the costs associated with Procedia Computer Science? There is an agreed fee, based on the size of the proceedings, which includes online publication on ScienceDirect and free access after acceptance. In addition to the online version there is the possibility to purchase paper copies, CD-ROMs, and USB sticks. Moreover, interactive media possibilities such as webcasts and web seminars can be acquired for extra exposure both before and after the conference.
    Can Procedia Computer Science be sponsored? We are open to discussing sponsoring possibilities at all times. Sponsors can include funding bodies, government agencies, and/or industry. Benefits to sponsors include visibility amongst a scientific audience and the opportunity to communicate messages via a peer-reviewed platform. Benefits to authors and conference organizers are the further dissemination of material to an international audience and even more reduced costs to the overall conference budget.
    What more can Elsevier do to help computer science conference organizers? We are open to discussing further collaboration, including but not limited to providing global organizational and administrative services and assisting with the setup of new multidisciplinary meetings. Contact your Elsevier Computer Science Publisher for more information.
    Abstracting and Indexing: Conference Proceedings Citation Index, Scopus

  6. Data from: The effectiveness of journals as arbiters of scientific quality

    • zenodo.org
    • data.niaid.nih.gov
    • +2 more
    Updated May 31, 2022
    Cite
    C.E. Timothy Paine; Charles W. Fox (2022). Data from: The effectiveness of journals as arbiters of scientific quality [Dataset]. http://doi.org/10.5061/dryad.6nh4fc2
    Explore at:
    Dataset updated
    May 31, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    C.E. Timothy Paine; Charles W. Fox
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Academic publishers purport to be arbiters of knowledge, aiming to publish studies that advance the frontiers of their research domain. Yet the effectiveness of journal editors at identifying novel and important research is generally unknown, in part because of the confidential nature of the editorial and peer-review process. Using questionnaires, we evaluated the degree to which journals are effective arbiters of scientific impact in the domain of Ecology, quantified by three key criteria. First, journals discriminated against low-impact manuscripts: the probability of rejection increased as the number of citations gained by the published paper decreased. Second, journals were more likely to publish high-impact manuscripts (those that obtained citations in the 90th percentile for their journal) than run-of-the-mill manuscripts; editors were only 23 and 41% as likely to reject an eventual high-impact paper (pre- versus post-review rejection) compared to a run-of-the-mill paper. Third, editors did occasionally reject papers that went on to be highly cited. Error rates were low, however: only 3.8% of rejected papers gained more citations than the median article in the journal that rejected them, and only 9.2% of rejected manuscripts went on to be high-impact papers in the (generally lower impact factor) publishing journal. The effectiveness of scientific arbitration increased with journal prominence, although some highly prominent journals were no more effective than much less prominent ones. We conclude that the academic publishing system, founded on peer review, appropriately recognises the significance of research contained in manuscripts, as measured by the number of citations that manuscripts obtain after publication, even though some errors are made. We therefore recommend that authors reduce publication delays by choosing journals appropriate to the significance of their research.
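The description's "high-impact" criterion (citations in the 90th percentile for the journal) can be sketched as follows; the function name and cutoff method are illustrative choices, not the authors' exact procedure:

```python
from statistics import quantiles

def high_impact_flags(citation_counts):
    """Flag papers whose citation count reaches the 90th percentile
    of their journal's distribution (illustrative cutoff method)."""
    cutoff = quantiles(citation_counts, n=10)[-1]  # upper-decile boundary
    return [c >= cutoff for c in citation_counts]

# Hypothetical citation counts for ten papers in one journal:
print(high_impact_flags([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
# only the most-cited paper is flagged
```

With a per-journal cutoff like this, "high-impact" is relative to each journal's own distribution rather than to an absolute citation count, which matches how the study compares rejected papers against the journals that rejected them.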

  7. Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    + more versions
    Cite
    Research Help Desk (2022). Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/395/annals-of-medical-physiology
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk - Annals of Medical Physiology (Ann Med Physiol.) is a double-blind peer-reviewed quarterly journal, published both online and in print, that aims to provide the most complete and reliable source of information on current developments in the field of medical physiology. The emphasis is on publishing quality research papers rapidly and keeping them freely available to researchers worldwide through an open-access policy. Annals of Medical Physiology serves an important role by encouraging, fostering, and promoting developments in various areas of medical physiology. The journal publishes reviews, original research articles, and brief communications in all areas of medical physiology. Annals of Medical Physiology is registered with the Registrar of Newspapers for India (RNI), Ministry of Information and Broadcasting, Government of India, vide registration number TELENG/2018/75630 dated 25th June 2018.
    Abstracting & indexing: International Institute of Organized Research (I2OR), PKP Index, AcademicKeys, DOI-DS, Openarchives, Google Scholar, Directory of Research Journals Indexing (DRJI), ResearchBib, International Citation Index, Scientific Indexing Services (SIS), Bielefeld Academic Search Engine (BASE), J-Gate, ICMJE, Zenodo, Root Indexing, Eurasian Scientific Journal Index (ESJI), Elektronische Zeitschriftenbibliothek (EZB), Zeitschriftendatenbank (ZDB), OpenAIRE, Directory of Open Access Scholarly Resources (ROAD), WorldCat, Scilit, EduIndex

  8. Data from: Evaluating the Impact of Altmetrics

    • dataone.org
    • search.dataone.org
    • +1 more
    Updated Jan 3, 2013
    + more versions
    Cite
    Drew Wright (2013). Evaluating the Impact of Altmetrics [Dataset]. http://identifiers.org/ark:/90135/q17m05w4/2/mrt-eml.xml
    Explore at:
    Dataset updated
    Jan 3, 2013
    Dataset provided by
    ONEShare Repository
    Authors
    Drew Wright
    Description

    Librarians, publishers, and researchers have long placed significant emphasis on journal metrics such as the impact factor. However, these tools do not take into account the impact of research outside of citations and publications. Altmetrics seek to describe the reach of scholarly activity across the Internet and social media to paint a more vivid picture of the scholarly landscape. In order to examine the impact of altmetrics on scholarly activity, it is helpful to compare these new tools to an existing method. Citation counts are currently the standard for determining the impact of a scholarly work, and two studies were conducted to examine the correlation between citation count and altmetric data. First, a set of highly cited papers was chosen across a variety of disciplines, and their citation counts were compared with the altmetrics generated from Total-Impact.org. Second, to evaluate the hypothesized increased impact of altmetrics on recently published articles, a set of articles published in 2011 were taken from a sampling of journals with high impact factors, both subscription-based and open access, and the altmetrics were then compared to their citation counts.

  9. Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020

    • datadryad.org
    • zenodo.org
    zip
    Updated Sep 6, 2020
    Cite
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz (2020). Top 100-Ranked Clinical Journals' Preprint Policies as of April 23, 2020 [Dataset]. http://doi.org/10.5061/dryad.jdfn2z38f
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 6, 2020
    Dataset provided by
    Dryad
    Authors
    Dorothy Massey; Joshua Wallach; Joseph Ross; Michelle Opare; Harlan Krumholz
    Time period covered
    Sep 4, 2020
    Description

    We examined journal policies of the 100 top-ranked clinical journals using the 2018 impact factors as reported by InCites Journal Citation Reports (JCR). First, we examined all journals with an impact factor greater than 5, and then we manually screened by title and category to identify the first 100 clinical journals. We included only those that publish original research. Next, we checked each journal's editorial policy on preprints. We examined, in order, the journal website, the publisher website, the Transpose Database, and the first 10 pages of a Google search with the journal name and the term "preprint." We classified each journal's policy, as shown in this dataset, as allowing preprints, deciding on a case-by-case basis, or not allowing any preprints. We collected data on April 23, 2020.

    (Full methods can also be found in the previously published paper.)

  10. Data (i.e., evidence) about evidence based medicine

    • figshare.com
    • search.datacite.org
    png
    Updated May 30, 2023
    Cite
    Jorge H Ramirez (2023). Data (i.e., evidence) about evidence based medicine [Dataset]. http://doi.org/10.6084/m9.figshare.1093997.v24
    Explore at:
    Available download formats: png
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jorge H Ramirez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Update — December 7, 2014. – Evidence-based medicine (EBM) is not working, for many reasons, for example:
    1. Incorrect in its foundations (a paradox): hierarchical levels of evidence are supported by opinions (i.e., the lowest strength of evidence according to EBM) instead of real data collected from different types of study designs (i.e., evidence). http://dx.doi.org/10.6084/m9.figshare.1122534
    2. The effect of criminal practices by pharmaceutical companies is only possible because of the complicity of others: healthcare systems, professional associations, governmental and academic institutions. Pharmaceutical companies also corrupt at the personal level: politicians and political parties are on their payroll, and medical professionals are seduced by different types of gifts in exchange for prescriptions (i.e., bribery), which very likely results in patients not receiving the proper treatment for their disease; many times there is no such thing: healthy persons not needing pharmacological treatments of any kind are constantly misdiagnosed and treated with unnecessary drugs. Some medical professionals are converted into K.O.L.s (key opinion leaders), puppets appearing on stage to spread lies to their peers: a person supposedly trained to improve the well-being of others now deceives on behalf of pharmaceutical companies. Probably the saddest thing is that many honest doctors are being misled by these lies, created by the rules of pharmaceutical marketing instead of scientific, medical, and ethical principles. Interpretation of EBM in this context was not anticipated by its creators. “The main reason we take so many drugs is that drug companies don’t sell drugs, they sell lies about drugs.” ―Peter C. Gøtzsche “Doctors and their organisations should recognise that it is unethical to receive money that has been earned in part through crimes that have harmed those people whose interests doctors are expected to take care of. Many crimes would be impossible to carry out if doctors weren’t willing to participate in them.” —Peter C. Gøtzsche, The BMJ, 2012, Big pharma often commits corporate crime, and this must be stopped. Pending (Colombia): Health Promoter Entities (in Spanish: EPS ―Empresas Promotoras de Salud).

    1. Misinterpretations. New technologies or concepts are difficult to understand in the beginning, no matter how simple they are; we need to get used to new tools aimed at improving our professional practice. Probably the best explanation is in these videos (credits to Antonio Villafaina for sharing them with me). English: https://www.youtube.com/watch?v=pQHX-SjgQvQ&w=420&h=315 Spanish: https://www.youtube.com/watch?v=DApozQBrlhU&w=420&h=315
    -----------------------
    Hypothesis: hierarchical levels of evidence-based medicine are wrong.
    Dear Editor, I have data to support the hypothesis described in the title of this letter. Before rejecting the null hypothesis I would like to ask the following open question: Could you support with data that hierarchical levels of evidence-based medicine are correct? (1,2) Additional explanation to this question: – Only respond to this question attaching publicly available raw data. – Be aware that more than a question this is a challenge: I have data (i.e., evidence) which is contrary to classic (i.e., McMaster) or current (i.e., Oxford) hierarchical levels of evidence-based medicine. An important part of this data (but not all) is publicly available. References
    2. Ramirez, Jorge H (2014): The EBM challenge. figshare. http://dx.doi.org/10.6084/m9.figshare.1135873
    3. The EBM Challenge Day 1: No Answers. Competing interests: I endorse the principles of open data in human biomedical research. Read this letter on The BMJ – August 13, 2014: http://www.bmj.com/content/348/bmj.g3725/rr/762595 Re: Greenhalgh T, et al. Evidence based medicine: a movement in crisis? BMJ 2014; 348: g3725.
    Fileset contents. Raw data: Excel archive: raw data, interactive figures, and PubMed search terms. A Google Spreadsheet is also available (URL below the article description). Figure 1. Unadjusted (Fig 1A) and adjusted (Fig 1B) PubMed publication trends (01/01/1992 to 30/06/2014). Figure 2. Adjusted PubMed publication trends (07/01/2008 to 29/06/2014). Figure 3. Google search trends: Jan 2004 to Jun 2014 / 1-week periods. Figure 4. PubMed publication trends (1962-2013): systematic reviews and meta-analysis, clinical trials, and observational studies.
    Figure 5. Ramirez, Jorge H (2014): Infographics: Unpublished US phase 3 clinical trials (2002-2014) completed before Jan 2011 = 50.8%. figshare. http://dx.doi.org/10.6084/m9.figshare.1121675 Raw data: "13377 studies found for: Completed | Interventional Studies | Phase 3 | received from 01/01/2002 to 01/01/2014 | Worldwide". This database complies with the terms and conditions of ClinicalTrials.gov: http://clinicaltrials.gov/ct2/about-site/terms-conditions Supplementary Figures (S1-S6): PubMed publication delay in the indexation process does not explain the descending trends in the scientific output of evidence-based medicine. Acknowledgments. I would like to acknowledge the following persons for providing valuable concepts in data visualization and infographics:
    4. Maria Fernanda Ramírez. Professor of graphic design. Universidad del Valle. Cali, Colombia.
    5. Lorena Franco. Graphic design student. Universidad del Valle. Cali, Colombia.
    Related articles by this author (Jorge H. Ramírez):
    6. Ramirez JH. Lack of transparency in clinical trials: a call for action. Colomb Med (Cali) 2013;44(4):243-6. URL: http://www.ncbi.nlm.nih.gov/pubmed/24892242
    7. Ramirez JH. Re: Evidence based medicine is broken (17 June 2014). http://www.bmj.com/node/759181
    8. Ramirez JH. Re: Global rules for global health: why we need an independent, impartial WHO (19 June 2014). http://www.bmj.com/node/759151
    9. Ramirez JH. PubMed publication trends (1992 to 2014): evidence based medicine and clinical practice guidelines (04 July 2014). http://www.bmj.com/content/348/bmj.g3725/rr/759895 Recommended articles
    10. Greenhalgh Trisha, Howick Jeremy, Maskrey Neal. Evidence based medicine: a movement in crisis? BMJ 2014;348:g3725
    11. Spence Des. Evidence based medicine is broken. BMJ 2014;348:g22
    12. Schünemann Holger J, Oxman Andrew D, Brozek Jan, Glasziou Paul, Jaeschke Roman, Vist Gunn E, et al. Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ 2008;336:1106
    13. Lau Joseph, Ioannidis John P A, Terrin Norma, Schmid Christopher H, Olkin Ingram. The case of the misleading funnel plot. BMJ 2006;333:597
    14. Moynihan R, Henry D, Moons KGM. Using Evidence to Combat Overdiagnosis and Overtreatment: Evaluating Treatments, Tests, and Disease Definitions in the Time of Too Much. PLoS Med 2014;11(7):e1001655. doi:10.1371/journal.pmed.1001655
    15. Katz D. A holistic view of evidence based medicine. http://thehealthcareblog.com/blog/2014/05/02/a-holistic-view-of-evidence-based-medicine/
  11. Data from: Double-blind review favours increased representation of female...

    • dataone.org
    • knb.ecoinformatics.org
    • +2more
    Updated Jan 6, 2015
    Amber Budden (2015). Double-blind review favours increased representation of female authors [Dataset]. http://doi.org/10.5063/AA/xhan.4.1
    Explore at:
    Dataset updated
    Jan 6, 2015
    Dataset provided by
    Knowledge Network for Biocomplexity
    Authors
    Amber Budden
    Time period covered
    Jan 1, 1997 - Jan 1, 2005
    Variables measured
    BP, PY, VL, Title, Authors, Journal, FA Gender, Pre/Post 2001, Review policy
    Description

    Double-blind peer review, in which neither the author nor the reviewer is identified, is rarely practised in ecology or evolution journals. Most journals in the field of ecology practise single-blind review, in which the reviewer's identity, but not the author's, is concealed. In 2001, however, double-blind review was introduced by the journal Behavioral Ecology. A database of all papers published in BE between 1997 and 2005 (n=867) was generated (the year 2001 was omitted to accommodate the change in editorial policy). For each paper, gender was assigned to the first author using first names. Gender was classified as "unknown" if the author provided only initials, if the name was gender neutral, or if the name could not be assigned to either gender. The same data were gathered from an out-group set of primary research journals listed by ISI in the category of "Ecology" or "Evolutionary Biology" with a 2004 impact factor of 2.0-2.5 (similar to that of BE). This provided an additional five journals: Behavioral Ecology and Sociobiology (BES; n=1040), Animal Behavior (AB; n=2178), Journal of Biogeography (JB; n=1040), Biological Conservation (BC; n=1719), and Landscape Ecology (LE; n=419). Missing data from complete issues omitted from the table of contents were inserted using ISI (JB and LE; four issues). This study showed that, following the policy change to double-blind peer review, there was a significant increase in female first-authored papers.
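The pre/post comparison behind this dataset can be sketched as a two-proportion z-test on female first-authorship rates before and after the policy change. The counts below are purely illustrative, not the study's actual figures:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test for a difference in proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, computed via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts of female first-authored papers (illustrative only)
p_pre, p_post, z, p = two_proportion_z(80, 300, 180, 500)
print(f"pre: {p_pre:.2%}, post: {p_post:.2%}, z = {z:.2f}, p = {p:.4f}")
```

A significant positive z here would mirror the reported increase; the real analysis would use the n=867 BE papers and the five control journals.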

  12. Journal of Islamabad Medical and Dental College Impact Factor 2024-2025 -...

    • researchhelpdesk.org
    Updated Feb 23, 2022
    Research Help Desk (2022). Journal of Islamabad Medical and Dental College Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/320/journal-of-islamabad-medical-and-dental-college
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of Islamabad Medical and Dental College Impact Factor 2024-2025 - ResearchHelpDesk - The Journal of Islamabad Medical and Dental College (JIMDC) is an official journal of Islamabad Medical & Dental College, Islamabad. This open access, double-blind peer-reviewed journal started in 2012 with bi-annual publication and was upgraded to quarterly publication in 2015. The journal aims to cover and promote the latest and most innovative advancements in the medical and health sciences and healthcare practice. It is an essential journal for professionals across the health sciences, including doctors, bio-researchers, students and healthcare practitioners, who wish to be kept informed and up to date with the latest developments in this field. The vision of the journal is to publish and share good quality research and to enhance the visibility and citation of the journal through an open journal system (OJS). The mission of the journal is to provide a platform for publishing quality research (at both graduate and postgraduate level) in the fields of medical, dental and allied health sciences. JIMDC publishes research papers in the form of original articles, case reports, review articles, articles on medical education, and short communications related to health, medicine and dentistry, in accordance with the "Uniform Requirements for Manuscripts Submitted to Medical Journals", after peer review by at least two reviewers. JIMDC readership includes IMDC faculty and other healthcare professionals and researchers, both national and international. It is distributed to medical colleges and medical libraries throughout Pakistan. We encourage research librarians to list this journal among their library's electronic journal holdings. It may be worth noting that this journal's open source publishing system is suitable for libraries to host for their faculty members to use with journals they are involved in editing (see Open Journal Systems).
    Abstracting & indexing: WHO Index Medicus for the Eastern Mediterranean Region (IMEMR); Bielefeld Academic Search Engine (BASE), Germany; Netmedics (Cambridge International Academics UK); Pakmedinet; CrossRef (DOI); Asian Digital Library

  13. Discoverability of published objects

    • borealisdata.ca
    • search.dataone.org
    Updated Jan 23, 2024
    David Topps; Corey Wirun (2024). Discoverability of published objects [Dataset]. http://doi.org/10.5683/SP3/NPUWEE
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jan 23, 2024
    Dataset provided by
    Borealis
    Authors
    David Topps; Corey Wirun
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Many products of medical education scholarship have difficulty being published in traditional journals, even though they fit within the METRICS framework that describes scholarly activities.(1) Just as with standard research outputs such as replication studies and trials with negative results, in the increasingly competitive world of scholarly publishing, where citation indexes and impact factors matter greatly, it is now hard to publish studies that do not fit a positivist research model demonstrating significant results. This report reflects our explorations and testing of various approaches to making our scholarly products and learning objects more discoverable.

  14. Data from: From integrative taxonomy to species description: one step beyond...

    • datadryad.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Oct 17, 2014
    Eric Pante; Charlotte Schoelinck; Nicolas Puillandre (2014). From integrative taxonomy to species description: one step beyond [Dataset]. http://doi.org/10.5061/dryad.59jp0
    Explore at:
    zip (available download formats)
    Dataset updated
    Oct 17, 2014
    Dataset provided by
    Dryad
    Authors
    Eric Pante; Charlotte Schoelinck; Nicolas Puillandre
    Time period covered
    Mar 3, 2014
    Description

    Online appendix 1: List of the 494 articles reviewed and data extracted. Online appendix 2: List of the journals in which new species were delineated. Editorial policies on including formal taxonomic descriptions in articles, and impact factors, were investigated before 2005, between 2005-2010, and in 2011-2014. Y = Yes, N = No, NA = Not Available, IF = Impact Factor.

  15. Data from: COVID-19 evidence syntheses with artificial intelligence: an...

    • produccioncientifica.ugr.es
    • data.niaid.nih.gov
    • +2more
    Updated 2022
    + more versions
    Tercero-Hidalgo, Juan R.; Khan, Khalid S.; Bueno-Cavanillas, Aurora; Fernández-López, Rodrigo; Huete, Juan F.; Amezcua-Prieto, Carmen; Zamora, Javier; Fernández-Luna, Juan M. (2022). COVID-19 evidence syntheses with artificial intelligence: an empirical study of systematic reviews [Dataset]. https://produccioncientifica.ugr.es/documentos/668fc430b9e7c03b01bd5ef7
    Explore at:
    Dataset updated
    2022
    Authors
    Tercero-Hidalgo, Juan R.; Khan, Khalid S.; Bueno-Cavanillas, Aurora; Fernández-López, Rodrigo; Huete, Juan F.; Amezcua-Prieto, Carmen; Zamora, Javier; Fernández-Luna, Juan M.
    Description

    Objectives: A rapidly developing scenario like a pandemic requires the prompt production of high-quality systematic reviews, which can be automated using artificial intelligence (AI) techniques. We evaluated the application of AI tools in COVID-19 evidence syntheses. Study design: After prospective registration of the review protocol, we automated the download of all open-access COVID-19 systematic reviews in the COVID-19 Living Overview of Evidence database, indexed them for AI-related keywords, and located those that used AI tools. We compared their journals' JCR Impact Factor, citations per month, screening workloads, completion times (from pre-registration to preprint or submission to a journal) and AMSTAR-2 methodology assessments (maximum score 13 points) with a set of publication-date-matched control reviews without AI. Results: Of the 3999 COVID-19 reviews, 28 (0.7%, 95% CI 0.47-1.03%) made use of AI. On average, compared to controls (n=64), AI reviews were published in journals with higher Impact Factors (median 8.9 vs 3.5, P<0.001), and screened more abstracts per author (302.2 vs 140.3, P=0.009) and per included study (189.0 vs 365.8, P<0.001) while inspecting fewer full texts per author (5.3 vs 14.0, P=0.005). No differences were found in citation counts (0.5 vs 0.6, P=0.600), inspected full texts per included study (3.8 vs 3.4, P=0.481), completion times (74.0 vs 123.0, P=0.205) or AMSTAR-2 (7.5 vs 6.3, P=0.119). Conclusion: AI was an underutilized tool in COVID-19 systematic reviews. Its usage, compared to reviews without AI, was associated with more efficient screening of literature and higher publication impact. There is scope for the application of AI in automating systematic reviews.

  16. Data from: The dawn of open access to phylogenetic data

    • zenodo.org
    • data.niaid.nih.gov
    • +2more
    zip
    Updated May 28, 2022
    + more versions
    Andrew F. Magee; Michael R. May; Brian R. Moore (2022). Data from: The dawn of open access to phylogenetic data [Dataset]. http://doi.org/10.5061/dryad.9fm28
    Explore at:
    zip (available download formats)
    Dataset updated
    May 28, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Andrew F. Magee; Michael R. May; Brian R. Moore
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The scientific enterprise depends critically on the preservation of and open access to published data. This basic tenet applies acutely to phylogenies (estimates of evolutionary relationships among species). Increasingly, phylogenies are estimated from increasingly large, genome-scale datasets using increasingly complex statistical methods that require increasing levels of expertise and computational investment. Moreover, the resulting phylogenetic data provide an explicit historical perspective that critically informs research in a vast and growing number of scientific disciplines. One such use is the study of changes in rates of lineage diversification (speciation – extinction) through time. As part of a meta-analysis in this area, we sought to collect phylogenetic data (comprising nucleotide sequence alignment and tree files) from 217 studies published in 46 journals over a 13-year period. We document our attempts to procure those data (from online archives and by direct request to corresponding authors), and report results of analyses (using Bayesian logistic regression) to assess the impact of various factors on the success of our efforts. Overall, complete phylogenetic data for ~ 60% of these studies are effectively lost to science. Our study indicates that phylogenetic data are more likely to be deposited in online archives and/or shared upon request when: (1) the publishing journal has a strong data-sharing policy; (2) the publishing journal has a higher impact factor, and; (3) the data are requested from faculty rather than students. Importantly, our survey spans recent policy initiatives and infrastructural changes; our analyses indicate that the positive impact of these community initiatives has been both dramatic and immediate. Although the results of our study indicate that the situation is dire, our findings also reveal tremendous recent progress in the sharing and preservation of phylogenetic data.
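The study's Bayesian logistic regression is beyond a short sketch, but the shape of the analysis, modelling the probability that data are recovered as a function of journal policy, impact factor, and whether a faculty member (rather than a student) was asked, can be illustrated with a plain maximum-likelihood logistic fit. All records and numbers below are hypothetical, invented for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(w, x):
    """Predicted probability that the phylogenetic data are obtained."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

def fit_logistic(X, y, lr=0.02, epochs=3000):
    """Maximum-likelihood logistic regression via stochastic gradient descent."""
    w = [0.0] * (len(X[0]) + 1)  # bias + one weight per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = predict(w, xi) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Hypothetical records: [strong data-sharing policy, journal impact factor,
# request sent to faculty] -> 1 if the data were recovered
X = [[1, 8.0, 1], [1, 5.5, 0], [0, 2.0, 0], [0, 3.1, 1], [1, 9.2, 1],
     [0, 1.5, 0], [1, 4.0, 0], [0, 2.5, 0], [1, 6.8, 1], [0, 1.9, 1]]
y = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0]

w = fit_logistic(X, y)
# A strong-policy, high-impact, faculty-contacted study should score higher
# than a no-policy, low-impact, student-contacted one.
print(predict(w, [1, 8.0, 1]), predict(w, [0, 2.0, 0]))
```

The real analysis placed priors on the coefficients instead of a point estimate, but the direction of the effects it reports (policy, impact factor, faculty vs student) maps onto the three predictors sketched here.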

  17. Collection of literature and extracted features in the field of...

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Apr 12, 2022
    + more versions
    Rudez, Urban (2022). Collection of literature and extracted features in the field of under-frequency load shedding for the period 1954-2020 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6451346
    Explore at:
    Dataset updated
    Apr 12, 2022
    Dataset provided by
    Rudez, Urban
    Skrjanc, Tadej
    Mihalic, Rafael
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The number of publications in the field of UFLS (under-frequency load shedding) research is rapidly increasing. This confirms that the topic is timely, but the sheer volume of publications makes it easy to lose track of the prevailing concepts driving the latest technologies. To overcome this problem, researchers resort to subjective review articles, in which individual authors attempt to systematically categorize UFLS algorithms according to their own understanding of the variety of approaches. Since this is not a trivial task and is undoubtedly subject to personal interpretation, it is better to use specialized mathematical techniques for this purpose. Recently, it has been shown that clustering techniques and graph theory can be useful tools for systematic literature reviews.

    If one chooses to search for similarities between UFLS algorithms using such techniques, one must first collect the relevant literature in the field and extract important information. Therefore, this collection provides an Excel spreadsheet (.xlsx) of 381 publications in the field of UFLS protection for the period from 1954 to 2020, with the following information for each publication:

    authors,

    title,

    year of publication and type of publication (journal, conference proceedings, doctoral dissertation, master's thesis, bachelor's thesis, other),

    the source from which we obtained the publication,

    DOI (Digital Object Identifier),

    extracted general features, and

    extracted specific features.

    Relevant literature was obtained from various open access journals, impact factor journals, repositories of various universities, and archives of various electric utilities and transmission system operators.

    If the publication in a given row exhibits the feature specified in columns I3-BJ3, the corresponding cell contains the value "1"; otherwise, the cell is left empty. Detailed descriptions of the individual features can be found in "Description_of_features.docx". Two identical versions of this document are available, one in English and one in Slovene.
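Given this 0/1 feature encoding, pairwise similarity between publications can be computed directly from their feature sets. The sketch below uses Jaccard similarity on a tiny hypothetical matrix (publication and feature names are invented) to build the kind of thresholded similarity graph that the clustering approach mentioned above relies on:

```python
# Hypothetical feature matrix: publication id -> set of assigned feature columns
features = {
    "pub_A": {"I3", "J3", "M3"},
    "pub_B": {"I3", "J3", "N3"},
    "pub_C": {"T3", "U3"},
}

def jaccard(a, b):
    """Jaccard similarity between two feature sets: |a & b| / |a | b|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Build a weighted edge list; pairs below the threshold are dropped,
# which is one simple way to expose clusters of similar UFLS algorithms.
threshold = 0.3
edges = [(p, q, jaccard(features[p], features[q]))
         for i, p in enumerate(sorted(features))
         for q in sorted(features)[i + 1:]
         if jaccard(features[p], features[q]) >= threshold]
print(edges)  # pub_A and pub_B share 2 of 4 features -> similarity 0.5
```

With the real spreadsheet, each row's non-empty cells in columns I-BJ would form that publication's feature set, and community detection on the resulting graph would replace the subjective categorization the authors describe.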

    Updated versions with more recent data will be uploaded under a different version number and DOI.

  18. Data Visualization in Social Work Research

    • data.niaid.nih.gov
    • dataverse.harvard.edu
    • +1more
    Updated Aug 8, 2013
    David Rothwell; Tonino Esposito; Wegner-Lohin (2013). Data Visualization in Social Work Research [Dataset]. http://doi.org/10.7910/DVN/I6IIXL
    Explore at:
    tsv, text/plain; charset=us-ascii (available download formats)
    Dataset updated
    Aug 8, 2013
    Dataset provided by
    Centre for Research on Children and Families, McGill University
    McGill University School of Social Work
    Authors
    David Rothwell; Tonino Esposito; Wegner-Lohin
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    International
    Description

    Research dissemination and knowledge translation are imperative in social work. Methodological developments in data visualization techniques have improved the ability to convey meaning and reduce erroneous conclusions. The purpose of this project is to examine: (1) How are empirical results presented visually in social work research? (2) To what extent do top social work journals vary in the publication of data visualization techniques? (3) What is the predominant type of analysis presented in tables and graphs? (4) How can current data visualization methods be improved to increase understanding of social work research? Method: A database was built from a systematic literature review of the four most recent issues of Social Work Research and 6 other highly ranked journals in social work based on the 2009 5-year impact factor (Thomson Reuters ISI Web of Knowledge). Overall, 294 articles were reviewed. Articles without any form of data visualization were not included in the final database. The number of articles reviewed by journal includes: Child Abuse & Neglect (38), Child Maltreatment (30), American Journal of Community Psychology (31), Family Relations (36), Social Work (29), Children and Youth Services Review (112), and Social Work Research (18). Articles with any type of data visualization (table, graph, other) were included in the database and coded sequentially by two reviewers based on the type of visualization method and type of analyses presented (descriptive, bivariate, measurement, estimate, predicted value, other). Additional review by the entire research team was required for 68 articles. Codes were discussed until 100% agreement was reached. The final database includes 824 data visualization entries.


Methods: The number of authors on research articles in six journals through time. The area of each circle corresponds to the number of publications with that author count in that year. To aid visual interpretation of the data, a generalized additive model was fitted to the data. For ease of interpretation, the number of authors is truncated at 100: publications with more than 100 coauthors are plotted as having 101 coauthors.
