56 datasets found
  1. Data from: Journal Ranking Dataset

    • kaggle.com
    Updated Aug 15, 2023
    Cite
    Abir (2023). Journal Ranking Dataset [Dataset]. https://www.kaggle.com/datasets/xabirhasan/journal-ranking-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Aug 15, 2023
    Dataset provided by
    Kaggle
    Authors
    Abir
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Journals & Ranking

    An academic journal or research journal is a periodical publication in which research articles relating to a particular academic discipline are published, according to Wikipedia. Currently, more than 25,000 peer-reviewed journals are indexed in citation databases such as Scopus and Web of Science. Journals in these indexes are ranked on the basis of various metrics, such as CiteScore and H-index, which are calculated from the journal's yearly citation data. Considerable effort goes into designing metrics that reflect a journal's quality.

    Journal Ranking Dataset

    This is a comprehensive dataset on academic journals covering their metadata as well as citation, metrics, and ranking information. Detailed data on each journal's subject areas is also included. The dataset is collected from the following indexing databases:

    • Scimago Journal Ranking
    • Scopus
    • Web of Science Master Journal List

    The data was collected by scraping and then cleaned; details can be found HERE.

    Key Features

    • Rank: Overall rank of journal (derived from sorted SJR index).
    • Title: Name or title of journal.
    • OA: Open Access or not.
    • Country: Country of origin.
    • SJR-index: A citation index calculated by Scimago.
    • CiteScore: A citation index calculated by Scopus.
    • H-index: Hirsch index, the largest number h such that at least h articles in that journal were cited at least h times each.
    • Best Quartile: Top Q-index or quartile a journal has in any subject area.
    • Best Categories: Subject areas with top quartile.
    • Best Subject Area: Highest ranking subject area.
    • Best Subject Rank: Rank of the highest ranking subject area.
    • Total Docs.: Total number of documents of the journal.
    • Total Docs. 3y: Total number of documents in the past 3 years.
    • Total Refs.: Total number of references of the journal.
    • Total Cites 3y: Total number of citations in the past 3 years.
    • Citable Docs. 3y: Total number of citable documents in the past 3 years.
    • Cites/Doc. 2y: Total number of citations divided by the total number of documents in the past 2 years.
    • Refs./Doc.: Total number of references divided by the total number of documents.
    • Publisher: Name of the publisher company of the journal.
    • Core Collection: Web of Science core collection name.
    • Coverage: Starting year of coverage.
    • Active: Active or inactive.
    • In-Press: Articles in press or not.
    • ISO Language Code: Three-letter ISO 639 code for language.
    • ASJC Codes: All Science Journal Classification codes for the journal.

    The rest of the features provide further details on the journal's subject areas and categories:

    • Life Sciences: Top-level subject area.
    • Social Sciences: Top-level subject area.
    • Physical Sciences: Top-level subject area.
    • Health Sciences: Top-level subject area.
    • 1000 General: ASJC main category.
    • 1100 Agricultural and Biological Sciences: ASJC main category.
    • 1200 Arts and Humanities: ASJC main category.
    • 1300 Biochemistry, Genetics and Molecular Biology: ASJC main category.
    • 1400 Business, Management and Accounting: ASJC main category.
    • 1500 Chemical Engineering: ASJC main category.
    • 1600 Chemistry: ASJC main category.
    • 1700 Computer Science: ASJC main category.
    • 1800 Decision Sciences: ASJC main category.
    • 1900 Earth and Planetary Sciences: ASJC main category.
    • 2000 Economics, Econometrics and Finance: ASJC main category.
    • 2100 Energy: ASJC main category.
    • 2200 Engineering: ASJC main category.
    • 2300 Environmental Science: ASJC main category.
    • 2400 Immunology and Microbiology: ASJC main category.
    • 2500 Materials Science: ASJC main category.
    • 2600 Mathematics: ASJC main category.
    • 2700 Medicine: ASJC main category.
    • 2800 Neuroscience: ASJC main category.
    • 2900 Nursing: ASJC main category.
    • 3000 Pharmacology, Toxicology and Pharmaceutics: ASJC main category.
    • 3100 Physics and Astronomy: ASJC main category.
    • 3200 Psychology: ASJC main category.
    • 3300 Social Sciences: ASJC main category.
    • 3400 Veterinary: ASJC main category.
    • 3500 Dentistry: ASJC main category.
    • 3600 Health Professions: ASJC main category.
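    The H-index column can be recomputed from raw per-article citation counts. A minimal sketch, assuming you have such counts available (the function name and the sample numbers are illustrative, not taken from the dataset):

```python
def h_index(citations):
    """Largest h such that at least h articles have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` articles all have at least `rank` citations
        else:
            break
    return h

# Five articles cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```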

  2. Public Availability of Published Research Data in High-Impact Journals

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Alawi A. Alsheikh-Ali; Waqas Qureshi; Mouaz H. Al-Mallah; John P. A. Ioannidis (2023). Public Availability of Published Research Data in High-Impact Journals [Dataset]. http://doi.org/10.1371/journal.pone.0024357
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Alawi A. Alsheikh-Ali; Waqas Qureshi; Mouaz H. Al-Mallah; John P. A. Ioannidis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: There is increasing interest in making primary data from published research publicly available. We aimed to assess the current status of making research data available in highly cited journals across the scientific literature. Methods and Results: We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factor. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, there was wide variation in journal requirements, ranging from requiring the sharing of all primary data related to the research to just including a statement in the published manuscript that data can be made available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers that were covered by some data availability policy, 208 (59%) did not fully adhere to the data availability instructions of the journals they were published in, most commonly (73%) by not publicly depositing microarray data. The other 143 papers adhered to the data availability instructions by publicly depositing only the specific data type required, making a statement of willingness to share, or actually sharing all the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available. Conclusion: A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policies or do not adhere to the data availability instructions of their respective journals. This empirical evaluation highlights opportunities for improvement.

  3. Scientific JOURNALS Indicators & Info - SCImagoJR

    • kaggle.com
    Updated Apr 9, 2025
    Cite
    Ali Jalaali (2025). Scientific JOURNALS Indicators & Info - SCImagoJR [Dataset]. https://www.kaggle.com/datasets/alijalali4ai/scimagojr-scientific-journals-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Apr 9, 2025
    Dataset provided by
    Kaggle
    Authors
    Ali Jalaali
    Description


    The SCImago Journal & Country Rank is a publicly available portal that includes the journals and country scientific indicators developed from the information contained in the Scopus® database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains. Journals can be compared or analysed separately.


    💬Also have a look at
    💡 COUNTRIES Research & Science Dataset - SCImagoJR
    💡 UNIVERSITIES & Research INSTITUTIONS Rank - SCImagoIR

    • Journals can be grouped by subject area (27 major thematic areas), subject category (309 specific subject categories) or by country.
    • Citation data is drawn from over 34,100 titles from more than 5,000 international publishers
    • This platform takes its name from the SCImago Journal Rank (SJR) indicator, developed by SCImago from the widely known Google PageRank™ algorithm. This indicator shows the visibility of the journals contained in the Scopus® database since 1996.
    • SCImago is a research group from the Consejo Superior de Investigaciones Científicas (CSIC), University of Granada, Extremadura, Carlos III (Madrid) and Alcalá de Henares, dedicated to information analysis, representation and retrieval by means of visualisation techniques.

    ☢️❓The entire dataset is obtained from public and open-access data of ScimagoJR (SCImago Journal & Country Rank)
    ScimagoJR Journal Rank
    SCImagoJR About Us

    Available indicators:

    • SJR (SCImago Journal Rank) indicator: Expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years, i.e. weighted citations received in year X to documents published in the journal in years X-1, X-2 and X-3. See the detailed description of SJR (PDF).
    • H Index: The h index expresses the journal's number of articles (h) that have received at least h citations each. It quantifies both scientific productivity and scientific impact, and it is also applicable to scientists, countries, etc. (see the H-index Wikipedia definition).
    • Total Documents: Output of the selected period. All types of documents are considered, including citable and non-citable documents.
    • Total Documents (3 years): Documents published in the three previous years (documents from the selected year are excluded), i.e. when year X is selected, documents published in X-1, X-2 and X-3 are retrieved. All types of documents are considered, including citable and non-citable documents.
    • Citable Documents (3 years): Number of citable documents published by a journal in the three previous years (documents from the selected year are excluded). Only articles, reviews and conference papers are considered. Non-citable Docs. (available in the graphics) is the ratio of non-citable documents in the period considered.
    • Total Cites (3 years): Number of citations received in the selected year by a journal to the documents published in the three previous years, i.e. citations received in year X to documents published in years X-1, X-2 and X-3. All types of documents are considered.
    • Cites per Document (2 years): Average citations per document over a 2-year period. It is computed as the number of citations received by a journal in the current year to documents published in the two previous years, i.e. citations received in year X to documents published in years X-1 and X-2.
    • Cites per Document (3 years): Average citations per document over a 3-year period. It is computed as the number of citations received by a journal in the current year to documents published in the three previous years, i.e. citations received in year X to documents published in years X-1, X-2 and X-3.
    • Self Cites: Number of the journal's self-citations in the selected year to its own documents published in the three previous years, i.e. self-citations in year X to documents published in years X-1, X-2 and X-3. All types of documents are considered.
    • Cited Documents: Number of documents cited at least once in the three previous years, i.e. years X-1, X-2 and X-3.
    • Uncited Documents: Number of uncited documents in the three previous years, i.e. years X-1, X-2 and X-3.
    • Total References: Includes all the bibliographical references in a journal in the selected period.
    • References per Document: Average number of references per document in the selected year.
    • % International Collaboration: Ratio of documents whose affiliations include more than one country address.
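    The windowed citation ratios above all share one shape: citations received in year X to documents from a trailing window, divided by the number of documents in that window. A minimal sketch of the 2-year case (the function name and the numbers are illustrative, not drawn from the portal):

```python
def cites_per_doc_2y(cites_in_year_x, docs_x_minus_1, docs_x_minus_2):
    """Citations received in year X to documents published in years X-1 and X-2,
    divided by the number of documents published in those two years."""
    window_docs = docs_x_minus_1 + docs_x_minus_2
    return cites_in_year_x / window_docs if window_docs else 0.0

# 300 citations in year X to the 100 + 50 documents from X-1 and X-2:
print(cites_per_doc_2y(300, 100, 50))  # -> 2.0
```

    The 3-year variant only widens the window; the SJR indicator additionally weights each citation by the prestige of the citing journal.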
  4. Data from: The assessment of science: the relative merits of...

    • data.niaid.nih.gov
    • zenodo.org
    • +1 more
    zip
    Updated Oct 7, 2014
    + more versions
    Cite
    Adam Eyre-Walker; Nina Stoletzki (2014). The assessment of science: the relative merits of post-publication review, the impact factor and the number of citations [Dataset]. http://doi.org/10.5061/dryad.2h4j5
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 7, 2014
    Dataset provided by
    University of Sussex
    Hannover, Germany
    Authors
    Adam Eyre-Walker; Nina Stoletzki
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Background: The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper and the impact factor of the journal in which the article was published. Methodology/principal findings: We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. Conclusions: We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.

  5. Data from: A study of the impact of data sharing on article citations using...

    • dataverse.harvard.edu
    • search.dataone.org
    • +1 more
    application/gzip +13
    Updated Sep 4, 2020
    Cite
    Harvard Dataverse (2020). A study of the impact of data sharing on article citations using journal policies as a natural experiment [Dataset]. http://doi.org/10.7910/DVN/ORTJT5
    Explore at:
    Available download formats: text/x-stata-syntax (519), txt (0), png (15306), type/x-r-syntax (569), jar (21709328), pdf (65387), tsv (35864), text/markdown (125), bin (26), application/gzip (111839), text/x-python (0), application/x-stata-syntax (720), tex (3986), text/plain; charset=us-ascii (91)
    Dataset updated
    Sep 4, 2020
    Dataset provided by
    Harvard Dataverse
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before their change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (estimated standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.

  6. Data of the article "Journal research data sharing policies: a study of...

    • zenodo.org
    Updated May 26, 2021
    + more versions
    Cite
    Antti Rousi; Antti Rousi (2021). Data of the article "Journal research data sharing policies: a study of highly-cited journals in neuroscience, physics, and operations research" [Dataset]. http://doi.org/10.5281/zenodo.3635511
    Explore at:
    Dataset updated
    May 26, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Antti Rousi; Antti Rousi
    Description

    The journals’ author guidelines and/or editorial policies were examined on whether they take a stance on the availability of the underlying data of a submitted article. The mere explicated possibility of providing supplementary material along with the submitted article was not considered a research data policy in the present study. Furthermore, the present study excluded source code and algorithms from its scope, so policies related to them are not included in the analysis.

    For the selection of journals within the field of neurosciences, Clarivate Analytics’ InCites Journal Citation Reports database was searched using the categories of neurosciences and neuroimaging. From the results, the journals with the 40 highest Impact Factor indicators (for the year 2017) were extracted for scrutiny of their research data policies. The selection of journals within the field of physics was created similarly, using the categories of physics, applied; physics, atomic, molecular & chemical; physics, condensed matter; physics, fluids & plasmas; physics, mathematical; physics, multidisciplinary; physics, nuclear; and physics, particles & fields. From the results, the journals with the 40 highest Impact Factor indicators were again extracted. Similarly, the 40 journals representing the field of operations research were extracted using the search category of operations research and management.

    Journal-specific data policies were sought from journal websites providing author guidelines or editorial policies. The examination of journal data policies was done in May 2019. The primary data source was journal-specific author guidelines. If journal guidelines explicitly linked to the publisher’s general policy on research data, that policy was used in the analyses. If a journal-specific research data policy, or the lack of one, was inconsistent with the publisher’s general policies, the journal-specific policies and guidelines were prioritized. If a journal’s author guidelines were not openly available online, e.g. because it accepts submissions on an invite-only basis, the journal was not included in the data. Journals that exclusively publish review articles were likewise excluded and replaced with the journal having the next highest Impact Factor indicator, so that each set representing the three fields of science consisted of 40 journals. The final data thus consisted of 120 journals in total.

    ‘Public deposition’ refers to a scenario where the researcher deposits data in a public repository and thus hands the administrative role over the data to the receiving repository. ‘Scientific sharing’ refers to a scenario where the researcher administers his or her data locally and provides it on request to interested readers. Note that none of the journals examined in the present article required that all data types underlying a submitted work be deposited in a public data repository. However, some journals required public deposition of data of specific types. Within the journal research data policies examined here, these data types are well represented by the Springer Nature policy on “Availability of data, materials, code and protocols” (Springer Nature, 2018), that is, DNA and RNA data; protein sequences and DNA and RNA sequencing data; genetic polymorphisms data; linked phenotype and genotype data; gene expression microarray data; proteomics data; macromolecular structures and crystallographic data for small molecules. Furthermore, the registration of clinical trials in a public repository was also considered a data type in this study. The term ‘specific data types’ used in the custom coding framework of the present study thus refers to both life sciences data and the public registration of clinical trials. These data types have community-endorsed public repositories, where deposition was most often mandated within the journals’ research data policies.

    The term ‘location’ refers to whether the journal’s data policy provides suggestions or requirements for the repositories or services used to share the underlying data of submitted works. A mere general reference to ‘public repositories’ was not considered a location suggestion; only references to individual repositories and services were. The category of ‘immediate release of data’ examines whether the journal’s research data policy addresses the timing of publication of the underlying data of submitted works. Note that even where a journal only encourages public deposition of data, the editorial processes may be set up so that they lead to publication of either the research data or its metadata in conjunction with publication of the submitted work.

  7. Data from: Reliance on Science in Patenting

    • zenodo.org
    • explore.openaire.eu
    pdf, tsv, zip
    Updated Jul 22, 2024
    Cite
    Matt Marx; Matt Marx; Aaron Fuegi; Aaron Fuegi (2024). Reliance on Science in Patenting [Dataset]. http://doi.org/10.5281/zenodo.3382981
    Explore at:
    Available download formats: pdf, zip, tsv
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Matt Marx; Matt Marx; Aaron Fuegi; Aaron Fuegi
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    This dataset contains citations from USPTO patents granted 1947-2018 to articles captured by the Microsoft Academic Graph (MAG) from 1800-2018. If you use the data, please cite these two papers:

    for the dataset of citations: Marx, Matt and Aaron Fuegi, "Reliance on Science in Patenting: USPTO Front-Page Citations to Scientific Articles" (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3331686)

    for the underlying dataset of papers: Sinha, Arnab, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June (Paul) Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In Proceedings of the 24th International Conference on World Wide Web (WWW ’15 Companion). ACM, New York, NY, USA, 243-246.

    The main file, pcs.tsv, contains the resolved citations. Fields are tab-separated. Each match has the patent number, MAG ID, the original citation from the patent, an indicator for whether the citation was supplied by the applicant, examiner, or unknown, and a confidence score (1-10) indicating how likely this match is correct. Note that this distribution does not contain matches with confidence 2 or 1.
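    A tab-separated file with those five fields can be filtered by confidence score using Python's standard csv module. A minimal sketch, assuming hypothetical header names and sample rows (the actual column names of pcs.tsv may differ):

```python
import csv
import io

# Hypothetical rows mimicking the documented layout: patent number, MAG ID,
# original citation text, who supplied it (applicant/examiner/unknown),
# and a 1-10 confidence score. These values are made up for illustration.
sample = (
    "patent\tmagid\tcitation\tsource\tconfscore\n"
    "10000001\t2100000001\tSmith et al., Nature, 2005\tapplicant\t10\n"
    "10000002\t2100000002\tDoe, Science, 2010\texaminer\t4\n"
)

with io.StringIO(sample) as f:
    reader = csv.DictReader(f, delimiter="\t")
    # Keep only high-confidence matches (score 8 or above).
    high_conf = [row for row in reader if int(row["confscore"]) >= 8]

print([row["patent"] for row in high_conf])  # -> ['10000001']
```

    Replacing the StringIO with `open("pcs.tsv")` would apply the same filter to the real file.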

    There is also a PubMed-specific match in pcs-pubmed.tsv.

    The remaining files are a redistribution of the 1 January 2019 release of the Microsoft Academic Graph. All of these files are compressed using ZIP compression under CentOS5. Original files, documented at https://docs.microsoft.com/en-us/academic-services/graph/reference-data-schema, can be downloaded from https://aka.ms/msracad; this redistribution carves up the original files into smaller, variable-specific files that can be loaded individually (see _relianceonscience.pdf for full details).

    Source code for generating the patent citations to science in pcs.tsv is available at https://github.com/mattmarx/reliance_on_science. Source code for generating jif.zip and jcif.zip (Journal Impact Factor and Journal Commercial Impact Factor) is at https://github.com/mattmarx/jcif.

    Although MAG contains authors and affiliations for each paper, it does not contain the location for affiliations. We have created a dataset of locations for affiliations appearing at least 100x using Bing Maps and Google Maps; however, it is unclear to us whether the API licensing terms allow us to repost their data. In any case, you can download our source code for doing so here: https://github.com/ksjiaxian/api-requester-locations.

    MAG extracts field keywords for each paper (paperfieldid.zip and fieldidname.zip), more than 200,000 fields in all! When looking to study industries or technical areas you might find this a bit overwhelming, so we mapped the MAG subjects to the six OECD fields and 39 subfields defined here: http://www.oecd.org/science/inno/38235147.pdf. Clarivate provides a crosswalk between the OECD classifications and Web of Science fields, so we include WoS fields as well. This file is magfield_oecd_wos_crosswalk.zip.

  8. Data from: Evaluating the Impact of Altmetrics

    • dataone.org
    • search.dataone.org
    • +1 more
    Updated Jan 3, 2013
    + more versions
    Cite
    Drew Wright (2013). Evaluating the Impact of Altmetrics [Dataset]. http://identifiers.org/ark:/90135/q17m05w4/2/mrt-eml.xml
    Explore at:
    Dataset updated
    Jan 3, 2013
    Dataset provided by
    ONEShare Repository
    Authors
    Drew Wright
    Description

    Librarians, publishers, and researchers have long placed significant emphasis on journal metrics such as the impact factor. However, these tools do not take into account the impact of research outside of citations and publications. Altmetrics seek to describe the reach of scholarly activity across the Internet and social media to paint a more vivid picture of the scholarly landscape. In order to examine the impact of altmetrics on scholarly activity, it is helpful to compare these new tools to an existing method. Citation counts are currently the standard for determining the impact of a scholarly work, and two studies were conducted to examine the correlation between citation count and altmetric data. First, a set of highly cited papers was chosen across a variety of disciplines, and their citation counts were compared with the altmetrics generated from Total-Impact.org. Second, to evaluate the hypothesized increased impact of altmetrics on recently published articles, a set of articles published in 2011 were taken from a sampling of journals with high impact factors, both subscription-based and open access, and the altmetrics were then compared to their citation counts.

  9. Data for: Social Media Analysis of High-Impact Information and Communication...

    • dataverse.csuc.cat
    • produccioncientifica.ugr.es
    tsv, txt
    Updated Apr 7, 2025
    Cite
    Jesús Daniel Cascón Katchadourian; Jesús Daniel Cascón Katchadourian; Javier Guallar; Javier Guallar; Wileidys Artigas; Wileidys Artigas (2025). Data for: Social Media Analysis of High-Impact Information and Communication Journals: Adoption, Use and Content Curation [Dataset]. http://doi.org/10.34810/data1905
    Explore at:
    Available download formats: tsv (16146), tsv (50320), txt (3050)
    Dataset updated
    Apr 7, 2025
    Dataset provided by
    CORA.Repositori de Dades de Recerca
    Authors
    Jesús Daniel Cascón Katchadourian; Jesús Daniel Cascón Katchadourian; Javier Guallar; Javier Guallar; Wileidys Artigas; Wileidys Artigas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset presents content curation practices in top-ranked scientific journals (Q1) within the Communication (COM) and Library and Information Science (LIS) categories, as classified by Scimago Journal Rank. It focuses on how these journals manage and publish content on social media, analyzing indicators such as total and curated publications, topics covered, techniques employed, and types of integration used in their curation strategies. The study aims to identify patterns, methods, and differences between the two categories, providing a comprehensive overview of academic content curation practices. The data was collected from March to April 2024.

  10. Food and Energy Security CiteScore 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated May 9, 2022
    + more versions
    Cite
    Research Help Desk (2022). Food and Energy Security CiteScore 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/sjr/536/food-and-energy-security
    Explore at:
    Dataset updated
    May 9, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Food and Energy Security CiteScore 2024-2025 - ResearchHelpDesk - Food and Energy Security is a high-quality, high-impact open access journal publishing original research on agricultural crop and forest productivity to improve food and energy security.

    Aims and Scope: Food and Energy Security seeks to publish high-quality, high-impact original research on agricultural crop and forest productivity to improve food and energy security. It actively seeks submissions from emerging countries with expanding agricultural research communities; papers from China, other parts of Asia, India, and South America are particularly welcome. The Editorial Board, headed by Editor-in-Chief Professor Christine Foyer, is determined to make FES the leading publication in its sector and is aiming for a top-ranking impact factor. Primary research articles should report hypothesis-driven investigations that provide new insights into mechanisms and processes that determine productivity and properties for exploitation. Review articles are welcome, but they must be critical in approach and provide particularly novel and far-reaching insights. Food and Energy Security offers authors a forum for the discussion of the most important advances in this field and promotes an integrative approach across scientific disciplines. Papers must contribute substantially to the advancement of knowledge. Areas covered include: Agronomy; Biotechnological Approaches; Breeding & Genetics; Climate Change; Quality and Composition; Food Crops and Bioenergy Feedstocks; Development, Physiology, and Biochemistry; Functional Genomics; Molecular Biology; Pest and Disease Management; Political, economic, and societal influences on food security and agricultural crop production; Post-Harvest Biology; Soil Science; Systems Biology. The journal is Open Access and published online.

    Submission of manuscripts to Food and Energy Security is exclusively via a web-based electronic submission and tracking system, enabling rapid submission-to-first-decision times. Before submitting a paper for publication, potential authors should first read the Author Guidelines; instructions on how to upload a manuscript can be found on ScholarOne Manuscripts.

    Keywords: Agricultural economics, Agriculture, Bioenergy, Biofuels, Biochemistry, Biotechnology, Breeding, Composition, Development, Diseases, Feedstocks, Food, Food Security, Food Safety, Forestry, Functional Genomics, Genetics, Horticulture, Pests, Phenomics, Plant Architecture, Plant Biotechnology, Plant Science, Quality Traits, Secondary Metabolites, Social policies, Weed Science.

    Abstracting and Indexing Information: Abstracts on Hygiene & Communicable Diseases (CABI); AgBiotechNet (CABI); AGRICOLA Database (National Agricultural Library); Agricultural Economics Database (CABI); Animal Breeding Abstracts (CABI); Animal Production Database (CABI); Animal Science Database (CABI); CAB Abstracts® (CABI); Current Contents: Agriculture, Biology & Environmental Sciences (Clarivate Analytics); Environmental Impact (CABI); Global Health (CABI); Nutrition & Food Sciences Database (CABI); Nutrition Abstracts & Reviews Series A: Human & Experimental (CABI); Plant Breeding Abstracts (CABI); Plant Genetics and Breeding Database (CABI); Plant Protection Database (CABI); Postharvest News & Information (CABI); Science Citation Index Expanded (Clarivate Analytics); SCOPUS (Elsevier); Seed Abstracts (CABI); Soil Science Database (CABI); Soils & Fertilizers Abstracts (CABI); Web of Science (Clarivate Analytics); Weed Abstracts (CABI); Wheat, Barley & Triticale Abstracts (CABI); World Agricultural Economics & Rural Sociology Abstracts (CABI).

    Society Information: The Association of Applied Biologists is a registered charity (No. 275655) that was founded in 1904. The Association's overall aim is: 'To promote the study and advancement of all branches of Biology and in particular (but without prejudice to the generality of the foregoing), to foster the practice, growth, and development of applied biology, including the application of biological sciences for the production and preservation of food, fiber, and other materials and for the maintenance and improvement of earth's physical environment'.

  11. Data from: The effectiveness of journals as arbiters of scientific quality

    • zenodo.org
    • data.niaid.nih.gov
    • +2 more
    Updated May 31, 2022
    Cite
    C. E. Timothy Paine; Charles W. Fox (2022). Data from: The effectiveness of journals as arbiters of scientific quality [Dataset]. http://doi.org/10.5061/dryad.6nh4fc2
    Explore at:
    Dataset updated
    May 31, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    C. E. Timothy Paine; Charles W. Fox
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Academic publishers purport to be arbiters of knowledge, aiming to publish studies that advance the frontiers of their research domain. Yet the effectiveness of journal editors at identifying novel and important research is generally unknown, in part because of the confidential nature of the editorial and peer-review process. Using questionnaires, we evaluated the degree to which journals are effective arbiters of scientific impact in the domain of Ecology, quantified by three key criteria. First, journals discriminated against low-impact manuscripts: the probability of rejection increased as the number of citations gained by the published paper decreased. Second, journals were more likely to publish high-impact manuscripts (those that obtained citations in the 90th percentile for their journal) than run-of-the-mill manuscripts; editors were only 23% and 41% as likely to reject an eventual high-impact paper (pre- versus post-review rejection) as a run-of-the-mill paper. Third, editors did occasionally reject papers that went on to be highly cited. Error rates were low, however: only 3.8% of rejected papers gained more citations than the median article in the journal that rejected them, and only 9.2% of rejected manuscripts went on to be high-impact papers in the (generally lower impact factor) publishing journal. The effectiveness of scientific arbitration increased with journal prominence, although some highly prominent journals were no more effective than much less prominent ones. We conclude that the academic publishing system, founded on peer review, appropriately recognises the significance of research contained in manuscripts, as measured by the number of citations that manuscripts obtain after publication, even though some errors are made. We therefore recommend that authors reduce publication delays by choosing journals appropriate to the significance of their research.

  12. International Journal of Scientific and Technology Research FAQ -...

    • researchhelpdesk.org
    Updated Jun 8, 2022
    + more versions
    Cite
    Research Help Desk (2022). International Journal of Scientific and Technology Research FAQ - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/faq/560/international-journal-of-scientific-and-technology-research
    Explore at:
    Dataset updated
    Jun 8, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    International Journal of Scientific and Technology Research FAQ - ResearchHelpDesk - IJSTR - International Journal of Scientific & Technology Research is an open access international journal covering diverse fields in science, engineering, and technology that emphasizes new research, development, and applications. Papers reporting original research or extended versions of already published conference/journal papers are all welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability. IJSTR ensures a wide indexing policy to make published papers highly visible to the scientific community. IJSTR is part of the eco-friendly community and favors the e-publication mode, being an online 'GREEN journal'. IJSTR is an international peer-reviewed, electronic, online journal published monthly. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering, science, and technology. Original theoretical work and application-based studies, which contribute to a better understanding of engineering, science, and technological challenges, are encouraged.

    IJSTR Publication Charges: IJSTR covers its costs partially through article processing fees. Expenses are split among peer review administration and management, production of articles in PDF format, editorial costs, electronic composition and production, the journal information system, the manuscript management system, electronic archiving, overhead expenses, and administrative costs. Moreover, publication costs are kept to a minimum: there are no charges for rejected articles, no submission charges, and no surcharges based on the number of figures or supplementary data.

    IJSTR Publication Indexing: IJSTR submits all published papers to its indexing partners. Indexing depends entirely on the content, the indexing partner's guidelines, and their indexing procedures; this is why indexing sometimes happens immediately and sometimes takes time. Publication with IJSTR does not guarantee that a paper will be added to an indexing partner's website. The whole process of including any article in the Scopus database is handled by the Scopus team alone; neither the journal nor the publishing house has any involvement in the decision to accept or reject a paper for the Scopus database, and they cannot influence the processing time.

    International Journal of Scientific & Technology Research RG Journal Impact: 0.31* (*calculated using ResearchGate data, based on average citation counts from work published in this journal; the data used in the calculation may not be exhaustive). RG Journal Impact history: 2018/2019: 0.31; 2017: 0.34; 2016: 0.33; 2015: 0.36; 2014: 0.19.

    Is IJSTR Scopus indexed? Yes, IJSTR - International Journal of Scientific & Technology Research is Scopus indexed. Please visit IJSTR Scopus for more details.

  13. Data from: Availability of Study Protocols for Randomized Trials Published...

    • datacatalog.hshsl.umaryland.edu
    Updated Mar 27, 2024
    Cite
    Peter Doshi; O'Mareen Spence; Kyungwan Hong; Richie Onwuchekwa Uba (2024). Availability of Study Protocols for Randomized Trials Published in High-Impact Medical Journals: A Cross-Sectional Analysis [Dataset]. http://doi.org/10.5281/zenodo.1344634
    Explore at:
    Dataset updated
    Mar 27, 2024
    Dataset provided by
    HS/HSL
    Authors
    Peter Doshi; O'Mareen Spence; Kyungwan Hong; Richie Onwuchekwa Uba
    Description

    To improve reporting transparency and research integrity, some journals have begun publishing study protocols and statistical analysis plans alongside trial publications. To determine the overall availability and characteristics of protocols and statistical analysis plans, this study reviewed all randomized clinical trials (RCTs) published in 2016 in the following five general medicine journals: Annals of Internal Medicine, BMJ, JAMA, Lancet, and NEJM. Characteristics of RCTs were extracted from the publication and the clinical trial registry. A detailed assessment of protocols and statistical analysis plans was conducted in a 20% random sample of trials. The dataset contains extraction sheets (as SAS data files), code to calculate the values in the tables in the manuscript, and a supplemental file with additional notes on the methods used in the study.

  14. Data from: Forecasting the publication and citation outcomes of Covid-19...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Sep 27, 2022
    Cite
    Thomas Pfeiffer; Michael Gordon; Michael Bishop; Yiling Chen; Brandon Goldfedder; Anna Dreber; Felix Holzmeister; Magnus Johannesson; Yang Liu; Charles Twardy; Juntao Wang; Luisa Tran (2022). Forecasting the publication and citation outcomes of Covid-19 preprints [Dataset]. http://doi.org/10.5061/dryad.rfj6q57d0
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 27, 2022
    Dataset provided by
    Stockholm School of Economics
    Jacobs Engineering Group
    Massey University
    University of California, Santa Cruz
    Gold Brand Software
    Michael Bishop Consulting
    Harvard University
    Universität Innsbruck
    Authors
    Thomas Pfeiffer; Michael Gordon; Michael Bishop; Yiling Chen; Brandon Goldfedder; Anna Dreber; Felix Holzmeister; Magnus Johannesson; Yang Liu; Charles Twardy; Juntao Wang; Luisa Tran
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    The scientific community reacted quickly to the Covid-19 pandemic in 2020, generating an unprecedented increase in publications. Many of these publications were released on preprint servers such as medRxiv and bioRxiv. It is unknown, however, how reliable these preprints are and whether they will eventually be published in scientific journals. In this study, we use crowdsourced human forecasts to predict publication outcomes and future citation counts for a sample of 400 preprints with high Altmetric scores. Most of these preprints were published within one year of upload to a preprint server (70%), and 46% of the published preprints appeared in a high-impact journal with a Journal Impact Factor of at least 10. On average, the preprints received 162 citations within the first year. We found that forecasters can predict whether preprints will be published after one year and whether the publishing journal has high impact. Forecasts are also informative with respect to preprints' rankings in terms of Google Scholar citations within one year of upload to a preprint server. For both types of assessment, we found statistically significant positive correlations between forecasts and observed outcomes. While the forecasts can help to provide a preliminary assessment of preprints at a faster pace than the traditional peer-review process, it remains to be investigated whether such an assessment is suited to identifying methodological problems in preprints. Methods: The dataset consists of survey responses collected through Qualtrics. Data were formatted and stored as .csv and analysed with R.
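The forecast assessment described above rests on rank correlations between forecasts and observed outcomes. A self-contained sketch of Spearman's rho (the usual rank-correlation statistic) on made-up numbers, not the study's data:

```python
# Spearman's rho: Pearson correlation computed on ranks.
# Forecast scores and citation counts below are illustrative only.

def ranks(values):
    """Return 1-based ranks, averaging ranks over ties (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation of two equal-length sequences."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

forecasts = [0.9, 0.4, 0.7, 0.2, 0.8]  # hypothetical forecast scores
citations = [150, 20, 90, 5, 300]      # hypothetical 1-year citation counts
print(round(spearman(forecasts, citations), 2))  # → 0.9
```

A positive rho close to 1 means the forecasters' ordering of preprints closely tracks the eventual citation ordering, which is the kind of relationship the study reports.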

  15. Data for: Agreement in Reporting between Trial Publications and Current...

    • data.mendeley.com
    • explore.openaire.eu
    Updated Jan 5, 2018
    Cite
    Sarah Daisy Kosa (2018). Data for: Agreement in Reporting between Trial Publications and Current Clinical Trial Registry in High Impact Journals: A Methodological Survey [Dataset]. http://doi.org/10.17632/dnhsxxhffc.1
    Explore at:
    Dataset updated
    Jan 5, 2018
    Authors
    Sarah Daisy Kosa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The agreement between the registry and published paper for the 200 published articles, as well as study characteristics.

  16. Dataset for Spence et al., "Availability of study protocols for randomized...

    • zenodo.org
    bin
    Updated Jan 24, 2020
    Cite
    O'Mareen Spence; Kyungwan Hong; Richie Onwuchekwa Uba; Peter Doshi (2020). Dataset for Spence et al., "Availability of study protocols for randomized trials published in high-impact medical journals: cross-sectional analysis" (CITATION) [Dataset]. http://doi.org/10.5281/zenodo.3368770
    Explore at:
    bin (available download formats)
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    O'Mareen Spence; Kyungwan Hong; Richie Onwuchekwa Uba; Peter Doshi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contains our extraction sheets (as SAS data files), code to calculate the values in the tables in our manuscript, and a supplemental file with additional notes on methods used in our study.

  17. Data of top 50 most cited articles about COVID-19 and the complications of...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated Jan 10, 2024
    Cite
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati (2024). Data of top 50 most cited articles about COVID-19 and the complications of COVID-19 [Dataset]. http://doi.org/10.5061/dryad.tx95x6b4m
    Explore at:
    zip (available download formats)
    Dataset updated
    Jan 10, 2024
    Dataset provided by
    Kasturba Medical College, Mangalore
    Authors
    Tanya Singh; Jagadish Rao Padubidri; Pavanchand Shetty H; Matthew Antony Manoj; Therese Mary; Bhanu Thejaswi Pallempati
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Background: This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.
    Methods: Identifying the top 50 articles from a pool of 5940 in Scopus, the analysis spans November 2019 to July 2021, employing terms related to COVID-19 and complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors like title, citations, publication year, journal, impact factor, authors, study details, and patient demographics.
    Results: The focus is primarily on 2020 publications (96%), with all articles being open-access. Leading journals include The Lancet, NEJM, and JAMA, with prominent contributions from Internal Medicine (46.9%) and Pulmonary Medicine (14.5%). China played a major role (34.9%), followed by France and Belgium. Clinical features were the primary study topic (68%), often utilizing retrospective designs (24%). Among 22,477 patients analyzed, 54.8% were male, with the most common age group being 26-65 years (63.2%). Complications affected 13.9% of patients, with a recovery rate of 57.8%.
    Conclusion: Analyzing these top-cited articles offers clinicians and researchers a comprehensive, timely understanding of influential COVID-19 literature. This approach uncovers attributes contributing to high citations and provides authors with valuable insights for crafting impactful research.
    As a strategic tool, this analysis facilitates staying updated and making meaningful contributions to the dynamic field of COVID-19 research.
    Methods: A bibliometric analysis of the most cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier's Scopus and Thomson Reuters' Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical specialty, or electronic availability of articles or abstracts. The terms were combined as follows: (“COVID-19” OR “COVID19” OR “SARS-COV-2” OR “SARSCOV2” OR “SARS 2” OR “Novel coronavirus” OR “2019-nCov” OR “Coronavirus”) AND (“Complication” OR “Long Term Complication” OR “Post-Intensive Care Syndrome” OR “Venous Thromboembolism” OR “Acute Kidney Injury” OR “Acute Liver Injury” OR “Post COVID-19 Syndrome” OR “Acute Cardiac Injury” OR “Cardiac Arrest” OR “Stroke” OR “Embolism” OR “Septic Shock” OR “Disseminated Intravascular Coagulation” OR “Secondary Infection” OR “Blood Clots” OR “Cytokine Release Syndrome” OR “Paediatric Inflammatory Multisystem Syndrome” OR “Vaccine Induced Thrombosis with Thrombocytopenia Syndrome” OR “Aspergillosis” OR “Mucormycosis” OR “Autoimmune Thrombocytopenia Anaemia” OR “Immune Thrombocytopenia” OR “Subacute Thyroiditis” OR “Acute Respiratory Failure” OR “Acute Respiratory Distress Syndrome” OR “Pneumonia” OR “Subcutaneous Emphysema” OR “Pneumothorax” OR “Pneumomediastinum” OR “Encephalopathy” OR “Pancreatitis” OR “Chronic Fatigue” OR “Rhabdomyolysis” OR “Neurologic Complication” OR “Cardiovascular Complications” OR “Psychiatric Complication” OR “Respiratory Complication” OR “Cardiac Complication” OR “Vascular Complication” OR “Renal Complication” OR “Gastrointestinal Complication” OR “Haematological Complication” OR “Hepatobiliary Complication” OR “Musculoskeletal Complication” OR “Genitourinary Complication” OR “Otorhinolaryngology Complication” OR “Dermatological Complication” OR “Paediatric Complication” OR “Geriatric Complication” OR “Pregnancy Complication”) in the Title, Abstract or Keyword. A total of 5940 articles were accessed, of which the top 50 most cited articles about COVID-19 and complications of COVID-19 were selected through Scopus. Each article was reviewed for its appropriateness for inclusion. The articles were independently reviewed by three researchers (JRP, MAM and TS) (Table 1). Differences in opinion with regard to article inclusion were resolved by consensus. The inclusion criteria specified articles that were focused on COVID-19 and complications of COVID-19. Articles were excluded if they did not relate to COVID-19 and/or complications of COVID-19, or were basic science research or studies using animal models or phantoms. Review articles, Viewpoints, Guidelines, Perspectives, and Meta-analyses were also excluded from the top 50 most-cited articles (Table 1). The top 50 most-cited articles were compiled in a single database and the relevant data were extracted. The database included: Article Title, Scopus Citations, Year of Publication, Journal, Journal Impact Factor, Authors, Number of Authors, Department Affiliation, Number of Institutions, Country of Origin, Study Topic, Study Design, Sample Size, Open Access, Non-Original Articles, Patient/Participant Age, Gender, Symptoms, Signs, Co-morbidities, Complications, Imaging Modalities Used and Outcome.
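Boolean searches of the kind described above are often assembled programmatically from term lists. A minimal sketch with abbreviated, hypothetical term lists (the full lists appear in the entry text):

```python
# Assemble an (A OR B OR ...) AND (X OR Y OR ...) search string,
# as used for database queries like the one in this entry.
# Term lists below are abbreviated for illustration.
covid_terms = ["COVID-19", "COVID19", "SARS-COV-2", "Novel coronavirus", "Coronavirus"]
complication_terms = ["Complication", "Acute Kidney Injury", "Stroke", "Pneumonia"]

def or_group(terms):
    """Quote each term and join with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{or_group(covid_terms)} AND {or_group(complication_terms)}"
print(query)
```

Building the string this way keeps the term lists editable in one place and guarantees consistent quoting and parenthesization across re-runs of the search.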

  18. Randomized controlled oncology trials with tumor stage inclusion criteria

    • search.dataone.org
    • data.niaid.nih.gov
    • +2 more
    Updated Jan 5, 2025
    Cite
    Paul Windisch; Daniel R. Zwahlen (2025). Randomized controlled oncology trials with tumor stage inclusion criteria [Dataset]. http://doi.org/10.5061/dryad.g4f4qrfzn
    Explore at:
    Dataset updated
    Jan 5, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Paul Windisch; Daniel R. Zwahlen
    Time period covered
    Jun 22, 2024
    Description

    Background: Extracting inclusion and exclusion criteria in a structured, automated fashion remains a challenge to developing better search functionalities or automating systematic reviews of randomized controlled trials in oncology. The question “Did this trial enroll patients with localized disease, metastatic disease, or both?” could be used to narrow down the number of potentially relevant trials when conducting a search. Dataset collection: 600 randomized controlled trials from high-impact medical journals were classified depending on whether they allowed for the inclusion of patients with localized and/or metastatic disease. The dataset was randomly split into a training/validation set and a test set of 500 and 100 trials respectively; however, the sets can be merged to allow for different splits. Data properties: Each trial is a row in the csv file. For each trial there is a doi, a publication date, a title, an abstract, the abstract sections (introduction, methods, results, conclus..., Randomized controlled oncology trials from seven major journals (British Medical Journal, JAMA, JAMA Oncology, Journal of Clinical Oncology, Lancet, Lancet Oncology, New England Journal of Medicine) published between 2005 and 2023 were randomly sampled and annotated with the labels “LOCAL”, “METASTATIC”, both, or none. Trials that allowed for the inclusion of patients with localized disease received the label “LOCAL”. Trials that allowed for the inclusion of patients with metastatic disease received the label “METASTATIC”. Trials that allowed for the inclusion of patients with either localized or metastatic disease received both labels. Screening trials that enrolled patients without known cancer, or trials of interventions to prevent cancer, were assigned no label. Trials of tumor entities where the distinction between localized and metastatic disease is usually not made (e.g., hematologic malignancies) were skipped. Annotation was based on the title and abstract.
    If those were inconclusiv...

    Randomized controlled oncology trials with tumor stage inclusion criteria

    https://doi.org/10.5061/dryad.g4f4qrfzn

    600 randomized controlled oncology trials from high-impact medical journals (British Medical Journal, JAMA, JAMA Oncology, Journal of Clinical Oncology, Lancet, Lancet Oncology, New England Journal of Medicine) published between 2005 and 2023 were randomly sampled and classified depending on whether they allowed for the inclusion of patients with localized and/or metastatic disease. The dataset was randomly split into a training/validation set and a test set of 500 and 100 trials respectively. However, the sets can be merged to allow for different splits.

    Description of the data and file structure

    Each trial is a row in the csv file. For each trial there are the following columns:

    • doi: Digital Object Identifier of the trial
    • date: Publication date according to PubMed
    • title: Title of the trial according to PubMed
    • ab...
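A minimal sketch of working with a per-trial CSV of this shape. The label column names below are assumptions for illustration (the column list above is truncated); check the actual header of the Dryad file before relying on them:

```python
# Read a small in-memory stand-in for the per-trial CSV and select
# trials labelled as enrolling patients with metastatic disease.
# Column names "label_local"/"label_metastatic" are hypothetical.
import csv
import io

sample = io.StringIO(
    "doi,date,title,label_local,label_metastatic\n"
    "10.1000/x1,2016-05-01,Trial A,1,0\n"
    "10.1000/x2,2019-11-12,Trial B,1,1\n"
    "10.1000/x3,2021-02-03,Trial C,0,1\n"
)
rows = list(csv.DictReader(sample))

metastatic = [r["doi"] for r in rows if r["label_metastatic"] == "1"]
print(metastatic)  # → ['10.1000/x2', '10.1000/x3']
```

The same pattern applies to the real file: open it with `csv.DictReader`, then filter on the label columns, or concatenate the train and test files before re-splitting.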
  19. Data from: High impact bug report identification with imbalanced learning...

    • researchdata.smu.edu.sg
    zip
    Updated Jun 1, 2023
    Cite
    YANG Xinli; David LO; Xin XIA; Qiao HUANG; Jianling SUN (2023). Data from: High impact bug report identification with imbalanced learning strategies [Dataset]. http://doi.org/10.25440/smu.12062763.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SMU Research Data Repository (RDR)
    Authors
    YANG Xinli; David LO; Xin XIA; Qiao HUANG; Jianling SUN
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This record contains the underlying research data for the publication "High impact bug report identification with imbalanced learning strategies"; the full text is available from https://ink.library.smu.edu.sg/sis_research/3702. In practice, some bugs have more impact than others and thus deserve more immediate attention. Due to tight schedules and limited human resources, developers may not have enough time to inspect all bugs, so they often concentrate on bugs that are highly impactful. In the literature, high-impact bugs refer to bugs that appear at unexpected times or locations and bring more unexpected effects (i.e., surprise bugs), or that break pre-existing functionality and destroy the user experience (i.e., breakage bugs). Unfortunately, identifying high-impact bugs among thousands of bug reports in a bug tracking system is not an easy feat. Thus, an automated technique that can identify high-impact bug reports can help developers become aware of them early, rectify them quickly, and minimize the damage they cause. Considering that only a small proportion of bugs are high-impact bugs, identifying high-impact bug reports is a difficult task. In this paper, we propose an approach to identify high-impact bug reports by leveraging imbalanced learning strategies. We investigate the effectiveness of various variants, each of which combines one particular imbalanced learning strategy and one particular classification algorithm. In particular, we choose four widely used strategies for dealing with imbalanced data and four state-of-the-art text classification algorithms to conduct experiments on four datasets from four different open source projects. We mainly perform an analytical study on two types of high-impact bugs, i.e., surprise bugs and breakage bugs.
    The results show that different variants have different performances: the best-performing variants, SMOTE (synthetic minority over-sampling technique) + KNN (K-nearest neighbours) for surprise bug identification and RUS (random under-sampling) + NB (naive Bayes) for breakage bug identification, outperform the F1-scores of the two state-of-the-art approaches by Thung et al. and by Garcia and Shihab. Supplementary code and data available from GitHub:
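The random under-sampling (RUS) strategy named above is simple to sketch: shrink the majority class to the size of the minority class before training a classifier such as naive Bayes. The helper below is a generic illustration, not the authors' code, applied to a hypothetical 90/10 split of ordinary vs. high-impact bug reports:

```python
# Random under-sampling (RUS): balance a labelled dataset by
# down-sampling each class to the minority-class size.
import random

def random_undersample(X, y, seed=0):
    """Return a class-balanced copy of (X, y) via under-sampling."""
    rng = random.Random(seed)
    by_label = {}
    for xi, yi in zip(X, y):
        by_label.setdefault(yi, []).append(xi)
    n_min = min(len(items) for items in by_label.values())
    Xb, yb = [], []
    for label, items in by_label.items():
        for xi in rng.sample(items, n_min):  # keep n_min per class
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

# 90 ordinary reports (label 0) vs 10 high-impact reports (label 1),
# with hypothetical one-dimensional feature vectors.
X = [[i] for i in range(100)]
y = [0] * 90 + [1] * 10
Xb, yb = random_undersample(X, y)
print(yb.count(0), yb.count(1))  # → 10 10
```

After balancing, any off-the-shelf classifier (the paper pairs RUS with naive Bayes) can be trained on `Xb, yb` without the majority class dominating the decision boundary.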

  20. Current Trends In Scientific Research On Global Warming: A Bibliometric...

    • explore.openaire.eu
    • data.niaid.nih.gov
    Updated Apr 13, 2018
    Cite
    R. Aleixandre-Benavent; J. L. Aleixandre-Tudó; M. Bolaños-Pizarro; J. L. Aleixandre (2018). Current Trends In Scientific Research On Global Warming: A Bibliometric Analysis (2005-2014) [Dataset]. http://doi.org/10.5281/zenodo.1218021
    Explore at:
    Dataset updated
    Apr 13, 2018
    Authors
    R. Aleixandre-Benavent; J. L. Aleixandre-Tudó; M. Bolaños-Pizarro; J. L. Aleixandre
    Description

    This dataset was created in the context of the project "Current trends in scientific research on global warming: A bibliometric analysis (2005-2014)". --- Global warming is a topic of increasing public importance, but scientometric studies on this topic have not been published. The objective of this paper is to contribute to a better understanding of scientific knowledge on global warming and its effects, and to investigate its evolution through the published papers included in the Web of Science database. Items under study were collected from Thomson Reuters' Web of Science database. Bibliometric and social network analyses were performed to obtain indicators of scientific productivity, impact, and collaboration between researchers, institutions, and countries. A subject analysis was also carried out, taking into account the keywords assigned to papers and the subject areas of journals. 1,672 articles published from 2005 to 2014 were analysed. The most productive journals were Journal of Climate (n=95) and Geophysical Research Letters (n=78). The most frequent keywords were Climate Change (n=722), Model (n=216), and Temperature (n=196). The network of collaboration between countries shows the central position of the United States, together with other leading countries such as the United Kingdom, Germany, France, and the People's Republic of China. Research on global warming grew steadily during the decade studied. A vast range of journals from several subject areas publish papers on the topic, including general-purpose journals with high impact factors. For almost all countries, the USA is the main collaborating partner. The analysis of keywords shows that topics related to climate change, impact, temperature, models, and variability are the most important concerns in global warming research.
    --- The dataset consists of the following:
    1) Papers.xlsx - the list of papers included in the analyses. Contains 1672 titles, each line representing a paper (title of the paper, journal ISSN, and year of publication).
    2) Authors.xlsx - the list of authors. Contains all 4488 authors, each line representing an author (full name, total number of papers, and year of publication).
    3) Journals.xlsx - the list of scientific journals. Contains all 687 journals, each line representing a journal (name of the journal, ISSN, total number of papers, and year of publication).
    4) Country.xlsx - the list of countries. Contains all 84 countries, each line representing a country (country name, total number of papers, total number of citations, and number of citations per paper).
    5) Keywords.xlsx - the list of keywords. Contains all 6422 keywords, each line representing a keyword (keywords, number of papers, and year of publication).
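Keyword counts of the kind reported above (e.g. Climate Change, n=722) come from a straightforward frequency tally over per-paper keyword lists. A toy sketch with made-up keywords (the real list lives in Keywords.xlsx, which would need an xlsx reader or a CSV export):

```python
# Tally keyword frequencies across a corpus of papers.
# The per-paper keyword lists here are hypothetical.
from collections import Counter

paper_keywords = [
    ["climate change", "model", "temperature"],
    ["climate change", "variability"],
    ["climate change", "temperature", "impact"],
]
counts = Counter(kw for kws in paper_keywords for kw in kws)
print(counts.most_common(2))  # → [('climate change', 3), ('temperature', 2)]
```

The same `Counter` pattern scales directly to thousands of papers and is the usual first step before plotting keyword-frequency tables in a bibliometric study.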

Cite
Abir (2023). Journal Ranking Dataset [Dataset]. https://www.kaggle.com/datasets/xabirhasan/journal-ranking-dataset

Data from: Journal Ranking Dataset

A dataset of journal ranking based on Scimago, Web of Science, and Scopus.

Dataset updated
Aug 15, 2023
Dataset provided by
Kaggle
Authors
Abir
License

https://creativecommons.org/publicdomain/zero/1.0/

Description

Journals & Ranking

According to Wikipedia, an academic journal or research journal is a periodical publication in which research articles relating to a particular academic discipline are published. Currently, more than 25,000 peer-reviewed journals are indexed in citation databases such as Scopus and Web of Science. Journals in these indexes are ranked on the basis of various metrics, such as CiteScore and H-index, which are calculated from the journal's yearly citation data. Considerable effort goes into designing metrics that reflect a journal's quality.

Journal Ranking Dataset

This is a comprehensive dataset on academic journals covering their metadata as well as citation, metric, and ranking information. Detailed data on each journal's subject areas is also included. The dataset is collected from the following indexing databases:

  • Scimago Journal Ranking
  • Scopus
  • Web of Science Master Journal List

The data was collected by web scraping and then cleaned; details of the process can be found HERE.

Key Features

  • Rank: Overall rank of journal (derived from sorted SJR index).
  • Title: Name or title of journal.
  • OA: Open Access or not.
  • Country: Country of origin.
  • SJR-index: A citation index calculated by Scimago.
  • CiteScore: A citation index calculated by Scopus.
  • H-index: Hirsch index, the largest number h such that at least h articles in that journal were cited at least h times each.
  • Best Quartile: Top Q-index or quartile a journal has in any subject area.
  • Best Categories: Subject areas with top quartile.
  • Best Subject Area: Highest ranking subject area.
  • Best Subject Rank: Rank of the highest ranking subject area.
  • Total Docs.: Total number of documents of the journal.
  • Total Docs. 3y: Total number of documents in the past 3 years.
  • Total Refs.: Total number of references of the journal.
  • Total Cites 3y: Total number of citations in the past 3 years.
  • Citable Docs. 3y: Total number of citable documents in the past 3 years.
  • Cites/Doc. 2y: Total number of citations divided by the total number of documents in the past 2 years.
  • Refs./Doc.: Total number of references divided by the total number of documents.
  • Publisher: Name of the publisher company of the journal.
  • Core Collection: Web of Science core collection name.
  • Coverage: Starting year of coverage.
  • Active: Active or inactive.
  • In-Press: Articles in press or not.
  • ISO Language Code: Three-letter ISO 639 code for language.
  • ASJC Codes: All Science Journal Classification codes for the journal.
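As an illustration of how the H-index listed above is defined, the following is a minimal sketch of the computation; the function name and the sample citation counts are hypothetical and not taken from the dataset:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    # Walk the citation counts in descending order; position i (1-based) is a
    # valid h as long as the i-th most-cited paper has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The same definition applies whether the citation counts belong to one author or, as in this dataset, to all articles of a journal.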

The rest of the features provide further detail on the journal's subject areas or categories:

  • Life Sciences: Top-level subject area.
  • Social Sciences: Top-level subject area.
  • Physical Sciences: Top-level subject area.
  • Health Sciences: Top-level subject area.
  • 1000 General: ASJC main category.
  • 1100 Agricultural and Biological Sciences: ASJC main category.
  • 1200 Arts and Humanities: ASJC main category.
  • 1300 Biochemistry, Genetics and Molecular Biology: ASJC main category.
  • 1400 Business, Management and Accounting: ASJC main category.
  • 1500 Chemical Engineering: ASJC main category.
  • 1600 Chemistry: ASJC main category.
  • 1700 Computer Science: ASJC main category.
  • 1800 Decision Sciences: ASJC main category.
  • 1900 Earth and Planetary Sciences: ASJC main category.
  • 2000 Economics, Econometrics and Finance: ASJC main category.
  • 2100 Energy: ASJC main category.
  • 2200 Engineering: ASJC main category.
  • 2300 Environmental Science: ASJC main category.
  • 2400 Immunology and Microbiology: ASJC main category.
  • 2500 Materials Science: ASJC main category.
  • 2600 Mathematics: ASJC main category.
  • 2700 Medicine: ASJC main category.
  • 2800 Neuroscience: ASJC main category.
  • 2900 Nursing: ASJC main category.
  • 3000 Pharmacology, Toxicology and Pharmaceutics: ASJC main category.
  • 3100 Physics and Astronomy: ASJC main category.
  • 3200 Psychology: ASJC main category.
  • 3300 Social Sciences: ASJC main category.
  • 3400 Veterinary: ASJC main category.
  • 3500 Dentistry: ASJC main category.
  • 3600 Health Professions: ASJC main category.
