100+ datasets found
  1. Data from: Journal Ranking Dataset

    • kaggle.com
    Updated Aug 15, 2023
    Cite
    Abir (2023). Journal Ranking Dataset [Dataset]. https://www.kaggle.com/datasets/xabirhasan/journal-ranking-dataset
    Explore at:
    Croissant, a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Aug 15, 2023
    Dataset provided by
    Kaggle
    Authors
    Abir
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Journals & Ranking

    An academic journal or research journal is a periodical publication in which research articles relating to a particular academic discipline are published, according to Wikipedia. Currently, more than 25,000 peer-reviewed journals are indexed in citation databases such as Scopus and Web of Science. The journals in these indexes are ranked on the basis of various metrics such as CiteScore, H-index, etc., which are calculated from each journal's yearly citation data. Considerable effort goes into designing a metric that reflects a journal's quality.

    Journal Ranking Dataset

    This is a comprehensive dataset on academic journals covering their metadata as well as citation, metric, and ranking information. Detailed data on each journal's subject areas is also included. The dataset is collected from the following indexing databases: Scimago Journal Ranking, Scopus, and the Web of Science Master Journal List.

    The data was collected by scraping and then cleaned; details can be found HERE.

    Key Features

    • Rank: Overall rank of journal (derived from sorted SJR index).
    • Title: Name or title of journal.
    • OA: Open Access or not.
    • Country: Country of origin.
    • SJR-index: A citation index calculated by Scimago.
    • CiteScore: A citation index calculated by Scopus.
    • H-index: Hirsch index, the largest number h such that at least h articles in that journal were cited at least h times each (see the sketch at the end of this entry).
    • Best Quartile: Top Q-index or quartile a journal has in any subject area.
    • Best Categories: Subject areas with top quartile.
    • Best Subject Area: Highest ranking subject area.
    • Best Subject Rank: Rank of the highest ranking subject area.
    • Total Docs.: Total number of documents of the journal.
    • Total Docs. 3y: Total number of documents in the past 3 years.
    • Total Refs.: Total number of references of the journal.
    • Total Cites 3y: Total number of citations in the past 3 years.
    • Citable Docs. 3y: Total number of citable documents in the past 3 years.
    • Cites/Doc. 2y: Total number of citations divided by the total number of documents in the past 2 years.
    • Refs./Doc.: Total number of references divided by the total number of documents.
    • Publisher: Name of the publisher company of the journal.
    • Core Collection: Web of Science core collection name.
    • Coverage: Starting year of coverage.
    • Active: Active or inactive.
    • In-Press: Articles in press or not.
    • ISO Language Code: Three-letter ISO 639 code for language.
    • ASJC Codes: All Science Journal Classification codes for the journal.

    The remaining features provide further detail on the journal's subject area or category:
    • Life Sciences, Social Sciences, Physical Sciences, Health Sciences: Top-level subject areas.
    • 1000 General, 1100 Agricultural and Biological Sciences, 1200 Arts and Humanities, 1300 Biochemistry, Genetics and Molecular Biology, 1400 Business, Management and Accounting, 1500 Chemical Engineering, 1600 Chemistry, 1700 Computer Science, 1800 Decision Sciences, 1900 Earth and Planetary Sciences, 2000 Economics, Econometrics and Finance, 2100 Energy, 2200 Engineering, 2300 Environmental Science, 2400 Immunology and Microbiology, 2500 Materials Science, 2600 Mathematics, 2700 Medicine, 2800 Neuroscience, 2900 Nursing, 3000 Pharmacology, Toxicology and Pharmaceutics, 3100 Physics and Astronomy, 3200 Psychology, 3300 Social Sciences, 3400 Veterinary, 3500 Dentistry, 3600 Health Professions: ASJC main categories.
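    The H-index definition in the key features above amounts to a small computation; here is a minimal sketch in Python (the citation counts are invented for illustration, since the dataset ships the H-index pre-computed):

    ```python
    # Minimal sketch: computing a journal's H-index from per-article citation counts.
    def h_index(citations):
        """Largest h such that at least h articles have >= h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three articles with >= 3 citations)
    ```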

  2. Paper manufacturing company ranking in Norway 2025, by turnover

    • statista.com
    • ai-chatbox.pro
    Updated Jul 11, 2025
    Cite
    Statista (2025). Paper manufacturing company ranking in Norway 2025, by turnover [Dataset]. https://www.statista.com/statistics/991743/ranking-of-companies-in-the-paper-manufacture-in-norway/
    Explore at:
    Dataset updated
    Jul 11, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Norway
    Description

    The turnover of Essity Norway AS amounted to over *** million Norwegian kroner. Rygene-Smith & Thommesen AS ranked second in Norway at the beginning of 2025, with a turnover of almost *** million Norwegian kroner.

  3. Tackling Class Imbalance with Ranking - Dataset - CKAN

    • rdm.inesctec.pt
    Updated Feb 20, 2017
    Cite
    (2017). Tackling Class Imbalance with Ranking - Dataset - CKAN [Dataset]. https://rdm.inesctec.pt/dataset/nis-2017-002
    Explore at:
    Dataset updated
    Feb 20, 2017
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The dataset comes originally from UCI Machine Learning. The multiclass datasets were transformed into binary classification problems, as described in the paper. Ranking methods were applied to address class imbalance. The datasets are divided into 30 folds so that other class-imbalance methods can be compared to the methods in the paper. The code used in the paper is also provided.

  4. How to Rank Journals

    • plos.figshare.com
    txt
    Updated May 31, 2023
    Cite
    Corey J. A. Bradshaw; Barry W. Brook (2023). How to Rank Journals [Dataset]. http://doi.org/10.1371/journal.pone.0149852
    Explore at:
    Available download formats: txt
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Corey J. A. Bradshaw; Barry W. Brook
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50); Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman’s ρ correlation between the two rankings datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
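    As a rough, self-contained illustration of a resampled composite rank (not the authors' exact κ-resampling procedure, and with invented metric values), one can rank journals under each citation index and bootstrap over the indices to obtain a mean composite rank with an uncertainty window:

    ```python
    # Illustrative sketch only: combine several citation metrics into a composite
    # rank and estimate rank uncertainty by resampling the set of metrics.
    import numpy as np

    rng = np.random.default_rng(0)
    journals = ["J1", "J2", "J3", "J4"]
    # rows = journals, columns = metrics (e.g. IF, Immediacy, SNIP, SJR, h5); invented values
    metrics = np.array([
        [4.2, 0.9, 1.3, 2.1, 35],
        [9.8, 2.1, 2.0, 4.0, 60],
        [1.1, 0.2, 0.7, 0.5, 12],
        [4.0, 1.0, 1.2, 2.2, 33],
    ])

    # Rank journals under each metric (1 = best), then average across a
    # bootstrap sample of metrics to get a composite rank and its spread.
    per_metric_ranks = metrics.shape[0] - metrics.argsort(axis=0).argsort(axis=0)
    samples = []
    for _ in range(1000):
        cols = rng.integers(0, metrics.shape[1], size=metrics.shape[1])
        samples.append(per_metric_ranks[:, cols].mean(axis=1))
    samples = np.array(samples)
    for j, name in enumerate(journals):
        low, high = np.percentile(samples[:, j], [2.5, 97.5])
        print(f"{name}: composite rank {samples[:, j].mean():.2f} [{low:.2f}, {high:.2f}]")
    ```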

  5. Paper and Paper Products Output

    • nationmaster.com
    Updated Mar 28, 2021
    Cite
    NationMaster (2021). Paper and Paper Products Output [Dataset]. https://www.nationmaster.com/nmx/ranking/paper-and-paper-products-output
    Explore at:
    Dataset updated
    Mar 28, 2021
    Dataset authored and provided by
    NationMaster
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    1975 - 2020
    Area covered
    Croatia, Belgium, Austria, Bulgaria, Greece, Lithuania, Ireland, Spain, Estonia, Italy
    Description

    Italy Paper and Paper Products Output grew 0.8% in 2019, compared to a year earlier.

  6. Top Universities Computer Science

    • kaggle.com
    Updated Apr 20, 2023
    Cite
    Bogdan Sorin Miauca (2023). Top Universities Computer Science [Dataset]. https://www.kaggle.com/datasets/bogdansorin/top-universities-computer-science
    Explore at:
    Croissant, a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 20, 2023
    Dataset provided by
    Kaggle
    Authors
    Bogdan Sorin Miauca
    Description

    The dataset contains the best universities in the computer science subject, ranked from 2020 to 2023; universities can change their rank and scores over the years.

    If you found this dataset useful, you can leave a like.

    Columns

    rank: The placement of the university relative to the others

    year: The year when the university was ranked

    cs_overall_score: The score of the university in the computer science subject

    cs_academic_reputation: Score of how well-known the university is academically

    cs_emloyer_reputation: Score of the reputation of the university's teachers and researchers

    cs_citations_per_paper: Score of the quality of the scientific papers, based on citations per paper

    cs_index_citations: Score of the citation index of the scientific papers

    cs_international_research: Score of international research participation

    locatio: Country and city where the university is located

  7. Anytime Ranking Data

    • figshare.unimelb.edu.au
    application/x-gzip
    Updated Jun 3, 2021
    Cite
    Joel Mackenzie; Matthias Petri; Alistair Moffat (2021). Anytime Ranking Data [Dataset]. http://doi.org/10.26188/14722455.v1
    Explore at:
    Available download formats: application/x-gzip
    Dataset updated
    Jun 3, 2021
    Dataset provided by
    The University of Melbourne
    Authors
    Joel Mackenzie; Matthias Petri; Alistair Moffat
    License

    https://www.apache.org/licenses/LICENSE-2.0.html

    Description

    This is the data repository for the paper Anytime Ranking on Document-Ordered Indexes by Joel Mackenzie, Matthias Petri, and Alistair Moffat. This paper appeared in ACM TOIS in 2021.

  8. Ranking the university

    • ssh.datastations.nl
    pdf
    Updated Jul 4, 2023
    Cite
    Universiteiten van Nederland (UNL) (2023). Ranking the university [Dataset]. http://doi.org/10.17026/SS/XGDX0H
    Explore at:
    Available download formats: pdf (583890), pdf (577697)
    Dataset updated
    Jul 4, 2023
    Dataset provided by
    DANS Data Station Social Sciences and Humanities
    Authors
    Universiteiten van Nederland (UNL)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The document consists of a Recommendation paper and an administrative response from Universities of The Netherlands. Short summary Universities of the Netherlands (UNL) has asked our expert group for an opinion on issues associated with university rankings in relation to Recognition & Rewards (R&R), a national programme in the Netherlands that aims to more broadly recognise and reward the work of academic staff (for more details on this initiative, see the position paper ‘Room for everyone’s talent’). We were also asked to propose solutions to these issues. As an expert group, we focus mainly on so-called league tables in the present opinion. These are one-dimensional university rankings that claim to reflect the overall performance of a university. Our opinion shows that league tables are unjustified in claiming to be able to sum up a university’s performance in the broadest sense in a single score. There is no universally accepted criterion for quantifying a university’s overall performance, and a generic weighing tool cannot do justice to a university’s strategic choice to excel in specific areas. Research, education, and impact achievements cannot be meaningfully combined to produce a one-dimensional overall score. Any attempt to do so will run into arbitrary and debatable decisions about how performance in these three core tasks should be weighted. Is research more important than education? Or is it the other way around? When a weighting system is applied that emphasises one of those core tasks, universities that excel in a different task are disadvantaged.

  9. Rank-DistiLLM Novelty

    • zenodo.org
    bin
    Updated Apr 2, 2025
    Cite
    Ferdinand Schlatt (2025). Rank-DistiLLM Novelty [Dataset]. http://doi.org/10.5281/zenodo.15124457
    Explore at:
    Available download formats: bin
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ferdinand Schlatt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the novelty ranking data for the paper Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders. The dataset is based on the Rank-DistiLLM dataset (Paper, Dataset).

    The two run files contain the top 100 retrieved passages for 10k queries from the MS MARCO passage training dataset, re-ranked by the RankZephyr model for BM25 and ColBERTv2, respectively. The central difference to the Rank-DistiLLM dataset files is that the run files in this dataset are additionally grouped by lexical similarity. All passages within a ranking are clustered according to their Jaccard similarity: any passages with a Jaccard similarity > 0.5 are grouped into a cluster and share the same id in the second column (the Q0 column in the standard TREC run format).

    The two qrel files are equivalent to the official TREC Deep Learning 2019 and 2020 qrels, but are also clustered by their lexical similarity (following the strategy outlined above) to enable evaluation of novelty-aware models.
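    A minimal sketch of the lexical grouping described above, using token-set Jaccard similarity with a 0.5 threshold and a simple greedy single-link pass (the dataset's actual clustering code may differ in detail):

    ```python
    # Passages whose token-set Jaccard similarity exceeds 0.5 share a cluster id.
    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def cluster_by_jaccard(passages, threshold=0.5):
        token_sets = [set(p.lower().split()) for p in passages]
        cluster_ids = [None] * len(passages)
        next_id = 0
        for i, toks in enumerate(token_sets):
            for j in range(i):
                if jaccard(toks, token_sets[j]) > threshold:
                    cluster_ids[i] = cluster_ids[j]
                    break
            if cluster_ids[i] is None:
                cluster_ids[i] = next_id
                next_id += 1
        return cluster_ids

    print(cluster_by_jaccard([
        "the cat sat on the mat",
        "the cat sat on a mat",
        "ranking with cross encoders",
    ]))  # -> [0, 0, 1]
    ```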

  10. Pulp for Paper Production

    • nationmaster.com
    Updated Feb 2, 2021
    Cite
    NationMaster (2021). Pulp for Paper Production [Dataset]. https://www.nationmaster.com/nmx/ranking/pulp-for-paper-production
    Explore at:
    Dataset updated
    Feb 2, 2021
    Dataset authored and provided by
    NationMaster
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    1961 - 2019
    Area covered
    Morocco, Germany, Taiwan, Israel, Switzerland, Hungary, Iraq, Thailand, Canada, Uzbekistan
    Description

    China Pulp for Paper Production jumped by 5.5% in 2019, compared to a year earlier.

  11. argument_quality_ranking_30k

    • huggingface.co
    Updated Nov 6, 2023
    Cite
    IBM Research (2023). argument_quality_ranking_30k [Dataset]. https://huggingface.co/datasets/ibm-research/argument_quality_ranking_30k
    Explore at:
    Croissant, a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Nov 6, 2023
    Dataset provided by
    IBM Research
    IBM (http://ibm.com/)
    Authors
    IBM Research
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    Dataset Card for Argument-Quality-Ranking-30k Dataset

    Dataset Summary

    Argument Quality Ranking

    The dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets. The dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis.

    Argument Topic

    This subset contains 9,487 of the arguments only with… See the full description on the dataset page: https://huggingface.co/datasets/ibm-research/argument_quality_ranking_30k.
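    A minimal loading sketch with the Hugging Face datasets library, assuming the default configuration loads without extra arguments (check the dataset card for the available configurations and split names):

    ```python
    # Load the dataset from the Hugging Face Hub and inspect one record.
    from datasets import load_dataset

    ds = load_dataset("ibm-research/argument_quality_ranking_30k")
    print(ds)              # shows the available splits
    print(ds["train"][0])  # inspect one crowd-sourced argument record
    ```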

  12. WORLD UNIVERSITY RANKING 2022 - 2023

    • kaggle.com
    Updated Sep 30, 2022
    Cite
    Aman Chauhan (2022). WORLD UNIVERSITY RANKING 2022 - 2023 [Dataset]. https://www.kaggle.com/whenamancodes/world-university-ranking-2022-2023/discussion
    Explore at:
    Croissant, a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Sep 30, 2022
    Dataset provided by
    Kaggle
    Authors
    Aman Chauhan
    Description

    ::: METHODOLOGY :::

    The Center for World University Rankings (CWUR) publishes the only academic ranking of global universities that assesses the quality of education, alumni employment, quality of faculty, and research performance without relying on surveys and university data submissions.

    CWUR uses seven objective and robust indicators grouped into four areas to rank the world’s universities:
    • Education: based on the academic success of a university’s alumni, measured by the number of a university's alumni who have won prestigious academic distinctions relative to the university's size (25%)
    • Employability: based on the professional success of a university’s alumni, measured by the number of a university's alumni who have held top positions at major companies relative to the university's size (25%)
    • Faculty: measured by the number of faculty members who have won prestigious academic distinctions (10%)
    • Research:
      i) Research Output: measured by the total number of research papers (10%)
      ii) High-Quality Publications: measured by the number of research papers appearing in top-tier journals (10%)
      iii) Influence: measured by the number of research papers appearing in highly-influential journals (10%)
      iv) Citations: measured by the number of highly-cited research papers (10%)
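    An illustrative sketch of how the seven weighted indicators above could be combined into a single score (indicator values are invented; CWUR's actual per-indicator normalization is not described here):

    ```python
    # Weighted sum of per-indicator scores, using the weights listed above.
    weights = {
        "education": 0.25,
        "employability": 0.25,
        "faculty": 0.10,
        "research_output": 0.10,
        "high_quality_publications": 0.10,
        "influence": 0.10,
        "citations": 0.10,
    }

    def cwur_style_score(indicator_scores: dict) -> float:
        """Weighted sum of per-indicator scores (each assumed on a 0-100 scale)."""
        return sum(weights[k] * indicator_scores[k] for k in weights)

    example = {k: 80.0 for k in weights}   # a hypothetical university
    print(cwur_style_score(example))       # -> 80.0, since all indicators are equal
    ```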


  13. Trends in Overall School Rank (2011-2022): Paper Mill

    • publicschoolreview.com
    Cite
    Public School Review, Trends in Overall School Rank (2011-2022): Paper Mill [Dataset]. https://www.publicschoolreview.com/paper-mill-profile
    Explore at:
    Dataset authored and provided by
    Public School Review
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset tracks annual overall school rank from 2011 to 2022 for Paper Mill.

  14. European forestry & paper company ranking based on employee numbers 2022

    • statista.com
    Updated Jul 8, 2025
    Cite
    Statista (2025). European forestry & paper company ranking based on employee numbers 2022 [Dataset]. https://www.statista.com/statistics/1131798/leading-forestry-and-paper-companies-in-europe-by-number-of-employees/
    Explore at:
    Dataset updated
    Jul 8, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Europe
    Description

    As of June 2022, Finland-based Stora Enso employed approximately ****** people. UPM Kymmene, which is also based in Finland, employed roughly ****** people. Both of these companies are amongst the world's leading forestry and paper companies.

  15. Data from: BIP! DB: A Dataset of Impact Measures for Scientific Publications...

    • explore.openaire.eu
    • zenodo.org
    • +2 more
    Updated Jan 1, 2021
    Cite
    Thanasis Vergoulis; Ilias Kanellos; Claudio Atzori; Andrea Mannocci; Serafeim Chatzopoulos; Sandro La Bruzzo; Natalia Manola; Paolo Manghi (2021). BIP! DB: A Dataset of Impact Measures for Scientific Publications [Dataset]. http://doi.org/10.5281/zenodo.6801467
    Explore at:
    Dataset updated
    Jan 1, 2021
    Authors
    Thanasis Vergoulis; Ilias Kanellos; Claudio Atzori; Andrea Mannocci; Serafeim Chatzopoulos; Sandro La Bruzzo; Natalia Manola; Paolo Manghi
    Description

    This dataset contains impact measures (metrics/indicators) for ~132M distinct DOIs that correspond to scientific articles. In particular, for each article we have calculated the following measures:
    • Citation count: The total number of citations, reflecting the "influence" (i.e., the total impact) of an article.
    • Incubation Citation Count (3-year CC): A time-restricted version of the citation count, where the time window length is fixed for all papers and depends on the publication date of the paper, i.e., only citations within 3 years after each paper’s publication are counted. This measure can be seen as an indicator of a paper's "impulse", i.e., its initial momentum directly after its publication.
    • PageRank score: A citation-based measure reflecting the "influence" (i.e., the total impact) of an article. It is based on the PageRank network analysis method. In the context of citation networks, PageRank estimates the importance of each article based on its centrality in the whole network.
    • RAM score: A citation-based measure reflecting the "popularity" (i.e., the current impact) of an article. It is based on the RAM method and is essentially a citation count where recent citations are considered more important. This type of “time awareness” alleviates problems of methods like PageRank, which are biased against recently published articles (new articles need time to receive a “sufficient” number of citations). Hence, RAM is more suitable for capturing the current “hype” of an article.
    • AttRank score: A citation-network-analysis-based measure reflecting the "popularity" (i.e., the current impact) of an article. It is based on the AttRank method. AttRank alleviates PageRank’s bias against recently published papers by incorporating an attention-based mechanism, akin to a time-restricted version of preferential attachment, to explicitly capture a researcher’s preference to read papers which received a lot of attention recently. This is why it is more suitable for capturing the current “hype” of an article.

    More details about the aforementioned impact measures, the way they are calculated, and their interpretation can be found here. For version 5.1 onwards, the impact measures are calculated at two levels:
    • The DOI level (assuming that each DOI corresponds to a distinct scientific article).
    • The OpenAIRE-id level (leveraging DOI synonyms based on OpenAIRE's deduplication algorithm - each distinct article has its own OpenAIRE id).

    Previous versions of the dataset only provided the scores at the DOI level. For each calculation level (DOI / OpenAIRE-id) we provide five (5) compressed CSV files (one for each measure/score provided) where each line follows the format “identifier
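    As a conceptual illustration of the PageRank-based "influence" measure described above, here is a tiny invented citation graph scored with networkx (the BIP! DB scores themselves are precomputed over the full ~132M-DOI citation network):

    ```python
    # Edges point from the citing article to the cited article.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("paperB", "paperA"),   # B cites A
        ("paperC", "paperA"),
        ("paperC", "paperB"),
        ("paperD", "paperC"),
    ])

    scores = nx.pagerank(G, alpha=0.85)
    for doi, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(doi, round(score, 3))   # the heavily cited paperA ranks highest
    ```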

  16. DEA Cross-Efficiency Method for Ranking Units Based on Competitive vision

    • ieee-dataport.org
    Updated Mar 19, 2019
    Cite
    Lei Xie (2019). DEA Cross-Efficiency Method for Ranking Units Based on Competitive vision [Dataset]. https://ieee-dataport.org/documents/dea-cross-efficiency-method-ranking-units-based-competitive-vision
    Explore at:
    Dataset updated
    Mar 19, 2019
    Authors
    Lei Xie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The paper constructs a relationship matrix among DMUs (decision-making units) to reflect their relationships, based on competitive interval and competitive vision.

  17. University world ranking (2108_2019)

    • kaggle.com
    Updated Dec 3, 2023
    Cite
    Ahmad Rafiee (2023). University world ranking (2108_2019) [Dataset]. https://www.kaggle.com/datasets/ahmadrafiee/university-world-ranking
    Explore at:
    Croissant, a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Dec 3, 2023
    Dataset provided by
    Kaggle
    Authors
    Ahmad Rafiee
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This data was collected by CWUR: Since 2012, CWUR has been publishing the only academic ranking of global universities that assesses the quality of education, employability, quality of faculty, and research without relying on surveys and university data submissions. The ranking started out as a project in Jeddah, Saudi Arabia, with the aim of rating the top 100 world universities.

    Columns description:

    World Rank: Ranking in the world

    Institution: Name of the university

    Location: Country

    National Rank: Ranking within its country

    Quality of Education: University's alumni who have won major awards (12.5%)

    Alumni Employment: Average number (per year) of a university's alumni who have held CEO positions (12.5%)

    Quality of Faculty: Faculty members of an institution who have won awards (25%)

    Research Output: Measured by the total number of research papers (10%)

    Quality Publications: Measured by the total number of research papers appearing in top-tier journals (10%)

    Influence: Measured by the total number of research papers appearing in highly influential journals (10%)

    Citations: Measured by the total number of highly cited research papers (10%)

    Score: Overall score
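    A minimal usage sketch, assuming the Kaggle CSV exposes the columns listed above under a hypothetical filename university_world_ranking.csv: derive each university's national rank from its world rank, grouped by Location, and compare it with the dataset's own National Rank column:

    ```python
    import pandas as pd

    # Hypothetical filename; substitute the actual CSV shipped with the dataset.
    df = pd.read_csv("university_world_ranking.csv")

    # National rank follows from ordering universities by World Rank within each country.
    df["derived_national_rank"] = (
        df.groupby("Location")["World Rank"].rank(method="first").astype(int)
    )
    print(df[["Institution", "Location", "World Rank",
              "National Rank", "derived_national_rank"]].head())
    ```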

  18. Leading Chinese paper and packaging companies on the Fortune China 500...

    • statista.com
    • ai-chatbox.pro
    Updated Jul 10, 2025
    Cite
    Statista (2025). Leading Chinese paper and packaging companies on the Fortune China 500 ranking 2024 [Dataset]. https://www.statista.com/statistics/455083/china-fortune-500-leading-paper-and-packaging-companies/
    Explore at:
    Dataset updated
    Jul 10, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2023
    Area covered
    China
    Description

    The ranking of leading Chinese paper, printing, and packaging companies on the Fortune China 500 is based on total revenues in 2023 and was released in *********. Nine Dragons Paper, a paper manufacturing company in China, ranked first in this sector with an annual revenue of about **** billion U.S. dollars in 2023.

  19. Sold Production of Cigarette Paper

    • nationmaster.com
    Updated Jan 9, 2020
    Cite
    NationMaster (2020). Sold Production of Cigarette Paper [Dataset]. https://www.nationmaster.com/nmx/ranking/sold-production-of-cigarette-paper
    Explore at:
    Dataset updated
    Jan 9, 2020
    Dataset authored and provided by
    NationMaster
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    1995 - 2019
    Area covered
    Italy, Poland, United Kingdom, Portugal, Bulgaria, Hungary, Czechia, Serbia, Ireland, Denmark
    Description

    Italy Sold Production of Cigarette Paper was up 8.3% in 2019, compared to the previous year.

  20. Ranking data of the candidates indicated in the ballots | gimi9.com

    • gimi9.com
    Updated Dec 13, 2024
    Cite
    (2024). Ranking data of the candidates indicated in the ballots | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_https-data-gov-lt-datasets-2014-/
    Explore at:
    Dataset updated
    Dec 13, 2024
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The information system of the Central Electoral Commission collects information on the initial ranking (preference) data indicated on ballot papers. When determining the preference votes received by candidates, some of the data are not counted (a duplicate candidate number on the same ballot paper, numbers of non-candidates, numbers of excluded candidates). The dataset comprises the election round identification number and title, the constituency and district identification numbers, the identification number of the candidate list (nominated in a multi-member constituency), the number of the candidate list, the name, the legal person number (in the Register of Legal Entities), the serial number of the ballot paper, and the candidate's election number indicated in a specific box (1 to 5) on the ballot. One line in the set corresponds to the information of one ballot paper. Contact atverimas@stat.gov.lt for technical questions or possible errors.
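    An illustrative sketch of the counting rule described above, with invented ballots and candidate numbers: a mark is counted only if it appears once on the ballot and belongs to an actual candidate on the list:

    ```python
    from collections import Counter

    valid_candidates = {1, 2, 3, 4, 5, 6}          # numbers on the candidate list (invented)
    ballots = [
        [3, 1, 5, 2, 4],     # all five preference boxes filled validly
        [3, 3, 9, 2, None],  # duplicate "3" and non-candidate "9" are discarded
    ]

    preference_votes = Counter()
    for ballot in ballots:
        marks = [m for m in ballot if m is not None]
        for number in marks:
            if marks.count(number) == 1 and number in valid_candidates:
                preference_votes[number] += 1

    print(preference_votes)   # duplicates and invalid numbers are not counted
    ```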
