https://creativecommons.org/publicdomain/zero/1.0/
An academic journal or research journal is a periodical publication in which research articles relating to a particular academic discipline are published, according to Wikipedia. Currently, more than 25,000 peer-reviewed journals are indexed in citation databases such as Scopus and Web of Science. Journals in these indexes are ranked on the basis of various metrics, such as CiteScore and the h-index, which are calculated from a journal's yearly citation data. Considerable effort goes into designing a metric that reflects a journal's quality.
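As a small illustration of how one such metric is computed from citation data, here is a minimal sketch of the h-index calculation from a list of per-article citation counts (the input values are made up for the example):

```python
def h_index(citation_counts):
    """Largest h such that at least h articles each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five articles with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```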
This is a comprehensive dataset on academic journals covering their metadata as well as citation, metric, and ranking information. Detailed data on each journal's subject areas is also included. The dataset is collected from the following indexing databases:
- Scimago Journal Ranking
- Scopus
- Web of Science Master Journal List
The data was collected by web scraping and then cleaned; details can be found HERE.
The rest of the features provide further details on the journal's subject area or category (a small lookup sketch for the ASJC codes follows the list):
- Life Sciences: Top-level subject area.
- Social Sciences: Top-level subject area.
- Physical Sciences: Top-level subject area.
- Health Sciences: Top-level subject area.
- 1000 General: ASJC main category.
- 1100 Agricultural and Biological Sciences: ASJC main category.
- 1200 Arts and Humanities: ASJC main category.
- 1300 Biochemistry, Genetics and Molecular Biology: ASJC main category.
- 1400 Business, Management and Accounting: ASJC main category.
- 1500 Chemical Engineering: ASJC main category.
- 1600 Chemistry: ASJC main category.
- 1700 Computer Science: ASJC main category.
- 1800 Decision Sciences: ASJC main category.
- 1900 Earth and Planetary Sciences: ASJC main category.
- 2000 Economics, Econometrics and Finance: ASJC main category.
- 2100 Energy: ASJC main category.
- 2200 Engineering: ASJC main category.
- 2300 Environmental Science: ASJC main category.
- 2400 Immunology and Microbiology: ASJC main category.
- 2500 Materials Science: ASJC main category.
- 2600 Mathematics: ASJC main category.
- 2700 Medicine: ASJC main category.
- 2800 Neuroscience: ASJC main category.
- 2900 Nursing: ASJC main category.
- 3000 Pharmacology, Toxicology and Pharmaceutics: ASJC main category.
- 3100 Physics and Astronomy: ASJC main category.
- 3200 Psychology: ASJC main category.
- 3300 Social Sciences: ASJC main category.
- 3400 Veterinary: ASJC main category.
- 3500 Dentistry: ASJC main category.
- 3600 Health Professions: ASJC main category.
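As referenced above, here is a small, hypothetical lookup sketch for the ASJC main-category codes listed in this description (the function name is illustrative; the dataset itself encodes these categories as separate columns):

```python
# ASJC main-category codes and labels, taken from the list above.
ASJC_MAIN_CATEGORIES = {
    1000: "General",
    1100: "Agricultural and Biological Sciences",
    1200: "Arts and Humanities",
    1300: "Biochemistry, Genetics and Molecular Biology",
    1400: "Business, Management and Accounting",
    1500: "Chemical Engineering",
    1600: "Chemistry",
    1700: "Computer Science",
    1800: "Decision Sciences",
    1900: "Earth and Planetary Sciences",
    2000: "Economics, Econometrics and Finance",
    2100: "Energy",
    2200: "Engineering",
    2300: "Environmental Science",
    2400: "Immunology and Microbiology",
    2500: "Materials Science",
    2600: "Mathematics",
    2700: "Medicine",
    2800: "Neuroscience",
    2900: "Nursing",
    3000: "Pharmacology, Toxicology and Pharmaceutics",
    3100: "Physics and Astronomy",
    3200: "Psychology",
    3300: "Social Sciences",
    3400: "Veterinary",
    3500: "Dentistry",
    3600: "Health Professions",
}

def asjc_label(code: int) -> str:
    """Map a 4-digit ASJC code to its main-category label, e.g. 1712 -> Computer Science."""
    return ASJC_MAIN_CATEGORIES[(code // 100) * 100]

print(asjc_label(2746))  # -> Medicine
```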
The turnover of Essity Norway AS amounted to over *** million Norwegian kroner. Rygene-Smith & Thommesen AS ranked second in Norway at the beginning of 2025, with a turnover of almost *** million Norwegian kroner.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The dataset originally comes from the UCI Machine Learning Repository. The multiclass datasets were transformed into binary classification problems as described in the paper. Ranking methods were applied to address class imbalance. The datasets are divided into 30 folds so that other class-imbalance methods can be compared against the methods in the paper. The code used in the paper is also provided.
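The exact fold-construction procedure is defined by the paper and the provided code; purely as an illustrative sketch (not the authors' exact setup), 30 stratified train/test splits could be generated with scikit-learn like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedShuffleSplit

# Illustrative stand-in for one of the UCI binary-classification datasets.
X, y = load_breast_cancer(return_X_y=True)

# 30 repeated stratified splits, so imbalance methods can be compared fold by fold.
splitter = StratifiedShuffleSplit(n_splits=30, test_size=0.3, random_state=0)
for fold, (train_idx, test_idx) in enumerate(splitter.split(X, y)):
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    # ... fit an imbalance-aware model here and record per-fold metrics ...
```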
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman's ρ correlation between the two sets of rankings. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
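The κ-resampling procedure itself is specified in the paper; the sketch below only illustrates the general idea of a composite rank with a resampling-based uncertainty window and a Spearman comparison (all metric values and the survey ranking are made up, and the bootstrap over indices is a crude stand-in for the paper's method):

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(42)

# Made-up values for 6 journals (rows) across 5 citation indices (columns):
# Impact Factor, Immediacy Index, SNIP, SJR, Google 5-year h-index.
metrics = np.array([
    [12.0, 2.5, 3.1, 5.0, 150],
    [ 8.0, 1.9, 2.4, 3.2, 110],
    [ 6.5, 1.2, 2.0, 2.8,  95],
    [ 4.0, 0.8, 1.5, 1.9,  60],
    [ 3.5, 0.7, 1.4, 1.6,  55],
    [ 1.2, 0.3, 0.9, 0.5,  20],
])

# Per-metric ranks (rank 1 = best), then the composite = mean rank per journal.
per_metric_ranks = np.column_stack([rankdata(-metrics[:, j]) for j in range(5)])
composite = per_metric_ranks.mean(axis=1)

# Resample which indices enter the composite to get an uncertainty window.
boot = np.array([
    per_metric_ranks[:, rng.integers(0, 5, size=5)].mean(axis=1)
    for _ in range(1000)
])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)

# Compare the composite ranking to a (made-up) survey-based ranking.
survey_ranking = np.array([1, 2, 4, 3, 5, 6])
rho, _ = spearmanr(composite, survey_ranking)
print(composite, lower, upper, rho, sep="\n")
```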
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Italy Paper and Paper Products Output grew 0.8% in 2019, compared to a year earlier.
https://www.apache.org/licenses/LICENSE-2.0.html
This is the data repository for the paper Anytime Ranking on Document-Ordered Indexes by Joel Mackenzie, Matthias Petri, and Alistair Moffat. This paper appeared in ACM TOIS in 2021.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The document consists of a Recommendation paper and an administrative response from the Universities of the Netherlands. Short summary: Universities of the Netherlands (UNL) has asked our expert group for an opinion on issues associated with university rankings in relation to Recognition & Rewards (R&R), a national programme in the Netherlands that aims to more broadly recognise and reward the work of academic staff (for more details on this initiative, see the position paper ‘Room for everyone’s talent’). We were also asked to propose solutions to these issues. As an expert group, we focus mainly on so-called league tables in the present opinion. These are one-dimensional university rankings that claim to reflect the overall performance of a university. Our opinion shows that league tables are unjustified in claiming to be able to sum up a university’s performance in the broadest sense in a single score. There is no universally accepted criterion for quantifying a university’s overall performance, and a generic weighing tool cannot do justice to a university’s strategic choice to excel in specific areas. Research, education, and impact achievements cannot be meaningfully combined to produce a one-dimensional overall score. Any attempt to do so will run into arbitrary and debatable decisions about how performance in these three core tasks should be weighted. Is research more important than education? Or is it the other way around? When a weighting system is applied that emphasises one of those core tasks, universities that excel in a different task are disadvantaged.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the novelty ranking data for the paper Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders. The dataset is based on the Rank-DistiLLM dataset (Paper, Dataset).
The two run files contain the top 100 retrieved passages for 10k queries from the MS MARCO passage training dataset, re-ranked by the RankZephyr model for BM25 and ColBERTv2, respectively. The central difference to the Rank-DistiLLM dataset files is that the run files in this dataset are additionally grouped by lexical similarity. All passages within a ranking are clustered according to their Jaccard similarity: any passages with a Jaccard similarity > 0.5 are grouped into a cluster and share the same id in the second column (the Q0 column in the standard TREC run format).
The two qrel files are equivalent to the official TREC Deep Learning 2019 and 2020 qrels, but are also clustered by their lexical similarity (following the strategy outlined above) to enable evaluation of novelty-aware models.
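A minimal sketch of the grouping described above, using token-set Jaccard similarity with the 0.5 threshold (the whitespace tokenisation and single-linkage union-find are illustrative choices, not necessarily the exact procedure used to build the files):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_passages(passages: dict, threshold: float = 0.5) -> dict:
    """Group passages whose token-set Jaccard similarity exceeds the threshold.

    Returns a mapping passage_id -> cluster_id (single-linkage via union-find).
    """
    tokens = {pid: set(text.lower().split()) for pid, text in passages.items()}
    parent = {pid: pid for pid in passages}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(passages, 2):
        if jaccard(tokens[a], tokens[b]) > threshold:
            parent[find(a)] = find(b)

    return {pid: find(pid) for pid in passages}

# Example: the first two passages share most tokens and end up in one cluster.
runs = {
    "p1": "the cat sat on the mat",
    "p2": "the cat sat on a mat",
    "p3": "ranking models for passage retrieval",
}
print(cluster_passages(runs))
```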
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
China Pulp for Paper Production jumped by 5.5% in 2019, compared to a year earlier.
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Dataset Card for Argument-Quality-Ranking-30k Dataset
Dataset Summary
Argument Quality Ranking
The dataset contains 30,497 crowd-sourced arguments for 71 debatable topics labeled for quality and stance, split into train, validation and test sets. The dataset was originally published as part of our paper: A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis.
Argument Topic
This subset contains 9,487 of the arguments only with… See the full description on the dataset page: https://huggingface.co/datasets/ibm-research/argument_quality_ranking_30k.
The Center for World University Rankings (CWUR) publishes the only academic ranking of global universities that assesses the quality of education, alumni employment, quality of faculty, and research performance without relying on surveys and university data submissions.
CWUR uses seven objective and robust indicators, grouped into four areas, to rank the world’s universities (a short weighted-sum sketch follows the list):
- Education: based on the academic success of a university’s alumni, measured by the number of a university's alumni who have won prestigious academic distinctions relative to the university's size (25%)
- Employability: based on the professional success of a university’s alumni, measured by the number of a university's alumni who have held top positions at major companies relative to the university's size (25%)
- Faculty: measured by the number of faculty members who have won prestigious academic distinctions (10%)
- Research:
  i) Research Output: measured by the total number of research papers (10%)
  ii) High-Quality Publications: measured by the number of research papers appearing in top-tier journals (10%)
  iii) Influence: measured by the number of research papers appearing in highly-influential journals (10%)
  iv) Citations: measured by the number of highly-cited research papers (10%)
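As a rough sketch of how such weights combine into an overall score (the indicator values below are made up and the normalisation is not CWUR's actual procedure):

```python
# CWUR indicator weights from the description above (fractions of the total score).
WEIGHTS = {
    "education": 0.25,
    "employability": 0.25,
    "faculty": 0.10,
    "research_output": 0.10,
    "high_quality_publications": 0.10,
    "influence": 0.10,
    "citations": 0.10,
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of normalised indicator scores (0-100 each)."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

# Made-up normalised scores for one university.
example = {
    "education": 90.0,
    "employability": 85.0,
    "faculty": 70.0,
    "research_output": 95.0,
    "high_quality_publications": 92.0,
    "influence": 88.0,
    "citations": 91.0,
}
print(round(composite_score(example), 2))
```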
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks annual overall school rank from 2011 to 2022 for Paper Mill
As of June 2022, Finland-based Stora Enso employed approximately ****** people. UPM Kymmene, which is also based in Finland, employed roughly ****** people. Both of these companies are amongst the world's leading forestry and paper companies.
This dataset contains impact measures (metrics/indicators) for ~132M distinct DOIs that correspond to scientific articles. In particular, for each article we have calculated the following measures:
- Citation count: the total number of citations, reflecting the "influence" (i.e., the total impact) of an article.
- Incubation citation count (3-year CC): a time-restricted version of the citation count, where the time window length is fixed for all papers and depends on the publication date of the paper, i.e., only citations within 3 years after each paper's publication are counted. This measure can be seen as an indicator of a paper's "impulse", i.e., its initial momentum directly after its publication.
- PageRank score: a citation-based measure reflecting the "influence" (i.e., the total impact) of an article. It is based on the PageRank [1] network analysis method. In the context of citation networks, PageRank estimates the importance of each article based on its centrality in the whole network.
- RAM score: a citation-based measure reflecting the "popularity" (i.e., the current impact) of an article. It is based on the RAM [2] method and is essentially a citation count where recent citations are considered more important. This type of "time awareness" alleviates problems of methods like PageRank, which are biased against recently published articles (new articles need time to receive a "sufficient" number of citations). Hence, RAM is more suitable for capturing the current "hype" of an article.
- AttRank score: a citation-network-analysis-based measure reflecting the "popularity" (i.e., the current impact) of an article. It is based on the AttRank [3] method. AttRank alleviates PageRank's bias against recently published papers by incorporating an attention-based mechanism, akin to a time-restricted version of preferential attachment, to explicitly capture a researcher's preference to read papers that received a lot of attention recently. This is why it is more suitable for capturing the current "hype" of an article.
More details about the aforementioned impact measures, the way they are calculated, and their interpretation can be found here. (A toy sketch of two of these measures follows this description.)
From version 5.1 onwards, the impact measures are calculated at two levels:
- The DOI level (assuming that each DOI corresponds to a distinct scientific article).
- The OpenAIRE-id level (leveraging DOI synonyms based on OpenAIRE's deduplication algorithm [4]; each distinct article has its own OpenAIRE id).
Previous versions of the dataset only provided the scores at the DOI level. For each calculation level (DOI / OpenAIRE-id) we provide five (5) compressed CSV files (one for each measure/score provided) where each line follows the format "identifier
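As referenced above, here is a toy sketch of two of these measures on a tiny made-up citation graph: the plain citation count and PageRank via networkx, plus a recency-weighted count in the spirit of RAM (the decay form is an illustrative assumption, not the exact RAM formula):

```python
import networkx as nx

# Toy citation graph: an edge (a, b) means paper a cites paper b.
# Papers and citation years are made up for illustration.
edges = [("p2", "p1"), ("p3", "p1"), ("p3", "p2"), ("p4", "p2"), ("p4", "p3")]
cite_year = {("p2", "p1"): 2018, ("p3", "p1"): 2022, ("p3", "p2"): 2022,
             ("p4", "p2"): 2023, ("p4", "p3"): 2023}

G = nx.DiGraph(edges)

# Citation count ("influence"): number of incoming citation edges.
citation_count = dict(G.in_degree())

# PageRank ("influence"): centrality of each paper in the citation network.
pagerank = nx.pagerank(G, alpha=0.85)

# A RAM-like recency-weighted count ("popularity"): recent citations weigh more.
CURRENT_YEAR, GAMMA = 2024, 0.6
ram_like = {node: 0.0 for node in G}
for (src, dst), year in cite_year.items():
    ram_like[dst] += GAMMA ** (CURRENT_YEAR - year)

print(citation_count, pagerank, ram_like, sep="\n")
```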
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The paper constructs a relationship matrix among DMUs (decision-making units), reflecting the relationships among them based on competitive interval and competitive vision.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This data was collected by CWUR. Since 2012, CWUR has been publishing the only academic ranking of global universities that assesses the quality of education, employability, quality of faculty, and research without relying on surveys and university data submissions. The ranking started out as a project in Jeddah, Saudi Arabia, with the aim of rating the top 100 world universities.
Column descriptions:
World Rank: ranking in the world
Institution: name of the university
Location: country
National Rank: ranking within its country
Quality of Education: number of a university's alumni who have won major awards (12.5%)
Alumni Employment: average number (per year) of a university's alumni who have held CEO positions (12.5%)
Quality of Faculty: number of faculty members of an institution who have won awards (25%)
Research Output: measured by the total number of research papers (10%)
Quality Publications: measured by the total number of research papers appearing in top-tier journals (10%)
Influence: measured by the total number of research papers appearing in highly influential journals (10%)
Citations: measured by the total number of highly cited research papers (10%)
Score: overall score
The ranking of leading Chinese paper, printing, and packaging companies on the Fortune China 500 is based on total revenues in 2023 and was released in *********. Nine Dragons Paper, a paper manufacturing company in China, ranked first in this sector with an annual revenue of about **** billion U.S. dollars in 2023.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Italy Sold Production of Cigarette Paper was up 8.3% in 2019, compared to the previous year.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The information system of the Central Electoral Commission collects information on the preference rankings indicated on ballot papers. When determining the preference votes received by candidates, some of the data are not counted (a duplicate candidate number on the same ballot paper, numbers of non-candidates, and numbers of excluded candidates). The dataset comprises the election round identification number and title, the constituency and district identification numbers of the election round, the identification number of the candidate list (nominated in a multi-member constituency), the number of the candidate list, its name, the legal entity number (in the Register of Legal Entities), the serial number of the ballot paper, and the candidate election numbers indicated in the specific ballot positions (1 to 5). Each line in the dataset corresponds to the information of one ballot paper. Contact atverimas@stat.gov.lt for technical questions or possible errors.