Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Citation metrics are widely used and misused. We have created a publicly available database of top-cited scientists that provides standardized information on citations, h-index, co-authorship adjusted hm-index, citations to papers in different authorship positions, and a composite indicator (c-score). Data are shown separately for career-long impact and for single recent year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given. Scientists are classified into 22 scientific fields and 174 sub-fields according to the standard Science-Metrix classification. Field- and subfield-specific percentiles are also provided for all scientists with at least 5 papers. Career-long data are updated to end-of-2022, and single recent year data pertain to citations received during calendar year 2022. The selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field. This version (6) is based on the October 1, 2023 snapshot from Scopus, updated to the end of citation year 2022. This work uses Scopus data provided by Elsevier through ICSR Lab (https://www.elsevier.com/icsr/icsrlab). Calculations were performed using all Scopus author profiles as of October 1, 2023. If an author is not on the list, it is simply because the composite indicator value was not high enough to appear on the list. It does not mean that the author does not do good work.
PLEASE ALSO NOTE THAT THE DATABASE HAS BEEN PUBLISHED IN AN ARCHIVAL FORM AND WILL NOT BE CHANGED. The published version reflects Scopus author profiles at the time of calculation. We thus advise authors to ensure that their Scopus profiles are accurate. REQUESTS FOR CORRECTIONS OF THE SCOPUS DATA (INCLUDING CORRECTIONS IN AFFILIATIONS) SHOULD NOT BE SENT TO US. They should be sent directly to Scopus, preferably by use of the Scopus to ORCID feedback wizard (https://orcid.scopusfeedback.com/), so that the correct data can be used in any future annual updates of the citation indicator databases.
The c-score focuses on impact (citations) rather than productivity (number of publications), and it also incorporates information on co-authorship and author positions (single, first, last author). If you have additional questions, please read the three associated PLoS Biology papers that explain the development, validation and use of these metrics and databases (https://doi.org/10.1371/journal.pbio.1002501, https://doi.org/10.1371/journal.pbio.3000384 and https://doi.org/10.1371/journal.pbio.3000918).
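For context, the co-authorship adjusted hm-index mentioned above follows Schreiber's fractional-counting scheme: each paper contributes 1/(number of authors) toward the citation threshold rather than a full count. Below is a minimal, illustrative Python sketch of that conventional definition; the function and variable names are our own, and the database's exact computation is specified in the PLoS Biology papers linked above.

```python
def hm_index(papers):
    """Schreiber's hm-index. papers: list of (citations, n_authors) pairs.
    Papers are ranked by citations; each contributes 1/n_authors to an
    effective rank, and hm is the largest effective rank r such that the
    paper reaching r still has at least r citations."""
    hm = effective_rank = 0.0
    for citations, n_authors in sorted(papers, reverse=True):
        effective_rank += 1.0 / n_authors  # fractional paper count
        if citations >= effective_rank:
            hm = effective_rank
        else:
            break
    return hm

# Example: (citations, number of authors) for three papers
print(hm_index([(10, 2), (8, 4), (3, 1)]))  # -> 1.75
```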
Finally, we alert users that all citation metrics have limitations and their use should be tempered and judicious. For more reading, we refer to the Leiden manifesto: https://www.nature.com/articles/520429a
An abstract and indexing database with full-text links, produced by Elsevier. It combines an expertly curated abstract and citation database with enriched data and linked scholarly literature across a wide variety of disciplines.
💬 Also have a look at
💡 Universities & Research Institutions Rank - SCImagoIR
💡 Scientific Journals Indicators & Info - SCImagoJR
The entire dataset is obtained from the public, open-access data of SCImagoJR (SCImago Journal & Country Rank).
ScimagoJR Country Rank
SCImagoJR About Us
Documents: Number of documents published during the selected year. This is usually called the country's scientific output.
Citable Documents: Citable documents from the selected year. Only articles, reviews and conference papers are considered.
Citations: Number of citations received by the documents published during the source year, i.e. citations in years X, X+1, X+2, X+3... to documents published during year X. For the period 1996-2021, all documents published during this period are considered.
Citations per Document: Average citations per document published during the source year, i.e. citations in years X, X+1, X+2, X+3... to documents published during year X. For the period 1996-2021, all documents published during this period are considered.
Self Citations: Country self-citations. Number of self-citations of all dates received by the documents published during the source year, i.e. self-citations in years X, X+1, X+2, X+3... to documents published during year X. For the period 1996-2021, all documents published during this period are considered.
H index: A country's h-index is the number of its articles (h) that have each received at least h citations. It quantifies both a country's scientific productivity and its scientific impact, and it is equally applicable to scientists, journals, etc.
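To make the h-index definition concrete, here is a small illustrative Python sketch (our own code, not SCImago's): rank documents by citations in descending order and take the largest rank h at which the h-th document still has at least h citations.

```python
def h_index(citations):
    """citations: iterable of per-document citation counts."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:  # the rank-th most-cited document has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three documents with at least 3 citations
```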
Academic journal indicators developed from the information contained in the Scopus database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Scopus coverage updates (2011-2018). Data aggregated from Elsevier title list files; each record provides:
- Source ID in Scopus
- Title
- ISSN
- ESSN
- Year of the title list file used as source
- SNIP
- Open access status
- Status in the database (added, previously indexed)
- Field
- Subfield
- ASJC code
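As an illustration of how such a title list might be loaded for analysis, here is a minimal Python sketch; the file name and column labels are hypothetical stand-ins for whatever the distributed files actually use.

```python
import pandas as pd

# Hypothetical file and column names; adjust to match the actual title list files.
df = pd.read_csv("scopus_title_list_2018.csv")

# Example: count added vs. previously indexed sources per field
summary = df.groupby(["Field", "Status"]).size().unstack(fill_value=0)
print(summary)
```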
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This file contains Memento information for all URIs from the Elsevier dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This bar chart displays books by publication date, aggregated by count. The data is filtered to books whose publisher is Elsevier Academic.
Traffic analytics, rankings, and competitive metrics for elsevier.com as of October 2025.
Elsevier Limited export-import data. Follow the Eximpedia platform for HS codes, importer-exporter records, and customs shipment details.
On February 28, 2019, the University of California (UC) announced that it would not renew its subscriptions to Elsevier journals. UC is a public research university system in California, USA, with 10 campuses across the state.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data includes 2021 Elsevier title-level pricing for seven universities:
- Florida State University
- Iowa State University
- University of North Carolina, Chapel Hill
- West Virginia University
- Purdue University
- University of Virginia
- an anonymous university named “Institution A”
In addition, the data includes a summary analysis that lists, for each university, the 2021 Published List Price, Adjustment from List Price, Average Cost per Journal, and number of Subscribed Titles. This data will be of interest to anyone examining title-level pricing from a major commercial publisher.
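As a simple illustration of the kind of summary computation described above, the sketch below derives an average cost per journal and an adjustment from list price from hypothetical totals; the column names and the derivations are our assumptions, not the dataset's documented schema.

```python
import pandas as pd

# Hypothetical example row; real values come from the dataset itself, and the
# derivations below are assumptions about how the summary fields relate.
df = pd.DataFrame({
    "university": ["Example University"],
    "published_list_price": [1_000_000.00],  # sum of 2021 list prices
    "amount_paid": [650_000.00],             # assumed negotiated total
    "subscribed_titles": [1200],
})
df["adjustment_from_list"] = df["published_list_price"] - df["amount_paid"]
df["avg_cost_per_journal"] = df["amount_paid"] / df["subscribed_titles"]
print(df[["university", "adjustment_from_list", "avg_cost_per_journal"]])
```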
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To generate the bibliographic and survey data supporting a data reuse study conducted by several Library faculty and accepted for publication in the Journal of Academic Librarianship, the project team utilized a series of web-based scripts that employed several different endpoints of the Scopus API. The related dataset, "Data for: An Examination of Data Reuse Practices within Highly Cited Articles of Faculty at a Research University", contains the survey design and results.
1) getScopus_API_process_dmp_IDB.asp: used the search API to query the Scopus database for papers by UIUC authors published in 2015 -- limited to one of 9 pre-defined Scopus subject areas -- and to retrieve metadata results sorted from highest to lowest by the number of times the retrieved articles were cited. The URL for the basic searches took the following form: https://api.elsevier.com/content/search/scopus?query=(AFFIL%28(urbana%20OR%20champaign) AND univ*%29) OR (AF-ID(60000745) OR AF-ID(60005290))&apikey=xxxxxx&start=" & nstart & "&count=25&date=2015&view=COMPLETE&sort=citedby-count&subj=PHYS
Here, the variable nstart was incremented by 25 on each iteration, and 25 records were retrieved in each pass (a Python sketch of this pagination pattern appears after this list). The subject area parameter was changed (e.g. from PHYS to COMP for computer science) in each of the 9 runs. This script does not use the Scopus API cursor but downloads 25 records at a time for up to 28 passes -- or 675 maximum bibliographic records. The project team felt that looking at the 675 most-cited articles from UIUC faculty in each of the 9 subject areas was sufficient to gather a robust, representative sample of articles from 2015. These downloaded records were stored in a temporary table that was renamed for each of the 9 subject areas.
2) get_citing_from_surveys_IDB.asp: takes a Scopus article ID (EID) from each of the 49 surveys returned by UIUC authors and retrieves short citing-article references, 200 at a time, into a temporary composite table. These citing records contain only one author, no author affiliations, and no author email addresses. This script uses the Scopus API cursor=* feature and is able to download all the citing references of an article, 200 records at a time.
3) put_in_all_authors_affil_IDB.asp: adds important data to the short citing records. The script adds all co-authors and their affiliations, the corresponding author, and author email addresses.
4) process_for_final_IDB.asp: creates a relational database table with author, title, and source journal information for each of the citing articles, which can be copied as an Excel file for processing by the Qualtrics survey software. This initially comprised 4,626 citing articles across the 49 UIUC-authored articles, but was reduced to 2,041 entries after checking for available email addresses and eliminating duplicates.
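The original scripts were classic ASP; as a rough modern illustration of the two pagination patterns they describe (start/count paging in script 1, cursor paging in script 2), here is a hedged Python sketch. The API key and query are placeholders, and the exact response shapes should be verified against Elsevier's Scopus Search API documentation rather than taken from this sketch.

```python
import requests

BASE = "https://api.elsevier.com/content/search/scopus"
API_KEY = "xxxxxx"  # placeholder, as in the original script
HEADERS = {"Accept": "application/json"}

def paged_search(query, passes=28, page_size=25):
    """Start/count paging, as in script 1: page_size records per pass."""
    records = []
    for i in range(passes):
        params = {"query": query, "apiKey": API_KEY, "date": "2015",
                  "start": i * page_size, "count": page_size,
                  "sort": "citedby-count"}
        entries = (requests.get(BASE, params=params, headers=HEADERS)
                   .json().get("search-results", {}).get("entry", []))
        if not entries:
            break
        records.extend(entries)
    return records

def cursor_search(query, page_size=200):
    """Cursor paging, as in script 2: walk all results with cursor=*."""
    records, cursor = [], "*"
    while cursor:
        params = {"query": query, "apiKey": API_KEY,
                  "cursor": cursor, "count": page_size}
        results = (requests.get(BASE, params=params, headers=HEADERS)
                   .json().get("search-results", {}))
        entries = results.get("entry", [])
        if not entries:
            break
        records.extend(entries)
        cursor = results.get("cursor", {}).get("@next")  # next-page token
    return records
```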
Citation metrics are widely used and misused. We have created a publicly available database of over 100,000 top scientists that provides standardized information on citations, h-index, co-authorship adjusted hm-index, citations to papers in different authorship positions, and a composite indicator. Data are shown separately for career-long and single year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given. Scientists are classified into 22 scientific fields and 176 sub-fields. Field- and subfield-specific percentiles are also provided for all scientists who have published at least 5 papers. Career-long data are updated to end-of-2020. The selection is based on the top 100,000 by c-score (with and without self-citations) or a percentile rank of 2% or above.
The dataset and code provide an update to the previously released version 1 data at https://doi.org/10.17632/btchxktzyw.1. The version 2 dataset, based on the May 06, 2020 snapshot from Scopus and updated to citation year 2019, is available at https://doi.org/10.17632/btchxktzyw.2.
This version (3) is based on the Aug 01, 2021 snapshot from Scopus and is updated to citation year 2020.
Baas, Jeroen; Boyack, Kevin; Ioannidis, John P.A. (2021), “August 2021 data-update for "Updated science-wide author databases of standardized citation indicators"”, Mendeley Data, V3, doi: 10.17632/btchxktzyw.3
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a corpus of 40,001 open access (OA) CC-BY articles from across Elsevier's journals, representing the first cross-discipline research dataset at this scale to support NLP and ML research.
This dataset was released to support the development of ML and NLP models targeting science articles from across all research domains. While the release builds on other datasets designed for specific domains and tasks, it allows similar datasets to be derived and models to be developed and tested across domains.
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Background: This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.
Methods: Identifying the top 50 articles from a pool of 5940 in Scopus, the analysis spans November 2019 to July 2021, employing terms related to COVID-19 and complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors such as title, citations, publication year, journal, impact factor, authors, study details, and patient demographics.
Results: The focus is primarily on 2020 publications (96%), with all articles being open access. Leading journals include The Lancet, NEJM, and JAMA, with prominent contributions from Internal Medicine (46.9%) and Pulmonary Medicine (14.5%). China played a major role (34.9%), followed by France and Belgium. Clinical features were the primary study topic (68%), often utilizing retrospective designs (24%). Among 22,477 patients analyzed, 54.8% were male, with the most common age group being 26–65 years (63.2%). Complications affected 13.9% of patients, with a recovery rate of 57.8%.
Conclusion: Analyzing these top-cited articles offers clinicians and researchers a comprehensive, timely understanding of influential COVID-19 literature. This approach uncovers attributes contributing to high citations and provides authors with valuable insights for crafting impactful research. As a strategic tool, this analysis facilitates staying updated and making meaningful contributions to the dynamic field of COVID-19 research.
Methods: A bibliometric analysis of the most-cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier's Scopus and Thomson Reuters' Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical speciality, or electronic availability of articles or abstracts.
The terms were combined as follows: (“COVID-19” OR “COVID19” OR “SARS-COV-2” OR “SARSCOV2” OR “SARS 2” OR “Novel coronavirus” OR “2019-nCov” OR “Coronavirus”) AND (“Complication” OR “Long Term Complication” OR “Post-Intensive Care Syndrome” OR “Venous Thromboembolism” OR “Acute Kidney Injury” OR “Acute Liver Injury” OR “Post COVID-19 Syndrome” OR “Acute Cardiac Injury” OR “Cardiac Arrest” OR “Stroke” OR “Embolism” OR “Septic Shock” OR “Disseminated Intravascular Coagulation” OR “Secondary Infection” OR “Blood Clots” OR “Cytokine Release Syndrome” OR “Paediatric Inflammatory Multisystem Syndrome” OR “Vaccine Induced Thrombosis with Thrombocytopenia Syndrome” OR “Aspergillosis” OR “Mucormycosis” OR “Autoimmune Thrombocytopenia Anaemia” OR “Immune Thrombocytopenia” OR “Subacute Thyroiditis” OR “Acute Respiratory Failure” OR “Acute Respiratory Distress Syndrome” OR “Pneumonia” OR “Subcutaneous Emphysema” OR “Pneumothorax” OR “Pneumomediastinum” OR “Encephalopathy” OR “Pancreatitis” OR “Chronic Fatigue” OR “Rhabdomyolysis” OR “Neurologic Complication” OR “Cardiovascular Complications” OR “Psychiatric Complication” OR “Respiratory Complication” OR “Cardiac Complication” OR “Vascular Complication” OR “Renal Complication” OR “Gastrointestinal Complication” OR “Haematological Complication” OR “Hepatobiliary Complication” OR “Musculoskeletal Complication” OR “Genitourinary Complication” OR “Otorhinolaryngology Complication” OR “Dermatological Complication” OR “Paediatric Complication” OR “Geriatric Complication” OR “Pregnancy Complication”) in the Title, Abstract or Keywords. A total of 5940 articles were accessed, of which the top 50 most-cited articles about COVID-19 and complications of COVID-19 were selected through Scopus. Each article was reviewed for its appropriateness for inclusion. The articles were independently reviewed by three researchers (JRP, MAM and TS) (Table 1). Differences in opinion with regard to article inclusion were resolved by consensus. The inclusion criteria specified articles that focused on COVID-19 and complications of COVID-19. Articles were excluded if they did not relate to COVID-19 and/or its complications, were basic science research, or were studies using animal models or phantoms. Review articles, viewpoints, guidelines, perspectives and meta-analyses were also excluded from the top 50 most-cited articles (Table 1). The top 50 most-cited articles were compiled in a single database and the relevant data were extracted. The database included: Article Title, Scopus Citations, Year of Publication, Journal, Journal Impact Factor, Authors, Number of Authors, Department Affiliation, Number of Institutions, Country of Origin, Study Topic, Study Design, Sample Size, Open Access, Non-Original Articles, Patient/Participant Age, Gender, Symptoms, Signs, Co-morbidities, Complications, Imaging Modalities Used, and Outcome.
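A boolean query of this length is easier to audit when assembled programmatically. Below is a minimal, illustrative Python sketch of one way to build such a string for Scopus's TITLE-ABS-KEY field; the term lists are truncated here, and the exact field syntax should be checked against the Scopus search documentation rather than taken from this sketch.

```python
# Minimal sketch: assemble a boolean Scopus query from term lists.
covid_terms = ["COVID-19", "COVID19", "SARS-COV-2", "Novel coronavirus"]   # truncated
complication_terms = ["Complication", "Acute Kidney Injury", "Stroke"]     # truncated

def or_group(terms):
    """Join quoted terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"TITLE-ABS-KEY({or_group(covid_terms)} AND {or_group(complication_terms)})"
print(query)
```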
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This FAIRsharing record describes: The Information for Authors page contains a broad range of guidelines for publishing in The Lancet. With regard to data deposition, novel gene sequences should be deposited in a public database (GenBank, EMBL, or DDBJ) and the accession number provided. Authors of microarray papers should include in their submission the information recommended by the MIAME guidelines, and should also submit their experimental details to one of the publicly available databases: ArrayExpress or GEO.
Polymer Engineering and Science Impact Factor 2024-2025 (ResearchHelpDesk). Every day, the Society of Plastics Engineers (SPE) takes action to help companies in the plastics industry succeed by spreading knowledge, strengthening skills, and promoting plastics. Employing these strategies, SPE has helped the plastics industry thrive for over 60 years. In the process, it has developed a 25,000-member network of leading engineers and other plastics professionals, including technicians, salespeople, marketers, retailers, and representatives from tertiary industries. For more than 30 years, Polymer Engineering & Science has been one of the most highly regarded journals in the field, serving as a forum for authors of treatises on the cutting edge of polymer science and technology. The importance of PE&S is underscored by the frequency with which its articles are cited, especially by other publications: literally thousands of times a year. Engineers, researchers, technicians, and academicians worldwide look to PE&S for the valuable information they need. There are also special issues compiled by distinguished guest editors, containing proceedings of symposia on such diverse topics as polyblends, mechanics of plastics, and polymer welding.
Abstracting and Indexing Information:
- Academic ASAP (GALE Cengage)
- Advanced Technologies & Aerospace Database (ProQuest)
- Applied Science & Technology Index/Abstracts (EBSCO Publishing)
- CAS: Chemical Abstracts Service (ACS)
- CCR Database (Clarivate Analytics)
- Chemical Abstracts Service/SciFinder (ACS)
- Chemistry Server Reaction Center (Clarivate Analytics)
- ChemWeb (ChemIndustry.com)
- Chimica Database (Elsevier)
- COMPENDEX (Elsevier)
- Current Contents: Engineering, Computing & Technology (Clarivate Analytics)
- Current Contents: Physical, Chemical & Earth Sciences (Clarivate Analytics)
- Expanded Academic ASAP (GALE Cengage)
- InfoTrac (GALE Cengage)
- Journal Citation Reports/Science Edition (Clarivate Analytics)
- Materials Science & Engineering Database (ProQuest)
- PASCAL Database (INIST/CNRS)
- Polymer Library (iSmithers RAPRA)
- ProQuest Central (ProQuest)
- ProQuest Central K-462
- Reaction Citation Index (Clarivate Analytics)
- Research Library (ProQuest)
- Research Library Prep (ProQuest)
- Science Citation Index (Clarivate Analytics)
- Science Citation Index Expanded (Clarivate Analytics)
- SciTech Premium Collection (ProQuest)
- SCOPUS (Elsevier)
- STEM Database (ProQuest)
- Technology Collection (ProQuest)
- Web of Science (Clarivate Analytics)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This bar chart displays books by publication date, aggregated by count. The data is filtered to books whose publisher is Saunders/Elsevier.
NIST license: https://www.nist.gov/open/license
This dataset contains links to ThermoML files, which represent experimental thermophysical and thermochemical property data reported in the corresponding articles published by major journals in the field. These files are posted here through cooperation between the Thermodynamics Research Center (TRC) at the National Institute of Standards and Technology (NIST) and Elsevier. The ThermoML files corresponding to articles in the journals are available here with permission of the journal publishers.
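ThermoML is an XML-based IUPAC standard for thermophysical and thermochemical property data, so the files linked here can be inspected with ordinary XML tooling. Below is a minimal, illustrative Python sketch; the file name is a placeholder, and element names vary by schema version, so treat this as a starting point rather than a parser.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Placeholder path: any ThermoML file downloaded from this dataset.
tree = ET.parse("example.thermoml.xml")

# Tally element names (namespace stripped) to survey the file's structure.
tags = Counter(elem.tag.rsplit("}", 1)[-1] for elem in tree.getroot().iter())
for tag, count in tags.most_common(15):
    print(f"{tag}: {count}")
```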
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains supplementary materials for the following journal paper:
Valdemar Švábenský, Jan Vykopal, Pavel Seda, Pavel Čeleda. Dataset of Shell Commands Used by Participants of Hands-on Cybersecurity Training. In Elsevier Data in Brief. 2021. https://doi.org/10.1016/j.dib.2021.107398
How to cite
If you use or build upon the materials, please use the BibTeX entry below to cite the original paper (not only this web link).
@article{Svabensky2021dataset,
  author    = {\v{S}v\'{a}bensk\'{y}, Valdemar and Vykopal, Jan and Seda, Pavel and \v{C}eleda, Pavel},
  title     = {{Dataset of Shell Commands Used by Participants of Hands-on Cybersecurity Training}},
  journal   = {Data in Brief},
  publisher = {Elsevier},
  volume    = {38},
  year      = {2021},
  issn      = {2352-3409},
  url       = {https://doi.org/10.1016/j.dib.2021.107398},
  doi       = {10.1016/j.dib.2021.107398},
}
The data were collected using a logging toolset referenced here.
Attached content
Dataset (data.zip). The collected data are attached here on Zenodo. A copy is also available in this repository.
Analytical tools (toolset.zip). To analyze the data, you can instantiate the toolset or this project for ELK.
Version history
Version 1 (https://zenodo.org/record/5137355) contains 13446 log records from 175 trainees. These data are precisely those that are described in the associated journal paper. Version 1 provides a snapshot of the state when the article was published.
Version 2 (https://zenodo.org/record/5517479) contains 13446 log records from 175 trainees. The data are unchanged from Version 1, but the analytical toolset includes a minor fix.
Version 3 (https://zenodo.org/record/6670113) contains 21762 log records from 275 trainees. It is a superset of Version 2, with newly collected data added to the dataset.
The current Version 4 (https://zenodo.org/record/8136017) contains 21459 log records from 275 trainees. Compared to Version 3, we cleaned 303 invalid/duplicate command records.