Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals: Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50), Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman’s ρ correlation between the two ranking datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows.
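The κ-resampling described above can be sketched in code. The paper defines the exact procedure, so the following Python is only a minimal illustration under assumed details: each metric is converted to a within-sample rank, the set of contributing metrics is bootstrap-resampled κ times, and the middle composite rank plus a 95% percentile window is reported per journal (`composite_rank` and its arguments are invented names for this sketch, not the authors' implementation).

```python
import random
from statistics import mean

def ranks(values):
    # Rank descending: the highest metric value gets rank 1 (ties broken by order).
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def composite_rank(metric_table, kappa=1000, seed=1):
    """metric_table: dict journal -> list of metric values (same length per journal).
    Returns dict journal -> (middle composite rank, 2.5th, 97.5th percentile)."""
    rng = random.Random(seed)
    journals = list(metric_table)
    n_metrics = len(next(iter(metric_table.values())))
    # Per-metric rank vectors across all journals.
    per_metric = [ranks([metric_table[j][m] for j in journals]) for m in range(n_metrics)]
    samples = {j: [] for j in journals}
    for _ in range(kappa):
        # Resample which metrics contribute, with replacement, then average their ranks.
        chosen = [rng.randrange(n_metrics) for _ in range(n_metrics)]
        for idx, j in enumerate(journals):
            samples[j].append(mean(per_metric[m][idx] for m in chosen))
    out = {}
    for j in journals:
        s = sorted(samples[j])
        out[j] = (s[len(s) // 2], s[int(0.025 * len(s))], s[int(0.975 * len(s))])
    return out
```

For the paper's journal sets, `metric_table` would hold the five indices per journal; wide uncertainty windows then flag journals whose rank depends strongly on which metric is emphasized.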
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This file contains metadata from the 40 most impactful journals in the field of distance education, selected through a rigorous bibliometric analysis using the SCImago Journal Rank (SJR) indicator provided by SCImago. Two main criteria guided the selection: the first targeted journals ranked in Q1 and Q2 in the specific category of "e-learning" within the social sciences and education area, highlighting publications that demonstrate high impact and relevance in the academic community according to the selected indicator. The second criterion was based on keyword searches in the journal titles, selecting those that include terms like e-learning, online, Distance Education, Technology Learning, Communications in Information, and Information Education and their variants (e.g., plural), also positioned in Q1 or Q2.
The metadata, extracted from the Scopus database, covers publications from the period 2018 to 2022 and includes vital information such as document type, authors' names, article title, journal name, publication year, pages, volume, and issue number. Additionally, each article is identified by a Digital Object Identifier (DOI) and URLs for direct access to the full text online, along with abstracts and keywords. These elements together provide a comprehensive and accessible view of the articles, facilitating bibliometric analyses and related academic research.
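As a small illustration of working with such an export, the sketch below filters a Scopus-style CSV to the 2018–2022 window and keeps the core bibliographic fields; the column names (`Year`, `Authors`, `Title`, `Source title`, `DOI`) are assumptions about the export layout, not a documented Scopus schema.

```python
import csv
from io import StringIO

def filter_records(csv_text, year_min=2018, year_max=2022):
    """Keep rows within the study window; column names are illustrative,
    actual Scopus export headers may differ."""
    keep = []
    for r in csv.DictReader(StringIO(csv_text)):
        try:
            year = int(r["Year"])
        except (KeyError, ValueError):
            continue  # skip rows with a missing or non-numeric year
        if year_min <= year <= year_max:
            keep.append({k: r.get(k, "") for k in
                         ("Authors", "Title", "Source title", "Year", "DOI")})
    return keep
```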
This compilation serves as an essential resource for researchers and educators interested in understanding the dynamics and development of the field of distance education, offering a solid foundation for future investigations and the formulation of evidence-based educational policies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Data set for the study published in the journal El Profesional de la información (vol. 29, num. 4) and devoted to the analysis of the creation or title changes of scientific journals in the information and documentation (ID) area in the period 2013-2018. Based on the total of 62 such journals identified through ISSN Portal and UlrichsWeb, the following are described: characteristic aspects such as country, language, type of publisher, and access model; presence in bibliographic databases, citations, or journal directories; survival and volume of articles published; annual number of citations to articles according to Google Scholar; thematic scope declared by the editors; and finally, the justification given by the editors for the change of title or the creation of a new journal. Among the main conclusions regarding newly created titles, the leading role of academic publishers in expanding national university systems and open-access titles stands out. In general, new publications generate few articles per year, have little presence in databases, and receive few citations. Title changes were found only in journals published by commercial publishers. In both cases, journals with a general thematic scope predominate and a significant number of journals did not justify their creation or change of title.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The DOAJ database was created in 2003 and includes almost 14,000 peer-reviewed open access journals covering all knowledge areas, published in 130 countries. Titles must pass a selective review process that assures their quality. DOAJ is maintained by Infrastructure Services for Open Access (IS4OA) and is funded by donations (40% from publishers and 60% from the public sector). DOAJ introduced a quality distinction, the DOAJ Seal, to identify the most prominent journals; 1,354 journals (around 10% of the total) have been awarded the Seal.
The search strategy involved using the Seal option, then ranking the journals to identify the biggest publishers, the number of journals and the number of articles. We have extracted the following indicators from DOAJ: publisher, title, ISSN, country, number of articles, knowledge area (according to the DOAJ classification), value of article processing charges in USD, time for publication in weeks, and year of indexing in DOAJ.
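The ranking step described above (identifying the biggest publishers by journal and article counts) can be sketched as follows; the header names are illustrative stand-ins for the extracted DOAJ indicators, not the exact DOAJ export headers.

```python
import csv
from io import StringIO
from collections import Counter

def biggest_publishers(csv_text, top_n=5):
    """Tally journals and articles per publisher from a DOAJ-style export.
    Column names here are illustrative, not the exact DOAJ headers."""
    journals = Counter()
    articles = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        pub = row["Publisher"].strip()
        journals[pub] += 1
        articles[pub] += int(row.get("Number of articles", 0) or 0)
    # Rank by journal count, breaking ties by article count.
    ranked = sorted(journals, key=lambda p: (-journals[p], -articles[p]))
    return [(p, journals[p], articles[p]) for p in ranked[:top_n]]
```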
United States agricultural researchers have many options for making their data available online. This dataset aggregates the primary sources of ag-related data and determines where researchers are likely to deposit their agricultural data. These data serve both as a current landscape analysis and as a baseline for future studies of ag research data.

Purpose: As sources of agricultural data become more numerous and disparate, and collaboration and open data become more expected if not required, this research provides a landscape inventory of online sources of open agricultural data. An inventory of current agricultural data sharing options will help assess how the Ag Data Commons, a platform for USDA-funded data cataloging and publication, can best support data-intensive and multi-disciplinary research. It will also help agricultural librarians assist their researchers in data management and publication. The goals of this study were to:
• establish where agricultural researchers in the United States (land grant and USDA researchers, primarily ARS, NRCS, USFS and other agencies) currently publish their data, including general research data repositories, domain-specific databases, and the top journals;
• compare how much data is in institutional vs. domain-specific vs. federal platforms;
• determine which repositories are recommended by top journals that require or recommend the publication of supporting data;
• ascertain where researchers not affiliated with funding or initiatives possessing a designated open data repository can publish data.

Approach: The National Agricultural Library team focused on Agricultural Research Service (ARS), Natural Resources Conservation Service (NRCS), and United States Forest Service (USFS) style research data, rather than ag economics, statistics, and social sciences data.
To find domain-specific, general, institutional, and federal agency repositories and databases that are open to US research submissions and have some amount of ag data, resources including re3data, libguides, and ARS lists were analysed. Primarily environmental or public health databases were not included, but places where ag grantees would publish data were considered.

Search methods: We first compiled a list of known domain-specific USDA / ARS datasets / databases that are represented in the Ag Data Commons, including ARS Image Gallery, ARS Nutrition Databases (sub-components), SoyBase, PeanutBase, National Fungus Collection, i5K Workspace @ NAL, and GRIN. We then searched using search engines such as Bing and Google for non-USDA / federal ag databases, using Boolean variations of “agricultural data” / “ag data” / “scientific data” + NOT + USDA (to filter out the federal / USDA results). Most of these results were domain-specific, though some contained a mix of data subjects. We then used search engines such as Bing and Google to find top agricultural university repositories using variations of “agriculture”, “ag data” and “university” to find schools with agriculture programs. Using that list of universities, we searched each university web site to see if the institution had a repository for its unique, independent research data, if not apparent in the initial web browser search. We found both ag-specific university repositories and general university repositories that housed a portion of agricultural data. Ag-specific university repositories are included in the list of domain-specific repositories. Results included Columbia University – International Research Institute for Climate and Society, UC Davis – Cover Crops Database, etc. If a general university repository existed, we determined whether that repository could filter to include only data results after our chosen ag search terms were applied.
General university databases that contain ag data included Colorado State University Digital Collections, University of Michigan ICPSR (Inter-university Consortium for Political and Social Research), and University of Minnesota DRUM (Digital Repository of the University of Minnesota). We then split out NCBI (National Center for Biotechnology Information) repositories. Next we searched the internet for open general data repositories using a variety of search engines; repositories containing a mix of data, journals, books, and other types of records were tested to determine whether they could filter for data results after search terms were applied. General subject data repositories include Figshare, Open Science Framework, PANGAEA, Protein Data Bank, and Zenodo. Finally, we compared scholarly journal suggestions for data repositories against our list to fill in any missing repositories that might contain agricultural data. We compiled extensive lists of journals in which USDA researchers published in 2012 and 2016, combining search results in ARIS, Scopus, and the Forest Service's TreeSearch, plus the USDA web sites of the Economic Research Service (ERS), National Agricultural Statistics Service (NASS), Natural Resources and Conservation Service (NRCS), Food and Nutrition Service (FNS), Rural Development (RD), and Agricultural Marketing Service (AMS). The author instructions of the top 50 journals were consulted to see if they (a) ask or require submitters to provide supplemental data, or (b) require submitters to submit data to open repositories. Data are provided for journals based on the 2012 and 2016 studies of where USDA employees publish their research, ranked by number of articles, including 2015/2016 Impact Factor, Author guidelines, Supplemental Data?, Supplemental Data reviewed?, Open Data (Supplemental or in Repository) Required?, and Recommended data repositories, as provided in the online author guidelines for each of the top 50 journals.
Evaluation: We ran a series of searches on all resulting general subject databases with the designated search terms. From the results, we noted the total number of datasets in the repository, the type of resource searched (datasets, data, images, components, etc.), the percentage of the total database that each term comprised, any dataset with a search term that comprised at least 1% and 5% of the total collection, and any search term that returned greater than 100 and greater than 500 results. We compared domain-specific databases and repositories based on parent organization, type of institution, and whether data submissions were dependent on conditions such as funding or affiliation of some kind.

Results: A summary of the major findings from our data review:
• Over half of the top 50 ag-related journals from our profile require or encourage open data for their published authors.
• There are few general repositories that are both large AND contain a significant portion of ag data in their collection. GBIF (Global Biodiversity Information Facility), ICPSR, and ORNL DAAC were among those that had over 500 datasets returned with at least one ag search term and had that result comprise at least 5% of the total collection.
• Not even one quarter of the domain-specific repositories and datasets reviewed allow open submission by any researcher regardless of funding or affiliation.

See the included README file for descriptions of each individual data file in this dataset.

Resources in this dataset:
• Resource Title: Journals. File Name: Journals.csv
• Resource Title: Journals - Recommended repositories. File Name: Repos_from_journals.csv
• Resource Title: TDWG presentation. File Name: TDWG_Presentation.pptx
• Resource Title: Domain Specific ag data sources. File Name: domain_specific_ag_databases.csv
• Resource Title: Data Dictionary for Ag Data Repository Inventory. File Name: Ag_Data_Repo_DD.csv
• Resource Title: General repositories containing ag data. File Name: general_repos_1.csv
• Resource Title: README and file inventory. File Name: README_InventoryPublicDBandREepAgData.txt
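The threshold logic used in the Evaluation step (1% and 5% of the collection; more than 100 and more than 500 results per term) is simple to reproduce. This is a hedged sketch with invented function and key names, not the team's actual tooling.

```python
def classify_term_hits(total_records, term_hits):
    """term_hits: dict mapping a search term to the number of records it returned.
    Flags the 1%/5% coverage and >100/>500 result thresholds used in the review."""
    report = {}
    for term, hits in term_hits.items():
        share = hits / total_records if total_records else 0.0
        report[term] = {
            "share": round(share, 4),
            "at_least_1pct": share >= 0.01,
            "at_least_5pct": share >= 0.05,
            "over_100": hits > 100,
            "over_500": hits > 500,
        }
    return report
```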
We examined the journal policies of the 100 top-ranked clinical journals using the 2018 impact factors as reported by InCites Journal Citation Reports (JCR). First, we examined all journals with an impact factor greater than 5, and then we manually screened by title and category to identify the first 100 clinical journals, including only those that publish original research. Next, we checked each journal's editorial policy on preprints. We examined, in order, the journal website, the publisher website, the Transpose database, and the first 10 pages of a Google search with the journal name and the term "preprint." We classified each journal's policy, as shown in this dataset, as allowing preprints, deciding on a case-by-case basis, or not allowing any preprints. We collected data on April 23, 2020.
(Full methods can also be found in previously published paper.)
Background: This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.

Methods: Identifying the top 50 articles from a pool of 5940 in Scopus, the analysis spans November 2019 to July 2021, employing terms related to COVID-19 and complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors like title, citations, publication year, journal, and impact factor.

A bibliometric analysis of the most cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier’s Scopus and Thomson Reuters’ Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical speciality, or electronic availability of articles or abstracts.
The terms were combined as follows: (“COVID-19” OR “COVID19” OR “SARS-COV-2” OR “SARSCOV2” OR “SARS 2” OR “Novel coronavirus” OR “2019-nCov” OR “Coronavirus”) AND (“Complication” OR “Long Term Complication” OR “Post-Intensive Care Syndrome” OR “Venous Thromboembolism” OR “Acute Kidney Injury” OR “Acute Liver Injury” OR “Post COVID-19 Syndrome” OR “Acute Cardiac Injury” OR “Cardiac Arrest” OR “Stroke” OR “Embolism” OR “Septic Shock” OR “Disseminated Intravascular Coagulation” OR “Secondary Infection” OR “Blood Clots” OR “Cytokine Release Syndrome” OR “Paediatric Inflammatory Multisystem Syndrome” OR “Vaccine...).

# Data of top 50 most cited articles about COVID-19 and the complications of COVID-19
This dataset contains information about the top 50 most cited articles about COVID-19 and the complications of COVID-19. We have looked into a variety of research and clinical factors for the analysis.
The data sheet offers a comprehensive analysis of the selected articles. It delves into specifics such as the publication year of the top 50 articles, the journals responsible for publishing them, and the geographical region with the highest number of citations in this elite list. Moreover, the sheet sheds light on the key players involved, including authors and their affiliated departments, in crafting the top 50 most cited articles.
Beyond these fundamental aspects, the data sheet goes on to provide intricate details related to the study types and topics prevalent in the top 50 articles. To enrich the analysis, it incorporates clinical data, capturing...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Natural Language Processing (NLP) is a subset of artificial intelligence that enables machines to understand and respond to human language through Large Language Models (LLMs). These models have diverse applications in fields such as medical research, scientific writing, and publishing, but concerns such as hallucination, ethical issues, bias, and cybersecurity need to be addressed. To understand the scientific community’s understanding of and perspective on the role of Artificial Intelligence (AI) in research and authorship, a survey was designed for corresponding authors in top medical journals. An online survey was conducted from July 13th, 2023, to September 1st, 2023, using the SurveyMonkey web instrument; the population of interest was corresponding authors who published in 2022 in the 15 highest-impact medical journals, as ranked by the Journal Citation Reports. The survey link was sent to all identified corresponding authors by email. A total of 266 authors answered, and 236 entered the final analysis. Most of the researchers (40.6%) reported having moderate familiarity with artificial intelligence, while a minority (4.4%) had no associated knowledge. Furthermore, the vast majority (79.0%) believe that artificial intelligence will play a major role in the future of research. Of note, no correlation between academic metrics and artificial intelligence knowledge or confidence was found. The results indicate that although researchers have varying degrees of familiarity with artificial intelligence, its use in scientific research is still in its early phases. Despite lacking formal AI training, many scholars publishing in high-impact journals have started integrating such technologies into their projects, including rephrasing, translation, and proofreading tasks.
Efforts should focus on providing training for their effective use, establishing guidelines by journal editors, and creating software applications that bundle multiple integrated tools into a single platform.
This entry contains the code and data that were used in the publication "Adoption of Transparency and Openness Promotion (TOP) guidelines across journals", submitted to the journal Publications.

# IDEA: This project analyzed the policies of two thousand journals within the framework of the eight TOP standards (data citation; transparency of data, materials, code, and design and analysis; replication; analysis plan and study pre-registration) and two effective interventions: “Registered Reports” and “Open Science badges”.

# MATERIALS & METHODS: We downloaded the TOP Factor metric (v33, 2022-08-29 3:12 PM) from https://osf.io/kgnva/files/osfstorage/5e13502257341901c3805317 and analyzed its content with in-house R scripts (in this repo):
1) SCRIPT: fig1_Analyzing_journals_policies_and_TOP_guidelines.R
2) SCRIPT: Figure2a_b_TOP_impl_journal_statistist_0_1_piechart_barplot.R
To get statistics about the implementation of the TOP guidelines across discipline-specific journals, we extracted information about each journal's discipline from the Scopus content database. We downloaded the Scopus content coverage list (existJuly2022.xlsx) from https://www.elsevier.com/solutions/scopus/how-scopus-works/content?dgcid=RN_AGCM_Sourced_300005030 and used its first sheet. We identified matches between these two tables:
3) SCRIPT: Rscript_overlapping_TOP_dataframe_and_SCOPUS_db.R
which produced the Overlap_SCOPUS_TOP.rds file, and then performed visualization and statistics:
4) SCRIPT: Fig_3_Tab2_Defining_science_disciplines_plus_plot.R

# RESULTS: Submitted to Publications 30.9.2022.
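The matching step above was done in R (script 3). As a rough illustration of the same idea, the Python sketch below joins a TOP Factor table to Scopus coverage on a normalised journal title; the field names (`title`, `discipline`) are placeholders, and real matching would also need ISSN comparison and fuzzier title handling.

```python
def match_journals(top_rows, scopus_rows):
    """top_rows / scopus_rows: lists of dicts, each with a journal title.
    Returns the overlap, annotated with the Scopus discipline.
    A simplified stand-in for the R matching script."""
    def norm(title):
        # Lowercase and collapse whitespace so trivially different titles match.
        return " ".join(title.lower().split())
    scopus_by_title = {norm(r["title"]): r for r in scopus_rows}
    overlap = []
    for r in top_rows:
        hit = scopus_by_title.get(norm(r["title"]))
        if hit:
            overlap.append({**r, "discipline": hit.get("discipline", "")})
    return overlap
```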
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before their change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
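As a toy illustration of the instrumental-variable logic above (the policy change shifts data sharing, which in turn may shift citations), here is a just-identified Wald/IV estimator. It is a stylised stand-in for illustration, not the paper's actual econometric specification.

```python
def iv_wald(z, x, y):
    """Just-identified IV (Wald) estimator of the effect of x on y,
    using z as the instrument: cov(z, y) / cov(z, x)."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) / n
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / n
    return cov_zy / cov_zx
```

With z as the post-policy indicator, x as observed data availability, and y as citation counts, the ratio recovers the effect of sharing on citations under the usual IV assumptions (relevance and exclusion).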
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Tables and charts have long been seen as effective ways to convey data. Much attention has been focused on improving charts, following ideas of human perception and brain function. Tables can also be viewed as two-dimensional representations of data, yet it is only fairly recently that we have begun to apply principles of design that aid the communication of information between the author and reader. In this study, we collated guidelines for the design of data and statistical tables. These guidelines fall under three principles: aiding comparisons, reducing visual clutter, and increasing readability. We surveyed tables published in recent issues of 43 journals in the fields of ecology and evolutionary biology for their adherence to these three principles, as well as author guidelines on journal publisher websites. We found that most of the over 1,000 tables we sampled had no heavy grid lines and little visual clutter. They were also easy to read, with clear headers and horizontal orientation. However, most tables did not aid the vertical comparison of numeric data. We suggest that authors could improve their tables by right-flushing numeric columns typeset in a tabular font, clearly identifying statistical significance, and using clear titles and captions. Journal publishers could easily implement these formatting guidelines when typesetting manuscripts.

Methods: Once we had established the above principles of table design, we assessed their use in issues of 43 widely read ecology and evolution journals (SI 2). Between January and July 2022, we reviewed the tables in the most recent issue published by these journals. For journals without issues (such as Annual Review of Ecology, Evolution, and Systematics, or Biological Conservation), we examined the tables in issues published in a single month or in the entire most recent volume if few papers were published in that journal on a monthly basis.
We reviewed only articles in a traditionally typeset format and published as a PDF or in print. We did not examine the tables in online versions of articles. Having identified all tables for review, we assessed whether these tables followed the above-described best practice principles for table design and, if not, we noted the way in which these tables failed to meet the outlined guidelines. We initially both reviewed the same 10 tables to ensure that we agreed in our assessment of whether these tables followed each of the principles. Having ensured agreement on how to classify tables, we proceeded to review all subsequent journals individually, while resolving any uncertainties collaboratively. These preliminary table evaluations also showed that assessing whether tables used long format or a tabular font was hard to evaluate objectively without knowing the data or the font used. Therefore, we did not systematically review the extent to which these two guidelines were adhered to.
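One of the guidelines the survey checked, right-flushing numeric columns so that comparable digits line up vertically, can be demonstrated in a few lines; `format_numeric_column` is an invented helper name for this illustration.

```python
def format_numeric_column(values, decimals=2):
    """Right-flush a numeric column with a fixed number of decimals so that
    tens, units and decimal points align, as the design guidelines recommend."""
    cells = [f"{v:.{decimals}f}" for v in values]
    width = max(len(c) for c in cells)
    return [c.rjust(width) for c in cells]
```

For example, `format_numeric_column([3.5, 12.25, 0.1])` returns `[" 3.50", "12.25", " 0.10"]`; set in a tabular (fixed-width-digit) font, the column then supports the vertical comparisons the survey found most tables lacked.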
Sentences and citation contexts identified from the PubMed Central open access articles ---------------------------------------------------------------------- The dataset is delivered as 24 tab-delimited text files. The files contain 720,649,608 sentences, 75,848,689 of which are citation contexts. The dataset is based on a snapshot of articles in the XML version of the PubMed Central open access subset (i.e., the PMCOA subset). The PMCOA subset was collected in May 2019. The dataset is created as described in: Hsiao TK., & Torvik V. I. (manuscript) OpCitance: Citation contexts identified from the PubMed Central open access articles. Files: • A_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with A. • B_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with B. • C_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with C. • D_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with D. • E_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with E. • F_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with F. • G_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with G. • H_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with H. • I_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with I. 
• J_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with J. • K_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with K. • L_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with L. • M_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with M. • N_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with N. • O_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with O. • P_p1_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with P (part 1). • P_p2_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with P (part 2). • Q_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with Q. • R_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with R. • S_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with S. • T_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with T. • UV_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with U or V. 
• W_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with W. • XYZ_journal_IntxtCit.tsv – Sentences and citation contexts identified from articles published in journals with journal titles starting with X, Y or Z. Each row in the file is a sentence/citation context and contains the following columns: • pmcid: PMCID of the article • pmid: PMID of the article. If an article does not have a PMID, the value is NONE. • location: The article component (abstract, main text, table, figure, etc.) to which the citation context/sentence belongs. • IMRaD: The type of IMRaD section associated with the citation context/sentence. I, M, R, and D represent introduction/background, method, results, and conclusion/discussion, respectively; NoIMRaD indicates that the section type is not identifiable. • sentence_id: The ID of the citation context/sentence in the article component • total_sentences: The number of sentences in the article component. • intxt_id: The ID of the citation. • intxt_pmid: PMID of the citation (as tagged in the XML file). If a citation does not have a PMID tagged in the XML file, the value is "-". • intxt_pmid_source: The sources where the intxt_pmid can be identified. Xml represents that the PMID is only identified from the XML file; xml,pmc represents that the PMID is not only from the XML file, but also in the citation data collected from the NCBI Entrez Programming Utilities. If a citation does not have an intxt_pmid, the value is "-". • intxt_mark: The citation marker associated with the inline citation. • best_id: The best source link ID (e.g., PMID) of the citation. • best_source: The sources that confirm the best ID. • best_id_diff: The comparison result between the best_id column and the intxt_pmid column. • citation: A citation context. If no citation is found in a sentence, the value is the sentence. • progression: Text progression of the citation context/sentence. 
Supplementary Files • PMC-OA-patci.tsv.gz – This file contains the best source link IDs for the references (e.g., PMID). Patci [1] was used to identify the best source link IDs. The best source link IDs are mapped to the citation contexts and displayed in the *_journal IntxtCit.tsv files as the best_id column. Each row in the PMC-OA-patci.tsv.gz file is a citation (i.e., a reference extracted from the XML file) and contains the following columns: • pmcid: PMCID of the citing article. • pos: The citation's position in the reference list. • fromPMID: PMID of the citing article. • toPMID: Source link ID (e.g., PMID) of the citation. This ID is identified by Patci. • SRC: The sources that confirm the toPMID. • MatchDB: The origin bibliographic database of the toPMID. • Probability: The match probability of the toPMID. • toPMID2: PMID of the citation (as tagged in the XML file). • SRC2: The sources that confirm the toPMID2. • intxt_id: The ID of the citation. • journal: The first letter of the journal title. This maps to the *_journal_IntxtCit.tsv files. • same_ref_string: Whether the citation string appears in the reference list more than once. • DIFF: The comparison result between the toPMID column and the toPMID2 column. • bestID: The best source link ID (e.g., PMID) of the citation. • bestSRC: The sources that confirm the best ID. • Match: Matching result produced by Patci. [1] Agarwal, S., Lincoln, M., Cai, H., & Torvik, V. (2014). Patci – a tool for identifying scientific articles cited by patents. GSLIS Research Showcase 2014. http://hdl.handle.net/2142/54885 • Supplementary_File_1.zip – This file contains the code for generating the dataset.
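Based on the column list above, one plausible way to stream only the citation-context rows out of a *_journal_IntxtCit.tsv file is sketched below; the column order follows the README ordering and should be verified against the delivered files before use.

```python
import csv
from io import StringIO

# Column order as listed in the README above (an assumption to verify).
COLUMNS = ["pmcid", "pmid", "location", "IMRaD", "sentence_id", "total_sentences",
           "intxt_id", "intxt_pmid", "intxt_pmid_source", "intxt_mark",
           "best_id", "best_source", "best_id_diff", "citation", "progression"]

def citation_contexts(tsv_text):
    """Yield only the rows that carry an inline citation (intxt_id present)."""
    reader = csv.DictReader(StringIO(tsv_text), fieldnames=COLUMNS, delimiter="\t")
    for row in reader:
        if row["intxt_id"] and row["intxt_id"] != "-":
            yield row
```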
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Research data accompanying the article "Overlay journals: a study of the current landscape" (https://doi.org/10.1177/09610006221125208)
Identifying the sample of overlay journals was an exploratory process (occurring from April 2021 to February 2022). The sample of investigated overlay journals was identified using the websites of Episciences.org (2021), Scholastica (2021), Free Journal Network (2021), Open Journals (2021), PubPub (2022), and Wikipedia (2021). In total, this study identified 34 overlay journals. Please see the paper for more details about the excluded journal types.
The journal ISSN numbers, manuscript source repositories, first overlay volumes, article volumes, publication languages, peer-review types, licences for published articles, author costs, publisher types, submission policies, and preprint availability policies were observed by inspecting journal editorial policies and submission guidelines found on journal websites. The overlay journals’ ISSN numbers were identified by examining journal websites and cross-checking this information with Ulrich’s periodicals database (Ulrichsweb, 2021). Journals that published review reports, either with reviewers’ names or anonymously, were classified as operating with open peer review. The publisher types defined by Laakso and Björk (2013) were used to categorise the findings concerning the publishers. If the journal website did not include publisher information, the editorial board was interpreted as the journal's publisher.
The Organisation for Economic Co-operation and Development (OECD) field of science classification was used to categorise the journals into different domains of science. The journals’ primary OECD fields of science were defined by the authors by examining the journal websites.
Whether the journals were indexed in the Directory of Open Access Journals (DOAJ), Scopus, or Clarivate Analytics’ Web of Science Core Collection journal master list was examined by searching these services with journal ISSN numbers and journal titles.
The identified overlay journals were examined from the viewpoint of both qualitative and quantitative journal metrics. The qualitative metrics comprised the Nordic expert panel rankings of scientific journals, namely the Finnish Publication Forum, the Danish Bibliometric Research Indicator and the Norwegian Register for Scientific Journals, Series and Publishers. Searches were conducted from the web portals of the above services with both ISSN numbers and journal titles. Clarivate Analytics’ Journal Citation Reports database was searched with the use of both ISSN numbers and journal titles to identify whether the journals had a Journal Citation Indicator (JCI), Two-Year Impact Factor (IF) and an Impact Factor ranking (IF rank). The examined Journal Impact Factors and Impact Factor rankings were for the year 2020 (as released in 2021).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was compiled as part of a study on Barriers and Opportunities in the Discoverability and Indexing of Student-led Academic Journals. The list of student journals and their details was compiled from public sources. This list is used to identify the presence of Canadian student journals in Google Scholar as well as in select indexes and databases: DOAJ, Scopus, Web of Science, Medline, Erudit, ProQuest, and HeinOnline. Additionally, the journal publishing platform is recorded for use in a correlational analysis against Google Scholar indexing results. For further details, see the README.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Raw data on the data sharing policies of over 300 journals, supporting the article currently under review: "Reproducible and reusable research: Are journal data sharing policies meeting the mark?".
Raw data and analysis of the data sharing policies of 318 biomedical journals. The study authors manually reviewed the author instructions and editorial policies to analyze each journal's data sharing requirements and characteristics. The data sharing policies were ranked using a rubric to determine whether data sharing was required, recommended, or not addressed at all. The data sharing method and licensing recommendations were examined, as well as any mention of reproducibility or similar concepts. The data were analyzed for patterns relating to publishing volume, Journal Impact Factor, and the publishing model (open access or subscription) of each journal.
We evaluated journals included in Thomson Reuters’ InCites 2013 Journal Citation Reports (JCR) classified within the following Web of Science schema categories: Biochemistry and Molecular Biology, Biology, Cell Biology, Crystallography, Developmental Biology, Biomedical Engineering, Immunology, Medical Informatics, Microbiology, Microscopy, Multidisciplinary Sciences, and Neurosciences. These categories were selected to capture the journals publishing the majority of peer-reviewed biomedical research. The original data pull included 1,166 journals, collectively publishing 213,449 articles. We filtered this list to the journals in the top quartiles by impact factor (IF) or by number of articles published in 2013. Additionally, the list was manually reviewed to exclude short-report and review journals, and titles determined to be outside the fields of basic medical science or clinical research. The final study set included 318 journals, which published 130,330 articles in 2013. The study set represented 27% of the original Journal Citation Reports list and 61% of the original citable articles. Prior to our analysis, the 2014 Journal Citation Reports was released. After our initial analyses and first preprint submission, the 2015 Journal Citation Reports was released. While we did not use the 2014 or 2015 data to amend the journals in the study set, we did employ data from all three reports in our analyses. In our data pull from JCR, we included the journal title, International Standard Serial Number (ISSN), the total citable items for 2013, 2014, and 2015, the total citations to the journal for 2013/14/15, the impact factors for 2013/14/15, and the publisher.
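The top-quartile filtering step described above can be sketched with pandas; the journal records here are synthetic and the 0.75 quantile cut-offs are illustrative assumptions, not the study's exact procedure:

```python
import pandas as pd

# Synthetic journal records (title, 2013 impact factor, articles published in 2013).
journals = pd.DataFrame({
    "title": [f"Journal {i}" for i in range(8)],
    "impact_factor": [1.2, 3.5, 8.9, 0.8, 4.4, 2.1, 6.7, 0.5],
    "articles_2013": [120, 900, 300, 40, 2500, 75, 60, 1500],
})

# Keep journals in the top quartile by impact factor OR by publication volume,
# mirroring the filtering step described above.
if_cut = journals["impact_factor"].quantile(0.75)
vol_cut = journals["articles_2013"].quantile(0.75)
top = journals[(journals["impact_factor"] >= if_cut)
               | (journals["articles_2013"] >= vol_cut)]
print(top["title"].tolist())
```

A manual-review pass (excluding review journals and out-of-scope titles) would then be applied to the filtered list.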
How does the structure of the peer review process, which can vary from journal to journal, influence the quality of papers published in that journal? In this paper, I study multiple systems of peer review using computational simulation. I find that, under any system I study, a majority of accepted papers will be evaluated by the average reader as not meeting the standards of the journal. Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor (who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions) can mitigate some of these effects.
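A minimal sketch of this kind of peer-review simulation; all parameters here (quality scale, reviewer noise, desk-rejection threshold, majority rule) are illustrative assumptions, not the paper's actual model:

```python
import random

random.seed(42)

def simulate_acceptances(n_papers=10000, n_reviewers=3, desk_reject=True):
    """Simulate accept/reject decisions with heterogeneous reviewer standards."""
    accepted = 0
    accepted_below_readers = 0
    for _ in range(n_papers):
        quality = random.gauss(0, 1)                # latent paper quality
        if desk_reject and quality + random.gauss(0, 0.5) < -0.5:
            continue                                 # editor desk-rejects before review
        # Each reviewer has an idiosyncratic standard and a noisy perception
        # of quality; a reviewer votes to accept if perceived quality exceeds
        # that reviewer's own standard.
        votes = sum(
            quality + random.gauss(0, 0.7) > random.gauss(0.5, 0.5)
            for _ in range(n_reviewers)
        )
        if votes >= 2:                               # majority vote to accept
            accepted += 1
            if quality < 0.5:                        # below the average reader's standard
                accepted_below_readers += 1
    return accepted, accepted_below_readers

acc, below = simulate_acceptances()
print(f"accepted: {acc}, judged below the journal standard by the average reader: {below}")
```

Because reviewer and reader standards are drawn from different distributions, noisy reviewer perceptions let chance drive many decisions, which is the mechanism the abstract describes.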
Top 20 Ancient Medicine Journals – find the best Ancient Medicine journal from the top 20 list and publish your manuscript research paper through the peer-review process.
As of the financial year 2023, the greatest number of publications registered in India were monthly periodicals, numbering over ******. Comparatively, bi- and tri-weekly publications were the fewest in number. Most of the publications were published in Hindi, followed by English.

Newspapers sustain the Indian print industry
Just under ******* newspapers and periodicals were registered and in circulation in India as of the financial year 2023. Of these, daily newspapers were among the most consumed publications in the country, with Hindi and Marathi dailies being the most circulated. Newspapers accounted for the lion’s share of the print industry’s revenue, while the magazine segment raked in ************* Indian rupees in 2023.

The Indian publishing market
The country’s publishing industry was estimated to expand to over *********** Indian rupees in size by 2024. Alongside newspapers and magazines, books find a large consumer base in the Indian market. Segmented into trade and non-trade books, the Indian book market, while significant, is quite fragmented, crowded with publishers and sellers of varying sizes. Some of the major publishers operating in the country include Jaico Publishing House, Penguin Random House, Rupa Publications, and Hachette India.
Food and Energy Security CiteScore 2024-2025 - ResearchHelpDesk - Food and Energy Security is a high-quality and high-impact open access journal publishing original research on agricultural crop and forest productivity to improve food and energy security.

Aims and Scope
Food and Energy Security seeks to publish high-quality and high-impact original research on agricultural crop and forest productivity to improve food and energy security. It actively seeks submissions from emerging countries with expanding agricultural research communities. Papers from China, other parts of Asia, India and South America are particularly welcome. The Editorial Board, headed by Editor-in-Chief Professor Christine Foyer, is determined to make FES the leading publication in its sector and will be aiming for a top-ranking impact factor. Primary research articles should report hypothesis-driven investigations that provide new insights into mechanisms and processes that determine productivity and properties for exploitation. Review articles are welcome, but they must be critical in approach and provide particularly novel and far-reaching insights. Food and Energy Security offers authors a forum for the discussion of the most important advances in this field and promotes an integrative approach across scientific disciplines. Papers must contribute substantially to the advancement of knowledge. Examples of areas covered in Food and Energy Security include:
• Agronomy
• Biotechnological Approaches
• Breeding & Genetics
• Climate Change
• Quality and Composition
• Food Crops and Bioenergy Feedstocks
• Development, Physiology, and Biochemistry
• Functional Genomics
• Molecular Biology
• Pest and Disease Management
• Political, economic and societal influences on food security and agricultural crop production
• Post-Harvest Biology
• Soil Science
• Systems Biology
The journal is Open Access and published online.
Submission of manuscripts to Food and Energy Security is exclusively via a web-based electronic submission and tracking system, enabling rapid submission-to-first-decision times. Before submitting a paper for publication, potential authors should first read the Author Guidelines. Instructions on how to upload your manuscript can be found on ScholarOne Manuscripts.

Keywords
Agricultural economics, Agriculture, Bioenergy, Biofuels, Biochemistry, Biotechnology, Breeding, Composition, Development, Diseases, Feedstocks, Food, Food Security, Food Safety, Forestry, Functional Genomics, Genetics, Horticulture, Pests, Phenomics, Plant Architecture, Plant Biotechnology, Plant Science, Quality Traits, Secondary Metabolites, Social policies, Weed Science.

Abstracting and Indexing Information
• Abstracts on Hygiene & Communicable Diseases (CABI)
• AgBiotechNet (CABI)
• AGRICOLA Database (National Agricultural Library)
• Agricultural Economics Database (CABI)
• Animal Breeding Abstracts (CABI)
• Animal Production Database (CABI)
• Animal Science Database (CABI)
• CAB Abstracts® (CABI)
• Current Contents: Agriculture, Biology & Environmental Sciences (Clarivate Analytics)
• Environmental Impact (CABI)
• Global Health (CABI)
• Nutrition & Food Sciences Database (CABI)
• Nutrition Abstracts & Reviews Series A: Human & Experimental (CABI)
• Plant Breeding Abstracts (CABI)
• Plant Genetics and Breeding Database (CABI)
• Plant Protection Database (CABI)
• Postharvest News & Information (CABI)
• Science Citation Index Expanded (Clarivate Analytics)
• SCOPUS (Elsevier)
• Seed Abstracts (CABI)
• Soil Science Database (CABI)
• Soils & Fertilizers Abstracts (CABI)
• Web of Science (Clarivate Analytics)
• Weed Abstracts (CABI)
• Wheat, Barley & Triticale Abstracts (CABI)
• World Agricultural Economics & Rural Sociology Abstracts (CABI)

Society Information
The Association of Applied Biologists is a registered charity (No. 275655) that was founded in 1904.
The Association's overall aim is: 'To promote the study and advancement of all branches of Biology and in particular (but without prejudice to the generality of the foregoing), to foster the practice, growth, and development of applied biology, including the application of biological sciences for the production and preservation of food, fiber, and other materials and for the maintenance and improvement of earth's physical environment'.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Prior studies showed that scientists’ professional networks contribute to research productivity, but little work has examined what factors predict the formation of professional networks. This study sought to 1) examine what factors predict the formation of international ties between faculty and graduate students and 2) identify how these international ties would affect publication productivity in three East Asian countries. Face-to-face surveys and in-depth semi-structured interviews were conducted with a sample of faculty and doctoral students in life sciences at 10 research institutions in Japan, Singapore, and Taiwan. Our final sample consisted of 290 respondents (84 faculty and 206 doctoral students) and 1,435 network members. We used egocentric social network analysis to examine the structure of international ties and how they relate to research productivity. Our findings suggest that overseas graduate training can be a key factor in graduate students’ development of international ties in these countries. Those with a higher proportion of international ties in their professional networks were likely to have published more papers and written more manuscripts. For faculty, international ties did not affect the number of manuscripts written or of papers published, but did correlate with an increase in publishing in top journals. The networks we examined were identified by asking study participants with whom they discuss their research. Because the relationships may not appear in explicit co-authorship networks, these networks were not officially recorded elsewhere. This study sheds light on the relationships of these invisible support networks to researcher productivity.