https://spdx.org/licenses/CC0-1.0.html
Research is a highly competitive profession in which evaluation plays a central role; journals are ranked, and individuals are evaluated based on their number of publications, the number of times they are cited, and their h-index. Yet such evaluations are often done in inappropriate ways that are damaging to individual careers, particularly for young scholars, and to the profession. Furthermore, as with all indices, people can game the system to better their scores. This has resulted in the incentive structure of science increasingly mimicking economic principles, except that rather than a monetary gain, the incentive is a higher score. To ensure a diversity of cultural perspectives and individual experiences, we gathered a team of academics in the fields of ecology and evolution from around the world and at different career stages. We first examine how authorship, the h-index of individuals, and journal impact factors are being used and abused. Second, we speculate on the consequences of the continued use of these metrics, with the hope of sparking discussions that will help our fields move in a positive direction. We would like to see changes in the incentive systems, rewarding quality research and guaranteeing transparency. Senior faculty should establish the ethical standards, mentoring practices, and institutional evaluation criteria needed to create these changes.
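For readers unfamiliar with the metric, the h-index mentioned above has a simple definition: it is the largest h such that at least h of a researcher's papers have received at least h citations each. A minimal sketch (the citation counts below are purely illustrative):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Illustrative example: seven papers with these citation counts give h = 4.
print(h_index([25, 8, 5, 4, 3, 3, 0]))  # -> 4
```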
Methods The number of authors on research articles in six journals through time. The area of each circle corresponds to the number of publications with that number of authors in that year. To aid visual interpretation of the data, a generalized additive model was fitted to the data. For ease of interpretation, the number of authors is truncated at 100, meaning that publications with >100 coauthors are plotted as having 101 coauthors.
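As a sketch of how the plotted quantities could be assembled from raw article records, the truncation and the per-year, per-author-count publication counts might look like this in pandas (the column names and values here are hypothetical, not taken from the dataset):

```python
import pandas as pd

# Hypothetical input: one row per article with journal, year, and author count.
articles = pd.DataFrame({
    "journal": ["J1", "J1", "J2", "J2", "J2"],
    "year": [2010, 2010, 2010, 2011, 2011],
    "n_authors": [3, 150, 3, 2, 101],
})

# Truncate author counts: anything above 100 is plotted as 101.
articles["n_authors_plot"] = articles["n_authors"].clip(upper=101)

# Circle area for each (journal, year, author count): the number of
# publications with that author count in that year.
bubble_sizes = (
    articles.groupby(["journal", "year", "n_authors_plot"])
    .size()
    .reset_index(name="n_publications")
)
print(bubble_sizes)
```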
These data were used for a paper exploring the relationship between the journal impact factor and underlying citation distributions in selected veterinary journals. Citation reports were generated according to the method of Lariviere et al. 2016 bioRxiv: http://dx.doi.org/10.1101/062109
https://spdx.org/licenses/CC0-1.0.html
Background This bibliometric analysis examines the top 50 most-cited articles on COVID-19 complications, offering insights into the multifaceted impact of the virus. Since its emergence in Wuhan in December 2019, COVID-19 has evolved into a global health crisis, with over 770 million confirmed cases and 6.9 million deaths as of September 2023. Initially recognized as a respiratory illness causing pneumonia and ARDS, its diverse complications extend to the cardiovascular, gastrointestinal, renal, hematological, neurological, endocrinological, ophthalmological, hepatobiliary, and dermatological systems.

Methods The top 50 articles were identified from a pool of 5940 in Scopus; the analysis spans November 2019 to July 2021 and employs search terms related to COVID-19 and its complications. Rigorous review criteria excluded non-relevant studies, basic science research, and animal models. The authors independently reviewed articles, considering factors such as title, citations, publication year, journal, impact factor, authors, study details, and patient demographics.

Results The focus is primarily on 2020 publications (96%), with all articles being open access. Leading journals include The Lancet, NEJM, and JAMA, with prominent contributions from Internal Medicine (46.9%) and Pulmonary Medicine (14.5%). China played a major role (34.9%), followed by France and Belgium. Clinical features were the primary study topic (68%), often investigated with retrospective designs (24%). Among the 22,477 patients analyzed, 54.8% were male, with the most common age group being 26–65 years (63.2%). Complications affected 13.9% of patients, and the recovery rate was 57.8%.

Conclusion Analyzing these top-cited articles offers clinicians and researchers a comprehensive, timely understanding of influential COVID-19 literature. This approach uncovers attributes contributing to high citation counts and provides authors with valuable insights for crafting impactful research. As a strategic tool, this analysis facilitates staying up to date and making meaningful contributions to the dynamic field of COVID-19 research.

Methods A bibliometric analysis of the most-cited articles about COVID-19 complications was conducted in July 2021 using all journals indexed in Elsevier's Scopus and Thomson Reuters' Web of Science from November 1, 2019 to July 1, 2021. All journals were selected for inclusion regardless of country of origin, language, medical speciality, or electronic availability of articles or abstracts.
The terms were combined as follows: ("COVID-19" OR "COVID19" OR "SARS-COV-2" OR "SARSCOV2" OR "SARS 2" OR "Novel coronavirus" OR "2019-nCov" OR "Coronavirus") AND ("Complication" OR "Long Term Complication" OR "Post-Intensive Care Syndrome" OR "Venous Thromboembolism" OR "Acute Kidney Injury" OR "Acute Liver Injury" OR "Post COVID-19 Syndrome" OR "Acute Cardiac Injury" OR "Cardiac Arrest" OR "Stroke" OR "Embolism" OR "Septic Shock" OR "Disseminated Intravascular Coagulation" OR "Secondary Infection" OR "Blood Clots" OR "Cytokine Release Syndrome" OR "Paediatric Inflammatory Multisystem Syndrome" OR "Vaccine Induced Thrombosis with Thrombocytopenia Syndrome" OR "Aspergillosis" OR "Mucormycosis" OR "Autoimmune Thrombocytopenia Anaemia" OR "Immune Thrombocytopenia" OR "Subacute Thyroiditis" OR "Acute Respiratory Failure" OR "Acute Respiratory Distress Syndrome" OR "Pneumonia" OR "Subcutaneous Emphysema" OR "Pneumothorax" OR "Pneumomediastinum" OR "Encephalopathy" OR "Pancreatitis" OR "Chronic Fatigue" OR "Rhabdomyolysis" OR "Neurologic Complication" OR "Cardiovascular Complications" OR "Psychiatric Complication" OR "Respiratory Complication" OR "Cardiac Complication" OR "Vascular Complication" OR "Renal Complication" OR "Gastrointestinal Complication" OR "Haematological Complication" OR "Hepatobiliary Complication" OR "Musculoskeletal Complication" OR "Genitourinary Complication" OR "Otorhinolaryngology Complication" OR "Dermatological Complication" OR "Paediatric Complication" OR "Geriatric Complication" OR "Pregnancy Complication") in the Title, Abstract or Keyword.

A total of 5940 articles were accessed, of which the top 50 most-cited articles about COVID-19 and its complications were selected through Scopus. Each article was reviewed for its appropriateness for inclusion. The articles were independently reviewed by three researchers (JRP, MAM and TS) (Table 1). Differences in opinion with regard to article inclusion were resolved by consensus. The inclusion criteria specified articles focused on COVID-19 and its complications. Articles were excluded if they did not relate to COVID-19 and/or its complications, were basic science research, or used animal models or phantoms. Review articles, viewpoints, guidelines, perspectives and meta-analyses were also excluded from the top 50 most-cited articles (Table 1). The top 50 most-cited articles were compiled in a single database and the relevant data were extracted. The database included: Article Title, Scopus Citations, Year of Publication, Journal, Journal Impact Factor, Authors, Number of Authors, Department Affiliation, Number of Institutions, Country of Origin, Study Topic, Study Design, Sample Size, Open Access, Non-Original Articles, Patient/Participant Age, Gender, Symptoms, Signs, Co-morbidities, Complications, Imaging Modalities Used, and Outcome.
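A boolean query of this size lends itself to being assembled programmatically. The sketch below builds a query string in Scopus advanced-search style, using TITLE-ABS-KEY() to mirror the Title/Abstract/Keyword scope described above; the term lists are abridged and the helper function is purely illustrative:

```python
# Sketch of assembling the boolean search string; term lists abridged here.
covid_terms = ["COVID-19", "COVID19", "SARS-COV-2", "Novel coronavirus", "2019-nCov"]
complication_terms = ["Complication", "Acute Kidney Injury", "Stroke",
                      "Acute Respiratory Distress Syndrome", "Pneumonia"]

def or_group(terms):
    """Join terms into a quoted, OR-separated group: ("a" OR "b" OR ...)."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"TITLE-ABS-KEY({or_group(covid_terms)} AND {or_group(complication_terms)})"
print(query)
```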
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Survey period: 08 April – 08 May 2014. Top 10 Impact Factor journals in each of 22 categories.
Figures https://doi.org/10.6084/m9.figshare.6857273.v1
Article https://doi.org/10.20651/jslis.62.1_20 https://doi.org/10.15068/00158168
Procedia Computer Science Impact Factor 2024-2025 - ResearchHelpDesk - Launched in 2009, Procedia Computer Science is an electronic product focusing entirely on publishing high-quality conference proceedings. Procedia Computer Science enables fast dissemination so conference delegates can publish their papers in a dedicated online issue on ScienceDirect, which is then made freely available worldwide. Conference proceedings are accepted for publication in Procedia Computer Science based on quality and are therefore required to meet certain criteria, including the quality of the conference, relevance to an international audience, and coverage of highly cited or timely topics. Procedia Computer Science will cover proceedings in all topics of Computer Science with the exception of certain workshops in Theoretical Computer Science, as our Electronic Notes in Theoretical Computer Science http://www.elsevier.com/locate/entcs are the primary outlet for these papers. The journal is indexed in Scopus, the largest abstract and citation database of peer-reviewed literature.

Copyright information For authors publishing in Procedia Computer Science, accepted manuscripts will be governed by CC BY-NC-ND. For further details see our copyright information.

What does Procedia Computer Science offer authors and conference organizers? Procedia Computer Science offers a single, highly recognized platform where conference papers can be hosted and accessed by millions of researchers. Authors then know where to go to keep abreast of the latest developments in their field, and get maximum exposure for their own work. All proceedings appear online, on ScienceDirect, within 8 weeks of acceptance of the final manuscripts via the conference organizer and are freely available to all users. To ensure discoverability and citability, the final online papers are individually metadated, XML versions are optimized for search engines, references are linked, and DOI (Digital Object Identifier) numbers are added.

Why should conference organizers choose Procedia Computer Science? Unlike regular printed conference publications, there is no practical limit to the number of papers a Procedia Computer Science issue can contain, and pricing is affordable, clear and transparent, offering organizers a state-of-the-art platform for their conference in a cost-effective and sustainable manner. Procedia Computer Science offers immediate access for all, with papers online within 8 weeks of acceptance, and free access providing maximum exposure for individual papers and the related conference. Conference delegates can thus access the papers from anywhere on the web during and after the conference. Organizers are credited on the actual issue and are fully responsible for quality control, the review process and the content of individual conference papers. To assist conference organizers in the publication process, templates (both LaTeX and Word) are provided.

What is the process for submitting conference proceedings to Procedia Computer Science? Proposals should include a synopsis of why such a Procedia Computer Science issue will have an impact on its community, the (sub)fields covered, and an outline of the related conference (or conferences / satellite events) with links to the conference website and previous proceedings where possible. Please include a tentative list of the steering committee / program committee, a short description of the conference peer-review standards (acceptance rate, reviewer selection, etc.) as well as estimates of the number of conference delegates and the number of papers to be accepted for publication. Proposals should be prepared using the template here. Please note that we do not accept proposals from individual authors for Procedia titles.

What are the costs associated with Procedia Computer Science? There is an agreed fee, based on the size of the proceedings, which includes online publication on ScienceDirect and free access after acceptance. In addition to the online version there is the possibility to purchase paper copies, CD-ROMs and USB sticks. Moreover, interactive media possibilities such as webcasts and web seminars can be acquired for extra exposure both before and after the conference.

Can Procedia Computer Science be sponsored? We are open to discussing sponsoring possibilities at all times. Sponsors can include funding bodies, government agencies and/or industry. Benefits to sponsors include visibility amongst a scientific audience and the opportunity to communicate messages via a peer-reviewed platform. Benefits to authors and conference organizers are the further dissemination of material to an international audience and further reduced costs for the overall conference budget.

What more can Elsevier do to help computer science conference organizers? We are open to discussing further collaboration, including but not limited to providing global organizational and administrative services and assisting with the setup of new multidisciplinary meetings. Contact your Elsevier Computer Science Publisher for more information.

Abstracting and Indexing Conference Proceedings Citation Index, Scopus
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Academic publishers purport to be arbiters of knowledge, aiming to publish studies that advance the frontiers of their research domain. Yet the effectiveness of journal editors at identifying novel and important research is generally unknown, in part because of the confidential nature of the editorial and peer-review process. Using questionnaires, we evaluated the degree to which journals are effective arbiters of scientific impact in the domain of Ecology, quantified by three key criteria. First, journals discriminated against low-impact manuscripts: the probability of rejection increased as the number of citations gained by the published paper decreased. Second, journals were more likely to publish high-impact manuscripts (those that obtained citations in the 90th percentile for their journal) than run-of-the-mill manuscripts; editors were only 23% and 41% as likely to reject an eventual high-impact paper (pre- versus post-review rejection) as a run-of-the-mill paper. Third, editors did occasionally reject papers that went on to be highly cited. Error rates were low, however: only 3.8% of rejected papers gained more citations than the median article in the journal that rejected them, and only 9.2% of rejected manuscripts went on to be high-impact papers in the (generally lower impact factor) publishing journal. The effectiveness of scientific arbitration increased with journal prominence, although some highly prominent journals were no more effective than much less prominent ones. We conclude that the academic publishing system, founded on peer review, appropriately recognises the significance of research contained in manuscripts, as measured by the number of citations that manuscripts obtain after publication, even though some errors are made. We therefore recommend that authors reduce publication delays by choosing journals appropriate to the significance of their research.
Annals of Medical Physiology Impact Factor 2024-2025 - ResearchHelpDesk - Annals of Medical Physiology (Ann Med Physiol.) is a double-blind peer-reviewed quarterly journal, published both online and in print, that aims to provide the most complete and reliable source of information on current developments in the field of medical physiology. The emphasis is on publishing quality research papers rapidly and keeping them freely available to researchers worldwide through an open access policy. Annals of Medical Physiology serves an important role by encouraging, fostering and promoting developments in various areas of medical physiology. The journal publishes reviews, original research articles and brief communications in all areas of medical physiology. Annals of Medical Physiology is registered with the Registrar of Newspapers for India (RNI), Ministry of Information and Broadcasting, Government of India, vide registration number TELENG/2018/75630 dated 25th June 2018. Abstract & indexing: International Institute of Organized Research (I2OR), PKP Index, AcademicKeys, DOI-DS, Openarchives, Google Scholar, Directory of Research Journals Indexing (DRJI), ResearchBib, International Citation Index, Scientific Indexing Services (SIS), Bielefeld Academic Search Engine (BASE), J-Gate, ICMJE, Zenodo, Root Indexing, Eurasian Scientific Journal Index (ESJI), Elektronische Zeitschriftenbibliothek (EZB), Zeitschriftendatenbank (ZDB), OpenAIRE, Directory of Open Access Scholarly Resources (ROAD), WorldCat, Scilit, EduIndex
Librarians, publishers, and researchers have long placed significant emphasis on journal metrics such as the impact factor. However, these tools do not take into account the impact of research outside of citations and publications. Altmetrics seek to describe the reach of scholarly activity across the Internet and social media to paint a more vivid picture of the scholarly landscape. In order to examine the impact of altmetrics on scholarly activity, it is helpful to compare these new tools to an existing method. Citation counts are currently the standard for determining the impact of a scholarly work, and two studies were conducted to examine the correlation between citation count and altmetric data. First, a set of highly cited papers was chosen across a variety of disciplines, and their citation counts were compared with the altmetrics generated from Total-Impact.org. Second, to evaluate the hypothesized increased impact of altmetrics on recently published articles, a set of articles published in 2011 were taken from a sampling of journals with high impact factors, both subscription-based and open access, and the altmetrics were then compared to their citation counts.
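The comparisons described above hinge on correlating citation counts with altmetric data. The original studies do not specify the exact statistic used, so the rank correlation and the numbers below are only an illustrative assumption:

```python
from scipy.stats import spearmanr

# Illustrative data: citation counts and an aggregate altmetric score for the
# same set of articles (all values are made up).
citations = [120, 85, 60, 45, 30, 22, 15, 9, 4, 1]
altmetric_scores = [310, 40, 95, 60, 12, 25, 8, 30, 2, 0]

rho, p_value = spearmanr(citations, altmetric_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```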
We examined the journal policies of the 100 top-ranked clinical journals using the 2018 impact factors as reported by InCites Journal Citation Reports (JCR). First, we examined all journals with an impact factor greater than 5, and then we manually screened by title and category to identify the first 100 clinical journals. We included only those that publish original research. Next, we checked each journal's editorial policy on preprints. We examined, in order, the journal website, the publisher website, the Transpose Database, and the first 10 pages of a Google search with the journal name and the term "preprint." We classified each journal's policy, as shown in this dataset, as allowing preprints, deciding on a case-by-case basis depending on the preprint, or not allowing any preprints. We collected data on April 23, 2020.
(Full methods can also be found in the previously published paper.)
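A small sketch of tallying the three policy classes used in this dataset; the journal names and policy assignments below are placeholders, not entries from the actual dataset:

```python
from collections import Counter

# Placeholder journal-to-policy assignments mirroring the three classes above.
policies = {
    "Journal A": "allows preprints",
    "Journal B": "case-by-case",
    "Journal C": "does not allow preprints",
    "Journal D": "allows preprints",
}

# Count how many journals fall into each policy class.
print(Counter(policies.values()))
```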
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Update (December 7, 2014): Evidence-based medicine (EBM) is not working, for many reasons, for example:
1. It is incorrect in its foundations (a paradox): hierarchical levels of evidence are supported by opinions (i.e., the lowest strength of evidence according to EBM) instead of real data collected from different types of study designs (i.e., evidence). http://dx.doi.org/10.6084/m9.figshare.1122534
2. The effect of criminal practices by pharmaceutical companies is only possible because of the complicity of others: healthcare systems, professional associations, governmental and academic institutions. Pharmaceutical companies also corrupt at the personal level: politicians and political parties are on their payroll, and medical professionals are seduced by different types of gifts in exchange for prescriptions (i.e., bribery), which very likely results in patients not receiving the proper treatment for their disease; many times there is no disease at all, and healthy persons who need no pharmacological treatment are constantly misdiagnosed and treated with unnecessary drugs. Some medical professionals are converted into KOLs (key opinion leaders), puppets appearing on stage to spread lies to their peers: a person supposedly trained to improve the well-being of others now deceives on behalf of pharmaceutical companies. Probably the saddest thing is that many honest doctors are being misled by these lies, created by the rules of pharmaceutical marketing instead of scientific, medical, and ethical principles. This interpretation of EBM was not anticipated by its creators.
"The main reason we take so many drugs is that drug companies don't sell drugs, they sell lies about drugs." ―Peter C. Gøtzsche
"doctors and their organisations should recognise that it is unethical to receive money that has been earned in part through crimes that have harmed those people whose interests doctors are expected to take care of. Many crimes would be impossible to carry out if doctors weren't willing to participate in them." ―Peter C. Gøtzsche, The BMJ, 2012, Big pharma often commits corporate crime, and this must be stopped.
Pending (Colombia): Health Promoter Entities (in Spanish: EPS ―Empresas Promotoras de Salud).
Double-blind peer review, in which neither author nor reviewer is identified, is rarely practised in ecology or evolution journals. Most journals in the field of ecology practise single-blind review, in which the reviewer's but not the author's identity is concealed. In 2001, however, double-blind review was introduced by the journal Behavioral Ecology. A database of all papers published in BE between 1997 and 2005 (n=867) was generated (the year 2001 was omitted to accommodate the change in editorial policy). For each paper, a gender was assigned to the first author using first names. Gender was classified as "unknown" if the author provided only initials, if the name was gender neutral, or if the name could not be assigned to either gender. The same data were gathered from an out-group set of primary research journals listed by ISI as being in the category of "Ecology" or "Evolutionary Biology" with a 2004 impact factor of 2.0-2.5 (similar to that of BE). This provided an additional five journals: Behavioral Ecology and Sociobiology (BES; n=1040), Animal Behavior (AB; n=2178), Journal of Biogeography (JB; n=1040), Biological Conservation (BC; n=1719), and Landscape Ecology (LE; n=419). Missing data from complete issues omitted from the table of contents were inserted using ISI (JB and LE; four issues). This study showed that following the policy change to double-blind peer review, there was a significant increase in female first-authored papers.
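One way to test for the reported increase in female first-authored papers after the 2001 policy change is a 2x2 contingency test on counts before and after the switch; the counts below are made up and the published analysis may have used a different test:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table of first-author gender before (1997-2000) and after
# (2002-2005) the switch to double-blind review; counts are invented.
#                  female  male
before_counts = [60, 240]
after_counts = [95, 255]

chi2, p, dof, expected = chi2_contingency([before_counts, after_counts])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```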
Journal of Islamabad Medical and Dental College Impact Factor 2024-2025 - ResearchHelpDesk - The Journal of Islamabad Medical and Dental College (JIMDC) is an official journal of Islamabad Medical & Dental College, Islamabad. This open access, double-blind peer-reviewed journal started in 2012 with bi-annual publication and was upgraded to quarterly publication in 2015. The journal aims to cover and promote the latest and innovative advancements in the field of medical/health sciences and healthcare practice. It is an essential journal for all health sciences professionals, including doctors, bio-researchers, students and healthcare workers, who wish to be kept informed and up to date with the latest and dynamic developments in this field. The vision of the journal is to publish and share good quality research and to enhance the visibility and citation of the journal through the Open Journal System (OJS). The mission of the journal is to provide a platform for publishing quality research (at both graduate and postgraduate level) in the fields of medical, dental and allied health sciences. JIMDC publishes research papers, in the form of original articles, case reports, review articles, articles on medical education, and short communications related to health, medicine and dentistry, in accordance with the "Uniform requirements for manuscripts submitted to medical journals", after peer review by at least two reviewers. The JIMDC readership includes IMDC faculty, other healthcare professionals and researchers, both national and international. It is distributed to medical colleges and medical libraries throughout Pakistan. We encourage research librarians to list this journal among their library's electronic journal holdings. It may be worth noting that this journal's open source publishing system is suitable for libraries to host for their faculty members to use with journals they are involved in editing (see Open Journal Systems). Abstract & indexing: WHO Index Medicus (Index Medicus for the Eastern Mediterranean Region, IMEMR), Bielefeld Academic Search Engine (BASE), Germany, Netmedics (Cambridge International Academics UK), Pakmedinet, CrossRef (DOI), Asian Digital Library
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Many products of medical education scholarship have difficulty being published through traditional, standard journals. These products still fit within the METRICS framework that describes scholarly activities.(1) Just as with standard research outputs such as replication studies and trials with negative results, in the increasingly competitive world of scholarly publishing, where citation indexes and impact factors matter greatly, it is now hard to get studies published that do not fit a positivist research model demonstrating significant results. This report reflects our explorations and testing of various approaches to making our scholarly products and learning objects more discoverable.
Online appendix 1: List of the 494 articles reviewed and the data extracted. Online appendix 2: List of the journals in which new species were delineated. Editorial policies on including formal taxonomic descriptions in articles, and journal impact factors, were investigated for three periods: before 2005, 2005-2010, and 2011-2014. Y=Yes, N=No, NA=Not Available, IF=Impact Factor.
Objectives: A rapidly developing scenario like a pandemic requires the prompt production of high-quality systematic reviews, which can be automated using artificial intelligence (AI) techniques. We evaluated the application of AI tools in COVID-19 evidence syntheses. Study design: After prospective registration of the review protocol, we automated the download of all open-access COVID-19 systematic reviews in the COVID-19 Living Overview of Evidence database, indexed them for AI-related keywords, and located those that used AI tools. We compared their journals' JCR Impact Factor, citations per month, screening workloads, completion times (from pre-registration to preprint or submission to a journal) and AMSTAR-2 methodology assessments (maximum score 13 points) with a set of publication-date-matched control reviews without AI. Results: Of the 3999 COVID-19 reviews, 28 (0.7%, 95% CI 0.47-1.03%) made use of AI. On average, compared to controls (n=64), AI reviews were published in journals with higher Impact Factors (median 8.9 vs 3.5, P<0.001), and screened more abstracts per author (302.2 vs 140.3, P=0.009) and per included study (189.0 vs 365.8, P<0.001), while inspecting fewer full texts per author (5.3 vs 14.0, P=0.005). No differences were found in citation counts (0.5 vs 0.6, P=0.600), inspected full texts per included study (3.8 vs 3.4, P=0.481), completion times (74.0 vs 123.0, P=0.205) or AMSTAR-2 scores (7.5 vs 6.3, P=0.119). Conclusion: AI was an underutilized tool in COVID-19 systematic reviews. Its usage, compared to reviews without AI, was associated with more efficient screening of the literature and higher publication impact. There is scope for the application of AI in automating systematic reviews.
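The abstract reports group medians and P values but does not name the statistical test used. A common choice for comparing two skewed distributions, such as journal impact factors, is the Mann-Whitney U test, sketched here with made-up values:

```python
from scipy.stats import mannwhitneyu

# Illustrative impact-factor values for reviews that used AI vs matched
# controls; the abstract reports medians of 8.9 vs 3.5, but these numbers and
# the choice of test are assumptions, not taken from the original paper.
ai_reviews = [12.1, 8.9, 7.5, 10.2, 6.8]
controls = [3.5, 2.9, 4.1, 3.2, 5.0, 1.8]

stat, p = mannwhitneyu(ai_reviews, controls, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```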
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The scientific enterprise depends critically on the preservation of and open access to published data. This basic tenet applies acutely to phylogenies (estimates of evolutionary relationships among species). Increasingly, phylogenies are estimated from increasingly large, genome-scale datasets using increasingly complex statistical methods that require increasing levels of expertise and computational investment. Moreover, the resulting phylogenetic data provide an explicit historical perspective that critically informs research in a vast and growing number of scientific disciplines. One such use is the study of changes in rates of lineage diversification (speciation – extinction) through time. As part of a meta-analysis in this area, we sought to collect phylogenetic data (comprising nucleotide sequence alignment and tree files) from 217 studies published in 46 journals over a 13-year period. We document our attempts to procure those data (from online archives and by direct request to corresponding authors), and report results of analyses (using Bayesian logistic regression) to assess the impact of various factors on the success of our efforts. Overall, complete phylogenetic data for ~ 60% of these studies are effectively lost to science. Our study indicates that phylogenetic data are more likely to be deposited in online archives and/or shared upon request when: (1) the publishing journal has a strong data-sharing policy; (2) the publishing journal has a higher impact factor, and; (3) the data are requested from faculty rather than students. Importantly, our survey spans recent policy initiatives and infrastructural changes; our analyses indicate that the positive impact of these community initiatives has been both dramatic and immediate. Although the results of our study indicate that the situation is dire, our findings also reveal tremendous recent progress in the sharing and preservation of phylogenetic data.
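The predictors of data-sharing success mentioned above (journal policy strength, impact factor, and whether the contact was faculty or student) were analysed with Bayesian logistic regression; as a rough, non-Bayesian stand-in, a regularized logistic regression on made-up records might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative (made-up) records: [strong data-sharing policy, journal impact
# factor, corresponding author is faculty] -> whether the data were obtained.
X = np.array([
    [1, 9.1, 1], [0, 2.3, 0], [1, 6.5, 1], [1, 4.8, 1],
    [0, 1.9, 0], [0, 3.0, 0], [1, 7.7, 1], [0, 2.1, 0],
    [0, 5.5, 1], [1, 8.2, 0], [0, 2.8, 1], [1, 6.0, 0],
])
y = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0])

# Regularized frequentist fit; the published analysis was Bayesian.
model = LogisticRegression().fit(X, y)
print("coefficients (policy, impact factor, faculty):", model.coef_[0])
print("P(data obtained | strong policy, IF=5, faculty contact):",
      model.predict_proba([[1, 5.0, 1]])[0, 1])
```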
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The number of publications in the field of under-frequency load shedding (UFLS) research is rapidly increasing. This confirms that the topic is timely, but given the volume of publications, it is easy to lose track of the prevailing concepts driving the latest technologies. To overcome this problem, researchers resort to subjective review articles in which individual authors attempt to systematically categorize UFLS algorithms according to their own understanding of the variety of approaches. Since this is not a trivial task and is undoubtedly subject to personal interpretation, it is better to use specialized mathematical techniques for this purpose. Recently, it has been shown that the use of clustering techniques and graph theory can be a useful tool for systematic literature reviews.
If one chooses to search for similarities between UFLS algorithms using such techniques, one must first collect the relevant literature in the field and extract important information. Therefore, this collection provides an Excel spreadsheet (.xlsx) of 381 publications in the field of UFLS protection for the period from 1954 to 2020, with the following information for each publication:
authors,
title,
year and type of publication (journal, conference proceedings, doctoral dissertation, master's thesis, bachelor's thesis, other),
the source from which we obtained the publication,
DOI (Digital Object Identifier),
extracted general features, and
extracted specific features.
Relevant literature was obtained from various open access journals, impact factor journals, repositories of various universities, and archives of various electric utilities and transmission system operators.
If the publication in a row can be assigned the feature named in one of the header cells I3-BJ3, the corresponding cell contains the value "1". Otherwise, i.e., if that feature cannot be assigned to the publication, the cell is empty. Detailed descriptions of the individual features can be found in "Description_of_features.docx"; two identical documents are available, one in English and one in Slovene.
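A sketch of reading the feature matrix described above with pandas; the file name, the exact sheet layout (headers in row 3, features in columns I-BJ) and the example feature and column names are assumptions or placeholders, so adjust them to the actual spreadsheet:

```python
import pandas as pd

# File name is assumed; headers are taken from row 3 of the sheet (header=2,
# 0-indexed), matching the description above.
df = pd.read_excel("UFLS_publications.xlsx", header=2)

# Columns I through BJ (0-indexed positions 8:62) hold the feature flags.
feature_cols = df.columns[8:62]
features = df[feature_cols].fillna(0).astype(int).astype(bool)

# Example: titles of publications flagged with a feature; both the feature name
# "adaptive UFLS" and the column name "title" are hypothetical placeholders.
flagged = df.loc[features["adaptive UFLS"], "title"]
print(flagged.head())
```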
Updated versions with more recent data will be uploaded with a different version number and DOI.
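To illustrate the clustering and graph-theory idea mentioned at the start of this description, one could build a similarity graph between publications from their boolean feature vectors. Everything below (the random feature matrix, the Jaccard similarity measure, and the edge threshold) is an illustrative assumption, not the method used to produce this collection:

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

# Random 0/1 feature matrix standing in for the spreadsheet's 54 feature
# columns (I-BJ) over 20 publications.
rng = np.random.default_rng(0)
features = rng.integers(0, 2, size=(20, 54)).astype(bool)

# Pairwise Jaccard similarity between publications.
similarity = 1 - squareform(pdist(features, metric="jaccard"))

# Connect publications whose similarity exceeds an arbitrary threshold.
G = nx.Graph()
G.add_nodes_from(range(len(features)))
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if similarity[i, j] > 0.4:
            G.add_edge(i, j, weight=similarity[i, j])

# Groups of related publications emerge as connected components.
print([sorted(c) for c in nx.connected_components(G)])
```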
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Research dissemination and knowledge translation are imperative in social work. Methodological developments in data visualization techniques have improved the ability to convey meaning and reduce erroneous conclusions. The purpose of this project is to examine: (1) How are empirical results presented visually in social work research? (2) To what extent do top social work journals vary in the publication of data visualization techniques? (3) What is the predominant type of analysis presented in tables and graphs? (4) How can current data visualization methods be improved to increase understanding of social work research? Method: A database was built from a systematic literature review of the four most recent issues of Social Work Research and six other highly ranked journals in social work based on the 2009 5-year impact factor (Thomson Reuters ISI Web of Knowledge). Overall, 294 articles were reviewed. Articles without any form of data visualization were not included in the final database. The number of articles reviewed by journal is as follows: Child Abuse & Neglect (38), Child Maltreatment (30), American Journal of Community Psychology (31), Family Relations (36), Social Work (29), Children and Youth Services Review (112), and Social Work Research (18). Articles with any type of data visualization (table, graph, other) were included in the database and coded sequentially by two reviewers based on the type of visualization method and the type of analysis presented (descriptive, bivariate, measurement, estimate, predicted value, other). Additional review by the entire research team was required for 68 articles. Codes were discussed until 100% agreement was reached. The final database includes 824 data visualization entries.
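A sketch of how entries in such a database could be cross-tabulated by journal and visualization type; the rows below are invented examples, not records from the actual database:

```python
import pandas as pd

# Invented coded entries (journal, visualization type, analysis type).
entries = pd.DataFrame({
    "journal": ["Social Work Research", "Child Maltreatment", "Social Work Research",
                "Family Relations", "Child Maltreatment", "Family Relations"],
    "visualization": ["table", "graph", "graph", "table", "table", "other"],
    "analysis": ["descriptive", "bivariate", "estimate", "descriptive",
                 "measurement", "predicted value"],
})

# Counts of visualization types per journal.
print(pd.crosstab(entries["journal"], entries["visualization"]))
```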