76 datasets found
  1. Data from: Highly Cited Documents on Google Scholar (1950-2013)

    • figshare.com
    xlsx
    Updated May 31, 2023
    Cite
    Alberto Martín-Martín; Enrique Orduña-Malea; Juan Manuel Ayllón; Emilio Delgado-López-Cózar (2023). Highly Cited Documents on Google Scholar (1950-2013) [Dataset]. http://doi.org/10.6084/m9.figshare.1224314.v2
    Available download formats: xlsx
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Alberto Martín-Martín; Enrique Orduña-Malea; Juan Manuel Ayllón; Emilio Delgado-López-Cózar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains:
    - A sample of 64,000 highly cited documents published in the period 1950-2013, collected from Google Scholar on 28 May 2014.
    - A list of clean references for the top 1% most cited documents in Google Scholar (640 documents).
    - A case study: different versions (detected and undetected by Google Scholar) of the work "A Mathematical Theory of Communication" by Claude Shannon.
    - A frequency table: the number of highly cited documents in the sample published in WoS-covered journals.

  2. Data set of the article: Ranking by relevance and citation counts, a...

    • zenodo.org
    bin
    Updated Jan 24, 2020
    Cite
    Cristòfol Rovira; Lluís Codina; Frederic Guerrero-Solé; Carlos Lopezosa (2020). Data set of the article: Ranking by relevance and citation counts, a comparative study: Google Scholar, Microsoft Academic, WoS and Scopus [Dataset]. http://doi.org/10.5281/zenodo.3381151
    Available download formats: bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Cristòfol Rovira; Lluís Codina; Frederic Guerrero-Solé; Carlos Lopezosa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data from the study published in the article "Ranking by relevance and citation counts, a comparative study: Google Scholar, Microsoft Academic, WoS and Scopus".

    Abstract of the article:

    Search engine optimization (SEO) is the set of methods designed to increase the visibility of, and the number of visits to, a web page by means of its ranking on search engine results pages. Recently, SEO has also been applied to academic databases and search engines, in a trend that is growing constantly. This new approach, known as academic SEO (ASEO), has generated a field of study with considerable growth potential due to the impact of open science. The study reported here forms part of this new field of analysis. The ranking of results is a key aspect of any information system, since it determines the way in which those results are presented to the user. The aim of this study is to analyse and compare the relevance ranking algorithms employed by various academic platforms in order to identify the importance of received citations in their algorithms. Specifically, we analyse two search engines and two bibliographic databases: Google Scholar and Microsoft Academic, on the one hand, and Web of Science and Scopus, on the other. A reverse-engineering methodology is employed, based on the statistical analysis of Spearman's correlation coefficients. The results indicate that the ranking algorithms used by Google Scholar and Microsoft Academic are the two that are most heavily influenced by citations received. Indeed, citation counts are clearly the main SEO factor in these academic search engines. An unexpected finding is that, at certain points in time, WoS used citations received as a key ranking factor, despite the fact that WoS support documents claim this factor does not intervene.
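    The Spearman-based approach described in the abstract can be sketched with stdlib Python. All data below is invented for illustration: one query's result positions and the citation counts of the documents at those positions.

```python
def ranks(values):
    """Ascending ranks (1 = smallest); the toy data below has no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho via the no-ties formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical data: result positions (1 = top of the results page) and
# the citation counts of the documents ranked at those positions.
positions = [1, 2, 3, 4, 5, 6, 7, 8]
citations = [950, 720, 801, 300, 410, 120, 95, 60]

# A coefficient near -1 means better-ranked results tend to have more
# citations, i.e. citation counts plausibly influence the ranking.
print(round(spearman_rho(positions, citations), 3))  # -0.952
```

    Repeating this over many queries and platforms, and comparing the coefficients, is the essence of the comparison the article describes.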

  3. Data from: Google Scholar as a source for citation and impact analysis for a...

    • explore.openaire.eu
    Updated Jan 1, 2010
    Cite
    S. A. Sanni; A. N. Zainab (2010). Google Scholar as a source for citation and impact analysis for a non-ISI indexed medical journal [Dataset]. https://explore.openaire.eu/search/other?orpId=od_124::c09a7638b07750d68773c1f5f9f7b686
    Dataset updated
    Jan 1, 2010
    Authors
    S. A. Sanni; A. N. Zainab
    Description

    It is difficult to determine the influence and impact of journals which are not covered by the ISI databases and Journal Citation Reports. However, with the availability of databases such as MyAIS (Malaysian Abstracting and Indexing System), which offers sufficient information to support bibliometric analysis and is itself indexed by Google Scholar, which provides citation information, it has become possible to obtain productivity, citation and impact information for non-ISI-indexed journals. The bibliometric tool Harzing's Publish or Perish was used to collate citation information from Google Scholar. The study examines article productivity and the citations obtained by articles, and calculates the impact factor of the Medical Journal of Malaysia (MJM) for issues published between 2004 and 2008. MJM is the oldest medical journal in Malaysia, and the unit of analysis is 580 articles. The results indicate that once a journal is covered by MyAIS it becomes visible and accessible on the Web, because Google Scholar indexes MyAIS. The results show that contributors to MJM were mainly Malaysian (91%) and that the number of Malaysian-foreign collaborative papers was very small (28 articles, 4.8%). However, citation information from Google Scholar indicates that out of the 580 articles, 76.8% (446) have been cited over the 5-year period. The citations were received from both mainstream foreign journals and Malaysian journals, and the top three citing countries were China, Malaysia and the United States. In general, more citations were received from East Asia, Europe, and Southeast Asia. The 2-yearly impact factor calculated for MJM is 0.378 in 2009, 0.367 in 2008, 0.616 in 2007 and 0.456 in 2006. The 5-year impact factor is calculated as 0.577. The results show that although MJM is a Malaysian journal and not ISI-indexed, its contents have some international significance based on the citations and impact scores it receives, indicating the importance of being visible, especially in Google Scholar.
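    The 2-yearly impact factors quoted above presumably follow the standard two-year definition, which is assumed here: citations received in year Y by items published in years Y-1 and Y-2, divided by the number of citable items published in those two years. A minimal sketch with invented numbers (not the MJM data):

```python
# Sketch of the standard 2-year impact factor computation (assumed
# definition); all counts below are hypothetical, not from the dataset.
def two_year_impact_factor(year, articles_per_year, citation_counts):
    """citation_counts maps (citing_year, cited_year) -> citations."""
    cited_years = (year - 1, year - 2)
    cites = sum(citation_counts.get((year, y), 0) for y in cited_years)
    items = sum(articles_per_year.get(y, 0) for y in cited_years)
    return cites / items

articles_per_year = {2007: 100, 2008: 120}         # hypothetical
citation_counts = {(2009, 2008): 40, (2009, 2007): 26}  # hypothetical
print(round(two_year_impact_factor(2009, articles_per_year, citation_counts), 3))  # 0.3
```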

  4. Citation Knowledge with Section and Context

    • ordo.open.ac.uk
    zip
    Updated May 5, 2020
    Cite
    Anita Khadka (2020). Citation Knowledge with Section and Context [Dataset]. http://doi.org/10.21954/ou.rd.11346848.v1
    Available download formats: zip
    Dataset updated
    May 5, 2020
    Dataset provided by
    The Open University
    Authors
    Anita Khadka
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This dataset contains information from scientific publications written by authors who have published papers at the RecSys conference. It contains four files with information extracted from scientific publications. The details of each file are explained below:
    i) all_authors.tsv: details of the authors who published research papers at the RecSys conference, including identifiers in various forms (number, ORCID id, dblp url, dblp key and Google Scholar url), first name, last name and affiliation (where they work).
    ii) all_publications.tsv: details of publications authored by the authors in all_authors.tsv (please note the list does not contain all the authored publications of these authors; refer to the paper for further details), including identifiers in different forms (number, dblp key, dblp url, Google Scholar url), title, filtered title, published date, published conference and paper abstract.
    iii) selected_author_publications-information.tsv: identifiers of the selected authors and their publications used in our experiment.
    iv) selected_publication_citations-information.tsv: information on the selected publications, covering both citing and cited papers used in our experiment: identifier of the citing paper, identifier of the cited paper, citation title, citation filtered title, the sentence before the citation, the citing sentence, the sentence after the citation, and the citation position (section). Please note it does not contain information on all the citations cited in the publications.
    For more detail, please refer to the paper. This dataset is for research purposes only; if you use it, please cite our paper "Capturing and exploiting citation knowledge for recommending recently published papers", due to be published in the Web2Touch track 2020 (not yet published).
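    A minimal sketch of reading a citation-context file like the one described, using Python's csv module. The column names and the sample row are assumptions based on the description above (a subset of the listed columns), not the dataset's actual header:

```python
import csv
import io

# Invented sample mirroring part of the described TSV layout; a real use
# would open selected_publication_citations-information.tsv instead.
sample = (
    "citing_id\tcited_id\tcitation_title\tsentence_before\tciting_sentence\tsentence_after\tsection\n"
    "P12\tP7\tExample cited paper title\t...\tWe follow the approach of [7].\t...\tRelated Work\n"
)
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
print(rows[0]["section"])  # Related Work
```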

  5. Data from: Network of First and Second-generation citations to Matsuyama...

    • databank.illinois.edu
    Updated Mar 3, 2020
    Cite
    Jodi Schneider; Di Ye (2020). Network of First and Second-generation citations to Matsuyama 2005 from Google Scholar and Web of Science [Dataset]. http://doi.org/10.13012/B2IDB-1403534_V2
    Dataset updated
    Mar 3, 2020
    Authors
    Jodi Schneider; Di Ye
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This second version (V2) provides additional data cleaning compared to V1, additional data collection (mainly to include data from 2019), and more metadata for nodes. Please see NETWORKv2README.txt for more detail.

  6. Nobel Laureates and Google Scholar citation data

    • figshare.com
    txt
    Updated Mar 10, 2025
    Cite
    Máté Józsa (2025). Nobel Laureates and Google Scholar citation data [Dataset]. http://doi.org/10.6084/m9.figshare.28564391.v1
    Available download formats: txt
    Dataset updated
    Mar 10, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Máté Józsa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data related to the article: "Does excellence correspond to a universal level of inequality? Evidence from scholarly citations & Olympic medal data", by Soumyajyoti Biswas, Bikas K. Chakrabarti, Asim Ghosh, Sourav Ghosh, Máté Józsa, and Zoltán Néda.
    As the filenames suggest, the dataset contains information about 235,750 authors from Google Scholar, including their Google Scholar IDs (for more details, see this dataset). Additionally, it includes data on 80 Nobel Laureates, with the following metrics:
    np = number of publications
    nc = number of citations
    h = Hirsch index
    q = Q-factor (maximum citations / mean citations)
    g = Gini index
    k = Kolkata index
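    Two of the listed metrics, h (Hirsch index) and q (Q-factor, defined above as maximum citations divided by mean citations), can be computed directly from a list of per-paper citation counts. The counts below are invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def q_factor(citations):
    """Q-factor as defined in the listing: max citations / mean citations."""
    return max(citations) / (sum(citations) / len(citations))

papers = [50, 30, 20, 10, 5, 1]  # hypothetical citation counts
print(h_index(papers))            # 5 (five papers with >= 5 citations)
print(round(q_factor(papers), 3))  # 2.586
```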

  7. Scholarometer

    • neuinfo.org
    • dknet.org
    Updated Oct 16, 2019
    Cite
    (2019). Scholarometer [Dataset]. http://identifiers.org/RRID:SCR_004279
    Dataset updated
    Oct 16, 2019
    Description

    Scholarometer (beta) is a social (crowdsourcing) tool that facilitates citation analysis and helps evaluate the impact of an author's publications, leveraging the wisdom of the crowds. It makes visualizations of author and discipline networks available on its web site. Users tag their queries with one or more discipline names, choosing from predefined ISI subject categories or arbitrary tags. These annotations go into a database that collects statistics about the various disciplines, such as the average number of citations per paper and the average number of papers per author; this data is publicly available. Users can export their findings in formats suitable for local reference-management software (e.g., EndNote) or social publication-sharing systems (e.g., BibSonomy). Supported export formats are BibTeX (BIB), RefMan (RIS), EndNote (ENW), comma-separated values (CSV), tab-separated values (XLS), and BibJSON; export data is generated dynamically, reflecting any filter, merge, or delete actions performed by the user. Because Scholarometer is a browser extension that provides a smart interface for Google Scholar, it avoids the limitations of server-based citation-analysis tools that sit between the user and Google Scholar. Unlike a desktop application such as Publish or Perish, it is platform independent and runs on every system that supports the Firefox or Chrome browser, while still drawing on Google Scholar, which provides the most comprehensive source of citation data across the sciences and social sciences. Scholarometer also provides a RESTful web API so that other developers can make use of the crowdsourced data. The extension/add-on code is available in the Mozilla Firefox Add-ons and Google Chrome Extensions repositories; additional server-side code is not available at this time.

  8. Data for "Open Access impact on citations: a case study"

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Andro, Mathieu (2020). Data for "Open Access impact on citations: a case study" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_60293
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Bordignon, Frédérique
    Andro, Mathieu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is a list of 347 papers published in 2010 and retrieved from the Web of Science, Scopus and Google Scholar. For each paper, the number of citations and the citation date(s) have been collected. Where the full text is available online, the date of "liberation" and the URL of the file have been retrieved as well. The objective was to assess the impact of Open Access on citation rates, and more particularly the impact before and after full-text "liberation".

  9. Replication data for: Citations

    • search.dataone.org
    Updated Nov 21, 2023
    Cite
    Lasda Bergman, Elaine (2023). Replication data for: Citations [Dataset]. http://doi.org/10.7910/DVN/27655
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Lasda Bergman, Elaine
    Description

    Microsoft Access database for the bibliometric analysis found in the article: Elaine M. Lasda Bergman, Finding Citations to Social Work Literature: The Relative Benefits of Using Web of Science, Scopus, or Google Scholar, The Journal of Academic Librarianship, Volume 38, Issue 6, November 2012, Pages 370-379, ISSN 0099-1333, http://dx.doi.org/10.1016/j.acalib.2012.08.002 (http://www.sciencedirect.com/science/article/pii/S009913331200119X)
    Abstract: Past studies of citation coverage of Web of Science, Scopus, and Google Scholar do not demonstrate a consistent pattern that can be applied to the interdisciplinary mix of resources used in social work research. To determine the utility of these tools to social work researchers, an analysis of citing references to well-known social work journals was conducted. Web of Science had the fewest citing references and almost no variety in source format. Scopus provided higher citation counts, but the pattern of coverage was similar to Web of Science. Google Scholar provided substantially more citing references, but only a relatively small percentage of them were unique scholarly journal articles. The patterns of database coverage were replicated when the citations were broken out for each journal separately. The results of this analysis demonstrate the need to determine what resources constitute scholarly research and reflect the need for future researchers to consider the merits of each database before undertaking their research. This study will be of interest to scholars in library and information science as well as social work, as it facilitates a greater understanding of the strengths and limitations of each database and brings to light important considerations for conducting future research.
    Keywords: Citation analysis; Social work; Scopus; Web of Science; Google Scholar

  10. DataSheet1_The top 100 cited studies on bacterial persisters: A bibliometric...

    • frontiersin.figshare.com
    docx
    Updated Jun 16, 2023
    Cite
    Yuan Ju; Haiyue Long; Ping Zhao; Ping Xu; Luwei Sun; Yongqing Bao; Pingjing Yu; Yu Zhang (2023). DataSheet1_The top 100 cited studies on bacterial persisters: A bibliometric analysis.docx [Dataset]. http://doi.org/10.3389/fphar.2022.1001861.s001
    Available download formats: docx
    Dataset updated
    Jun 16, 2023
    Dataset provided by
    Frontiers
    Authors
    Yuan Ju; Haiyue Long; Ping Zhao; Ping Xu; Luwei Sun; Yongqing Bao; Pingjing Yu; Yu Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Bacterial persisters are thought to be responsible for the recalcitrance and relapse of persistent infections, and they also lead to antibiotic treatment failure in the clinic. In recent years, research on bacterial persisters has attracted worldwide attention and the number of related publications is increasing. The purpose of this study was to better understand research trends on bacterial persisters by identifying and bibliometrically analyzing the top 100 cited publications in this field.
    Methods: The Web of Science Core Collection was used to retrieve highly cited publications on bacterial persisters, and these publications were cross-matched with Google Scholar and Scopus. The top 100 cited publications were identified after reviewing the full texts. The main information for each publication was extracted and analyzed using Excel, SPSS, and VOSviewer.
    Results: The top 100 cited papers on bacterial persisters were published between 1997 and 2019. Citation frequencies ranged from 147 to 1,815 in the Web of Science Core Collection, 153 to 1,883 in Scopus, and 207 to 2,986 in Google Scholar. The list comprised 64 original articles, 35 review articles, and 1 editorial. These papers appeared in 51 journals, with the Journal of Bacteriology the most productive (8 papers). A total of 14 countries contributed to the top 100 cited publications, and 64 publications were from the United States; 15 institutions published two or more papers, and nearly 87% of them were from the United States. Kim Lewis of Northeastern University was the most influential author, with 18 publications. Keyword co-occurrence suggested that the main topics were mechanisms of persister formation or regrowth. Finally, "Microbiology" was the most frequent category in this field.
    Conclusion: This study identified and analyzed the top 100 cited publications related to bacterial persisters. The results provide a general overview of the field and may help researchers better understand the classic studies, historical developments, and new findings, providing ideas for further research.

  11. L-index reports for selected prominent scientists

    • zenodo.org
    pdf
    Updated Jun 3, 2025
    Cite
    Aleksey Belikov (2025). L-index reports for selected prominent scientists [Dataset]. http://doi.org/10.5281/zenodo.15368429
    Available download formats: pdf
    Dataset updated
    Jun 3, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Aleksey Belikov
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Scores and citations were calculated on 8 May 2025 using the L-index Calculator (https://doi.org/10.5281/zenodo.15356378) for the 100 most cited publications from the publicly available Google Scholar profiles of 45 prominent scientists in biomedicine, neuroscience, physics and computer science. The score of a publication is its number of citations divided by the number of authors and by its age in years. The L-index is the logarithm of the sum of paper scores.
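    The scoring rule just described can be sketched in a few lines. The logarithm base is not stated in the listing, so base 10 is assumed here, and the example publications are invented:

```python
import math

# Hedged sketch of the described scoring rule; log base 10 is an
# assumption, and the (citations, n_authors, age_years) tuples are invented.
def paper_score(citations, n_authors, age_years):
    return citations / (n_authors * age_years)

def l_index(papers):
    """papers: iterable of (citations, n_authors, age_years) tuples."""
    return math.log10(sum(paper_score(c, a, y) for c, a, y in papers))

publications = [(100, 2, 5), (40, 4, 10)]  # scores: 10 and 1
print(round(l_index(publications), 3))  # 1.041
```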

  12. Data from: Where do engineering students really get their information? :...

    • opal.latrobe.edu.au
    • researchdata.edu.au
    pdf
    Updated Mar 13, 2025
    Cite
    Clayton Bolitho (2025). Where do engineering students really get their information? : using reference list analysis to improve information literacy programs [Dataset]. http://doi.org/10.4225/22/59d45f4b696e4
    Available download formats: pdf
    Dataset updated
    Mar 13, 2025
    Dataset provided by
    La Trobe
    Authors
    Clayton Bolitho
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    BACKGROUND
    An understanding of the resources which engineering students use to write their academic papers provides information about student behaviour as well as the effectiveness of information literacy programs designed for engineering students. One of the most informative sources of information which can be used to determine the nature of the material that students use is the bibliography at the end of the students' papers. While reference list analysis has been utilised in other disciplines, few studies have focussed on engineering students or used the results to improve the effectiveness of information literacy programs. Gadd, Baldwin and Norris (2010) found that civil engineering students undertaking a final-year research project cited journal articles more than other types of material, followed by books and reports, with web sites ranked fourth. Several studies, however, have shown that in their first year at least, most students prefer to use Internet search engines (Ellis & Salisbury, 2004; Wilkes & Gurney, 2009).
    PURPOSE
    The aim of this study was to find out exactly what resources undergraduate students studying civil engineering at La Trobe University were using, and in particular, the extent to which students were utilising the scholarly resources paid for by the Library. A secondary purpose of the research was to ascertain whether information literacy sessions delivered to those students had any influence on the resources used, and to investigate ways in which the information literacy component of the unit can be improved to encourage students to make better use of the resources purchased by the Library to support their research.
    DESIGN/METHOD
    The study examined student bibliographies for three civil engineering group projects at the Bendigo Campus of La Trobe University over a two-year period, including two first-year units (CIV1EP – Engineering Practice) and one second-year unit (CIV2GR – Engineering Group Research). All units included a mandatory library session at the start of the project where student groups were required to meet with the relevant faculty librarian for guidance. In each case, the Faculty Librarian highlighted specific resources relevant to the topic, including books, e-books, video recordings, websites and internet documents. The students were also shown tips for searching the Library catalogue, Google Scholar, LibSearch (the LTU Library's research and discovery tool) and ProQuest Central. Subject-specific databases for civil engineering and science were also referred to. After the final reports for each project had been submitted and assessed, the Faculty Librarian contacted the lecturer responsible for the unit, requesting copies of the student bibliographies for each group. References for each bibliography were then entered into EndNote. The Faculty Librarian grouped them according to various facets, including the name of the unit and the group within the unit; the material type of the item being referenced; and whether the item required a Library subscription to access it. A total of 58 references were collated for the 2010 CIV1EP unit; 237 references for the 2010 CIV2GR unit; and 225 references for the 2011 CIV1EP unit.
    INTERIM FINDINGS
    The initial findings showed that student bibliographies for the three group projects were primarily made up of freely available internet resources which required no library subscription. For the 2010 CIV1EP unit, all 58 resources used were freely available on the Internet. For the 2011 CIV1EP unit, 28 of the 225 resources used (12.44%) required a Library subscription or purchase for access, while the second-year students (CIV2GR) used a greater variety of resources, with 71 of the 237 resources used (29.96%) requiring a Library subscription or purchase for access. The results suggest that the library sessions had little or no influence on the 2010 CIV1EP group, but the sessions may have assisted students in the 2011 CIV1EP and 2010 CIV2GR groups to find books, journal articles and conference papers, which were all represented in their bibliographies.
    FURTHER RESEARCH
    The next step in the research is to investigate ways to increase the representation of scholarly references (found by resources other than Google) in student bibliographies. It is anticipated that such a change would lead to an overall improvement in the quality of the student papers. One way of achieving this would be to make it mandatory for students to include a specified number of journal articles, conference papers, or scholarly books in their bibliographies. It is also anticipated that embedding La Trobe University's Inquiry/Research Quiz (IRQ) using a constructively aligned approach will further enhance the students' research skills and increase their ability to find suitable scholarly material which relates to their topic. This has already been done successfully (Salisbury, Yager, & Kirkman, 2012).
    CONCLUSIONS & CHALLENGES
    The study shows that most students rely heavily on the free Internet for information. Students don't naturally use Library databases or scholarly resources such as Google Scholar to find information without encouragement from their teachers, tutors and/or librarians. It is acknowledged that the use of scholarly resources doesn't automatically lead to a high-quality paper. Resources must be used appropriately, and students also need the skills to identify and synthesise key findings in the existing literature and relate these to their own paper. Ideally, students should be able to see the benefit of using scholarly resources in their papers, and continue to seek these out even when it's not a specific assessment requirement, though it can't be assumed that this will be the outcome.
    REFERENCES
    Ellis, J., & Salisbury, F. (2004). Information literacy milestones: building upon the prior knowledge of first-year students. Australian Library Journal, 53(4), 383-396.
    Gadd, E., Baldwin, A., & Norris, M. (2010). The citation behaviour of civil engineering students. Journal of Information Literacy, 4(2), 37-49.
    Salisbury, F., Yager, Z., & Kirkman, L. (2012). Embedding Inquiry/Research: Moving from a minimalist model to constructive alignment. Paper presented at the 15th International First Year in Higher Education Conference, Brisbane. Retrieved from http://www.fyhe.com.au/past_papers/papers12/Papers/11A.pdf
    Wilkes, J., & Gurney, L. J. (2009). Perceptions and applications of information literacy by first year applied science students. Australian Academic & Research Libraries, 40(3), 159-171.
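    The percentages quoted in the interim findings follow directly from the stated reference counts (subscription-only references over total references per unit):

```python
# Reproducing the reported shares from the counts given in the findings.
counts = {
    "2010 CIV1EP": (0, 58),
    "2011 CIV1EP": (28, 225),
    "2010 CIV2GR": (71, 237),
}
shares = {unit: round(100 * sub / total, 2) for unit, (sub, total) in counts.items()}
print(shares)  # {'2010 CIV1EP': 0.0, '2011 CIV1EP': 12.44, '2010 CIV2GR': 29.96}
```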

  13. Template for a "Research Footprint": An overview of a researcher's online...

    • explore.openaire.eu
    Updated Jun 20, 2019
    Cite
    Daniella Bayle Deutz; Charlotte Wien (2019). Template for a "Research Footprint": An overview of a researcher's online academic presence [Dataset]. http://doi.org/10.5281/zenodo.3250551
    Dataset updated
    Jun 20, 2019
    Authors
    Daniella Bayle Deutz; Charlotte Wien
    Description

    What do you look like as a researcher when someone external to your institute looks you up online? A "Research Footprint" provides a researcher with an immediate, visual overview of their online academic presence. We show what the researcher's metrics look like in the most widely used citation databases: Scopus, Web of Science (WoS) and Google Scholar. We limit the Research Footprint to the most basic personal metrics: number of publications, number of open access publications, number of citations, times cited per year, and the h-index. We check whether the researcher has created and maintained the most important author identifiers: ORCID, ScopusID and ResearcherID (Publons), and linked them to our institutional repository, which is based on PURE. We collect all this data as a starting point for a 1-on-1 discussion with the researcher. In that discussion we go through the importance of author identifiers and check whether all their publications are properly claimed on Scopus, WoS and Google Scholar. Finally, we give them the tools to maintain their profiles on their own, to ensure that when external parties look them up, they find an accurate representation of the researcher's publication data. These are the template files we developed at the University of Southern Denmark to generate the Research Footprint. They include a disclaimer and a GDPR statement. The publication data can be collected in the provided Excel file and then copied over to the Word file.

  14. Journal of Chemistry Acceptance Rate - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 15, 2022
    Cite
    Research Help Desk (2022). Journal of Chemistry Acceptance Rate - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/acceptance-rate/341/journal-of-chemistry
    Explore at:
    Dataset updated
    Feb 15, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of Chemistry Acceptance Rate - ResearchHelpDesk - Journal of Chemistry is a peer-reviewed, Open Access journal that publishes original research articles as well as review articles on all aspects of fundamental and applied chemistry. Journal of Chemistry is archived in Portico, which provides permanent archiving for electronic scholarly journals, as well as via the LOCKSS initiative. It operates a fully open access publishing model which allows open global access to its published content. This model is supported through Article Processing Charges. Journal of Chemistry is included in many leading abstracting and indexing databases. The most recent Impact Factor for Journal of Chemistry is 1.727 according to the 2018 Journal Citation Reports released by Clarivate Analytics in 2019. The journal's most recent CiteScore is 1.32 according to the CiteScore 2018 metrics released by Scopus.

    Abstracting and Indexing: Academic Search Alumni Edition, Academic Search Complete, AgBiotech Net, AgBiotech News and Information, AGRICOLA, Agricultural Economics Database, Agricultural Engineering Abstracts, Agroforestry Abstracts, Animal Breeding Abstracts, Animal Science Database, Biofuels Abstracts, Botanical Pesticides, CAB Abstracts, Chemical Abstracts Service (CAS), CNKI Scholar, Crop Physiology Abstracts, Crop Science Database, Directory of Open Access Journals (DOAJ), EBSCOhost Connection, EBSCOhost Research Databases, Elsevier BIOBASE - Current Awareness in Biological Sciences (CABS), EMBiology, Energy and Power Source, Global Health, Google Scholar, J-Gate Portal, Journal Citation Reports - Science Edition, Open Access Journals Integrated Service System Project (GoOA), Primo Central Index, Reaxys, Science Citation Index Expanded, Scopus, Textile Technology Index, The Summon Service, WorldCat Discovery Services.

  15. Journal of Chemistry Impact Factor 2024-2025 - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Feb 23, 2022
    Cite
    Research Help Desk (2022). Journal of Chemistry Impact Factor 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/impact-factor-if/341/journal-of-chemistry
    Explore at:
    Dataset updated
    Feb 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of Chemistry Impact Factor 2024-2025 - ResearchHelpDesk - Journal of Chemistry is a peer-reviewed, Open Access journal that publishes original research articles as well as review articles on all aspects of fundamental and applied chemistry. Journal of Chemistry is archived in Portico, which provides permanent archiving for electronic scholarly journals, as well as via the LOCKSS initiative. It operates a fully open access publishing model which allows open global access to its published content. This model is supported through Article Processing Charges. Journal of Chemistry is included in many leading abstracting and indexing databases. The most recent Impact Factor for Journal of Chemistry is 1.727 according to the 2018 Journal Citation Reports released by Clarivate Analytics in 2019. The journal's most recent CiteScore is 1.32 according to the CiteScore 2018 metrics released by Scopus.

    Abstracting and Indexing: Academic Search Alumni Edition, Academic Search Complete, AgBiotech Net, AgBiotech News and Information, AGRICOLA, Agricultural Economics Database, Agricultural Engineering Abstracts, Agroforestry Abstracts, Animal Breeding Abstracts, Animal Science Database, Biofuels Abstracts, Botanical Pesticides, CAB Abstracts, Chemical Abstracts Service (CAS), CNKI Scholar, Crop Physiology Abstracts, Crop Science Database, Directory of Open Access Journals (DOAJ), EBSCOhost Connection, EBSCOhost Research Databases, Elsevier BIOBASE - Current Awareness in Biological Sciences (CABS), EMBiology, Energy and Power Source, Global Health, Google Scholar, J-Gate Portal, Journal Citation Reports - Science Edition, Open Access Journals Integrated Service System Project (GoOA), Primo Central Index, Reaxys, Science Citation Index Expanded, Scopus, Textile Technology Index, The Summon Service, WorldCat Discovery Services.

  16. Map of articles about "Teaching Open Science"

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Isabel Steinhardt; Isabel Steinhardt (2020). Map of articles about "Teaching Open Science" [Dataset]. http://doi.org/10.5281/zenodo.3371415
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Isabel Steinhardt; Isabel Steinhardt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This description is part of the blog post "Systematic Literature Review of teaching Open Science" https://sozmethode.hypotheses.org/839

    In my opinion, we do not pay enough attention to teaching Open Science in higher education. I therefore designed a seminar that teaches students the practices of Open Science by doing qualitative research, and described it in the article "Teaching Open Science and qualitative methods". For that article, I began reviewing the literature on "Teaching Open Science". The result of my literature review is that certain aspects of Open Science are used in teaching. However, Open Science with all its aspects (Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools) is not an issue in publications about teaching.

    Based on this insight, I have started a systematic literature review. I quickly realized that I need help to analyse and interpret the articles and to evaluate my preliminary findings. The different disciplinary cultures of teaching different aspects of Open Science are especially challenging, as I, as a social scientist, do not have enough insight to interpret the results correctly. Therefore, I would like to invite you to participate in this research project!

    I am now looking for people who would like to join a collaborative process to further explore and write the systematic literature review on "Teaching Open Science", because I want to turn this project into a Massive Open Online Paper (MOOP). According to the 10 rules of Tennant et al. (2019) on MOOPs, it is crucial to find a core group that is enthusiastic about the topic. I am therefore looking for people who are interested in creating the structure of the paper and writing it together with me, as well as people who want to search for and review literature or evaluate the literature I have already found. Together with the interested persons I would then define the rules for the project (cf. Tennant et al. 2019). So if you are interested in contributing to the further search for articles and/or to the interpretation and writing of results, please get in touch. For everyone interested in contributing, the list of articles collected so far is freely accessible at Zotero: https://www.zotero.org/groups/2359061/teaching_open_science. The figure shown below provides a first overview of my ongoing work. I created the figure with the free software yEd and uploaded the file to Zenodo, so everyone can download and work with it:

    To make transparent what I have done so far, I will first introduce what a systematic literature review is. Secondly, I describe the decisions I made to start with the systematic literature review. Third, I present the preliminary results.

    Systematic literature review – an Introduction

    Systematic literature reviews "are a method of mapping out areas of uncertainty, and identifying where little or no relevant research has been done." (Petticrew/Roberts 2008: 2). Fink defines the systematic literature review as a "systemic, explicit, and reproducible method for identifying, evaluating, and synthesizing the existing body of completed and recorded work produced by researchers, scholars, and practitioners." (Fink 2019: 6). The aim of a systematic literature review is to overcome the subjectivity of a researcher's search for literature. However, there can never be a fully objective selection of articles, because the researcher has already made a preselection, for example by deciding on search strings such as "Teaching Open Science". In this respect, transparency is the core criterion for a high-quality review.

    In order to achieve high quality and transparency, Fink (2019: 6-7) proposes the following seven steps:

    1. Selecting a research question.
    2. Selecting the bibliographic database.
    3. Choosing the search terms.
    4. Applying practical screening criteria.
    5. Applying methodological screening criteria.
    6. Doing the review.
    7. Synthesizing the results.

    I have adapted these steps for the “Teaching Open Science” systematic literature review. In the following, I will present the decisions I have made.

    Systematic literature review – decisions I made

    1. Research question: I am interested in the following research questions: How is Open Science taught in higher education? Is Open Science taught in its full range with all aspects like Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools? Which aspects are taught? Are there disciplinary differences as to which aspects are taught and, if so, why are there such differences?
    2. Databases: I started my search at the Directory of Open Access Journals (DOAJ). "DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals." (https://doaj.org/) Secondly, I used the Bielefeld Academic Search Engine (BASE). BASE is operated by Bielefeld University Library and is "one of the world's most voluminous search engines especially for academic web resources" (base-search.net). Both platforms are non-commercial and focus on Open Access publications, and thus differ from commercial publication databases such as Web of Science and Scopus. For this project, I deliberately decided against commercial providers and against restricting the search to indexed journals, because my explicit aim was to find articles that are open in the sense of Open Science.
    3. Search terms: To identify articles about teaching Open Science I used the following search strings: "teaching open science" OR teaching "open science" OR teach "open science". The topic search looked for these strings in the title, abstract and keywords of articles. Since these are very narrow search terms, I decided to broaden the method: I searched the reference lists of all articles found this way for further relevant literature, and, using Google Scholar, I checked which other authors cited the articles in the sample. If an article found this way met my methodological criteria, I included it in the sample and again looked through its reference list and its citations on Google Scholar. This process has not yet been completed.
    4. Practical screening criteria: I included English and German articles in the sample, as I speak these languages (articles in other languages are very welcome, if there are people who can interpret them!). Only journal articles, articles in edited volumes, working papers and conference papers from proceedings were included. I checked whether the journals were predatory journals; such articles were not included. I did not include blog posts, books or articles from newspapers. I only included articles whose full texts are accessible via my institution (University of Kassel). As a result, recently published articles at Elsevier could not be included because of the special situation in Germany regarding Project DEAL (https://www.projekt-deal.de/about-deal/). For articles that are not freely accessible, I checked whether there is an accessible version in a repository or whether a preprint is available. If this was not the case, the article was not included. I started the analysis in May 2019.
    5. Methodological criteria: The method described above of checking reference lists has the problem of subjectivity. Therefore, I hope that other people will be interested in this project and evaluate my decisions. I used the following criteria as the basis for my decisions: First, the articles must focus on teaching. This means, for example, that articles must describe how a course was designed and carried out. Second, at least one aspect of Open Science has to be addressed. The aspects can be very diverse (FOSS, repositories, wikis, data management, etc.) but have to comply with the principles of openness. This means, for example, that I included an article when it deals with the use of FOSS in class and addresses the openness of FOSS. I did not include articles in which the authors describe the use of a particular free and open source software for teaching but do not address the principles of openness or re-use.
    6. Doing the review: Thanks to the methodical approach of going through the reference lists, it is possible to create a map of how the articles relate to each other. This results in thematic clusters and connections between clusters. The starting point for the map was a set of four articles (Cook et al. 2018; Marsden, Thompson, and Plonsky 2017; Petras et al. 2015; Toelch and Ostwald 2018) that I found using the databases and criteria described above. I used yEd to generate the network. "yEd is a powerful desktop application that can be used to quickly and effectively generate high-quality diagrams." (https://www.yworks.com/products/yed) In the network, arrows show which articles are cited in an article and which articles are cited by others as well. In addition, I made an initial rough classification of the content using colours. This classification is based on the contents mentioned in the articles' title and abstract. This rough content classification requires a more exact, i.e., content-based subdivision and
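The mapping step described above can be sketched in code: the snippet below builds a small directed citation graph and serializes it as GraphML, a format yEd opens directly. The four seed articles from the text are used as node names, but the edges between them are invented for illustration, not the actual citation relations.

```python
import xml.etree.ElementTree as ET

# Hypothetical citation links: article -> articles it cites.
# Node names are the four seed articles; the edges are illustrative only.
cites = {
    "Cook et al. 2018": ["Marsden et al. 2017"],
    "Marsden et al. 2017": [],
    "Petras et al. 2015": [],
    "Toelch & Ostwald 2018": ["Cook et al. 2018", "Petras et al. 2015"],
}

def to_graphml(edges: dict) -> str:
    """Serialize a directed citation graph as GraphML (openable in yEd)."""
    root = ET.Element("graphml", xmlns="http://graphml.graphdrawing.org/xmlns")
    graph = ET.SubElement(root, "graph", id="citations", edgedefault="directed")
    for node in edges:
        ET.SubElement(graph, "node", id=node)
    for source, targets in edges.items():
        for target in targets:
            ET.SubElement(graph, "edge", source=source, target=target)
    return ET.tostring(root, encoding="unicode")

print(to_graphml(cites))
```

Saving the returned string to a `.graphml` file and opening it in yEd reproduces the kind of arrow diagram described in step 6; colours for the rough content classification would then be assigned inside yEd.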

  17. Data from: The assessment of science: the relative merits of...

    • datadryad.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Oct 7, 2014
    Cite
    Adam Eyre-Walker; Nina Stoletzki (2014). The assessment of science: the relative merits of post-publication review, the impact factor and the number of citations [Dataset]. http://doi.org/10.5061/dryad.2h4j5
    Explore at:
    zipAvailable download formats
    Dataset updated
    Oct 7, 2014
    Dataset provided by
    Dryad
    Authors
    Adam Eyre-Walker; Nina Stoletzki
    Time period covered
    2014
    Description

    Wellcome Trust data (Dryad): Data given are the journal of publication, the score of the first and second assessors, the 2-year and 5-year impact factors, and the number of citations as obtained from Google Scholar in 2011. Please cite Allen et al. (2009) PLoS One 4: e5910 and Eyre-Walker and Stoletzki (2013), PLoS Biology.

    F1000 data (Dryad): Data provided are the journal name, the score from the first and second assessor (note that not all papers were assessed by two people), the 2-year and 5-year impact factors, the number of assessors (in this analysis; there might have been more than 2) and the number of citations from Google Scholar as of 2011. Please cite Eyre-Walker and Stoletzki (2013), PLoS Biology.

  18. The Journal of Community Health Management CiteScore 2024-2025 -...

    • researchhelpdesk.org
    Updated Aug 3, 2022
    Cite
    Research Help Desk (2022). The Journal of Community Health Management CiteScore 2024-2025 - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/sjr/592/the-journal-of-community-health-management
    Explore at:
    Dataset updated
    Aug 3, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    The Journal of Community Health Management CiteScore 2024-2025 - ResearchHelpDesk - The Journal of Community Health Management (JCHM) is an open access, peer-reviewed quarterly journal, published since 2014 under the auspices of the Innovative Education and Scientific Research Foundation (IESRF), which aims to uplift researchers, scholars, academicians, and professionals in all academic and scientific disciplines. IESRF is dedicated to the transfer of technology and research by publishing scientific journals and research content, providing professional memberships, and conducting conferences, seminars, and award programs. With the aim of faster and better dissemination of knowledge, articles are published 'Ahead of Print' immediately on acceptance. In addition, the journal allows free access (Open Access) to its contents, which is likely to attract more readers and citations to articles published in JCHM. Manuscripts must be prepared in accordance with the "Uniform requirements for Manuscripts submitted to Biomedical Journals" guidelines of the International Committee of Medical Journal Editors (updated December 2019). The uniform requirements and specific requirements of JCHM are mentioned below. Before sending a manuscript, contributors are requested to check the author guidelines, available from the website of the journal (www.jchm.in/info/author) or directly from the manuscript submission website (Innovative Pre-Publication Portal): https://innovpub.org/journal/JCHM

    Aims and Scope: The journal is committed to publishing research-oriented manuscripts that address significant issues in all subjects and areas of community health management, and to improving education and research on Acute Care, Biostatistics, Community Health, Epidemiology and Health Services Research, Health Management, Medicine, and allied branches of the medical sciences, including Health Statistics, Nutrition, Preventive Medicine, Primary Prevention, Primary Health Care, Secondary Prevention, Secondary Healthcare, Tertiary Healthcare, etc.

    Indexing and Abstracting: Index Copernicus, Google Scholar, Indian Science Abstracts, National Science Library, J-Gate, ROAD, CrossRef, Microsoft Academic, Indian Citation Index (ICI).

    Journal Ethics: The journal is committed to upholding the highest standards of ethical behavior at all stages of publication. We strictly adhere to industry associations such as the Committee on Publication Ethics (COPE), the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME), which set standards and provide guidelines for best practices in order to meet these requirements. For our specific policies regarding duplicate publication, conflict of interest, patient consent, etc., please see the editor guidelines.

  19. Data from: CRAWDAD wireless network data citation bibliography

    • figshare.com
    txt
    Updated Jan 19, 2016
    Cite
    Tristan Henderson; David Kotz (2016). CRAWDAD wireless network data citation bibliography [Dataset]. http://doi.org/10.6084/m9.figshare.1203646.v1
    Explore at:
    txtAvailable download formats
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Tristan Henderson; David Kotz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This BibTeX file contains the corpus of papers that cite CRAWDAD wireless network datasets, as used in the paper: Tristan Henderson and David Kotz. Data citation practices in the CRAWDAD wireless network data archive. Proceedings of the Second Workshop on Linking and Contextualizing Publications and Datasets, London, UK, September 2014. Most of the fields are standard BibTeX fields. Two require further explanation. "citations" - this field contains the citations for a paper as counted by Google Scholar as of 24 September 2014. "keywords" - this field contains a set of tags indicating data citation practice. These are as follows:

    - "uses_crawdad_data" - this paper uses a CRAWDAD dataset
    - "cites_insufficiently" - this paper does not meet our sufficiency criteria
    - "cites_by_description" - this paper cites a dataset by description rather than dataset identifier
    - "cites_canonical_paper" - this paper cites the original ("canonical") paper that collected a dataset, rather than pointing to the dataset
    - "cites_by_name" - this paper cites a dataset by a colloquial name rather than dataset identifier
    - "cites_crawdad_url" - this paper cites the main CRAWDAD URL rather than a particular dataset
    - "cites_without_url" - this paper does not provide a URL for dataset access
    - "cites_wrong_attribution" - this paper attributes a dataset to CRAWDAD, Dartmouth etc. rather than the dataset authors
    - "cites_vaguely" - this paper cites the used datasets (if any) too vaguely to be sufficient

    If you have any questions about the data, please contact us at crawdad@crawdad.org
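The tag scheme above lends itself to simple counting. The sketch below tallies citation-practice tags across `keywords` fields with a stdlib-only regex; the two BibTeX entries are invented stand-ins for the real corpus, and the regex assumes keywords values contain no nested braces.

```python
import re
from collections import Counter

# Hypothetical entries in the style described above (the real corpus
# is the BibTeX file itself).
bibtex = """
@inproceedings{a2012, title={Paper A}, citations={41},
  keywords={uses_crawdad_data, cites_canonical_paper}}
@article{b2013, title={Paper B}, citations={7},
  keywords={uses_crawdad_data, cites_insufficiently, cites_without_url}}
"""

def tag_counts(text: str) -> Counter:
    """Count citation-practice tags across all keywords fields."""
    counts = Counter()
    # Assumes brace-delimited keywords values with comma-separated tags.
    for field in re.findall(r"keywords\s*=\s*\{([^}]*)\}", text):
        counts.update(tag.strip() for tag in field.split(","))
    return counts

print(tag_counts(bibtex))
```

A full analysis would use a proper BibTeX parser, but for flat comma-separated keyword lists this regex pass is sufficient.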

  20. Data from: Forecasting the publication and citation outcomes of Covid-19...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Sep 27, 2022
    Cite
    Thomas Pfeiffer; Michael Gordon; Michael Bishop; Yiling Chen; Brandon Goldfedder; Anna Dreber; Felix Holzmeister; Magnus Johannesson; Yang Liu; Charles Twardy; Juntao Wang; Luisa Tran (2022). Forecasting the publication and citation outcomes of Covid-19 preprints [Dataset]. http://doi.org/10.5061/dryad.rfj6q57d0
    Explore at:
    zipAvailable download formats
    Dataset updated
    Sep 27, 2022
    Dataset provided by
    University of California, Santa Cruz
    Harvard University
    Stockholm School of Economics
    Jacobs Engineering Group
    Gold Brand Software
    Universität Innsbruck
    Michael Bishop Consulting
    Massey University
    Authors
    Thomas Pfeiffer; Michael Gordon; Michael Bishop; Yiling Chen; Brandon Goldfedder; Anna Dreber; Felix Holzmeister; Magnus Johannesson; Yang Liu; Charles Twardy; Juntao Wang; Luisa Tran
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    The scientific community reacted quickly to the Covid-19 pandemic in 2020, generating an unprecedented increase in publications. Many of these publications were released on preprint servers such as medRxiv and bioRxiv. It is unknown, however, how reliable these preprints are, and whether they will eventually be published in scientific journals. In this study, we use crowdsourced human forecasts to predict publication outcomes and future citation counts for a sample of 400 preprints with high Altmetric scores. Most of these preprints were published within one year of upload to a preprint server (70%), and 46% of the published preprints appeared in a high-impact journal with a Journal Impact Factor of at least 10. On average, the preprints received 162 citations within the first year. We found that forecasters can predict whether preprints will be published after one year and whether the publishing journal has high impact. Forecasts are also informative with respect to the preprints' rankings in terms of Google Scholar citations within one year of upload to a preprint server. For both types of assessment, we found statistically significant positive correlations between forecasts and observed outcomes. While the forecasts can help to provide a preliminary assessment of preprints at a faster pace than the traditional peer-review process, it remains to be investigated whether such an assessment is suited to identifying methodological problems in preprints. Methods: The dataset consists of survey responses collected through Qualtrics. Data were formatted and stored as .csv, and analysed with R.
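The rank-based comparison between forecasts and citation outcomes can be illustrated with a stdlib-only Spearman correlation. The forecast scores and citation counts below are invented for five hypothetical preprints, not the study's data, and the study itself used R rather than Python.

```python
# Hypothetical forecast scores and observed first-year citation counts
# for five preprints (illustrative numbers only).
forecasts = [0.9, 0.4, 0.7, 0.2, 0.6]
citations = [310, 40, 150, 12, 95]

def ranks(values):
    """Rank values from 1 (smallest), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

print(spearman(forecasts, citations))
```

Because the invented forecasts happen to rank the preprints in the same order as their citations, this toy example yields a correlation of 1.0; real forecast data would give an intermediate positive value.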
