100+ datasets found
  1. Number of new papers on Google Scholar related to edge computing worldwide...

    • statista.com
    Updated Jul 18, 2025
    Cite
    Statista (2025). Number of new papers on Google Scholar related to edge computing worldwide 2010-2023 [Dataset]. https://www.statista.com/statistics/1193762/worldwide-edge-computing-google-scholar-paper/
    Explore at:
    Dataset updated
    Jul 18, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    The number of articles related to edge computing on Google Scholar increased dramatically over the last decade. In 2010, only *** articles related to edge computing were recorded on Google Scholar, whereas 2023 saw more than ****** scholarly papers on this topic.

  2. Data from: Google Scholar as a Data Source for Research Assessment in the...

    • zenodo.org
    bin
    Updated Dec 31, 2021
    Cite
    Güleda Doğan (2021). Google Scholar as a Data Source for Research Assessment in the Social Sciences [Dataset]. http://doi.org/10.5281/zenodo.5079007
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 31, 2021
    Dataset provided by
    Edward Elgar Publishing
    Authors
    Güleda Doğan
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Column 1

    Source

    Data sources from which the publications were retrieved. Values for this column are “Google Scholar”, “Scopus”, and “Web of Science”.

    Column 2

    Authors

    The authors of the publications. This column is kept as additional information for verification of the data; it was not used in the analysis and has not been standardized.

    Column 3

    Title

    Titles of the publications. For non-English publications, English titles, if available, are kept in this column; otherwise, the original titles have been entered. The titles were checked, and errors and omissions were corrected. Corrected titles are marked in red.

    Column 4

    Title translated with Google Translate

    In this column, the English translations of the titles of publications that do not have English titles are kept. Google Translate was used for language detection and translation. For publications with an English title, the expression [Title in English] has been entered. The translations of the original titles kept in this field were used in the analysis made through VOSviewer. It is marked in red as it is newly added data.

    Column 5

    Language

    Language of the publications. The languages of all publications were checked, missing data were completed and errors were corrected. If the language of the publication could not be determined, the value is [Not found]. The cells with addition or correction are marked in red.

    Column 6

    Document type

    Types of the documents. For all publications, the document type information was checked, missing values were completed, and corrections were made. All modified cells are marked in red. Article and Review types are referred to as “Article” in the text.

    Column 7

    Full-text available

    Values for this column are “Yes” and “No”: “Yes” if the full text of the publication is accessible via the web, otherwise “No”.

    Column 8

    On research evaluation

    Values for this column are “Yes” and “No”. Using the title and/or abstract, each publication was assessed for whether it relates to research evaluation: “Yes” if found relevant, “No” if not. It is marked in red as it is newly added data.

    Column 9

    Publication year

    The publication years of the documents. Missing publication years were completed, and existing ones were checked and corrected where necessary. If the year of publication could not be found, it is indicated as [Not found].

    Column 10

    English abstract

    Abstracts of the publications. If there is an accessible/available English abstract for the publication, it is kept in this column. [Not found/Not available] for missing values. Abstracts that were added, changed, corrected, or completed are marked in red.
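
    As a quick illustration of this schema, here is a minimal Python sketch for validating the controlled vocabularies described for Columns 1, 7, and 8. The file name is hypothetical and the column headers are assumed to match the names above; the Zenodo record ships a binary file, so an export to CSV may be needed first.

        import pandas as pd

        # Hypothetical CSV export of the dataset; columns named as in the description.
        df = pd.read_csv("gs_research_assessment.csv")

        # Column 1 (Source) should only contain the three data sources.
        allowed_sources = {"Google Scholar", "Scopus", "Web of Science"}
        print(df.loc[~df["Source"].isin(allowed_sources), "Source"])  # expect empty

        # Columns 7 and 8 are binary Yes/No flags.
        for col in ["Full-text available", "On research evaluation"]:
            bad = ~df[col].isin(["Yes", "No"])
            print(col, ":", bad.sum(), "unexpected values")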

  3. Canadian Student-led Academic Journals - platforms and indexing data

    • search.dataone.org
    • borealisdata.ca
    Updated Dec 28, 2023
    Cite
    Maistrovskaya, Mariya (2023). Canadian Student-led Academic Journals - platforms and indexing data [Dataset]. http://doi.org/10.5683/SP3/QXEUVH
    Explore at:
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Maistrovskaya, Mariya
    Description

    This dataset was compiled as part of a study on Barriers and Opportunities in the Discoverability and Indexing of Student-led Academic Journals. The list of student journals and their details is compiled from public sources. This list is used to identify the presence of Canadian student journals in Google Scholar as well as in select indexes and databases: DOAJ, Scopus, Web of Science, Medline, Erudit, ProQuest, and HeinOnline. Additionally, each journal's publishing platform is recorded for use in a correlational analysis against Google Scholar indexing results. For further details, see the README.

  4. Google Scholar

    • dknet.org
    • scicrunch.org
    • +1 more
    Updated Oct 31, 2024
    + more versions
    Cite
    (2024). Google Scholar [Dataset]. http://identifiers.org/RRID:SCR_008878
    Explore at:
    Dataset updated
    Oct 31, 2024
    Description

    Google Scholar provides a simple way to broadly search for scholarly literature. From one place, you can search across many disciplines and sources: articles, theses, books, abstracts and court opinions, from academic publishers, professional societies, online repositories, universities and other web sites. Google Scholar helps you find relevant work across the world of scholarly research.

    Features of Google Scholar:
    • Search diverse sources from one convenient place
    • Find articles, theses, books, abstracts or court opinions
    • Locate the complete document through your library or on the web
    • Learn about key scholarly literature in any area of research

    How are documents ranked? Google Scholar aims to rank documents the way researchers do, weighing the full text of each document, where it was published, who it was written by, as well as how often and how recently it has been cited in other scholarly literature.

    • Publishers - Include your publications in Google Scholar
    • Librarians - Help patrons discover your library's resources

  5. Library journals ranking 2018

    • figshare.com
    xlsx
    Updated Jun 2, 2023
    Cite
    Andrea Marchitelli (2023). Library journals ranking 2018 [Dataset]. http://doi.org/10.6084/m9.figshare.6994412.v3
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Andrea Marchitelli
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table lists 38 journals in library and archival sciences, with their h-index (from Google Metrics) and the Italian evaluation agency ranking. It updates the 2016 and 2017 data at Marchitelli, Andrea (2016): Library journals ranking. figshare. https://doi.org/10.6084/m9.figshare.3487001.v2 and Marchitelli, Andrea (2017): Library journals ranking 2017. figshare. https://figshare.com/articles/Library_journals_ranking_2017/5188057/1

  6. Overlap between Web of Science (WoS) and Google Scholar (GS) for topic word...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Neal Robert Haddaway; Alexandra Mary Collins; Deborah Coughlin; Stuart Kirk (2023). Overlap between Web of Science (WoS) and Google Scholar (GS) for topic word searches in Web of Science and the first 1,000 search results from full text searches in Google Scholar. [Dataset]. http://doi.org/10.1371/journal.pone.0138237.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Neal Robert Haddaway; Alexandra Mary Collins; Deborah Coughlin; Stuart Kirk
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    n/a corresponds to search results that were too voluminous to download in full. See Table 2 for case study explanations.

  7. Typical characteristics of academic citation databases and search engines.

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Neal Robert Haddaway; Alexandra Mary Collins; Deborah Coughlin; Stuart Kirk (2023). Typical characteristics of academic citation databases and search engines. [Dataset]. http://doi.org/10.1371/journal.pone.0138237.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Neal Robert Haddaway; Alexandra Mary Collins; Deborah Coughlin; Stuart Kirk
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Typical characteristics of academic citation databases and search engines.

  8. Map of articles about "Teaching Open Science"

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Isabel Steinhardt (2020). Map of articles about "Teaching Open Science" [Dataset]. http://doi.org/10.5281/zenodo.3371415
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Isabel Steinhardt
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This description is part of the blog post "Systematic Literature Review of teaching Open Science" https://sozmethode.hypotheses.org/839

    In my opinion, we do not pay enough attention to teaching Open Science in higher education. Therefore, I designed a seminar to teach students the practices of Open Science by doing qualitative research. About this seminar, I wrote the article "Teaching Open Science and qualitative methods". For that article, I started to review the literature on "Teaching Open Science". The result of my literature review is that certain aspects of Open Science are used for teaching. However, Open Science with all its aspects (Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools) is not an issue in publications about teaching.

    Based on this insight, I started a systematic literature review. I quickly realized that I need help to analyse and interpret the articles and to evaluate my preliminary findings. The different disciplinary cultures of teaching different aspects of Open Science are especially challenging, as I, as a social scientist, do not have enough insight to interpret the results correctly. Therefore, I would like to invite you to participate in this research project!

    I am now looking for people who would like to join a collaborative process to further explore and write the systematic literature review on "Teaching Open Science", because I want to turn this project into a Massive Open Online Paper (MOOP). According to the 10 rules of Tennant et al. (2019) on MOOPs, it is crucial to find a core group that is enthusiastic about the topic. Therefore, I am looking for people who are interested in creating the structure of the paper and writing the paper together with me. I am also looking for people who want to search for and review literature, or evaluate the literature I have already found. Together with the interested persons, I would then define the rules for the project (cf. Tennant et al. 2019). So if you are interested in contributing to the further search for articles and/or to enhancing the interpretation and writing of results, please get in touch. For everyone interested in contributing, the list of articles collected so far is freely accessible at Zotero: https://www.zotero.org/groups/2359061/teaching_open_science. The figure shown below provides a first overview of my ongoing work. I created the figure with the free software yEd and uploaded the file to Zenodo, so everyone can download and work with it:

    To make transparent what I have done so far, I will first introduce what a systematic literature review is. Secondly, I describe the decisions I made to start with the systematic literature review. Third, I present the preliminary results.

    Systematic literature review – an Introduction

    Systematic literature reviews "are a method of mapping out areas of uncertainty, and identifying where little or no relevant research has been done" (Petticrew/Roberts 2008: 2). Fink defines the systematic literature review as a "systemic, explicit, and reproducible method for identifying, evaluating, and synthesizing the existing body of completed and recorded work produced by researchers, scholars, and practitioners" (Fink 2019: 6). The aim of a systematic literature review is to surpass the subjectivity of a researcher's search for literature. However, there can never be an objective selection of articles, because the researcher has already made a preselection, for example by deciding on search strings such as "Teaching Open Science". In this respect, transparency is the core criterion for a high-quality review.

    In order to achieve high quality and transparency, Fink (2019: 6-7) proposes the following seven steps:

    1. Selecting a research question.
    2. Selecting the bibliographic database.
    3. Choosing the search terms.
    4. Applying practical screening criteria.
    5. Applying methodological screening criteria.
    6. Doing the review.
    7. Synthesizing the results.

    I have adapted these steps for the “Teaching Open Science” systematic literature review. In the following, I will present the decisions I have made.

    Systematic literature review – decisions I made

    1. Research question: I am interested in the following research questions: How is Open Science taught in higher education? Is Open Science taught in its full range with all aspects like Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools? Which aspects are taught? Are there disciplinary differences as to which aspects are taught and, if so, why are there such differences?
    2. Databases: I started my search at the Directory of Open Access Journals (DOAJ). "DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals." (https://doaj.org/) Secondly, I used the Bielefeld Academic Search Engine (BASE). BASE is operated by Bielefeld University Library and is "one of the world's most voluminous search engines especially for academic web resources" (base-search.net). Both platforms are non-commercial and focus on Open Access publications, and thus differ from commercial publication databases such as Web of Science and Scopus. For this project, I deliberately decided against commercial providers and against restricting the search to indexed journals, because my explicit aim was to find articles that are open in the sense of Open Science.
    3. Search terms: To identify articles about teaching Open Science, I used the following search strings: "teaching open science" OR teaching "open science" OR teach "open science". The topic search looked for the search strings in the title, abstract, and keywords of articles. Since these are very narrow search terms, I decided to broaden the method: I searched the reference lists of all articles returned by this search for further relevant literature, and using Google Scholar I checked which other authors cited the articles in the sample. If the articles found this way met my methodological criteria, I included them in the sample and likewise looked through their reference lists and citations on Google Scholar. This process has not yet been completed.
    4. Practical screening criteria: I have included English and German articles in the sample, as I speak these languages (articles in other languages are very welcome, if there are people who can interpret them!). Only journal articles, articles in edited volumes, working papers, and conference papers from proceedings were included in the sample. I checked whether the journals were predatory journals; such articles were not included. I did not include blog posts, books, or articles from newspapers. I only included articles whose full texts are accessible via my institution (University of Kassel). As a result, articles recently published by Elsevier could not be included because of the special situation in Germany regarding Project DEAL (https://www.projekt-deal.de/about-deal/). For articles that are not freely accessible, I checked whether there is an accessible version in a repository or whether a preprint is available. If this was not the case, the article was not included. I started the analysis in May 2019.
    5. Methodological criteria: The method described above to check the reference lists has the problem of subjectivity. Therefore, I hope that other people will be interested in this project and evaluate my decisions. I have used the following criteria as the basis for my decisions: First, the articles must focus on teaching. For example, this means that articles must describe how a course was designed and carried out. Second, at least one aspect of Open Science has to be addressed. The aspects can be very diverse (FOSS, repositories, wiki, data management, etc.) but have to comply with the principles of openness. This means, for example, I included an article when it deals with the use of FOSS in class and addresses the aspects of openness of FOSS. I did not include articles when the authors describe the use of a particular free and open source software for teaching but did not address the principles of openness or re-use.
    6. Doing the review: Due to the methodical approach of going through the reference lists, it is possible to create a map of how the articles relate to each other. This results in thematic clusters and connections between clusters. The starting point for the map were four articles (Cook et al. 2018; Marsden, Thompson, and Plonsky 2017; Petras et al. 2015; Toelch and Ostwald 2018) that I found using the databases and criteria described above. I used yEd to generate the network. "yEd is a powerful desktop application that can be used to quickly and effectively generate high-quality diagrams." (https://www.yworks.com/products/yed) In the network, arrows show which articles are cited in an article and which articles are cited by others as well. In addition, I made an initial rough classification of the content using colours. This classification is based on the contents mentioned in the articles' title and abstract. This rough content classification requires a more exact, i.e., content-based subdivision and

  9. The ability of Google Scholar to find included articles from six published...

    • plos.figshare.com
    xls
    Updated Jun 4, 2023
    Cite
    Neal Robert Haddaway; Alexandra Mary Collins; Deborah Coughlin; Stuart Kirk (2023). The ability of Google Scholar to find included articles from six published systematic reviews. [Dataset]. http://doi.org/10.1371/journal.pone.0138237.t007
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Neal Robert Haddaway; Alexandra Mary Collins; Deborah Coughlin; Stuart Kirk
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    1 For those articles not found using Google Scholar, Web of Science searches were carried out using a Bangor University subscription (Biological Abstracts, MEDLINE, SciELO Citation Index, Web of Science Core Collections, Zoological Record). Records identified as citations are found only within the reference lists of other articles (their existence is not verified by the presence of a publisher version or full-text article, unlike hyperlinked citations).

  10. Data from: A critical review of 'just transition' publications using Google...

    • zenodo.org
    • ekoizpen-zientifikoa.ehu.eus
    • +1 more
    bin
    Updated Jan 23, 2022
    Cite
    Becca Wilgosh; Alevgul H. Sorman; Iñaki Barcena (2022). A critical review of 'just transition' publications using Google and Google Scholar [Dataset]. http://doi.org/10.5281/zenodo.5794358
    Explore at:
    Available download formats: bin
    Dataset updated
    Jan 23, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Becca Wilgosh; Alevgul H. Sorman; Iñaki Barcena
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset offers the list of sources included in our critical review of the term 'just transition' (in English), from 1990 to 2021 in the Global North and South Africa, using Google and Google Scholar as search engines. The publications retrieved include both peer-reviewed literature and publicly available reports and documents. Results (with full citations) are categorized by actor group and type, location, and year of publication.

  11. Big Data and Society Abstract & Indexing - ResearchHelpDesk

    • researchhelpdesk.org
    Updated Jun 23, 2022
    + more versions
    Cite
    Research Help Desk (2022). Big Data and Society Abstract & Indexing - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/abstract-and-indexing/477/big-data-and-society
    Explore at:
    Dataset updated
    Jun 23, 2022
    Dataset authored and provided by
    Research Help Desk
    Description

    Big Data and Society Abstract & Indexing - ResearchHelpDesk - Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work, principally in the social sciences, humanities, and computing and their intersections with the arts and natural sciences, about the implications of Big Data for societies. The Journal's key purpose is to provide a space for connecting debates about the emerging field of Big Data practices and how they are reconfiguring academic, social, industry, business, and government relations, expertise, methods, concepts, and knowledge. BD&S moves beyond usual notions of Big Data and treats it as an emerging field of practice that is not defined by, but generative of, (sometimes) novel data qualities such as high volume and granularity, and complex analytics such as data linking and mining. It thus attends to digital content generated through online and offline practices in social, commercial, scientific, and government domains. This includes, for instance, the content generated on the Internet through social media and search engines, but also that which is generated in closed networks (commercial or government transactions) and open networks such as digital archives, open government, and crowdsourced data. Critically, rather than settling on a definition, the Journal makes this an object of interdisciplinary inquiries and debates explored through studies of a variety of topics and themes. BD&S seeks contributions that analyze Big Data practices and/or involve empirical engagements and experiments with innovative methods, while also reflecting on the consequences for how societies are represented (epistemologies), realized (ontologies), and governed (politics).

    Article processing charge (APC): The APC for this journal is currently 1,500 USD. Authors who do not have funding for open access publishing can request a waiver from the publisher, SAGE, once their Original Research Article is accepted after peer review. For all other content (Commentaries, Editorials, Demos) and Original Research Articles commissioned by the Editor, the APC will be waived.

    Abstract & indexing:
    • Clarivate Analytics: Social Sciences Citation Index (SSCI)
    • Directory of Open Access Journals (DOAJ)
    • Google Scholar
    • Scopus

  12. Replication Data for: GPT-fabricated scientific papers on Google Scholar:...

    • dataverse.harvard.edu
    Updated Sep 2, 2024
    Cite
    Kristofer Rolf Söderström; Jutta Haider; Björn Ekström; Malte Rödl (2024). Replication Data for: GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation [Dataset]. http://doi.org/10.7910/DVN/WUVD8X
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Sep 2, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Kristofer Rolf Söderström; Jutta Haider; Björn Ekström; Malte Rödl
    License

    CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset contains publications collected from Google Scholar with the following queries:
    1. "as of my last knowledge update"
    2. "I don't have access to real-time data"
    3. "as of my last knowledge update" AND "I don't have access to real-time data"

    It contains the following (variable - description):
    • id - ID
    • title - Title of paper
    • author - Author(s) of paper
    • pub_year - Year of publication
    • venue - Venue of publication
    • abstract - Abstract snippet or sample text with exact string match, if found
    • pub_url - URL for publication
    • query - Query that matches publication
    • chatgpt_method - Declared, justified, or otherwise explainable in-text mention or use of GPT/ChatGPT
    • Example Text - Sample text with exact string match, if found
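
    For orientation, here is a minimal pandas sketch for summarizing the collected papers. The CSV file name is hypothetical; the column names follow the variable list above.

        import pandas as pd

        # Hypothetical export of the replication data from the Dataverse record.
        df = pd.read_csv("gpt_fabricated_papers.csv")

        # How many collected papers matched each Google Scholar query?
        print(df["query"].value_counts())

        # Papers per publication year, to see the spread over time.
        print(df["pub_year"].value_counts().sort_index())

        # Papers with a declared or otherwise explainable GPT/ChatGPT use.
        print(df["chatgpt_method"].notna().sum(), "of", len(df))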

  13. Scientific JOURNALS Indicators & Info - SCImagoJR

    • kaggle.com
    Updated Apr 9, 2025
    Cite
    Ali Jalaali (2025). Scientific JOURNALS Indicators & Info - SCImagoJR [Dataset]. https://www.kaggle.com/datasets/alijalali4ai/scimagojr-scientific-journals-dataset
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 9, 2025
    Dataset provided by
    Kaggle
    Authors
    Ali Jalaali
    Description


    The SCImago Journal & Country Rank is a publicly available portal that includes the journals and country scientific indicators developed from the information contained in the Scopus® database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains. Journals can be compared or analysed separately.


    💬Also have a look at
    💡 COUNTRIES Research & Science Dataset - SCImagoJR
    💡 UNIVERSITIES & Research INSTITUTIONS Rank - SCImagoIR

    • Journals can be grouped by subject area (27 major thematic areas), subject category (309 specific subject categories), or by country.
    • Citation data are drawn from over 34,100 titles from more than 5,000 international publishers.
    • The platform takes its name from the SCImago Journal Rank (SJR) indicator, developed by SCImago from the widely known Google PageRank™ algorithm. This indicator shows the visibility of the journals contained in the Scopus® database since 1996.
    • SCImago is a research group from the Consejo Superior de Investigaciones Científicas (CSIC) and the universities of Granada, Extremadura, Carlos III (Madrid), and Alcalá de Henares, dedicated to information analysis, representation, and retrieval by means of visualisation techniques.

    ☢️❓The entire dataset is obtained from public and open-access data of ScimagoJR (SCImago Journal & Country Rank)
    ScimagoJR Journal Rank
    SCImagoJR About Us

    Available indicators:

    • SJR (SCImago Journal Rank) indicator: It expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years, i.e. weighted citations received in year X to documents published in the journal in years X-1, X-2 and X-3. See the detailed description of SJR (PDF).
    • H Index: The h index expresses the journal's number of articles (h) that have received at least h citations. It quantifies both journal scientific productivity and scientific impact, and it is also applicable to scientists, countries, etc. (see the H-index Wikipedia definition). A small computational sketch follows this list.
    • Total Documents: Output of the selected period. All types of documents are considered, including citable and non-citable documents.
    • Total Documents (3 years): Documents published in the three previous years (selected year documents are excluded), i.e. when year X is selected, documents published in years X-1, X-2 and X-3 are retrieved. All types of documents are considered, including citable and non-citable documents.
    • Citable Documents (3 years): Number of citable documents published by a journal in the three previous years (selected year documents are excluded). Exclusively articles, reviews, and conference papers are considered. Non-citable Docs (available in the graphics): the ratio of non-citable documents in the period considered.
    • Total Cites (3 years): Number of citations received in the selected year by a journal to the documents published in the three previous years, i.e. citations received in year X to documents published in years X-1, X-2 and X-3. All types of documents are considered.
    • Cites per Document (2 years): Average citations per document in a 2-year period. It is computed considering the number of citations received by a journal in the current year to the documents published in the two previous years, i.e. citations received in year X to documents published in years X-1 and X-2.
    • Cites per Document (3 years): Average citations per document in a 3-year period. It is computed considering the number of citations received by a journal in the current year to the documents published in the three previous years, i.e. citations received in year X to documents published in years X-1, X-2 and X-3.
    • Self Cites: Number of a journal's self-citations in the selected year to its own documents published in the three previous years, i.e. self-citations in year X to documents published in years X-1, X-2 and X-3. All types of documents are considered.
    • Cited Documents: Number of documents cited at least once in the three previous years, i.e. years X-1, X-2 and X-3.
    • Uncited Documents: Number of uncited documents in the three previous years, i.e. years X-1, X-2 and X-3.
    • Total References: Includes all the bibliographical references in a journal in the selected period.
    • References per Document: Average number of references per document in the selected year.
    • % International Collaboration: The ratio of documents whose affiliations include more than one country address.
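
    To make two of these indicators concrete, here is a small, self-contained Python sketch (not SCImago's own code; the citation counts are invented for illustration) computing a journal's h-index and its Cites per Document over a 2-year window:

        def h_index(citations):
            """h = the largest number such that h articles have at least h citations each."""
            h = 0
            for rank, c in enumerate(sorted(citations, reverse=True), start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        # Invented example: citations received by each article of a journal.
        article_citations = [25, 18, 12, 9, 7, 4, 2, 1, 0]
        print(h_index(article_citations))  # -> 5 (five articles with >= 5 citations each)

        # Cites per Document (2 years): citations received in year X to documents
        # published in years X-1 and X-2, divided by the number of those documents.
        cites_year_x_to_prev2 = 340   # invented
        docs_published_prev2 = 120    # invented
        print(round(cites_year_x_to_prev2 / docs_published_prev2, 2))  # -> 2.83
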
  14. Descriptive statistics of relevance ratings of the 16 questions in the...

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Jelte M. Wicherts (2023). Descriptive statistics of relevance ratings of the 16 questions in the original tool by 20 stakeholders (Study 2). [Dataset]. http://doi.org/10.1371/journal.pone.0147913.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jelte M. Wicherts
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Descriptive statistics of relevance ratings of the 16 questions in the original tool by 20 stakeholders (Study 2).

  15. List of articles resulting from the Google Scholar search "graph based...

    • data.niaid.nih.gov
    • zenodo.org
    • +1 more
    Updated Jul 6, 2023
    Cite
    Michele De Bonis (2023). List of articles resulting from the Google Scholar search "graph based author name disambiguation" published after 1/1/2021 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8117572
    Explore at:
    Dataset updated
    Jul 6, 2023
    Dataset authored and provided by
    Michele De Bonis
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the list of articles resulting from the Google Scholar search “graph based author name disambiguation” published after 1/1/2021. The list is provided for reproducibility of the survey article “Graph-based Methods for Author Name Disambiguation: A Survey” and it was obtained using the following Python script available at https://github.com/WittmannF/sort-google-scholar:

    $ python sortgs.py --kw "graph based author name disambiguation" --startyear 2021

    The command returned a CSV file containing the first 94 publications matching the query (articles with corrupted metadata were excluded), each with metadata including Title, Number of Citations, and Rank. The CSV contains a column that specifies which articles were ultimately selected for the survey.
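
    A short follow-up sketch for working with that CSV (the file name and exact column headers are assumptions; sort-google-scholar's output naming may differ between versions):

        import pandas as pd

        # Hypothetical name of the CSV written by sortgs.py for this query.
        df = pd.read_csv("graph_based_author_name_disambiguation.csv")

        # Rank the publications by citation count, mirroring the survey's selection step.
        top = df.sort_values("Citations", ascending=False)
        print(top[["Title", "Citations"]].head(10))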

  16. Citation Knowledge with Section and Context

    • ordo.open.ac.uk
    zip
    Updated May 5, 2020
    Cite
    Anita Khadka (2020). Citation Knowledge with Section and Context [Dataset]. http://doi.org/10.21954/ou.rd.11346848.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 5, 2020
    Dataset provided by
    The Open University
    Authors
    Anita Khadka
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This dataset contains information from scientific publications written by authors who have published papers at the RecSys conference. It contains four files with information extracted from scientific publications. The details of each file are explained below:

    i) all_authors.tsv: This file contains the details of authors who published research papers at the RecSys conference. The details include authors' identifiers in various forms (number, ORCID iD, DBLP URL, DBLP key, and Google Scholar URL), authors' first name, last name, and their affiliation (where they work).

    ii) all_publications.tsv: This file contains the details of publications authored by the authors in all_authors.tsv (please note that the list does not contain all the authored publications of these authors; refer to the paper for further details). The details include publications' identifiers in different forms (number, DBLP key, DBLP URL, Google Scholar URL), title, filtered title, published date, published conference, and paper abstract.

    iii) selected_author_publications-information.tsv: This file consists of identifiers of authors and their publications, i.e., the selected authors and publications used in our experiment.

    iv) selected_publication_citations-information.tsv: This file contains the information of the selected publications, covering both citing and cited papers used in our experiment: identifier of the citing paper, identifier of the cited paper, citation title, citation filtered title, the sentence before the citation, the citing sentence, the sentence after the citation, and the citation position (section). Please note that it does not contain information on all the citations cited in the publications; for more detail, please refer to the paper.

    This dataset is for research purposes only. If you use this dataset, please cite our paper "Capturing and exploiting citation knowledge for recommending recently published papers", due to be published in the Web2Touch track 2020 (not yet published).

  17. Citations to Google Scholar's Classic Papers (2017 edition)

    • osf.io
    Updated Dec 2, 2021
    Cite
    Alberto Martín-Martín; Enrique Orduna-Malea; Mike Thelwall; Emilio López-Cózar; Michael Gusenbauer (2021). Citations to Google Scholar's Classic Papers (2017 edition) [Dataset]. http://doi.org/10.17605/OSF.IO/GNB72
    Explore at:
    Dataset updated
    Dec 2, 2021
    Dataset provided by
    Center for Open Science (https://cos.io/)
    Authors
    Alberto Martín-Martín; Enrique Orduna-Malea; Mike Thelwall; Emilio López-Cózar; Michael Gusenbauer
    Description

    This project contains the citations found in Google Scholar, Web of Science, and Scopus to the sample of 2,515 highly-cited documents displayed in the 2017 edition of Google Scholar's Classic Papers. Additionally, it contains the tables resulting from the analysis carried out in the paper "Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories".

  18. Edge/Fog Computing Bibliographic Data from Google Scholar 2003-2019

    • ieee-dataport.org
    Updated Sep 1, 2020
    Cite
    Syed Rameez Kakakhel (2020). Edge/Fog Computing Bibliographic Data from Google Scholar 2003-2019 [Dataset]. https://ieee-dataport.org/open-access/edgefog-computing-bibliographic-data-google-scholar-2003-2019
    Explore at:
    Dataset updated
    Sep 1, 2020
    Authors
    Syed Rameez Kakakhel
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Fog Computing

  19. Journal of theoretical and applied computer science Abstract & Indexing -...

    • researchhelpdesk.org
    Updated Feb 16, 2023
    + more versions
    Cite
    Research Help Desk (2023). Journal of theoretical and applied computer science Abstract & Indexing - ResearchHelpDesk [Dataset]. https://www.researchhelpdesk.org/journal/abstract-and-indexing/351/journal-of-theoretical-and-applied-computer-science
    Explore at:
    Dataset updated
    Feb 16, 2023
    Dataset authored and provided by
    Research Help Desk
    Description

    Journal of theoretical and applied computer science Abstract & Indexing - ResearchHelpDesk - The Journal of Theoretical and Applied Computer Science is published by the Computer Science Commission, operating within the Gdansk Branch of the Polish Academy of Sciences and located in Szczecin, Poland. JTACS is an open access journal publishing original research and review papers from a variety of subdisciplines connected to theoretical and applied computer science, including the following:
    • Artificial intelligence
    • Computer modelling and simulation
    • Data analysis and classification
    • Pattern recognition
    • Computer graphics and image processing
    • Information systems engineering
    • Software engineering
    • Computer systems architecture
    • Distributed and parallel processing
    • Computer systems security
    • Web technologies
    • Bioinformatics

    Abstract and indexing:
    • DOAJ (Directory of Open Access Journals)
    • Index Copernicus
    • BazTech
    • Google Scholar

  20. Dataset: A Systematic Literature Review on the topic of High-value datasets

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 23, 2023
    Cite
    Anastasija Nikiforova (2023). Dataset: A Systematic Literature Review on the topic of High-value datasets [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7944424
    Explore at:
    Dataset updated
    Jun 23, 2023
    Dataset provided by
    Charalampos Alexopoulos
    Magdalena Ciesielska
    Nina Rizun
    Andrea Miletič
    Anastasija Nikiforova
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study ("Towards High-Value Datasets determination for data-driven development: a systematic literature review") conducted by Anastasija Nikiforova (University of Tartu), Nina Rizun, Magdalena Ciesielska (Gdańsk University of Technology), Charalampos Alexopoulos (University of the Aegean), and Andrea Miletič (University of Zagreb). It is being made public both to act as supplementary data for the paper "Towards High-Value Datasets determination for data-driven development: a systematic literature review" (a pre-print is available in Open Access at https://arxiv.org/abs/2305.10234) and so that other researchers may use these data in their own work.

    The protocol is intended for the systematic literature review on the topic of high-value datasets, with the aim of gathering information on how the topic of high-value datasets (HVD) and their determination has been reflected in the literature over the years and what has been found by these studies to date, including the indicators used in them, involved stakeholders, data-related aspects, and frameworks. The data in this dataset were collected as the result of the SLR over Scopus, Web of Science, and the Digital Government Research library (DGRL) in 2023.

    Methodology

    To understand how HVD determination has been reflected in the literature over the years and what has been found by these studies to date, all relevant literature covering this topic was studied. To this end, the SLR was carried out by searching the digital libraries covered by Scopus, Web of Science (WoS), and the Digital Government Research library (DGRL).

    These databases were queried for the keywords ("open data" OR "open government data") AND ("high-value data*" OR "high value data*"), which were applied to the article title, keywords, and abstract to limit the results to papers where these objects were primary research objects rather than merely mentioned in the body, e.g., as future work. After deduplication, 11 articles were found to be unique and were further checked for relevance. As a result, a total of 9 articles were examined in depth. Each study was independently examined by at least two authors.
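
    As an illustration of that deduplication step (a sketch only; the authors' actual tooling is not specified in the dataset description), records exported from the three databases can be merged and collapsed by normalized title:

        import pandas as pd

        # Hypothetical exports from the three queried sources.
        frames = [pd.read_csv(f) for f in ("scopus.csv", "wos.csv", "dgrl.csv")]
        merged = pd.concat(frames, ignore_index=True)

        # Normalize titles so the same paper indexed in several databases
        # collapses into a single record.
        merged["title_norm"] = (merged["Title"].str.lower()
                                .str.replace(r"[^a-z0-9 ]", "", regex=True)
                                .str.strip())
        unique = merged.drop_duplicates(subset="title_norm")
        print(len(merged), "records ->", len(unique), "after deduplication")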

    To attain the objective of our study, we developed the protocol, where the information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design- related information, (3) quality-related information, (4) HVD determination-related information.

    Test procedure: Each study was independently examined by at least two authors; after an in-depth examination of the full text of the article, the structured protocol was filled in for each study. The structure of the protocol is available in the supplementary files (see Protocol_HVD_SLR.odt, Protocol_HVD_SLR.docx). The data collected for each study by two researchers were then synthesized into one final version by the third researcher.

    Description of the data in this data set

    Protocol_HVD_SLR provides the structure of the protocol. Spreadsheet #1 provides the filled protocol for the relevant studies. Spreadsheet #2 provides the list of results after the search over the three indexing databases, i.e. before filtering out irrelevant studies.

    The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design- related information, (3) quality-related information, (4) HVD determination-related information

    Descriptive information
    1) Article number - a study number, corresponding to the study number assigned in an Excel worksheet
    2) Complete reference - the complete source information to refer to the study
    3) Year of publication - the year in which the study was published
    4) Journal article / conference paper / book chapter - the type of the paper {journal article, conference paper, book chapter}
    5) DOI / Website - a link to the website where the study can be found
    6) Number of citations - the number of citations of the article in Google Scholar, Scopus, Web of Science
    7) Availability in OA - availability of the article in Open Access
    8) Keywords - keywords of the paper as indicated by the authors
    9) Relevance for this study - the relevance level of the article for this study {high / medium / low}

    Approach- and research design-related information
    10) Objective / RQ - the research objective / aim and the established research questions
    11) Research method (including unit of analysis) - the methods used to collect data, including the unit of analysis (country, organisation, specific unit that has been analysed, e.g., the number of use-cases, scope of the SLR, etc.)
    12) Contributions - the contributions of the study
    13) Method - whether the study uses a qualitative, quantitative, or mixed-methods approach
    14) Availability of the underlying research data - whether there is a reference to the publicly available underlying research data, e.g., transcriptions of interviews or collected data, or an explanation of why these data are not shared
    15) Period under investigation - the period (or moment) in which the study was conducted
    16) Use of theory / theoretical concepts / approaches - does the study mention any theory / theoretical concepts / approaches? If any theory is mentioned, how is it used in the study?

    Quality- and relevance-related information
    17) Quality concerns - whether there are any quality concerns (e.g., limited information about the research methods used)
    18) Primary research object - is the HVD a primary research object in the study? (primary - the paper is focused on HVD determination; secondary - HVD are mentioned but not studied, e.g., as part of the discussion or future work)

    HVD determination-related information
    19) HVD definition and type of value - how is the HVD defined in the article, and/or what equivalent term is used?
    20) HVD indicators - what are the indicators to identify HVD? How were they identified? (components & relationships, "input -> output")
    21) A framework for HVD determination - is there a framework presented for HVD identification? What components does it consist of, and what are the relationships between these components? (detailed description)
    22) Stakeholders and their roles - what stakeholders or actors does HVD determination involve? What are their roles?
    23) Data - what data do HVD cover?
    24) Level (if relevant) - what is the level of the HVD determination covered in the article? (e.g., city, regional, national, international)

    Format of the files: .xls, .csv (for the first spreadsheet only), .odt, .docx

    Licenses or restrictions: CC-BY

    For more info, see README.txt
