100+ datasets found
  1. Citation Knowledge with Section and Context

    • ordo.open.ac.uk
    zip
    Updated May 5, 2020
    Cite
    Anita Khadka (2020). Citation Knowledge with Section and Context [Dataset]. http://doi.org/10.21954/ou.rd.11346848.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 5, 2020
    Dataset provided by
    The Open University
    Authors
    Anita Khadka
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This dataset contains information from scientific publications written by authors who have published papers at the RecSys conference. It contains four files of information extracted from scientific publications. The details of each file are explained below:

    i) all_authors.tsv: the details of authors who published research papers at the RecSys conference, including author identifiers in various forms (number, ORCID id, DBLP URL, DBLP key and Google Scholar URL), first name, last name and affiliation (where they work).

    ii) all_publications.tsv: the details of publications authored by the authors in all_authors.tsv (please note the list does not contain all of the authors' publications; refer to the paper for further details). The details include publication identifiers in different forms (number, DBLP key, DBLP URL, Google Scholar URL), title, filtered title, published date, publishing conference and paper abstract.

    iii) selected_author_publications-information.tsv: the identifiers of the selected authors and their publications used in our experiment.

    iv) selected_publication_citations-information.tsv: information on the selected publications, covering both citing and cited papers used in our experiment: the citing paper's identifier, the cited paper's identifier, citation title, citation filtered title, the sentence before the citation, the citing sentence, the sentence after the citation, and the citation position (section). Please note it does not contain information on all citations cited in the publications.

    For more detail, please refer to the paper. This dataset is for research purposes only; if you use this dataset, please cite our paper "Capturing and exploiting citation knowledge for recommending recently published papers", due to be published in the Web2Touch track 2020 (not yet published).
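    As a rough illustration, the tab-separated files can be read with the Python standard library. Note that the column names used below ("author_id", "publication_id") are assumptions for the sketch, not the dataset's documented headers; check the actual TSV headers before relying on them.

    ```python
    # Sketch: load a TSV link table and count publications per author.
    # Column names ("author_id", "publication_id") are assumed, not
    # taken from the dataset documentation.
    import csv
    import io

    def load_tsv(fileobj):
        """Read a tab-separated file into a list of dicts keyed by header."""
        return list(csv.DictReader(fileobj, delimiter="\t"))

    def publications_per_author(link_rows):
        """Count publications per author from the author-publication links."""
        counts = {}
        for row in link_rows:
            counts[row["author_id"]] = counts.get(row["author_id"], 0) + 1
        return counts

    # Tiny in-memory stand-in for selected_author_publications-information.tsv:
    sample = io.StringIO("author_id\tpublication_id\nA1\tP1\nA1\tP2\nA2\tP3\n")
    links = load_tsv(sample)
    print(publications_per_author(links))  # {'A1': 2, 'A2': 1}
    ```

    The same loader works for the other three files; only the per-file column handling differs.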

  2. Data from: Google Scholar as a Data Source for Research Assessment in the Social Sciences

    • zenodo.org
    bin
    Updated Dec 31, 2021
    Cite
    Güleda Doğan (2021). Google Scholar as a Data Source for Research Assessment in the Social Sciences [Dataset]. http://doi.org/10.5281/zenodo.5079007
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 31, 2021
    Dataset provided by
    Edward Elgar Publishing
    Authors
    Güleda Doğan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Column 1

    Source

    Data sources from which the publications were retrieved. Values for this column are “Google Scholar”, “Scopus”, and “Web of Science”.

    Column 2

    Authors

    The authors of the publications. This column is kept as additional information for data verification; it was not used in the analysis and has not been standardized.

    Column 3

    Title

    Titles of the publications. For non-English publications, the English title, if available, is kept in this column; otherwise the original title was entered. Titles were checked, and errors and omissions were corrected. Corrected titles are marked in red.

    Column 4

    Title translated with Google Translate

    This column holds the English-translated titles of publications that do not have English titles; Google Translate was used to detect the language and translate. For publications with an English title, the expression [Title in English] was entered. The translations kept in this field were used in the analysis made with VOSviewer. The column is marked in red as it is newly added data.

    Column 5

    Language

    Language of the publications. The languages of all publications were checked, missing data were completed and errors were corrected. If the language of a publication could not be determined, the value is [Not found]. Cells with additions or corrections are marked in red.

    Column 6

    Document type

    Types of the documents. Publication type information was checked for all publications, missing values were completed and corrections were made. All amended cells are marked in red. Article and Review types are both referred to as “Article” in the text.

    Column 7

    Full-text available

    Values for this column are “Yes” and “No”: “Yes” if the full text of the publication is accessible via the web, otherwise “No”.

    Column 8

    On research evaluation

    Values for this column are “Yes” and “No”. Using the title and/or abstract, we tried to determine whether each publication relates to research evaluation: “Yes” if relevant, “No” if not. The column is marked in red as it is newly added data.

    Column 9

    Publication year

    The publication years of the documents. Missing publication years were completed, and existing years were checked and corrected where necessary. If the year of publication could not be found, it is indicated as [Not found].

    Column 10

    English abstract

    Abstracts of the publications. If an accessible English abstract is available for a publication, it is kept in this column; [Not found/Not available] is used for missing values. Abstracts that were added, changed, corrected, or completed are marked in red.
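    As a sketch of how the sheet's flag columns can be combined, the snippet below tallies records marked relevant to research evaluation per source. The column names follow the descriptions above; the in-memory sample rows are purely illustrative, not taken from the dataset.

    ```python
    # Sketch: count records flagged "Yes" in "On research evaluation",
    # grouped by the "Source" column (Column 1 / Column 8 above).
    import csv
    import io
    from collections import Counter

    def evaluation_counts_by_source(rows):
        """Tally research-evaluation-related records per data source."""
        return Counter(r["Source"] for r in rows
                       if r["On research evaluation"] == "Yes")

    # Illustrative stand-in for the exported spreadsheet:
    sample = io.StringIO(
        "Source,On research evaluation\n"
        "Google Scholar,Yes\n"
        "Scopus,No\n"
        "Google Scholar,Yes\n"
        "Web of Science,Yes\n"
    )
    rows = list(csv.DictReader(sample))
    print(evaluation_counts_by_source(rows))
    # Counter({'Google Scholar': 2, 'Web of Science': 1})
    ```

    The same pattern applies to the other Yes/No columns, e.g., "Full-text available".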

  3. Data from: Where do engineering students really get their information? : using reference list analysis to improve information literacy programs

    • opal.latrobe.edu.au
    • researchdata.edu.au
    pdf
    Updated Mar 13, 2025
    Cite
    Clayton Bolitho (2025). Where do engineering students really get their information? : using reference list analysis to improve information literacy programs [Dataset]. http://doi.org/10.4225/22/59d45f4b696e4
    Explore at:
    Available download formats: pdf
    Dataset updated
    Mar 13, 2025
    Dataset provided by
    La Trobe
    Authors
    Clayton Bolitho
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    BACKGROUND
    An understanding of the resources which engineering students use to write their academic papers provides information about student behaviour as well as the effectiveness of information literacy programs designed for engineering students. One of the most informative sources which can be used to determine the nature of the material that students use is the bibliography at the end of the students’ papers. While reference list analysis has been utilised in other disciplines, few studies have focussed on engineering students or used the results to improve the effectiveness of information literacy programs. Gadd, Baldwin and Norris (2010) found that civil engineering students undertaking a final-year research project cited journal articles more than other types of material, followed by books and reports, with web sites ranked fourth. Several studies, however, have shown that in their first year at least, most students prefer to use Internet search engines (Ellis & Salisbury, 2004; Wilkes & Gurney, 2009).

    PURPOSE
    The aim of this study was to find out exactly what resources undergraduate students studying civil engineering at La Trobe University were using and, in particular, the extent to which students were utilising the scholarly resources paid for by the Library. A secondary purpose was to ascertain whether information literacy sessions delivered to those students had any influence on the resources used, and to investigate ways in which the information literacy component of the unit could be improved to encourage students to make better use of the resources purchased by the Library to support their research.

    DESIGN/METHOD
    The study examined student bibliographies for three civil engineering group projects at the Bendigo Campus of La Trobe University over a two-year period: two first-year units (CIV1EP – Engineering Practice) and one second-year unit (CIV2GR – Engineering Group Research). All units included a mandatory library session at the start of the project, where student groups were required to meet with the relevant Faculty Librarian for guidance. In each case, the Faculty Librarian highlighted specific resources relevant to the topic, including books, e-books, video recordings, websites and internet documents. The students were also shown tips for searching the Library catalogue, Google Scholar, LibSearch (the LTU Library’s research and discovery tool) and ProQuest Central. Subject-specific databases for civil engineering and science were also referred to. After the final reports for each project had been submitted and assessed, the Faculty Librarian contacted the lecturer responsible for the unit, requesting copies of the student bibliographies for each group. References from each bibliography were then entered into EndNote. The Faculty Librarian grouped them according to various facets, including the name of the unit and the group within the unit; the material type of the item being referenced; and whether the item required a Library subscription to access it. A total of 58 references were collated for the 2010 CIV1EP unit; 237 references for the 2010 CIV2GR unit; and 225 references for the 2011 CIV1EP unit.

    INTERIM FINDINGS
    The initial findings showed that the student bibliographies for the three group projects were primarily made up of freely available internet resources which required no Library subscription. For the 2010 CIV1EP unit, all 58 resources used were freely available on the Internet. For the 2011 CIV1EP unit, 28 of the 225 resources used (12.44%) required a Library subscription or purchase for access, while the second-year students (CIV2GR) used a greater variety of resources, with 71 of the 237 resources used (29.96%) requiring a Library subscription or purchase for access. The results suggest that the library sessions had little or no influence on the 2010 CIV1EP group, but the sessions may have assisted students in the 2011 CIV1EP and 2010 CIV2GR groups to find books, journal articles and conference papers, which were all represented in their bibliographies.

    FURTHER RESEARCH
    The next step in the research is to investigate ways to increase the representation of scholarly references (found via resources other than Google) in student bibliographies. It is anticipated that such a change would lead to an overall improvement in the quality of the student papers. One way of achieving this would be to make it mandatory for students to include a specified number of journal articles, conference papers, or scholarly books in their bibliographies. It is also anticipated that embedding La Trobe University’s Inquiry/Research Quiz (IRQ) using a constructively aligned approach will further enhance the students’ research skills and increase their ability to find suitable scholarly material which relates to their topic. This has already been done successfully (Salisbury, Yager, & Kirkman, 2012).

    CONCLUSIONS & CHALLENGES
    The study shows that most students rely heavily on the free Internet for information. Students don’t naturally use Library databases or scholarly resources such as Google Scholar to find information without encouragement from their teachers, tutors and/or librarians. It is acknowledged that the use of scholarly resources doesn’t automatically lead to a high-quality paper: resources must be used appropriately, and students also need the skills to identify and synthesise key findings in the existing literature and relate these to their own paper. Ideally, students should be able to see the benefit of using scholarly resources in their papers and continue to seek these out even when it’s not a specific assessment requirement, though it can’t be assumed that this will be the outcome.

    REFERENCES
    Ellis, J., & Salisbury, F. (2004). Information literacy milestones: building upon the prior knowledge of first-year students. Australian Library Journal, 53(4), 383-396.
    Gadd, E., Baldwin, A., & Norris, M. (2010). The citation behaviour of civil engineering students. Journal of Information Literacy, 4(2), 37-49.
    Salisbury, F., Yager, Z., & Kirkman, L. (2012). Embedding Inquiry/Research: Moving from a minimalist model to constructive alignment. Paper presented at the 15th International First Year in Higher Education Conference, Brisbane. Retrieved from http://www.fyhe.com.au/past_papers/papers12/Papers/11A.pdf
    Wilkes, J., & Gurney, L. J. (2009). Perceptions and applications of information literacy by first year applied science students. Australian Academic & Research Libraries, 40(3), 159-171.
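    The subscription-resource shares reported in the interim findings follow directly from the collated reference counts; a quick check in Python:

    ```python
    # Reproduce the percentages reported in the interim findings from
    # the (subscription-required, total references) counts per unit.
    counts = {
        "2010 CIV1EP": (0, 58),
        "2011 CIV1EP": (28, 225),
        "2010 CIV2GR": (71, 237),
    }

    for unit, (paid, total) in counts.items():
        share = 100 * paid / total
        print(f"{unit}: {paid}/{total} = {share:.2f}% subscription-based")
    # 2010 CIV1EP: 0/58 = 0.00% subscription-based
    # 2011 CIV1EP: 28/225 = 12.44% subscription-based
    # 2010 CIV2GR: 71/237 = 29.96% subscription-based
    ```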

  4. Dataset: A Systematic Literature Review on the topic of High-value datasets

    • zenodo.org
    • data.niaid.nih.gov
    bin, png, txt
    Updated Jul 11, 2024
    Cite
    Anastasija Nikiforova; Nina Rizun; Magdalena Ciesielska; Charalampos Alexopoulos; Andrea Miletič (2024). Dataset: A Systematic Literature Review on the topic of High-value datasets [Dataset]. http://doi.org/10.5281/zenodo.8075918
    Explore at:
    Available download formats: png, bin, txt
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anastasija Nikiforova; Nina Rizun; Magdalena Ciesielska; Charalampos Alexopoulos; Andrea Miletič
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study ("Towards High-Value Datasets determination for data-driven development: a systematic literature review") conducted by Anastasija Nikiforova (University of Tartu), Nina Rizun, Magdalena Ciesielska (Gdańsk University of Technology), Charalampos Alexopoulos (University of the Aegean) and Andrea Miletič (University of Zagreb).
    It is being made public both to act as supplementary data for the "Towards High-Value Datasets determination for data-driven development: a systematic literature review" paper (a pre-print is available in Open Access here -> https://arxiv.org/abs/2305.10234) and so that other researchers can use these data in their own work.


    The protocol is intended for the systematic literature review on the topic of high-value datasets, with the aim of gathering information on how the topic of high-value datasets (HVD) and their determination has been reflected in the literature over the years and what these studies have found to date, incl. the indicators used in them, involved stakeholders, data-related aspects, and frameworks. The data in this dataset were collected as a result of the SLR over Scopus, Web of Science, and the Digital Government Research library (DGRL) in 2023.

    ***Methodology***

    To understand how HVD determination has been reflected in the literature over the years and what these studies have found to date, all relevant literature covering this topic was studied. To this end, the SLR was carried out by searching the digital libraries covered by Scopus, Web of Science (WoS), and the Digital Government Research library (DGRL).

    These databases were queried for the keywords ("open data" OR "open government data") AND ("high-value data*" OR "high value data*"), which were applied to the article title, keywords, and abstract to limit the results to papers in which these objects were primary research objects rather than merely mentioned in the body, e.g., as future work. After deduplication, 11 articles were found to be unique and were further checked for relevance. As a result, a total of 9 articles were examined in depth. Each study was independently examined by at least two authors.
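    The deduplication step described above can be sketched as follows; the record fields ("title", "db") and the sample hits are illustrative stand-ins, not the databases' actual export format.

    ```python
    # Sketch: merge hits from several indexing databases and keep one
    # record per normalised title. Field names are illustrative only.
    def deduplicate(records):
        """Keep the first record seen for each normalised title."""
        seen, unique = set(), []
        for rec in records:
            key = rec["title"].strip().lower()
            if key not in seen:
                seen.add(key)
                unique.append(rec)
        return unique

    hits = [
        {"title": "High-Value Datasets: A Review", "db": "Scopus"},
        {"title": "high-value datasets: a review", "db": "WoS"},
        {"title": "Determining HVD Indicators", "db": "DGRL"},
    ]
    print(len(deduplicate(hits)))  # 2
    ```

    In practice one would normalise on DOI where available, since titles vary slightly across databases.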

    To attain the objective of our study, we developed the protocol, where the information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design- related information, (3) quality-related information, (4) HVD determination-related information.

    ***Test procedure***
    Each study was independently examined by at least two authors; after an in-depth examination of the full text of the article, the structured protocol was filled in for each study.
    The structure of the protocol is available in the supplementary files (see Protocol_HVD_SLR.odt, Protocol_HVD_SLR.docx).
    The data collected for each study by two researchers were then synthesized into one final version by a third researcher.

    ***Description of the data in this data set***

    Protocol_HVD_SLR provides the structure of the protocol.
    Spreadsheet #1 provides the filled protocol for the relevant studies.
    Spreadsheet #2 provides the list of results after the search over the three indexing databases, i.e., before filtering out irrelevant studies.

    The information on each selected study was collected in four categories:
    (1) descriptive information,
    (2) approach- and research design- related information,
    (3) quality-related information,
    (4) HVD determination-related information

    Descriptive information
    1) Article number - a study number, corresponding to the study number assigned in an Excel worksheet
    2) Complete reference - the complete source information to refer to the study
    3) Year of publication - the year in which the study was published
    4) Journal article / conference paper / book chapter - the type of the paper -{journal article, conference paper, book chapter}
    5) DOI / Website- a link to the website where the study can be found
    6) Number of citations - the number of citations of the article in Google Scholar, Scopus, Web of Science
    7) Availability in OA - availability of an article in the Open Access
    8) Keywords - keywords of the paper as indicated by the authors
    9) Relevance for this study - what is the relevance level of the article for this study? {high / medium / low}

    Approach- and research design-related information
    10) Objective / RQ - the research objective / aim, established research questions
    11) Research method (including unit of analysis) - the methods used to collect data, including the unit of analysis (country, organisation, specific unit that has been analysed, e.g., the number of use-cases, scope of the SLR etc.)
    12) Contributions - the contributions of the study
    13) Method - whether the study uses a qualitative, quantitative, or mixed methods approach?
    14) Availability of the underlying research data - whether there is a reference to the publicly available underlying research data, e.g., transcriptions of interviews, collected data, or an explanation why these data are not shared?
    15) Period under investigation - period (or moment) in which the study was conducted
    16) Use of theory / theoretical concepts / approaches - does the study mention any theory / theoretical concepts / approaches? If any theory is mentioned, how is theory used in the study?

    Quality- and relevance-related information
    17) Quality concerns - whether there are any quality concerns (e.g., limited information about the research methods used)?
    18) Primary research object - is the HVD a primary research object in the study? (primary - the paper is focused on HVD determination; secondary - mentioned but not studied (e.g., as part of discussion, future work etc.))

    HVD determination-related information
    19) HVD definition and type of value - how is the HVD defined in the article and / or any other equivalent term?
    20) HVD indicators - what are the indicators to identify HVD? How were they identified? (components & relationships, “input -> output")
    21) A framework for HVD determination - is there a framework presented for HVD identification? What components does it consist of and what are the relationships between these components? (detailed description)
    22) Stakeholders and their roles - what stakeholders or actors does HVD determination involve? What are their roles?
    23) Data - what data do HVD cover?
    24) Level (if relevant) - what is the level of the HVD determination covered in the article? (e.g., city, regional, national, international)


    ***Format of the file***
    .xls, .csv (for the first spreadsheet only), .odt, .docx

    ***Licenses or restrictions***
    CC-BY

    For more info, see README.txt

  5. Conceptualization of public data ecosystems

    • data.niaid.nih.gov
    Updated Sep 26, 2024
    Cite
    Martin Lnenicka (2024). Conceptualization of public data ecosystems [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_13842001
    Explore at:
    Dataset updated
    Sep 26, 2024
    Dataset provided by
    Martin Lnenicka
    Anastasija Nikiforova
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems" conducted by Martin Lnenicka (University of Hradec Králové, Czech Republic), Anastasija Nikiforova (University of Tartu, Estonia), Mariusz Luterek (University of Warsaw, Warsaw, Poland), Petar Milic (University of Pristina - Kosovska Mitrovica, Serbia), Daniel Rudmark (Swedish National Road and Transport Research Institute, Sweden), Sebastian Neumaier (St. Pölten University of Applied Sciences, Austria), Karlo Kević (University of Zagreb, Croatia), Anneke Zuiderwijk (Delft University of Technology, Delft, the Netherlands), Manuel Pedro Rodríguez Bolívar (University of Granada, Granada, Spain).

    There is a lack of understanding of the elements that constitute different types of value-adding public data ecosystems and of how these elements form and shape the development of these ecosystems over time, which can lead to misguided efforts to develop future public data ecosystems. The aims of the study are therefore: (1) to explore how public data ecosystems have developed over time and (2) to identify the value-adding elements and formative characteristics of public data ecosystems. Using an exploratory retrospective analysis and a deductive approach, we systematically review 148 studies published between 1994 and 2023. Based on the results, this study presents a typology of public data ecosystems, develops a conceptual model of the elements and formative characteristics that contribute most to value-adding public data ecosystems, and develops a conceptual model of the evolutionary generations of public data ecosystems, the Evolutionary Model of Public Data Ecosystems (EMPDE). Finally, three avenues for a future research agenda are proposed.

    This dataset is being made public both to act as supplementary data for "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems", Telematics and Informatics, and for the Systematic Literature Review component that informs the study.

    Description of the data in this data set

    PublicDataEcosystem_SLR provides the structure of the protocol.

    Spreadsheet #1 provides the list of results after the search over the three indexing databases and filtering out irrelevant studies.

    Spreadsheet #2 provides the protocol structure.

    Spreadsheet #3 provides the filled protocol for the relevant studies.

    The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design-related information, (3) quality-related information, (4) public data ecosystem-related information.

    Descriptive Information

    Article number

    A study number, corresponding to the study number assigned in an Excel worksheet

    Complete reference

    The complete source information to refer to the study (in APA style), including the author(s) of the study, the year in which it was published, the study's title and other source information.

    Year of publication

    The year in which the study was published.

    Journal article / conference paper / book chapter

    The type of the paper, i.e., journal article, conference paper, or book chapter.

    Journal / conference / book

    The journal, conference, or book in which the paper is published.

    DOI / Website

    A link to the website where the study can be found.

    Number of words

    The number of words in the study.

    Number of citations in Scopus and WoS

    The number of citations of the paper in Scopus and WoS digital libraries.

    Availability in Open Access

    Availability of a study in the Open Access or Free / Full Access.

    Keywords

    Keywords of the paper as indicated by the authors (in the paper).

    Relevance for our study (high / medium / low)

    The relevance level of the paper for our study.

    Approach- and research design-related information

    Objective / Aim / Goal / Purpose & Research Questions

    The research objective and established RQs.

    Research method (including unit of analysis)

    The methods used to collect data in the study, including the unit of analysis that refers to the country, organisation, or other specific unit that has been analysed such as the number of use-cases or policy documents, number and scope of the SLR etc.

    Study’s contributions

    The study’s contribution as defined by the authors

    Qualitative / quantitative / mixed method

    Whether the study uses a qualitative, quantitative, or mixed methods approach?

    Availability of the underlying research data

    Whether the paper has a reference to the public availability of the underlying research data e.g., transcriptions of interviews, collected data etc., or explains why these data are not openly shared?

    Period under investigation

    Period (or moment) in which the study was conducted (e.g., January 2021-March 2022)

    Use of theory / theoretical concepts / approaches? If yes, specify them

    Does the study mention any theory / theoretical concepts / approaches? If yes, what theory / concepts / approaches? If any theory is mentioned, how is theory used in the study? (e.g., mentioned to explain a certain phenomenon, used as a framework for analysis, tested theory, theory mentioned in the future research section).

    Quality-related information

    Quality concerns

    Whether there are any quality concerns (e.g., limited information about the research methods used)?

    Public Data Ecosystem-related information

    Public data ecosystem definition

    How is the public data ecosystem defined in the paper, including any equivalent term used (most often infrastructure)? If an alternative term is used, what is the public data ecosystem called in the paper?

    Public data ecosystem evolution / development

    Does the paper define the evolution of the public data ecosystem? If yes, how is it defined and what factors affect it?

    What constitutes a public data ecosystem?

    What constitutes a public data ecosystem (components & relationships) - their "FORM / OUTPUT" presented in the paper (general description with more detailed answers to further additional questions).

    Components and relationships

    What components does the public data ecosystem consist of and what are the relationships between these components? Alternative names for components - element, construct, concept, item, helix, dimension etc. (detailed description).

    Stakeholders

    What stakeholders (e.g., governments, citizens, businesses, Non-Governmental Organisations (NGOs) etc.) does the public data ecosystem involve?

    Actors and their roles

    What actors does the public data ecosystem involve? What are their roles?

    Data (data types, data dynamism, data categories etc.)

    What data does the public data ecosystem cover (or is it intended / designed for)? Refer to all data-related aspects, including but not limited to data types, data dynamism (static data, dynamic data, real-time data, stream), prevailing data categories / domains / topics etc.

    Processes / activities / dimensions, data lifecycle phases

    What processes, activities, dimensions and data lifecycle phases (e.g., locate, acquire, download, reuse, transform, etc.) does the public data ecosystem involve or refer to?

    Level (if relevant)

    What is the level of the public data ecosystem covered in the paper? (e.g., city, municipal, regional, national (=country), supranational, international).

    Other elements or relationships (if any)

    What other elements or relationships does the public data ecosystem consist of?

    Additional comments

    Additional comments (e.g., what other topics affected the public data ecosystems and their elements, what is expected to affect the public data ecosystems in the future, what were important topics by which the period was characterised etc.).

    New papers

    Does the study refer to any other potentially relevant papers?

    Additional references to potentially relevant papers that were found in the analysed paper (snowballing).
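    For illustration only, the protocol fields above could be modelled as a simple record type; the field names below are shorthand invented for this sketch, not the dataset's actual column labels.

    ```python
    # Sketch: a record type mirroring a subset of the protocol fields.
    # Field names are illustrative shorthand, not the spreadsheet headers.
    from dataclasses import dataclass, field

    @dataclass
    class ProtocolEntry:
        article_number: int
        complete_reference: str
        year: int
        paper_type: str            # journal article / conference paper / book chapter
        definition: str = ""       # public data ecosystem definition
        components: list = field(default_factory=list)
        stakeholders: list = field(default_factory=list)
        level: str = ""            # e.g. city, national, international

    entry = ProtocolEntry(1, "Author (2020). Title.", 2020, "journal article",
                          level="national")
    print(entry.year, entry.level)  # 2020 national
    ```

    A structure like this makes it straightforward to synthesize the two researchers' protocols into one final record per study.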

    Format of the file: .xls, .csv (for the first spreadsheet only), .docx

    Licenses or restrictions: CC-BY

    For more info, see README.txt

  6. Map of articles about "Teaching Open Science"

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Isabel Steinhardt (2020). Map of articles about "Teaching Open Science" [Dataset]. http://doi.org/10.5281/zenodo.3371415
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Isabel Steinhardt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This description is part of the blog post "Systematic Literature Review of teaching Open Science" https://sozmethode.hypotheses.org/839

    In my opinion, we do not pay enough attention to teaching Open Science in higher education. Therefore, I designed a seminar to teach students the practices of Open Science through qualitative research. About this seminar, I wrote the article ”Teaching Open Science and qualitative methods“. For that article, I started to review the literature on ”Teaching Open Science“. The result of my literature review is that certain aspects of Open Science are used in teaching; however, Open Science with all its aspects (Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools) is not an issue in publications about teaching.

    Based on this insight, I have started a systematic literature review. I quickly realized that I need help to analyse and interpret the articles and to evaluate my preliminary findings. The different disciplinary cultures of teaching the various aspects of Open Science are especially challenging, as I, a social scientist, do not have enough insight to interpret the results correctly. I would therefore like to invite you to participate in this research project!

    I am now looking for people who would like to join a collaborative process to further explore and write the systematic literature review on "Teaching Open Science", because I want to turn this project into a Massive Open Online Paper (MOOP). According to the ten rules of Tennant et al. (2019) on MOOPs, it is crucial to find a core group that is enthusiastic about the topic. I am therefore looking for people who are interested in creating the structure of the paper and writing it together with me, as well as people who want to search for and review literature or evaluate the literature I have already found. Together with the interested persons, I would then define the rules for the project (cf. Tennant et al. 2019). So if you are interested in contributing to the further search for articles and/or in enhancing the interpretation and writing of results, please get in touch. For everyone interested in contributing, the list of articles collected so far is freely accessible at Zotero: https://www.zotero.org/groups/2359061/teaching_open_science. The figure shown below provides a first overview of my ongoing work. I created the figure with the free software yEd and uploaded the file to Zenodo, so everyone can download and work with it:

    To make transparent what I have done so far, I will first introduce what a systematic literature review is. Secondly, I describe the decisions I made to start with the systematic literature review. Third, I present the preliminary results.

    Systematic literature review – an Introduction

    Systematic literature reviews "are a method of mapping out areas of uncertainty, and identifying where little or no relevant research has been done" (Petticrew/Roberts 2008: 2). Fink defines the systematic literature review as a "systematic, explicit, and reproducible method for identifying, evaluating, and synthesizing the existing body of completed and recorded work produced by researchers, scholars, and practitioners" (Fink 2019: 6). The aim of a systematic literature review is to overcome the subjectivity of a researcher's search for literature. However, there can never be an entirely objective selection of articles, because the researcher has already made a preselection, for example by deciding on search strings such as "Teaching Open Science". In this respect, transparency is the core criterion for a high-quality review.

    In order to achieve high quality and transparency, Fink (2019: 6-7) proposes the following seven steps:

    1. Selecting a research question.
    2. Selecting the bibliographic database.
    3. Choosing the search terms.
    4. Applying practical screening criteria.
    5. Applying methodological screening criteria.
    6. Doing the review.
    7. Synthesizing the results.

    I have adapted these steps for the “Teaching Open Science” systematic literature review. In the following, I will present the decisions I have made.

    Systematic literature review – decisions I made

    1. Research question: I am interested in the following research questions: How is Open Science taught in higher education? Is Open Science taught in its full range with all aspects like Open Access, Open Data, Open Methodology, Open Science Evaluation and Open Science Tools? Which aspects are taught? Are there disciplinary differences as to which aspects are taught and, if so, why are there such differences?
    2. Databases: I started my search at the Directory of Open Access Journals (DOAJ). "DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals." (https://doaj.org/) Secondly, I used the Bielefeld Academic Search Engine (BASE). BASE is operated by Bielefeld University Library and is "one of the world's most voluminous search engines especially for academic web resources" (base-search.net). Both platforms are non-commercial and focus on Open Access publications, and thus differ from commercial publication databases such as Web of Science and Scopus. For this project, I deliberately decided against commercial providers and against restricting the search to indexed journals, because my explicit aim was to find articles that are open in the sense of Open Science.
    3. Search terms: To identify articles about teaching Open Science I used the following search strings: "teaching open science" OR teaching "open science" OR teach "open science". The topic search looked for these strings in the title, abstract and keywords of articles. Since these are very narrow search terms, I broadened the method: I searched the reference lists of all articles found by this search for further relevant literature, and using Google Scholar I checked which other authors cited the articles in the sample. If the articles found in this way met my methodological criteria, I included them in the sample and in turn looked through their reference lists and citations on Google Scholar. This process has not yet been completed.
    4. Practical screening criteria: I included English- and German-language articles in the sample, as I speak these languages (articles in other languages are very welcome if there are people who can interpret them!). Only journal articles, articles in edited volumes, working papers and conference papers from proceedings were included. I checked whether the journals were predatory journals; such articles were not included. I did not include blog posts, books or newspaper articles. I only included articles whose full texts are accessible via my institution (University of Kassel). As a result, recently published articles at Elsevier could not be included because of the special situation in Germany regarding Project DEAL (https://www.projekt-deal.de/about-deal/). For articles that are not freely accessible, I checked whether there is an accessible version in a repository or whether a preprint is available. If this was not the case, the article was not included. I started the analysis in May 2019.
    5. Methodological criteria: The method described above of checking reference lists has the problem of subjectivity. I therefore hope that other people will be interested in this project and will evaluate my decisions. I used the following criteria as the basis for my decisions: First, the articles must focus on teaching; this means, for example, that they must describe how a course was designed and carried out. Second, at least one aspect of Open Science has to be addressed. The aspects can be very diverse (FOSS, repositories, wikis, data management, etc.) but have to comply with the principles of openness. This means, for example, that I included an article if it deals with the use of FOSS in class and addresses the openness of FOSS, but I did not include articles whose authors describe the use of a particular free and open source software for teaching without addressing the principles of openness or re-use.
    6. Doing the review: Due to the methodical approach of going through the reference lists, it is possible to create a map of how the articles relate to each other. This results in thematic clusters and connections between clusters. The starting points for the map were four articles (Cook et al. 2018; Marsden, Thompson, and Plonsky 2017; Petras et al. 2015; Toelch and Ostwald 2018) that I found using the databases and criteria described above. I used yEd to generate the network. "yEd is a powerful desktop application that can be used to quickly and effectively generate high-quality diagrams." (https://www.yworks.com/products/yed) In the network, arrows show which articles are cited in an article and which articles are cited by others as well. In addition, I made an initial rough classification of the content using colours, based on the contents mentioned in the articles' titles and abstracts. This rough content classification requires a more exact, i.e., content-based subdivision and

  7. Data of the article "Journal research data sharing policies: a study of...

    • zenodo.org
    Updated May 26, 2021
    + more versions
    Antti Rousi (2021). Data of the article "Journal research data sharing policies: a study of highly-cited journals in neuroscience, physics, and operations research" [Dataset]. http://doi.org/10.5281/zenodo.3635511
    Explore at:
    Dataset updated
    May 26, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Antti Rousi
    Description

    The journals’ author guidelines and/or editorial policies were examined to determine whether they take a stance on the availability of the underlying data of the submitted article. The mere explicated possibility of providing supplementary material along with the submitted article was not considered a research data policy in the present study. Furthermore, the present article excluded source code and algorithms from its scope, and policies related to them are thus not included in the analysis.

    For selection of journals within the field of neurosciences, Clarivate Analytics’ InCites Journal Citation Reports database was searched using categories of neurosciences and neuroimaging. From the results, journals with the 40 highest Impact Factor (for the year 2017) indicators were extracted for scrutiny of research data policies. Respectively, the selection journals within the field of physics was created by performing a similar search with the categories of physics, applied; physics, atomic, molecular & chemical; physics, condensed matter; physics, fluids & plasmas; physics, mathematical; physics, multidisciplinary; physics, nuclear and physics, particles & fields. From the results, journals with the 40 highest Impact Factor indicators were again extracted for scrutiny. Similarly, the 40 journals representing the field of operations research were extracted by using the search category of operations research and management.

    Journal-specific data policies were sought from journal-specific websites providing author guidelines or editorial policies. The examination of journal data policies was done in May 2019. The primary data source was journal-specific author guidelines. If journal guidelines explicitly linked to the publisher’s general policy with regard to research data, these were used in the analyses of the present article. If a journal-specific research data policy, or the lack of one, was inconsistent with the publisher’s general policies, the journal-specific policies and guidelines were prioritized and used in the present article’s data. If a journal’s author guidelines were not openly available online due to, e.g., accepting submissions on an invite-only basis, the journal was not included in the data of the present article. Journals that exclusively publish review articles were also excluded and replaced with the journal having the next highest Impact Factor indicator, so that each set representing the three fields of science consisted of 40 journals. The final data thus consisted of 120 journals in total.

    ‘Public deposition’ refers to a scenario where the researcher deposits data in a public repository and thus hands the administrative role over the data to the receiving repository. ‘Scientific sharing’ refers to a scenario where the researcher administers his or her data locally and provides it on request to interested readers. Note that none of the journals examined in the present article required that all data types underlying a submitted work be deposited in public data repositories. However, some journals required public deposition of data of specific types. Within the journal research data policies examined in the present article, these data types are well represented by the Springer Nature policy on “Availability of data, materials, code and protocols” (Springer Nature, 2018), that is, DNA and RNA data; protein sequences and DNA and RNA sequencing data; genetic polymorphisms data; linked phenotype and genotype data; gene expression microarray data; proteomics data; macromolecular structures and crystallographic data for small molecules. Furthermore, the registration of clinical trials in a public repository was also considered as a data type in this study. The term ‘specific data types’ used in the custom coding framework of the present study thus refers to both life sciences data and public registration of clinical trials. These data types have community-endorsed public repositories, where deposition was most often mandated within the journals’ research data policies.

    The term ‘location’ refers to whether the journal’s data policy provides suggestions or requirements for the repositories or services used to share the underlying data of the submitted works. A mere general reference to ‘public repositories’ was not considered a location suggestion; only references to individual repositories and services were. The category of ‘immediate release of data’ examines whether the journal’s research data policy addresses the timing of publication of the underlying data of submitted works. Note that even though a journal may only encourage public deposition of the data, its editorial processes could be set up so that they lead to the publication of either the research data or the research data metadata in conjunction with the publication of the submitted work.
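
    For illustration, the coding categories described above could be captured in a small record type. The field names below are assumptions made for this sketch, not the authors' actual coding sheet:

```python
from dataclasses import dataclass

# Hypothetical representation of the study's policy-coding categories.
@dataclass
class JournalDataPolicy:
    journal: str
    field: str                 # neuroscience, physics, or operations research
    public_deposition: bool    # data deposited in a public repository
    scientific_sharing: bool   # data administered locally, shared on request
    specific_data_types: bool  # mandated deposition for e.g. DNA/RNA data
    location_suggested: bool   # named repositories, not just "public repositories"
    immediate_release: bool    # policy addresses the timing of data publication

# An invented example record, not a journal from the study's data.
example = JournalDataPolicy(
    journal="Example Journal", field="physics",
    public_deposition=False, scientific_sharing=True,
    specific_data_types=False, location_suggested=True, immediate_release=False)
```

    Coding each of the 120 journals into such a record is what makes the categories countable and comparable across the three fields.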

  8. Scicite (Classifying Citation Intents In Papers)

    • kaggle.com
    Updated Nov 30, 2022
    The Devastator (2022). Scicite (Classifying Citation Intents In Papers) [Dataset]. https://www.kaggle.com/datasets/thedevastator/harvesting-scholarly-insight-with-scicite
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Nov 30, 2022
    Dataset provided by
    Kaggle
    Authors
    The Devastator
    License

    CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Scicite (Classifying Citation Intents In Papers)

    Classifying citation intents in academic papers

    By Huggingface Hub [source]

    About this dataset

    Discover a world of knowledge and power with scicite! Through its labeled data of scholarly citations extracted from scientific articles, scicite unlocks the key to uncovering information in multiple fields like computer science, biomedicine, ecology and beyond. Laid out in easily digestible columns including strings, section names, labels, isKeyCitations, label2s and more – you’ll soon find yourself losing track of time as you explore this goldmine of facts and figures. With a quick glance at each entry noted down in the dataset’s information log, you can quickly start pinpointing pertinent pieces of info straight away; from sources to key citations to start/end indices that say it all. Don't be left behind - unlock the power hidden within today!


    How to use the dataset

    This dataset consists of three CSV files, each containing different elements related to scholarly citations gathered from scientific articles: train.csv, test.csv and validation.csv. These can be used in a variety of ways in order to gain insight into the research process and improve its accuracy and efficiency.

    • Extracting useful information from citations: The labels attached to each citation section can help in extracting specific information about the sources cited or any other data included for research purposes. Additionally, isKeyCitation indicates whether the source referred to is a key citation, which researchers or practitioners could then examine in greater detail.

    • Identifying relationships between citations: scicite's sectionName column helps identify related elements of writing, such as introductions and abstracts, enabling the identification of potential relationships between these elements and the references found within them, and thus a better understanding of the connections scholars have made in previous research.

    • Improving accuracy in data gathering: With the string, citeStart and citeEnd columns available along with source labels, one can easily identify whether certain references are repeated multiple times, while also double-checking accuracy through the start/end values associated with them.

    • Validation purposes: Last but not least, one can use this dataset to validate documents written by scholars for peer review, where similar sections found in prior unrelated documents can serve as reference points that need to match, signalling correctness on the original authors' part.

    Research Ideas

    • Developing a search engine to quickly find citations relevant to specific topics and research areas.
    • Creating algorithms that can predict key citations and streamline the research process by automatically including only the most important references in a paper.
    • Designing AI systems that can accurately classify, analyze and summarize different scholarly works based on the citation frequency, source type & label assigned to them

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: validation.csv

    | Column name   | Description                                                                  |
    |:--------------|:-----------------------------------------------------------------------------|
    | string        | The string of text associated with the citation. (String)                     |
    | sectionName   | The name of the section the citation is found in. (String)                    |
    | label         | The label associated with the citation. (String)                              |
    | isKeyCitation | A boolean value indicating whether the citation is a key citation. (Boolean)  |
    | label2        | The second label associated with the citation. (String)                       |
    | citeEnd       | The end index of the citation in the text. (Integer)                          |
    | citeStart     | The start index of the citation in the text. (Integer)                        |
    | source        | The source of the citation. (String)                                          | ...
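
    As a rough sketch of how the citeStart/citeEnd indices relate to the string column (the example row below is invented, not taken from the dataset):

```python
import csv
import io

# One invented row mimicking the layout of scicite's validation.csv.
raw = io.StringIO(
    "string,sectionName,label,isKeyCitation,label2,citeEnd,citeStart,source\n"
    '"Deep models improve parsing [12].",Introduction,background,True,,32,28,explicit\n'
)
row = next(csv.DictReader(raw))

# citeStart/citeEnd are character offsets into the string column.
start, end = int(row["citeStart"]), int(row["citeEnd"])
citation_span = row["string"][start:end]  # recovers the "[12]" marker
```

    Slicing the string column by these offsets is also a quick sanity check that the indices in a row are consistent.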

  9. DataSheet1_Data Sources for Drug Utilization Research in Brazil—DUR-BRA...

    • frontiersin.figshare.com
    • figshare.com
    xlsx
    Updated Jun 15, 2023
    + more versions
    Lisiane Freitas Leal; Claudia Garcia Serpa Osorio-de-Castro; Luiz Júpiter Carneiro de Souza; Felipe Ferre; Daniel Marques Mota; Marcia Ito; Monique Elseviers; Elisangela da Costa Lima; Ivan Ricardo Zimmernan; Izabela Fulone; Monica Da Luz Carvalho-Soares; Luciane Cruz Lopes (2023). DataSheet1_Data Sources for Drug Utilization Research in Brazil—DUR-BRA Study.xlsx [Dataset]. http://doi.org/10.3389/fphar.2021.789872.s001
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 15, 2023
    Dataset provided by
    Frontiers
    Authors
    Lisiane Freitas Leal; Claudia Garcia Serpa Osorio-de-Castro; Luiz Júpiter Carneiro de Souza; Felipe Ferre; Daniel Marques Mota; Marcia Ito; Monique Elseviers; Elisangela da Costa Lima; Ivan Ricardo Zimmernan; Izabela Fulone; Monica Da Luz Carvalho-Soares; Luciane Cruz Lopes
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Brazil
    Description

    Background: In Brazil, studies that map electronic healthcare databases in order to assess their suitability for use in pharmacoepidemiologic research are lacking. We aimed to identify, catalogue, and characterize Brazilian data sources for Drug Utilization Research (DUR). Methods: The present study is part of the project entitled "Publicly Available Data Sources for Drug Utilization Research in Latin American (LatAm) Countries." A network of Brazilian health experts was assembled to map secondary administrative data from healthcare organizations that might provide information related to medication use. A multi-phase approach including internet search of institutional government websites, traditional bibliographic databases, and experts' input was used for mapping the data sources. The reviewers searched, screened and selected the data sources independently; disagreements were resolved by consensus. Data sources were grouped into the following categories: 1) automated databases; 2) Electronic Medical Records (EMR); 3) national surveys or datasets; 4) adverse event reporting systems; and 5) others. Each data source was characterized by accessibility, geographic granularity, setting, type of data (aggregate or individual-level), and years of coverage. We also searched for publications related to each data source. Results: A total of 62 data sources were identified and screened; 38 met the eligibility criteria for inclusion and were fully characterized. We grouped 23 (60%) as automated databases, four (11%) as adverse event reporting systems, four (11%) as EMRs, three (8%) as national surveys or datasets, and four (11%) as other types. Eighteen (47%) were classified as publicly and conveniently accessible online, providing information at the national level. Most of them offered more than 5 years of comprehensive data coverage, and presented data at both the individual and aggregated levels. No information about population coverage was found. Drug coding is not uniform; each data source has its own coding system, depending on the purpose of the data. At least one scientific publication was found for each publicly available data source. Conclusions: There are several types of data sources for DUR in Brazil, but a uniform system for drug classification and data quality evaluation does not exist. The extent of population covered by year is unknown. Our comprehensive and structured inventory reveals a need for full characterization of these data sources.

  10. Data Repository

    • osf.io
    • doi.org
    Updated Feb 28, 2025
    Ebruphiyo R. Useh; Zara Trafford; Prince Changole; Xanthe Hunt (2025). Data Repository [Dataset]. http://doi.org/10.17605/OSF.IO/2Y4ST
    Explore at:
    Dataset updated
    Feb 28, 2025
    Dataset provided by
    Center For Open Science
    Authors
    Ebruphiyo R. Useh; Zara Trafford; Prince Changole; Xanthe Hunt
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Data sources and search strategy: An extensive review was undertaken to describe and analyse findings from a wide range of published literature, including both peer-reviewed and grey sources.

    Our inclusion criteria were:

    • Sample: Key stakeholders, particularly policy-makers and programme managers working in various sectors such as health, education, social development and so forth, who make high-level decisions in all types of disability policy or programming

    • Phenomenon of interest: Papers that focus on how key stakeholders, who make high-level decisions, went about making a decision or a set of decisions about disability policy or programming

    • Design: All types of research methodologies used to collect primary research evidence exploring disability and decision-making in LMICs

    • Evaluation: Papers that focused on decision-making on disability-related matters, i.e. the sources of information policy-makers and programme managers use to make decisions, other influences on their decision-making, and the processes used to make decisions

    • Research type: grey literature reports; peer-reviewed studies; qualitative, quantitative, or mixed-method studies exploring disability and decision-making that reported primary research evidence exploring disability and decision-making in LMICs

    Our exclusion criteria were as follows:

    • Any papers that focused solely on HICs, were unrelated to disability, or did not concern high-level decision-making (e.g. respondents with disabilities describing access issues, limitations on their participation, difficulties at home, income problems, etc.)

    • Literature published before the year 1990 was not included. This is because we understand evidence-based decision-making to have gained recognition among medical professionals around this time.

    We firstly conducted thorough searches of various databases of peer-reviewed material. The selection of databases was based on the senior author's expert insight and extensive experience in disability and in review methods. These databases included Cumulative Index to Nursing and Allied Health Literature (CINAHL), Education Resources Information Center (ERIC), Scopus, Web of Science Social Sciences Citation Index, Medical Literature Analysis and Retrieval System Online (MEDLINE(R)), Excerpta Medica Database (Embase) Classic+Embase, PsycINFO, Cochrane, and Commonwealth Agricultural Bureaux (CAB) Global Health. Next, we conducted thorough online searches to gather relevant grey literature using the websites of the following large agencies and organisations: United Nations Educational, Scientific and Cultural Organization (UNESCO), World Bank, International Labour Organisation (ILO), World Health Organization (WHO), United Nations Children's Fund (UNICEF), SightSavers, Christian Blind Mission (CBM), International Disability Alliance, United Nations High Commissioner for Refugees (UNHCR), Humanity and Inclusion, Inclusion International, Global Policy Forum, Save the Children, World Vision International, International Rescue Committee (IRC) and Catholic Relief Services (CRS), as well as supplementary Google Scholar searches. To improve the reporting and methodological quality of this review, we used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) and employed a comprehensive search strategy to mitigate bias.

    In the hand search of the grey literature, we selected relevant articles based on our inclusion and exclusion criteria. For our grey literature and Google Scholar hand search, the first 3 tabs/pages were searched using search strings, because results beyond the first 3 pages became less specific and did not satisfy our inclusion criteria. All searches were conducted between the 27th of August 2023 and the 14th of December 2023.

    Study selection process: During study selection, the authors used Rayyan.ai, a software platform specifically developed to facilitate collaborative evidence reviews in teams. Working in two pairs, we carefully assessed and selected literature based on our predetermined inclusion criteria, double screening all titles and abstracts. At the end of the full text article-screening phase, the reviewers reported a disagreement rate of 33%. These disagreements were resolved by the senior author.

    Data extraction: All relevant information from each record was compiled in a spreadsheet. Information was extracted based on our research questions and included various details such as the authors’ names and publication year, publication title, study design, types of literature, category of participants, level or ambit of decision-making, sectors and topics covered, impairment or health condition of focus, setting, data collection methods, sample size, aims, and a summarised study findings section. In addition, we prepared a more specific study findings section(s) that considered sources of evidence used, barriers and facilitators to using these sources of evidence, and other influences on disability-inclusive decision-making. Three reviewers assembled the data into a spreadsheet which was used to double-extract all articles, where the third reviewer (second author) extracted 100% of the articles. Any disagreements or uncertainties were resolved during team meetings, also involving the fourth (senior) author.

    Data analysis: We conducted data analysis that included numeric and qualitative content analyses. Qualitative content analysis was used to create codes and synthesise non-numerical data. Codes were created from inductively developed, condensed units of meaning using steps as described by Erlingsson and Brysiewicz (2017).

    Numeric analysis focused on quantitatively summarising five sections: the aims of papers, sources of evidence used in decision-making, influences on decision-making aside from evidence, as well as the barriers to and facilitators of using the aforementioned sources of evidence in decision-making. To ensure methodological rigour, analysis was completed by three of the authors, who developed analytical codes for the five sections mentioned above and documented emerging patterns. The final codes were validated by a fourth member of the research team, with additional discussions on codes held until consensus was reached.
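
    The numeric part of such an analysis amounts to tallying code frequencies across papers. A minimal sketch with invented codes (not the review's actual coding scheme):

```python
from collections import Counter

# Invented codes assigned to three hypothetical papers during
# qualitative content analysis.
coded_papers = [
    ["uses_local_data", "barrier_capacity"],
    ["uses_donor_reports", "barrier_capacity", "facilitator_networks"],
    ["uses_local_data", "facilitator_networks"],
]

# Count how often each code appears across the whole sample.
code_counts = Counter(code for paper in coded_papers for code in paper)
```

    The resulting counts are what a numeric summary of "sources of evidence used" or "barriers and facilitators" would report.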

    Data was derived from the following sources:

    • Braun AMB. Barriers to inclusive education in Tanzania’s policy environment: national policy actors’ perspectives. Compare. 2022;52(1):110–28.
    • Brydges C, Munro LT. The policy transfer of community-based rehabilitation in Gulu, Uganda. Disabil Soc. 2020 Oct 21;35(10):1596–617.
    • Chibaya G, Naidoo D, Govender P. Exploring the implementation of the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) in Namibia. Perspectives of policymakers and implementers. South African Journal of Occupational Therapy. 2022;52(1).
    • Cleaver S, Hunt M, Bond V, Lencucha R. Disability Focal Point Persons and Policy Implementation Across Sectors: A Qualitative Examination of Stakeholder Perspectives in Zambia. Front Public Health. 2020 Sep 15;8.
    • Grimes P, dela Cruz A. Mapping of Disability-Inclusive Education Practices in South Asia [Internet]. Kathmandu; 2021. Available from: www.unicef.org/rosa/
    • Heidari A, Arab M, Damari B. A policy analysis of the national phenylketonuria screening program in Iran. BMC Health Serv Res. 2021 Dec 1;21(1).
    • International Disability Alliance. The Case Study on the Engagement of Organizations of Persons with Disabilities (DPOs) in Voluntary National Reviews [Internet]. 2017 [cited 2024 May 20]. Available from: https://www.internationaldisabilityalliance.org/sites/default/files/global_report_on_the_participation_of_organisations_of_persons_with_disabilities_dpos_in_vnr_processes.docx
    • Jerwanska V, Kebbie I, Magnusson L. Coordination of health and rehabilitation services for person with disabilities in Sierra Leone–a stakeholders’ perspective. Disabil Rehabil. 2023;45(11):1796–804.
    • Liechtenstein C. Still left behind? Growing up as a child with a disability in Somalia. Disability inclusion: stakeholder, perception, and implementation in Save the Children’s projects in Somalia.
    • Lyra TM, Veloso De Albuquerque MS, Santos De Oliveira R, Morais Duarte Miranda G, Andra De Oliveira M, Eduarda Carvalho M, et al. The National Health Policy for people with disabilities in Brazil: an analysis of the content, context and the performance of social actors. Health Policy Plan. 2022 Nov 1;37(9):1086–97.
    • Morone P, Camacho Cuena E, Kocur I, Banatvala N. Securing support for eye health policy in low- and middle-income countries: Identifying stakeholders through a multi-level analysis. Vol. 35, Journal of Public Health Policy. Palgrave Macmillan Ltd.; 2014. p. 185–203.
    • Najafi Z, Abdi K, Khanjani MS, Dalvand H, Amiri M. Convention on the rights of persons with disabilities: Qualitative exploration of barriers to the implementation of articles 25 (health) and 26 (rehabilitation) in Iran. Med J Islam Repub Iran. 2021;35(1):1–9.
    • Neill R, Shawar YR, Ashraf L, Das P, Champagne SN, Kautsar H, et al. Prioritizing rehabilitation in low- and middle-income country national health systems: a qualitative thematic synthesis and development of a policy framework. Int J Equity Health. 2023 Dec 1;22(1).
    • Pillay S, Duncan M, de Vries PJ. ‘We are doing damage control’: Government stakeholder perspectives of educational and other services for children with autism spectrum disorder in South Africa. Autism. 2024 Jan 1;28(1):73–83.
    • Shahabi S, Ahmadi Teymourlouy A, Shabaninejad H, Kamali M, Lankarani KB. Financing of physical rehabilitation services in Iran: A stakeholder and social network analysis. BMC Health Serv Res. 2020 Jul 1;20(1).
    • Wilbur J, Scherer N, Mactaggart I, Shrestha G, Mahon T, Torondel B, et al. Are Nepal’s water, sanitation and hygiene

  11. Data used by EPA researchers to generate illustrative figures for overview...

    • datasets.ai
    • s.cnmilf.com
    • +1more
    57
    Updated Sep 11, 2024
    U.S. Environmental Protection Agency (2024). Data used by EPA researchers to generate illustrative figures for overview article "Multiscale Modeling of Background Ozone: Research Needs to Inform and Improve Air Quality Management" [Dataset]. https://datasets.ai/datasets/data-used-by-epa-researchers-to-generate-illustrative-figures-for-overview-article-multisc
    Explore at:
    57Available download formats
    Dataset updated
    Sep 11, 2024
    Dataset provided by
    United States Environmental Protection Agencyhttp://www.epa.gov/
    Authors
    U.S. Environmental Protection Agency
    Description

    Data sets used to prepare illustrative figures for the overview article “Multiscale Modeling of Background Ozone”

    Overview

    The CMAQ model output datasets used to create illustrative figures for this overview article were generated by scientists in EPA/ORD/CEMM and EPA/OAR/OAQPS.

    The EPA/ORD/CEMM-generated dataset consisted of hourly CMAQ output from two simulations. The first simulation was performed for July 1 – 31 over a 12 km modeling domain covering the Western U.S. The simulation was configured with the Integrated Source Apportionment Method (ISAM) to estimate the contributions from 9 source categories to modeled ozone. ISAM source contributions for July 17 – 31 averaged over all grid cells located in Colorado were used to generate the illustrative pie chart in the overview article. The second simulation was performed for October 1, 2013 – August 31, 2014 over a 108 km modeling domain covering the northern hemisphere. This simulation was also configured with ISAM to estimate the contributions from non-US anthropogenic sources, natural sources, stratospheric ozone, and other sources on ozone concentrations. Ozone ISAM results from this simulation were extracted along a boundary curtain of the 12 km modeling domain specified over the Western U.S. for the time period January 1, 2014 – July 31, 2014 and used to generate the illustrative time-height cross-sections in the overview article.
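The averaging step behind the pie chart can be sketched as follows. This is only an illustration: the contribution values and category names below are invented, and the real inputs are the hourly ISAM outputs described above.

```python
# Hypothetical ISAM ozone contributions (ppb) for three Colorado grid
# cells and three of the nine source categories; real values come from
# the hourly CMAQ/ISAM output described above.
contributions = {
    "biogenic": [12.0, 14.0, 13.0],
    "stratospheric": [9.0, 8.5, 9.5],
    "non_us_anthropogenic": [6.0, 5.0, 7.0],
}

# Average each category over all grid cells; these per-category means
# are the quantities charted in the article's illustrative pie figure.
averages = {source: sum(vals) / len(vals) for source, vals in contributions.items()}
print(averages)  # {'biogenic': 13.0, 'stratospheric': 9.0, 'non_us_anthropogenic': 6.0}
```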

    The EPA/OAR/OAQPS-generated dataset consisted of hourly gridded CMAQ output for surface ozone concentrations for the year 2016. The CMAQ simulations were performed over the northern hemisphere at a horizontal resolution of 108 km. NO2 and O3 data for July 2016 were extracted from these simulations to generate the vertically-integrated column densities shown in the illustrative comparison to satellite-derived column densities.

    CMAQ Model Data

    The data from the CMAQ model simulations used in this research effort are very large (several terabytes) and cannot be uploaded to ScienceHub due to size restrictions. The model simulations are stored on the /asm archival system accessible through the atmos high-performance computing (HPC) system. Due to data management policies, files on /asm are subject to expiry depending on the template of the project. Files not requested for extension after the expiry date are deleted permanently from the system. The format of the files used in this analysis and listed below is ioapi/netcdf. Documentation of this format, including definitions of the geographical projection attributes contained in the file headers, is available at https://www.cmascenter.org/ioapi/

    Documentation on the CMAQ model, including a description of the output file format and output model species can be found in the CMAQ documentation on the CMAQ GitHub site at https://github.com/USEPA/CMAQ.

    This dataset is associated with the following publication: Hogrefe, C., B. Henderson, G. Tonnesen, R. Mathur, and R. Matichuk. Multiscale Modeling of Background Ozone: Research Needs to Inform and Improve Air Quality Management. EM Magazine. Air and Waste Management Association, Pittsburgh, PA, USA, 1-6, (2020).

  12. AHRQ Social Determinants of Health Updated Database

    • datalumos.org
    Updated Feb 25, 2025
    AHRQ (2025). AHRQ Social Determinants of Health Updated Database [Dataset]. http://doi.org/10.3886/E220762V1
    Explore at:
    Dataset updated
    Feb 25, 2025
    Dataset provided by
    Agency for Healthcare Research and Qualityhttp://www.ahrq.gov/
    Authors
    AHRQ
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    AHRQ's database on Social Determinants of Health (SDOH) was created under a project funded by the Patient Centered Outcomes Research (PCOR) Trust Fund. The purpose of this project is to create easy-to-use, easily linkable SDOH-focused data to use in PCOR research, inform approaches to address emerging health issues, and ultimately contribute to improved health outcomes. The database was developed to make it easier to find a range of well documented, readily linkable SDOH variables across domains without having to access multiple source files, facilitating SDOH research and analysis. Variables in the files correspond to five key SDOH domains: social context (e.g., age, race/ethnicity, veteran status), economic context (e.g., income, unemployment rate), education, physical infrastructure (e.g., housing, crime, transportation), and healthcare context (e.g., health insurance). The files can be linked to other data by geography (county, ZIP Code, and census tract). The database includes data files and codebooks by year at three levels of geography, as well as a documentation file. The data contained in the SDOH database are drawn from multiple sources and variables may have differing availability, patterns of missingness, and methodological considerations across sources, geographies, and years. Users should refer to the data source documentation and codebooks, as well as the original data sources, to help identify these patterns.
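The county-level linkage the database is designed for could be sketched as below. Note the extract and variable names here are hypothetical; actual names must be taken from the AHRQ codebooks.

```python
import pandas as pd

# Hypothetical extract of one SDOH county-level file; real variable
# names and values come from the AHRQ codebooks, not this sketch.
sdoh = pd.DataFrame({
    "county_fips": ["17031", "06037"],
    "uninsured_rate": [8.1, 9.4],
})

# An analyst's own county-level outcome data to be enriched with SDOH variables.
outcomes = pd.DataFrame({
    "county_fips": ["17031", "06037"],
    "readmission_rate": [15.2, 14.7],
})

# Link the two tables on geography (county FIPS code).
linked = sdoh.merge(outcomes, on="county_fips", how="inner")
print(linked.shape)  # (2, 3)
```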

  13. Linked Open Data Management Services: A Comparison

    • zenodo.org
    • data.niaid.nih.gov
    Updated Sep 18, 2023
    Robert Nasarek; Robert Nasarek; Lozana Rossenova; Lozana Rossenova (2023). Linked Open Data Management Services: A Comparison [Dataset]. http://doi.org/10.5281/zenodo.7738424
    Explore at:
    Dataset updated
    Sep 18, 2023
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Robert Nasarek; Robert Nasarek; Lozana Rossenova; Lozana Rossenova
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Thanks to a variety of software services, it has never been easier to produce, manage and publish Linked Open Data. But until now, there has been a lack of an accessible overview to help researchers make the right choice for their use case. This dataset release will be regularly updated to reflect the latest data published in a comparison table developed in Google Sheets [1]. The comparison table includes the most commonly used LOD management software tools from NFDI4Culture to illustrate what functionalities and features a service should offer for the long-term management of FAIR research data, including:

    • ConedaKOR
    • LinkedDataHub
    • Metaphacts
    • Omeka S
    • ResearchSpace
    • Vitro
    • Wikibase
    • WissKI

    The table presents two views based on a comparison system of categories developed iteratively during workshops with expert users and developers from the respective tool communities. First, a short overview with field values coming from controlled vocabularies and multiple-choice options; and a second sheet allowing for more descriptive free text additions. The table and corresponding dataset releases for each view mode are designed to provide a well-founded basis for evaluation when deciding on a LOD management service. The Google Sheet table will remain open to collaboration and community contribution, as well as updates with new data and potentially new tools, whereas the datasets released here are meant to provide stable reference points with version control.

    The research for the comparison table was first presented as a paper at DHd2023, Open Humanities – Open Culture, 13-17.03.2023, Trier and Luxembourg [2].

    [1] Non-editing access is available here: docs.google.com/spreadsheets/d/1FNU8857JwUNFXmXAW16lgpjLq5TkgBUuafqZF-yo8_I/edit?usp=share_link To get editing access contact the authors.

    [2] Full paper will be made available open access in the conference proceedings.

  14. Cline Center Coup d’État Project Dataset

    • databank.illinois.edu
    Updated May 11, 2025
    + more versions
    Buddy Peyton; Joseph Bajjalieh; Dan Shalmon; Michael Martin; Emilio Soto (2025). Cline Center Coup d’État Project Dataset [Dataset]. http://doi.org/10.13012/B2IDB-9651987_V7
    Explore at:
    Dataset updated
    May 11, 2025
    Authors
    Buddy Peyton; Joseph Bajjalieh; Dan Shalmon; Michael Martin; Emilio Soto
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Coups d'État are important events in the life of a country. They constitute an important subset of irregular transfers of political power that can have significant and enduring consequences for national well-being. There are only a limited number of datasets available to study these events (Powell and Thyne 2011, Marshall and Marshall 2019). Seeking to facilitate research on post-WWII coups by compiling a more comprehensive list and categorization of these events, the Cline Center for Advanced Social Research (previously the Cline Center for Democracy) initiated the Coup d’État Project as part of its Societal Infrastructures and Development (SID) project. More specifically, this dataset identifies the outcomes of coup events (i.e., realized, unrealized, or conspiracy), the type of actor(s) who initiated the coup (i.e., military, rebels, etc.), as well as the fate of the deposed leader.

    Version 2.1.3 adds 19 additional coup events to the data set, corrects the date of a coup in Tunisia, and reclassifies an attempted coup in Brazil in December 2022 as a conspiracy. Version 2.1.2 added 6 additional coup events that occurred in 2022 and updated the coding of an attempted coup event in Kazakhstan in January 2022. Version 2.1.1 corrected a mistake in version 2.1.0, where the designation of “dissident coup” had been dropped in error for coup_id: 00201062021. Version 2.1.1 fixed this omission by marking the case as both a dissident coup and an auto-coup. Version 2.1.0 added 36 cases to the data set and removed two cases from the v2.0.0 data. This update also added actor coding for 46 coup events and added executive outcomes to 18 events from version 2.0.0. A few other changes were made to correct inconsistencies in the coup ID variable and the date of the event.
    Version 2.0.0 improved several aspects of the previous version (v1.0.0) and incorporated additional source material to include:
    • Reconciling missing event data
    • Removing events with irreconcilable event dates
    • Removing events with insufficient sourcing (each event needs at least two sources)
    • Removing events that were inaccurately coded as coup events
    • Removing variables that fell below the threshold of inter-coder reliability required by the project
    • Removing the spreadsheet ‘CoupInventory.xls’ because of inadequate attribution and citations in the event summaries
    • Extending the period covered from 1945-2005 to 1945-2019
    • Adding events from Powell and Thyne’s Coup Data (Powell and Thyne, 2011)
    Items in this Dataset
    1. Cline Center Coup d'État Codebook v.2.1.3 Codebook.pdf - This 15-page document describes the Cline Center Coup d’État Project dataset. The first section of this codebook provides a summary of the different versions of the data. The second section provides a succinct definition of a coup d’état used by the Coup d'État Project and an overview of the categories used to differentiate the wide array of events that meet the project's definition. It also defines coup outcomes. The third section describes the methodology used to produce the data. Revised February 2024
    2. Coup Data v2.1.3.csv - This CSV (Comma Separated Values) file contains all of the coup event data from the Cline Center Coup d’État Project. It contains 29 variables and 1000 observations. Revised February 2024
    3. Source Document v2.1.3.pdf - This 325-page document provides the sources used for each of the coup events identified in this dataset. Please use the value in the coup_id variable to identify the sources used to identify that particular event. Revised February 2024
    4. README.md - This file contains useful information for the user about the dataset. It is a text file written in markdown language. Revised February 2024
    Citation Guidelines
    1. To cite the codebook (or any other documentation associated with the Cline Center Coup d’État Project Dataset) please use the following citation: Peyton, Buddy, Joseph Bajjalieh, Dan Shalmon, Michael Martin, Jonathan Bonaguro, and Scott Althaus. 2024. “Cline Center Coup d’État Project Dataset Codebook”. Cline Center Coup d’État Project Dataset. Cline Center for Advanced Social Research. V.2.1.3. February 27. University of Illinois Urbana-Champaign. doi: 10.13012/B2IDB-9651987_V7
    2. To cite data from the Cline Center Coup d’État Project Dataset please use the following citation (filling in the correct date of access): Peyton, Buddy, Joseph Bajjalieh, Dan Shalmon, Michael Martin, Jonathan Bonaguro, and Emilio Soto. 2024. Cline Center Coup d’État Project Dataset. Cline Center for Advanced Social Research. V.2.1.3. February 27. University of Illinois Urbana-Champaign. doi: 10.13012/B2IDB-9651987_V7
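As a sketch, tabulating events by outcome might look like the following. The miniature table below is invented; apart from coup_id, real variable names must be checked against the codebook.

```python
import pandas as pd

# Hypothetical miniature of "Coup Data v2.1.3.csv"; apart from coup_id,
# the column names here are assumptions, not the codebook's names.
events = pd.DataFrame({
    "coup_id": ["00201062021", "00301121945", "00401071999"],
    "outcome": ["realized", "unrealized", "conspiracy"],
})

# Tabulate events by the realized/unrealized/conspiracy outcome
# categories defined in the codebook.
counts = events["outcome"].value_counts().to_dict()
print(counts)
```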

  15. Data source for polygonal data used by the ASRIS project in generation of modelled surfaces

    • data.wu.ac.at
    • cloud.csiss.gmu.edu
    • +2more
    zip
    Updated Apr 12, 2018
    Commonwealth Scientific and Industrial Research Organisation (2018). Data source for polygonal data used by the ASRIS project in generation of modelled surfaces [Dataset]. https://data.wu.ac.at/odso/data_gov_au/OWQxOWMyM2MtMzhmMC00Njk5LWIxMTgtMDhlZjljYTExOWRk
    Explore at:
    zip(195617.0)Available download formats
    Dataset updated
    Apr 12, 2018
    Dataset provided by
    Commonwealth Scientific and Industrial Research Organisation
    Description

    Data provided are the scale of polygonal datasources used to generate the polygon-derived surfaces for the intensive agricultural areas of Australia. Data were modelled from area-based observations made by State soil agencies. The final ASRIS polygon attributed surfaces are a mosaic of all of the data obtained from various state and federal agencies. The surfaces have been constructed with the best available soil survey information available at the time. The surfaces also rely on a number of assumptions. One is that an area-weighted mean is a good estimate of the soil attributes for that polygon or mapunit. Another is that the lookup tables provided by McKenzie et al. (2000) and the states and territories accurately depict the soil attribute values for each soil type. The accuracy of the maps is most dependent on the scale of the original polygon data sets and the level of soil survey that has taken place in each state. The scale of the various soil maps used in deriving this map is available by accessing the datasource grid; the scale is used as an assessment of the likely accuracy of the modelling. The Atlas of Australian Soils is considered to be the least accurate dataset and has therefore only been used where there is no state-based data. Of the state datasets, Western Australian sub-systems, South Australian land systems and NSW soil landscapes and reconnaissance mapping would be the most reliable based on scale. NSW soil landscapes and reconnaissance mapping, however, may be less accurate than South Australia and Western Australia as only one dominant soil type per polygon was used in the estimation of attributes, compared to several soil types per polygon or mapunit in South Australia and Western Australia. NSW soil landscapes and reconnaissance mapping, as the name suggests, is reconnaissance level only, with no laboratory data. The digital map data is provided in geographical coordinates based on the World Geodetic System 1984 (WGS84) datum.
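The area-weighted mean assumption can be illustrated with a toy polygon; the area shares and attribute values below are invented for the sketch.

```python
# Toy polygon (mapunit) with three component soil types; shares and
# clay values are invented. ASRIS assumes the area-weighted mean is a
# reasonable polygon-level estimate of a soil attribute.
components = [
    # (area share of polygon, clay content in %)
    (0.5, 30.0),
    (0.3, 20.0),
    (0.2, 10.0),
]

weighted_mean = sum(share * value for share, value in components)
print(round(weighted_mean, 1))  # 23.0
```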

    See further metadata for more detail.

  16. Empirical Data of the Public Policy Research (PPR) Funding Scheme Project...

    • data.gov.hk
    data.gov.hk, Empirical Data of the Public Policy Research (PPR) Funding Scheme Project (Project No: 2014.A1.010.14E) “Performance Information Use: Experiments on Performance Dimensions, Communication and Data Sources in Education and Solid Waste Recycling” [Dataset]. https://data.gov.hk/en-data/dataset/hk-cepu-prfs-funding-scheme-project-no-2014-a1-010-14e
    Explore at:
    Dataset provided by
    data.gov.hk
    Description

    Principal Investigator: Professor Richard Mark WALKER
    Institution/Think Tank: City University of Hong Kong

    Five years after completion of the research projects granted under the PPR Funding Scheme, quantitative empirical data generated from the research would be released to the public. Only research raw data (e.g. surveys) of completed projects that are provided in file format of comma-separated values (CSV) will be uploaded under the Open Data Plan. Raw data provided in formats other than CSV will only be uploaded onto the scheme’s webpage. PPR Funding Scheme’s webpage: https://www.cepu.gov.hk/en/PRFS/research_report.html Users of the data sets archived are required to acknowledge the research team and the Government. [Remarks: Parts of the data sets archived may contain Chinese/English version only.]

  17. Human Written Text

    • kaggle.com
    Updated May 13, 2025
    Youssef Elebiary (2025). Human Written Text [Dataset]. https://www.kaggle.com/datasets/youssefelebiary/human-written-text
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    May 13, 2025
    Dataset provided by
    Kaggle
    Authors
    Youssef Elebiary
    License

    MIT Licensehttps://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Overview

    This dataset contains 20,000 pieces of text collected from Wikipedia, Gutenberg, and CNN/DailyMail. The text was cleaned by replacing symbols such as (.*?/) with white space, using automated scripts and regular expressions.
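A minimal sketch of such a cleaning step is shown below. The exact pattern the dataset authors used is not published, so this regex is an assumption.

```python
import re

# Sketch of the described cleaning: replace runs of symbols such as
# ( . * ? / ) with a single space, then collapse the whitespace.
# The dataset authors' actual pattern is not published.
def clean(text: str) -> str:
    text = re.sub(r"[.*?/()]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean("He said... (quietly) yes?/no"))  # He said quietly yes no
```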

    Data Source Distribution

    1. 10,000 Wikipedia Articles: From the 20220301 dump.
    2. 3,000 Gutenberg Books: Via the GutenDex API.
    3. 7,000 CNN/DailyMail News Articles: From the CNN/DailyMail 3.0.0 dataset.

    Why These Sources

    The data was collected from these sources to ensure the highest level of integrity against AI-generated text.
    • Wikipedia: The 20220301 dataset was chosen to minimize the chance of including articles generated or heavily edited by AI.
    • Gutenberg: Books from this source are guaranteed to be written by real humans and span various genres and time periods.
    • CNN/DailyMail: These news articles were written by professional journalists and cover a variety of topics, ensuring diversity in writing style and subject matter.

    Dataset Structure

    The dataset consists of 5 CSV files:
    1. CNN_DailyMail.csv: Contains all processed news articles.
    2. Gutenberg.csv: Contains all processed books.
    3. Wikipedia.csv: Contains all processed Wikipedia articles.
    4. Human.csv: Combines all three datasets in order.
    5. Shuffled_Human.csv: This is the randomly shuffled version of Human.csv.

    Each file has 2 columns:
    - Title: The title of the item.
    - Text: The content of the item.

    Uses

    This dataset is suitable for a wide range of NLP tasks, including:
    - Training models to distinguish between human-written and AI-generated text (Human/AI classifiers).
    - Training LSTMs or Transformers for chatbots, summarization, or topic modeling.
    - Sentiment analysis, genre classification, or linguistic research.

    Disclaimer

    Although the data was collected from these sources, it may not be 100% free of AI-generated text. Wikipedia articles may reflect systemic biases in contributor demographics. CNN/DailyMail articles may focus on specific news topics or regions.

    For details on how the dataset was created, click here to view the Kaggle notebook used.

    Licensing

    This dataset is published under the MIT License, allowing free use for both personal and commercial purposes. Attribution is encouraged but not required.

  18. Fostering cultures of open qualitative research: Dataset 2 – Interview Transcripts

    • orda.shef.ac.uk
    xlsx
    Updated Jun 28, 2023
    Matthew Hanchard; Itzel San Roman Pineda (2023). Fostering cultures of open qualitative research: Dataset 2 – Interview Transcripts [Dataset]. http://doi.org/10.15131/shef.data.23567223.v2
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Jun 28, 2023
    Dataset provided by
    The University of Sheffield
    Authors
    Matthew Hanchard; Itzel San Roman Pineda
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This dataset was created and deposited onto the University of Sheffield Online Research Data repository (ORDA) on 23-Jun-2023 by Dr. Matthew S. Hanchard, Research Associate at the University of Sheffield iHuman Institute. The dataset forms part of three outputs from a project titled ‘Fostering cultures of open qualitative research’ which ran from January 2023 to June 2023:

    · Fostering cultures of open qualitative research: Dataset 1 – Survey Responses · Fostering cultures of open qualitative research: Dataset 2 – Interview Transcripts · Fostering cultures of open qualitative research: Dataset 3 – Coding Book

    The project was funded with £13,913.85 of Research England monies held internally by the University of Sheffield - as part of their ‘Enhancing Research Cultures’ scheme 2022-2023.

    The dataset aligns with ethical approval granted by the University of Sheffield School of Sociological Studies Research Ethics Committee (ref: 051118) on 23-Jan-2021. This includes due concern for participant anonymity and data management.

    ORDA has full permission to store this dataset and to make it open access for public re-use on the basis that no commercial gain will be made from reuse. It has been deposited under a CC-BY-NC license. Overall, this dataset comprises:

    · 15 x Interview transcripts - in .docx file format which can be opened with Microsoft Word, Google Doc, or an open-source equivalent.

    All participants have read and approved their transcripts and have had an opportunity to retract details should they wish to do so.

    Participants chose whether to be pseudonymised or named directly. The pseudonym can be used to identify individual participant responses in the qualitative coding held within the ‘Fostering cultures of open qualitative research: Dataset 3 – Coding Book’ files.

    For recruitment, 14 x participants were selected based on their responses to the project survey, whilst one participant was recruited based on specific expertise.

    · 1 x Participant sheet – in .csv format which may be opened with Microsoft Excel, Google Sheet, or an open-source equivalent.

    This provides socio-demographic detail on each participant alongside their main field of research and career stage. It includes a RespondentID field/column which can be used to connect interview participants with their responses to the survey questions in the accompanying ‘Fostering cultures of open qualitative research: Dataset 1 – Survey Responses’ files.

    The project was undertaken by two staff:

    Co-investigator: Dr. Itzel San Roman Pineda
    ORCiD ID: 0000-0002-3785-8057
    i.sanromanpineda@sheffield.ac.uk
    Postdoctoral Research Assistant
    Labelled as ‘Researcher 1’ throughout the dataset

    Principal Investigator (corresponding dataset author): Dr. Matthew Hanchard
    ORCiD ID: 0000-0003-2460-8638
    m.s.hanchard@sheffield.ac.uk
    Research Associate, iHuman Institute, Social Research Institutes, Faculty of Social Science
    Labelled as ‘Researcher 2’ throughout the dataset

  19. Data from: Invasive species - American bullfrog (Lithobates catesbeianus) in Flanders, Belgium

    • gbif.org
    • data.biodiversity.be
    • +4more
    Updated May 15, 2025
    + more versions
    Sander Devisscher; Tim Adriaens; Gerald Louette; Dimitri Brosens; Peter Desmet; Sander Devisscher; Tim Adriaens; Gerald Louette; Dimitri Brosens; Peter Desmet (2025). Invasive species - American bullfrog (Lithobates catesbeianus) in Flanders, Belgium [Dataset]. http://doi.org/10.15468/2hqkqn
    Explore at:
    Dataset updated
    May 15, 2025
    Dataset provided by
    Global Biodiversity Information Facilityhttps://www.gbif.org/
    Research Institute for Nature and Forest (INBO)
    Authors
    Sander Devisscher; Tim Adriaens; Gerald Louette; Dimitri Brosens; Peter Desmet; Sander Devisscher; Tim Adriaens; Gerald Louette; Dimitri Brosens; Peter Desmet
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    Apr 27, 2010 - Dec 31, 2018
    Area covered
    Description

    Invasive species - American bullfrog (Lithobates catesbeianus) in Flanders, Belgium is a species occurrence dataset published by the Research Institute for Nature and Forest (INBO). The dataset contains over 7,500 occurrences (25% of which are American bullfrogs) sampled from 2010 until now, in the months April to October. The data are compiled from different sources at the INBO, but most of the occurrences were collected through fieldwork for the EU co-funded Interreg project INVEXO (http://www.invexo.eu). In this project, research was conducted on different methods for the management of American bullfrog populations, an alien invasive species in Belgium. Captured bullfrogs were almost always removed from the environment and humanely killed, while the other occurrences are recorded bycatch, which were released upon catch (see bibliography for detailed descriptions of the methods). Therefore, caution is advised when using these data for trend analysis, distribution range calculation, or other purposes. Issues with the dataset can be reported at https://github.com/inbo/data-publication/tree/master/datasets/invasive-bullfrog-occurrences
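As an illustration of the capture/bycatch distinction flagged above, the records below are invented; the scientificName field follows the Darwin Core terms GBIF datasets publish.

```python
# Invented miniature of the occurrence records; scientificName follows
# the Darwin Core terms used by GBIF occurrence datasets.
records = [
    {"scientificName": "Lithobates catesbeianus", "month": 5},
    {"scientificName": "Pelophylax lessonae", "month": 6},
    {"scientificName": "Lithobates catesbeianus", "month": 9},
    {"scientificName": "Triturus cristatus", "month": 7},
]

# Bullfrog captures (removed from the environment) versus recorded
# bycatch (released upon catch).
bullfrogs = [r for r in records if r["scientificName"] == "Lithobates catesbeianus"]
bycatch = [r for r in records if r["scientificName"] != "Lithobates catesbeianus"]
print(len(bullfrogs), len(bycatch))  # 2 2
```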

    We strongly believe an open attitude is essential for tackling the IAS problem (Groom et al. 2015). To allow anyone to use this dataset, we have released the data to the public domain under a Creative Commons Zero waiver (http://creativecommons.org/publicdomain/zero/1.0/). We would appreciate it however if you read and follow these norms for data use (http://www.inbo.be/en/norms-for-data-use) and provide a link to the original dataset (https://doi.org/10.15468/2hqkqn) whenever possible. If you use these data for a scientific paper, please cite the dataset following the applicable citation norms and/or consider us for co-authorship. We are always interested to know how you have used or visualized the data, or to provide more information, so please contact us via the contact information provided in the metadata, opendata@inbo.be or https://twitter.com/LifeWatchINBO.

  20. Global Dashboard Skeleton Market Research Report: By Technology...

    • wiseguyreports.com
    Updated Sep 24, 2024
    Wiseguy Research Consultants Pvt Ltd (2024). Global Dashboard Skeleton Market Research Report: By Technology (Cloud-based, On-premises, Hybrid), By Deployment (Multi-tenant, Single-tenant), By Data Source (Relational databases, NoSQL databases, Data warehouses, Cloud-based data sources), By Use Case (Business intelligence, Data visualization, Performance management, Risk management, Compliance) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/dashboard-skeleton-market
    Explore at:
    Dataset updated
    Sep 24, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policyhttps://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 9, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 1.69 (USD Billion)
    MARKET SIZE 2024: 1.84 (USD Billion)
    MARKET SIZE 2032: 3.69 (USD Billion)
    SEGMENTS COVERED: Technology, Deployment, Data Source, Use Case, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: Growing digitalization, technological advancements, increasing demand for data visualization, rising adoption in various industries, expanding healthcare sector
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Google LLC, Looker Data Sciences, Inc., Amazon.com, Inc., Microsoft Corporation, Tibco Software Inc., Qlik, SAS Institute Inc., Tableau Software, MicroStrategy, Oracle Corporation, IBM Corporation, Sisense, Visual Analytics, SAP SE
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: Rising demand for data visualization, growing adoption of agile methodologies, expanding use in healthcare, increasing use in ecommerce, growing popularity of cloud-based dashboards
    COMPOUND ANNUAL GROWTH RATE (CAGR): 9.1% (2025 - 2032)
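The reported CAGR can be checked against the market-size figures in the table:

```python
# Compound annual growth rate implied by the 2024 and 2032 market sizes.
size_2024 = 1.84  # USD billion (from the table above)
size_2032 = 3.69  # USD billion
years = 2032 - 2024

cagr = (size_2032 / size_2024) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 9.1%
```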
Citation Knowledge with Section and Context (Anita Khadka, 2020): for more detail, please refer to the paper. This dataset is for research purposes only; if you use it, please cite our paper “Capturing and exploiting citation knowledge for recommending recently published papers”, due to be published in the Web2Touch track 2020 (not yet published).
