21 datasets found
  1. The Red Queen in the Repository: metadata quality in an ever-changing...

    • zenodo.org
    • researchdata.se
    bin, csv, zip
    Updated Jul 25, 2024
    Cite
    Joakim Philipson (2024). The Red Queen in the Repository: metadata quality in an ever-changing environment (preprint of paper, presentation slides and dataset collection with validation schemas to IDCC2019 conference paper) [Dataset]. http://doi.org/10.5281/zenodo.2276777
    Explore at:
    Available download formats: zip, bin, csv
    Dataset updated
    Jul 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Joakim Philipson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This fileset contains a preprint version of the conference paper (.pdf), presentation slides (.pptx), and the dataset(s) and validation schema(s) for the IDCC 2019 (Melbourne) conference paper: The Red Queen in the Repository: metadata quality in an ever-changing environment. Datasets and schemas are in .xml, .xsd, Excel (.xlsx) and .csv format (two .csv files representing the two sheets of the .xlsx file). The validationSchemas.zip holds the additional validation schemas (.xsd) that were not found in the schemaLocations of the metadata XML files to be validated. The schemas must all be placed in the same folder, and are to be used for validating the Dataverse dcterms records (with metadataDCT.xsd) and the Zenodo oai_datacite feeds (with schema.datacite.org_oai_oai-1.0_oai.xsd), respectively. In the latter case, a simpler approach may be to replace the incorrect URL "http://schema.datacite.org/oai/oai-1.0/ oai_datacite.xsd" in the schemaLocation of these XML files with the correct schemaLocation="http://schema.datacite.org/oai/oai-1.0/ http://schema.datacite.org/oai/oai-1.0/oai.xsd", as has already been done in the sample files here. The sample file folders testDVNcoll.zip (Dataverse), testFigColl.zip (Figshare) and testZenColl.zip (Zenodo) contain all the metadata files tested and validated that are registered in the spreadsheet with objectIDs.
    In the case of Zenodo, one original file feed, zen2018oai_datacite3orig-https%20_zenodo.org_oai2d%20verb=ListRecords%26metadataPrefix=oai_datacite%26from=2018-11-29%26until=2018-11-30.xml, is also supplied to show what was necessary to change in order to perform validation as indicated in the paper.

    For Dataverse, a corrected version of a file, dvn2014ddi-27595Corr_https%20_dataverse.harvard.edu_api_datasets_export%20exporter=ddi%26persistentId=doi%253A10.7910_DVN_27595Corr.xml, is also supplied in order to show the changes it would take to make the file validate without error.
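
    As an illustrative sketch (not part of the fileset): with the schemas placed in one folder as described above, a record could be validated in Python using lxml ("record.xml" is a hypothetical placeholder name):

    # Minimal validation sketch; assumes lxml is installed and metadataDCT.xsd
    # plus the schemas from validationSchemas.zip sit in the working folder.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("metadataDCT.xsd"))
    doc = etree.parse("record.xml")  # hypothetical Dataverse dcterms record
    if schema.validate(doc):
        print("valid")
    else:
        for error in schema.error_log:
            print(error.line, error.message)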

  2. Global Form Management Software Market Research Report: By Deployment Mode...

    • wiseguyreports.com
    Updated Aug 10, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Form Management Software Market Research Report: By Deployment Mode (Cloud-based, On-premises), By Organization Size (Small and Medium-sized Enterprises (SMEs), Large Enterprises), By Industry Vertical (Healthcare, Financial Services, Education, Manufacturing), By Form Type (Web Forms, Mobile Forms, PDF Forms), By Key Features (Digital Signature Capture, Data Validation, Custom Branding) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/form-management-software-market
    Explore at:
    Dataset updated
    Aug 10, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 8, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 4.91 (USD Billion)
    MARKET SIZE 2024: 5.47 (USD Billion)
    MARKET SIZE 2032: 12.9 (USD Billion)
    SEGMENTS COVERED: Deployment Mode, Organization Size, Industry Vertical, Form Type, Key Features, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: Rising adoption of cloud-based solutions; growing need for efficient data management; proliferation of mobile devices; increasing regulatory compliance requirements; emergence of advanced technologies like AI and ML
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Microsoft, K2, SAP SE, Nintex, Salesforce, Hyland Software, IBM, Laserfiche, M-Files, Paperless Process Management, Alfresco Software, ServiceNow, ProcessMaker, Oracle, Adobe
    MARKET FORECAST PERIOD: 2024 - 2032
    KEY MARKET OPPORTUNITIES: Automation of data entry; improved data security; streamlined workflows; enhanced customer experience; cost savings
    COMPOUND ANNUAL GROWTH RATE (CAGR): 11.32% (2024 - 2032)
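
    As a quick arithmetic check (our own calculation, not from the report), the stated CAGR is consistent with the 2024 and 2032 market-size figures:

    # CAGR implied by the figures above: 5.47 -> 12.9 USD billion over 8 years.
    cagr = (12.9 / 5.47) ** (1 / 8) - 1
    print(f"{cagr:.2%}")  # 11.32%
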
  3. Understanding and Managing Missing Data.pdf

    • figshare.com
    pdf
    Updated Jun 9, 2025
    Cite
    Ibrahim Denis Fofanah (2025). Understanding and Managing Missing Data.pdf [Dataset]. http://doi.org/10.6084/m9.figshare.29265155.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 9, 2025
    Dataset provided by
    figshare
    Authors
    Ibrahim Denis Fofanah
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This document provides a clear and practical guide to understanding missing data mechanisms, including Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR). Through real-world scenarios and examples, it explains how different types of missingness impact data analysis and decision-making. It also outlines common strategies for handling missing data, including deletion techniques and imputation methods such as mean imputation, regression, and stochastic modeling. Designed for researchers, analysts, and students working with real-world datasets, this guide helps ensure statistical validity, reduce bias, and improve the overall quality of analysis in fields like public health, behavioral science, social research, and machine learning.
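
    As an illustration of the techniques named above (a sketch of our own, not material from the document), deletion and mean imputation in pandas might look like:

    import numpy as np
    import pandas as pd

    # Toy data with missing values (hypothetical).
    df = pd.DataFrame({"age": [23, np.nan, 31, 40], "score": [88, 92, np.nan, 75]})

    listwise = df.dropna()                                # deletion technique
    mean_imputed = df.fillna(df.mean(numeric_only=True))  # mean imputation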

  4. Table_1_Online Testing Yields the Same Results as Lab Testing: A Validation...

    • frontiersin.figshare.com
    pdf
    Updated Jun 2, 2023
    + more versions
    Cite
    Lydia Paulin Schidelko; Britta Schünemann; Hannes Rakoczy; Marina Proft (2023). Table_1_Online Testing Yields the Same Results as Lab Testing: A Validation Study With the False Belief Task.pdf [Dataset]. http://doi.org/10.3389/fpsyg.2021.703238.s003
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Frontiers
    Authors
    Lydia Paulin Schidelko; Britta Schünemann; Hannes Rakoczy; Marina Proft
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recently, online testing has become an increasingly important instrument in developmental research, in particular since the COVID-19 pandemic made in-lab testing impossible. However, online testing comes with two substantial challenges. First, it is unclear how valid the results of online studies really are. Second, implementing online studies can be costly and/or require profound coding skills. This article addresses the validity of an online testing approach that is low-cost and easy to implement: The experimenter shares test materials such as videos or presentations via video chat and interactively moderates the test session. To validate this approach, we compared children’s performance on a well-established task, the change-of-location false belief task, in an in-lab and online test setting. In two studies, 3- and 4-year-olds received online implementations of the false belief version (Study 1) and the false and true belief versions of the task (Study 2). Children’s performance in these online studies was compared to data of matching tasks collected in the context of in-lab studies. Results revealed that the typical developmental pattern of performance in these tasks found in in-lab studies could be replicated with the novel online test procedure. These results suggest that the proposed method, which is both low-cost and easy to implement, provides a valid alternative to classical in-person test settings.

  5. Survey data of "Mapping Research Output to the Sustainable Development Goals...

    • zenodo.org
    • explore.openaire.eu
    bin, pdf, zip
    Updated Jul 22, 2024
    + more versions
    Cite
    Maurice Vanderfeesten; Eike Spielberg; Yassin Gunes (2024). Survey data of "Mapping Research Output to the Sustainable Development Goals (SDGs)" [Dataset]. http://doi.org/10.5281/zenodo.3813230
    Explore at:
    Available download formats: bin, zip, pdf
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Maurice Vanderfeesten; Eike Spielberg; Yassin Gunes
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains information on what papers and concepts researchers find relevant to map domain specific research output to the 17 Sustainable Development Goals (SDGs).

    Sustainable Development Goals are the 17 global challenges set by the United Nations. Within each of the goals, specific targets and indicators are mentioned to monitor the progress of reaching those goals by 2030. In an effort to capture how research is contributing to move the needle on those challenges, we earlier made an initial classification model that enables quick identification of what research output is related to what SDG. (This Aurora SDG dashboard is the initial outcome as proof of practice.)

    In order to validate our current classification model (on soundness/precision and completeness/recall), and to receive input for improvement, a survey was conducted to capture expert knowledge from senior researchers in the research domains related to the SDGs. The survey was open to the world, but was mainly distributed to researchers from the Aurora Universities Network. The survey was open from October 2019 till January 2020, and captured data from 244 respondents in Europe and North America.

    17 surveys were created from a single template, with the content made specific for each SDG. Content of each survey, such as a random set of publications, was ingested from a data provisioning server that had collected research output metadata for each SDG in an earlier stage. It took a respondent on average 1 hour to complete the survey. The outcome of the survey data can be used for validating current and optimizing future SDG classification models for mapping research output to the SDGs.

    The survey contains the following questions (see inside dataset for exact wording):

    • Are you familiar with this SDG?
      • Respondents could only proceed if they were familiar with the targets and indicators of this SDG. Goal of this question was to weed out unknowledgeable respondents and to increase the quality of the survey data.
    • Suggest research papers that are relevant for this SDG (upload list)
      • This question, to provide a list, was put first to reduce influence from the other questions. Goal of this question was to measure the completeness/recall of the papers in the result set of our current classification model. (To lower the bar, these lists could be provided either by uploading a file from a reference manager (preferred) in .ris or BibTeX format, or as a list of titles. This heterogeneous input was processed further on by hand into a uniform format.)
    • Select research papers that are relevant for this SDG (radio buttons: accept, reject)
      • A randomly selected set of 100 papers was injected in the survey, out of the full list of thousands of papers in the result set of our current classification model. Goal of this question was to measure the soundness/precision of our current classification model.
    • Select and Suggest Keywords related to SDG (checkboxes: accept | text field: suggestions)
      • The survey was injected with the top 100 most frequent keywords that appeared in the metadata of the papers in the result set of the current classification model. Respondents could select the relevant keywords we found, and add new ones in a blank text field. Goal of this question was to get suggestions for keywords we can use to increase the recall of relevant papers in a new classification model.
    • Suggest SDG related glossaries with relevant keywords (text fields: url)
      • Open text field to add URL to lists with hundreds of relevant keywords related to this SDG. Goal of this question was to get suggestions for keywords we can use to increase the recall of relevant papers in a new classification model.
    • Select and Suggest Journals fully related to SDG (checkboxes: accept | text field: suggestions)
      • The survey was injected with the top 100 most frequent journals that appeared in the metadata of the papers in the result set of the current classification model. Respondents could select the relevant journals we found, and add new ones in a blank text field. Goal of this question was to get suggestions for complete journals we can use to increase the recall of relevant papers in a new classification model.
    • Suggest improvements for the current queries (text field: suggestions per target)
      • We showed respondents the queries we used in our current classification model next to each of the targets within the goal. Open text fields were presented to change, add, re-order, or delete something (keywords, boolean operators, etc.) in the query to improve it in their opinion. Goal of this question was to get suggestions we can use to increase the recall and precision of relevant papers in a new classification model.

    In the dataset root you'll find the following folders and files:

    • /00-survey-input/
      • This contains the survey questions for all the individual SDGs. It also contains lists of EIDs, categorised by SDG, that we used to make randomized selections to present to the respondents.
    • /01-raw-data/
      • This contains the raw survey output (excluding privacy-sensitive information for public release). This data needs to be combined with the data on the provisioning server to make sense.
    • /02-aggregated-data/
      • Here individual responses are aggregated. The survey data is also combined with the data from the provisioning server: responses of all SDG surveys are combined, aggregated, and split per question type.
    • /03-scripts/
      • This contains scripts to split data, and to add descriptive metadata for text analysis in a later stage.
    • /04-processed-data/
      • This is the main final result that can be used for further analysis. Data is split by SDG into subdirectories, in there you'll find files per question type containing the aggregated data of the respondents.
    • /images/
      • images of the results used in this README.md.
    • LICENSE.md
      • terms and conditions for reusing this data.
    • README.md
      • description of the dataset; each sub-folder contains a README.md file to further describe its content.

    In the /04-processed-data/ folder you'll find the following files in each SDG sub-folder:

    • SDG-survey-questions.pdf
      • This file contains the survey questions.
    • SDG-survey-questions.doc
      • This file contains the survey questions.
    • SDG-survey-respondents-per-sdg.csv
      • Basic information about the survey and responses.
    • SDG-survey-city-heatmap.csv
      • Origin of the respondents per SDG survey.
    • SDG-survey-suggested-publications.txt
      • Formatted list of research papers researchers have uploaded or listed that they want to see back in the result set for this SDG.
    • SDG-survey-suggested-publications-with-eid-match.csv
      • Same as above, only matched with an EID. EIDs are matched by Elsevier's internal fuzzy matching algorithm; only papers matched with high confidence are shown with an EID, referring to a record in Scopus.
    • SDG-survey-selected-publications-accepted.csv
      • Based on our previous result set of papers, researchers were presented random samples and selected the papers they believe represent this SDG. (TRUE = accepted)
    • SDG-survey-selected-publications-rejected.csv
      • Based on our previous result set of papers, researchers were presented random samples and selected the papers they believe do not represent this SDG. (FALSE = rejected)
    • SDG-survey-selected-keywords.csv
      • Based on our previous result set of papers, we presented researchers the keywords in the metadata of those papers; they selected the keywords they believe represent this SDG.
    • SDG-survey-unselected-keywords.csv
      • As "selected-keywords", this is the list of keywords that respondents have not selected to represent this SDG.
    • SDG-survey-suggested-keywords.csv
      • List of keywords researchers suggest using to find papers related to this SDG.
    • SDG-survey-glossaries.csv
      • List of glossaries, containing keywords, that researchers suggest using to find papers related to this SDG.
    • SDG-survey-selected-journals.csv
      • Based on our previous result set of papers, we presented researchers the journals in the metadata of those papers; they selected the journals they believe represent this SDG.
    • SDG-survey-unselected-journals.csv
      • As "selected-journals", this is the list of journals that respondents have not selected to represent this SDG.
      
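    As a minimal usage sketch (the per-SDG sub-folder name is hypothetical, and each row is assumed to be one judged paper), the accepted/rejected files could be combined to estimate the precision the survey was designed to measure:

    import pandas as pd

    base = "04-processed-data/SDG-03"  # hypothetical sub-folder name
    accepted = pd.read_csv(f"{base}/SDG-survey-selected-publications-accepted.csv")
    rejected = pd.read_csv(f"{base}/SDG-survey-selected-publications-rejected.csv")

    # Respondent-judged precision of the classification model's result set.
    precision = len(accepted) / (len(accepted) + len(rejected))
    print(f"precision ~ {precision:.2f}")
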
  6. Satellite avalanche mapping validation data

    • envidat.ch
    json, not available +1
    Updated May 29, 2025
    + more versions
    Cite
    Elisabeth Hafner; Silvan Leinss; Frank Techel; Yves Bühler (2025). Satellite avalanche mapping validation data [Dataset]. http://doi.org/10.16904/envidat.202
    Explore at:
    Available download formats: xml, not available, json
    Dataset updated
    May 29, 2025
    Dataset provided by
    Swiss Federal Institute for Forest, Snow and Landscape Research
    WSL Institute for Snow and Avalanche Research SLF
    ETH Zurich
    Authors
    Elisabeth Hafner; Silvan Leinss; Frank Techel; Yves Bühler
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Time period covered
    Jan 10, 2019 - Aug 15, 2020
    Area covered
    Switzerland
    Dataset funded by
    Kanton Graubünden
    BAFU
    Description

    Validation points, validation area, ground truth coverage, SPOT 6 avalanche outlines, Sentinel-1 avalanche outlines, Sentinel-2 avalanche outlines, and Davos avalanche mapping (DAvalMap) avalanche outlines as shapefiles, with a detailed attribute description (DataDescription_EvalSatMappingMethods.pdf). Coordinate system: CH1903+_LV95. The generation of this dataset is described in detail in: Hafner, E. D., Techel, F., Leinss, S., and Bühler, Y.: Mapping avalanches with satellites – evaluation of performance and completeness, The Cryosphere, https://doi.org/10.5194/tc-2020-272, 2021.
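
    As a usage sketch (the shapefile name is hypothetical; geopandas is an assumption, not part of the dataset), the outlines could be loaded like this; CH1903+/LV95 corresponds to EPSG:2056.

    import geopandas as gpd

    outlines = gpd.read_file("SPOT6_avalanche_outlines.shp")  # hypothetical name
    print(outlines.crs)   # expected: EPSG:2056
    print(len(outlines), "avalanche outlines")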

  7. Data from: A Neural Approach for Text Extraction from Scholarly Figures

    • data.uni-hannover.de
    zip
    Updated Jan 20, 2022
    Cite
    TIB (2022). A Neural Approach for Text Extraction from Scholarly Figures [Dataset]. https://data.uni-hannover.de/dataset/a-neural-approach-for-text-extraction-from-scholarly-figures
    Explore at:
    Available download formats: zip(798357692)
    Dataset updated
    Jan 20, 2022
    Dataset authored and provided by
    TIB
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    A Neural Approach for Text Extraction from Scholarly Figures

    This is the readme for the supplemental data for our ICDAR 2019 paper.

    You can read our paper via IEEE here: https://ieeexplore.ieee.org/document/8978202

    If you found this dataset useful, please consider citing our paper:

    @inproceedings{DBLP:conf/icdar/MorrisTE19,
     author  = {David Morris and
            Peichen Tang and
            Ralph Ewerth},
     title   = {A Neural Approach for Text Extraction from Scholarly Figures},
     booktitle = {2019 International Conference on Document Analysis and Recognition,
            {ICDAR} 2019, Sydney, Australia, September 20-25, 2019},
     pages   = {1438--1443},
     publisher = {{IEEE}},
     year   = {2019},
     url    = {https://doi.org/10.1109/ICDAR.2019.00231},
     doi    = {10.1109/ICDAR.2019.00231},
     timestamp = {Tue, 04 Feb 2020 13:28:39 +0100},
     biburl  = {https://dblp.org/rec/conf/icdar/MorrisTE19.bib},
     bibsource = {dblp computer science bibliography, https://dblp.org}
    }
    

    This work was financially supported by the German Federal Ministry of Education and Research (BMBF) and European Social Fund (ESF) (InclusiveOCW project, no. 01PE17004).

    Datasets

    We used different sources of data for testing, validation, and training. Our testing set was assembled from the datasets used in the work by Böschen et al. that we cited. We excluded the DeGruyter dataset from the testing set and used it as our validation dataset.

    Testing

    These datasets contain a readme with license information. Further information about the associated project can be found in the authors' published work we cited: https://doi.org/10.1007/978-3-319-51811-4_2

    Validation

    The DeGruyter dataset does not include the labeled images due to license restrictions. As of writing, the images can still be downloaded from DeGruyter via the links in the readme. Note that depending on what program you use to strip the images out of the PDF they are provided in, you may have to re-number the images.

    Training

    We used label_generator's generated dataset, which the author made available in a requester-pays Amazon S3 bucket. We also used the Multi-Type Web Images dataset, which is mirrored here.

    Code

    We have made our code available in code.zip. We will upload code, announce further news, and field questions via the GitHub repo.

    Our text detection network is adapted from Argman's EAST implementation. The EAST/checkpoints/ours subdirectory contains the trained weights we used in the paper.

    We used a Tesseract script to run text extraction on the detected text rows; it is included in our code archive (code.tar) as text_recognition_multipro.py.
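
    A minimal sketch of that step (not the authors' text_recognition_multipro.py; pytesseract and the single-line page-segmentation mode are our assumptions):

    import pytesseract
    from PIL import Image

    def recognize_rows(row_image_paths):
        """OCR each detected text-row crop; --psm 7 treats the image as one text line."""
        return [pytesseract.image_to_string(Image.open(p), config="--psm 7").strip()
                for p in row_image_paths]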

    We used a Java program provided by Falk Böschen, adapted to our file structure. We included this as evaluator.jar.

    Parameter sweeps are automated by param_sweep.rb. This file also shows how to invoke all of these components.

  8. Data from: Development and external validation of a prognostic multivariable...

    • explore.openaire.eu
    Updated Mar 30, 2020
    Cite
    Jianfeng Xie; Daniel Hungerford; Hui Chen; Simon Abrams; Shusheng Li; Guozheng Wang; Yishan Wang; Hanyujie Kang; Laura Bonnett; Ruiqiang Zheng; Xuyan Li; Zhaohui Tong; Bin Du; Haibo Qiu; Cheng-Hock Toh (2020). Development and external validation of a prognostic multivariable model on admission for hospitalized patients with COVID-19 [Dataset]. https://explore.openaire.eu/search/other?orpId=core_ac_uk_::e14f27bf6089960b5d809eb067301257
    Explore at:
    Dataset updated
    Mar 30, 2020
    Authors
    Jianfeng Xie; Daniel Hungerford; Hui Chen; Simon Abrams; Shusheng Li; Guozheng Wang; Yishan Wang; Hanyujie Kang; Laura Bonnett; Ruiqiang Zheng; Xuyan Li; Zhaohui Tong; Bin Du; Haibo Qiu; Cheng-Hock Toh
    Description

    Summary. Background: The COVID-19 pandemic has developed rapidly, and the ability to stratify the most vulnerable patients is vital. However, routinely used severity scoring systems are often low at diagnosis, even in non-survivors. Therefore, clinical prediction models for mortality are urgently required. Methods: We developed and internally validated a multivariable logistic regression model to predict inpatient mortality in COVID-19-positive patients using data collected retrospectively from Tongji Hospital, Wuhan (299 patients). External validation was conducted using a retrospective cohort from Jinyintan Hospital, Wuhan (145 patients). Nine variables commonly measured in these acute settings were considered for model development, including age, biomarkers and comorbidities. Backwards stepwise selection and bootstrap resampling were used for model development and internal validation. We assessed discrimination via the C statistic, and calibration using calibration-in-the-large, calibration slopes and plots. Findings: The final model included age, lymphocyte count, lactate dehydrogenase and SpO2 as independent predictors of mortality. Discrimination of the model was excellent in both internal (c=0·89) and external (c=0·98) validation. Internal calibration was excellent (calibration slope=1). External validation showed some over-prediction of risk in low-risk individuals and under-prediction of risk in high-risk individuals prior to recalibration. Recalibration of the intercept and slope led to excellent performance of the model in independent data. Interpretation: COVID-19 is a new disease and behaves differently from common critical illnesses. This study provides a new prediction model to identify patients with lethal COVID-19. Its practical reliance on commonly available parameters should improve usage of limited healthcare resources and patient survival rates. Funding: This study was supported by the following funding: Key Research and Development Plan of Jiangsu Province (BE2018743 and BE2019749), National Institute for Health Research (NIHR) (PDF-2018-11-ST2-006), British Heart Foundation (BHF) (PG/16/65/32313) and Liverpool University Hospitals NHS Foundation Trust in the UK.
    Research in context. Evidence before this study: Since the outbreak of COVID-19, there has been a pressing need for the development of a prognostic tool that is easy for clinicians to use. Recently, a Lancet publication showed that in a cohort of 191 patients with COVID-19, age, SOFA score and D-dimer measurements were associated with mortality. No other publication involving prognostic factors or models has been identified to date. Added value of this study: In our cohorts of 444 patients from two hospitals, SOFA scores were low in the majority of patients on admission. The relevance of D-dimer could not be verified, as it is not included in routine laboratory tests. In this study, we established a multivariable clinical prediction model using a development cohort of 299 patients from one hospital. After backwards selection, four variables (age, lymphocyte count, lactate dehydrogenase and SpO2) remained in the model to predict mortality. The model was validated internally and externally with a cohort of 145 patients from a different hospital. Discrimination of the model was excellent in both internal (c=0·89) and external (c=0·98) validation. Calibration plots showed excellent agreement between predicted and observed probabilities of mortality after recalibration of the model to account for underlying differences in the risk profile of the datasets. This demonstrates that the model is able to make reliable predictions in patients from different hospitals. In addition, these variables agree with pathological mechanisms, and the model is easy to use in all types of clinical settings. Implication of all the available evidence: After further external validation in different countries, the model will enable better risk stratification and more targeted management of patients with COVID-19. With the nomogram, this model, based on readily available parameters, can help clinicians to stratify COVID-19 patients on diagnosis so as to use limited healthcare resources effectively and improve patient outcomes.
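
    As an illustrative sketch of the modelling steps described (synthetic data, not the authors' code or cohort):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Synthetic stand-in for the development cohort: 299 patients, the four
    # final predictors (age, lymphocyte count, lactate dehydrogenase, SpO2).
    X = rng.normal(size=(299, 4))
    y = rng.integers(0, 2, size=299)

    model = LogisticRegression().fit(X, y)
    c_stat = roc_auc_score(y, model.predict_proba(X)[:, 1])  # C statistic = AUC
    print(f"apparent C statistic: {c_stat:.2f}")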

  9. VandalFire ML Framework: Validation Part 3

    • zenodo.org
    pdf
    Updated Jun 12, 2025
    Cite
    Jeffrey Logan (2025). VandalFire ML Framework: Validation Part 3 [Dataset]. http://doi.org/10.5281/zenodo.15644742
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 12, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jeffrey Logan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1979 - Dec 2024
    Description

    Vandal Fire Intelligence (VFI): Validation Part 3 (2025)

    This document is part of a two-part research package submitted to Zenodo. It is supported by the companion Jupyter Notebook `vf-validation-part3.ipynb`, which contains all source code, data diagnostics, and visualizations referenced herein. The PDF version of that notebook is `vf-validation-part3.pdf`.

    Contact: Jeffrey Logan, Earth and Spatial Sciences, University of Idaho

    Email: jeffrey.logan@uidaho.edu

    ORCID: 0009-0001-9415-5809

    VFI is a high-performance, Dask-powered platform developed to deliver dynamic, scalable flammability predictions, serving both hyperlocal needs and global flammability analyses (Logan, Smith 2024). VFI informs fire managers, climate researchers, and NASA/NSF-aligned programs about evolving flammability risk patterns under changing environmental conditions. It builds on other documented efforts to explore single climate variables associated with wildfire activity in the Columbia River Basin area of the United States from 1979-2025, such as:

    1. Wind Speed Trends (https://doi.org/10.5281/zenodo.15485100)
    2. 1000 Hour Fuel Moisture Trends (https://doi.org/10.5281/zenodo.15446652)
    3. Vapor Pressure Deficit Trends (https://doi.org/10.5281/zenodo.15391290).
    4. Vandal Fire Intelligence (Columbia River Basin, https://doi.org/10.5281/zenodo.15580220).
  10. Data Sheet 1_A short pragmatic tool for evaluating community engagement:...

    • frontiersin.figshare.com
    pdf
    Updated Jun 12, 2025
    + more versions
    Cite
    John G. Oetzel; Blake Boursaw; Lenora Littledeer; Sarah Kastelic; Page Castro-Reyes; Juan M. Peña; Patricia Rodriguez Espinosa; Shannon Sanchez-Youngman; Lorenda Belone; Nina Wallerstein (2025). Data Sheet 1_A short pragmatic tool for evaluating community engagement: Partnering for Health Improvement and Research Equity.pdf [Dataset]. http://doi.org/10.3389/fpubh.2025.1539864.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 12, 2025
    Dataset provided by
    Frontiers
    Authors
    John G. Oetzel; Blake Boursaw; Lenora Littledeer; Sarah Kastelic; Page Castro-Reyes; Juan M. Peña; Patricia Rodriguez Espinosa; Shannon Sanchez-Youngman; Lorenda Belone; Nina Wallerstein
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: As community-engaged research (CEnR), community-based participatory research (CBPR) and patient-engaged research (PEnR) have become increasingly recognized as valued research approaches in the last several decades, there is a need for pragmatic and validated tools to assess effective partnering practices that contribute to health and health-equity outcomes. This article reports on the co-creation of an actionable pragmatic survey, shortened from validated metrics of partnership practices and outcomes. Methods: We pursued a triple aim of preserving content validity, psychometric properties, and importance to stakeholders of items, scales, and constructs from a previously validated measure of CBPR/CEnR processes and outcomes. There were six steps in the methods: (a) established validity and shortening objectives; (b) used a conceptual model to guide decisions; (c) preserved content validity and importance; (d) preserved psychometric properties; (e) justified the selection of items and scales; and (f) validated the short-form version. Twenty-one CBPR/CEnR experts (13 academic and 8 community partners) completed a survey and participated in two focus groups to identify content validity and importance of the original 93 items. Results: The survey and focus group process resulted in the creation of the 30-item Partnering for Health Improvement and Research Equity (PHIRE) survey. Confirmatory factor analysis and a structural equation model of the original data set resulted in the validation of eight higher-order scales with good internal consistency and structural relationships (TLI > 0.98 and SRMR < 0.02). A reworded version of the PHIRE was administered to an additional sample, demonstrating good reliability and construct validity. Conclusion: This study demonstrates that the PHIRE is a reliable instrument with construct validity compared to the larger version from which it was derived. The PHIRE is a straightforward and easy-to-use tool for a range of CBPR/CEnR projects that can provide benefit to partnerships by identifying actionable changes to their partnering practices to reach their desired research and practical outcomes.

  11. Uncertainty analysis of gross primary production partitioned from net...

    • phys-techsciences.datastations.nl
    csv, pdf +4
    Updated Dec 8, 2016
    Cite
    R. Raj (2016). Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements [Dataset]. http://doi.org/10.17026/DANS-X9T-NKBZ
    Explore at:
    Available download formats: txt(766), pdf(566962), txt(701), pdf(515444), csv(20157373), tsv(2862), tsv(8482), pdf(1283555), csv(21308612), text/comma-separated-values(599441), csv(21289458), zip(24547), csv(20160870), tsv(690459), tsv(2859), tsv(8484), tsv(690376)
    Dataset updated
    Dec 8, 2016
    Dataset provided by
    DANS Data Station Physical and Technical Sciences
    Authors
    R. Raj
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This research estimated the uncertainty in gross primary production (GPP) at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements of net ecosystem exchange (NEE) at the Speulderbos forest site, The Netherlands. This provided relevant data for the calibration and validation of process-based simulators. A file list and description of the data (i.e., metadata) in each file are provided in the uploaded pdf file "FluxPartitioning_NEEdata_codebook_Version1.pdf". This pdf file also provides a brief description of the methods adopted in this research.
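
    For orientation, the non-rectangular hyperbola light-response form commonly used in such partitioning can be sketched as follows (generic parameterisation, not necessarily the exact formulation used in this dataset):

    import numpy as np

    def nrh_gpp(par, alpha, gpp_max, theta):
        """Non-rectangular hyperbola: GPP as a function of light (PAR).
        alpha: initial slope; gpp_max: asymptote; theta: curvature, 0 < theta <= 1."""
        s = alpha * par + gpp_max
        return (s - np.sqrt(s * s - 4.0 * theta * alpha * par * gpp_max)) / (2.0 * theta)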

  12. Data Sheet 1_Body posture as a measure of emotional valence in young...

    • figshare.com
    pdf
    Updated Apr 11, 2025
    Cite
    Stella C. Gerdemann; Amrisha Vaish; Robert Hepach (2025). Data Sheet 1_Body posture as a measure of emotional valence in young children: a preregistered validation study.pdf [Dataset]. http://doi.org/10.3389/fdpys.2025.1536440.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Frontiers
    Authors
    Stella C. Gerdemann; Amrisha Vaish; Robert Hepach
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: Objective measures of emotional valence in young children are rare, but recent work has employed motion depth sensor imaging to measure young children's emotional expression via changes in their body posture. This method efficiently captures children's emotional valence, moving beyond self-reports or caregiver reports, and avoiding extensive manual coding, e.g., of children's facial expressions. Moreover, it can be flexibly and non-invasively used in interactive study paradigms, thus offering an advantage over other physiological measures of emotional valence. Method: Here, we discuss the merits of studying body posture in developmental research and showcase its use in six studies. To this end, we provide a comprehensive validation in which we map the measures of children's posture onto the constructs of emotional valence and arousal. Using body posture data aggregated from six studies (N = 466; Mage = 5.08; range: 2 years, 5 months to 6 years, 2 months; 220 girls), coders rated children's expressed emotional valence and arousal, and provided a discrete emotion label for each child. Results: Emotional valence was positively associated with children's change in chest height and chest expansion: children with more upright upper-body postures were rated as expressing a more positive emotional valence, whereas the relation between emotional arousal and changes in body posture was weak. Discussion: These data add to existing evidence that changes in body posture reliably reflect emotional valence. They thus provide an empirical foundation to conduct research on children's spontaneously expressed emotional valence using the automated and efficient tool of body posture analysis.

  13. Data Sheet 1_Climate of Accountability, Respect, and Ethics Survey (CARES):...

    • frontiersin.figshare.com
    pdf
    Updated Feb 25, 2025
    + more versions
    Cite
    Brian C. Martinson; Jarvis Smallfield; Vicki J. Magley; Carol R. Thrush; C. K. Gunsalus (2025). Data Sheet 1_Climate of Accountability, Respect, and Ethics Survey (CARES): development and validation of an organizational climate survey.pdf [Dataset]. http://doi.org/10.3389/frma.2025.1516726.s002
    Explore at:
    Available download formats: pdf
    Dataset updated
    Feb 25, 2025
    Dataset provided by
    Frontiers
    Authors
    Brian C. Martinson; Jarvis Smallfield; Vicki J. Magley; Carol R. Thrush; C. K. Gunsalus
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: This research describes the development and validation of the CARES Climate Survey, a 22-item measure designed to assess interpersonal dimensions of work-unit climates. Dimensions of work-unit climates are identified through work-unit member perceptions and include civility, interpersonal accountability, conflict resolution, and institutional harassment responsiveness. Methods: Two samples (N = 1,384; N = 868) of academic researchers, including one from the North American membership of the American Geophysical Union (AGU) and one from a large research-intensive university, responded to the CARES and additional measures via an online survey. Results: We demonstrate content validity of the CARES measure and confirm structural validity through exploratory and confirmatory factor analyses, which yielded four dimensions of interpersonal climate. In addition, we confirm the CARES' internal reliability, construct validity, and excellent sub-group invariance. Conclusions: The CARES is a brief, psychometrically sound instrument that can be used by researchers, institutional leaders, and other practitioners to assess interpersonal climates in organizational work-units. Originality/value: This is the first study to develop and validate such a measure of interpersonal climates specifically in research-intensive organizations, using rigorous psychometric methods, grounded in both theory and prior research on work-unit climates.

  14. Data Sheet 1_Barriers for work in people with multiple sclerosis: a...

    • figshare.com
    pdf
    Updated Nov 20, 2024
    Cite
    Britt Normann; Ellen Christin Arntzen; Cynthia A. Honan (2024). Data Sheet 1_Barriers for work in people with multiple sclerosis: a Norwegian cultural adaptation and validation of the short version of the multiple sclerosis work difficulties questionnaire.pdf [Dataset]. http://doi.org/10.3389/fresc.2024.1404723.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Nov 20, 2024
    Dataset provided by
    Frontiers
    Authors
    Britt Normann; Ellen Christin Arntzen; Cynthia A. Honan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background and purpose: Multiple sclerosis (MS) is associated with high rates of unemployment, and barriers for work are essential to identify in the regular follow-up of these people. The current study aimed to culturally adapt and evaluate the psychometric properties of the Norwegian version of the Multiple Sclerosis Work Difficulties Questionnaire-23 (MSWDQ-23). Methods: Following backward and forward translation, the Norwegian version of the MSWDQ-23 (MSWDQ-23NV) was completed by 229 people with multiple sclerosis (MS). Validity was evaluated through confirmatory factor analysis and by associating scores with employment status, disability, and health-related quality of life outcome measures. Convergent validity was checked by correlating MSWDQ-23 scores with alternative study measures. Internal consistencies were examined by Cronbach's alpha. Results: A good fit for the data was demonstrated for the MSWDQ-23NV in confirmatory factor analysis, with excellent internal consistencies also demonstrated for the full scale and its subscales (physical barriers, psychological/cognitive barriers, external barriers). The MSWDQ-23NV subscales were related in the expected direction to health-related quality of life outcome measures. While higher scores on the physical barriers subscale were strongly associated with higher levels of disability and progressive MS types, higher scores on all subscales were associated with not working in the past year. Discussion: The Norwegian MSWDQ-23 is an internally consistent and valid instrument to measure perceived work difficulties in persons with all types of MS in a Norwegian-speaking population. The MSWDQ-23NV can be considered a useful tool for health care professionals to assess self-reported work difficulties in persons with MS. The Norwegian MSWDQ-23 should be examined for test-retest reliability and considered for implementation in the regular follow-up at MS outpatient clinics in Norway to support employment maintenance.
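
    For reference, Cronbach's alpha (the internal-consistency statistic used above) can be computed directly from an item-response matrix; a generic sketch:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scale scores."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)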

  15. Data Sheet 2_Increasing ecological validity in mental fatigue research—A...

    • frontiersin.figshare.com
    pdf
    Updated May 27, 2025
    + more versions
    Cite
    Helena Weiler; Fabienne Ennigkeit; Jan Spielmann; Chris Englert (2025). Data Sheet 2_Increasing ecological validity in mental fatigue research—A Footbonaut study.pdf [Dataset]. http://doi.org/10.3389/fpsyg.2025.1586944.s002
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 27, 2025
    Dataset provided by
    Frontiers
    Authors
    Helena Weiler; Fabienne Ennigkeit; Jan Spielmann; Chris Englert
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: Past studies have mainly used Stroop tasks to induce mental fatigue in soccer. However, due to the non-sport-specificity of these tasks, their transferability to the real-life effects of mental fatigue in soccer has been questioned. The study's aim was to investigate the effects of two different versions (mentally less vs. mentally more demanding) of a soccer passing task in the so-called Footbonaut on cognitive and soccer-specific performance. Methods: A randomized, counterbalanced experimental within-subjects design was employed (N = 27). We developed two different versions of the soccer passing task in the Footbonaut: a mentally more demanding decision-making and inhibition task in the experimental condition, and a mentally less demanding standard task of the Footbonaut in the control condition. Results: Participants showed significantly worse soccer-specific performance in the experimental condition compared to the control condition. No corresponding effects were revealed in cognitive performance. Discussion: The findings suggest that cognitive-motor interference induced by 30-min Footbonaut technology-based training may induce mental fatigue in soccer players. Future studies should consider developing mentally less demanding yet comparable control tasks.

  16. Data_Sheet_1_A validation study to analyze the reliability of center of...

    • frontiersin.figshare.com
    pdf
    Updated Mar 14, 2024
    Cite
    Masoud Aghapour; Nadja Affenzeller; Christiane Lutonsky; Christian Peham; Alexander Tichy; Barbara Bockstahler (2024). Data_Sheet_1_A validation study to analyze the reliability of center of pressure data in static posturography in dogs.PDF [Dataset]. http://doi.org/10.3389/fvets.2024.1353824.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Mar 14, 2024
    Dataset provided by
    Frontiers
    Authors
    Masoud Aghapour; Nadja Affenzeller; Christiane Lutonsky; Christian Peham; Alexander Tichy; Barbara Bockstahler
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: Center of pressure (COP) parameters are frequently assessed to analyze movement disorders in humans and animals. Methodological discrepancies are a major concern when evaluating conflicting study results. This study aimed to assess the inter-observer reliability and test-retest reliability of body COP parameters, including mediolateral and craniocaudal sway, total length, average speed and support surface, in healthy dogs during quiet standing on a pressure plate. Additionally, it sought to determine the minimum number of trials and the shortest duration necessary for accurate COP assessment. Materials and methods: Twelve clinically healthy dogs underwent three repeated trials, which were analyzed by three independent observers to evaluate inter-observer reliability. Test-retest reliability was assessed across the three trials per dog, each lasting 20 seconds (s). Selected 20 s measurements were analyzed in six different ways: 1 × 20 s, 1 × 15 s, 2 × 10 s, 4 × 5 s, 10 × 2 s, and 20 × 1 s. Results: Excellent inter-observer reliability (ICC ≥ 0.93) was demonstrated for all COP parameters. However, only the 5 s, 10 s, and 15 s measurements achieved the reliability threshold (ICC ≥ 0.60) for all evaluated parameters. Discussion: The shortest repeatable durations were obtained from either two 5 s measurements or a single 10 s measurement. Most importantly, statistically significant differences were observed between the different measurement durations, which underlines the need to standardize measurement times in COP analysis. The results of this study aid scientists in implementing standardized methods, thereby easing comparisons across studies and enhancing the reliability and validity of research findings in veterinary medicine.
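
    For context, the COP parameters named above follow standard definitions and can be computed from a COP time series; a generic sketch (not the study's code):

    import numpy as np

    def cop_parameters(x, y, fs):
        """x, y: COP coordinate arrays (e.g. in mm); fs: sampling rate in Hz."""
        steps = np.hypot(np.diff(x), np.diff(y))   # distance between consecutive samples
        total_length = steps.sum()
        duration = (len(x) - 1) / fs
        return {
            "mediolateral_sway": x.max() - x.min(),
            "craniocaudal_sway": y.max() - y.min(),
            "total_length": total_length,
            "average_speed": total_length / duration,
        }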

  17. Data from: Visual Validation of the e-RUSLE Model Applied at the...

    • figshare.com
    pdf
    Updated Oct 17, 2019
    Cite
    Claudio Bosco; Daniele de Rigo; Olivier Dewitte (2019). Visual Validation of the e-RUSLE Model Applied at the Pan-European Scale [Dataset]. http://doi.org/10.6084/m9.figshare.844627.v5
    Explore at:
    Available download formats: pdf
    Dataset updated
    Oct 17, 2019
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Claudio Bosco; Daniele de Rigo; Olivier Dewitte
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    Bosco, C., de Rigo, D., Dewitte, O., 2014. Visual Validation of the e-RUSLE Model Applied at the Pan-European Scale. Scientific Topics Focus 1, MRI-11a13. Notes Transdiscipl. Model. Env., Maieutike Research Initiative. https://doi.org/10.6084/m9.figshare.844627
    PDF: http://purl.org/mtv/STF/pdf/1843225
    Version: DRAFT 0.4.2. This is a preliminary version. Any future version will be accessible from the same DOI code (https://doi.org/10.6084/m9.figshare.844627). The views expressed are those of the authors and may not be regarded as stating an official position of the mentioned organisations.

    Visual Validation of the e-RUSLE Model Applied at the Pan-European Scale
    Claudio Bosco ¹ ² ⁴, Daniele de Rigo ² ³ ⁴ and Olivier Dewitte ⁵
    1 Loughborough University, Department of Civil and Building Engineering, Loughborough, United Kingdom
    2 Joint Research Centre of the European Commission, Institute for Environment and Sustainability, Ispra, Italy
    3 Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milano, Italy
    4 Maieutike Research Initiative, Milano, Italy
    5 Royal Museum for Central Africa, Department of Earth Sciences, Tervuren, Belgium

    Validating soil erosion estimates at regional or continental scale is still extremely challenging. The common procedures are not technically and financially applicable for large spatial extents; despite this, some options remain applicable. For validating the European map of soil erosion by water calculated using the approach proposed in Bosco et al. [1], we applied alternative qualitative methods based on visual evaluation. The 1 km² map was validated through a visual and categorical comparison between modelled and observed soil erosion. A procedure employing high-resolution Google Earth images and pictures as validation data is shown here. The resolution of the images, which has rapidly increased over the last years, allows for a visual qualitative estimation of local soil erosion rates. A cluster of 3 × 3 km² around each of 85 selected points was analysed by the authors. The results corroborate the map obtained by applying the e-RUSLE model: 63% of a random sample of 732 grid cells were accurate, and 83% at least moderately accurate (bootstrap p ≤ 0.05). For each of the 85 clusters, the complete details of the validation, also containing the comments of the evaluators and the geo-location of the analysed areas, have been reported.
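
    As a generic illustration of the bootstrap step behind those figures (synthetic accuracy flags, not the authors' data):

    import numpy as np

    rng = np.random.default_rng(1)
    cells = rng.random(732) < 0.63   # stand-in flags: grid cell judged accurate or not
    boot_means = [rng.choice(cells, size=cells.size, replace=True).mean()
                  for _ in range(10_000)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])   # 95% bootstrap interval
    print(f"accuracy ~ {cells.mean():.2f} (95% CI {lo:.2f}-{hi:.2f})")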

  18. Data_Sheet_1_The Factor Structure and External Validity of the COPE 60...

    • frontiersin.figshare.com
    pdf
    Updated Jun 10, 2023
    Cite
    Júlia Halamová; Martin Kanovský; Katarina Krizova; Katarína Greškovičová; Bronislava Strnádelová; Martina Baránková (2023). Data_Sheet_1_The Factor Structure and External Validity of the COPE 60 Inventory in Slovak Translation.pdf [Dataset]. http://doi.org/10.3389/fpsyg.2021.800166.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    Frontiers
    Authors
    Júlia Halamová; Martin Kanovský; Katarina Krizova; Katarína Greškovičová; Bronislava Strnádelová; Martina Baránková
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The COPE Inventory (Carver et al., 1989) is the most frequently used measure of coping; yet previous studies examining its factor structure yielded mixed results. The purpose of the current study, therefore, was to validate the factor structure of the COPE Inventory in a representative sample of over 2,000 adults in Slovakia. Our second goal was to evaluate the external validity of the COPE inventory, which has not been done before. Firstly, we performed the exploratory factor analysis (EFA) with half of the sample. Subsequently, we performed the confirmatory factor analysis with the second half of the sample. Both factor analyses with 15 factor solutions showed excellent fit with the data. Additionally, we performed a hierarchical factor analysis with fifteen first-order factors (acceptance, active coping, behavioral disengagement, denial, seeking emotional support, humor, seeking instrumental support, mental disengagement, planning, positive reinterpretation, religion, restraint, substance use, suppression of competing activities, and venting) and three second-order factors (active coping, social emotional coping, and avoidance coping) which showed good fit with the data. Moreover, the COPE Inventory’s external validity was evaluated using consensual qualitative research (CQR) analysis on data collected by in-depth interviews. Categories of coping created using CQR corresponded with all COPE first-order factors. Moreover, we identified two additional first-order factors that were not present in the COPE Inventory: self-care and care for others. Our study shows that the Slovak translation of the COPE Inventory is a reliable, externally valid, and well-structured instrument for measuring coping in the Slovak population.

  19. Data_Sheet_1_Appraising systematic reviews: a comprehensive guide to...

    • frontiersin.figshare.com
    pdf
    Updated Dec 21, 2023
    + more versions
    Cite
    Nour Shaheen; Ahmed Shaheen; Alaa Ramadan; Mahmoud Tarek Hefnawy; Abdelraouf Ramadan; Ismail A. Ibrahim; Maged Elsayed Hassanein; Mohamed E. Ashour; Oliver Flouty (2023). Data_Sheet_1_Appraising systematic reviews: a comprehensive guide to ensuring validity and reliability.PDF [Dataset]. http://doi.org/10.3389/frma.2023.1268045.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Dec 21, 2023
    Dataset provided by
    Frontiers
    Authors
    Nour Shaheen; Ahmed Shaheen; Alaa Ramadan; Mahmoud Tarek Hefnawy; Abdelraouf Ramadan; Ismail A. Ibrahim; Maged Elsayed Hassanein; Mohamed E. Ashour; Oliver Flouty
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Systematic reviews play a crucial role in evidence-based practice because they consolidate research findings to inform decision-making. However, it is essential to assess the quality of systematic reviews to prevent biased or inaccurate conclusions. This paper underscores the importance of adhering to recognized guidelines, such as the PRISMA statement and the Cochrane Handbook. These recommendations advocate for systematic approaches and emphasize the documentation of critical components, including the search strategy and study selection. A thorough evaluation of methodologies, research quality, and overall evidence strength is essential during the appraisal process. Tools such as the Cochrane Risk of Bias tool and the AMSTAR 2 checklist facilitate the identification of potential sources of bias and review limitations, such as selective reporting or trial heterogeneity. Constructing a robust review requires formulating clear research questions and employing appropriate search strategies, as well as careful assessment of the included studies. Relevance and bias reduction are ensured through meticulous selection of inclusion and exclusion criteria. Accurate data synthesis, including appropriate data extraction and analysis, is necessary for drawing reliable conclusions. Meta-analysis, a statistical method for aggregating trial findings, improves the precision of treatment impact estimates. Systematic reviews should also address biases, disclose conflicts of interest, and acknowledge review and methodological limitations. This paper aims to enhance the reliability of systematic reviews, ultimately improving decision-making in healthcare, public policy, and other domains. It provides academics, practitioners, and policymakers with a comprehensive understanding of the evaluation process, empowering them to make well-informed decisions based on robust data.
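
    To make the meta-analysis step concrete, below is a minimal fixed-effect inverse-variance pooling sketch in Python. The effect sizes and variances are invented placeholders, not data from any actual review.

    ```python
    # Fixed-effect inverse-variance meta-analysis: each study is weighted by
    # the inverse of its sampling variance, so more precise studies contribute
    # more to the pooled estimate. Values below are invented placeholders.
    import math

    effects = [0.30, 0.45, 0.18, 0.52]    # per-study effect sizes (e.g., log odds ratios)
    variances = [0.04, 0.09, 0.02, 0.12]  # per-study sampling variances

    weights = [1.0 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))    # standard error of the pooled effect

    # 95% confidence interval under a normal approximation.
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
    ```

    The fixed-effect version above assumes all studies estimate the same true effect; a random-effects model (e.g., DerSimonian and Laird) would additionally estimate between-study heterogeneity before pooling, which matters exactly when the trial heterogeneity mentioned above is present.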

  20. Data_Sheet_2_Predictive Models of Assistance Dog Training Outcomes Using the...

    • frontiersin.figshare.com
    pdf
    Updated May 31, 2023
    + more versions
    Cite
    Emily E. Bray; Kerinne M. Levy; Brenda S. Kennedy; Deborah L. Duffy; James A. Serpell; Evan L. MacLean (2023). Data_Sheet_2_Predictive Models of Assistance Dog Training Outcomes Using the Canine Behavioral Assessment and Research Questionnaire and a Standardized Temperament Evaluation.PDF [Dataset]. http://doi.org/10.3389/fvets.2019.00049.s002
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Emily E. Bray; Kerinne M. Levy; Brenda S. Kennedy; Deborah L. Duffy; James A. Serpell; Evan L. MacLean
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Assistance dogs can greatly improve the lives of people with disabilities. However, a large proportion of dogs bred and trained for this purpose are deemed unable to successfully fulfill the behavioral demands of this role. Often, this determination is not finalized until weeks or even months into training, when the dog is close to 2 years old. Thus, there is an urgent need to develop objective selection protocols that can identify dogs most and least likely to succeed, from early in the training process. We assessed the predictive validity of two candidate measures employed by Canine Companions for Independence (CCI), a national assistance dog organization headquartered in Santa Rosa, CA. For more than a decade, CCI has collected data on their population using the Canine Behavioral Assessment and Research Questionnaire (C-BARQ) and a standardized temperament assessment known internally as the In-For-Training (IFT) test, which is conducted at the beginning of professional training. Data from both measures were divided into independent training and test datasets, with the training data used for variable selection and cross-validation. We developed three predictive models in which we predicted success or release from the training program using C-BARQ scores (N = 3,569), IFT scores (N = 5,967), and a combination of scores from both instruments (N = 2,990). All three final models performed significantly better than the null expectation when applied to the test data, with overall accuracies ranging from 64 to 68%. Model predictions were most accurate for dogs predicted to have the lowest probability of success (ranging from 85 to 92% accurate for dogs in the lowest 10% of predicted probabilities), and moderately accurate for identifying the dogs most likely to succeed (ranging from 62 to 72% for dogs in the top 10% of predicted probabilities). Combining C-BARQ and IFT predictors into a single model did not improve overall accuracy, although it did improve accuracy for dogs in the lowest 20% of predicted probabilities. Our results suggest that both types of assessments have the potential to be used as powerful screening tools, thereby allowing more efficient allocation of resources in assistance dog selection and training.
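
    The study's exact modelling pipeline is not reproduced here; the following Python sketch only illustrates the general approach described (a train/test split, a probabilistic classifier, and accuracy checked within the extreme deciles of predicted probability). The file name, column names, and the use of logistic regression are assumptions for illustration.

    ```python
    # Sketch of the general approach described above: fit a probabilistic
    # classifier on a training split, then check accuracy separately for the
    # dogs with the lowest and highest predicted probabilities of success.
    # File name, column names, and the classifier choice are assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("dog_assessments.csv")  # hypothetical behavioural-score table
    X = df.drop(columns=["success"])         # predictor scores (e.g., questionnaire items)
    y = df["success"]                        # 1 = graduated, 0 = released

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]  # predicted P(success) per dog
    y_true = y_test.to_numpy()

    # Accuracy within the bottom and top 10% of predicted probabilities.
    lo, hi = np.quantile(proba, [0.10, 0.90])
    for label, mask in [("lowest 10%", proba <= lo), ("highest 10%", proba >= hi)]:
        acc = ((proba[mask] >= 0.5).astype(int) == y_true[mask]).mean()
        print(f"accuracy, {label} of predicted probabilities: {acc:.2f}")
    ```

    Reporting accuracy separately for the extreme deciles mirrors the screening use case: a model that is most reliable at the tails can confidently flag the least promising dogs early, even if its overall accuracy is modest.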
