Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This fileset contains a preprint version of the conference paper (.pdf), presentation slides (.pptx), and the dataset(s) and validation schema(s) for the IDCC 2019 (Melbourne) conference paper: The Red Queen in the Repository: metadata quality in an ever-changing environment. Datasets and schemas are in .xml, .xsd, Excel (.xlsx) and .csv (two files representing two different sheets in the .xlsx file). The validationSchemas.zip holds the additional validation schemas (.xsd) that were not found in the schemaLocations of the metadata xml-files to be validated. The schemas must all be placed in the same folder, and are to be used for validating the Dataverse dcterms records (with metadataDCT.xsd) and the Zenodo oai_datacite feeds (with schema.datacite.org_oai_oai-1.0_oai.xsd), respectively. In the latter case, a simpler approach may be to replace the incorrect URL "http://schema.datacite.org/oai/oai-1.0/ oai_datacite.xsd" in the schemaLocation of these xml-files with the correct value: schemaLocation="http://schema.datacite.org/oai/oai-1.0/ http://schema.datacite.org/oai/oai-1.0/oai.xsd", as has already been done in the sample files here. The sample file folders testDVNcoll.zip (Dataverse), testFigColl.zip (Figshare) and testZenColl.zip (Zenodo) contain all the metadata files tested and validated that are registered in the spreadsheet with objectIDs.
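As a concrete illustration of this validation setup, here is a minimal sketch using Python's lxml (a tooling assumption; the record file names below are hypothetical, while metadataDCT.xsd and the schemaLocation strings come from this description):

```python
from lxml import etree

# All .xsd files must sit in the same folder so that relative
# imports/includes inside the schemas can be resolved.
schema = etree.XMLSchema(etree.parse("metadataDCT.xsd"))

# Validate one Dataverse dcterms record (file name hypothetical).
doc = etree.parse("dvn-record.xml")
if not schema.validate(doc):
    for err in schema.error_log:
        print(err.line, err.message)

# For the Zenodo oai_datacite feeds, the incorrect schemaLocation can
# instead be patched in place before validating (URLs as given above).
text = open("zenodo-feed.xml", encoding="utf-8").read()
text = text.replace(
    "http://schema.datacite.org/oai/oai-1.0/ oai_datacite.xsd",
    "http://schema.datacite.org/oai/oai-1.0/ "
    "http://schema.datacite.org/oai/oai-1.0/oai.xsd",
)
open("zenodo-feed-fixed.xml", "w", encoding="utf-8").write(text)
```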
In the case of Zenodo, one original file feed, zen2018oai_datacite3orig-https%20_zenodo.org_oai2d%20verb=ListRecords%26metadataPrefix=oai_datacite%26from=2018-11-29%26until=2018-11-30.xml, is also supplied to show what was necessary to change in order to perform validation as indicated in the paper.
For Dataverse, a corrected version of a file, dvn2014ddi-27595Corr_https%20_dataverse.harvard.edu_api_datasets_export%20exporter=ddi%26persistentId=doi%253A10.7910_DVN_27595Corr.xml, is also supplied in order to show the changes it would take to make the file validate without error.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024 |
HISTORICAL DATA | 2019 - 2024 |
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
MARKET SIZE 2023 | 4.91 (USD Billion) |
MARKET SIZE 2024 | 5.47 (USD Billion) |
MARKET SIZE 2032 | 12.9 (USD Billion) |
SEGMENTS COVERED | Deployment Mode, Organization Size, Industry Vertical, Form Type, Key Features, Regional |
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA |
KEY MARKET DYNAMICS | Rising adoption of cloud-based solutions; growing need for efficient data management; proliferation of mobile devices; increasing regulatory compliance requirements; emergence of advanced technologies like AI and ML |
MARKET FORECAST UNITS | USD Billion |
KEY COMPANIES PROFILED | Microsoft, K2, SAP SE, Nintex, Salesforce, Hyland Software, IBM, Laserfiche, M-Files, Paperless Process Management, Alfresco Software, ServiceNow, ProcessMaker, Oracle, Adobe |
MARKET FORECAST PERIOD | 2024 - 2032 |
KEY MARKET OPPORTUNITIES | Automation of data entry; improved data security; streamlined workflows; enhanced customer experience; cost savings |
COMPOUND ANNUAL GROWTH RATE (CAGR) | 11.32% (2024 - 2032) |
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This document provides a clear and practical guide to understanding missing data mechanisms, including Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR). Through real-world scenarios and examples, it explains how different types of missingness impact data analysis and decision-making. It also outlines common strategies for handling missing data, including deletion techniques and imputation methods such as mean imputation, regression, and stochastic modeling. Designed for researchers, analysts, and students working with real-world datasets, this guide helps ensure statistical validity, reduce bias, and improve the overall quality of analysis in fields like public health, behavioral science, social research, and machine learning.
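Where those strategies need to be operationalized, a minimal sketch in Python (assuming pandas and scikit-learn; the column names and values are made up for demonstration) might look like this:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical dataset with missing values in two numeric columns.
df = pd.DataFrame({
    "age": [23, 31, np.nan, 45, 29],
    "score": [88.0, np.nan, 75.0, 92.0, np.nan],
})

# Mean imputation: simple, but shrinks variance and can bias estimates
# unless the data are MCAR.
mean_imputed = pd.DataFrame(
    SimpleImputer(strategy="mean").fit_transform(df), columns=df.columns
)

# Regression-based (iterative) imputation; sample_posterior=True adds
# noise, approximating stochastic imputation and more defensible under MAR.
stochastic = pd.DataFrame(
    IterativeImputer(sample_posterior=True, random_state=0).fit_transform(df),
    columns=df.columns,
)
print(mean_imputed, stochastic, sep="\n")
```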
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Recently, online testing has become an increasingly important instrument in developmental research, in particular since the COVID-19 pandemic made in-lab testing impossible. However, online testing comes with two substantial challenges. First, it is unclear how valid the results of online studies really are. Second, implementing online studies can be costly and/or require profound coding skills. This article addresses the validity of an online testing approach that is low-cost and easy to implement: the experimenter shares test materials such as videos or presentations via video chat and interactively moderates the test session. To validate this approach, we compared children’s performance on a well-established task, the change-of-location false belief task, in an in-lab and an online test setting. In two studies, 3- and 4-year-olds received online implementations of the false belief version (Study 1) and the false and true belief versions of the task (Study 2). Children’s performance in these online studies was compared to data from matching tasks collected in the context of in-lab studies. Results revealed that the typical developmental pattern of performance in these tasks found in in-lab studies could be replicated with the novel online test procedure. These results suggest that the proposed method, which is both low-cost and easy to implement, provides a valid alternative to classical in-person test settings.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains information on what papers and concepts researchers find relevant to map domain specific research output to the 17 Sustainable Development Goals (SDGs).
Sustainable Development Goals are the 17 global challenges set by the United Nations. Within each of the goals, specific targets and indicators are defined to monitor the progress of reaching those goals by 2030. In an effort to capture how research is contributing to move the needle on those challenges, we earlier made an initial classification model that enables quick identification of which research output is related to which SDG. (This Aurora SDG dashboard is the initial outcome as proof of practice.)
In order to validate our current classification model (on soundness/precision and completeness/recall) and to receive input for improvement, a survey was conducted to capture expert knowledge from senior researchers in their research domains related to the SDGs. The survey was open to the world but mainly distributed to researchers from the Aurora Universities Network; it ran from October 2019 till January 2020 and captured data from 244 respondents in Europe and North America.
Seventeen surveys were created from a single template, with the content made specific to each SDG. The content of each survey, such as a random set of publications, was ingested from a data provisioning server that had collected research output metadata for each SDG in an earlier stage. It took a respondent on average one hour to complete the survey. The survey data can be used for validating current SDG classification models, and for optimizing future ones, for mapping research output to the SDGs.
The survey contains the following questions (see inside dataset for exact wording):
In the dataset root you'll find the following folders and files:
In /04-processed-data/ you'll find the following files in each SDG sub-folder (a sketch of how the accepted/rejected publication lists could feed a precision check follows the list):
- SDG-survey-questions.doc: This file contains the survey questions.
- SDG-survey-respondents-per-sdg.csv: Basic information about the survey and responses.
- SDG-survey-city-heatmap.csv: Origin of the respondents per SDG survey.
- SDG-survey-suggested-publications.txt: Formatted list of research papers researchers have uploaded or listed because they want to see them back in the result-set for this SDG.
- SDG-survey-suggested-publications-with-eid-match.csv: Same as above, only matched with an EID. EIDs are matched by Elsevier's internal fuzzy matching algorithm. Only papers with high confidence are shown with a matched EID, referring to a record in Scopus.
- SDG-survey-selected-publications-accepted.csv: Based on our previous result set of papers, researchers were presented with random samples and selected the papers they believe represent this SDG (TRUE=accepted).
- SDG-survey-selected-publications-rejected.csv: Based on our previous result set of papers, researchers were presented with random samples and selected the papers they believe do not represent this SDG (FALSE=rejected).
- SDG-survey-selected-keywords.csv: Based on our previous result set of papers, we presented researchers with the keywords in the metadata of those papers, and they selected the keywords they believe represent this SDG.
- SDG-survey-unselected-keywords.csv: As "selected-keywords", this is the list of keywords that respondents have not selected to represent this SDG.
- SDG-survey-suggested-keywords.csv: List of keywords researchers suggest to use to find papers related to this SDG.
- SDG-survey-glossaries.csv: List of glossaries, containing keywords, that researchers suggest to use to find papers related to this SDG.
- SDG-survey-selected-journals.csv: Based on our previous result set of papers, we presented researchers with the journals in the metadata of those papers, and they selected the journals they believe represent this SDG.
- SDG-survey-unselected-journals.csv: As "selected-journals", this is the list of journals that respondents have not selected to represent this SDG.
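As mentioned above, one use of the accepted/rejected lists is a precision check of the classification model. A minimal sketch in Python (the CSV column structure is an assumption; the actual headers may differ):

```python
import pandas as pd

# Expert-accepted and expert-rejected papers for one SDG survey.
accepted = pd.read_csv("SDG-survey-selected-publications-accepted.csv")
rejected = pd.read_csv("SDG-survey-selected-publications-rejected.csv")

# Precision proxy: of the papers the model placed in this SDG's result
# set and that experts reviewed, the share experts accepted.
n_accepted = len(accepted)
n_rejected = len(rejected)
precision = n_accepted / (n_accepted + n_rejected)
print(f"expert-judged precision: {precision:.2%}")
```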
Open Database License (ODbL) v1.0 https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
Validation points, validation area, ground truth coverage, SPOT 6 avalanche outlines, Sentinel-1 avalanche outlines, Sentinel-2 avalanche outlines, and Davos avalanche mapping (DAvalMap) avalanche outlines as shapefiles, plus a detailed attribute description (DataDescription_EvalSatMappingMethods.pdf). Coordinate system: CH1903+_LV95. The generation of this dataset is described in detail in: Hafner, E. D., Techel, F., Leinss, S., and Bühler, Y.: Mapping avalanches with satellites – evaluation of performance and completeness, The Cryosphere, https://doi.org/10.5194/tc-2020-272, 2021.
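For readers wanting to compare the outline layers, a small sketch using geopandas (a tooling assumption; the shapefile names below are hypothetical stand-ins for the files in this dataset):

```python
import geopandas as gpd

# Hypothetical file names; substitute the actual shapefile names.
spot6 = gpd.read_file("SPOT6_avalanche_outlines.shp")
s1 = gpd.read_file("Sentinel1_avalanche_outlines.shp")

# The dataset uses CH1903+/LV95 (EPSG:2056); reproject if needed.
s1 = s1.to_crs(spot6.crs)

# Area of overlap between the two sets of mapped avalanche outlines.
overlap = gpd.overlay(spot6, s1, how="intersection")
print("overlap area [m^2]:", overlap.geometry.area.sum())
```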
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This is the readme for the supplemental data for our ICDAR 2019 paper.
You can read our paper via IEEE here: https://ieeexplore.ieee.org/document/8978202
If you found this dataset useful, please consider citing our paper:
@inproceedings{DBLP:conf/icdar/MorrisTE19,
author = {David Morris and
Peichen Tang and
Ralph Ewerth},
title = {A Neural Approach for Text Extraction from Scholarly Figures},
booktitle = {2019 International Conference on Document Analysis and Recognition,
{ICDAR} 2019, Sydney, Australia, September 20-25, 2019},
pages = {1438--1443},
publisher = {{IEEE}},
year = {2019},
url = {https://doi.org/10.1109/ICDAR.2019.00231},
doi = {10.1109/ICDAR.2019.00231},
timestamp = {Tue, 04 Feb 2020 13:28:39 +0100},
biburl = {https://dblp.org/rec/conf/icdar/MorrisTE19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
This work was financially supported by the German Federal Ministry of Education and Research (BMBF) and European Social Fund (ESF) (InclusiveOCW project, no. 01PE17004).
We used different sources of data for testing, validation, and training. Our testing set was assembled from the work by Böschen et al. that we cited. We excluded the DeGruyter dataset from it and used that as our validation dataset.
These datasets contain a readme with license information. Further information about the associated project can be found in the authors' published work we cited: https://doi.org/10.1007/978-3-319-51811-4_2
The DeGruyter dataset does not include the labeled images due to license restrictions. As of writing, the images can still be downloaded from DeGruyter via the links in the readme. Note that depending on what program you use to strip the images out of the PDF they are provided in, you may have to re-number the images.
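For instance, a small renumbering script might look like the following (a sketch, assuming the extracted images are PNGs whose lexicographic order matches the intended numbering; the directory and naming scheme are hypothetical):

```python
from pathlib import Path

# Rename extracted images to a zero-padded sequential scheme,
# e.g. fig_001.png, fig_002.png, ...
images = sorted(Path("extracted_images").glob("*.png"))
for i, path in enumerate(images, start=1):
    path.rename(path.with_name(f"fig_{i:03d}.png"))
```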
We used label_generator's generated dataset, which the author made available on a requester-pays amazon s3 bucket. We also used the Multi-Type Web Images dataset, which is mirrored here.
We have made our code available in code.zip. We will upload code, announce further news, and field questions via the GitHub repo.
Our text detection network is adapted from Argman's EAST implementation. The EAST/checkpoints/ours subdirectory contains the trained weights we used in the paper.
We used a tesseract script to run text extraction from detected text rows. This is inside code.tar as text_recognition_multipro.py.
We used a Java tool provided by Falk Böschen and adapted it to our file structure. We included this as evaluator.jar.
Parameter sweeps are automated by param_sweep.rb. This file also shows how to invoke all of these components.
Summary
Background: The COVID-19 pandemic has developed rapidly and the ability to stratify the most vulnerable patients is vital. However, routinely used severity scores are often low at diagnosis, even in non-survivors. Therefore, clinical prediction models for mortality are urgently required.
Methods: We developed and internally validated a multivariable logistic regression model to predict inpatient mortality in COVID-19 positive patients using data collected retrospectively from Tongji Hospital, Wuhan (299 patients). External validation was conducted using a retrospective cohort from Jinyintan Hospital, Wuhan (145 patients). Nine variables commonly measured in these acute settings were considered for model development, including age, biomarkers and comorbidities. Backwards stepwise selection and bootstrap resampling were used for model development and internal validation. We assessed discrimination via the C statistic, and calibration using calibration-in-the-large, calibration slopes and plots.
Findings: The final model included age, lymphocyte count, lactate dehydrogenase and SpO2 as independent predictors of mortality. Discrimination of the model was excellent in both internal (c=0·89) and external (c=0·98) validation. Internal calibration was excellent (calibration slope=1). External validation showed some over-prediction of risk in low-risk individuals and under-prediction of risk in high-risk individuals prior to recalibration. Recalibration of the intercept and slope led to excellent performance of the model in independent data.
Interpretation: COVID-19 is a new disease and behaves differently from common critical illnesses. This study provides a new prediction model to identify patients with lethal COVID-19. Its practical reliance on commonly available parameters should improve usage of limited healthcare resources and patient survival rate.
Funding: This study was supported by the following funding: Key Research and Development Plan of Jiangsu Province (BE2018743 and BE2019749), National Institute for Health Research (NIHR) (PDF-2018-11-ST2-006), British Heart Foundation (BHF) (PG/16/65/32313) and Liverpool University Hospitals NHS Foundation Trust in the UK.
Research in context
Evidence before this study: Since the outbreak of COVID-19, there has been a pressing need for development of a prognostic tool that is easy for clinicians to use. Recently, a Lancet publication showed that in a cohort of 191 patients with COVID-19, age, SOFA score and D-dimer measurements were associated with mortality. No other publication involving prognostic factors or models has been identified to date.
Added value of this study: In our cohorts of 444 patients from two hospitals, SOFA scores were low in the majority of patients on admission. The relevance of D-dimer could not be verified, as it is not included in routine laboratory tests. In this study, we have established a multivariable clinical prediction model using a development cohort of 299 patients from one hospital. After backwards selection, four variables, including age, lymphocyte count, lactate dehydrogenase and SpO2, remained in the model to predict mortality. This has been validated internally and externally with a cohort of 145 patients from a different hospital. Discrimination of the model was excellent in both internal (c=0·89) and external (c=0·98) validation. Calibration plots showed excellent agreement between predicted and observed probabilities of mortality after recalibration of the model to account for underlying differences in the risk profile of the datasets. This demonstrated that the model is able to make reliable predictions in patients from different hospitals. In addition, these variables agree with pathological mechanisms and the model is easy to use in all types of clinical settings.
Implication of all the available evidence: After further external validation in different countries, the model will enable better risk stratification and more targeted management of patients with COVID-19. With the nomogram, this model, which is based on readily available parameters, can help clinicians to stratify COVID-19 patients on diagnosis to use limited healthcare resources effectively and improve patient outcome.
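To illustrate the kind of model described (not the authors' actual code or coefficients), here is a minimal sketch in Python with scikit-learn, trained on synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 299  # size of the development cohort in the description

# Synthetic stand-ins for the four retained predictors:
# age [years], lymphocyte count [10^9/L], LDH [U/L], SpO2 [%].
X = np.column_stack([
    rng.normal(60, 15, n),
    rng.normal(1.0, 0.4, n),
    rng.normal(300, 120, n),
    rng.normal(94, 4, n),
])
# Synthetic mortality outcome loosely tied to the predictors.
logit = (0.05 * (X[:, 0] - 60) - 1.5 * (X[:, 1] - 1)
         + 0.004 * (X[:, 2] - 300) - 0.2 * (X[:, 3] - 94))
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
# For a binary outcome, the C statistic equals the ROC AUC.
print("C statistic:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```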
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Vandal Fire Intelligence (VFI): Validation Part 3 (2025)
This document is part of a two-part research package submitted to Zenodo. It is supported by the companion Jupyter Notebook `vf-validation-part3.ipynb`, which contains all source code, data diagnostics, and visualizations referenced herein. The PDF version of that notebook is `vf-validation-part3.pdf`.
Contact: Jeffrey Logan, Earth and Spatial Sciences, University of Idaho
Email: jeffrey.logan@uidaho.edu
ORCID: 0009-0001-9415-5809
VFI is a high-performance, Dask-powered platform developed to deliver dynamic, scalable flammability predictions to serve hyperlocal needs and global flammability analyses (Logan, Smith 2024). VFI informs fire managers, climate researchers, and NASA/NSF-aligned programs about evolving flammability risk patterns under changing environmental conditions. This builds on other documented efforts to explore single climate variables associated with wildfire activity in the Columbia River Basin Area of the United States from 1979-2025, such as
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: As community-engaged research (CEnR), community-based participatory research (CBPR) and patient-engaged research (PEnR) have become increasingly recognized as valued research approaches in the last several decades, there is a need for pragmatic and validated tools to assess effective partnering practices that contribute to health and health equity outcomes. This article reports on the co-creation of an actionable pragmatic survey, shortened from validated metrics of partnership practices and outcomes.
Methods: We pursued a triple aim of preserving content validity, psychometric properties, and importance to stakeholders of items, scales, and constructs from a previously validated measure of CBPR/CEnR processes and outcomes. There were six steps in the methods: (a) established validity and shortening objectives; (b) used a conceptual model to guide decisions; (c) preserved content validity and importance; (d) preserved psychometric properties; (e) justified the selection of items and scales; and (f) validated the short-form version. Twenty-one CBPR/CEnR experts (13 academic and 8 community partners) completed a survey and participated in two focus groups to identify content validity and importance of the original 93 items.
Results: The survey and focus group process resulted in the creation of the 30-item Partnering for Health Improvement and Research Equity (PHIRE) survey. Confirmatory factor analysis and a structural equation model of the original data set resulted in the validation of eight higher-order scales with good internal consistency and structural relationships (TLI > 0.98 and SRMR < 0.02). A reworded version of the PHIRE was administered to an additional sample, demonstrating good reliability and construct validity.
Conclusion: This study demonstrates that the PHIRE is a reliable instrument with construct validity compared to the larger version from which it was derived. The PHIRE is a straightforward and easy-to-use tool, for a range of CBPR/CEnR projects, that can provide benefit to partnerships by identifying actionable changes to their partnering practices to reach their desired research and practical outcomes.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This research estimated the uncertainty in gross primary production (GPP) at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements of net ecosystem exchange (NEE) at the Speulderbos forest site, The Netherlands. This provided relevant data for the calibration and validation of process-based simulators. A file list and description of the data (i.e., metadata) in each file are provided in the uploaded pdf file "FluxPartitioning_NEEdata_codebook_Version1.pdf". This pdf file also provides a brief description of the methods adopted in this research.
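For context, one common form of the NRH light-response curve (not necessarily the exact parameterization used in this dataset) can be fitted with scipy; the data below are synthetic and purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def nrh(par, alpha, gpp_max, theta):
    """Non-rectangular hyperbola light-response curve.

    par: photosynthetically active radiation; alpha: initial slope;
    gpp_max: asymptotic GPP; theta: curvature (0 < theta < 1).
    """
    s = alpha * par + gpp_max
    return (s - np.sqrt(s**2 - 4.0 * theta * alpha * par * gpp_max)) / (2.0 * theta)

# Synthetic half-hourly PAR/GPP values for demonstration only.
rng = np.random.default_rng(1)
par = np.linspace(0.0, 2000.0, 48)
gpp_obs = nrh(par, 0.05, 30.0, 0.9) + rng.normal(0.0, 1.0, par.size)

popt, pcov = curve_fit(
    nrh, par, gpp_obs,
    p0=[0.05, 25.0, 0.8],
    bounds=([1e-4, 1.0, 0.01], [1.0, 100.0, 0.99]),
)
# The square root of the covariance diagonal gives one-sigma
# uncertainties for the fitted light-response parameters.
print(popt, np.sqrt(np.diag(pcov)))
```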
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: Objective measures of emotional valence in young children are rare, but recent work has employed motion depth sensor imaging to measure young children's emotional expression via changes in their body posture. This method efficiently captures children's emotional valence, moving beyond self-reports or caregiver reports, and avoiding extensive manual coding, e.g., of children's facial expressions. Moreover, it can be flexibly and non-invasively used in interactive study paradigms, thus offering an advantage over other physiological measures of emotional valence.
Method: Here, we discuss the merits of studying body posture in developmental research and showcase its use in six studies. To this end, we provide a comprehensive validation in which we map the measures of children's posture onto the constructs of emotional valence and arousal. Using body posture data aggregated from six studies (N = 466; Mage = 5.08; range: 2 years, 5 months to 6 years, 2 months; 220 girls), coders rated children's expressed emotional valence and arousal, and provided a discrete emotion label for each child.
Results: Emotional valence was positively associated with children's change in chest height and chest expansion: children with more upright upper-body postures were rated as expressing a more positive emotional valence, whereas the relation between emotional arousal and changes in body posture was weak.
Discussion: These data add to existing evidence that changes in body posture reliably reflect emotional valence. They thus provide an empirical foundation to conduct research on children's spontaneously expressed emotional valence using the automated and efficient tool of body posture analysis.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: This research describes the development and validation of the CARES Climate Survey, a 22-item measure designed to assess interpersonal dimensions of work-unit climates. Dimensions of work-unit climates are identified through work-unit member perceptions and include civility, interpersonal accountability, conflict resolution, and institutional harassment responsiveness.
Methods: Two samples (N = 1,384; N = 868) of academic researchers, including one from the North American membership of the American Geophysical Union (AGU), and one from a large research-intensive university, responded to the CARES and additional measures via an online survey.
Results: We demonstrate content validity of the CARES measure and confirm structural validity through exploratory and confirmatory factor analyses, which yielded four dimensions of interpersonal climate. In addition, we confirm the CARES internal reliability, construct validity, and excellent sub-group invariance.
Conclusions: The CARES is a brief, psychometrically sound instrument that can be used by researchers, institutional leaders, and other practitioners to assess interpersonal climates in organizational work-units.
Originality/value: This is the first study to develop and validate such a measure of interpersonal climates specifically in research-intensive organizations, using rigorous psychometric methods, grounded in both theory and prior research on work-unit climates.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background and purpose: Multiple sclerosis (MS) is associated with high rates of unemployment, and barriers to work are essential to identify in the regular follow-up of these people. The current study aimed to culturally adapt and evaluate the psychometric properties of the Norwegian version of the Multiple Sclerosis Work Difficulties Questionnaire-23 (MSWDQ-23).
Methods: Following backward and forward translation, the Norwegian version of the MSWDQ-23 (MSWDQ-23NV) was completed by 229 people with MS. Validity was evaluated through confirmatory factor analysis and by associating scores with employment status, disability, and health-related quality of life outcome measures. Convergent validity was checked by correlating MSWDQ-23 scores with alternative study measures. Internal consistencies were examined by Cronbach's alpha.
Results: A good fit for the data was demonstrated for the MSWDQ-23NV in confirmatory factor analysis, with excellent internal consistencies also demonstrated for the full scale and its subscales (physical barriers, psychological/cognitive barriers, external barriers). The MSWDQ-23NV subscales were related in the expected direction to health-related quality of life outcome measures. While higher scores on the physical barriers subscale were strongly associated with higher levels of disability and progressive MS types, higher scores on all subscales were associated with not working in the past year.
Discussion: The Norwegian MSWDQ-23 is an internally consistent and valid instrument to measure perceived work difficulties in persons with all types of MS in a Norwegian-speaking population. The MSWDQ-23NV can be considered a useful tool for health care professionals to assess self-reported work difficulties in persons with MS. The Norwegian MSWDQ-23 scale should be examined for test-retest reliability and considered for implementation in the regular follow-up at MS outpatient clinics in Norway to support employment maintenance.
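As a reminder of the internal-consistency statistic used here, a minimal Cronbach's alpha computation in Python (with a made-up item-response matrix, not the study data) could look like:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses: 229 respondents, 23 items scored 1-5.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(229, 1))
noise = rng.integers(-1, 2, size=(229, 23))
items = np.clip(base + noise, 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")
```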
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: Past studies have mainly used Stroop tasks to induce mental fatigue in soccer. However, due to the non-sport-specificity of these tasks, their transferability to the real-life effects of mental fatigue in soccer has been questioned. The study's aim was to investigate the effects of two different versions (mentally less vs. mentally more demanding) of a soccer passing task in the so-called Footbonaut on cognitive and soccer-specific performance.
Methods: A randomized, counterbalanced experimental within-subjects design was employed (N = 27). We developed two different versions of the soccer passing task in the Footbonaut: a mentally more demanding decision-making and inhibition task in the experimental condition, and a mentally less demanding standard task of the Footbonaut in the control condition.
Results: Participants showed significantly worse soccer-specific performance in the experimental condition compared to the control condition. No corresponding effects were revealed in cognitive performance.
Discussion: The findings suggest that cognitive-motor interference induced by 30-min Footbonaut technology-based training may induce mental fatigue in soccer players. Future studies should consider developing mentally less demanding yet comparable control tasks.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction: Center of pressure (COP) parameters are frequently assessed to analyze movement disorders in humans and animals. Methodological discrepancies are a major concern when evaluating conflicting study results. This study aimed to assess the inter-observer reliability and test-retest reliability of body COP parameters, including mediolateral and craniocaudal sway, total length, average speed and support surface, in healthy dogs during quiet standing on a pressure plate. Additionally, it sought to determine the minimum number of trials and the shortest duration necessary for accurate COP assessment.
Materials and methods: Twelve clinically healthy dogs underwent three repeated trials, which were analyzed by three independent observers to evaluate inter-observer reliability. Test-retest reliability was assessed across the three trials per dog, each lasting 20 seconds (s). Selected 20 s measurements were analyzed in six different ways: 1 × 20 s, 1 × 15 s, 2 × 10 s, 4 × 5 s, 10 × 2 s, and 20 × 1 s.
Results: Results demonstrated excellent inter-observer reliability (ICC ≥ 0.93) for all COP parameters. However, only 5 s, 10 s, and 15 s measurements achieved the reliability threshold (ICC ≥ 0.60) for all evaluated parameters.
Discussion: The shortest repeatable durations were obtained from either two 5 s measurements or a single 10 s measurement. Most importantly, statistically significant differences were observed between the different measurement durations, which underlines the need to standardize measurement times in COP analysis. The results of this study aid scientists in implementing standardized methods, thereby easing comparisons across studies and enhancing the reliability and validity of research findings in veterinary medicine.
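For reference, the two-way random, single-measure ICC(2,1) often reported for inter-observer reliability can be computed from an (n subjects × k raters) matrix as sketched below (illustrative data, not from this study):

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, k_raters) matrix of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    # Two-way ANOVA decomposition without replication.
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)            # mean square: subjects
    msc = ss_cols / (k - 1)            # mean square: raters
    mse = ss_err / ((n - 1) * (k - 1)) # mean square: error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative example: 12 dogs rated by 3 observers.
rng = np.random.default_rng(0)
true_vals = rng.normal(50, 10, size=(12, 1))
ratings = true_vals + rng.normal(0, 2, size=(12, 3))
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```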
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Bosco, C., de Rigo, D., Dewitte, O., 2014. Visual Validation of the e-RUSLE Model Applied at the Pan-European Scale. Scientific Topics Focus 1, MRI-11a13. Notes Transdiscipl. Model. Env., Maieutike Research Initiative. https://doi.org/10.6084/m9.figshare.844627 PDF: http://purl.org/mtv/STF/pdf/1843225
Version: DRAFT 0.4.2. This is a preliminary version. Any future version will be accessible from the same DOI code (https://doi.org/10.6084/m9.figshare.844627). The views expressed are those of the authors and may not be regarded as stating an official position of the mentioned organisations.
Visual Validation of the e-RUSLE Model Applied at the Pan-European Scale
Claudio Bosco ¹ ² ⁴, Daniele de Rigo ² ³ ⁴ and Olivier Dewitte ⁵
1 Loughborough University, Department of Civil and Building Engineering, Loughborough, United Kingdom
2 Joint Research Centre of the European Commission, Institute for Environment and Sustainability, Ispra, Italy
3 Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milano, Italy
4 Maieutike Research Initiative, Milano, Italy
5 Royal Museum for Central Africa, Department of Earth Sciences, Tervuren, Belgium
Validating soil erosion estimates at regional or continental scale is still extremely challenging. The common procedures are not technically and financially applicable for large spatial extents; despite this, some options are still applicable. For validating the European map of soil erosion by water calculated using the approach proposed in Bosco et al. [1], we applied alternative qualitative methods based on visual evaluation. The 1 km² map was validated through a visual and categorical comparison between modelled and observed soil erosion. A procedure employing high-resolution Google Earth images and pictures as validation data is shown here. The resolution of the images, which has rapidly increased during the last years, allows for a visual qualitative estimation of local soil erosion rates. A cluster of 3×3 km² around each of 85 selected points was analysed by the authors. The results corroborate the map obtained by applying the e-RUSLE model: 63% of a random sample of 732 grid cells are accurate, and 83% at least moderately accurate (bootstrap p ≤ 0.05). For each of the 85 clusters, the complete details of the validation, also containing the comments of the evaluators and the geo-location of the analysed areas, have been reported.
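To illustrate the kind of bootstrap check behind such a statement, a sketch (with the cell-level accuracy labels as made-up stand-ins, not the study's actual evaluations) might be:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in labels for 732 evaluated grid cells: 1 = accurate, 0 = not.
cells = rng.random(732) < 0.63

# Percentile bootstrap of the accuracy proportion.
boots = np.array([
    rng.choice(cells, size=cells.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"accuracy = {cells.mean():.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```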
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The COPE Inventory (Carver et al., 1989) is the most frequently used measure of coping; yet previous studies examining its factor structure yielded mixed results. The purpose of the current study, therefore, was to validate the factor structure of the COPE Inventory in a representative sample of over 2,000 adults in Slovakia. Our second goal was to evaluate the external validity of the COPE Inventory, which has not been done before. First, we performed an exploratory factor analysis (EFA) with half of the sample. Subsequently, we performed a confirmatory factor analysis with the second half of the sample. Both factor analyses with 15-factor solutions showed excellent fit with the data. Additionally, we performed a hierarchical factor analysis with fifteen first-order factors (acceptance, active coping, behavioral disengagement, denial, seeking emotional support, humor, seeking instrumental support, mental disengagement, planning, positive reinterpretation, religion, restraint, substance use, suppression of competing activities, and venting) and three second-order factors (active coping, social emotional coping, and avoidance coping), which showed good fit with the data. Moreover, the COPE Inventory’s external validity was evaluated using consensual qualitative research (CQR) analysis on data collected by in-depth interviews. Categories of coping created using CQR corresponded with all COPE first-order factors. Moreover, we identified two additional first-order factors that were not present in the COPE Inventory: self-care and care for others. Our study shows that the Slovak translation of the COPE Inventory is a reliable, externally valid, and well-structured instrument for measuring coping in the Slovak population.
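A bare-bones illustration of the EFA step in Python (scikit-learn's FactorAnalysis with varimax rotation; the item responses here are random placeholders, not the Slovak sample, and the 60-item dimensioning is an assumption):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder item responses: 1,000 respondents x 60 items.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))

# Extract a 15-factor solution, mirroring the first-order structure.
fa = FactorAnalysis(n_components=15, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings: (n_items, n_factors); inspect which items load on which factor.
loadings = fa.components_.T
print(loadings.shape)
```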
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Systematic reviews play a crucial role in evidence-based practices as they consolidate research findings to inform decision-making. However, it is essential to assess the quality of systematic reviews to prevent biased or inaccurate conclusions. This paper underscores the importance of adhering to recognized guidelines, such as the PRISMA statement and Cochrane Handbook. These recommendations advocate for systematic approaches and emphasize the documentation of critical components, including the search strategy and study selection. A thorough evaluation of methodologies, research quality, and overall evidence strength is essential during the appraisal process. Identifying potential sources of bias and review limitations, such as selective reporting or trial heterogeneity, is facilitated by tools like the Cochrane Risk of Bias and the AMSTAR 2 checklist. The assessment of included studies emphasizes formulating clear research questions and employing appropriate search strategies to construct robust reviews. Relevance and bias reduction are ensured through meticulous selection of inclusion and exclusion criteria. Accurate data synthesis, including appropriate data extraction and analysis, is necessary for drawing reliable conclusions. Meta-analysis, a statistical method for aggregating trial findings, improves the precision of treatment impact estimates. Systematic reviews should consider crucial factors such as addressing biases, disclosing conflicts of interest, and acknowledging review and methodological limitations. This paper aims to enhance the reliability of systematic reviews, ultimately improving decision-making in healthcare, public policy, and other domains. It provides academics, practitioners, and policymakers with a comprehensive understanding of the evaluation process, empowering them to make well-informed decisions based on robust data.
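As a concrete illustration of the meta-analytic step mentioned above, fixed-effect inverse-variance pooling can be written in a few lines (the effect sizes and standard errors below are invented for demonstration):

```python
import numpy as np

# Invented study-level effect estimates (e.g., log odds ratios) and SEs.
effects = np.array([0.30, 0.10, 0.25, 0.40, 0.15])
ses = np.array([0.12, 0.20, 0.15, 0.25, 0.10])

# Fixed-effect inverse-variance weighting: precise studies count more.
w = 1.0 / ses**2
pooled = (w * effects).sum() / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```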
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Assistance dogs can greatly improve the lives of people with disabilities. However, a large proportion of dogs bred and trained for this purpose are deemed unable to successfully fulfill the behavioral demands of this role. Often, this determination is not finalized until weeks or even months into training, when the dog is close to 2 years old. Thus, there is an urgent need to develop objective selection protocols that can identify dogs most and least likely to succeed, from early in the training process. We assessed the predictive validity of two candidate measures employed by Canine Companions for Independence (CCI), a national assistance dog organization headquartered in Santa Rosa, CA. For more than a decade, CCI has collected data on their population using the Canine Behavioral Assessment and Research Questionnaire (C-BARQ) and a standardized temperament assessment known internally as the In-For-Training (IFT) test, which is conducted at the beginning of professional training. Data from both measures were divided into independent training and test datasets, with the training data used for variable selection and cross-validation. We developed three predictive models in which we predicted success or release from the training program using C-BARQ scores (N = 3,569), IFT scores (N = 5,967), and a combination of scores from both instruments (N = 2,990). All three final models performed significantly better than the null expectation when applied to the test data, with overall accuracies ranging from 64 to 68%. Model predictions were most accurate for dogs predicted to have the lowest probability of success (ranging from 85 to 92% accurate for dogs in the lowest 10% of predicted probabilities), and moderately accurate for identifying the dogs most likely to succeed (ranging from 62 to 72% for dogs in the top 10% of predicted probabilities). Combining C-BARQ and IFT predictors into a single model did not improve overall accuracy, although it did improve accuracy for dogs in the lowest 20% of predicted probabilities. Our results suggest that both types of assessments have the potential to be used as powerful screening tools, thereby allowing more efficient allocation of resources in assistance dog selection and training.
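To sketch the style of analysis described (train/test split, predicted probabilities, and accuracy within the lowest decile of predicted success), here is a minimal Python example on synthetic stand-in data rather than the CCI records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 3569, 10  # C-BARQ model sample size from the text; 10 dummy scores

X = rng.normal(size=(n, p))
# Synthetic success outcome loosely driven by the first two scores.
y = rng.random(n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.6 * X[:, 1])))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

# Accuracy among dogs in the lowest 10% of predicted success
# probability (i.e., predicted release from the program).
cutoff = np.quantile(proba, 0.10)
low = proba <= cutoff
print("accuracy in lowest decile:", (~y_te[low]).mean())
```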