Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Importance
To efficiently perform bimanual daily tasks, bimanual coordination is needed. Bimanual coordination is the interaction between an individual's hands, which may be impaired post-stroke; however, clinical and functional assessments are lacking and research is limited.
Objectives
To develop a valid and reliable observation tool (BOTH) to assess bimanual coordination of individuals post-stroke.
Design
A cross-sectional study.
Setting
Rehabilitation settings.
Participants
Occupational therapists (OTs) with stroke rehabilitation experience and individuals post-stroke.
Outcomes and measures
The development and content validation of BOTH included a literature review and a review of existing tools, and followed a 10-step process. The conceptual and operational definitions of bimanual coordination were defined, as well as the scoring criteria. Multiple rounds of feedback from expert OTs were then performed; the OTs reviewed BOTH using the 'Template for assessing content validity through expert judgement' questionnaire. BOTH was then administered to 51 participants post-stroke. Cronbach's alpha was used to verify the internal reliability of BOTH, and construct validity was assessed by correlating it with the bimanual subtests of the Purdue Pegboard Test.
Results
Expert validity was established in two rounds with 11 OTs. Cronbach's alpha was α = 0.923 for the asymmetrical items, 0.897 for the symmetrical items, and 0.949 for all eight items. The item-total correlations of BOTH were also strong and significant. The total score of BOTH was strongly and significantly correlated with the Purdue Both-hands placement (r = .787, p < .001) and Assembly (r = .730, p < .001) subtests.
Conclusions and relevance
BOTH is a new observation tool for assessing bimanual coordination post-stroke. Expert validity was established, and excellent internal reliability and construct validity were demonstrated. Further research is needed so that BOTH can be used for clinical and research purposes to address bimanual coordination post-stroke.
Dataset abstract
This dataset contains the results from 40 language and speech researchers who completed a survey. In the first part of the survey, respondents were asked to complete a demographic (e.g., age, gender, first language) and professional background questionnaire (e.g., current academic position, research interests). In addition, they were asked several open-ended questions about their familiarity with and understanding of the term 'ecological validity' (e.g., which words come to mind when you hear this term, how to measure the ecological validity of a study, how does ecological validity apply to your area of research). In the second part of the survey, respondents were presented with 24 short speech excerpts, representing 12 different stimulus types. They were asked to rate each speech excerpt on its degree of casualness (i.e., spontaneity) and naturalness, and how likely they are to encounter each excerpt in everyday listening situations.
Article abstract
This paper explores how researchers in the field of language and speech sciences understand and apply the concept of ecological validity. It also assesses the ecological validity of various stimulus materials, ranging from isolated word productions to sentences taken from authentic interviews. Forty researchers participated in a survey, which contained (i) a demographic and professional background questionnaire with open-ended questions about the definition, feasibility and desirability of ecological validity, and (ii) a speech rating task. In the rating task, respondents evaluated 24 speech excerpts, representing 12 types of stimulus materials, on their casualness, naturalness, and likelihood of occurrence in real-life contexts. The results showed that while most researchers acknowledge the importance of ecological validity, defining the necessary and sufficient criteria for evaluating or achieving it remains challenging.
Regarding stimulus types, unscripted sentences from interviews and Map Task dialogues were rated as the most casual and natural. In contrast, carefully read sentences and digitally modified stimuli were viewed as the least casual and natural, although individual differences in the ratings were noticeable. Similarly, ratings for the likelihood of occurrence in everyday listening situations were highest for various types of extemporaneous speech. The survey responses not only enhance our theoretical understanding of ecological validity, but also raise awareness about the implications of methodological choices, such as the selection of tasks and stimulus materials, for the ecological validity of a study.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a synthetic smart card dataset that can be used to test pattern detection methods for the extraction of temporal and spatial data. The dataset is tab separated and based on a stylized travel pattern description for the city of Utrecht in the Netherlands; it was developed and used in Chapter 6 of the PhD thesis of Paul Bouman.
This dataset contains the following files:
journeys.tsv : the actual data set of synthetic smart card data
utrecht.xml : the activity pattern definition that was used to randomly generate the synthetic smart card data
validate.ref : a file derived from the activity pattern definition that can be used for validation purposes. It specifies which activity types occur at each location in the smart card data set.
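As a quick-start sketch, the tab-separated file can be loaded with Python's standard csv module. The column layout shown in the test is an assumption for illustration only; inspect journeys.tsv (and validate.ref for the activity types per location) before relying on any field positions.

```python
import csv

def read_journeys(path):
    """Load the tab-separated synthetic smart card data as a list of rows.

    Note: the meaning of each column is an assumption here; check journeys.tsv
    and the accompanying utrecht.xml pattern definition for the actual schema.
    """
    with open(path, newline="") as fh:
        return list(csv.reader(fh, delimiter="\t"))
```
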
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
This repository hosts the Testing Roads for Autonomous VEhicLes (TRAVEL) dataset. TRAVEL is an extensive collection of virtual roads that have been used for testing lane assist/keeping systems (i.e., driving agents), together with data from their execution in a state-of-the-art, physically accurate driving simulator called BeamNG.tech. Virtual roads consist of sequences of road points interpolated using cubic splines.
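To illustrate that interpolation step, here is a minimal pure-Python sketch that densifies a road's point sequence with a Catmull-Rom-style cubic; this is an illustrative stand-in, not the exact spline implementation used by the tooling.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 and p2 using four control points."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def interpolate_road(road_points, samples_per_segment=10):
    """Densify a polyline of 2D road points into a smooth point sequence."""
    # Duplicate the endpoints so every segment has four control points.
    pts = [road_points[0], *road_points, road_points[-1]]
    dense = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_segment):
            dense.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2],
                                     s / samples_per_segment))
    dense.append(tuple(map(float, pts[-2])))  # close with the last road point
    return dense
```

The interpolated curve passes through every original road point, which mirrors how the dataset stores both road_points and their denser interpolated_points.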
Along with the data, this repository contains instructions on how to install the tooling necessary to generate new data (i.e., test cases) and analyze them in the context of regression testing. We focus on test selection and test prioritization, given their importance for developing high-quality software following DevOps paradigms.
This dataset builds on top of our previous work in this area, including work on
test generation (e.g., AsFault, DeepJanus, and DeepHyperion) and the SBST CPS tool competition (SBST2021);
test selection (SDC-Scissor and related tool);
test prioritization (automated test-case prioritization work for SDCs).
Dataset Overview
The TRAVEL dataset is available under the data folder and is organized as a set of experiment folders. Each of these folders is generated by running the test generator (see below) and contains the configuration used for generating the data (experiment_description.csv), various statistics on the generated tests (generation_stats.csv), and the faults found (oob_stats.csv). Additionally, the folders contain the raw test cases generated and executed during each experiment (test..json).
The following sections describe what each of those files contains.
Experiment Description
The experiment_description.csv contains the settings used to generate the data, including:
Time budget. The overall generation budget in hours. This budget includes both the time to generate and execute the tests as driving simulations.
The size of the map. The size of the (square) map, in meters, defines the boundaries inside which the virtual roads develop.
The test subject. The driving agent that implements the lane-keeping system under test. The TRAVEL dataset contains data generated testing the BeamNG.AI and the end-to-end Dave2 systems.
The test generator. The algorithm that generated the test cases. The TRAVEL dataset contains data obtained using various algorithms, ranging from naive and advanced random generators to complex evolutionary algorithms, for generating tests.
The speed limit. The maximum speed at which the driving agent under test can travel.
Out of Bound (OOB) tolerance. The test cases' oracle that defines the tolerable amount of the ego-car that can lie outside the lane boundaries. This parameter ranges between 0.0 and 1.0. In the former case, a test failure triggers as soon as any part of the ego-vehicle goes out of the lane boundary; in the latter case, a test failure triggers only if the entire body of the ego-car falls outside the lane.
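A minimal sketch of such an oracle follows; the function name and the strict inequality are assumptions for illustration, and the pipeline's actual implementation may differ in details.

```python
def oob_oracle(max_oob_fraction, oob_tolerance):
    """Decide a test outcome from the worst out-of-bound fraction observed.

    With tolerance 0.0 any part of the ego-car leaving the lane fails the test;
    with tolerance close to 1.0 only an (almost) fully departed car fails.
    """
    return "FAIL" if max_oob_fraction > oob_tolerance else "PASS"
```
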
Experiment Statistics
The generation_stats.csv contains statistics about the test generation, including:
Total number of generated tests. The number of tests generated during an experiment. This number is broken down into the number of valid tests and invalid tests. Valid tests contain virtual roads that do not self-intersect and contain turns that are not too sharp.
Test outcome. The test outcome contains the number of passed tests, failed tests, and tests in error. Passed and failed tests are defined by the OOB tolerance and an additional (implicit) oracle that checks whether the ego-car is moving or standing still. Tests that did not pass because of other errors (e.g., the simulator crashed) are reported in a separate category.
The TRAVEL dataset also contains statistics about the failed tests, including the overall number of failed tests (total oob) and its breakdown into OOB that happened while driving left or right. Further statistics about the diversity (i.e., sparseness) of the failures are also reported.
Test Cases and Executions
Each test..json contains information about a test case and, if the test case is valid, the data observed during its execution as a driving simulation.
The data about the test case definition include:
The road points. The list of points in a 2D space that identifies the center of the virtual road, and their interpolation using cubic splines (interpolated_points)
The test ID. The unique identifier of the test in the experiment.
Validity flag and explanation. A flag that indicates whether the test is valid or not, and a brief message describing why the test is not considered valid (e.g., the road contains sharp turns or the road self intersects)
The test data are organized according to the following JSON Schema and can be interpreted as RoadTest objects provided by the tests_generation.py module.
{
  "type": "object",
  "properties": {
    "id": { "type": "integer" },
    "is_valid": { "type": "boolean" },
    "validation_message": { "type": "string" },
    "road_points": {
      "type": "array",
      "items": { "$ref": "schemas/pair" }
    },
    "interpolated_points": {
      "type": "array",
      "items": { "$ref": "schemas/pair" }
    },
    "test_outcome": { "type": "string" },
    "description": { "type": "string" },
    "execution_data": {
      "type": "array",
      "items": { "$ref": "schemas/simulationdata" }
    }
  },
  "required": [ "id", "is_valid", "validation_message", "road_points", "interpolated_points" ]
}
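In Python, a record following this schema can be loaded and sanity-checked with the standard library alone. This is only a sketch; the official way to interpret these files is via the RoadTest objects in tests_generation.py.

```python
import json

# Fields the schema above marks as required.
REQUIRED = ["id", "is_valid", "validation_message", "road_points", "interpolated_points"]

def load_road_test(path):
    """Load one test JSON file and check that the required fields are present."""
    with open(path) as fh:
        test = json.load(fh)
    missing = [key for key in REQUIRED if key not in test]
    if missing:
        raise ValueError(f"not a valid RoadTest record, missing: {missing}")
    return test
```
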
Finally, the execution data contain a list of timestamped state information recorded by the driving simulation. State information is collected at constant frequency and includes absolute position, rotation, and velocity of the ego-car, its speed in Km/h, and control inputs from the driving agent (steering, throttle, and braking). Additionally, execution data contain OOB-related data, such as the lateral distance between the car and the lane center and the OOB percentage (i.e., how much the car is outside the lane).
The simulation data adhere to the following (simplified) JSON Schema and can be interpreted as Python objects using the simulation_data.py module.
{
  "$id": "schemas/simulationdata",
  "type": "object",
  "properties": {
    "timer": { "type": "number" },
    "pos": { "type": "array", "items": { "$ref": "schemas/triple" } },
    "vel": { "type": "array", "items": { "$ref": "schemas/triple" } },
    "vel_kmh": { "type": "number" },
    "steering": { "type": "number" },
    "brake": { "type": "number" },
    "throttle": { "type": "number" },
    "is_oob": { "type": "number" },
    "oob_percentage": { "type": "number" }
  },
  "required": [ "timer", "pos", "vel", "vel_kmh", "steering", "brake", "throttle", "is_oob", "oob_percentage" ]
}
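As an example of consuming these records (field names follow the schema above; the summary keys and function name are our own, and simulation_data.py remains the official interpreter):

```python
def summarize_execution(execution_data):
    """Boil a list of timestamped simulation states down to headline numbers."""
    speeds = [state["vel_kmh"] for state in execution_data]
    return {
        # Elapsed simulation time between the first and last recorded state.
        "duration_s": execution_data[-1]["timer"] - execution_data[0]["timer"],
        "mean_speed_kmh": sum(speeds) / len(speeds),
        # Worst lane departure observed during the whole run.
        "max_oob_percentage": max(state["oob_percentage"] for state in execution_data),
    }
```
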
Dataset Content
The TRAVEL dataset is a living initiative, so its content is subject to change. Currently, the dataset contains the data collected during the SBST CPS tool competition, as well as data collected in the context of our recent work on test selection (the SDC-Scissor work and tool) and test prioritization (automated test-case prioritization work for SDCs).
SBST CPS Tool Competition Data
The data collected during the SBST CPS tool competition are stored inside data/competition.tar.gz. The file contains the test cases generated by Deeper, Frenetic, AdaFrenetic, and Swat, the open-source test generators submitted to the competition and executed against BeamNG.AI with an aggression factor of 0.7 (i.e., a conservative driver).
Name      Map Size (m x m)   Max Speed (Km/h)   Budget (h)       OOB Tolerance (%)   Test Subject
DEFAULT   200 × 200          120                5 (real time)    0.95                BeamNG.AI - 0.7
SBST      200 × 200          70                 2 (real time)    0.5                 BeamNG.AI - 0.7
Specifically, the TRAVEL dataset contains 8 repetitions of each of the above configurations for each test generator, totaling 64 experiments.
SDC Scissor
With SDC-Scissor we collected data based on the Frenetic test generator. The data is stored inside data/sdc-scissor.tar.gz. The following table summarizes the parameters used.
Name          Map Size (m x m)   Max Speed (Km/h)   Budget (h)        OOB Tolerance (%)   Test Subject
SDC-SCISSOR   200 × 200          120                16 (real time)    0.5                 BeamNG.AI - 1.5
The dataset contains 9 experiments with the above configuration. To generate your own data with SDC-Scissor, follow the instructions in its repository.
Dataset Statistics
Here is an overview of the TRAVEL dataset: generated tests, executed tests, and faults found by all the test generators, grouped by experiment configuration. Some 25,845 test cases were generated by running 4 test generators 8 times in 2 configurations using the SBST CPS Tool Competition code pipeline (SBST in the table). We ran the test generators for 5 hours, allowing the ego-car a generous speed limit (120 Km/h) and defining a high OOB tolerance (i.e., 0.95); we also ran the test generators with a smaller generation budget (i.e., 2 hours) and speed limit (i.e., 70 Km/h) while setting the OOB tolerance to a lower value (i.e., 0.85). We also collected some 5,971 additional tests with SDC-Scissor (SDC-Scissor in the table) by running it 9 times for 16 hours, using Frenetic as the test generator and defining a more realistic OOB tolerance (i.e., 0.50).
Generating new Data
Generating new data, i.e., test cases, can be done using the SBST CPS Tool Competition pipeline and the driving simulator BeamNG.tech.
Extensive instructions on how to install both tools are reported in the SBST CPS Tool Competition pipeline documentation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The residence permits data collection defines a residence permit as any authorisation issued by the authorities of a Member State allowing a third-country national (non-EU citizen) to stay legally on its territory. These statistics also cover some specific cases in which third-country nationals have the right to move to and stay in other EU Member States.
Data are based on administrative sources [1], provided mainly by the Ministries of the Interior or related immigration agencies. Data are generally disseminated in July of the year following the reference year, subject to data availability and data quality.
Residence permits statistics are based on Council Regulation (EC) No 862 of 11 July 2007 (the Migration Statistics Regulation, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32007R0862), as amended by Regulation 2020/851, and cover a number of topics.
The definitions used for residence permits and other concepts (e.g. first permit) are presented in section 3.4 (Statistical concepts and definitions). The detailed data collection methodology is presented in Annex 9 of this metadata file.
Temporary protection status is considered of a different administrative nature than the residence permits reported in the RESPER data collection. Therefore, persons benefitting from temporary protection are not included in any of the residence permits statistics; these persons are the subject of a separate data collection on Temporary Protection (TP).
LEGAL FRAMEWORK
Residence data contain statistical information based on Article 6 of Council Regulation (EC) No 862 of 11 July 2007 (http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32007R0862). This legal framework refers to the initial residence permits data collection, with 2008 as the first reference period (e.g. first residence permits; change of immigration status or reason to stay; all residence permits valid at the end of the year; and long-term residence permits valid at the end of the year), and it also provides a general framework for newer data collections based on specific European legal acts (e.g. statistics on EU Blue Cards and statistics on single permits) or provided on a voluntary basis (e.g. residence permits issued for family reunification with beneficiaries of protection status).
Regulation 2020/851 amending Council Regulation (EC) No 862 of 11 July 2007 was recently implemented. The amendment introduced several changes to the statistics on asylum and managed migration. Some data collections became mandatory starting with the 2021 reference period, while new statistics are subject to pilot studies further assessing the feasibility of collecting them.
RECENT DEVELOPMENTS
Starting with the 2021 reference period, there were several improvements in the data collection, including methodological aspects. These changes were introduced through the implementation of Regulation 2020/851 amending Council Regulation (EC) No 862 of 11 July 2007. More details are available in Annex 9.
Starting from 2025, the residence permits and EU directives data collection includes six metadata files in total, and countries are required to submit six distinct files. For countries that have not yet provided the updated six files, the previous metadata format, included in the annex of this metadata file (Annex 10), remains available as a reference.
INDICATORS
The indicators presented in the table 'Long-term residents among all non-EU citizens holding residence permits by citizenship on 31 December (%)' are produced within the framework of the pilot study related to the integration of migrants in the Member States, following the Zaragoza Declaration.
The Zaragoza Declaration, adopted in April 2010 by EU Ministers responsible for immigrant integration issues, and approved at the Justice and Home Affairs Council on 3-4 June 2010, called upon the Commission to undertake a pilot study to examine proposals for common integration indicators and to report on the availability and quality of the data from agreed harmonised sources necessary for the calculation of these indicators. In June 2010 the ministers agreed "to promote the launching of a pilot project with a view to the evaluation of integration policies, including examining the indicators and analysing the significance of the defined indicators taking into account the national contexts, the background of diverse migrant populations and different migration and integration policies of the Member States, and reporting on the availability and quality of the data from agreed harmonised sources necessary for the calculation of these indicators".
These indicators are produced on the basis of residence permit statistics collected by Eurostat under Article 6 of the Migration Statistics Regulation 862/2007. The denominator is the stock of all valid permits to stay at the end of each reporting year; the numerator is the stock of long-term residents. Two types of long-term residents are distinguished in accordance with the residence permit statistics: EU long-term resident status (as regulated by Council Directive 2003/109/EC) and national long-term resident status (as regulated by national legislation in the Member States).
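The indicator itself is a simple numerator-over-denominator ratio; as a sketch (the function name and the sample figures are illustrative, not Eurostat values):

```python
def long_term_resident_share(long_term_stock, all_valid_permits_stock):
    """Long-term residents as a percentage of all valid permits at year end."""
    return 100.0 * long_term_stock / all_valid_permits_stock
```
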
DATA CONSISTENCY
Data providers should use the methodological specifications provided by Eurostat, and some of the collected tables should be cross-consistent according to this methodology. However, consistency issues between tables exist due to technical limitations (e.g. different data sources), differences in the methodology applied to each table (see the quality information below or the national metadata files), or differences in the point in time at which each table is produced.
[1] There are a few exceptions, referring to situations in which the administrative registers cannot provide the required information and estimations are made. For example, the statistics for the United Kingdom (2008-2019) use different data sources from those used in EU Member States and EFTA countries. For that reason, the statistics on residence permits published by Eurostat for the UK may not be fully comparable with the statistics reported by other countries. Statistics for the United Kingdom are not based on records of residence permits issued (as the United Kingdom does not operate a system of residence permits), but instead relate to the numbers of arriving non-EU citizens permitted to enter the country under selected immigration categories. According to the United Kingdom authorities, data are estimated from a combination of information due to be published in the Home Office Statistical Bulletin 'Control of Immigration: Statistics, United Kingdom' and unpublished management information. The 'Other reasons' category includes: diplomats and consular officers treated as exempt from control; retired persons of independent means; all other passengers given limited leave to enter who are not included in any other category; and non-asylum discretionary permissions. Another example is the data on the stock of all valid residence permits for Denmark; see Annex 8 (Data quality of valid residence permits in Denmark).
License: https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Yearly citation counts for the publication titled "Generalized Possibilistic Fuzzy C-Means with novel cluster validity indices for clustering noisy data".
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Research on leadership and personal competencies exhibits limitations in terms of construct definition, behavior specification, and valid theory-based measurement strategies. An explanatory design with latent variables and the statistical software SAS 9.4 were used for the validation and adaptation to Spanish of the Leadership Virtues Questionnaire, applied to work and organizational psychologists and people who exercise leadership functions in Chile. The levels of agreement between judges for the adaptation to Spanish and the first-order confirmatory factor analysis with four dimensions show insufficient statistical indices for the absolute, comparative and parsimonious fit. However, a second-order confirmatory factor analysis with two dimensions presents a satisfactory fit for the item, model, and parameter matrices. The measurement of virtuous leadership would provide relevant inputs for further evaluation and training based on ethical competencies aimed at improving management, which would, in turn, allow for its treatment as an independent variable to generate an ethical organizational culture.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset consists of simulated and observed salinity/temperature data that were used in the manuscript "A method for assessment of the general circulation model quality using k-means clustering algorithm" submitted to Geoscientific Model Development.
The model simulation dataset is from a long-term 3D circulation model simulation (Maljutenko and Raudsepp 2014, 2019). The observations are from the "Baltic Sea - Eutrophication and Acidity aggregated datasets 1902/2017 v2018", SMHI (2018).
The files are in simple comma separated table format without headers.
The Dout-t_z_lat_lon_Smod_Sobs_Tmod_Tobs.csv file contains columns with the following variables [units]:
Time [Matlab datenum units], vertical coordinate [m], latitude [°N], longitude [°E], model salinity [g/kg], observed salinity [g/kg], model temperature [°C], observed temperature [°C].
The Dout-t_z_lat_lon_dS_dT_K1_K2_K3_K4_K5_K6_K7_K8_K9.csv file contains columns with the following variables [units]:
The first four columns are the same as in the previous file, followed by the salinity error [g/kg] and the temperature error [°C]; the remaining columns are integers showing the cluster to which each error pair is assigned.
do_clust_valid_DataFig.m is a Matlab script that reads the two csv files (and optionally the mask file Model_mask.mat), performs the clustering analysis, and creates the plots used in the manuscript. The script is organized into %% blocks that can be executed separately (by default, Ctrl+Enter).
The k-means function from the Matlab Statistics and Machine Learning Toolbox is used.
Additional software used in the do_clust_valid_DataFig.m:
Author's auxiliary formatting scripts (script/):
datetick_cst.m
do_fitfig.m
do_skipticks.m
do_skipticks_y.m
Colormaps are generated using cbrewer.m (Charles, 2021).
Moving average smoothing is performed using nanmoving_average.m (Aguilera, 2021).
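The clustering step uses Matlab's k-means; for readers without the toolbox, a minimal pure-Python equivalent over (salinity error, temperature error) pairs might look like the following. This is a sketch of plain Lloyd's k-means, not the authors' code.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on 2-D error pairs (dS, dT)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)

    def nearest(p):
        # Index of the centroid closest to point p (squared Euclidean distance).
        return min(range(k),
                   key=lambda i: (p[0] - centroids[i][0]) ** 2
                               + (p[1] - centroids[i][1]) ** 2)

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        # Recompute each centroid as the mean of its cluster (keep it if empty).
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, [nearest(p) for p in points]
```
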
Objective
To examine the validity of the Recent Physical Activity Questionnaire (RPAQ), which assesses physical activity (PA) in 4 domains (leisure, work, commuting, home) during the past month.
Methods
580 men and 1343 women from 10 European countries attended 2 visits at which PA energy expenditure (PAEE), time at moderate-to-vigorous PA (MVPA) and sedentary time were measured using individually calibrated combined heart-rate and movement sensing. At the second visit, RPAQ was administered electronically. Validity was assessed using agreement analysis.
Results
RPAQ significantly underestimated PAEE in women [median (IQR): 34.1 (22.1, 52.2) vs. 40.6 (32.4, 50.9) kJ/kg/day, 95% LoA: −44.4, 63.4 kJ/kg/day] and in men [43.7 (29.0, 69.0) vs. 45.5 (34.1, 57.6) kJ/kg/day, 95% LoA: −47.2, 101.3 kJ/kg/day]. Using an individualised definition of 1 MET, RPAQ significantly underestimated MVPA in women [median (IQR): 62.1 (29.4, 124.3) vs. 73.6 (47.8, 107.2) min/day, 95% LoA: −130.5, 305.3 min/day] and men [82.7 (38.8, 185.6) vs. 83.3 (55.1, 125.0) min/day, 95% LoA: −136.4, 400.1 min/day]. Correlations (95% CI) between subjective and objective estimates were statistically significant [PAEE: women, rho = 0.20 (0.15-0.26); men, rho = 0.37 (0.30-0.44); MVPA: women, rho = 0.18 (0.13-0.23); men, rho = 0.31 (0.24-0.39)]. When using a non-individualised definition of 1 MET (3.5 mlO2/kg/min), MVPA was substantially overestimated (∼30 min/day). Revisiting occupational intensity assumptions in questionnaire estimation algorithms with occupational group-level empirical distributions reduced the median PAEE bias in manual (25.1 kJ/kg/day vs. −9.0 kJ/kg/day, p<0.001) and heavy manual workers (64.1 vs. −4.6 kJ/kg/day, p<0.001) in an independent hold-out sample.
Conclusion
The relative validity of RPAQ-derived PAEE and MVPA is comparable to previous studies, but the underestimation of PAEE is smaller. Electronic RPAQ may be used in large-scale epidemiological studies, including surveys, providing information on all domains of PA.
The data in this data release are from an effort focused on understanding social vulnerability to water insecurity, resiliency demonstrated by institutions, and conflict or crisis around water resource management. This data release focuses on definitions and metrics of resilience in water management institutions. Water resource managers, at various scales, are tasked with making complex and time-sensitive decisions in the face of uncertainty, competing objectives, and difficult tradeoffs. To do this, they must incorporate data, tacit knowledge, cultural and organizational norms, and individual or institutional values in a way that maintains consistent and predictable operations under normal circumstances, while simultaneously demonstrating the flexibility to respond to disturbances or opportunities that are both expected and unexpected (Leveson et al. 2006; Wolfe 2009). These capabilities are collectively referred to as system resilience. A lack of resilience in water institutions can have cascading effects ecologically, socially, and politically, making this system characteristic a common, albeit loosely defined, objective among water managers (Rodina 2019; Lawson et al. 2020). However, in order to gauge when resilience is or is not being achieved, meaningful metrics are required. A nonsystematic, scoping review was conducted to explore themes in the literature related to resilience, water management resilience, institutional resilience, and water management decision-making. A resilience engineering framework known as the "Four Cornerstones of Resilience" emerged from this search as a way of thinking about the capacities required for system resilience (Hollnagel 2011). These capacities are: the ability to anticipate, the ability to monitor, the ability to learn, and the ability to respond.
These metrics have been commonly applied to assess resilience across different industries such as transportation, aviation, and health care (for example, see Hollnagel et al. 2006; Lee et al. 2013; Lay et al. 2015). To evaluate their validity in the water resource management sector and look for other potential metrics applicable to these institutions, we surveyed and interviewed water and natural resource managers in the Delaware River Basin and the Upper Colorado River Basin. The survey and interviews were conducted under the approved OMB Information Collection Request #1028-0131 (expiration 9/30/2026), in compliance with the Paperwork Reduction Act of 1995 (44 U.S.C. 3501). Our participants were sampled purposively across multiple organizational categories (previously defined by Restrepo-Osorio et al. 2022) in order to reach individuals who possessed the necessary professional expertise to answer our questions and offer meaningful insight about resilience and decision-making in water management institutions (Palinkas et al. 2015). These data reflect the personal, career-spanning observations, opinions, and experiences of our participants at the time of collection and are therefore not necessarily generalizable or replicable. Instead, these data provide context and support to the resilience metrics identified from our interviews with water resource managers in the Upper Colorado and Delaware River Basins and lay the groundwork for future validation of these metrics in other locations. In making these data available, our expectation is that future research will leverage this work to ask new questions about how water management institutions can achieve and maintain resilience in a changing world. This data release contains five (5) related datafiles and their associated metadata. 
Identifying information has been removed from all data files to protect confidentiality in accordance with the Privacy Act, our agreement with participants, and principles of ethical qualitative research. SurveyResponses.csv contains data from an online survey describing some of the decisions made within water resource management institutions across various scales of governance (e.g., federal, state, local, private). Codebook.csv contains the codes and their definitions that were used by researchers to identify metrics of resilience. Codes were identified both deductively (a priori) and inductively and are defined based either on their accepted definition in the literature (a priori) or the definition agreed upon by the coding team (for emergent themes). ResilienceMetrics.csv contains interview excerpts that were coded using NVivo (Lumivero) qualitative analysis software. These excerpts describe participant experiences and observations that relate to the four cornerstones of resilience (the ability to anticipate, monitor, learn, and respond), and four additional metrics identified inductively through the interview process. ResilienceDefinitions.csv contains interview excerpts that were coded using NVivo (Lumivero) qualitative analysis software describing how participants define resilience for their immediate work unit, their organization, and/or the larger socio-hydrologic system. Interview_Protocol.pdf contains interview questions and instructions, including a figure used to explain the concept of socio-technical systems to participants in order to prompt broader thinking throughout our conversations. This instrument was also provided to each participant prior to our interview to allow them to prepare their responses. ResilienceMetricsSummary.csv summarizes coding frequency by Basin and governance sector.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The set contains information about advertising media in the Bolekhiv City Council area: the location of each advertising medium, its type and size, the name and phone number of the outdoor-advertising distributor, the date of issuance of the permit and its term of validity, and, where the advertising medium is located on communal property, the number and date of the contract.
This dataset contains data on all Real Property parcels that have sold since 2013 in Allegheny County, PA. Before doing any market analysis on property sales, check the sales validation codes. Many property "sales" are not considered a valid representation of the true market value of the property. For example, when multiple lots are together on one deed with one price they are generally coded as invalid ("H") because the sale price for each parcel ID number indicates the total price paid for a group of parcels, not just for one parcel. See the Sales Validation Codes Dictionary for a complete explanation of valid and invalid sale codes. Sales Transactions Disclaimer: Sales information is provided from the Allegheny County Department of Administrative Services, Real Estate Division. Content and validation codes are subject to change. Please review the Data Dictionary for details on included fields before each use. Property owners are not required by law to record a deed at the time of sale. Consequently, the assessment system may not contain a complete sales history for every property and every sale. You may do a deed search at http://www.alleghenycounty.us/re/index.aspx directly for the most updated information. Note: Ordinance 3478-07 prohibits public access to search assessment records by owner name. It was signed by the Chief Executive in 2007.
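The validation-code check described above can be applied as a pre-filter before any market analysis. A minimal sketch, assuming rows parsed from the sales CSV as dicts; only "H" (multi-parcel deed) is taken from the text, so the full set of invalid codes and the exact field name (`SALECODE` here is an assumption) should come from the Sales Validation Codes Dictionary and Data Dictionary:

```python
# Codes treated as invalid sales; extend from the Sales Validation Codes
# Dictionary before real use. "H" = multiple parcels on one deed/price.
INVALID_CODES = {"H"}

def valid_sales(rows, code_field="SALECODE"):
    """Keep only rows whose validation code is not known-invalid."""
    return [r for r in rows if r.get(code_field) not in INVALID_CODES]

sample = [
    {"PARID": "0001", "SALECODE": "0", "PRICE": 150000},  # ordinary sale
    {"PARID": "0002", "SALECODE": "H", "PRICE": 300000},  # multi-parcel deed
]
```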
https://spdx.org/licenses/CC0-1.0.html
Background: Critical care units (CCUs), with wide use of various monitoring devices, generate massive data. To utilize the valuable information from these devices, data are collected and stored using systems like the Clinical Information System (CIS), Laboratory Information Management System (LIMS), etc. These systems are proprietary in nature, allow limited access to their databases, and have vendor-specific clinical implementations. In this study we focus on developing an open source web-based meta-data repository for the CCU, representing a patient's stay with relevant details.
Methods: After developing the web-based open source repository, we analyzed four months of prospective data from two sites for data quality dimensions (completeness, timeliness, validity, accuracy, and consistency), morbidity, and clinical outcomes. We used a regression model to highlight the significance of practice variations linked with various quality indicators. Results: A data dictionary (DD) with 1447 fields (90.39% categorical and 9.6% text fields) is presented to cover the clinical workflow of the NICU. The overall quality of 1795 patient days of data with respect to standard quality dimensions is 87%. The data exhibit 82% completeness, 97% accuracy, 91% timeliness, and 94% validity in terms of representing CCU processes. The data score only 67% in terms of consistency. Furthermore, quality indicators and practice variations are strongly correlated (p-value < 0.05).
Conclusion: This study documents a DD for standardized data collection in the CCU. It provides robust data and insights for audit purposes, and gives CCUs pathways to target practice changes that lead to specific quality improvements.
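The overall 87% quality figure aggregates the five dimension scores reported above. A minimal sketch, assuming a simple unweighted mean; the study's exact aggregation scheme is not stated in this summary, so treat the function as illustrative:

```python
# Dimension scores as reported in the Results above.
DIMENSIONS = {"completeness": 0.82, "accuracy": 0.97, "timeliness": 0.91,
              "validity": 0.94, "consistency": 0.67}

def overall_quality(dims):
    """Unweighted mean of the per-dimension quality scores."""
    return sum(dims.values()) / len(dims)
```

An unweighted mean gives about 0.862, close to the reported 87%; the small gap suggests the study may have rounded or weighted the dimensions slightly differently.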
Abstract Introduction: The concept of quality of life is subjective and its definition variable, depending on the individual's perception of their state of health. Quality of life questionnaires are instruments designed to measure quality of life, but most are developed in a language other than Portuguese. Questionnaires can identify the most important symptoms, focus the consultation, and assist in defining the goals of treatment. Some have been validated for the Portuguese language, but none in children. Objective: To translate, cross-culturally adapt, and validate the Sinus and Nasal Quality of Life Survey (SN-5) into Portuguese. Methods: Prospective study of children aged 2-12 years with sinonasal symptoms of over 30 days. The study comprised two stages: (I) translation and cross-cultural adaptation of the SN-5 into Portuguese (SN-5p); and (II) validation of the SN-5p. Statistical analysis was performed to assess internal consistency, test-retest reliability, and sensitivity, as well as construct and discriminant validity and standardization. Results: The SN-5 was translated and adapted into Portuguese (SN-5p) and the author of the original version approved the process. Validation was carried out by administration of the SN-5p to 51 pediatric patients with sinonasal complaints (mean age, 5.8 ± 2.5 years; range, 2-12 years). The questionnaire exhibited adequate construct validity (0.62, p < 0.01), internal consistency (Cronbach's alpha = 0.73), and discriminant validity (p < 0.01), as well as good test-retest reproducibility (Goodman-Kruskal gamma = 0.957, p < 0.001), good correlation with a visual analog scale (r = 0.62, p < 0.01), and sensitivity to change. Conclusion: This study reports the successful translation and cross-cultural adaptation of the SN-5 instrument into Brazilian Portuguese.
The translated version exhibited adequate psychometric properties for assessment of disease-specific quality of life in pediatric patients with sinonasal complaints.
According to our latest research, the global Lane Geometry Validation Services market size reached USD 1.67 billion in 2024, reflecting a robust growth trajectory driven by the increasing adoption of advanced driver assistance systems (ADAS) and autonomous vehicle technologies. The market is projected to expand at a CAGR of 13.8% during the forecast period, with the total value expected to reach USD 4.86 billion by 2033. This impressive growth is attributed to the rising demand for precise and reliable lane geometry data, which is pivotal for road safety, navigation, and the seamless operation of next-generation mobility solutions. Technological advancements and regulatory mandates for road safety are further propelling the market forward.
One of the primary growth factors for the Lane Geometry Validation Services market is the rapid proliferation of autonomous vehicles and the integration of sophisticated ADAS in modern automobiles. These technologies require highly accurate lane geometry data to ensure vehicle positioning, lane keeping, and overall road safety. Automotive OEMs are increasingly collaborating with validation service providers to enhance the accuracy and reliability of their navigation and perception systems. The surge in R&D investments by leading automotive players, coupled with stringent safety regulations in major economies, is accelerating the deployment of lane geometry validation solutions on a global scale.
Another significant driver is the growing emphasis on intelligent traffic management and smart infrastructure development. Governments and municipal authorities are investing heavily in digital road infrastructure, which necessitates the continuous validation and updating of lane geometry data. Accurate lane information is crucial for traffic flow optimization, congestion management, and the implementation of smart city initiatives. The integration of real-time data analytics and cloud-based validation platforms is enabling stakeholders to access, process, and utilize lane geometry data more efficiently, thereby fostering market expansion.
The evolution of mapping and navigation technologies is also fueling market growth. With the advent of high-definition (HD) mapping, the demand for precise lane-level data has surged, particularly among mapping and navigation providers. These entities rely on lane geometry validation services to enhance the accuracy and reliability of their digital maps, which are essential for both consumer navigation applications and commercial fleet management. The convergence of artificial intelligence, machine learning, and geospatial analytics is further enhancing the capabilities of validation service providers, allowing them to deliver more comprehensive and actionable insights to end-users.
Regionally, North America dominates the Lane Geometry Validation Services market, accounting for the largest revenue share in 2024, followed closely by Europe and Asia Pacific. The strong presence of leading automotive OEMs, advanced research institutions, and a supportive regulatory environment have positioned North America as a key hub for innovation and adoption. However, Asia Pacific is expected to witness the fastest growth during the forecast period, driven by rapid urbanization, increasing vehicle production, and government initiatives aimed at modernizing transportation infrastructure. Europe continues to be a significant market, buoyed by stringent safety standards and the widespread adoption of autonomous driving technologies.
The Service Type segment of the Lane Geometry Validation Services market encompasses Data Collection, Data Processing, Data Validation, Reporting & Analytics, and Others. Data Collection services form the foundational layer, involving the acquisition of raw lane geometry data from various sources such as LiDAR, cameras, GPS, and other sensor technologies. As vehicles and infrastruct
This dataset is a summary of the CTD profiles measured with the RV Belgica. It provides general meta-information such as the campaign code, the date of measurement, and the geographical information. An important field is the profile quality flag, which describes the validity of the data. A quality flag of 2 means the data are generally good, although some outliers may still be present. A quality flag of 4 means the data should not be trusted. One-meter binned data can be downloaded on the SeaDataNet CDI portal (enter the cruise_id in the search bar) ONLY for the good quality profiles. Full acquisition-frequency datasets are available on request from BMDC.
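The quality-flag convention above translates directly into a pre-filter when working with the summary table. A minimal sketch over hypothetical profile records; the record field names here are illustrative, not the dataset's actual column names:

```python
# Quality-flag convention from the dataset description:
# 2 = generally good (possible outliers), 4 = do not trust.
GOOD_FLAG, BAD_FLAG = 2, 4

def good_profiles(profiles):
    """Keep only profiles flagged as generally good (flag == 2)."""
    return [p for p in profiles if p.get("quality_flag") == GOOD_FLAG]

profiles = [
    {"cruise_id": "BG2018-05", "quality_flag": 2},
    {"cruise_id": "BG2018-06", "quality_flag": 4},  # untrusted profile
]
```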
The Transmute extension for CKAN provides a data pipeline for validating and converting data using schemas. It allows users to define schemas that specify validation rules and data transformations, thus ensuring data quality and consistency. The extension enables transformations using an action API with the ability to transform data using defined schemas. Key Features: * Schema-Driven Validation: Uses schemas to define data types, required fields, and validation rules providing the opportunity to validate data against these rules. * Data Transformation: Supports data transformation based on schemas. This includes modifying fields, adding new fields, and removing unnecessary data to fit the desired output format. * Inline Schema Definition: Allows defining schemas directly within the CKAN API calls. This provides a convenient way to apply transformations on-the-fly. * Custom Validators: Supports creation of custom validators, enabling tailored data validation logic. The readme specifically identifies "tsm_concat" as an example of a custom validator. * Field Weighting: Enables control over the order in which fields are processed during validation and transformation, by specifying weight values. * Post-Processing: Provides the option to define steps to execute after processing fields, such as removing fields that are no longer needed after transformation. Technical Integration: The Transmute extension integrates with CKAN by adding a new action API called tsm_transmute. This API allows users to submit data and a schema, and the extension applies the schema to validate and transform the data. The extension is enabled by adding transmute to the list of enabled plugins in the CKAN configuration file. Benefits & Impact: Implementing the Transmute extension enhances CKAN's data quality control and transformation capabilities.
It provides a flexible and configurable way to ensure data consistency and conformity to defined standards, thus improving the overall reliability and usability of datasets managed within CKAN. Furthermore, it automates the data transformation process using defined schemas, which can reduce the manual workload of data administrators.
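The pieces described above (inline schema, tsm_concat, field weighting) come together in a single call to the tsm_transmute action. A minimal sketch of building such a request; aside from the action name and the tsm_concat validator, which the extension's readme names, the schema layout, field names, and weight semantics shown here are assumptions, not the extension's exact schema language:

```python
import json

def build_transmute_payload(data):
    """Assemble a hypothetical data+schema body for the tsm_transmute action."""
    schema = {
        "root": "Dataset",
        "types": {
            "Dataset": {
                "fields": {
                    "title": {},  # passed through / validated as-is
                    "name": {
                        # illustrative use of the custom tsm_concat validator
                        "validators": [["tsm_concat", "prefix-", "$title"]],
                        "weight": 10,  # heavier weight: processed later
                    },
                },
            },
        },
    }
    return {"data": data, "schema": schema}

payload = build_transmute_payload({"title": "my-dataset"})
body = json.dumps(payload)  # POST to <ckan_url>/api/action/tsm_transmute
```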
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This page details the data sources and methodology used in the 2023 report by Avery Fairburn, “Examining Jail Data in Hamilton County, TN”. Interactive data visualizations for the report can be found here. A data dictionary is provided in the file section.
This dataset was collected using a web scraper tool created by Wren Tefft, and the records included in the sample are from August 2nd, 2022 through January 31st, 2023. This data is still being collected daily. The scraper pulls the name, home address, age, charges, and arresting agency for each person booked into the jail. In the file available for download on this page, I have removed the names and street addresses of arrested individuals to maintain their privacy, but have included the city, state, and ZIP code.
The addresses in these records are provided by arrestees upon being booked, and recorded by jail staff. The raw data contained a considerable number of errors, so I tested the validity of addresses by using Google’s Address Validation API, and categorized them based on the results:
Address Status
- Valid w/ No Errors: Address was able to be identified by the Google API and confirmed as a known address of record with USPS, and included no errors. Non-address values (such as “Homeless”) that had no errors are included in this category as well.
- Valid w/ Errors: Address was confirmed but included errors (such as a misspelled street name or city name, or an incorrect ZIP code). Non-address values (such as “Homeless”) that contained errors are included in this category as well.
- Invalid: Address was not able to be confirmed, due to either too many errors, missing address components, or non-existent address components (such as a street number that did not correspond with any real location).
- No Apt. Number: Address was confirmed, but is invalid due to a missing unit number. These addresses are included in analysis, as the street address is correct, but otherwise considered invalid as they are undeliverable.
- None: No address or other value was listed by jail staff.
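The status categories above map fairly directly onto fields of an Address Validation API response. A minimal sketch of that mapping; the verdict keys used here (addressComplete, hasUnconfirmedComponents, hasInferredComponents, hasReplacedComponents) follow the API's documented response shape, but the categorization logic is a reconstruction for illustration, not the report's actual code, and non-address values like “Homeless” are assumed to be handled upstream:

```python
def categorize(raw_value, verdict, missing_components):
    """raw_value: string recorded by jail staff; verdict and
    missing_components: fields from one API response."""
    if not raw_value or not raw_value.strip():
        return "None"
    if "subpremise" in missing_components:
        return "No Apt. Number"  # street confirmed, unit number missing
    if verdict.get("hasUnconfirmedComponents") or not verdict.get("addressComplete"):
        return "Invalid"
    if verdict.get("hasInferredComponents") or verdict.get("hasReplacedComponents"):
        return "Valid w/ Errors"  # API had to correct or infer something
    return "Valid w/ No Errors"
```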
I also categorized addresses by type, to account for the fact that a large number of arrestees were listed as homeless, living at a hotel or homeless shelter, or living at a commercial address. Categories are detailed below:
Address Type
- Single-Unit Residential: Valid residential addresses that do not contain a unit number.
- Multi-Unit Residential: Residential addresses that contain (or should contain) a unit number. Addresses that were missing a unit number are included in this category.
- Commercial: Valid non-residential addresses not listed in another category.
- Hotel: Valid addresses of hotels.
- Community Kitchen: The address of a homeless service provider in Chattanooga, listed as the home address for a significant portion of arrestees.
- Homeless: Arrestees that had “Homeless”, “Transient”, or variations listed instead of an address.
- P.O. Box: P.O. boxes that were listed as home addresses.
- Invalid: Addresses that were not able to be confirmed, due to either too many errors, missing address components, or non-existent address components (such as a street number that did not correspond with any real location).
- None: No address or other value was listed by jail staff.
To choose the primary charge in arrests that included multiple different charges, I used this method: Charges were ranked first by classification, from highest (Class A felony) to lowest (Class C misdemeanor). Out of a group of multiple charges, the primary charge would be the one with the highest classification. If there were multiple charges with the same classification (e.g. two class A misdemeanor charges), then the one listed first in the booking record was identified as the primary charge.
I made exceptions to this method for Violation of Probation, Failure to Appear charges, and Resisting or Evading Arrest charges, which I did not list as the primary charge except when there were no other charges. This was to account for the fact that Failure to Appear charges are typically issued as warrants, and the fact that being charged with another crime while on probation typically constitutes a probation violation.
There was also a group of charges that I did not list as primary unless they were the sole charge, due to the fact that their classification or definition is dependent on other charges. These charges were Possession of Firearm During a Felony, Contributing to the Deli...
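The primary-charge selection rules above (rank by classification, break ties by booking order, defer certain charges unless they stand alone) can be sketched as a small function. The rank table and charge names below are illustrative assumptions; the report's actual classification values and full deferred-charge list should be taken from its data dictionary:

```python
# Lower rank value = more serious classification.
RANK = {"Class A Felony": 0, "Class B Felony": 1, "Class C Felony": 2,
        "Class D Felony": 3, "Class E Felony": 4,
        "Class A Misdemeanor": 5, "Class B Misdemeanor": 6,
        "Class C Misdemeanor": 7}

# Charges deferred unless no other charge exists: probation/appearance/
# arrest-related charges plus charges whose definition depends on another
# charge (partial, illustrative list).
DEFERRED = {"Violation of Probation", "Failure to Appear",
            "Resisting Arrest", "Evading Arrest",
            "Possession of Firearm During a Felony"}

def primary_charge(charges):
    """charges: list of (name, classification) tuples in booking order."""
    pool = [c for c in charges if c[0] not in DEFERRED] or charges
    # min() is stable, so ties on classification go to the first-listed charge
    return min(pool, key=lambda c: RANK[c[1]])[0]
```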
Clinical differences between discovery and validation studies.
* Excludes Yoshihara data; p-value = .001597 when including Yoshihara data.
+ Definition of optimal/suboptimal not clearly defined in Yoshihara data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: In this article we discuss the definition of obstetric violence (OV) with regard to certain relationships and practices in the medical care of pregnancy and childbirth. We approach violence in the specific case of OV through a socio-anthropological model of analysis, and through a comparison between objective definitions (juridical, political, academic) and the subjective ones produced by civil society associations. We assume that what underlies the discourses on OV is a dispute over the legitimacy of its definition, and that, in this process, objective naming and the subjective meanings attributed to certain obstetric practices as violent are intertwined. We argue that the change in social sensitivities regarding many forms of violence (which has resulted in changes to legitimate discourses) explains the current discussion over the definition of OV.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Importance: To efficiently perform bimanual daily tasks, bimanual coordination is needed. Bimanual coordination is the interaction between an individual’s hands; it may be impaired post-stroke, yet clinical and functional assessments are lacking and research is limited.
Objectives: To develop a valid and reliable observation tool to assess bimanual coordination of individuals post-stroke.
Design: A cross-sectional study.
Setting: Rehabilitation settings.
Participants: Occupational therapists (OTs) with stroke rehabilitation experience and individuals post-stroke.
Outcomes and measures: The development and content validation of BOTH included a literature review and a review of existing tools, and followed a 10-step process. The conceptual and operational definitions of bimanual coordination were defined, as well as scoring criteria. Multiple rounds of feedback from expert OTs were then performed; OTs reviewed BOTH using the ‘Template for assessing content validity through expert judgement’ questionnaire. BOTH was then administered to 51 participants post-stroke. Cronbach’s alpha was used to verify the internal reliability of BOTH, and construct validity was assessed by correlating it with the bimanual subtests of the Purdue Pegboard Test.
Results: Expert validity was established in two rounds with 11 OTs. Cronbach’s alpha was 0.923 for the asymmetrical items, 0.897 for the symmetrical items, and 0.949 for all eight items. The item-total correlations of BOTH were also strong and significant. The total score of BOTH was strongly and significantly correlated with the Purdue Both-hands placement (r = .787, p < .001) and Assembly (r = .730, p < .001) subtests.
Conclusions and relevance: BOTH is a new observation tool to assess bimanual coordination post-stroke. Expert validity of BOTH was established, and excellent internal reliability and construct validity were demonstrated.
Further research is needed so that, in the future, BOTH can be used for clinical and research purposes to address bimanual coordination post-stroke.
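The internal-reliability statistics reported for BOTH can be reproduced from raw item scores. A minimal sketch of Cronbach's alpha for an (n respondents × k items) score matrix; this is the standard formula, not the study's own analysis code:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/total variance)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1).sum() # per-item sample variances
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```

Perfectly correlated items yield alpha = 1; values near 0.9 or above, as reported for the BOTH items, indicate excellent internal consistency.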