100+ datasets found
  1. Measuring quality of routine primary care data

    • zenodo.org
    • data.niaid.nih.gov
    • +1 more
    txt, xls
    Updated Jun 4, 2022
    Cite
    Olga Kostopoulou; Brendan Delaney (2022). Measuring quality of routine primary care data [Dataset]. http://doi.org/10.5061/dryad.dncjsxkzh
    Explore at:
    Available download formats: xls, txt
    Dataset updated
    Jun 4, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Olga Kostopoulou; Brendan Delaney
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Objective: Routine primary care data may be used for the derivation of clinical prediction rules and risk scores. We sought to measure the impact of a decision support system (DSS) on data completeness and freedom from bias.

    Materials and Methods: We used the clinical documentation of 34 UK General Practitioners who took part in a previous study evaluating the DSS. They consulted with 12 standardized patients. In addition to suggesting diagnoses, the DSS facilitates data coding. We compared the documentation from consultations with the electronic health record (EHR) (baseline consultations) vs. consultations with the EHR-integrated DSS (supported consultations). We measured the proportion of EHR data items related to the physician's final diagnosis. We expected that in baseline consultations, physicians would document only or predominantly observations related to their diagnosis, while in supported consultations, they would also document other observations as a result of exploring more diagnoses and/or ease of coding.

    Results: Supported documentation contained significantly more codes (IRR=5.76 [4.31, 7.70] P<0.001) and less free text (IRR = 0.32 [0.27, 0.40] P<0.001) than baseline documentation. As expected, the proportion of diagnosis-related data was significantly lower (b=-0.08 [-0.11, -0.05] P<0.001) in the supported consultations, and this was the case for both codes and free text.

    Conclusions: We provide evidence that data entry in the EHR is incomplete and reflects physicians' cognitive biases. This has serious implications for epidemiological research that uses routine data. A DSS that facilitates and motivates data entry during the consultation can improve routine documentation.
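
    The incidence rate ratios (IRRs) quoted in the results are ratios of expected counts between supported and baseline consultations. The sketch below is not the authors' analysis code; it only illustrates, with made-up counts and hypothetical column names ("n_codes", "supported"), how an IRR of this kind can be obtained as the exponentiated coefficient of a negative binomial regression.

```python
# Minimal sketch (not the authors' code): estimating an incidence rate ratio (IRR)
# for the number of coded items per consultation. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "n_codes":   [2, 3, 1, 2, 9, 12, 8, 11],   # codes entered per consultation (toy values)
    "supported": [0, 0, 0, 0, 1, 1, 1, 1],     # 0 = EHR only (baseline), 1 = EHR + DSS
})

model = smf.glm("n_codes ~ supported", data=df,
                family=sm.families.NegativeBinomial()).fit()

irr = np.exp(model.params["supported"])                 # IRR = exp(regression coefficient)
ci_low, ci_high = np.exp(model.conf_int().loc["supported"])
print(f"IRR = {irr:.2f} [{ci_low:.2f}, {ci_high:.2f}]")
```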

  2. Living Standards Measurement Survey 2003 (Wave 3 Panel) - Bosnia and...

    • microdata.fao.org
    Updated Nov 17, 2022
    + more versions
    Cite
    State Agency for Statistics (BHAS) (2022). Living Standards Measurement Survey 2003 (Wave 3 Panel) - Bosnia and Herzegovina [Dataset]. https://microdata.fao.org/index.php/catalog/2353
    Explore at:
    Dataset updated
    Nov 17, 2022
    Dataset provided by
    Federation of BiH Institute of Statistics (FIS)
    State Agency for Statistics (BHAS)
    Republika Srpska Institute of Statistics (RSIS)
    Time period covered
    2003
    Area covered
    Bosnia and Herzegovina
    Description

    Abstract

    In 2001, the World Bank, in co-operation with the Republika Srpska Institute of Statistics (RSIS), the Federal Institute of Statistics (FOS) and the Agency for Statistics of BiH (BHAS), carried out a Living Standards Measurement Survey (LSMS). In addition to collecting the information necessary to obtain as comprehensive a measure as possible of the basic dimensions of household living standards, the LSMS has three basic objectives:

    1. To provide the public sector, government, the business community, scientific institutions, international donor organizations and social organizations with information on different indicators of the population's living conditions, as well as on available resources for satisfying basic needs.

    2. To provide information for the evaluation of the results of different forms of government policy and programs developed with the aim of improving the population's living standard. The survey will enable the analysis of the relations among different aspects of living standards (housing, consumption, education, health, labor) at a given time, as well as within a household.

    3. To provide key inputs for the development of the government's Poverty Reduction Strategy Paper, based on the analyzed data.

    The Department for International Development, UK (DFID) contributed funding to the LSMS and provided funding for a further two years of data collection for a panel survey, known as the Household Survey Panel Series (HSPS). Birks Sinclair & Associates Ltd. were responsible for the management of the HSPS, with technical advice and support provided by the Institute for Social and Economic Research (ISER), University of Essex, UK. The panel survey provides longitudinal data by re-interviewing approximately half the LSMS respondents for two years following the LSMS, in the autumn of 2002 and 2003. The LSMS constitutes Wave 1 of the panel survey, so there are three years of panel data available for analysis. For the purposes of this documentation we use the following convention to describe the different rounds of the panel survey:
    - Wave 1: LSMS conducted in 2001, forming the baseline survey for the panel
    - Wave 2: second interview of 50% of LSMS respondents in Autumn/Winter 2002
    - Wave 3: third interview with sub-sample respondents in Autumn/Winter 2003

    The panel data allows the analysis of key transitions and events over this period, such as labour market or geographical mobility, and the observation of the consequent outcomes for the well-being of individuals and households in the survey. The panel data provides information on income and labour market dynamics within FBiH and RS. A key policy area is developing strategies for the reduction of poverty within FBiH and RS. The panel will provide information on the extent to which continuous poverty is experienced by different types of households and individuals over the three-year period. Most importantly, the covariates associated with moves into and out of poverty and the relative risks of poverty for different people can be assessed. As such, the panel aims to provide data which will inform the policy debates within FBiH and RS at a time of social reform and rapid change.

    Geographic coverage

    National coverage. Domains: Urban/rural/mixed; Federation; Republic

    Analysis unit

    Households

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    The Wave 3 sample consisted of the 2878 households that had been interviewed at Wave 2, plus a further 73 households that were interviewed at Wave 1 but were non-contacts at Wave 2. A total of 2951 households (1301 in the RS and 1650 in FBiH) were issued for Wave 3. As at Wave 2, the sample could not be replaced with any other households.

    Panel design

    Eligibility for inclusion

    The household and household membership definitions are the same standard definitions as at Wave 2. The sample membership status and eligibility for interview are as follows:
    i) All members of households interviewed at Wave 2 have been designated as original sample members (OSMs). OSMs include children within households, even if they are too young for interview.
    ii) Any new members joining a household containing at least one OSM are eligible for inclusion and are designated as new sample members (NSMs).
    iii) At each wave, all OSMs and NSMs are eligible for inclusion, apart from those who move out-of-scope (see discussion below).
    iv) All household members aged 15 or over are eligible for interview, including OSMs and NSMs.

    Following rules

    The panel design means that sample members who move from their previous wave address must be traced and followed to their new address for interview. In some cases the whole household will move together but in others an individual member may move away from their previous wave household and form a new split-off household of their own. All sample members, OSMs and NSMs, are followed at each wave and an interview attempted. This method has the benefit of maintaining the maximum number of respondents within the panel and being relatively straightforward to implement in the field.

    Definition of 'out-of-scope'

    It is important to maintain movers within the sample to maintain sample sizes and reduce attrition and also for substantive research on patterns of geographical mobility and migration. The rules for determining when a respondent is 'out-of-scope' are as follows:

    i. Movers out of the country altogether i.e. outside FBiH and RS. This category of mover is clear. Sample members moving to another country outside FBiH and RS will be out-of-scope for that year of the survey and not eligible for interview.

    ii. Movers between entities Respondents moving between entities are followed for interview. The personal details of the respondent are passed between the statistical institutes and a new interviewer assigned in that entity.

    iii. Movers into institutions Although institutional addresses were not included in the original LSMS sample, Wave 3 individuals who have subsequently moved into some institutions are followed. The definitions for which institutions are included are found in the Supervisor Instructions.

    iv. Movers into the district of Brcko are followed for interview. When coding entity, Brcko is treated as the entity from which the household that moved into Brcko originated.

    Mode of data collection

    Face-to-face [f2f]

    Cleaning operations

    Data entry

    As at Wave 2, CSPro was the chosen data entry software. The CSPro program includes two main features to reduce the number of keying errors and the editing required following data entry:
    - Data entry screens that included all skip patterns.
    - Range checks for each question (allowing three exceptions for inappropriate, don't know and missing codes).
    The Wave 3 data entry program had more checks than at Wave 2, and data entry staff were instructed to get all anomalies cleared by SIG fieldwork. The program was extensively tested prior to data entry. Ten computer staff were employed in each Field Office and, as all had worked on Wave 2, training was not undertaken.

    Editing

    Editing instructions were compiled (Annex G) and sent to Supervisors. For Wave 3, Supervisors were asked to take more time to edit every questionnaire returned by their interviewers. The FBTSA examined the work of twelve of the twenty-two Supervisors. All Supervisors made occasional errors with the Control Form, so a further 100% check of Control Forms and Module 1 was undertaken by the FBTSA and SIG members.

    Response rate

    The panel survey has enjoyed high response rates throughout the three years of data collection, with the Wave 3 response rates being slightly higher than those achieved at Wave 2. At Wave 3, 1650 households in the FBiH and 1300 households in the RS were issued for interview. Since new households may be created from split-off movers, it is possible for the number of households to increase during fieldwork. A similar number of new households were formed in each entity: 62 in the FBiH and 63 in the RS. This means that 3073 households were identified during fieldwork. Of these, 3003 were eligible for interview; the remaining 70 households had either moved out of BiH, been institutionalised, or were deceased (34 in the RS and 36 in the FBiH).

    Interviews were achieved in 96% of eligible households, an extremely high response rate by international standards for a survey of this type.

    In total, 8712 individuals (including children) were enumerated within the sample households (4796 in the FBiH and 3916 in the RS). Within the 3003 eligible households, 7781 individuals aged 15 or over were eligible for interview, with 7346 (94.4%) being successfully interviewed. Within cooperating households (where there was at least one interview) the interview rate was higher (98.8%).

    A very important measure in longitudinal surveys is the annual individual re-interview rate. This is because a high attrition rate, where large numbers of respondents drop out of the survey over time, can call into question the quality of the data collected. In BiH the individual re-interview rates have been high for the survey. The individual re-interview rate is the proportion of people who gave an interview at time t-1 who also give an interview at t. Of those who gave a full interview at wave 2, 6653 also gave a full interview at wave 3. This represents a re-interview rate of 97.9% - which is extremely high by international standards. When we look at those respondents who have been interviewed at all three years of the survey there are 6409 cases which are available for longitudinal analysis, 2881 in the RS and 3528 in the FBiH. This represents 82.8% of the responding wave 1 sample, a
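
    As a minimal illustration of the re-interview rate defined above (not based on the actual survey files), the rate can be computed directly from the sets of respondent identifiers with full interviews at two consecutive waves; the IDs below are hypothetical.

```python
# Minimal sketch (hypothetical IDs): the individual re-interview rate is the share of
# respondents with a full interview at wave t-1 who also gave a full interview at wave t.
wave2_respondents = {"P001", "P002", "P003", "P004", "P005"}
wave3_respondents = {"P001", "P002", "P004", "P005", "P999"}  # P999 is a new sample member

reinterviewed = wave2_respondents & wave3_respondents
rate = len(reinterviewed) / len(wave2_respondents)
print(f"Re-interview rate: {rate:.1%}")  # 4 of 5 -> 80.0% in this toy example
```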

  3. Data for Calculating Efficient Outdoor Water Uses

    • catalog.data.gov
    • data.cnra.ca.gov
    • +3 more
    Updated May 14, 2024
    + more versions
    Cite
    California Department of Water Resources (2024). Data for Calculating Efficient Outdoor Water Uses [Dataset]. https://catalog.data.gov/dataset/data-for-calculating-efficient-outdoor-water-uses-147dd
    Explore at:
    Dataset updated
    May 14, 2024
    Dataset provided by
    California Department of Water Resources (http://www.water.ca.gov/)
    Description

    December 6, 2023 (Final DWR Data). The 2018 Legislation required DWR to provide or otherwise identify data regarding the unique local conditions to support the calculation of an urban water use objective (CWC 10609.(b)(2)(C)). The urban water use objective (UWUO) is an estimate of aggregate efficient water use for the previous year based on adopted water use efficiency standards and local service area characteristics for that year. UWUO is calculated as the sum of efficient indoor residential water use, efficient outdoor residential water use, efficient outdoor irrigation of landscape areas with a dedicated irrigation meter for Commercial, Industrial, and Institutional (CII) water use, efficient water losses, and an estimated water use in accordance with variances, as appropriate. Details of urban water use objective calculations can be obtained from DWR's Recommendations for Guidelines and Methodologies document (Recommendations for Guidelines and Methodologies for Calculating Urban Water Use Objective - https://water.ca.gov/-/media/DWR-Website/Web-Pages/Programs/Water-Use-And-Efficiency/2018-Water-Conservation-Legislation/Performance-Measures/UWUO_GM_WUES-DWR-2021-01B_COMPLETE.pdf). The datasets provided in the links below enable urban retail water suppliers to calculate efficient outdoor water uses (both residential and CII), agricultural variances, variances for significant uses of water for dust control for horse corrals, and temporary provisions for water use for existing pools (as stated in Water Boards' draft regulation). DWR will provide technical assistance for estimating the remaining UWUO components, as needed. Data for calculating outdoor water uses include:

    • Reference evapotranspiration (ETo) – ETo is evaporation from plant and soil surfaces plus transpiration through the leaves of standardized grass surfaces over which weather stations stand. Standardization of the surfaces is required because evapotranspiration (ET) depends on combinations of several factors, making it impractical to take measurements under all sets of conditions. Plant factors, known as crop coefficients (Kc) or landscape coefficients (KL), are used to convert ETo to actual water use by a specific crop/plant. The ETo data that DWR provides to urban retail water suppliers for urban water use objective calculation purposes is derived from the California Irrigation Management Information System (CIMIS) program (https://cimis.water.ca.gov/). CIMIS is a network of over 150 automated weather stations throughout the state that measure weather data used to estimate ETo. CIMIS also provides daily maps of ETo on a 2-km grid using the Spatial CIMIS modeling approach, which couples satellite data with point measurements. The ETo data provided below for each urban retail water supplier is an area-weighted average value from the Spatial CIMIS ETo.

    • Effective precipitation (Peff) – Peff is the portion of total precipitation which becomes available for plant growth. Peff is affected by soil type, slope, land cover type, and intensity and duration of rainfall. DWR uses a soil water balance model, known as Cal-SIMETAW, to estimate daily Peff on a 4-km grid, and an area-weighted average value is calculated at the service area level. Cal-SIMETAW was developed by UC Davis and DWR and is widely used to quantify agricultural, and to some extent urban, water uses for the publication of DWR's Water Plan Update. Peff from Cal-SIMETAW is capped at 25% of total precipitation to account for potential uncertainties in its estimation. Daily Peff at each grid point is aggregated to produce weighted average annual or seasonal Peff at the service area level. The total precipitation that Cal-SIMETAW uses to estimate Peff comes from the Parameter-elevation Relationships on Independent Slopes Model (PRISM), a climate mapping model developed by the PRISM Climate Group at Oregon State University.

    • Residential Landscape Area Measurement (LAM) – The 2018 Legislation required DWR to provide each urban retail water supplier with data regarding the area of residential irrigable lands in a manner that can reasonably be applied to the standards (CWC 10609.6.(b)). DWR delivered the LAM data to all retail water suppliers, and a tabular summary of selected data types will be provided here. The data summary provided in this file contains irrigable-irrigated (II), irrigable-not-irrigated (INI), and not irrigable (NI) irrigation status classes, as well as horse corral areas (HCL_area), agricultural areas (Ag_area), and pool areas (Pool_area) for all retail suppliers.
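
    The two computations described above, an area-weighted average over the grid cells intersecting a service area and the 25% cap on effective precipitation, are straightforward. The sketch below uses made-up grid values and is only an illustration of those formulas, not DWR's processing code.

```python
# Minimal sketch (hypothetical grid values): area-weighted ETo for a service area and the
# 25% cap on effective precipitation (Peff) described above.
import numpy as np

# Annual totals for the grid cells intersecting one supplier's service area (assumed values).
eto_cells  = np.array([1250.0, 1310.0, 1290.0])   # mm, Spatial CIMIS ETo per cell
precip     = np.array([ 380.0,  400.0,  355.0])   # mm, PRISM total precipitation per cell
peff_model = np.array([ 120.0,   95.0,   88.0])   # mm, Cal-SIMETAW effective precipitation
area_km2   = np.array([   3.2,    1.1,    0.7])   # intersected area of each cell

weights = area_km2 / area_km2.sum()
eto_service_area = float(np.dot(weights, eto_cells))    # area-weighted average ETo

peff_capped = np.minimum(peff_model, 0.25 * precip)     # cap Peff at 25% of precipitation
peff_service_area = float(np.dot(weights, peff_capped))

print(f"Service-area ETo:  {eto_service_area:.1f} mm")
print(f"Service-area Peff: {peff_service_area:.1f} mm")
```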

  4. Data from: Omega over alpha for reliability estimation of unidimensional...

    • tandf.figshare.com
    rar
    Updated May 30, 2023
    Cite
    Alan K. Goodboy; Matthew M. Martin (2023). Omega over alpha for reliability estimation of unidimensional communication measures [Dataset]. http://doi.org/10.6084/m9.figshare.13207397.v1
    Explore at:
    Available download formats: rar
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Alan K. Goodboy; Matthew M. Martin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cronbach’s alpha (coefficient α) is the conventional statistic communication scholars use to estimate the reliability of multi-item measurement instruments. For many, if not most communication measures, α should not be calculated for reliability estimation. Instead, coefficient omega (ω) should be reported as it aligns with the definition of reliability itself. In this primer, we review α and ω, and explain why ω should be the new ‘gold standard’ in reliability estimation. Using Mplus, we demonstrate how ω is calculated on an available data set and show how preliminary scales can be revised with ‘ω if item deleted.’ We also list several easy-to-use resources to calculate ω in other software programs. Communication researchers should routinely report ω instead of α.
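
    As a minimal sketch of the quantities discussed in the abstract (not the Mplus workflow demonstrated in the primer), coefficient omega can be computed from the standardized loadings of a unidimensional factor model, while Cronbach's alpha comes from the item covariance structure. The loadings below are assumed illustrative values.

```python
# Minimal sketch (illustrative numbers): coefficient omega from a unidimensional factor model
# versus Cronbach's alpha from observed item scores. Loadings would normally come from a CFA.
import numpy as np

loadings     = np.array([0.70, 0.65, 0.80, 0.60])   # standardized factor loadings (assumed)
uniquenesses = 1.0 - loadings**2                    # error variances under standardization

# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
omega = loadings.sum()**2 / (loadings.sum()**2 + uniquenesses.sum())

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n_respondents x k_items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Simulate item responses consistent with the assumed one-factor model.
rng = np.random.default_rng(1)
factor = rng.normal(size=(500, 1))
items = factor * loadings + rng.normal(size=(500, 4)) * np.sqrt(uniquenesses)

print(f"omega = {omega:.3f}, alpha = {cronbach_alpha(items):.3f}")
```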

  5. Measurement-based MIMO channel model at 140GHz

    • zenodo.org
    zip
    Updated Apr 6, 2024
    Cite
    Mar Francis de Guzman; Katsuyuki Haneda; Pekka Kyösti (2024). Measurement-based MIMO channel model at 140GHz [Dataset]. http://doi.org/10.5281/zenodo.7640353
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 6, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mar Francis de Guzman; Katsuyuki Haneda; Pekka Kyösti
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    1. Introduction

    The file “gen_dd_channel.zip” is a package of a wideband multiple-input multiple-output (MIMO) stored radio channel model at 140 GHz in indoor hall, outdoor suburban, residential and urban scenarios. The package consists of 1) measured wideband double-directional multipath data sets estimated from radio channel sounding and processed through measurement-based ray-launching, and 2) MATLAB code sets that allow users to generate wideband MIMO radio channels with various antenna array types, e.g., uniform planar and circular arrays at the link ends.

    2. What does this package do?

    Outputs of the channel model

    The MATLAB file “ChannelGeneratorDD_hexax.m” gives the following variables, among others. The .m file also gives optional figures illustrating antennas and radio channel responses.

    Variables and descriptions:

    • CIR: MIMO channel impulse responses
    • CFR: MIMO channel frequency responses

    Inputs to the channel model

    In order for the MATLAB file “ChannelGeneratorDD_hexax.m” to run properly, the following inputs are required.

    Directory and description:

    • data_030123_double_directional_paths: double-directional multipath data, measured and complemented by a ray-launching tool, for various cellular sites.

    User’s parameters

    When using “ChannelGeneratorDD_hexax.m”, the following choices are available.

    Features and choices:

    Channel model types for transfer function generation:

    • 'snapshot': single time sample per link = static, random phase for each path, amplitude from measurements

    • 'virtualMotion': Doppler shifts & temporal fading, static propagation parameters, random phase for each path, amplitude from measurements, Doppler frequency per path from AoA and velocity vector

    Antenna / beam shapes:

    • 'single3GPP': single antenna element with power pattern shape defined in 3GPP, adjustable HPBW etc.

    • 'URA': uniform rectangular array, omni-directional elements

    • 'UCA': uniform circular array, omni-directional elements

    List of files in the dataset

    MATLAB codes that implement the channel model

    The MATLAB files consist of the following files.

    File and directory names with descriptions:

    • readme_100223.txt: readme file; please read it before using the files.

    • ChannelGeneratorDD_hexax.m: main code to run; integrates antenna arrays and double-directional path data to derive MIMO radio channels. No need to see/edit other files.

    • gen_pathDD.m, randl.m, randLoc.m: sub-routines used in ChannelGeneratorDD_hexax.m; no modifications needed.

    • Hexa-X channel generator DD_presentation.pdf: user manual of ChannelGeneratorDD_hexax.m.

    Measured multipath data

    The directory "data_030123_double_directional_paths" in the package contains the following files.

    Filenames and descriptions:

    • readme_100223.txt: readme file; please read it before using the files.

    • RTdata_[scenario]_[date].mat: double-directional multipath parameters at 140 GHz in the specified scenario, estimated from radio channel sounding and ray-tracing.

    • description_of_data_dd_[scenario].pdf: explains the data formats, the measurement site and sample results.
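
    The channel generator itself is the MATLAB code listed above. As a schematic illustration only of the general principle it implements (summing double-directional paths weighted by transmit and receive array responses and a delay phase term), here is a hedged Python sketch with made-up path parameters and simple half-wavelength uniform linear arrays; it is not a substitute for ChannelGeneratorDD_hexax.m and omits its antenna options and Doppler features.

```python
# Schematic sketch (not the MATLAB package): synthesizing a MIMO channel frequency response
# at 140 GHz from double-directional path parameters with half-wavelength ULAs.
# Path values below are made up; the real parameters come from the RTdata_*.mat files.
import numpy as np

n_tx, n_rx = 8, 8                   # array sizes

# Double-directional paths: complex gain, delay [s], azimuth of departure/arrival
gains  = np.array([1.0, 0.3, 0.1]) * np.exp(1j * 2 * np.pi * np.random.rand(3))
delays = np.array([10e-9, 25e-9, 40e-9])
aod    = np.deg2rad([0.0, 30.0, -45.0])
aoa    = np.deg2rad([10.0, -20.0, 60.0])

def ula_steering(n: int, angle: float) -> np.ndarray:
    """Steering vector of an n-element half-wavelength ULA for a given azimuth angle."""
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(angle))

freqs = np.linspace(-1e9, 1e9, 64)  # baseband frequency grid over a 2 GHz bandwidth

# CFR[f, rx, tx] = sum over paths of g * a_rx(AoA) a_tx(AoD)^T * exp(-j 2 pi f tau)
cfr = np.zeros((freqs.size, n_rx, n_tx), dtype=complex)
for g, tau, phi_t, phi_r in zip(gains, delays, aod, aoa):
    outer = np.outer(ula_steering(n_rx, phi_r), ula_steering(n_tx, phi_t))
    cfr += g * outer[None, :, :] * np.exp(-2j * np.pi * freqs * tau)[:, None, None]

cir = np.fft.ifft(cfr, axis=0)      # corresponding channel impulse response
print(cfr.shape, cir.shape)
```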

    References

    Details of the data set are available in the following two documents:

    The stored channel models

    A. Nimr (ed.), "Hexa-X Deliverable D2.3 Radio models and enabling techniques towards ultra-high data rate links and capacity in 6G," April 2023, available: https://hexa-x.eu/deliverables/

    @misc{Hexa-XD23,
    author = {{A. Nimr (ed.)}},
    title = {{Hexa-X Deliverable D2.3 Radio models and enabling techniques towards ultra-high data rate links and capacity in 6G}},
    year = {2023},
    month = {Apr.},
    howpublished = {https://hexa-x.eu/deliverables/},
    }

    Derivation of the data, i.e., radio channel sounding and measurement-based ray-launching

    M. F. De Guzman and K. Haneda, "Analysis of wave-interacting objects in indoor and outdoor environments at 142 GHz," IEEE Transactions on Antennas and Propagation, vol. 71, no. 12, pp. 9838-9848, Dec. 2023, doi: 10.1109/TAP.2023.3318861

    @ARTICLE{DeGuzman23_TAP,
    author={De Guzman, Mar Francis and Haneda, Katsuyuki},
    journal={IEEE Transactions on Antennas and Propagation},
    title={Analysis of Wave-Interacting Objects in Indoor and Outdoor Environments at 142 {GHz}},
    year={2023},
    volume={71},
    number={12},
    pages={9838-9848},
    }

    Finally, the code “randl.m” is from the following MATLAB Central File Exchange entry.

    Hristo Zhivomirov (2023). Generation of Random Numbers with Laplace Distribution (https://www.mathworks.com/matlabcentral/fileexchange/53397-generation-of-random-numbers-with-laplace-distribution), MATLAB Central File Exchange. Retrieved February 15, 2023.

    Data usage terms

    Any usage of the data is subject to consent to the following conditions:

    • The file “ChannelGeneratorDD_hexax.m” is owned by OUL. Contact: Dr. Pekka Kyösti, Pekka.Kyosti@oulu.fi.
    • The other files and those in the directories, except for “randl.m”, are owned by AAU. Contact: Mr. Mar Francis de Guzman, francis.deguzman@aalto.fi.
    • When a scientific paper is published that exploits the data and code, please cite this data set; the citation can be downloaded from the zenodo page of this data set.
  6. Data set - Measured in a context: making sense of open access book data

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 6, 2023
    Cite
    Ronald Snijder (2023). Data set - Measured in a context : making sense of open access book data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7799222
    Explore at:
    Dataset updated
    Jul 6, 2023
    Dataset authored and provided by
    Ronald Snijder
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    For more than a decade, open access book platforms have been distributing titles in order to maximise their impact. Each platform offers some form of usage data, showcasing the success of their offering. However, the numbers alone are not sufficient to convey how well a book is actually performing.

    Our data set consists of 18,014 books and chapters. The selected titles were added to the OAPEN Library collection before 1 January 2022, and the usage data for twelve months (January to December 2022) has been captured. During that period, this collection of books and chapters was downloaded more than 10 million times. Each title has been linked to one broad subject, and the title's language has been coded as either English, German or other languages.

    The titles are rated using the TOANI score.

    The acronym stands for Transparent Open Access Normalised Index. The transparency is based on the application of clear regulations, and by making all data used visible. The data is normalised, by using a common scale for the complete collection of an open access book platform. Additionally, there are only three possible values to score the titles: average, less than average and more than average. This index is set up to provide a clear and simple answer to the question whether an open access book has made an impact. It is not meant to give a sense of false accuracy; the complexities surrounding this issue cannot be measured in several decimal places.

    The TOANI score is based on the following principles:

    Select only titles that have been available for at least 12 months;

    Use the usage data of the same 12 months period for the whole collection;

    Each title is assigned one – high level – subject;

    Each title is assigned one language;

    All titles are grouped based on subject and language;

    The groups should consist of at least 100 titles;

    The following data must be made available for each title:

    Platform

    Total number of titles in the group

    Subject

    Language

    Period used for the measurement

    Minimum value, maximum value, median, first and third quartile of the platform’s usage data

    Based on the previous, titles are classified as:

    “Less than average” – First quartile; 25 % of the titles

    “Average” – Second and third quartile; 50% of the titles

    “More than average” – Fourth quartile; 25 % of the titles
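
    The quartile-based classification above is simple to apply per subject-language group. The sketch below uses hypothetical download counts and is only an illustration of that rule, not the OAPEN implementation.

```python
# Minimal sketch (hypothetical download counts): TOANI-style classification of titles in one
# subject + language group into "less than average" (bottom quartile), "average" (middle 50%),
# and "more than average" (top quartile), based on 12 months of platform usage data.
import numpy as np

downloads = np.array([12, 45, 160, 820, 95, 30, 410, 77, 220, 51])  # per title, assumed

q1, q3 = np.percentile(downloads, [25, 75])

def toani_class(n_downloads: float) -> str:
    if n_downloads < q1:
        return "less than average"
    if n_downloads > q3:
        return "more than average"
    return "average"

for n in downloads:
    print(n, toani_class(n))
```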

  7. UWB Positioning and Tracking Data Set

    • data.europa.eu
    • zenodo.org
    unknown
    Updated Jul 3, 2025
    Cite
    Zenodo (2025). UWB Positioning and Tracking Data Set [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-8280736?locale=nl
    Explore at:
    Available download formats: unknown
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    UWB Positioning and Tracking Data Set

    The UWB positioning data set contains measurements from four different indoor environments, which can be used to evaluate range-based positioning in different indoor environments.

    # Measurement system

    The measurements were made using 9 DW1000 UWB transceivers (DWM1000 modules) connected to networked Raspberry Pi computers using the in-house radio board SNPN_UWB. 8 nodes were used as positioning anchor nodes with fixed locations in each indoor environment, and one node was used as a mobile positioning tag. Each UWB node is built around a Raspberry Pi computer and is wirelessly connected to the measurement controller (e.g. a laptop) using Wi-Fi and MQTT communication technologies. All tag positions were generated beforehand to resemble a human walking path as closely as possible. All walking path points are equally spaced to represent equidistant samples of a walking path in the time domain. The sampled walking path (measurement tag positions) is included in a downloadable data set file under the downloads section.

    # Folder structure

    The folder structure is represented below. The folder contains four subfolders named after the indoor environments measured during the measurement campaign and a folder raw_data where the raw measurement data is saved. Each environment folder has an anchors.csv file with anchor names and locations, a data.json file with measurements, a walking_path.csv file with tag positions, and a subfolder floorplan with floorplan.dxf (AutoCAD format), floorplan.png and floorplan_track.jpg. The subfolder raw_data contains raw data in subfolders named after the four indoor environments where the measurements were taken. Each location subfolder contains a subfolder data, where the data from each tag position in walking_path.csv is collected in a separate folder; the number of folders in the data folder equals the number of measurement points in walking_path.csv. Each measurement subfolder contains 48 .csv files named after the communication channel and anchor used for those measurements. For example, ch1_A1.csv contains all measurements at the selected tag location with anchor A1 on UWB channel ch1. The location folder also contains anchors.csv and walking_path.csv files, which are identical to the files mentioned previously. The last folder in the data set is the technical_validation folder, where the results of technical validation of the data set are collected, separated into 8 subfolders: cir_min_max_mean, los_nlos, positioning_wls, range, range_error, range_error_A6, range_error_histograms and rss.

    The organization of the data set is the following:

    data_set
    + location0
      - anchors.csv
      - data.json
      - walking_path.csv
      + floorplan
        - floorplan.dxf
        - floorplan.png
        - floorplan_track.jpg
        - walking_path.csv
    + location1
      - ...
    + location2
      - ...
    + location3
      - ...
    + raw_data
      + location0
        + data
          + 1.07_9.37_1.2
            - ch1_A1.csv
            - ch7_A8.csv
            - ...
          + 1.37_9.34_1.2
            - ...
          + ...
      + location1
        + ...
      + location2
        + ...
      + location3
        + ...
    + technical_validation
      + cir_min_max_mean
      + positioning_wls
      + range
      + range_error
      + range_error_histograms
      + rss
    - LICENSE
    - README

    # Data format

    Raw measurements are saved in .csv files. Each file starts with a header whose first line gives the file version and whose second line gives the data column names (one column name is missing from the header). The actual column names included in the .csv files are: TAG_ID, ANCHOR_ID, X_TAG, Y_TAG, Z_TAG, X_ANCHOR, Y_ANCHOR, Z_ANCHOR, NLOS, RANGE, FP_INDEX, RSS, RSS_FP, FP_POINT1, FP_POINT2, FP_POINT3, STDEV_NOISE, CIR_POWER, MAX_NOISE, RXPACC, CHANNEL_NUMBER, FRAME_LENGTH, PREAMBLE_LENGTH, BITRATE, PRFR, PREAMBLE_CODE, CIR (starting with this column, all columns until the end of the line represent the channel impulse response).

    # Availability of code

    Code for data analysis and preprocessing of all data available in this data set is published on GitHub: https://github.com/KlemenBr/uwb_positioning.git. The code is licensed under the Apache License 2.0.

    # Authors and license

    The author of the data set in this repository is Klemen Bregar, klemen.bregar@ijs.si. This work is licensed under a Creative Commons Attribution 4.0 International License.

    # Funding

    The research leading to the data collection has been partially funded by the European Horizon 2020 Programme project eWINE under grant agreement No. 688116, by the Slovenian Research Agency under grant numbers P2-0016 and J2-2507, and by the bilateral project with grant number BI-ME/21-22-007.
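
    A minimal sketch of reading one raw measurement file under the format described above (two header lines: version, then column names): skip the version line, then split the fixed ranging metadata columns from the trailing channel impulse response samples. The file path is illustrative.

```python
# Minimal sketch (illustrative path): loading one raw UWB measurement file. The first header
# line holds the file version, so we skip one row and let pandas read the column names.
# Everything after the fixed metadata columns is treated as channel impulse response samples.
import pandas as pd

meta_cols = [
    "TAG_ID", "ANCHOR_ID", "X_TAG", "Y_TAG", "Z_TAG", "X_ANCHOR", "Y_ANCHOR", "Z_ANCHOR",
    "NLOS", "RANGE", "FP_INDEX", "RSS", "RSS_FP", "FP_POINT1", "FP_POINT2", "FP_POINT3",
    "STDEV_NOISE", "CIR_POWER", "MAX_NOISE", "RXPACC", "CHANNEL_NUMBER", "FRAME_LENGTH",
    "PREAMBLE_LENGTH", "BITRATE", "PRFR", "PREAMBLE_CODE",
]

path = "raw_data/location0/data/1.07_9.37_1.2/ch1_A1.csv"   # example file from the tree above
df = pd.read_csv(path, skiprows=1)                          # skip the version line

meta = df[[c for c in meta_cols if c in df.columns]]        # ranging metadata per measurement
cir = df[[c for c in df.columns if c not in meta_cols]]     # remaining columns: CIR samples

print(meta[["ANCHOR_ID", "RANGE", "NLOS", "RSS"]].head())
print("CIR samples per measurement:", cir.shape[1])
```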

  8. Data from: Can measurements of foraging behaviour predict variation in...

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    Updated Jun 5, 2025
    Cite
    Agricultural Research Service (2025). Data from: Can measurements of foraging behaviour predict variation in weight gains of free-ranging cattle? [Dataset]. https://catalog.data.gov/dataset/data-from-can-measurements-of-foraging-behaviour-predict-variation-in-weight-gains-of-free-f45bb
    Explore at:
    Dataset updated
    Jun 5, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    Technologies are now available to continuously monitor livestock foraging behaviours, but it remains unclear whether such measurements can meaningfully inform livestock grazing management decisions. Empirical studies in extensive rangelands are needed to quantify relationships between short-term foraging behaviours (e.g. minutes to days) and longer-term measures of animal performance. The objective of this study was to examine whether four different ways of measuring daily foraging behaviour (grazing-bout duration, grazing time per day, velocity while grazing, and turn angle while grazing) were related to weight gain by free-ranging yearling steers grazing semiarid rangeland. These data include measurements interpreted from yearling steers outfitted with neck collars supporting a solar-powered device that measured GPS locations at 5-minute intervals and used an accelerometer to predict grazing activity at 4-second intervals. Average daily weight gains of steers are included, as well as an estimate of standing forage biomass derived from the Harmonized Landsat-Sentinel remote-sensing product. These data support research to advance knowledge regarding the use of on-animal sensors that monitor foraging behaviour, which have the potential to transmit indicators to livestock managers in real time (e.g. daily). This approach can help inform decisions such as when to move animals among paddocks, or when to sell or transition animals from rangeland to confined feeding operations.

    Resources in this dataset:
    • Resource Title: Means of Moonitor Metrics from 2019-2020 Study Periods. File Name: Moo2019-20_dailymetrics_w_ADG_by_studyperiod.csv
    • Resource Title: Data Dictionary for Means of Moonitor Metrics from 2019-2020 Study Period. File Name: Moo2019-20_dailymetrics_w_ADG_by_studyperiod_dictionary.csv
    • Resource Title: Daily Moonitor Metrics from 2019-2020. File Name: Moo2019-20_dailymetrics_database.csv
    • Resource Title: Data Dictionary for Daily Moonitor Metrics from 2019-2020. File Name: Moo2019-20_dailymetrics_database_dictionary.csv

  9. Data from: Can GDP Measurement Be Further Improved? Data Revision and...

    • tandf.figshare.com
    pdf
    Updated May 30, 2023
    Cite
    Jan P. A. M. Jacobs; Samad Sarferaz; Jan-Egbert Sturm; Simon van Norden (2023). Can GDP Measurement Be Further Improved? Data Revision and Reconciliation [Dataset]. http://doi.org/10.6084/m9.figshare.13119974.v3
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Jan P. A. M. Jacobs; Samad Sarferaz; Jan-Egbert Sturm; Simon van Norden
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recent years have seen many attempts to combine expenditure-side estimates of U.S. real output (GDE) growth with income-side estimates (GDI) to improve estimates of real GDP growth. We show how to incorporate information from multiple releases of noisy data to provide more precise estimates while avoiding some of the identifying assumptions required in earlier work. This relies on a new insight: using multiple data releases allows us to distinguish news and noise measurement errors in situations where a single vintage does not. We find that (a) the data prefer averaging across multiple releases instead of discarding early releases in favor of later ones, and (b) that initial estimates of GDI are quite informative. Our new measure, GDP++, undergoes smaller revisions and tracks expenditure measures of GDP growth more closely than either the simple average of the expenditure and income measures published by the BEA or the GDP growth measure of Aruoba et al. published by the Federal Reserve Bank of Philadelphia.

  10. Aquatic measurement data

    • ckan.mobidatalab.eu
    Updated Mar 10, 2022
    + more versions
    Cite
    Abteilung II - Integrativer Umweltschutz (2022). Aquatic measurement data [Dataset]. https://ckan.mobidatalab.eu/dataset/water-portal-hydrological-measurement-data
    Explore at:
    Available download formats: http://publications.europa.eu/resource/authority/file-type/csv, http://publications.europa.eu/resource/authority/file-type/pdf
    Dataset updated
    Mar 10, 2022
    Dataset provided by
    Abteilung II - Integrativer Umweltschutz
    License

    Data licence Germany – Attribution – Version 2.0: https://www.govdata.de/dl-de/by-2-0
    License information was derived automatically

    Description

    For Berlin, current and historical measurement data from the state measurement network for surface water (rivers and lakes) and groundwater (aquifers) are available on a daily basis. Available data includes hydrological (water level, flow rate) and hydrogeological (groundwater level, hydrochemical analysis results) readings. Furthermore, various quality parameters such as temperature, electrical conductivity, pH value, oxygen content, etc. are available from both areas. There are several ways to download data: The API can be used to query data. Documentation is available for this. However, data can also be downloaded directly in CSV format from the wasserportal.berlin.de website. There are 3 variants of aggregated data available:
    1. Individual values from the last 12 months as the arithmetic mean of the measured values over a time interval of 15 minutes.
    2. Daily values from the start of measurement as daily mean values and, if necessary, with daily maximum and daily minimum values.
    3. Monthly values from the start of measurement with monthly minimum, mean and maximum.
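
    As a minimal sketch of the aggregation variants listed above (not the Wasserportal API itself), 15-minute readings exported as CSV could be rolled up to daily and monthly summaries with pandas; the file name and column names below are assumptions.

```python
# Minimal sketch (assumed file and column names): aggregating 15-minute readings to the
# daily (variant 2) and monthly (variant 3) summaries described above.
import pandas as pd

# A CSV export with a timestamp column and one measured value per 15-minute interval.
df = pd.read_csv("water_level.csv", parse_dates=["timestamp"]).set_index("timestamp")

daily = df["value"].resample("D").agg(["mean", "min", "max"])     # daily mean/min/max
monthly = df["value"].resample("MS").agg(["min", "mean", "max"])  # monthly min/mean/max

print(daily.head())
print(monthly.head())
```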

  11. Measurement Using Linked SED and UMETRICS Data

    • explore.openaire.eu
    Updated Apr 15, 2022
    Cite
    Ekaterina Levitskaya; Brian Kim; Maryah Garner; Rukhshan Mian; Benjamin Feder; Allison Nunez (2022). Measurement Using Linked SED and UMETRICS Data [Dataset]. http://doi.org/10.5281/zenodo.6463886
    Explore at:
    Dataset updated
    Apr 15, 2022
    Authors
    Ekaterina Levitskaya; Brian Kim; Maryah Garner; Rukhshan Mian; Benjamin Feder; Allison Nunez
    Description

    This is a Jupyter notebook that explores the linked Survey of Earned Doctorates (SED)-Universities: Measuring the Impacts of Research on Innovation, Competitiveness, and Science (UMETRICS) data to get a better sense of how these two data sources might be used together. Furthermore, the purpose of this notebook is to allow participants to think critically about what exactly is being measured and how missingness in the data should be interpreted. This notebook was developed for the Fall 2021 Applied Data Analytics training facilitated by the National Center for Science and Engineering Statistics (NCSES) and Coleridge Initiative.

  12. Data from: Can magnetic susceptibilities measured on outcrops be used for...

    • hub.arcgis.com
    • metalearth.geohub.laurentian.ca
    Updated Sep 7, 2019
    Cite
    MetalEarth (2019). Can magnetic susceptibilities measured on outcrops be used for modelling (and constraining inversions of) aeromagnetic data? [Dataset]. https://hub.arcgis.com/documents/4e2b83369c424ed4bd98bd04ab838393
    Explore at:
    Dataset updated
    Sep 7, 2019
    Dataset authored and provided by
    MetalEarth
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Abstract: Magnetic susceptibilities measured on outcrop and drill-core samples using hand-held instruments have been shown in the literature to be useful for identifying mineralogical changes. It is not yet clear how useful these measurements are for constraining magnetic modelling and inversion. We have generated estimates of the apparent magnetic susceptibility of the ground by mathematical transformation of aeromagnetic data and assumed that these values can be used to model the magnetic data. In the same area we have a large number of measurements on outcrop and have compared these two independent estimates. When the measured values are below 1×10^-3 S.I., there is no correlation between the measured and apparent values, interpreted to be likely due to the influence or interference from nearby or underlying magnetic sources. Hence, in this case the measured values cannot be used to constrain modelling and inversion. When the measurements are above this value there is a limited correlation, with values only agreeing to within a factor of about 10, so these values can be used as very rough constraints. The poor correlation is interpreted as due to the presence of remanent magnetization or heterogeneity of the magnetic susceptibility within the rock. A large database of outcrop measurements gives an indication of the range of the variation in magnetic susceptibility values that could be used in modelling.

    Link to thesis: https://zone.biblio.laurentian.ca/handle/10219/3404

  13. Degradation Measurement of Robot Arm Position Accuracy

    • data.nist.gov
    • catalog.data.gov
    Updated Sep 7, 2018
    Cite
    Helen Qiao (2018). Degradation Measurement of Robot Arm Position Accuracy [Dataset]. http://doi.org/10.18434/M31962
    Explore at:
    Dataset updated
    Sep 7, 2018
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Authors
    Helen Qiao
    License

    https://www.nist.gov/open/license

    Description

    The dataset contains both the robot's high-level tool center position (TCP) health data and controller-level components' information (i.e., joint positions, velocities, currents, and temperatures). The datasets can be used by users (e.g., software developers, data scientists) who work on robot health management (including accuracy) but have limited or no access to robots that can capture real data. The datasets can support the:
    - Development of robot health monitoring algorithms and tools
    - Research of technologies and tools to support robot monitoring, diagnostics, prognostics, and health management (collectively called PHM)
    - Validation and verification of industrial PHM implementations, for example the verification of a robot's TCP accuracy after the work cell has been reconfigured, or whenever a manufacturer wants to determine if the robot arm has experienced a degradation.
    For data collection, a trajectory is programmed for the Universal Robot (UR5) approaching and stopping at randomly selected locations in its workspace. The robot moves along this preprogrammed trajectory under different conditions of temperature, payload, and speed. The TCP positions (x, y, z) of the robot are measured by a 7-D measurement system developed at NIST. Differences are calculated between the positions measured by the 7-D measurement system and the nominal positions calculated from the nominal robot kinematic parameters, and the results are recorded within the dataset. Controller-level sensing data are also collected from each joint (direct output from the controller of the UR5) to understand the influence of temperature, payload, and speed on position degradation. Controller-level data can be used for root cause analysis of robot performance degradation, by providing joint positions, velocities, currents, accelerations, torques, and temperatures. For example, the cold-start temperatures of the six joints were approximately 25 degrees Celsius; after two hours of operation, the joint temperatures increased to approximately 35 degrees Celsius. Control variables are listed in the header file in the data set (UR5TestResult_header.xlsx). If you'd like to comment on this data and/or offer recommendations on future datasets, please email guixiu.qiao@nist.gov.
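
    As a minimal sketch of the accuracy measure described above (not NIST's processing code), the TCP position error is simply the Euclidean distance between the externally measured positions and the nominal positions from the robot kinematics; the coordinates below are made up.

```python
# Minimal sketch (made-up coordinates): TCP position error as the Euclidean distance between
# externally measured positions and nominal positions from the robot kinematic parameters.
import numpy as np

nominal  = np.array([[0.400, 0.150, 0.300],
                     [0.420, 0.160, 0.310]])      # meters, from nominal kinematics
measured = np.array([[0.4006, 0.1497, 0.3011],
                     [0.4213, 0.1589, 0.3098]])   # meters, from the 7-D measurement system

error_mm = np.linalg.norm(measured - nominal, axis=1) * 1000.0
print("Per-pose TCP error [mm]:", np.round(error_mm, 3))
```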

  14. 2020 Census Redistricting Data (P.L. 94-171) Noisy Measurement File

    • registry.opendata.aws
    • dataverse.harvard.edu
    + more versions
    Cite
    United States Census Bureau, 2020 Census Redistricting Data (P.L. 94-171) Noisy Measurement File [Dataset]. https://registry.opendata.aws/census-2020-pl94-nmf/
    Explore at:
    Dataset provided by
    United States Census Bureau (http://census.gov/)
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The 2020 Census Redistricting Data (P.L. 94-171) Noisy Measurement File (NMF) is an intermediate output of the 2020 Census Disclosure Avoidance System (DAS) TopDown Algorithm (TDA) (as described in Abowd, J. et al [2022] https://doi.org/10.1162/99608f92.529e3cb9, and implemented in the DAS 2020 Redistricting Production Code). The NMF was generated using the Census Bureau's implementation of the Discrete Gaussian Mechanism, calibrated to satisfy zero-Concentrated Differential Privacy with bounded neighbors.

    The NMF values, called noisy measurements, are the output of applying the Discrete Gaussian Mechanism to counts from the 2020 Census Edited File (CEF). They are generally inconsistent with one another (for example, in a county composed of two tracts, the noisy measurement for the county's total population may not equal the sum of the noisy measurements of the two tracts' total population), and frequently negative (especially when the population being measured was small), but are integer-valued. The NMF was later post-processed as part of the DAS code to take the form of microdata and to satisfy various constraints. The NMF documented here contains both the noisy measurements themselves as well as the data needed to represent the DAS constraints; thus, the NMF could be used to reproduce the steps taken by the DAS code to produce microdata from the noisy measurements by applying the production code base.

    The 2020 Census Redistricting Data (P.L. 94-171) Noisy Measurement File includes zero-Concentrated Differentially Private (zCDP) (Bun, M. and Steinke, T [2016]) noisy measurements, implemented via the discrete Gaussian mechanism. These are estimated counts of individuals and housing units included in the 2020 Census Edited File (CEF), which includes confidential data initially collected in the 2020 Census of Population and Housing. The noisy measurements included in this file were subsequently post-processed by the TopDown Algorithm (TDA) to produce the 2020 Census Redistricting Data (P.L. 94-171) Summary File.

    The NMF provides estimates of counts of persons in the CEF by various characteristics and combinations of characteristics, including their reported race and ethnicity, whether they were of voting age, whether they resided in a housing unit or one of 7 group quarters types, and their census block of residence, after the addition of discrete Gaussian noise (with the scale parameter determined by the privacy-loss budget allocation for that particular query under zCDP). Noisy measurements of the counts of occupied and vacant housing units by census block are also included. Lastly, data on constraints (information into which no noise was infused by the Disclosure Avoidance System (DAS) and which was used by the TDA to post-process the noisy measurements into the 2020 Census Redistricting Data (P.L. 94-171) Summary File) are provided.
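
    A simplified illustration of the kind of noise infusion described above: discrete Gaussian noise added to an integer count. This toy truncated-support sampler only approximates the exact discrete Gaussian and does not reproduce the Census Bureau's production sampler or its privacy-loss accounting.

```python
# Simplified illustration (not the Census Bureau's production code): adding discrete Gaussian
# noise to an integer count. Probability mass is proportional to exp(-x^2 / (2*sigma^2)) over
# a wide truncated support, which closely approximates the exact discrete Gaussian.
import numpy as np

def discrete_gaussian_noise(sigma: float, rng: np.random.Generator) -> int:
    support = np.arange(-int(20 * sigma) - 1, int(20 * sigma) + 2)
    pmf = np.exp(-(support.astype(float) ** 2) / (2.0 * sigma**2))
    pmf /= pmf.sum()
    return int(rng.choice(support, p=pmf))

rng = np.random.default_rng(0)
true_count = 37                                  # e.g. a block-level population count
noisy_measurement = true_count + discrete_gaussian_noise(sigma=5.0, rng=rng)
print(noisy_measurement)                         # integer-valued, possibly negative, and
                                                 # generally inconsistent with related counts
```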

  15. SES Water Night flow Monitoring

    • streamwaterdata.co.uk
    Updated Apr 30, 2024
    Cite
    dpararajasingam_ses (2024). SES Water Night flow Monitoring [Dataset]. https://www.streamwaterdata.co.uk/items/6ab069aa9fe54f979aa7ca7352e7311d
    Explore at:
    Dataset updated
    Apr 30, 2024
    Dataset authored and provided by
    dpararajasingam_ses
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Key Definitions
    • Dataset: A structured and organized collection of related elements, often stored digitally, used for analysis and interpretation in various fields.
    • Data Triage: The process carried out by a Data Custodian to determine if there is any evidence of sensitivities associated with Data Assets, their associated Metadata and Software Scripts used to process Data Assets if they are used as Open Data.
    • District Metered Area (DMA): The role of a district metered area (DMA) is to divide the water distribution network into manageable areas or sectors into which the flow can be measured. These areas provide the water providers with guidance as to which DMAs (District Metered Areas) require leak detection work.
    • Leakage: The accidental admission or escape of a fluid or gas through a hole or crack.
    • Night Flow: This technique considers that in a DMA, leakages can be estimated when the flow into the DMA is at its minimum. Typically, this is measured at night between 2am and 4am when customer demand is low so that network leakage can be detected.
    • Centroid: The centre of a geometric object.

    Data History
    • Data Origin: Companies have configured their networks to be able to continuously monitor night flows using district meters. Flow data is recorded on meters and normally transmitted daily to a data centre. Data is analysed to confirm its validity and used to derive continuous night flow in each monitored area.

    Data Triage Considerations
    • Data Quality: Not all DMAs provide quality data for the purposes of trend analysis. It was decided that water companies should choose 10% of their DMAs to be represented in this data set to begin with. The advice to publishers is to choose those with reliable and consistent telemetry, indicative of genuine low demand during measurement times and not revealing of sensitive night usage patterns.
    • Data Consistency: There is a concern that companies measure flow allowance for legitimate night use and/or potential night use differently. To avoid any inconsistency, it was decided that we would share the net flow.
    • Critical National Infrastructure: The release of boundary data for district metered areas has been deemed to be revealing of critical national infrastructure. Because of this, it has been decided that the data set shall only contain point data from a centroid within the DMA.
    • Data Triage Review Frequency: Every 12 months, unless otherwise requested.

    Data Limitations
    • Some of the flow recorded may be legitimate night-time usage of the network.
    • Some measuring systems automatically infill estimated measurements where none have been received via telemetry. These estimates are based on past flow.
    • The reason for a fluctuation in night flow may not be determined by this dataset, but potential causes can include seasonal variation in night-time water usage and mains bursts.

    Data Publish Frequency
    Monthly
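
    As a minimal sketch of the night-flow technique defined above (not SES Water's processing), the minimum metered inflow to a DMA between 02:00 and 04:00 can be taken per day as a leakage indicator; the file and column names below are assumptions.

```python
# Minimal sketch (assumed file and column names): estimating minimum night flow for a DMA by
# taking the minimum metered net inflow between 02:00 and 04:00, when customer demand is lowest.
import pandas as pd

flows = pd.read_csv("dma_inflow.csv", parse_dates=["timestamp"]).set_index("timestamp")

night = flows.between_time("02:00", "04:00")
min_night_flow = night["net_flow"].resample("D").min()   # one minimum night flow per day

print(min_night_flow.head())
```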

  16. Measures that would support utilizing data in healthcare worldwide in 2022

    • statista.com
    Updated Jul 5, 2022
    Cite
    Statista (2022). Measures that would support utilizing data in healthcare worldwide in 2022 [Dataset]. https://www.statista.com/statistics/1316677/factors-that-would-help-healthcare-data-utilization-worldwide/
    Explore at:
    Dataset updated
    Jul 5, 2022
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Dec 2021 - Feb 2022
    Area covered
    Worldwide
    Description

    According to a survey conducted among healthcare leaders globally in February 2022, 27 percent of respondents reported that gaining more clarity on how data is being used within their hospital/healthcare facility would support them to fully utilize the data available. Additionally, 24 percent of respondents advised availability of data specialists to manage and analyze data would enable greater utilization.

  17. Supporting data and codes

    • figshare.com
    zip
    Updated Aug 26, 2024
    Cite
    Yang Liu (2024). Supporting data and codes [Dataset]. http://doi.org/10.6084/m9.figshare.25662030.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 26, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Yang Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the supporting data and codes for our paper titled "Analytically Articulating the Optimal Buffer Size for Urban Green Space Exposure Measures and Implications for Health Geography Studies".

    Data and codes availability statement: The participants of this study did not give written consent for their data to be shared publicly. So, due to the sensitive nature of the research, original data from the survey cannot be shared publicly. Alternatively, we provided the mocked survey data outside our study area to illustrate the procedure that can reproduce our results. The employed remote sensing data are commercial data that cannot be shared publicly. We can only provide a smaller section of our green space inventory that can efficiently illustrate our methods. Any usage of our green space data other than validating our methods in this paper needs permission from both Planet Labs, Inc. and the authors.

  18. Data from: Measurement with hexagonal prismatic configuration: an alternative to macromeasuring

    • scielo.figshare.com
    jpeg
    Updated Jun 11, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Natally Annunciato Siqueira; Podalyro Amaral de Souza (2023). Measurement with hexagonal prismatic configuration: an alternative to macromeasuring [Dataset]. http://doi.org/10.6084/m9.figshare.14283690.v1
    Explore at:
    jpegAvailable download formats
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    SciELO journals
    Authors
    Natally Annunciato Siqueira; Podalyro Amaral de Souza
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Flow measurements are essential for managing processes involving fluids. In water companies, measuring water inflows and outflows is essential for managing revenue and water losses. Water flow can be measured with diverse instruments based on different operating principles. Due to the high acquisition cost of the usual water flow meters, as well as the difficulties of installation and maintenance, macro-metering in Brazil is deficient. The Pitot Cole tube can determine flow in pipes from a pressure differential. Its main advantages are low cost and easy installation, which can be carried out with the pipe under load. However, its parts are fragile and complex. This paper proposes the construction of a simple and robust instrument to measure water flow. The prototype is built with pressure taps installed on opposite faces of a hexagonal prism and is then evaluated for its effectiveness in water flow measurement. The results were promising, given the stability of the calibration coefficient obtained.
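    As a rough illustration of the measuring principle the abstract relies on (a velocity recovered from a differential pressure and then scaled by the pipe cross-section and a calibration coefficient), here is a minimal sketch. The density, diameter and coefficient values are placeholders for illustration only, not values from the paper, and the paper's hexagonal-prism prototype would have its own experimentally determined coefficient.

```python
import math

RHO_WATER = 998.0  # kg/m^3, water at roughly 20 degC (assumed)

def flow_from_differential_pressure(delta_p_pa: float,
                                    pipe_diameter_m: float,
                                    k: float = 1.0) -> float:
    """Estimate volumetric flow (m^3/s) from a differential pressure reading.

    Pitot-style principle: velocity v = k * sqrt(2 * dp / rho), with k a
    calibration coefficient determined experimentally for the instrument.
    The flow is then Q = v * A, where A is the pipe's cross-sectional area.
    """
    velocity = k * math.sqrt(2.0 * delta_p_pa / RHO_WATER)
    area = math.pi * (pipe_diameter_m / 2.0) ** 2
    return velocity * area

# Hypothetical reading: 500 Pa differential across a 300 mm pipe, k = 1
print(flow_from_differential_pressure(500.0, 0.300))  # ~0.071 m^3/s
```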

  19. SES Water Reservoir Levels

    • hub.arcgis.com
    • streamwaterdata.co.uk
    • +1more
    Updated Apr 26, 2024
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    dpararajasingam_ses (2024). SES Water Reservoir Levels [Dataset]. https://hub.arcgis.com/datasets/f8699b39279b4def88ef3eff6ebdc5ab
    Explore at:
    Dataset updated
    Apr 26, 2024
    Dataset authored and provided by
    dpararajasingam_ses
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Overview

    This dataset provides measurements of raw water storage levels in reservoirs crucial for public water supply. The reservoirs included in this dataset are natural bodies of water that have been dammed to store untreated water.

    Key Definitions

    Aggregation: The process of summarizing or grouping data to obtain a single or reduced set of information, often for analysis or reporting purposes.
    Capacity: The maximum volume of water a reservoir can hold above the natural level of the surrounding land, with thresholds for regulation at 10,000 cubic metres in England, Wales and Northern Ireland and a modified threshold of 25,000 cubic metres in Scotland pending full implementation of the Reservoirs (Scotland) Act 2011.
    Current Level: The present volume of water held in a reservoir, measured above a set baseline, crucial for safety and regulatory compliance.
    Current Percentage: The current water volume in a reservoir as a percentage of its total capacity, indicating how full the reservoir is at any given time.
    Dataset: A structured and organized collection of related elements, often stored digitally, used for analysis and interpretation in various fields.
    Granularity: A measure of the level of detail in a data structure. In time-series data, for example, the granularity of measurement might be based on intervals of years, months, weeks, days, or hours.
    ID: Abbreviation for Identification; the unique identifier assigned to each asset for the purposes of tracking, management, and maintenance.
    Open Data Triage: The process carried out by a Data Custodian to determine whether there is any evidence of sensitivities associated with Data Assets, their associated Metadata and the Software Scripts used to process Data Assets if they are used as Open Data.
    Reservoir: A large natural lake used for storing raw water intended for human consumption. Its volume is measurable, allowing for careful management and monitoring to meet demand for clean, safe water.
    Reservoir Type: The classification of a reservoir based on the method of construction, the purpose it serves or the source of water it stores.
    Schema: The structure for organizing and handling data within a dataset, defining the attributes, their data types, and the relationships between different entities. It acts as a framework that ensures data integrity and consistency by specifying permissible data types and constraints for each attribute.
    Units: Standard measurements used to quantify and compare different physical quantities.

    Data History

    Data Origin: Reservoir level data is sourced from water companies, who may also publish this information on their websites, and from government publications such as the Water Situation Reports provided by the UK government.

    Data Triage Considerations

    Identification of Critical Infrastructure: Special attention is given to safeguarding data on essential reservoirs, in line with the National Infrastructure Act, to mitigate security risks and ensure the resilience of public water systems. Currently, only reservoirs with a location already available in the public domain are included in this dataset.
    Commercial Risks and Anonymisation: The risk of personal information exposure is minimal to none, since the data concerns reservoir levels, which are not linked to individuals or households.
    Data Freshness: It is not currently possible to make the dataset live. Some companies have digital monitoring, while others measure reservoir levels analogically. This dataset may not be used to determine reservoir level in place of visual checks where these are advised.
    Data Triage Review Frequency: Annually, unless otherwise requested.

    Data Specifications

    Data specifications define what is included and excluded in the dataset to maintain clarity and focus. For this dataset:
    Each dataset covers measurements taken by the publisher.
    The dataset is published periodically, in line with the publisher's capabilities.
    Historical datasets may be provided for comparison but are not required.
    The location data provided may be a point from anywhere within the body of water or on its boundary.
    Reservoirs included in the dataset must be open bodies of water used to store raw/untreated water, filled naturally, measurable, and contain water that may go on to be used for public supply.

    Context

    This dataset must not be used to determine the implementation or removal of low-supply or high-supply measures such as hosepipe bans. Please await guidance from your water supplier regarding any changes required to your usage of water. Particularly high or low reservoir levels may be considered normal or as expected given the season or recent weather. This dataset does not remove the requirement for visual checks on reservoir level that are in place for caving/potholing safety. Some water companies calculate the capacity of reservoirs differently than others: capacity can mean the usable volume of the reservoir or the overall volume that can be held in the reservoir, including water below the water table.

    Data Publish Frequency: Annually
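    The Current Percentage field defined above is a simple proportion of current volume to capacity. A minimal sketch follows, with field names and example values assumed for illustration; note the caveat above that publishers define capacity differently, so percentages are not strictly comparable across companies.

```python
def current_percentage(current_level_m3: float, capacity_m3: float) -> float:
    """Current reservoir volume as a percentage of its capacity.

    Caveat from the dataset notes: publishers may define capacity as either
    the usable volume or the total volume, so percentages are not strictly
    comparable across companies.
    """
    if capacity_m3 <= 0:
        raise ValueError("capacity must be positive")
    return 100.0 * current_level_m3 / capacity_m3

# Hypothetical example: 18,500 m3 held in a reservoir with 25,000 m3 capacity
print(current_percentage(18_500, 25_000))  # 74.0
```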

  20. Data from: A novel nonparametric measure of explained variation for survival data with an easy graphical interpretation

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    csv, txt
    Updated May 30, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Verena Weiß; Matthias Schmidt; Martin Hellmich; Verena Weiß; Matthias Schmidt; Martin Hellmich (2022). Data from: A novel nonparametric measure of explained variation for survival data with an easy graphical interpretation [Dataset]. http://doi.org/10.5061/dryad.5c6bq
    Explore at:
    txt, csvAvailable download formats
    Dataset updated
    May 30, 2022
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Verena Weiß; Matthias Schmidt; Martin Hellmich; Verena Weiß; Matthias Schmidt; Martin Hellmich
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Introduction: For survival data, the coefficient of determination cannot be used to describe how well a model fits the data. Therefore, several measures of explained variation for survival data have been proposed in recent years.

    Methods: We analyse an existing measure of explained variation with regard to minimisation aspects and demonstrate that these are not fulfilled for the measure.

    Results: In analogy to the least squares method from linear regression analysis, we develop a novel measure for categorical covariates which is based only on the Kaplan-Meier estimator. Hence, the novel measure is a completely nonparametric measure with an easy graphical interpretation. For the novel measure, different weighting possibilities are available and a statistical test of significance can be performed. Eventually, we apply the novel measure and further measures of explained variation to a dataset comprising persons with a histopathological papillary thyroid carcinoma.

    Conclusion: We propose a novel measure of explained variation with a comprehensible derivation as well as a graphical interpretation, which may be used in further analyses with survival data.
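    The measure itself is defined in the paper and is not reproduced here. As a hedged illustration of the nonparametric building blocks it rests on, the sketch below fits a Kaplan-Meier curve per level of a categorical covariate, plus a pooled curve, using the lifelines library as an assumed tool; comparing group-wise curves against the pooled curve is the kind of ingredient a KM-based measure of explained variation would work with.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def groupwise_kaplan_meier(df: pd.DataFrame, time_col: str,
                           event_col: str, group_col: str):
    """Fit one Kaplan-Meier curve per level of a categorical covariate,
    plus a pooled curve for the whole sample.

    These group-wise curves are the nonparametric ingredients a KM-based
    measure of explained variation would compare against the pooled curve;
    the paper's actual measure is NOT implemented here.
    """
    group_fits = {}
    for level, sub in df.groupby(group_col):
        kmf = KaplanMeierFitter()
        kmf.fit(sub[time_col], event_observed=sub[event_col], label=str(level))
        group_fits[level] = kmf

    pooled = KaplanMeierFitter()
    pooled.fit(df[time_col], event_observed=df[event_col], label="pooled")
    return group_fits, pooled
```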
