100+ datasets found
  1. Performance Measure Definition: Average Call Processing Interval

    • catalog.data.gov
    Updated Jun 25, 2024
    Cite
    data.austintexas.gov (2024). Performance Measure Definition: Average Call Processing Interval [Dataset]. https://catalog.data.gov/dataset/performance-measure-definition-average-call-processing-interval
    Dataset provided by
    data.austintexas.gov
    Description

    Performance Measure Definition: Average Call Processing Interval

  2. Performance Measure Definition: STEMI Alert Call-to-Door Interval

    • catalog.data.gov
    • s.cnmilf.com
    Updated Jun 25, 2024
    Cite
    data.austintexas.gov (2024). Performance Measure Definition: STEMI Alert Call-to-Door Interval [Dataset]. https://catalog.data.gov/dataset/performance-measure-definition-stemi-alert-call-to-door-interval
    Dataset provided by
    data.austintexas.gov
    Description

    Performance Measure Definition: STEMI Alert Call-to-Door Interval

  3. Fire Return Interval Departure (Frid) - Mean Condition Class

    • wifire-data.sdsc.edu
    geotiff, wcs, wms
    Updated May 6, 2025
    Cite
    California Wildfire & Forest Resilience Task Force (2025). Fire Return Interval Departure (Frid) - Mean Condition Class [Dataset]. https://wifire-data.sdsc.edu/dataset/clm-fire-return-interval-departure-frid-mean-condition-class
    Dataset provided by
    California Wildfire & Forest Resilience Task Force
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This metric uses mean percent FRID (PFRID) to measure the extent to which contemporary fires (i.e., since 1908) are burning at frequencies similar to those that occurred prior to Euro-American settlement, with the mean reference FRI as the basis for comparison; for this layer the result is binned into condition classes. Mean PFRID is a metric of fire return interval departure (FRID) and expresses the departure of the current FRI from the reference mean FRI in percent.

  4. Group data (mean and 95% confidence interval) for pain and function outcome...

    • datasetcatalog.nlm.nih.gov
    Updated Jun 30, 2017
    Cite
    Hodges, Paul W.; Bennell, Kim L.; Hinman, Rana S.; Chang, Wei-Ju; Young, Carolyn L.; Buscemi, Valentina; Schabrun, Siobhan M.; Liston, Matthew B. (2017). Group data (mean and 95% confidence interval) for pain and function outcome measures. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001780265
    Authors
    Hodges, Paul W.; Bennell, Kim L.; Hinman, Rana S.; Chang, Wei-Ju; Young, Carolyn L.; Buscemi, Valentina; Schabrun, Siobhan M.; Liston, Matthew B.
    Description

    Group data (mean and 95% confidence interval) for pain and function outcome measures.

  5. (Table 3 core) Interval-mean bedding directions based on core-scan data of...

    • doi.pangaea.de
    • search.dataone.org
    html, tsv
    Updated 2001
    Cite
    Richard D Jarrard; Christian J Bücker; Terry Wilson; Timothy S Paulsen (2001). (Table 3 core) Interval-mean bedding directions based on core-scan data of core CRP-3 [Dataset]. http://doi.org/10.1594/PANGAEA.191112
    Dataset provided by
    PANGAEA
    Authors
    Richard D Jarrard; Christian J Bücker; Terry Wilson; Timothy S Paulsen
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Time period covered
    Oct 9, 1999 - Nov 19, 1999
    Variables measured
    Azimuth, Bed dip, Precision, Confidence, Depth, top/min, Depth, bottom/max, Lithology/composition/facies
    Description

    This dataset is about: (Table 3 core) Interval-mean bedding directions based on core-scan data of core CRP-3. Please consult parent dataset @ https://doi.org/10.1594/PANGAEA.485006 for more information.

  6. Performance Measure Definition: Trauma Alert Call-to-Door Interval

    • data.wu.ac.at
    • s.cnmilf.com
    Updated May 30, 2017
    Cite
    City of Austin (2017). Performance Measure Definition: Trauma Alert Call-to-Door Interval [Dataset]. https://data.wu.ac.at/schema/data_gov/ZDkxZmU1NWItNjg2Zi00ZmQ3LWIxYmQtNDUyZDM4YzJmM2Ux
    Dataset provided by
    City of Austin
    Description

    Performance Measure Definition: Trauma Alert Call-to-Door Interval

  7. Fire Return Interval Departure (Frid) - Mean Percent - Since 1970

    • wifire-data.sdsc.edu
    geotiff, wcs, wms
    Updated May 6, 2025
    Cite
    California Wildfire & Forest Resilience Task Force (2025). Fire Return Interval Departure (Frid) - Mean Percent - Since 1970 [Dataset]. https://wifire-data.sdsc.edu/dataset/clm-fire-return-interval-departure-frid-mean-percent-since-1970
    Dataset provided by
    California Wildfire & Forest Resilience Task Force
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Percent FRID (PFRID) quantifies, in percent, the extent to which recent fires (i.e., since 1970) are burning at frequencies similar to those that occurred prior to Euro-American settlement, with the mean reference FRI as the basis for comparison. Mean PFRID measures the departure of the current FRI from the reference mean FRI in percent.
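    For orientation, percent departure can be expressed as the relative difference between a current fire return interval and the reference mean FRI. A minimal sketch of that idea (one common convention, not necessarily the Task Force's exact formula for this layer):

        def mean_pfrid(current_fri: float, reference_mean_fri: float) -> float:
            """Percent departure of the current fire return interval (FRI) from the
            reference mean FRI. Positive values indicate less frequent burning than the
            reference; negative values indicate more frequent burning. This is one common
            convention, not necessarily the exact formula behind this layer."""
            return 100.0 * (current_fri - reference_mean_fri) / current_fri

        # Example: reference mean FRI of 15 years, but only one fire in the last 60 years
        print(mean_pfrid(current_fri=60.0, reference_mean_fri=15.0))  # 75.0 (under-burning)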

  8. HIPPO Pressure-Weighted Mean Total, 10-km, and 100-m Interval Column...

    • data.ucar.edu
    • ckanprod.data-commons.k8s.ucar.edu
    archive
    Updated Oct 7, 2025
    Cite
    Andrew Watt; Anne E. Perring; Benjamin R. Miller; Bin Xiang; Bradley Hall; Britton B. Stephens; Bruce C. Daube; Christopher A. Pickett-Heaps; Colm Sweeney; Dale Hurst; Daniel J. Jacob; David C. Rogers; David Nance; David W. Fahey; Elliot Atlas; Eric A. Kort; Eric A. Ray; Eric J. Hintsa; Fred Moore; Geoff S. Dutton; Greg Santoni; Huiqun Wang; J. Ryan Spackman; James W. Elkins; Jasna V. Pittman; Jenny A. Fisher; Jonathan Bent; Joshua P. Schwarz; Julie Haggerty; Karen H. Rosenlof; Kevin J. Wecht; Laurel A. Watts; Mark Zondlo; Michael J. Mahoney; Minghui Diao; Pavel Romashkin; Qiaoqiao Wang; Ralph F. Keeling; Richard Lueb; Rodrigo Jimenez-Pizarro; Roger Hendershot; Roisin Commane; Ru-Shan Gao; Samuel J. Oltmans; Stephen A. Montzka; Stephen R. Shertz; Steven C. Wofsy; Stuart Beaton; Sunyoung Park; Teresa Campos; William A. Cooper (2025). HIPPO Pressure-Weighted Mean Total, 10-km, and 100-m Interval Column Concentrations [Dataset]. http://doi.org/10.3334/CDIAC/HIPPO_011
    Authors
    Andrew Watt; Anne E. Perring; Benjamin R. Miller; Bin Xiang; Bradley Hall; Britton B. Stephens; Bruce C. Daube; Christopher A. Pickett-Heaps; Colm Sweeney; Dale Hurst; Daniel J. Jacob; David C. Rogers; David Nance; David W. Fahey; Elliot Atlas; Eric A. Kort; Eric A. Ray; Eric J. Hintsa; Fred Moore; Geoff S. Dutton; Greg Santoni; Huiqun Wang; J. Ryan Spackman; James W. Elkins; Jasna V. Pittman; Jenny A. Fisher; Jonathan Bent; Joshua P. Schwarz; Julie Haggerty; Karen H. Rosenlof; Kevin J. Wecht; Laurel A. Watts; Mark Zondlo; Michael J. Mahoney; Minghui Diao; Pavel Romashkin; Qiaoqiao Wang; Ralph F. Keeling; Richard Lueb; Rodrigo Jimenez-Pizarro; Roger Hendershot; Roisin Commane; Ru-Shan Gao; Samuel J. Oltmans; Stephen A. Montzka; Stephen R. Shertz; Steven C. Wofsy; Stuart Beaton; Sunyoung Park; Teresa Campos; William A. Cooper
    Time period covered
    Jan 8, 2009 - Sep 8, 2011
    Description

    This dataset contains the total column and vertical profile data for all Missions, 1 through 5, of the HIAPER Pole-to-Pole Observations (HIPPO) study of carbon cycle and greenhouse gases. The pressure-weighted mean column concentrations of parameters reported in this data set are estimates of the quantities that would be observed from a total column instrument at the top of each profile, i.e., from an airplane looking down or from a satellite. The Missions took place from 08 January 2009 to 08 September 2011. There are five space-delimited ASCII files included with this data set that have been compressed into one *.zip file for convenient download. Please refer to the readme for more information. The EOL Version 1.0 data set was created in 2012 and previously served as R. 20121129 by ORNL.
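    For orientation, a pressure-weighted column mean is a weighted average of per-layer values, with each layer weighted by its pressure thickness. A minimal sketch with hypothetical variable names (this is not the HIPPO processing code):

        import numpy as np

        def pressure_weighted_mean(values, pressure_edges):
            """Column mean of per-layer values, weighting each layer by its pressure
            thickness |dp|. `pressure_edges` holds the layer boundaries (length =
            number of layers + 1), e.g. in hPa."""
            dp = np.abs(np.diff(pressure_edges))        # pressure thickness per layer
            return float(np.sum(values * dp) / np.sum(dp))

        # Example with made-up numbers: three layers between 1000 hPa and 700 hPa
        co2_ppm = np.array([400.0, 398.5, 397.0])
        p_edges = np.array([1000.0, 900.0, 800.0, 700.0])
        print(pressure_weighted_mean(co2_ppm, p_edges))  # ~398.5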

  9. (Table 3 BHTV) Interval-mean bedding directions based on borehole televiewer...

    • doi.pangaea.de
    • search.dataone.org
    html, tsv
    Updated 2001
    Cite
    Richard D Jarrard; Christian J Bücker; Terry Wilson; Timothy S Paulsen (2001). (Table 3 BHTV) Interval-mean bedding directions based on borehole televiewer data of core CRP-3 [Dataset]. http://doi.org/10.1594/PANGAEA.191114
    Dataset provided by
    PANGAEA
    Authors
    Richard D Jarrard; Christian J Bücker; Terry Wilson; Timothy S Paulsen
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Time period covered
    Oct 9, 1999 - Nov 19, 1999
    Variables measured
    Azimuth, Bed dip, Precision, Confidence, Depth, top/min, Depth, bottom/max, Lithology/composition/facies
    Description

    This dataset is about: (Table 3 BHTV) Interval-mean bedding directions based on borehole televiewer data of core CRP-3. Please consult parent dataset @ https://doi.org/10.1594/PANGAEA.485006 for more information.

  10. Performance Measure Definition: Trauma Alert Scene Interval

    • catalog.data.gov
    • s.cnmilf.com
    Updated Jun 25, 2024
    Cite
    data.austintexas.gov (2024). Performance Measure Definition: Trauma Alert Scene Interval [Dataset]. https://catalog.data.gov/dataset/performance-measure-definition-trauma-alert-scene-interval
    Dataset provided by
    data.austintexas.gov
    Description

    Performance Measure Definition: Trauma Alert Scene Interval

  11. 2010-2014 American Community Survey: B16003 (ACS 5-Year Estimates Detailed Tables)

    • data.census.gov
    Cite
    United States Census Bureau. 2010-2014 American Community Survey: B16003 (ACS 5-Year Estimates Detailed Tables) [Dataset]. https://data.census.gov/table/ACSDT5Y2014.B16003
    Dataset provided by
    United States Census Bureau (http://census.gov/)
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Supporting documentation on code lists, subject definitions, data accuracy, and statistical testing can be found on the American Community Survey website in the Data and Documentation section. Sample size and data quality measures (including coverage rates, allocation rates, and response rates) can be found on the American Community Survey website in the Methodology section.

    Although the American Community Survey (ACS) produces population, demographic and housing unit estimates, it is the Census Bureau's Population Estimates Program that produces and disseminates the official estimates of the population for the nation, states, counties, cities and towns and estimates of housing units for states and counties.

    Explanation of symbols:
    • '**' in the margin of error column: either no sample observations or too few sample observations were available to compute a standard error, and thus the margin of error. A statistical test is not appropriate.
    • '-' in the estimate column: either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest or upper interval of an open-ended distribution.
    • '-' following a median estimate: the median falls in the lowest interval of an open-ended distribution.
    • '+' following a median estimate: the median falls in the upper interval of an open-ended distribution.
    • '***' in the margin of error column: the median falls in the lowest or upper interval of an open-ended distribution. A statistical test is not appropriate.
    • '*****' in the margin of error column: the estimate is controlled. A statistical test for sampling variability is not appropriate.
    • 'N' in the estimate and margin of error columns: data for this geographic area cannot be displayed because the number of sample cases is too small.
    • '(X)': the estimate is not applicable or not available.

    Estimates of urban and rural population, housing units, and characteristics reflect boundaries of urban areas defined based on Census 2010 data. As a result, data for urban and rural areas from the ACS do not necessarily reflect the results of ongoing urbanization.

    While the 2010-2014 American Community Survey (ACS) data generally reflect the February 2013 Office of Management and Budget (OMB) definitions of metropolitan and micropolitan statistical areas, in certain instances the names, codes, and boundaries of the principal cities shown in ACS tables may differ from the OMB definitions due to differences in the effective dates of the geographic entities.

    Methodological changes to data collection in 2013 may have affected language data for 2013. Users should be aware of these changes when using multi-year data containing data from 2013.

    A "limited English speaking household" is one in which no member 14 years old and over (1) speaks only English or (2) speaks a non-English language and speaks English "very well." In other words, all members 14 years old and over have at least some difficulty with English. By definition, English-only households cannot belong to this group. Previous Census Bureau data products have referred to these households as "linguistically isolated" and "Households in which no one 14 and over speaks English only or speaks a language other than English at home and speaks English 'very well'." This table is directly comparable to tables from earlier years that used these labels.

    Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted roughly as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

    Source: U.S. Census Bureau, 2010-2014 American Community Survey 5-Year Estimates
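    Since ACS margins of error are published at the 90 percent level, converting an estimate and its MOE into confidence bounds (or an approximate standard error) is a one-line calculation. A minimal sketch, assuming the standard ACS convention MOE = 1.645 × SE:

        def acs_interval(estimate: float, moe_90: float):
            """Return (lower, upper, standard_error) for an ACS estimate published with a
            90 percent margin of error, assuming MOE = 1.645 * SE."""
            se = moe_90 / 1.645
            return estimate - moe_90, estimate + moe_90, se

        # Example with made-up numbers: estimate of 1,200 households, MOE of +/- 150
        lower, upper, se = acs_interval(1200, 150)
        print(lower, upper, round(se, 1))  # 1050 1350 91.2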

  12. Regions with sea surface temperatures, as defined by 95% confidence...

    • data.apps.fao.org
    Updated Nov 11, 2023
    Cite
    (2023). Regions with sea surface temperatures, as defined by 95% confidence intervals, between 22 and 32 °C over entire year [Dataset]. https://data.apps.fao.org/map/catalog/components/search?keyword=Sea%20surface%20temperature
    Description

    This dataset identifies all regions in which the full 95% confidence interval is between 22 and 32 °C for all 12 months. The sea surface temperature data includes the mean sea surface temperature per month, the standard deviation and the number of observations used to calculate the mean. Based on these values, the 95% upper and lower confidence levels about the mean for each month have been generated.
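    The description implies that the monthly bounds are derived from the per-month mean, standard deviation, and observation count. A minimal sketch of that calculation, assuming a normal approximation with 1.96 standard errors (the dataset's exact method may differ):

        import math

        def monthly_ci95(mean_sst: float, std_dev: float, n_obs: int):
            """Approximate 95% confidence interval for a monthly mean SST,
            computed as mean +/- 1.96 * (sd / sqrt(n))."""
            half_width = 1.96 * std_dev / math.sqrt(n_obs)
            return mean_sst - half_width, mean_sst + half_width

        # A location qualifies for the 22-32 °C mask only if every month's full CI
        # lies inside that window (illustrative check with made-up numbers):
        months = [(27.4, 1.1, 120), (26.8, 0.9, 118), (28.1, 1.3, 122)]  # (mean, sd, n)
        inside = all(22.0 <= lo and hi <= 32.0
                     for lo, hi in (monthly_ci95(m, s, n) for m, s, n in months))
        print(inside)  # True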

  13. Data from: Checking the Cox Proportional Hazards Model with...

    • tandf.figshare.com
    zip
    Updated Aug 5, 2025
    Cite
    Yangjianchen Xu; Donglin Zeng; D. Y. Lin (2025). Checking the Cox Proportional Hazards Model with Interval-Censored Data [Dataset]. http://doi.org/10.6084/m9.figshare.29473601.v1
    Dataset provided by
    Taylor & Francis (https://taylorandfrancis.com/)
    Authors
    Yangjianchen Xu; Donglin Zeng; D. Y. Lin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This article presents a general framework for checking the adequacy of the Cox proportional hazards model with interval-censored data, which arise when the event of interest is known only to occur over a random time interval. Specifically, we construct certain stochastic processes that are informative about various aspects of the model, that is, the functional forms of covariates, the exponential link function and the proportional hazards assumption. We establish their weak convergence to zero-mean Gaussian processes under the assumed model through empirical process theory. We then approximate the limiting distributions by Monte Carlo simulation and develop graphical and numerical procedures to check model assumptions and improve goodness of fit. We evaluate the performance of the proposed methods through extensive simulation studies and provide an application to the Atherosclerosis Risk in Communities Study. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.

  14. Condition Data with Random Recording Time

    • kaggle.com
    zip
    Updated Jun 10, 2022
    Cite
    Prognostics @ HSE (2022). Condition Data with Random Recording Time [Dataset]. https://www.kaggle.com/datasets/prognosticshse/condition-data-with-random-recording-time/data
    Available download formats: zip (1167682 bytes)
    Authors
    Prognostics @ HSE
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context: This data set originates from a practice-relevant degradation process, which is representative of Prognostics and Health Management (PHM) applications. The observed degradation process is the clogging of filters when separating solid particles from gas. A test bench is used for this purpose, which performs automated life testing of filter media by loading them. For testing, dust complying with ISO standard 12103-1 and with a known particle size distribution is employed. The employed filter media is made of randomly oriented non-woven fibre material. Further data sets are generated for various practice-relevant data situations which do not correspond to the ideal conditions of full data coverage. These data sets are uploaded to Kaggle by the user "Prognostics @ HSE" in a continuous process. In order to avoid carryover between two data sets, a different configuration of the filter tests is used for each uploaded practice-relevant data situation, for example by selecting a different filter media.

    Detailed specification: For more information about the general operation and the components used, see the provided description file Random Recording Condition Data Data Set.pdf

    Given data situation: In order to implement a predictive maintenance policy, knowledge about the time of failure respectively about the remaining useful life (RUL) of the technical system is necessary. The time of failure or the RUL can be predicted on the basis of condition data that indicate the damage progression of a technical system over time. However, the collection of condition data in typical industrial PHM applications is often only possible in an incomplete manner. An example is the collection of data during defined test cycles with specific loads, carried at intervals. For instance, this approach is often used with machining centers, where test cycles are only carried out between finished machining jobs or work shifts. Due to different work pieces, the machining time varies and the test cycle with the recording of condition data is not performed equidistantly. This results in a data characteristic that is comparable to a random sample of continuously recorded condition data. Another example that may result in such a data characteristic comes from the effort to reduce data volumes when recording condition data. Attempts can be made to keep the amount of data with unchanged damage as small as possible. One possible measure is not to transmit and store the continuous sensor readings, but rather sections of them, which also leads to gaps in the data available for prognosis. In the present data set, the life cycle of filters or rather their condition data, represented by the differential pressure, is considered. Failure of the filter occurs when the differential pressure across the filter exceeds 600 Pa. The time until a filter failure occurs depends especially on the amount of dust supplied per time, which is constant within a run-to-failure cycle. The previously explained data characteristics are addressed by means of corresponding training and test data. The training data is structured as follows: A run-to-failure cycle contains n batches of data. The number n varies between the cycles and depends on the duration of the batches and the time interval between the individual batches. The duration and time interval of the batches are random variables. A data batch includes the sensor readings of differential pressure and flow rate for the filter, the start and end time of the batch, and RUL information related to the end time of the batch. The sensor readings of the differential pressure and flow rate are recorded at a constant sampling rate. Figure 6 shows an illustrative run-to-failure cycle with multiple batches. The test data are randomly right-censored. They are also made of batches with a random duration and time interval between the batches. For each batch contained, the start and end time are given, as well as the sensor readings within the batch. The RUL is not given for each batch but only for the last data point of the right-censored run-to-failure cycle.

    Task: The aim is to predict the RUL of the censored filter test cycles given in the test data. In order to predict the RUL, training and test data are given, consisting of 60 and 40 run-to-failure cycles, respectively. The test data contains randomly right-censored run-to-failure cycles and the respective RUL for the prediction task. The main challenge is to make the best use of the incompletely recorded training and test data to provide the most accurate prediction possible. Due to the detailed description of the setup and the various physical filter models described in the literature, it is possible to support the actual data-driven models by integrating physical knowledge or models in the sense of theory-guided data science or informed machine learning.
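    As a simple starting point for this task, one baseline is to regress the RUL on the last condition readings of each batch. A minimal sketch with hypothetical file and column names (the real CSV layout is documented in the dataset's PDF description):

        import pandas as pd
        from sklearn.linear_model import LinearRegression

        # Hypothetical layout: one row per batch, holding the differential pressure and
        # flow rate at the batch end plus, for training data, the RUL at that time.
        train = pd.read_csv("train_batches.csv")   # assumed columns: dp_end, flow_end, rul
        test = pd.read_csv("test_batches.csv")     # assumed columns: dp_end, flow_end

        features = ["dp_end", "flow_end"]
        model = LinearRegression().fit(train[features], train["rul"])

        # Predict the RUL of the right-censored test cycles from their last recorded batch.
        test["rul_pred"] = model.predict(test[features])
        print(test[["dp_end", "rul_pred"]].head())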

  15. 2022 American Community Survey: B01001B | Sex by Age (Black or African...

    • data.census.gov
    Cite
    ACS, 2022 American Community Survey: B01001B | Sex by Age (Black or African American Alone) (ACS 1-Year Estimates Detailed Tables) [Dataset]. https://data.census.gov/table/ACSDT1Y2022.B01001B?q=Race%20and%20Ethnicity
    Dataset provided by
    United States Census Bureau (http://census.gov/)
    Authors
    ACS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    2022
    Area covered
    United States
    Description

    Although the American Community Survey (ACS) produces population, demographic and housing unit estimates, the decennial census is the official source of population totals for April 1st of each decennial year. In between censuses, the Census Bureau's Population Estimates Program produces and disseminates the official estimates of the population for the nation, states, counties, cities, and towns and estimates of housing units for states and counties.

    Information about the American Community Survey (ACS) can be found on the ACS website. Supporting documentation including code lists, subject definitions, data accuracy, and statistical testing, and a full list of ACS tables and table shells (without estimates) can be found on the Technical Documentation section of the ACS website. Sample size and data quality measures (including coverage rates, allocation rates, and response rates) can be found on the American Community Survey website in the Methodology section.

    Source: U.S. Census Bureau, 2022 American Community Survey 1-Year Estimates.

    Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted roughly as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see ACS Technical Documentation). The effect of nonsampling error is not represented in these tables.

    The Hispanic origin and race codes were updated in 2020. For more information on the Hispanic origin and race code changes, please visit the American Community Survey Technical Documentation website.

    The 2022 American Community Survey (ACS) data generally reflect the March 2020 Office of Management and Budget (OMB) delineations of metropolitan and micropolitan statistical areas. In certain instances the names, codes, and boundaries of the principal cities shown in ACS tables may differ from the OMB delineations due to differences in the effective dates of the geographic entities.

    Estimates of urban and rural populations, housing units, and characteristics reflect boundaries of urban areas defined based on 2020 Census data. As a result, data for urban and rural areas from the ACS do not necessarily reflect the results of ongoing urbanization.

    Explanation of symbols:
    • '-' The estimate could not be computed because there were an insufficient number of sample observations. For a ratio of medians estimate, one or both of the median estimates falls in the lowest interval or highest interval of an open-ended distribution. For a 5-year median estimate, the margin of error associated with a median was larger than the median itself.
    • 'N' The estimate or margin of error cannot be displayed because there were an insufficient number of sample cases in the selected geographic area.
    • '(X)' The estimate or margin of error is not applicable or not available.
    • 'median-' The median falls in the lowest interval of an open-ended distribution (for example "2,500-").
    • 'median+' The median falls in the highest interval of an open-ended distribution (for example "250,000+").
    • '**' The margin of error could not be computed because there were an insufficient number of sample observations.
    • '***' The margin of error could not be computed because the median falls in the lowest interval or highest interval of an open-ended distribution.
    • '*****' A margin of error is not appropriate because the corresponding estimate is controlled to an independent population or housing estimate. Effectively, the corresponding estimate has no sampling error and the margin of error may be treated as zero.

  16. Performance Measure Definition: Stroke Alert Call-to-Door Interval

    • s.cnmilf.com
    • catalog.data.gov
    Updated Jun 25, 2024
    Cite
    data.austintexas.gov (2024). Performance Measure Definition: Stroke Alert Call-to-Door Interval [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/performance-measure-definition-stroke-alert-call-to-door-interval
    Dataset provided by
    data.austintexas.gov
    Description

    Performance Measure Definition: Stroke Alert Call-to-Door Interval

  17. Heart Rate Prediction

    • kaggle.com
    zip
    Updated Dec 10, 2020
    Cite
    Saurav Anand (2020). Heart Rate Prediction [Dataset]. https://www.kaggle.com/saurav9786/heart-rate-prediction
    Available download formats: zip (147294088 bytes)
    Authors
    Saurav Anand
    Description

    The data comprises various attributes taken from signals measured using ECG recorded for different individuals having different heart rates at the time the measurement was taken. These various features contribute to the heart rate at the given instant of time for the individual.

    You have been provided with a total of 7 CSV files:
    • time_domain_features_train.csv - all time domain features of heart rate for the training data
    • frequency_domain_features_train.csv - all frequency domain features of heart rate for the training data
    • heart_rate_non_linear_features_train.csv - all non-linear features of heart rate for the training data
    • time_domain_features_test.csv - all time domain features of heart rate for the testing data
    • frequency_domain_features_test.csv - all frequency domain features of heart rate for the testing data
    • heart_rate_non_linear_features_test.csv - all non-linear features of heart rate for the testing data
    • sample_submission.csv - the format in which you need to make submissions to the portal

    Following is the data dictionary for the features you will come across in the files mentioned:

    • MEAN_RR - Mean of RR intervals
    • MEDIAN_RR - Median of RR intervals
    • SDRR - Standard deviation of RR intervals
    • RMSSD - Root mean square of successive RR interval differences
    • SDSD - Standard deviation of successive RR interval differences
    • SDRR_RMSSD - Ratio of SDRR / RMSSD
    • pNN25 - Percentage of successive RR intervals that differ by more than 25 ms
    • pNN50 - Percentage of successive RR intervals that differ by more than 50 ms
    • KURT - Kurtosis of the distribution of successive RR intervals
    • SKEW - Skew of the distribution of successive RR intervals
    • MEAN_REL_RR - Mean of relative RR intervals
    • MEDIAN_REL_RR - Median of relative RR intervals
    • SDRR_REL_RR - Standard deviation of relative RR intervals
    • RMSSD_REL_RR - Root mean square of successive relative RR interval differences
    • SDSD_REL_RR - Standard deviation of successive relative RR interval differences
    • SDRR_RMSSD_REL_RR - Ratio of SDRR/RMSSD for relative RR interval differences
    • KURT_REL_RR - Kurtosis of the distribution of relative RR intervals
    • SKEW_REL_RR - Skewness of the distribution of relative RR intervals
    • uuid - Unique ID for each patient
    • VLF - Absolute power of the very low frequency band (0.0033 - 0.04 Hz)
    • VLF_PCT - Principal component transform of VLF
    • LF - Absolute power of the low frequency band (0.04 - 0.15 Hz)
    • LF_PCT - Principal component transform of LF
    • LF_NU - Absolute power of the low frequency band in normal units
    • HF - Absolute power of the high frequency band (0.15 - 0.4 Hz)
    • HF_PCT - Principal component transform of HF
    • HF_NU - Absolute power of the highest frequency band in normal units
    • TP - Total power of RR intervals
    • LF_HF - Ratio of LF to HF
    • HF_LF - Ratio of HF to LF
    • SD1 - Poincaré plot standard deviation perpendicular to the line of identity
    • SD2 - Poincaré plot standard deviation along the line of identity
    • Sampen - Sample entropy, which measures the regularity and complexity of a time series
    • higuci - Higuchi fractal dimension of heart rate
    • datasetId - ID of the whole dataset
    • condition - Condition of the patient at the time the data was recorded
    • HR - Heart rate of the patient at the time the data was recorded
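    For intuition about how the time-domain features relate to the raw signal, here is a minimal sketch computing a few of them (MEAN_RR, SDRR, RMSSD, pNN50) from a list of RR intervals in milliseconds:

        import numpy as np

        def time_domain_features(rr_ms):
            """Compute a few of the listed time-domain HRV features from RR intervals (ms)."""
            rr = np.asarray(rr_ms, dtype=float)
            diffs = np.diff(rr)                                  # successive RR differences
            return {
                "MEAN_RR": rr.mean(),
                "SDRR": rr.std(ddof=1),                          # standard deviation of RR intervals
                "RMSSD": np.sqrt(np.mean(diffs ** 2)),           # root mean square of successive differences
                "pNN50": 100.0 * np.mean(np.abs(diffs) > 50.0),  # % of successive differences > 50 ms
            }

        print(time_domain_features([651, 632, 618, 621, 661, 702, 655]))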

    Objective

    The objective is to build a regressor model which can predict the heart rate of an individual. This prediction can help to monitor stress levels of the individual.

    Reference: Great Learning

  18. Age dependence of SDNN and RMSSD and origin of matched PWI.

    • plos.figshare.com
    xls
    Updated Jun 4, 2023
    Cite
    Johannes Zschocke; Maria Kluge; Luise Pelikan; Antonia Graf; Martin Glos; Alexander Müller; Rafael Mikolajczyk; Ronny P. Bartsch; Thomas Penzel; Jan W. Kantelhardt (2023). Age dependence of SDNN and RMSSD and origin of matched PWI. [Dataset]. http://doi.org/10.1371/journal.pone.0226843.t003
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Johannes Zschocke; Maria Kluge; Luise Pelikan; Antonia Graf; Martin Glos; Alexander Müller; Rafael Mikolajczyk; Ronny P. Bartsch; Thomas Penzel; Jan W. Kantelhardt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    For three similarly sized age groups the fractions of matched PWI derived from each of the three accelerometer axes are reported, showing that the y-axis data are used for more than half of all PWI correctly associated with RRI. The mean values of the HRV parameters SDNN and RMSSD and the mean PTT as derived from matched RRI and PWI are shown for comparison with literature [35, 39]. Regarding SDNN and RMSSD, all differences between the young age group and the other two groups are highly significant (p ≤ 0.002), while no significant differences occur between the intermediate and the elderly group. The results indicate that the reduction in SDNN and RMSSD with age is similar in RRI (as derived from the ECG) and PWI (as reconstructed through wrist actigraphy). The differences between the mean PTT values of the young group and the other two groups are weak but still highly significant (p = 0.004 and p < 0.001, respectively), but are not significant between the intermediate and the elderly group.
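    A group comparison of this kind (for example, SDNN between a young and an elderly group) can be reproduced in outline with a non-parametric two-sample test. A minimal sketch with made-up values; the original analysis may have used a different test:

        from scipy.stats import mannwhitneyu

        # Hypothetical per-subject SDNN values (ms) for two age groups (made-up numbers).
        sdnn_young = [52.1, 61.4, 48.9, 70.2, 55.3, 63.8]
        sdnn_elderly = [31.5, 40.2, 28.7, 36.9, 33.1, 44.0]

        stat, p_value = mannwhitneyu(sdnn_young, sdnn_elderly, alternative="two-sided")
        print(f"U = {stat}, p = {p_value:.4f}")  # a small p-value suggests the groups differ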

  19. Code and Partial Test Data

    • figshare.com
    txt
    Updated Apr 26, 2025
    Cite
    yongqi Chen (2025). Code and Partial Test Data [Dataset]. http://doi.org/10.6084/m9.figshare.28853564.v1
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    yongqi Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Code and test data

  20. Data from: HRV-ACC: a dataset with R-R intervals and accelerometer data for...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 9, 2023
    Cite
    Kamil Książek; Wilhelm Masarczyk; Przemysław Głomb; Michał Romaszewski; Iga Stokłosa; Piotr Ścisło; Paweł Dębski; Robert Pudlo; Piotr Gorczyca; Magdalena Piegza (2023). HRV-ACC: a dataset with R-R intervals and accelerometer data for the diagnosis of psychotic disorders using a Polar H10 wearable sensor [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8171265
    Dataset provided by
    Department of Psychoprophylaxis, Faculty of Medical Sciences in Zabrze, Medical University of Silesia
    Institute of Psychology, Humanitas University in Sosnowiec
    Institute of Theoretical and Applied Informatics, Polish Academy of Sciences
    Psychiatric Department of the Multidisciplinary Hospital in Tarnowskie Góry
    Department of Psychiatry, Faculty of Medical Sciences in Zabrze, Medical University of Silesia
    Authors
    Kamil Książek; Wilhelm Masarczyk; Przemysław Głomb; Michał Romaszewski; Iga Stokłosa; Piotr Ścisło; Paweł Dębski; Robert Pudlo; Piotr Gorczyca; Magdalena Piegza
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT

    The issue of diagnosing psychotic diseases, including schizophrenia and bipolar disorder, in particular, the objectification of symptom severity assessment, is still a problem requiring the attention of researchers. Two measures that can be helpful in patient diagnosis are heart rate variability calculated based on electrocardiographic signal and accelerometer mobility data. The following dataset contains data from 30 psychiatric ward patients having schizophrenia or bipolar disorder and 30 healthy persons. The duration of the measurements for individuals was usually between 1.5 and 2 hours. R-R intervals necessary for heart rate variability calculation were collected simultaneously with accelerometer data using a wearable Polar H10 device. The Positive and Negative Syndrome Scale (PANSS) test was performed for each patient participating in the experiment, and its results were attached to the dataset. Furthermore, the code for loading and preprocessing data, as well as for statistical analysis, was included on the corresponding GitHub repository.

    BACKGROUND

    Heart rate variability (HRV), calculated based on electrocardiographic (ECG) recordings of R-R intervals stemming from the heart's electrical activity, may be used as a biomarker of mental illnesses, including schizophrenia and bipolar disorder (BD) [Benjamin et al]. The variations of R-R interval values correspond to the heart's autonomic regulation changes [Berntson et al, Stogios et al]. Moreover, the HRV measure reflects the activity of the sympathetic and parasympathetic parts of the autonomic nervous system (ANS) [Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, Matusik et al]. Patients with psychotic mental disorders show a tendency for a change in the centrally regulated ANS balance in the direction of less dynamic changes in the ANS activity in response to different environmental conditions [Stogios et al]. Larger sympathetic activity relative to the parasympathetic one leads to lower HRV, while, on the other hand, higher parasympathetic activity translates to higher HRV. This loss of dynamic response may be an indicator of mental health. Additional benefits may come from measuring the daily activity of patients using accelerometry. This may be used to register periods of physical activity and inactivity or withdrawal for further correlation with HRV values recorded at the same time.

    EXPERIMENTS

    In our experiment, the participants were 30 psychiatric ward patients with schizophrenia or BD and 30 healthy people. All measurements were performed using a Polar H10 wearable device. The sensor collects ECG recordings and accelerometer data and, additionally, performs detection of R-wave peaks. Participants had to wear the sensor for a given time, usually between 1.5 and 2 hours; the shortest recording was 70 minutes. During this time, starting a few minutes after the beginning of the measurement, participants could perform any activity. Participants were encouraged to undertake physical activity and, more specifically, to take a walk. Because the patients were in the medical ward, they were instructed at the beginning of the experiment to take a walk in the corridors. They were to repeat the walk 30 minutes and 1 hour after the first walk. The subsequent walks were to be slightly longer (about 3, 5 and 7 minutes, respectively). We did not repeat this instruction or supervise its execution during the experiment, in either the treatment or the control group. Seven persons from the control group did not receive this instruction; their measurements correspond to freely selected activities with rest periods, but at least three of them performed physical activities during this time. Nevertheless, at the start of the experiment, all participants were requested to rest in a sitting position for 5 minutes. Moreover, for each patient, the disease severity was assessed using the PANSS test and its scores are attached to the dataset.

    The data from sensors were collected using Polar Sensor Logger application [Happonen]. Such extracted measurements were then preprocessed and analyzed using the code prepared by the authors of the experiment. It is publicly available on the GitHub repository [Książek et al].

    Firstly, we performed a manual artifact detection to remove abnormal heartbeats due to non-sinus beats and technical issues of the device (e.g. temporary disconnections and inappropriate electrode readings). We also performed anomaly detection using Daubechies wavelet transform. Nevertheless, the dataset includes raw data, while a full code necessary to reproduce our anomaly detection approach is available in the repository. Optionally, it is also possible to perform cubic spline data interpolation. After that step, rolling windows of a particular size and time intervals between them are created. Then, a statistical analysis is prepared, e.g. mean HRV calculation using the RMSSD (Root Mean Square of Successive Differences) approach, measuring a relationship between mean HRV and PANSS scores, mobility coefficient calculation based on accelerometer data and verification of dependencies between HRV and mobility scores.
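    To make the windowed RMSSD step concrete, here is a minimal sketch that loads one participant's R-R file (structure described in the Data Description below) and computes RMSSD in 15-minute windows; the authors' full pipeline, including anomaly removal and overlapping windows, is in their GitHub repository:

        import numpy as np
        import pandas as pd

        def rmssd(rr_ms: pd.Series) -> float:
            """Root mean square of successive R-R interval differences (ms)."""
            diffs = np.diff(rr_ms.to_numpy(dtype=float))
            return float(np.sqrt(np.mean(diffs ** 2))) if len(diffs) else float("nan")

        # Column names follow the file structure shown in the Data Description below.
        rr = pd.read_csv("HRV_anonymized_data/treatment_1.csv")
        rr["Phone timestamp"] = pd.to_datetime(rr["Phone timestamp"], format="%H:%M:%S.%f")
        rr = rr.set_index("Phone timestamp")

        # Non-overlapping 15-minute windows for simplicity (the analysis described above
        # also uses overlapping windows spaced 1 minute apart).
        rmssd_per_window = rr["RR-interval [ms]"].resample("15min").apply(rmssd)
        print(rmssd_per_window.dropna())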

    DATA DESCRIPTION

    The structure of the dataset is as follows. One folder, called HRV_anonymized_data, contains values of R-R intervals together with timestamps for each experiment participant. The data was properly anonymized, i.e. the day of the measurement was removed to prevent person identification. Files concerning patients are named treatment_X.csv, where X is the number of the person, while files related to the healthy controls are named control_Y.csv, where Y is the identification number of the person. Furthermore, for visualization purposes, an image of the raw RR intervals for each participant is presented. Its name is raw_RR_{control,treatment}_N.png, where N is the number of the person from the control/treatment group. The collected data are raw, i.e. before the anomaly removal. The code for reproducing the anomaly detection stage and removing suspicious heartbeats is publicly available in the repository [Książek et al]. The structure of consecutive files collecting R-R intervals is as follows:

        Phone timestamp      RR-interval [ms]
        12:43:26.538000      651
        12:43:27.189000      632
        12:43:27.821000      618
        12:43:28.439000      621
        12:43:29.060000      661
        ...                  ...

    The first column contains the timestamp for which the distance between two consecutive R peaks was registered. The corresponding R-R interval is presented in the second column of the file and is expressed in milliseconds.
    The second folder, called accelerometer_anonymized_data contains values of accelerometer data collected at the same time as R-R intervals. The naming convention is similar to that of the R-R interval data: treatment_X.csv and control_X.csv represent the data coming from the persons from the treatment and control group, respectively, while X is the identification number of the selected participant. The numbers are exactly the same as for R-R intervals. The structure of the files with accelerometer recordings is as follows:

        Phone timestamp      X [mg]   Y [mg]   Z [mg]
        13:00:17.196000      -961     -23      182
        13:00:17.205000      -965     -21      181
        13:00:17.215000      -966     -22      187
        13:00:17.225000      -967     -26      193
        13:00:17.235000      -965     -27      191
        ...                  ...      ...      ...

    The first column contains a timestamp, while the next three columns correspond to the currently registered acceleration in three axes: X, Y and Z, in milli-g unit.

    We also attached a file with the PANSS test scores (PANSS.csv) for all patients participating in the measurement. The structure of this file is as follows:

        no_of_person   PANSS_P   PANSS_N   PANSS_G   PANSS_total
        1              8         13        22        43
        2              11        7         18        36
        3              14        30        44        88
        4              18        13        27        58
        ...            ...       ...       ...       ...

    The first column contains the identification number of the patient, while the following columns contain the PANSS scores for positive, negative and general symptoms, respectively, and the total PANSS score.

    USAGE NOTES

    All the files necessary to run the HRV and/or accelerometer data analysis are available on the GitHub repository [Książek et al]. HRV data loading, preprocessing (i.e. anomaly detection and removal), as well as the calculation of mean HRV values in terms of the RMSSD, is performed in the main.py file. Also, Pearson's correlation coefficients between HRV values and PANSS scores and the statistical tests (Levene's and Mann-Whitney U tests) comparing the treatment and control groups are computed. By default, a sensitivity analysis is made, i.e. running the full pipeline for different settings of the window size for which the HRV is calculated and various time intervals between consecutive windows. Preparing the heatmaps of correlation coefficients and corresponding p-values can be done by running the utils_advanced_plots.py file after performing the sensitivity analysis. Furthermore, a detailed analysis for the one selected set of hyperparameters may be prepared (by setting sensitivity_analysis = False), i.e. for 15-minute window sizes, 1-minute time intervals between consecutive windows and without data interpolation method. Also, patients taking quetiapine may be excluded from further calculations by setting exclude_quetiapine = True because this medicine can have a strong impact on HRV [Hattori et al].

    The accelerometer data processing may be performed using the utils_accelerometer.py file. In this case, accelerometer recordings are downsampled to ensure the same timestamps as for R-R intervals and, for each participant, the mobility coefficient is calculated. Then, a correlation
