74 datasets found
  1. Data from: A method for calculating BMI z-scores and percentiles above the...

    • tandf.figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated Jun 5, 2023
    Cite
    Rong Wei; Cynthia L. Ogden; Van L. Parsons; David S. Freedman; Craig M. Hales (2023). A method for calculating BMI z-scores and percentiles above the 95th percentile of the CDC growth charts [Dataset]. http://doi.org/10.6084/m9.figshare.12932858.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Rong Wei; Cynthia L. Ogden; Van L. Parsons; David S. Freedman; Craig M. Hales
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The 2000 CDC growth charts are based on national data collected between 1963 and 1994 and include a set of selected percentiles between the 3rd and 97th and LMS parameters that can be used to obtain other percentiles and associated z-scores. Obesity is defined as a sex- and age-specific body mass index (BMI) at or above the 95th percentile. Extrapolating beyond the 97th percentile is not recommended and leads to compressed z-score values. This study attempts to overcome this limitation by constructing a new method for calculating BMI distributions above the 95th percentile using an extended reference population. Data from youth at or above the 95th percentile of BMI-for-age in national surveys between 1963 and 2016 were modelled as half-normal distributions. Scale parameters for these distributions were estimated at each sex-specific 6-month age-interval, from 24 to 239 months, and then smoothed as a function of age using regression procedures. The modelled distributions above the 95th percentile can be used to calculate percentiles and non-compressed z-scores for extreme BMI values among youth. This method can be used, in conjunction with the current CDC BMI-for-age growth charts, to track extreme values of BMI among youth.
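A minimal sketch of how the modelled half-normal tail can be turned into extended percentiles and z-scores, assuming a known sex- and age-specific 95th-percentile BMI (p95) and smoothed half-normal scale parameter (sigma); the actual parameter values come from the dataset itself.

```python
from scipy.stats import halfnorm, norm

def extended_bmi_percentile(bmi, p95, sigma):
    """Percentile for a BMI at or above the sex/age-specific 95th percentile."""
    if bmi < p95:
        raise ValueError("Use the standard LMS-based charts below the 95th percentile.")
    # The 5% of the reference population above P95 is modelled as a half-normal.
    return 95.0 + 5.0 * halfnorm.cdf(bmi - p95, scale=sigma)

def extended_bmi_zscore(bmi, p95, sigma):
    """Non-compressed z-score corresponding to the extended percentile."""
    return norm.ppf(extended_bmi_percentile(bmi, p95, sigma) / 100.0)

# Illustrative parameters only; real p95 and sigma values are age- and sex-specific.
print(extended_bmi_zscore(bmi=32.0, p95=25.0, sigma=5.0))
```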

  2. SAGA: Calculate Standard Deviation (Grain Size)

    • data.amerigeoss.org
    esri rest, html
    Updated Nov 8, 2018
    + more versions
    Cite
    United States (2018). SAGA: Calculate Standard Deviation (Grain Size) [Dataset]. https://data.amerigeoss.org/gl/dataset/saga-calculate-standard-deviation-grain-size
    Explore at:
    Available download formats: html, esri rest
    Dataset updated
    Nov 8, 2018
    Dataset provided by
    United States
    License

    http://geospatial-usace.opendata.arcgis.com/datasets/4a170b34bced4d06a0ba41cbab51a2af/license.json

    Description

    A sieve analysis (or gradation test) is a practice or procedure commonly used in civil engineering to assess the particle size distribution (also called gradation) of a granular material.

    As part of the Sediment Analysis and Geo-App (SAGA), a series of data-processing web services are available to assist in computing sediment statistics from sieve analysis results. The Standard Deviation service first computes the percentiles D5, D16, D35, D84 and D95, and then applies the formula (D16 - D84)/4 + (D5 - D95)/6 (see the sketch after the parameter list below).

    Percentiles can also be computed for classification sub-groups: Overall (OVERALL), <62.5 um (DS_FINE), 62.5-250um (DS_MED), and > 250um (DS_COARSE)

    Parameter #1: Input Sieve Size, Percent Passing, Sieve Units.

    • Semi-colon separated. ex: 75000, 100, um; 50000, 100, um; 37500, 100, um; 25000,100,um; 19000,100,um
    • A minimum of 4 sieve sizes must be used. Units supported: um, mm, inches, #, Mesh, phi
    • All sieve sizes must be numeric

    Parameter #2: Subgroup

    • Options: OVERALL, DS_COARSE, DS_MED, DS_FINE
    • The statistics are computed for the overall sample and into Coarse, Medium, and Fine sub-classes
      • Coarse (> 250 um) DS_COARSE
      • Medium (62.5 – 250 um) DS_MED
      • Fine (< 62.5 um) DS_FINE
      • OVERALL (all records)

    Parameter #3: Outunits

    • Options: phi, m, um
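A minimal sketch of the quoted statistic, assuming sieve sizes have already been converted to phi units and that percentiles are linearly interpolated on the cumulative percent-passing curve; the SAGA web service's exact interpolation and unit handling may differ.

```python
import numpy as np

def grain_size_std(sizes_phi, percent_passing):
    # Sort by percent passing so np.interp sees increasing x values.
    order = np.argsort(percent_passing)
    pp = np.asarray(percent_passing, dtype=float)[order]
    phi = np.asarray(sizes_phi, dtype=float)[order]
    d = {p: np.interp(p, pp, phi) for p in (5, 16, 35, 84, 95)}
    # Formula quoted in the description: (D16 - D84)/4 + (D5 - D95)/6
    return (d[16] - d[84]) / 4 + (d[5] - d[95]) / 6
```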

  3. Table 3.1 Percentile points for total income before and after tax

    • gov.uk
    Updated Mar 12, 2025
    + more versions
    Cite
    HM Revenue & Customs (2025). Table 3.1 Percentile points for total income before and after tax [Dataset]. https://www.gov.uk/government/statistics/percentile-points-for-total-income-before-and-after-tax-1992-to-2011
    Explore at:
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    GOV.UK (http://gov.uk/)
    Authors
    HM Revenue & Customs
    Description

    The table only covers individuals who have some liability to Income Tax. The percentile points have been independently calculated on total income before tax and after tax.

    These statistics are classified as accredited official statistics.

    You can find more information about these statistics and collated tables for the latest and previous tax years on the Statistics about personal incomes page.

    Supporting documentation on the methodology used to produce these statistics is available in the release for each tax year.

    Note: comparisons over time may be affected by changes in methodology. Notably, there was a revision to the grossing factors in the 2018 to 2019 publication, which is discussed in the commentary and supporting documentation for that tax year. Further details, including a summary of significant methodological changes over time, data suitability and coverage, are included in the Background Quality Report.

  4. 50th Percentile Rent Estimates

    • s.cnmilf.com
    • catalog.data.gov
    • +1more
    Updated Mar 1, 2024
    + more versions
    Cite
    U.S. Department of Housing and Urban Development (2024). 50th Percentile Rent Estimates [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/50th-percentile-rent-estimates
    Explore at:
    Dataset updated
    Mar 1, 2024
    Dataset provided by
    United States Department of Housing and Urban Development (http://www.hud.gov/)
    Description

    Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment standard amounts for the Housing Choice Voucher program, to determine initial renewal rents for some expiring project-based Section 8 contracts, to determine initial rents for housing assistance payment (HAP) contracts in the Moderate Rehabilitation Single Room Occupancy program (Mod Rehab), and to serve as a rent ceiling in the HOME rental assistance program. FMRs are gross rent estimates. They include the shelter rent plus the cost of all tenant-paid utilities, except telephones, cable or satellite television service, and internet service. The U.S. Department of Housing and Urban Development (HUD) annually estimates FMRs for 530 metropolitan areas and 2,045 nonmetropolitan county FMR areas. Under certain conditions, as set forth in the Interim Rule (Federal Register Vol. 65, No. 191, Monday October 2, 2000, pages 58870-58875), these 50th percentile rents can be used to set success rate payment standards.

  5. Percentile Intervals in Bayesian Inference are Overconfident

    • darus.uni-stuttgart.de
    Updated Mar 19, 2024
    + more versions
    Cite
    Sebastian Höpfl (2024). Percentile Intervals in Bayesian Inference are Overconfident [Dataset]. http://doi.org/10.18419/DARUS-4068
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 19, 2024
    Dataset provided by
    DaRUS
    Authors
    Sebastian Höpfl
    License

    https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4068

    Dataset funded by
    BMBF
    DFG
    Description

    This dataset demonstrates the difference between percentile intervals used as an approximation for Highest Density Intervals (HDI) and the Highest Posterior Density (HPD) approach. This is demonstrated with extended partial liver resection data (ZeLeR study, ethical vote: 2018-1246-Material). The data include Computed Tomography (CT) liver volume measurements of patients before (POD 0) and after partial hepatectomy. Liver volume was normalized per patient to the preoperative liver volume and used to screen the liver regeneration courses. The Fujifilm Synapse3D software was used to calculate volume estimates from CT images. The data are structured in a tab-separated value file in the PEtab format.
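A minimal sketch contrasting an equal-tailed percentile interval with a highest density interval (HDI) computed from posterior samples; it is illustrative only and is not the analysis code distributed with this dataset.

```python
import numpy as np

def percentile_interval(samples, mass=0.95):
    lo = (1 - mass) / 2
    return np.quantile(samples, [lo, 1 - lo])

def hdi(samples, mass=0.95):
    x = np.sort(samples)
    n_in = int(np.ceil(mass * len(x)))
    widths = x[n_in - 1:] - x[:len(x) - n_in + 1]
    i = np.argmin(widths)                        # narrowest window holding `mass` of the draws
    return np.array([x[i], x[i + n_in - 1]])

# For a skewed posterior the two intervals differ noticeably.
draws = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=10_000)
print(percentile_interval(draws), hdi(draws))
```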

  6. Bare Cover Time Series Statistical Layer [dif stage]

    • researchdata.edu.au
    Updated Sep 6, 2018
    + more versions
    Cite
    data.nsw.gov.au (2018). Bare Cover Time Series Statistical Layer [dif stage] [Dataset]. https://researchdata.edu.au/bare-cover-time-dif-stage/1342560?source=suggested_datasets
    Explore at:
    Dataset updated
    Sep 6, 2018
    Dataset provided by
    data.nsw.gov.au
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    The di5 stage is a 5-layer image with the 5th, 50th and 95th percentiles of the beta distribution, calculated from the mean and standard deviation of the time-series image stack. This is a more robust version of the minimum, mean and maximum, since the statistics are calculated from the entire time series for the specified time interval.

    Layers are: 5th percentile (band 1), 50th percentile (band 2), 95th percentile (band 3), standard deviation (band 4), number of observations (band 5).

    The input time-series images are bare fractional cover images (stage die) masked for water, cloud and cloud shadow.

    Scaling for layers 1-3 is Percentile = DN - 100; layers 4 and 5 are not scaled. Images are 8-bit unsigned integer. This layer was created for the LMD CMA at this time.
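A minimal sketch of the calculation described above, assuming method-of-moments beta parameters fitted to the per-pixel time-series mean and standard deviation (cover rescaled to 0-1) and the stated DN packing; the production workflow may differ in detail.

```python
from scipy.stats import beta as beta_dist

def beta_percentiles(mean_pct, std_pct, probs=(0.05, 0.50, 0.95)):
    m, v = mean_pct / 100.0, (std_pct / 100.0) ** 2   # cover given in percent
    common = m * (1 - m) / v - 1                      # method-of-moments factor
    a, b = m * common, (1 - m) * common
    return [100.0 * beta_dist.ppf(p, a, b) for p in probs]

def to_dn(percentile_value):
    return int(round(percentile_value)) + 100         # stated scaling: Percentile = DN - 100

print([to_dn(p) for p in beta_percentiles(mean_pct=35.0, std_pct=12.0)])
```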

  7. Median Income v2 0

    • ct-ejscreen-v1-connecticut.hub.arcgis.com
    Updated Aug 2, 2023
    Cite
    University of Connecticut (2023). Median Income v2 0 [Dataset]. https://ct-ejscreen-v1-connecticut.hub.arcgis.com/items/d4464fafb8594926bad4fca52600e1bd
    Explore at:
    Dataset updated
    Aug 2, 2023
    Dataset authored and provided by
    University of Connecticut
    Area covered
    Description

    This indicator represents census tracts ranked by their percentile level of median household income and per capita income per census tract. The data source is the 2017-2021 American Community Survey, 5-year estimates. The percentile and the rank were calculated. A percentile is a score indicating the value below which a given percentage of observations in a group of observations fall. It indicates the relative position of a particular value within a dataset. For example, the 20th percentile is the value below which 20% of the observations may be found. The rank refers to a process of arranging percentiles in descending order, starting from the highest percentile and ending with the lowest percentile. Once the percentiles are ranked, a normalization step is performed to rescale the rank values between 0 and 10. A rank value of 10 represents the highest percentile, while a rank value of 0 corresponds to the lowest percentile in the dataset. The normalized rank provides a relative assessment of the position of each percentile within the distribution, making it simpler to understand the relative magnitude of differences between percentiles. Normalization between 0 and 10 ensures that the rank values are standardized and uniformly distributed within the specified range. This normalization allows for easier interpretation and comparison of the rank values, as they are now on a consistent scale. For detailed methods, go to connecticut-environmental-justice.circa.uconn.edu.
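A minimal sketch of the ranking scheme described above (percentile per tract, then ranks rescaled to 0-10); tie handling and ranking direction are assumptions made for illustration.

```python
import numpy as np

def percentile_scores(values):
    values = np.asarray(values, dtype=float)
    # Percentile of each tract: share of observations strictly below it (0-100).
    pct = np.array([(values < v).mean() * 100 for v in values])
    # Rescale so the highest percentile maps to 10 and the lowest to 0.
    normalized = 10 * (pct - pct.min()) / (pct.max() - pct.min())
    return pct, normalized

pct, score = percentile_scores([21_000, 35_500, 48_200, 60_000, 112_000])
print(score)   # one 0-10 score per tract
```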

  8. GMAT Focus Edition Section Score Percentile Rankings

    • e-gmat.com
    html
    Updated Aug 4, 2025
    Cite
    e-GMAT (2025). GMAT Focus Edition Section Score Percentile Rankings [Dataset]. https://e-gmat.com/blogs/gmat-scores-percentiles-score-chart-calculator/
    Explore at:
    Available download formats: html
    Dataset updated
    Aug 4, 2025
    Dataset authored and provided by
    e-GMAT
    Variables measured
    GMAT Section Score
    Description

    Official GMAT Focus Edition section scores (Quantitative, Verbal, and Data Insights) to percentile conversion tables for scores ranging from 60 to 90

  9. 2009-2012_50th Percentile Rent Estimates: Data by County

    • data.opendatanetwork.com
    application/rdfxml +5
    Updated May 12, 2014
    + more versions
    Cite
    Department of Housing and Urban Development (2014). 2009-2012_50th Percentile Rent Estimates: Data by County [Dataset]. https://data.opendatanetwork.com/Statistics/2009-2012_50th-Percentile-Rent-Estimates-Data-by-C/ema8-g2sk
    Explore at:
    Available download formats: tsv, csv, xml, application/rssxml, json, application/rdfxml
    Dataset updated
    May 12, 2014
    Dataset authored and provided by
    Department of Housing and Urban Development
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. THESE ARE NOT FAIR MARKET RENTS. Under certain conditions, as set forth in the Interim Rule (Federal Register Vol. 65, No. 191, Monday October 2, 2000, pages 58870-58875), these 50th percentile rents can be used to set success rate payment standards. FY2009-FY2012. Note that data included herein are aggregated from individual files listed in main URL field below.

  10. U.S. Geological Survey calculated 95th percentile of wave-current bottom...

    • catalog.data.gov
    • search.dataone.org
    • +1more
    Updated Oct 2, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). U.S. Geological Survey calculated 95th percentile of wave-current bottom shear stress for the South Atlantic Bight for May 2010 to May 2011 (SAB_95th_perc, polygon shapefile, Geographic, WGS84) [Dataset]. https://catalog.data.gov/dataset/u-s-geological-survey-calculated-95th-percentile-of-wave-current-bottom-shear-stress-for-t
    Explore at:
    Dataset updated
    Oct 2, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
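A minimal sketch of the named descriptors, assuming an hourly bottom shear stress series for a single grid cell; in the USGS workflow the series comes from wave and circulation model output.

```python
import numpy as np

def stress_descriptors(tau_hourly):
    tau = np.asarray(tau_hourly, dtype=float)
    return {"median": np.percentile(tau, 50), "p95": np.percentile(tau, 95)}

# One year of synthetic hourly values, for illustration only.
tau = np.random.default_rng(1).lognormal(mean=-2.0, sigma=0.8, size=24 * 365)
print(stress_descriptors(tau))
```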

  11. Data from: Source code and example data for article: Co-Citation Percentile...

    • jyx.jyu.fi
    Updated Sep 23, 2020
    Cite
    Janne-Tuomas Seppänen (2020). Source code and example data for article: Co-Citation Percentile Rank and JYUcite: a new network-standardized output-level citation influence metric [Dataset]. http://doi.org/10.17011/jyx/dataset/71858
    Explore at:
    Dataset updated
    Sep 23, 2020
    Authors
    Janne-Tuomas Seppänen
    License

    https://opensource.org/license/MIT

    Description

    Algorithm (.php) for retrieving the co-citation set of a scholarly output by DOI, and calculating CPR for it. Configuration, database operations and input sanitizing code omitted. Also, example data and statistical analyses used in Seppänen et al (2020). For context see: Seppänen et al (2020): Co-Citation Percentile Rank and JYUcite: a new network-standardized output-level citation influence metric https://oscsolutions.cc.jyu.fi/jyucite

  12. DEA Fractional Cover Percentiles (Landsat) Version 4.0.0

    • researchdata.edu.au
    Updated Sep 23, 2025
    Cite
    Jorand, C.,; Ai, E.; Ebadi, T.; Lymburner, L.; Lymburner, L.; Jorand, C.,; Ebadi, T.; Ai, E. (2025). DEA Fractional Cover Percentiles (Landsat) Version 4.0.0 [Dataset]. http://doi.org/10.26186/150570
    Explore at:
    Dataset updated
    Sep 23, 2025
    Dataset provided by
    Geoscience Australia (http://ga.gov.au/)
    Authors
    Jorand, C.,; Ai, E.; Ebadi, T.; Lymburner, L.; Lymburner, L.; Jorand, C.,; Ebadi, T.; Ai, E.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1987 - Present
    Area covered
    Description
    Fractional Cover Percentiles (Landsat) estimate the 10th, 50th, and 90th percentiles independently for the green vegetation, non-green vegetation, and bare soil fractions observed in each calendar year from 1987.

    The spatial extent is all Australia and the spatial resolution is 30 m x 30 m.

    Percentiles provide an indicator of where an observation sits, relative to the rest of the observations for the pixel. For example, the 90th percentile is the value below which 90% of the observations fall.
    The 10th, 50th, and 90th percentiles represent low, median and high values in a distribution that are robust against outliers. These values can be used separately or combined to understand the land cover dynamics. For example, the three percentiles for the green cover fraction can serve as proxies for the minimum, typical and maximum green cover for a given year. Difference between the 10th and 90th percentiles provides an estimate of the magnitude of change within a year. A large range of values may be observed in the agricultural land for all cover types while high green cover and a small difference between 10th and 90th percentiles are expected for forest cover.
    A representative view of the landscape in a year can be obtained by combining the 50th percentiles, or the median values, for the three cover types.

    The statistics are calculated using the following satellites for the following periods of time:
    - 1987-1998 : Landsat 5 only
    - 1999 : Landsat 5 and Landsat 7
    - 2000-2002 : Landsat 7 only
    - 2003 : Landsat 5 and Landsat 7
    - 2004-2010 : Landsat 5 only
    - 2011-2012 : Landsat 7 only
    - 2013-2021 : Landsat 8 only
    - 2022 onwards: Landsat 8 and Landsat 9

    The values for this product are as follows:
    For the fractional cover bands (PV, NPV, BS)
    0-100 = fractional cover values that range between 0 and 100%

    Quality Assurance:
    This layer provides a breakdown of each FCP pixel between:
    - sufficient observations
    - insufficient observations dry
    - insufficient observations wet
    For insufficient observations, these are pixels that have been masked out of the percentiles results e.g. NODATA, and provides an explanation as to why they have been masked out.

    Each product’s dataset is:
    - divided into tiles of 3200 x 3200 pixels, with a pixel size of 30 m x 30 m
    - presented in EPSG:3577 coordinate reference system

    Fractional Cover Masking
    DEA Water Observations are used to identify clear pixels from DEA Fractional Cover to be included in the percentile calculation. A Fractional Cover observation is included if (a sketch of the resulting per-pixel calculation follows this list):

    - it has corresponding DEA Water Observation information. If an observation within DEA Fractional Cover has no corresponding Water Observation, it is discarded. This can happen for ARD scenes that have a geometric quality assessment of greater than one, which occurs when there is poor geometric quality.

    - the DEA Water Observation has the following characteristics:
    -- it is contiguous (data for all bands is present and valid),
    -- it is not saturated,
    -- it is not cloud,
    -- it is not cloud shadow,
    -- it is not terrain shadow,
    -- it is not low solar angle,
    -- it can be high slope,
    -- it is not wet,
    -- there are at least 3 clear and dry observations for the time period.

    - No land/sea masking is applied.

    - Observation dates for given percentiles are not captured.
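A minimal sketch of the per-pixel percentile calculation, assuming the masking rules above have already set excluded observations to NaN in a (time, y, x) stack and that pixels with fewer than 3 clear, dry observations are flagged as insufficient; the NODATA value is an assumption for illustration.

```python
import numpy as np

NODATA = 255   # assumed fill value for insufficient-observation pixels

def annual_percentiles(stack, percentiles=(10, 50, 90), min_obs=3):
    """stack: float array of shape (time, y, x) with masked observations set to NaN."""
    clear = np.isfinite(stack).sum(axis=0)                    # clear, dry observations per pixel
    result = np.nanpercentile(stack, percentiles, axis=0)     # shape (3, y, x); all-NaN pixels give NaN
    result[:, clear < min_obs] = NODATA                       # insufficient observations
    return result
```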



  13. 50th percentile U.S. male data

    • figshare.com
    xlsx
    Updated May 30, 2023
    Cite
    Manoj Gupta (2023). 50th percentile U.S. male data [Dataset]. http://doi.org/10.6084/m9.figshare.6143423.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Manoj Gupta
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table contains anthropometric data for 50th percentile U.S. male. This data has been used to calculate dimensions of truncated ellipsoidal finite element segments.

  14. Gridded uniform hazard peak ground acceleration data and 84th-percentile...

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Sep 30, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Gridded uniform hazard peak ground acceleration data and 84th-percentile peak ground acceleration data used to calculate the Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard for Guam and the Northern Mariana Islands. [Dataset]. https://catalog.data.gov/dataset/gridded-uniform-hazard-peak-ground-acceleration-data-and-84th-percentile-peak-ground-accel
    Explore at:
    Dataset updated
    Sep 30, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard are derived from the downloadable data files. For each site class, the MCEG peak ground acceleration (PGA_M) is calculated via the following equation:

    PGA_M = min[ PGA_MUH, max( PGA_M84th, PGA_MDLL ) ]

    where
    PGA_MUH = uniform-hazard peak ground acceleration
    PGA_M84th = 84th-percentile peak ground acceleration
    PGA_MDLL = deterministic lower limit spectral acceleration
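A minimal sketch of the stated combination rule; the three inputs per site class come from the gridded data files in this release.

```python
def mceg_pga(pga_uh, pga_84th, pga_dll):
    """PGA_M = min[ PGA_MUH, max( PGA_M84th, PGA_MDLL ) ]"""
    return min(pga_uh, max(pga_84th, pga_dll))

# Illustrative values only (units of g).
print(mceg_pga(pga_uh=0.62, pga_84th=0.55, pga_dll=0.50))   # -> 0.55
```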

  15. Gridded uniform hazard peak ground acceleration data and 84th-percentile...

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Sep 24, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Gridded uniform hazard peak ground acceleration data and 84th-percentile peak ground acceleration data used to calculate the Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard for the conterminous United States. [Dataset]. https://catalog.data.gov/dataset/gridded-uniform-hazard-peak-ground-acceleration-data-and-84th-percentile-peak-ground-accel-40c4f
    Explore at:
    Dataset updated
    Sep 24, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    United States
    Description

    The Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard are derived from the downloadable data files. For each site class, the MCEG peak ground acceleration (PGA_M) is calculated via the following equation:

    PGA_M = min[ PGA_MUH, max( PGA_M84th, PGA_MDLL ) ]

    where
    PGA_MUH = uniform-hazard peak ground acceleration
    PGA_M84th = 84th-percentile peak ground acceleration
    PGA_MDLL = deterministic lower limit spectral acceleration

  16. HGW: Chrome, 90th percentile (top) | gimi9.com

    • gimi9.com
    Updated Dec 15, 2024
    + more versions
    Cite
    (2024). HGW: Chrome, 90th percentile (top) | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_2444cd3d-d6e5-3ac4-7681-8c0613b9cb72/
    Explore at:
    Dataset updated
    Dec 15, 2024
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The 90th-percentile (90.P) background value is the 90th percentile of a data collective: the value below which 90% of the cases observed so far lie. The calculation is made after the data set has been cleaned of outliers. The 90th percentile often serves as the upper limit of the background range, delineating unusually high levels. The total content is determined from the aqua regia extract (according to DIN ISO 11466 (1997)). The concentration is given in mg/kg. The content classes take into account, among other things, the precautionary values of the BBodSchV (1999): 30 mg/kg for sand, 60 mg/kg for loam, silt and very silty sand, and 100 mg/kg for clay. According to LABO (2003), a sample count of >= 20 is required for the calculation of background values. However, the map also shows groups with a sample count >= 10; this information is then only informal and not representative.
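A minimal sketch of the 90th-percentile background value; the exact outlier-screening rule is not given in this excerpt, so a simple interquartile fence stands in for it, while the sample-count thresholds follow the description.

```python
import numpy as np

def background_value_p90(concentrations_mg_kg, min_n=10):
    x = np.asarray(concentrations_mg_kg, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    kept = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]   # stand-in outlier screen
    if kept.size < min_n:
        return None                                           # too few samples for a value
    representative = kept.size >= 20                          # LABO (2003) threshold
    return np.percentile(kept, 90), representative
```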

  17. Data from: Half interpercentile range (half of the difference between the...

    • s.cnmilf.com
    • data.usgs.gov
    • +6more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Half interpercentile range (half of the difference between the 16th and 84th percentiles) of wave-current bottom shear stress in the Middle Atlantic Bight for May, 2010 - May, 2011 (MAB_hIPR.SHP) [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/half-interpercentile-range-half-of-the-difference-between-the-16th-and-84th-percentiles-of
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
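A minimal sketch of the half interpercentile range named in the title, assuming an hourly bottom shear stress series per grid cell.

```python
import numpy as np

def half_ipr(tau_hourly):
    # Half of the spread between the 16th and 84th percentiles of the series.
    p16, p84 = np.percentile(np.asarray(tau_hourly, dtype=float), [16, 84])
    return (p84 - p16) / 2.0
```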

  18. Power Quality

    • ukpowernetworks.opendatasoft.com
    Updated Oct 6, 2025
    Cite
    (2025). Power Quality [Dataset]. https://ukpowernetworks.opendatasoft.com/explore/dataset/ukpn-power-quality/
    Explore at:
    Dataset updated
    Oct 6, 2025
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: This dataset contains data captured from remote Power Quality logging devices currently available across 450 UK Power Network sites*. A weekly 95th percentile value per harmonic is calculated and the highest value of each harmonic amongst all weeks, over a period of 12 months (also applicable to THD), is shown.
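A minimal sketch of the published statistic (weekly 95th percentile per harmonic, then the highest weekly value over the 12-month window); the column names are assumptions made for illustration.

```python
import pandas as pd

def yearly_harmonic_summary(df):
    """df columns (assumed): 'timestamp' (datetime64), 'harmonic' (e.g. 'H3'), 'value'."""
    df = df.assign(year_week=df["timestamp"].dt.strftime("%G-W%V"))   # ISO 8601 year-week
    weekly_p95 = (df.groupby(["harmonic", "year_week"])["value"]
                    .quantile(0.95)
                    .rename("weekly_p95"))
    return weekly_p95.groupby("harmonic").max().rename("max_weekly_p95")
```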

    Methodological Approach: Power Quality data is collected from meters on a 10-minute basis and stored in a database. 95th percentile statistics are calculated on a weekly basis and used to generate the harmonics report. Year-week is the ISO 8601 year and week number.

    Quality Control Statement: The data is provided "as is".

    Assurance Statement: Harmonic data is periodically extracted and reviewed prior to publication.

    Other: Download dataset information: Metadata (JSON)

    Definitions of key terms related to this dataset can be found in the Open Data Portal Glossary: https://ukpowernetworks.opendatasoft.com/pages/glossary/
    To view this data, please register and log in.

  19. Noise v2 0

    • ct-ejscreen-v1-connecticut.hub.arcgis.com
    Updated Aug 2, 2023
    Cite
    University of Connecticut (2023). Noise v2 0 [Dataset]. https://ct-ejscreen-v1-connecticut.hub.arcgis.com/items/3e42945438e64c968703da8d2e0e4057
    Explore at:
    Dataset updated
    Aug 2, 2023
    Dataset authored and provided by
    University of Connecticut
    Area covered
    Description

    This processed data represents the estimated percentile level of noise energy from transportation. The data is from the U.S. Department of Transportation, Bureau of Transportation Statistics, National Transportation Noise Map, 2018. Census block data was converted into census tract data by taking the mean of the census blocks within each tract. From there the percentile and the rank were calculated. A percentile is a score indicating the value below which a given percentage of observations in a group of observations fall. It indicates the relative position of a particular value within a dataset. For example, the 20th percentile is the value below which 20% of the observations may be found. The rank refers to a process of arranging percentiles in descending order, starting from the highest percentile and ending with the lowest percentile. Once the percentiles are ranked, a normalization step is performed to rescale the rank values between 0 and 10. A rank value of 10 represents the highest percentile, while a rank value of 0 corresponds to the lowest percentile in the dataset. The normalized rank provides a relative assessment of the position of each percentile within the distribution, making it simpler to understand the relative magnitude of differences between percentiles. Normalization between 0 and 10 ensures that the rank values are standardized and uniformly distributed within the specified range. This normalization allows for easier interpretation and comparison of the rank values, as they are now on a consistent scale. For detailed methods, go to connecticut-environmental-justice.circa.uconn.edu.

  20. Income statistics reported by each 5th percentile for comprehensive tax...

    • data.gov.tw
    csv
    Cite
    Fiscal Information Agency, Ministry of Finance. Income statistics reported by each 5th percentile for comprehensive tax based on calculated income [Dataset]. https://data.gov.tw/en/datasets/17882
    Explore at:
    Available download formats: csv
    Dataset provided by
    Fiscal Information Agency (https://www.fia.gov.tw/eng/)
    Authors
    Fiscal Information Agency, Ministry of Finance
    License

    https://data.gov.tw/license

    Description

    Comprehensive income tax statistics table of income amounts, reported by income percentile. Unit: amount in thousand dollars.
