91 datasets found
  1. Data from: A method for calculating BMI z-scores and percentiles above the...

    • tandf.figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated Jun 5, 2023
    Cite
    Rong Wei; Cynthia L. Ogden; Van L. Parsons; David S. Freedman; Craig M. Hales (2023). A method for calculating BMI z-scores and percentiles above the 95th percentile of the CDC growth charts [Dataset]. http://doi.org/10.6084/m9.figshare.12932858.v1
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Rong Wei; Cynthia L. Ogden; Van L. Parsons; David S. Freedman; Craig M. Hales
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The 2000 CDC growth charts are based on national data collected between 1963 and 1994 and include a set of selected percentiles between the 3rd and 97th and LMS parameters that can be used to obtain other percentiles and associated z-scores. Obesity is defined as a sex- and age-specific body mass index (BMI) at or above the 95th percentile. Extrapolating beyond the 97th percentile is not recommended and leads to compressed z-score values. This study attempts to overcome this limitation by constructing a new method for calculating BMI distributions above the 95th percentile using an extended reference population. Data from youth at or above the 95th percentile of BMI-for-age in national surveys between 1963 and 2016 were modelled as half-normal distributions. Scale parameters for these distributions were estimated at each sex-specific 6-month age-interval, from 24 to 239 months, and then smoothed as a function of age using regression procedures. The modelled distributions above the 95th percentile can be used to calculate percentiles and non-compressed z-scores for extreme BMI values among youth. This method can be used, in conjunction with the current CDC BMI-for-age growth charts, to track extreme values of BMI among youth.
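    The construction can be sketched numerically. Below is a minimal Python illustration; `P95` and `SIGMA` are hypothetical placeholder values (the real ones are sex- and age-specific and come from the study's smoothed regression estimates):

```python
# Sketch of the half-normal extension above the 95th percentile.
# P95 and SIGMA are illustrative placeholders, not values from the study.
from scipy.stats import halfnorm, norm

P95 = 25.0    # hypothetical sex/age-specific 95th-percentile BMI
SIGMA = 4.0   # hypothetical smoothed half-normal scale parameter

def extended_percentile(bmi):
    """Percentile for a BMI at or above the 95th percentile: the top 5%
    of the distribution is modelled as half-normal above P95."""
    if bmi < P95:
        raise ValueError("method applies only at or above the 95th percentile")
    return 95 + 5 * halfnorm.cdf(bmi - P95, scale=SIGMA)

def extended_z(bmi):
    """Non-compressed z-score obtained from the modelled percentile."""
    return norm.ppf(extended_percentile(bmi) / 100)
```

    A BMI exactly at the 95th percentile maps to percentile 95 (z of about 1.645), and larger BMI values map to strictly increasing, non-compressed z-scores.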

  2. SAGA: Calculate Percentile

    • data.amerigeoss.org
    esri rest, html
    Updated Oct 1, 2018
    + more versions
    Cite
    United States (2018). SAGA: Calculate Percentile [Dataset]. https://data.amerigeoss.org/gl/dataset/saga-calculate-percentile
    Explore at:
    esri rest, html (available download formats)
    Dataset updated
    Oct 1, 2018
    Dataset provided by
    United States
    License

    http://geospatial-usace.opendata.arcgis.com/datasets/9defaa133d434c0a8bb82d5db54e1934/license.json

    Description

    A sieve analysis (or gradation test) is a practice or procedure commonly used in civil engineering to assess the particle size distribution (also called gradation) of a granular material.

    As part of the Sediment Analysis and Geo-App (SAGA) a series of data processing web services are available to assist in computing sediment statistics based on results of sieve analysis. The Calculate Percentile service returns one of the following percentiles: D5, D10, D16, D35, D50, D84, D90, D95.

    Percentiles can also be computed for classification sub-groups: Overall (OVERALL), <62.5 um (DS_FINE), 62.5-250 um (DS_MED), and >250 um (DS_COARSE).

    Parameter #1: Input Sieve Size, Percent Passing, Sieve Units.

    • Semi-colon separated. ex: 75000, 100, um; 50000, 100, um; 37500, 100, um; 25000,100,um; 19000,100,um
    • A minimum of 4 sieve sizes must be used. Units supported: um, mm, inches, #, Mesh, phi
    • All sieve sizes must be numeric

    Parameter #2: Percentile

    • Options: D5, D10, D16, D35, D50, D84, D90, D95

    Parameter #3: Subgroup

    • Options: OVERALL, DS_COARSE, DS_MED, DS_FINE
    • The statistics are computed for the overall sample and into Coarse, Medium, and Fine sub-classes
      • Coarse (> 250 um) DS_COARSE
      • Medium (62.5 – 250 um) DS_MED
      • Fine (< 62.5 um) DS_FINE
      • OVERALL (all records)

    Parameter #4: Outunits

    • Options: phi, m, um
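    For orientation, a Dxx percentile can be recovered from a percent-passing curve by interpolation. The sketch below is illustrative only; it is not the SAGA service's implementation, and log-linear interpolation between sieve sizes is an assumption:

```python
# Illustrative Dxx estimate by log-linear interpolation of a
# percent-passing curve (NOT the SAGA service's actual algorithm).
import math

def d_percentile(sizes_um, pct_passing, p):
    """sizes_um: sieve openings in um; pct_passing: cumulative percent
    passing each sieve; p: target percent (e.g. 50 for D50)."""
    pairs = sorted(zip(pct_passing, sizes_um))  # ascending percent passing
    for (p0, s0), (p1, s1) in zip(pairs, pairs[1:]):
        if p0 <= p <= p1:
            if p1 == p0:
                return s0
            f = (p - p0) / (p1 - p0)
            return math.exp(math.log(s0) + f * (math.log(s1) - math.log(s0)))
    raise ValueError("target percent lies outside the measured curve")

# Four sieves (the service's stated minimum); D50 falls between 125 and 250 um.
d50 = d_percentile([500, 250, 125, 63], [95, 70, 30, 5], 50)
```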

    This service is part of the Sediment Analysis and Geo-App (SAGA) Toolkit.

    Looking for a comprehensive user interface to run this tool?
    Go to SAGA Online to view this geoprocessing service with data already stored in the SAGA database.

    This service can be used independently of the SAGA application and user interface, or the tool can be directly accessed through http://navigation.usace.army.mil/SEM/Analysis/GSD

  3. Table 3.1 Percentile points for total income before and after tax

    • gov.uk
    Updated Mar 12, 2025
    + more versions
    Cite
    HM Revenue & Customs (2025). Table 3.1 Percentile points for total income before and after tax [Dataset]. https://www.gov.uk/government/statistics/percentile-points-for-total-income-before-and-after-tax-1992-to-2011
    Explore at:
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    GOV.UK (http://gov.uk/)
    Authors
    HM Revenue & Customs
    Description

    The table only covers individuals who have some liability to Income Tax. The percentile points have been independently calculated on total income before tax and after tax.

    These statistics are classified as accredited official statistics.

    You can find more information about these statistics and collated tables for the latest and previous tax years on the Statistics about personal incomes page.

    Supporting documentation on the methodology used to produce these statistics is available in the release for each tax year.

    Note: comparisons over time may be affected by changes in methodology. Notably, there was a revision to the grossing factors in the 2018 to 2019 publication, which is discussed in the commentary and supporting documentation for that tax year. Further details, including a summary of significant methodological changes over time, data suitability and coverage, are included in the Background Quality Report.

  4. The Percentile Bootstrap For Calculating The 95%Ci For The Median -...

    • explore.openaire.eu
    Updated Apr 17, 2018
    Cite
    J. Goedhart (2018). The Percentile Bootstrap For Calculating The 95%Ci For The Median - Animation (With R-Script And Example Data) [Dataset]. http://doi.org/10.5281/zenodo.1219874
    Explore at:
    Dataset updated
    Apr 17, 2018
    Authors
    J. Goedhart
    Description

    R Scripts and example data to perform a percentile bootstrap to determine the 95% confidence interval for the median. More background is described in this blog: http://thenode.biologists.com/a-better-bar/education/
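    The method is easy to paraphrase. A stdlib-only Python sketch (the dataset itself ships R scripts; this is an analogue, not the author's code):

```python
# Percentile bootstrap for a 95% confidence interval of the median:
# resample with replacement, take each resample's median, and read off
# the 2.5th and 97.5th percentiles of those bootstrap medians.
import random
import statistics

def bootstrap_median_ci(data, n_boot=10000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = medians[int(n_boot * alpha / 2)]
    hi = medians[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [2.1, 2.4, 2.6, 3.0, 3.2, 3.9, 4.4, 5.0, 6.1, 7.3]
lo, hi = bootstrap_median_ci(sample)
```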

  5. Median Income v2 0

    • ct-ejscreen-v1-connecticut.hub.arcgis.com
    Updated Aug 2, 2023
    Cite
    University of Connecticut (2023). Median Income v2 0 [Dataset]. https://ct-ejscreen-v1-connecticut.hub.arcgis.com/items/d4464fafb8594926bad4fca52600e1bd
    Explore at:
    Dataset updated
    Aug 2, 2023
    Dataset authored and provided by
    University of Connecticut
    Area covered
    Description

    This indicator represents census tracts ranked by the percentile of their median household income (per capita income) per census tract. The data source is the 2017-2021 American Community Survey, 5-year estimates. A percentile is a score indicating the value below which a given percentage of observations in a group falls, i.e. the relative position of a particular value within a dataset; for example, the 20th percentile is the value below which 20% of the observations may be found. The ranking arranges the percentiles in descending order, from the highest percentile to the lowest. Once ranked, a normalization step rescales the rank values to between 0 and 10: a rank of 10 represents the highest percentile and a rank of 0 the lowest. Normalizing to the fixed 0-10 range puts the ranks on a standardized, uniform scale, which makes the relative magnitude of differences between percentiles simpler to interpret and compare. For detailed methods, go to connecticut-environmental-justice.circa.uconn.edu.
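    The rank-and-rescale step reads, in code, roughly as follows (one interpretation of the description, not UConn's actual implementation; the names and income figures are invented):

```python
# Rank values by percentile position and rescale the ranks to [0, 10]:
# 10 for the highest value, 0 for the lowest.
def normalized_ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    ranks = [0.0] * n
    for pos, i in enumerate(order):
        ranks[i] = 10 * pos / (n - 1) if n > 1 else 10.0
    return ranks

# e.g. four tracts' median household incomes
scores = normalized_ranks([41000, 75000, 52000, 98000])
# the highest-income tract scores 10.0, the lowest 0.0
```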

  6. Percentile Intervals in Bayesian Inference are Overconfident

    • darus.uni-stuttgart.de
    Updated Mar 19, 2024
    + more versions
    Cite
    Sebastian Höpfl (2024). Percentile Intervals in Bayesian Inference are Overconfident [Dataset]. http://doi.org/10.18419/DARUS-4068
    Explore at:
    Croissant: a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Mar 19, 2024
    Dataset provided by
    DaRUS
    Authors
    Sebastian Höpfl
    License

    https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-4068

    Dataset funded by
    BMBF
    DFG
    Description

    This dataset demonstrates the difference between percentile intervals, used as an approximation for Highest Density Intervals (HDI), and Highest Posterior Density (HPD) intervals. The demonstration uses extended partial liver resection data (ZeLeR-study, ethical vote: 2018-1246-Material). The data include Computed Tomography (CT) liver volume measurements of patients before (POD 0) and after partial hepatectomy. Liver volume was normalized per patient to the preoperative liver volume and used to track the liver regeneration courses. The Fujifilm Synapse3D software was used to calculate the volume estimates from the CT images. The data is structured as a tab-separated value file in the PEtab format.

  7. HGW: Lead, 90th percentile (data in mg/kg) | gimi9.com

    • gimi9.com
    Updated Dec 15, 2024
    Cite
    (2024). HGW: Lead, 90th percentile (data in mg/kg) | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_475ed45e-db1d-dff3-e2dd-d3826a78baca
    Explore at:
    Dataset updated
    Dec 15, 2024
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The 90th percentile background value (90.P) is the 90th percentile of a data collective: the value below which 90% of the cases observed so far lie. It is calculated after outliers have been removed from the data group. The 90th percentile often serves as the upper limit of the background range, delineating unusually high levels. The total content is determined from the aqua regia extract (according to DIN ISO 11466 (1997)), and the concentration is given in mg/kg. The content classes take into account, among other things, the precautionary values of the BBodSchV (1999): 40 mg/kg for sand, 70 mg/kg for loam, silt and very silty sand, and 100 mg/kg for clay. According to LABO (2003), a sample count of >= 20 is required for the calculation of background values; however, the map also shows groups with a sample count >= 10. That information is informal only and not representative.
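    The computation described (clean outliers, then take the 90th percentile, subject to a minimum sample count) can be sketched as follows; the 1.5 x IQR outlier fence is an illustrative assumption, since the exact cleaning procedure is not given here:

```python
# Sketch: 90th-percentile background value after outlier removal.
# The 1.5*IQR fence is an assumed stand-in for the unspecified cleaning step.
import statistics

def background_value_90p(values, min_n=20):
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    cleaned = [v for v in values if lo <= v <= hi]
    if len(cleaned) < min_n:
        raise ValueError("LABO (2003): at least 20 samples required")
    return statistics.quantiles(cleaned, n=10)[-1]  # 90th percentile
```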

  8. Data from: Source code and example data for article: Co-Citation Percentile...

    • jyx.jyu.fi
    Updated Sep 15, 2025
    Cite
    Janne-Tuomas Seppänen (2025). Source code and example data for article: Co-Citation Percentile Rank and JYUcite: a new network-standardized output-level citation influence metric [Dataset]. http://doi.org/10.17011/jyx/dataset/71858
    Explore at:
    Dataset updated
    Sep 15, 2025
    Authors
    Janne-Tuomas Seppänen
    License

    MIT License: https://opensource.org/license/MIT

    Description

    Algorithm (.php) for retrieving the co-citation set of a scholarly output by DOI, and calculating CPR for it. Configuration, database operations and input sanitizing code omitted. Also, example data and statistical analyses used in Seppänen et al (2020). For context see: Seppänen et al (2020): Co-Citation Percentile Rank and JYUcite: a new network-standardized output-level citation influence metric https://oscsolutions.cc.jyu.fi/jyucite

  9. 50th Percentile Rent Estimates

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Mar 1, 2024
    + more versions
    Cite
    U.S. Department of Housing and Urban Development (2024). 50th Percentile Rent Estimates [Dataset]. https://catalog.data.gov/dataset/50th-percentile-rent-estimates
    Explore at:
    Dataset updated
    Mar 1, 2024
    Dataset provided by
    United States Department of Housing and Urban Development (http://www.hud.gov/)
    Description

    Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment standard amounts for the Housing Choice Voucher program, to determine initial renewal rents for some expiring project-based Section 8 contracts, to determine initial rents for housing assistance payment (HAP) contracts in the Moderate Rehabilitation Single Room Occupancy program (Mod Rehab), and to serve as a rent ceiling in the HOME rental assistance program. FMRs are gross rent estimates. They include the shelter rent plus the cost of all tenant-paid utilities, except telephones, cable or satellite television service, and internet service. The U.S. Department of Housing and Urban Development (HUD) annually estimates FMRs for 530 metropolitan areas and 2,045 nonmetropolitan county FMR areas. Under certain conditions, as set forth in the Interim Rule (Federal Register Vol. 65, No. 191, Monday October 2, 2000, pages 58870-58875), these 50th percentile rents can be used to set success rate payment standards.

  10. HGW: Cadmium, 90th percentile (surface)

    • data.europa.eu
    Cite
    HGW: Cadmium, 90th percentile (surface) [Dataset]. https://data.europa.eu/88u/dataset/ba0a3ff6-980c-6408-4483-9541a60d992b
    Explore at:
    Description

    The 90th percentile background value (90.P) is the 90th percentile of a data collective: the value below which 90% of the cases observed so far lie. It is calculated after outliers have been removed from the data group. The 90th percentile often serves as the upper limit of the background range, delineating unusually high levels. The total content is determined from the aqua regia extract (according to DIN ISO 11466 (1997)), and the concentration is given in mg/kg. The content classes take into account, among other things, the precautionary values of the BBodSchV (1999): 0.4 mg/kg for sand, 1.0 mg/kg for loam, silt and very silty sand, and 1.5 mg/kg for clay. According to LABO (2003), a sample count of >= 20 is required for the calculation of background values; however, the map also shows groups with a sample count >= 10. That information is informal only and not representative.

  11. HGW: Chrome, 90th percentile (top)

    • data.europa.eu
    Updated Sep 22, 2024
    Cite
    (2024). HGW: Chrome, 90th percentile (top) [Dataset]. https://data.europa.eu/data/datasets/2444cd3d-d6e5-3ac4-7681-8c0613b9cb72?locale=en
    Explore at:
    Dataset updated
    Sep 22, 2024
    Description

    The 90th percentile background value (90.P) is the 90th percentile of a data collective: the value below which 90% of the cases observed so far lie. It is calculated after outliers have been removed from the data group. The 90th percentile often serves as the upper limit of the background range, delineating unusually high levels. The total content is determined from the aqua regia extract (according to DIN ISO 11466 (1997)), and the concentration is given in mg/kg. The content classes take into account, among other things, the precautionary values of the BBodSchV (1999): 30 mg/kg for sand, 60 mg/kg for loam, silt and very silty sand, and 100 mg/kg for clay. According to LABO (2003), a sample count of >= 20 is required for the calculation of background values; however, the map also shows groups with a sample count >= 10. That information is informal only and not representative.

  12. The comprehensive tax return statistical table for each item of deduction...

    • data.gov.tw
    csv
    Cite
    Fiscal Information Agency, Ministry of Finance. The comprehensive tax return statistical table for each item of deduction based on the tax calculation 10th percentile [Dataset]. https://data.gov.tw/en/datasets/17875
    Explore at:
    csv (available download formats)
    Dataset authored and provided by
    Fiscal Information Agency, Ministry of Finance
    License

    https://data.gov.tw/license

    Description

    Statistical table of tax deductions by taxable income percentile. Unit: amount in thousand dollars.

  13. Gridded uniform hazard peak ground acceleration data and 84th-percentile...

    • catalog.data.gov
    • datasets.ai
    • +1more
    Updated Sep 15, 2025
    Cite
    U.S. Geological Survey (2025). Gridded uniform hazard peak ground acceleration data and 84th-percentile peak ground acceleration data used to calculate the Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard for Guam and the Northern Mariana Islands. [Dataset]. https://catalog.data.gov/dataset/gridded-uniform-hazard-peak-ground-acceleration-data-and-84th-percentile-peak-ground-accel
    Explore at:
    Dataset updated
    Sep 15, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard are derived from the downloadable data files. For each site class, the MCEG peak ground acceleration (PGA_M) is calculated via the following equation:

    PGA_M = min[ PGA_MUH, max( PGA_M84th, PGA_MDLL ) ]

    where
    • PGA_MUH = uniform-hazard peak ground acceleration
    • PGA_M84th = 84th-percentile peak ground acceleration
    • PGA_MDLL = deterministic lower limit spectral acceleration
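    The equation is a simple min/max composition and can be transcribed directly:

```python
# PGA_M = min[ PGA_MUH, max( PGA_M84th, PGA_MDLL ) ]
def pga_m(pga_uh, pga_84th, pga_dll):
    """MCE_G peak ground acceleration for one site class (values in g)."""
    return min(pga_uh, max(pga_84th, pga_dll))
```

    The deterministic lower limit floors the 84th-percentile value, and the uniform-hazard value caps the result; any numbers passed in here are illustrative, not values from the dataset.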

  14. Time-mean Sea Level Projections to 2100 (cm)

    • space-geoportal-queensub.hub.arcgis.com
    • keep-cool-global-community.hub.arcgis.com
    Updated Apr 7, 2022
    Cite
    Met Office (2022). Time-mean Sea Level Projections to 2100 (cm) [Dataset]. https://space-geoportal-queensub.hub.arcgis.com/datasets/TheMetOffice::time-mean-sea-level-projections-to-2100-cm
    Explore at:
    Dataset updated
    Apr 7, 2022
    Dataset authored and provided by
    Met Office
    Area covered
    Description

    Please note this dataset supersedes previous versions on the Climate Data Portal. It has been uploaded following an update to the dataset in March 2023, which means sea level rise is approximately 1 cm higher (larger) than in the original data release (i.e. the previous version available on this portal) for all UKCP18 site-specific sea level projections at all timescales. For more details please refer to the technical note.

    What does the data show?
    The time-mean sea-level projections to 2100 show the amount of sea-level change (in cm) for each coastal location (grid box) around the British Isles for several emission scenarios. Sea-level rise is the primary mechanism by which we expect coastal flood hazard to change in the UK in the future. The amount of sea-level rise depends on the location around the British Isles and increases with higher emission scenarios. Here, we provide the relative time-mean sea-level projections to 2100, i.e. the local sea-level change experienced at a particular location compared to the 1981-2000 average, produced as part of UKCP18. For each grid box the time-mean sea-level change projections are provided for the end of each decade (e.g. 2010, 2020, 2030, etc.) for three emission scenarios known as Representative Concentration Pathways (RCP) and for three percentiles.

    The emission scenarios are:
    • RCP2.6
    • RCP4.5
    • RCP8.5

    The percentiles are:
    • 5th percentile
    • 50th percentile
    • 95th percentile

    Important limitations of the data
    We cannot rule out substantial additional sea-level rise associated with ice sheet instability processes that are not represented in the UKCP18 projections, as discussed in the recent IPCC Sixth Assessment Report (AR6). Although the time-mean sea-level projections presented here run to 2100, past greenhouse gas emissions have already committed us to substantial additional sea level rise beyond 2100, because the ocean and cryosphere (the frozen parts of our planet) are very slow to respond to global warming. So even if global average air temperature stops rising as global emissions are reduced, sea level will continue to rise well beyond the time changes in global average air temperature level off or decline. This is illustrated by the extended exploratory time-mean sea level projections and discussed further in AR6 (Fox-Kemper et al, 2021).

    What are the naming conventions and how do I explore the data?
    The data is supplied so that each row corresponds to the combination of an RCP emissions scenario and a percentile value, e.g. 'RCP45_50' is the RCP4.5 scenario at the 50th percentile. This can be viewed and filtered by the field 'RCP and Percentile'. The columns (fields) correspond to the end of each decade and are named by the sea level anomaly at year x, e.g. '2050 seaLevelAnom' is the sea level anomaly at 2050 compared to the 1981-2000 average. Please note that the styling and filtering options are independent of each other, and the attribute you style the data by can be set differently from the one you filter by. Please ensure that you have selected the RCP/percentile and decade you want to both filter and style the data by. Select the cell you are interested in to view all values. To understand how to explore the data please refer to the New Users ESRI Storymap.

    What are the emission scenarios?
    The 21st Century time-mean sea level projections were produced using some of the future emission scenarios used in the IPCC Fifth Assessment Report (AR5): RCP2.6, RCP4.5 and RCP8.5, which are based on the concentration of greenhouse gases and aerosols in the atmosphere. RCP2.6 is an aggressive mitigation pathway, where greenhouse gas emissions are strongly reduced. RCP4.5 is an intermediate 'stabilisation' pathway, where greenhouse gas emissions are reduced by varying levels. RCP8.5 is a high emission pathway, where greenhouse gas emissions continue to grow unmitigated. Further information is available in the Understanding Climate Data ESRI Storymap and the RCP Guidance on the UKCP18 website.

    What are the percentiles?
    The UKCP18 sea-level projections are based on a large Monte Carlo simulation that represents 450,000 possible outcomes for global mean sea-level change. The Monte Carlo simulation is designed to sample the uncertainties across the different components of sea-level rise and the amount of warming we see for a given emissions scenario across CMIP5 climate models. The percentiles characterise the uncertainty in the Monte Carlo projections based on the statistical distribution of the 450,000 individual simulation members. For example, the 50th percentile represents the central estimate (median) among the model projections, the 95th percentile value means 95% of the model distribution is below that value, and the 5th percentile value means 5% of the model distribution is below that value. The range from the 5th to the 95th percentile represents the projection range among models and corresponds to the IPCC AR5 "likely range". It should be noted that there may be a greater than 10% chance that the real-world sea level rise lies outside this range.

    Data source
    This data is an extract of a larger dataset (every year and more percentiles) which is available on CEDA at https://catalogue.ceda.ac.uk/uuid/0f8d27b1192f41088cd6983e98faa46e. Data has been extracted from the v20221219 version (downloaded 17/04/2023) of three files:
    • seaLevelAnom_marine-sim_rcp26_ann_2007-2100.nc
    • seaLevelAnom_marine-sim_rcp45_ann_2007-2100.nc
    • seaLevelAnom_marine-sim_rcp85_ann_2007-2100.nc

    Useful links to find out more
    For a comprehensive description of the underpinning science, evaluation and results, see the UKCP18 Marine Projections Report (Palmer et al, 2018). For a discussion of ice sheet instability processes in the latest IPCC assessment report, see Fox-Kemper et al (2021). Technical note for the update to the underpinning data: https://www.metoffice.gov.uk/binaries/content/assets/metofficegovuk/pdf/research/ukcp/ukcp_tech_note_sea_level_mar23.pdf. Further information is in the Met Office Climate Data Portal Understanding Climate Data ESRI Storymap.
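    How ensemble percentiles like the 5th/50th/95th are read off a large simulation can be illustrated with a toy stand-in for the 450,000-member ensemble (the numbers below are invented, not UKCP18 data):

```python
# Toy ensemble: read off the 5th, 50th and 95th percentiles of a
# simulated distribution of sea-level anomalies (values in cm, invented).
import random
import statistics

rng = random.Random(0)
ensemble = [rng.gauss(40, 10) for _ in range(100_000)]

cuts = statistics.quantiles(ensemble, n=100)  # 99 percentile cut points
p5, p50, p95 = cuts[4], cuts[49], cuts[94]
# about 90% of the simulated outcomes lie between p5 and p95
```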

  15. Poverty and Inequality Platform (PIP): Percentiles

    • datacatalog.worldbank.org
    csv, stata
    Updated Jan 13, 2023
    Cite
    pip@worldbank.org (2023). Poverty and Inequality Platform (PIP): Percentiles [Dataset]. https://datacatalog.worldbank.org/search/dataset/0063646?version=3
    Explore at:
    csv, stata (available download formats)
    Dataset updated
    Jan 13, 2023
    Dataset provided by
    World Bank (http://topics.nytimes.com/top/reference/timestopics/organizations/w/world_bank/index.html)
    License

    https://datacatalog.worldbank.org/public-licenses?fragment=cc

    Description

    Survey years

    The Poverty and Inequality Platform: Percentiles database reports 100 points ranked according to the consumption or income distributions for country-year survey data available in the World Bank’s Poverty and Inequality Platform (PIP). There are, as of September 19, 2024, a total of 2,456 country-survey-year data points, which include 2,274 distributions based on microdata, binned data, or imputed/synthetic data, and 182 based on grouped data. For the grouped data, the percentiles are derived by fitting a parametric Lorenz distribution following Datt (1998). For ease of communication, all distributions are referred to as survey data henceforth, and the welfare variable is referred to as income.


    Details

    Each distribution reports 100 points per country per survey year ranked from the smallest (percentile 1) to the largest (percentile 100) income or consumption. For each income percentile, the database reports the following variables: the average daily per person income or consumption (avg_welfare); the income or consumption value for the upper threshold of the percentile (quantile); the share of the population in the percentile (which might deviate slightly from 1% due to coarseness in the raw data) (pop_share); and the share of income or consumption held by each percentile (welfare_share). In addition, the database reports the welfare measure (welfare_type) used in the survey data—income or consumption—and the region covered (reporting_level)—urban, rural, or national. The distributions are available in 2011 or 2017 PPP$.


    Stata code example

    Below is an example of how to use the database to generate an anonymous growth incidence curve for Bangladesh between 2005 and 2010:

    keep if country_code=="BGD" & reporting_level==1 & ///
        inlist(year,2005,2010)
    bys country_code percentile (year): ///
        gen growth05_10 = (avg_welfare/avg_welfare[_n-1] - 1) * 100
    twoway connected growth05_10 percentile, ytitle("%") ///
        title("Cumulative growth in Bangladesh, 2005-2010")


    Metadata

    Some metadata of the data set, such as the version of the data, can be found by typing char dir in the Stata console. Alternatively, please refer to this portal, which contains all the information available.


    PIP version date: 20250401 (updated June 05, 2025)



    Lineup years

    Not currently available

  16. 50th percentile U.S. male data

    • figshare.com
    xlsx
    Updated May 30, 2023
    Cite
    Manoj Gupta (2023). 50th percentile U.S. male data [Dataset]. http://doi.org/10.6084/m9.figshare.6143423.v1
    Explore at:
    xlsx (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Manoj Gupta
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table contains anthropometric data for the 50th percentile U.S. male. The data has been used to calculate the dimensions of truncated ellipsoidal finite element segments.

  17. Exploratory Extended Time-mean Sea Level Projections to 2300 (cm)

    • ai-climate-hackathon-global-community.hub.arcgis.com
    • keep-cool-global-community.hub.arcgis.com
    • +1more
    Updated Apr 12, 2022
    Cite
    Met Office (2022). Exploratory Extended Time-mean Sea Level Projections to 2300 (cm) [Dataset]. https://ai-climate-hackathon-global-community.hub.arcgis.com/datasets/TheMetOffice::exploratory-extended-time-mean-sea-level-projections-to-2300-cm
    Explore at:
    Dataset updated
    Apr 12, 2022
    Dataset authored and provided by
    Met Office
    Area covered
    Description

    Please note this dataset supersedes previous versions on the Climate Data Portal. It was uploaded following an update to the dataset in March 2023, which means sea-level rise is approximately 1 cm higher than in the original data release (i.e. the previous version available on this portal) for all UKCP18 site-specific sea-level projections at all timescales. For more details, please refer to the technical note.

    What does the data show?
    The exploratory extended time-mean sea-level projections to 2300 show the amount of sea-level change (in cm) for each coastal location (grid box) around the British Isles under several emission scenarios. Sea-level rise is the primary mechanism by which we expect coastal flood risk to change in the UK in the future. The amount of sea-level rise depends on the location around the British Isles and increases with higher emission scenarios. Here, we provide the relative time-mean sea-level projections to 2300, i.e. the local sea-level change experienced at a particular location compared to the 1981-2000 average, produced as part of UKCP18.

    For each grid box, time-mean sea-level change projections are provided for the end of each decade (e.g. 2010, 2020, 2030) for three emission scenarios, known as Representative Concentration Pathways (RCPs), and for three percentiles.

    The emission scenarios are:
    RCP2.6
    RCP4.5
    RCP8.5

    The percentiles are:
    5th percentile
    50th percentile
    95th percentile

    Important limitations of the data
    We cannot rule out substantial additional sea-level rise associated with ice-sheet instability processes that are not represented in the UKCP18 projections, as discussed in the recent IPCC Sixth Assessment Report (AR6). These exploratory projections show sea levels continuing to increase beyond 2100 even with large reductions in greenhouse gas emissions. These projections carry a greater degree of uncertainty than the 21st Century Projections and should therefore be treated as illustrative of potential future changes. They are designed to be used alongside the 21st Century projections by those interested in exploring post-2100 changes.

    What are the naming conventions and how do I explore the data?
    The data is supplied so that each row corresponds to the combination of an RCP emissions scenario and a percentile value, e.g. 'RCP45_50' is the RCP4.5 scenario at the 50th percentile. This can be viewed and filtered by the field 'RCP and Percentile'. The columns (fields) correspond to the end of each decade and are named by the sea-level anomaly at year x, e.g. '2050 seaLevelAnom' is the sea-level anomaly at 2050 compared to the 1981-2000 average.

    Please note that the styling and filtering options are independent of each other: the attribute you style the data by can be set differently from the one you filter by. Ensure that you have selected the RCP/percentile and decade you want to both filter and style the data by. Select the cell you are interested in to view all values. To understand how to explore the data, please refer to the New Users ESRI Storymap.

    What are the emission scenarios?
    The 21st Century time-mean sea-level projections were produced using some of the future emission scenarios from the IPCC Fifth Assessment Report (AR5): RCP2.6, RCP4.5 and RCP8.5, which are based on the concentration of greenhouse gases and aerosols in the atmosphere. RCP2.6 is an aggressive mitigation pathway, where greenhouse gas emissions are strongly reduced. RCP4.5 is an intermediate 'stabilisation' pathway, where greenhouse gas emissions are reduced by varying levels. RCP8.5 is a high-emission pathway, where greenhouse gas emissions continue to grow unmitigated. Further information is available in the Understanding Climate Data ESRI Storymap and the RCP Guidance on the UKCP18 website.

    What are the percentiles?
    The UKCP18 sea-level projections are based on a large Monte Carlo simulation representing 450,000 possible outcomes for global mean sea-level change. The simulation is designed to sample the uncertainties across the different components of sea-level rise, and the amount of warming we see for a given emissions scenario across CMIP5 climate models. The percentiles characterise the uncertainty in the Monte Carlo projections based on the statistical distribution of the 450,000 individual simulation members. For example, the 50th percentile represents the central estimate (median) among the model projections; the 95th percentile value means 95% of the model distribution is below that value, and the 5th percentile value means 5% of the model distribution is below that value. The range from the 5th to the 95th percentile represents the projection range among models and corresponds to the IPCC AR5 "likely range". Note that there may be a greater than 10% chance that real-world sea-level rise lies outside this range.

    Data source
    This data is an extract of a larger dataset (every year and more percentiles) available on CEDA at https://catalogue.ceda.ac.uk/uuid/a077f4058cda4cd4b37ccfbdf1a6bd29
    Data has been extracted from the v20221219 version (downloaded 17/04/2023) of three files:
    seaLevelAnom_marine-sim_rcp26_ann_2007-2300.nc
    seaLevelAnom_marine-sim_rcp45_ann_2007-2300.nc
    seaLevelAnom_marine-sim_rcp85_ann_2007-2300.nc

    Useful links to find out more
    For a comprehensive description of the underpinning science, evaluation and results, see the UKCP18 Marine Projections Report (Palmer et al., 2018). For a discussion of ice-sheet instability processes in the latest IPCC assessment report, see Fox-Kemper et al. (2021). Technical note for the update to the underpinning data: https://www.metoffice.gov.uk/binaries/content/assets/metofficegovuk/pdf/research/ukcp/ukcp_tech_note_sea_level_mar23.pdf
    Further information is in the Met Office Climate Data Portal Understanding Climate Data ESRI Storymap.
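The relationship between the percentile fields and the underlying ensemble can be illustrated with a short stdlib-Python sketch. The data below is a synthetic stand-in, not the actual UKCP18 simulation members; the helper and variable names are assumptions for illustration only.

```python
import random

def percentile(values, p):
    """p-th percentile of values, using linear interpolation between ranks."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Synthetic stand-in for the 450,000 Monte Carlo members of global mean
# sea-level change (cm) for one scenario and decade -- NOT real UKCP18
# output, just an illustration of how the three published percentiles
# summarise the member distribution.
random.seed(42)
members = [random.gauss(60.0, 15.0) for _ in range(450_000)]

central = percentile(members, 50)  # 50th percentile: the central (median) estimate
likely = (percentile(members, 5), percentile(members, 95))  # AR5 "likely range"
```

The 5th-95th spread widens with scenario and lead time in the real projections; here it simply reflects the spread of the synthetic members.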

  18. Satellite Imagery Fractional Cover Percentiles Annual - Bare Soil

    • data.gov.au
    basic, html, wms
    Updated Feb 11, 2020
    + more versions
    Geoscience Australia (2020). Satellite Imagery Fractional Cover Percentiles Annual - Bare Soil [Dataset]. https://data.gov.au/dataset/ds-neii-5dd808fb-fe84-4984-9514-f85e28ae7687
    Explore at:
    html, wms, basic
    Dataset updated
    Feb 11, 2020
    Dataset provided by
    Geoscience Australia (http://ga.gov.au/)
    Description

    Fractional Cover Percentiles version 2.2.0, 25 metre, 100 km tile, Australian Albers Equal Area projection (EPSG:3577). Data is only visible at higher resolutions; when zoomed out, the available area will be displayed as a shaded region.

    Fractional cover provides information about the proportions of green vegetation, non-green vegetation (including deciduous trees during autumn, dry grass, etc.) and bare areas for every 25 m x 25 m ground footprint, giving insight into how areas of dry vegetation and/or bare soil and green vegetation are changing over time. The percentile summaries are designed to make it easier to analyse and interpret fractional cover. Percentiles provide an indicator of where an observation sits relative to the rest of the observations for the pixel. For example, the 90th percentile is the value below which 90% of the observations fall.

    The fractional cover algorithm was developed by the Joint Remote Sensing Research Program; for more information, see data.auscover.org.au/xwiki/bin/view/Product+pages/Landsat+Fractional+Cover

    This product contains the percentage of bare soil per pixel at the 10th, 50th (median) and 90th percentiles for observations acquired in each full calendar year (1 January - 31 December) from 1987 to the most recent full calendar year. Fractional Cover products use Water Observations from Space (WOfS) to mask out areas of water, cloud and other phenomena. To be considered in the FCP product, a pixel must have had at least 10 clear observations over the year. For service status information, see https://status.dea.ga.gov.au
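The per-pixel summary described above (mask unclear observations, require at least 10 clear ones, then take the 10th/50th/90th percentiles) can be sketched in stdlib Python. This is an illustration of the inclusion rule, not the actual product code; the function name, input layout and threshold constant are assumptions.

```python
import statistics

CLEAR_MIN = 10  # a pixel needs at least 10 clear observations in the year

def bare_soil_percentiles(observations):
    """10th, 50th and 90th percentiles of bare-soil values for one pixel.

    `observations` is the pixel's time series for one calendar year as
    (bare_value, is_clear) pairs, where is_clear is False for water,
    cloud and other phenomena masked out (via WOfS in the real product).
    Returns None when fewer than CLEAR_MIN clear observations exist,
    mirroring the product's inclusion rule.
    """
    clear = [value for value, is_clear in observations if is_clear]
    if len(clear) < CLEAR_MIN:
        return None
    deciles = statistics.quantiles(clear, n=10, method="inclusive")
    return deciles[0], statistics.median(clear), deciles[8]
```

`statistics.quantiles(..., n=10, method="inclusive")` returns the nine interior decile cut points, so indices 0 and 8 are the 10th and 90th percentiles.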

  19. d

    Gridded uniform hazard peak ground acceleration data and 84th-percentile...

    • datasets.ai
    • +2more
    55
    Updated Sep 21, 2024
    + more versions
    Department of the Interior (2024). Gridded uniform hazard peak ground acceleration data and 84th-percentile peak ground acceleration data used to calculate the Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard for Alaska. [Dataset]. https://datasets.ai/datasets/gridded-uniform-hazard-peak-ground-acceleration-data-and-84th-percentile-peak-ground-accel-90d7f
    Explore at:
    55
    Dataset updated
    Sep 21, 2024
    Dataset authored and provided by
    Department of the Interior
    Description

    The Maximum Considered Earthquake Geometric Mean (MCEG) peak ground acceleration (PGA) values of the 2020 NEHRP Recommended Seismic Provisions and 2022 ASCE/SEI 7 Standard are derived from the downloadable data files. For each site class, the MCEG peak ground acceleration (PGA_M) is calculated via the following equation:

    PGA_M = min[ PGA_MUH, max( PGA_M84th, PGA_MDLL ) ]

    where
    PGA_MUH = uniform-hazard peak ground acceleration
    PGA_M84th = 84th-percentile peak ground acceleration
    PGA_MDLL = deterministic lower limit spectral acceleration
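The equation above can be applied directly per grid point; a one-line sketch (function and argument names are assumptions, not taken from the dataset):

```python
def pga_m(pga_uh, pga_84th, pga_dll):
    """MCEG peak ground acceleration for one grid point and site class:
    the lesser of the uniform-hazard value and the greater of the
    84th-percentile and deterministic-lower-limit values."""
    return min(pga_uh, max(pga_84th, pga_dll))
```

The inner max applies the deterministic lower limit as a floor on the 84th-percentile value; the outer min caps the result at the uniform-hazard value.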

  20. O

    Equity Report Data: Geography

    • data.sandiegocounty.gov
    Updated May 21, 2025
    + more versions
    Various (2025). Equity Report Data: Geography [Dataset]. https://data.sandiegocounty.gov/dataset/Equity-Report-Data-Geography/p6uw-qxpv
    Explore at:
    application/rssxml, application/rdfxml, csv, tsv, xml, application/geo+json, kmz, kml
    Dataset updated
    May 21, 2025
    Dataset authored and provided by
    Various
    Description

    This dataset contains the geographic data used to create maps for the San Diego County Regional Equity Indicators Report led by the Office of Equity and Racial Justice (OERJ). The full report can be found here: https://data.sandiegocounty.gov/stories/s/7its-kgpt

    Demographic data from the report can be found here: https://data.sandiegocounty.gov/dataset/Equity-Report-Data-Demographics/q9ix-kfws

    Filter by the Indicator column to select data for a particular indicator map.

    Export notes: Dataset may not automatically open correctly in Excel due to geospatial data. To export the data for geospatial analysis, select Shapefile or GEOJSON as the file type. To view the data in Excel, export as a CSV but do not open the file. Then, open a blank Excel workbook, go to the Data tab, select “From Text/CSV,” and follow the prompts to import the CSV file into Excel. Alternatively, use the exploration options in "View Data" to hide the geographic column prior to exporting the data.

    USER NOTES: 4/7/2025 - The maps and data have been removed for the Health Professional Shortage Areas indicator due to inconsistencies with the data source leading to some missing health professional shortage areas. We are working to fix this issue, including exploring possible alternative data sources.

    5/21/2025 - The following changes were made to the 2023 report data (Equity Report Year = 2023):
    Self-Sufficiency Wage - a typo in the indicator name was fixed (changed "sufficienct" to "sufficient") and the percent for one PUMA was corrected from 56.9 to 59.9 (PUMA = San Diego County (Northwest)--Oceanside City & Camp Pendleton). Notes were made consistent for all rows where geography = ZCTA. A note was added to all rows where geography = PUMA.
    Voter registration - the label "92054, 92051" was renamed to numerical order and is now "92051, 92054". Data was removed from the percentile column because the categories are not true percentiles.
    Employment - data was corrected to show the percent of the labor force that is employed (ages 16 and older). Previously, the data showed the percent of the population 16 years and older that is in the labor force.
    3- and 4-Year-Olds Enrolled in School - percents are now rounded to one decimal place.
    Poverty - the last two categories/percentiles changed because the 80th percentile cutoff was corrected by 0.01, and one ZCTA was reassigned to a different percentile as a result.
    Low Birthweight - the "33th percentile" label was corrected to "33rd percentile".
    Life Expectancy - corrected the category and percentile assignment for SRA CENTRAL SAN DIEGO.
    Parks and Community Spaces - corrected the category assignment for six SRAs.

    5/21/2025 - Data was uploaded for Equity Report Year 2025. The following changes were made relative to the 2023 report year. Adverse Childhood Experiences - geographic data was added for the 2025 report; no bins or corresponding percentiles were calculated due to the small number of geographic areas. Low Birthweight - no bins or corresponding percentiles were calculated due to the small number of geographic areas.

    Prepared by: Office of Evaluation, Performance, and Analytics and the Office of Equity and Racial Justice, County of San Diego, in collaboration with the San Diego Regional Policy & Innovation Center (https://www.sdrpic.org).
