License: http://geospatial-usace.opendata.arcgis.com/datasets/9defaa133d434c0a8bb82d5db54e1934/license.json
A sieve analysis (or gradation test) is a practice or procedure commonly used in civil engineering to assess the particle size distribution (also called gradation) of a granular material.
As part of the Sediment Analysis and Geo-App (SAGA), a series of data-processing web services is available to assist in computing sediment statistics from sieve-analysis results. The Calculate Percentile service returns one of the following percentiles: D5, D10, D16, D35, D50, D84, D90, D95.
Percentiles can also be computed for classification sub-groups: Overall (OVERALL), <62.5 µm (DS_FINE), 62.5–250 µm (DS_MED), and >250 µm (DS_COARSE).
Parameter #1: Input Sieve Size, Percent Passing, Sieve Units.
Parameter #2: Percentile
Parameter #3: Subgroup
Parameter #4: Outunits
This service is part of the Sediment Analysis and Geo-App (SAGA) Toolkit.
Looking for a comprehensive user interface to run this tool?
Go to SAGA Online to view this geoprocessing service with data already stored in the SAGA database.
This service can be used independently of the SAGA application and user interface, or the tool can be directly accessed through http://navigation.usace.army.mil/SEM/Analysis/GSD
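The service's exact interpolation method is not documented here; a minimal sketch of the usual approach to a Dp percentile (linear interpolation of percent passing against log grain size, with hypothetical gradation values) looks like this:

```python
import numpy as np

def d_percentile(sieve_sizes_mm, percent_passing, p):
    """Estimate the grain size Dp at which p percent of the sample passes,
    by linear interpolation of percent passing against log10(grain size)."""
    sizes = np.asarray(sieve_sizes_mm, dtype=float)
    passing = np.asarray(percent_passing, dtype=float)
    order = np.argsort(passing)  # np.interp requires increasing x values
    log_d = np.interp(p, passing[order], np.log10(sizes[order]))
    return 10.0 ** log_d

# Hypothetical gradation (coarsest sieve first): size in mm, percent passing
sizes = [4.75, 2.0, 0.85, 0.425, 0.25, 0.15, 0.075]
passing = [100, 88, 72, 55, 40, 25, 8]
d50 = d_percentile(sizes, passing, 50)  # median grain size, in mm
```

The same function returns any of the service's percentiles (D5 through D95) by changing `p`.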
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
Groundwater modelling in Bioregional Assessments was undertaken in a probabilistic manner. Multiple runs (200) of the model, using calibration-constrained parameter sets, were undertaken to predict drawdown impacts caused by the MBC BA baseline coal resource development. This resulted in 200 different sets of predicted drawdown impacts. This dataset gives the percentiles of drawdown corresponding to the baseline in ASCII grid format; percentiles from the 5th to the 95th are registered in this dataset.
The purpose of this dataset is to provide the base files, in the required format, used to produce some figures/maps in MBC 2.6.2.
This is a derived dataset. All inputs for this dataset were obtained from the groundwater model dataset. The outputs were derived from Monte Carlo runs to produce the percentile drawdowns for uncertainty analysis.
200 runs of the groundwater model corresponding to the OGIA base and BA baseline resulted in 200 model output files (each) storing the groundwater head (registered as the groundwater model dataset). The maximum drawdown simulated for each model run (over the entire simulation period) was extracted from these files. These outputs were then used together with custom-made scripts (all registered in this dataset) to identify the different percentiles of drawdown among the 200 runs.
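The percentile-extraction step across the 200 runs can be sketched with NumPy; the grid shape and drawdown values below are synthetic stand-ins, not the model's actual output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the 200 maximum-drawdown grids (one per model run)
runs = rng.gamma(shape=2.0, scale=1.5, size=(200, 50, 60))

# Percentile surfaces across the run axis, e.g. the 5th, 50th, and 95th
levels = [5, 50, 95]
surfaces = np.percentile(runs, levels, axis=0)  # shape (3, 50, 60)
```

Each resulting surface is a grid of the same shape as a single model run, ready to be written out as an ASCII grid.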
Bioregional Assessment Programme (2016) MBC Groundwater model baseline 5th to 95th percentile drawdown. Bioregional Assessment Derived Dataset. Viewed 25 October 2017, http://data.bioregionalassessments.gov.au/dataset/6ca506e1-0a2e-464d-a8de-8e931c8f01e8.
Derived From MBC Groundwater model
Derived From MBC Groundwater model mine footprints
Derived From MBC Groundwater model layer boundaries
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
List of Subdatasets:
Long-term data: 2000-2021
5th percentile (p05) monthly time-series: 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021
50th percentile (p50) monthly time-series: 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021
95th percentile (p95) monthly time-series: 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021
General Description
The monthly aggregated Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) dataset is derived from 250m 8d GLASS V6 FAPAR. The dataset is derived from Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance and LAI data, using several other FAPAR products (MODIS Collection 6, GLASS FAPAR V5, and PROBA-V1 FAPAR) to train a bidirectional long short-term memory (Bi-LSTM) model that estimates FAPAR. The dataset spans March 2000 to December 2021 and covers the entire globe. It can be used in many applications, such as land degradation modeling, land productivity mapping, and land potential mapping. The dataset includes:
Long-term:
Derived from the monthly time-series. This dataset provides a linear trend model for the p95 variable: slope beta mean (p95.beta_m), p-value for beta (p95.beta_pv), intercept alpha mean (p95.alpha_m), p-value for alpha (p95.alpha_pv), and coefficient of determination R2 (p95.r2_m).
Monthly time-series:
Monthly aggregation with three standard statistics: 5th percentile (p05), median (p50), and 95th percentile (p95). For each month, we aggregate all composites within that month plus one composite each before and after, ending up with 5 to 6 composites for a single month depending on the number of images within that month.
Data Details
Time period: March 2000 – December 2021
Type of data: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)
How the data was collected or derived: Derived from 250m 8d GLASS V6 FAPAR using Python running on a local HPC. The time-series analysis was computed using the Scikit-map Python package.
Statistical methods used: for the long-term, Ordinary Least Squares (OLS) on the p95 monthly variable; for the monthly time-series, percentiles 05, 50, and 95.
Limitations or exclusions in the data: The dataset does not include data for Antarctica.
Coordinate reference system: EPSG:4326
Bounding box (Xmin, Ymin, Xmax, Ymax): (-180.00000, -62.0008094, 179.9999424, 87.37000)
Spatial resolution: 1/480 d.d. = 0.00208333 (250m)
Image size: 172,800 x 71,698
File format: Cloud Optimized GeoTIFF (COG).
Support
If you discover a bug, artifact, or inconsistency, or if you have a question, please raise a GitHub issue: https://github.com/Open-Earth-Monitor/Global_FAPAR_250m/issues
Reference
Hackländer, J., Parente, L., Ho, Y.-F., Hengl, T., Simoes, R., Consoli, D., Şahin, M., Tian, X., Herold, M., Jung, M., Duveiller, G., Weynants, M., Wheeler, I., (2023?) "Land potential assessment and trend-analysis using 2000–2021 FAPAR monthly time-series at 250 m spatial resolution", submitted to PeerJ, preprint available at: https://doi.org/10.21203/rs.3.rs-3415685/v1
Name convention
To ensure consistency and ease of use across and within the projects, we follow the standard Open-Earth-Monitor file-naming convention. The convention uses 10 fields that describe important properties of the data, so users can search files and prepare data analyses without needing to open them. The fields are:
generic variable name: fapar = Fraction of Absorbed Photosynthetically Active Radiation
variable procedure combination: essd.lstm = Earth System Science Data with bidirectional long short-term memory (Bi-LSTM)
Position in the probability distribution / variable type: p05/p50/p95 = 5th/50th/95th percentile
Spatial support: 250m
Depth reference: s = surface
Time reference begin time: 20000301 = 2000-03-01
Time reference end time: 20211231 = 2021-12-31
Bounding box: go = global (without Antarctica)
EPSG code: epsg.4326 = EPSG:4326
Version code: v20230628 = 2023-06-28 (creation date)
The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
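The per-cell statistics described above can be sketched for a single grid cell; the stress series and critical stress below are synthetic, hypothetical values, not the model output:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic hourly bottom shear stress (Pa) at one grid cell; the real series
# comes from the wave and circulation model output, not a random draw
tau = rng.lognormal(mean=-2.0, sigma=0.8, size=24 * 365)

# The two statistical descriptors included with the database
tau_median = np.median(tau)
tau_95 = np.percentile(tau, 95)

# Mobility estimate: fraction of time stress exceeds a critical stress derived
# from observed sediment texture (tau_crit here is a hypothetical value)
tau_crit = 0.2
mobility_fraction = (tau > tau_crit).mean()
```

Repeating this over every grid cell yields the two-dimensional median and 95th-percentile stress maps.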
This dataset contains the geographic data used to create maps for the San Diego County Regional Equity Indicators Report led by the Office of Equity and Racial Justice (OERJ). The full report can be found here: https://data.sandiegocounty.gov/stories/s/7its-kgpt
Demographic data from the report can be found here: https://data.sandiegocounty.gov/dataset/Equity-Report-Data-Demographics/q9ix-kfws
Filter by the Indicator column to select data for a particular indicator map.
Export notes: Dataset may not automatically open correctly in Excel due to geospatial data. To export the data for geospatial analysis, select Shapefile or GEOJSON as the file type. To view the data in Excel, export as a CSV but do not open the file. Then, open a blank Excel workbook, go to the Data tab, select “From Text/CSV,” and follow the prompts to import the CSV file into Excel. Alternatively, use the exploration options in "View Data" to hide the geographic column prior to exporting the data.
USER NOTES: 4/7/2025 - The maps and data have been removed for the Health Professional Shortage Areas indicator due to inconsistencies with the data source leading to some missing health professional shortage areas. We are working to fix this issue, including exploring possible alternative data sources.
5/21/2025 - The following changes were made to the 2023 report data (Equity Report Year = 2023).
- Self-Sufficiency Wage: a typo in the indicator name was fixed (changed "sufficienct" to "sufficient"), and the percent for one PUMA was corrected from 56.9 to 59.9 (PUMA = San Diego County (Northwest)--Oceanside City & Camp Pendleton). Notes were made consistent for all rows where geography = ZCTA. A note was added to all rows where geography = PUMA.
- Voter Registration: the label "92054, 92051" was renamed to numerical order and is now "92051, 92054". Data was removed from the percentile column because the categories are not true percentiles.
- Employment: data was corrected to show the percent of the labor force that is employed (ages 16 and older). Previously, the data was the percent of the population 16 years and older that is in the labor force.
- 3- and 4-Year-Olds Enrolled in School: percents are now rounded to one decimal place.
- Poverty: the last two categories/percentiles changed because the 80th percentile cutoff was corrected by 0.01 and one ZCTA was reassigned to a different percentile as a result.
- Low Birthweight: the 33th percentile label was corrected to read 33rd percentile.
- Life Expectancy: corrected the category and percentile assignment for SRA CENTRAL SAN DIEGO.
- Parks and Community Spaces: corrected the category assignment for six SRAs.
5/21/2025 - Data was uploaded for Equity Report Year 2025. The following changes were made relative to the 2023 report year. Adverse Childhood Experiences - added geographic data for the 2025 report; no bins or corresponding percentiles were calculated due to the small number of geographic areas. Low Birthweight - no bins or corresponding percentiles were calculated due to the small number of geographic areas.
Prepared by: Office of Evaluation, Performance, and Analytics and the Office of Equity and Racial Justice, County of San Diego, in collaboration with the San Diego Regional Policy & Innovation Center (https://www.sdrpic.org).
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
General Description
The monthly aggregated Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) dataset is derived from 250m 8d GLASS V6 FAPAR. The dataset is derived from Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance and LAI data, using several other FAPAR products (MODIS Collection 6, GLASS FAPAR V5, and PROBA-V1 FAPAR) to train a bidirectional long short-term memory (Bi-LSTM) model that estimates FAPAR. The dataset spans March 2000 to December 2021 and covers the entire globe. It can be used in many applications, such as land degradation modeling, land productivity mapping, and land potential mapping. The dataset includes:
Long-term:
Derived from the monthly time-series. This dataset provides a linear trend model for the p95 variable: slope beta mean (p95.beta_m), p-value for beta (p95.beta_pv), intercept alpha mean (p95.alpha_m), p-value for alpha (p95.alpha_pv), and coefficient of determination R2 (p95.r2_m).
Monthly time-series:
Monthly aggregation with three standard statistics: 5th percentile (p05), median (p50), and 95th percentile (p95). For each month, we aggregate the composites inside that month plus one composite before and one after, giving about 5 to 6 composites for a single month depending on the number of images within that month.
Data Details
Time period: March 2000 – December 2021
Type of data: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)
How the data was collected or derived: Derived from 250m 8d GLASS V6 FAPAR using Python running on a local HPC. Cloudy pixels were removed, and only positive values of water vapor were considered when computing the statistics. The time-series gap-filling and time-series analysis were computed using the Scikit-map Python package.
Statistical methods used: for the long-term, trend analysis of the p95 monthly variable; for the monthly time-series, percentiles 05, 50, and 95.
Limitations or exclusions in the data: The dataset does not include data for Antarctica.
Coordinate reference system: EPSG:4326
Bounding box (Xmin, Ymin, Xmax, Ymax): (-180.00000, -62.0008094, 179.9999424, 87.37000)
Spatial resolution: 1/480 d.d. = 0.00208333 (250m)
Image size: 172,800 x 71,698
File format: Cloud Optimized GeoTIFF (COG).
Support
If you discover a bug, artifact, or inconsistency, or if you have a question, please use one of the following channels:
Technical issues and questions about the code: GitLab Issues
General questions and comments: LandGIS Forum
Name convention
To ensure consistency and ease of use across and within the projects, we follow the standard Open-Earth-Monitor file-naming convention. The convention uses 10 fields that describe important properties of the data, so users can search files and prepare data analyses without needing to open them. The fields are:
generic variable name: fapar = Fraction of Absorbed Photosynthetically Active Radiation
variable procedure combination: essd.lstm = Earth System Science Data with bidirectional long short-term memory (Bi-LSTM)
Position in the probability distribution / variable type: p05/p50/p95 = 5th/50th/95th percentile
Spatial support: 250m
Depth reference: s = surface
Time reference begin time: 20000301 = 2000-03-01
Time reference end time: 20211231 = 2021-12-31
Bounding box: go = global (without Antarctica)
EPSG code: epsg.4326 = EPSG:4326
Version code: v20230628 = 2023-06-28 (creation date)
M2SMNXPCT (or statM_2d_pct_Nx) is a 2-dimensional monthly data collection of percentile statistics derived from monthly Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2) datasets. V2 of this percentile data collection is computed against the 1991-2020 climatology and covers the period from January 1980 to present; V1, the original version, was computed against an earlier 30-year climatology (1981-2010). This collection consists of percentiles used to identify or characterize extreme weather events associated with temperature (maximum, mean, and minimum 2-m air temperature), as well as with precipitation (total precipitation).
MERRA-2 is the latest version of global atmospheric reanalysis for the satellite era produced by the NASA Global Modeling and Assimilation Office (GMAO) using the Goddard Earth Observing System Model (GEOS) version 5.12.4. The dataset covers the period 1980-present, with a latency of ~3 weeks after the end of the previous month.
Data Reprocessing: Please check "Records of MERRA-2 Data Reprocessing and Service Changes", linked from the "Documentation" tab on this page. Note that a reprocessed data filename differs from the original filename.
MERRA-2 Mailing List: Sign up to receive information on reprocessing of data, changes to tools and services, and data announcements from GMAO. Contact the GES DISC Help Desk (gsfc-dl-help-disc@mail.nasa.gov) to be added to the list.
Questions: If you have a question, please read the "MERRA-2 File Specification Document", the "MERRA-2 Data Access – Quick Start Guide", and the FAQs linked from the "Documentation" tab on this page. If these documents do not answer your question, you may post it to the NASA Earthdata Forum (forum.earthdata.nasa.gov) or email the GES DISC Help Desk (gsfc-dl-help-disc@mail.nasa.gov).
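As a sketch of how such percentile statistics are used (with synthetic numbers standing in for MERRA-2 values): the percentile rank of a new monthly-mean 2-m temperature within the 30-year climatology indicates how extreme that month is.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic January-mean 2-m air temperatures (K) for 1991-2020 (30 values)
clim = rng.normal(loc=272.0, scale=2.0, size=30)

t_month = 275.5  # a hypothetical new January-mean value to rank
percentile_rank = (clim < t_month).mean() * 100  # % of climatology below it

# A common convention: flag months at or above the 95th percentile as extreme
extreme_warm = percentile_rank >= 95
```

The actual collection precomputes the climatological percentile thresholds per grid cell, so users compare a month's value against stored thresholds rather than recomputing the climatology.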
Data for Figure 3.39 from Chapter 3 of the Working Group I (WGI) Contribution to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report (AR6). Figure 3.39 shows the observed and simulated Pacific Decadal Variability (PDV).
---------------------------------------------------
How to cite this dataset
---------------------------------------------------
When citing this dataset, please include both the data citation below (under 'Citable as') and the following citation for the report component from which the figure originates:
Eyring, V., N.P. Gillett, K.M. Achuta Rao, R. Barimalala, M. Barreiro Parrillo, N. Bellouin, C. Cassou, P.J. Durack, Y. Kosaka, S. McGregor, S. Min, O. Morgenstern, and Y. Sun, 2021: Human Influence on the Climate System. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 423–552, doi:10.1017/9781009157896.005.
---------------------------------------------------
Figure subpanels
---------------------------------------------------
The figure has six panels. Files are not separated according to the panels.
---------------------------------------------------
List of data provided
---------------------------------------------------
pdv.obs.nc contains:
- Observed SST anomalies associated with the PDV pattern
- Observed PDV index time series (unfiltered)
- Observed PDV index time series (low-pass filtered)
- Taylor statistics of the observed PDV patterns
- Statistical significance of the observed SST anomalies associated with the PDV pattern
pdv.hist.cmip6.nc contains (based on CMIP6 historical simulations):
- Simulated SST anomalies associated with the PDV pattern
- Simulated PDV index time series (unfiltered)
- Simulated PDV index time series (low-pass filtered)
- Taylor statistics of the simulated PDV patterns
pdv.hist.cmip5.nc contains (based on CMIP5 historical simulations):
- Simulated SST anomalies associated with the PDV pattern
- Simulated PDV index time series (unfiltered)
- Simulated PDV index time series (low-pass filtered)
- Taylor statistics of the simulated PDV patterns
pdv.piControl.cmip6.nc contains (based on CMIP6 piControl simulations):
- Simulated SST anomalies associated with the PDV pattern
- Simulated PDV index time series (unfiltered)
- Simulated PDV index time series (low-pass filtered)
- Taylor statistics of the simulated PDV patterns
pdv.piControl.cmip5.nc contains (based on CMIP5 piControl simulations):
- Simulated SST anomalies associated with the PDV pattern
- Simulated PDV index time series (unfiltered)
- Simulated PDV index time series (low-pass filtered)
- Taylor statistics of the simulated PDV patterns
---------------------------------------------------
Data provided in relation to figure
---------------------------------------------------
Panel a:
- ipo_pattern_obs_ref in pdv.obs.nc: shading
- ipo_pattern_obs_signif (dataset = 1) in pdv.obs.nc: cross markers
Panel b:
- Multimodel ensemble mean of ipo_model_pattern in pdv.hist.cmip6.nc: shading, with their sign agreement for hatching
Panel c:
- tay_stats (stat = 0, 1) in pdv.obs.nc: black dots
- tay_stats (stat = 0, 1) in pdv.hist.cmip6.nc: red crosses, and their multimodel ensemble mean for the red dot
- tay_stats (stat = 0, 1) in pdv.hist.cmip5.nc: blue crosses, and their multimodel ensemble mean for the blue dot
Panel d:
- Lag-1 autocorrelation of tpi in pdv.obs.nc: black horizontal lines in the left
  . ERSSTv5: dataset = 1
  . HadISST: dataset = 2
  . COBE-SST2: dataset = 3
- Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.piControl.cmip5.nc: blue open box-whisker in the left
- Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.piControl.cmip6.nc: red open box-whisker in the left
- Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.hist.cmip5.nc: blue filled box-whisker in the left
- Multimodel ensemble mean and percentiles of lag-1 autocorrelation of tpi in pdv.hist.cmip6.nc: red filled box-whisker in the left
- Lag-10 autocorrelation of tpi_lp in pdv.obs.nc: black horizontal lines in the right
  . ERSSTv5: dataset = 1
  . HadISST: dataset = 2
  . COBE-SST2: dataset = 3
- Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.piControl.cmip5.nc: blue open box-whisker in the right
- Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.piControl.cmip6.nc: red open box-whisker in the right
- Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.hist.cmip5.nc: blue filled box-whisker in the right
- Multimodel ensemble mean and percentiles of lag-10 autocorrelation of tpi_lp in pdv.hist.cmip6.nc: red filled box-whisker in the right
Panel e:
- Standard deviation of tpi in pdv.obs.nc: black horizontal lines in the left
  . ERSSTv5: dataset = 1
  . HadISST: dataset = 2
  . COBE-SST2: dataset = 3
- Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.piControl.cmip5.nc: blue open box-whisker in the left
- Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.piControl.cmip6.nc: red open box-whisker in the left
- Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.hist.cmip5.nc: blue filled box-whisker in the left
- Multimodel ensemble mean and percentiles of standard deviation of tpi in pdv.hist.cmip6.nc: red filled box-whisker in the left
- Standard deviation of tpi_lp in pdv.obs.nc: black horizontal lines in the right
  . ERSSTv5: dataset = 1
  . HadISST: dataset = 2
  . COBE-SST2: dataset = 3
- Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.piControl.cmip5.nc: blue open box-whisker in the right
- Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.piControl.cmip6.nc: red open box-whisker in the right
- Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.hist.cmip5.nc: blue filled box-whisker in the right
- Multimodel ensemble mean and percentiles of standard deviation of tpi_lp in pdv.hist.cmip6.nc: red filled box-whisker in the right
Panel f:
- tpi_lp in pdv.obs.nc: black curves
  . ERSSTv5: dataset = 1
  . HadISST: dataset = 2
  . COBE-SST2: dataset = 3
- tpi_lp in pdv.hist.cmip6.nc: 5th-95th percentiles in red shading, multimodel ensemble mean and its 5-95% confidence interval for red curves
- tpi_lp in pdv.hist.cmip5.nc: 5th-95th percentiles in blue shading, multimodel ensemble mean for blue curve
CMIP5 is the fifth phase of the Coupled Model Intercomparison Project. CMIP6 is the sixth phase of the Coupled Model Intercomparison Project. SST stands for Sea Surface Temperature.
---------------------------------------------------
Notes on reproducing the figure from the provided data
---------------------------------------------------
Multimodel ensemble means and percentiles of historical simulations of CMIP5 and CMIP6 are calculated after weighting individual members with the inverse of the ensemble size of the same model. ensemble_assign in each file provides the model number to which each ensemble member belongs. This weighting does not apply to the sign agreement calculation. piControl simulations from CMIP5 and CMIP6 consist of a single member from each model, so the weighting is not applied. Multimodel ensemble means of the pattern correlation in Taylor statistics in (c) and the autocorrelation of the index in (d) are calculated via Fisher z-transformation and back transformation.
---------------------------------------------------
Sources of additional information
---------------------------------------------------
The following weblinks are provided in the Related Documents section of this catalogue record:
- Link to the report component containing the figure (Chapter 3)
- Link to the Supplementary Material for Chapter 3, which contains details on the input data used in Table 3.SM.1
- Link to the code for the figure, archived on Zenodo
- Link to the figure on the IPCC AR6 website
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset contains data points deemed low-confidence by an ensemble of 8 ML models (scores above the 75th percentile for class 0 and below the 25th percentile for class 1). The data came from this dataset: https://www.kaggle.com/datasets/thedrcat/daigt-v2-train-dataset/
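One way to reproduce that selection rule, with synthetic scores standing in for the 8-model ensemble output:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic ensemble-mean probability of class 1, and the true labels
prob = rng.uniform(size=1000)
label = rng.integers(0, 2, size=1000)

# Low-confidence: class-0 rows scoring above the class-0 75th percentile,
# class-1 rows scoring below the class-1 25th percentile
p75_class0 = np.percentile(prob[label == 0], 75)
p25_class1 = np.percentile(prob[label == 1], 25)
low_confidence = ((label == 0) & (prob > p75_class0)) | \
                 ((label == 1) & (prob < p25_class1))
```

By construction this keeps roughly a quarter of each class, the rows the ensemble scores least consistently with their labels.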
The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 0.03 degree (2.5-3.75 km, depending on latitude) resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
The Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) contains a wealth of information that can be used for weather and climate studies. By combining the assimilation of observations with a frozen version of the Goddard Earth Observing System (GEOS), a global analysis is produced at an hourly temporal resolution spanning from January 1980 through present (Gelaro et al., 2017). It can be difficult to parse a multidecadal dataset such as MERRA-2 to evaluate the interannual variability of weather that occurs on a daily timescale, let alone determine the occurrence of an extreme weather event. This data collection provides climate statistics computed using MERRA-2 to assist in the analysis of extreme temperature and precipitation events and the accompanying large-scale meteorological patterns across a time period of over four decades. Find the product File Specification, Readme, References, and data tools under the "Documentation" tab. Sign up for the MERRA-2 mailing list to receive announcements on the latest data information, tools and services that become available, data announcements from the GMAO MERRA-2 project, and more. Contact GES DISC User Services (gsfc-dl-help-disc@mail.nasa.gov) to be added to the list.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
List of Subdatasets:
Long-term data: 2000-2021
5th percentile (p05) monthly time-series: 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021
50th percentile (p50) monthly time-series: 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021
95th percentile (p95) monthly time-series: 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021
General Description
The monthly aggregated Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) dataset is derived from 250m 8d GLASS V6 FAPAR. The dataset is derived from Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance and LAI data, using several other FAPAR products (MODIS Collection 6, GLASS FAPAR V5, and PROBA-V1 FAPAR) to train a bidirectional long short-term memory (Bi-LSTM) model that estimates FAPAR. The dataset spans March 2000 to December 2021 and covers the entire globe. It can be used in many applications, such as land degradation modeling, land productivity mapping, and land potential mapping. The dataset includes:
Long-term:
Derived from the monthly time-series. This dataset provides a linear trend model for the p95 variable: slope beta mean (p95.beta_m), p-value for beta (p95.beta_pv), intercept alpha mean (p95.alpha_m), p-value for alpha (p95.alpha_pv), and coefficient of determination R2 (p95.r2_m).
Monthly time-series:
Monthly aggregation with three standard statistics: 5th percentile (p05), median (p50), and 95th percentile (p95). For each month, we aggregate all composites within that month plus one composite each before and after, ending up with 5 to 6 composites for a single month depending on the number of images within that month.
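The aggregation step above can be sketched as follows (synthetic composite values and a small pixel window, chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)
# Six synthetic 8-day FAPAR composites covering one month plus one composite
# before and one after, for a small 4 x 4 pixel window
month_stack = rng.uniform(0.2, 0.9, size=(6, 4, 4))  # (composite, y, x)

# The three standard statistics, computed per pixel across the composites
p05 = np.percentile(month_stack, 5, axis=0)
p50 = np.percentile(month_stack, 50, axis=0)  # median
p95 = np.percentile(month_stack, 95, axis=0)
```

Each statistic collapses the composite axis, leaving one value per pixel per month.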
Data Details
Time period: March 2000 – December 2021
Type of data: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)
How the data was collected or derived: Derived from 250m 8d GLASS V6 FAPAR using Python running on a local HPC. The time-series analysis was computed using the Scikit-map Python package.
Statistical methods used: for the long-term, Ordinary Least Squares (OLS) on the p95 monthly variable; for the monthly time-series, percentiles 05, 50, and 95.
Limitations or exclusions in the data: The dataset does not include data for Antarctica.
Coordinate reference system: EPSG:4326
Bounding box (Xmin, Ymin, Xmax, Ymax): (-180.00000, -62.0008094, 179.9999424, 87.37000)
Spatial resolution: 1/480 d.d. = 0.00208333 (250m)
Image size: 172,800 x 71,698
File format: Cloud Optimized GeoTIFF (COG).
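The long-term trend fields can be illustrated with a per-pixel OLS fit; the sketch below uses SciPy's `linregress` on a synthetic p95 monthly series (note that `linregress` reports the slope p-value only, so the intercept p-value p95.alpha_pv would need a fuller OLS, e.g. via statsmodels):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
t = np.arange(262)  # months from 2000-03 through 2021-12
# Synthetic p95 FAPAR series with a small upward trend plus noise
y = 0.55 + 2e-4 * t + rng.normal(0.0, 0.02, size=t.size)

fit = linregress(t, y)
beta_m = fit.slope        # cf. p95.beta_m
alpha_m = fit.intercept   # cf. p95.alpha_m
beta_pv = fit.pvalue      # cf. p95.beta_pv (slope p-value)
r2_m = fit.rvalue ** 2    # cf. p95.r2_m
```

Applying this fit to every pixel's p95 series yields the slope, intercept, p-value, and R2 layers of the long-term subdataset.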
Support
If you discover a bug, artifact, or inconsistency, or if you have a question please raise a GitHub issue: https://github.com/Open-Earth-Monitor/Global_FAPAR_250m/issues
Reference
Hackländer, J., Parente, L., Ho, Y.-F., Hengl, T., Simoes, R., Consoli, D., Şahin, M., Tian, X., Herold, M., Jung, M., Duveiller, G., Weynants, M., Wheeler, I., (2023?) "Land potential assessment and trend-analysis using 2000–2021 FAPAR monthly time-series at 250 m spatial resolution", submitted to PeerJ, preprint available at: https://doi.org/10.21203/rs.3.rs-3415685/v1
Name convention
To ensure consistency and ease of use across and within projects, we follow the standard Open-Earth-Monitor file-naming convention. The convention uses 10 fields that describe important properties of the data, so users can search for files, prepare data analyses, etc., without needing to open the files. The fields are:
generic variable name: fapar = Fraction of Absorbed Photosynthetically Active Radiation
variable procedure combination: essd.lstm = Earth System Science Data with bidirectional long short-term memory (Bi-LSTM)
Position in the probability distribution / variable type: p05/p50/p95 = 5th/50th/95th percentile
Spatial support: 250m
Depth reference: s = surface
Time reference begin time: 20000301 = 2000-03-01
Time reference end time: 20211231 = 2021-12-31
Bounding box: go = global (without Antarctica)
EPSG code: epsg.4326 = EPSG:4326
Version code: v20230628 = 2023-06-28 (creation date)
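Put together, the 10 fields yield a single self-describing filename. A minimal sketch of composing and splitting such a name, assuming the fields are joined with underscores in the order listed (the exact separator and extension are defined by the Open-Earth-Monitor convention itself):

```python
FIELDS = ["variable", "procedure", "vartype", "support", "depth",
          "time_begin", "time_end", "bbox", "epsg", "version"]

def compose(values):
    """Join the ten field values into a filename stem."""
    return "_".join(values[f] for f in FIELDS)

def parse(stem):
    """Split a filename stem back into its ten named fields."""
    parts = stem.split("_")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

name = compose({"variable": "fapar", "procedure": "essd.lstm", "vartype": "p95",
                "support": "250m", "depth": "s", "time_begin": "20000301",
                "time_end": "20211231", "bbox": "go", "epsg": "epsg.4326",
                "version": "v20230628"})
```

Because each field is position-encoded, a directory of such files can be filtered (e.g. all p95 rasters) without opening a single file.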
The table only covers individuals who have some liability to Income Tax. The percentile points have been independently calculated on total income before tax and total income after tax.
These statistics are classified as accredited official statistics.
You can find more information about these statistics and collated tables for the latest and previous tax years on the Statistics about personal incomes page.
Supporting documentation on the methodology used to produce these statistics is available in the release for each tax year.
Note: comparisons over time may be affected by changes in methodology. Notably, there was a revision to the grossing factors in the 2018 to 2019 publication, which is discussed in the commentary and supporting documentation for that tax year. Further details, including a summary of significant methodological changes over time, data suitability and coverage, are included in the Background Quality Report.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
To analyze the salaries of company employees using Pandas, NumPy, and other tools, you can structure the analysis process into several steps:
Case Study: Employee Salary Analysis In this case study, we aim to analyze the salaries of employees across different departments and levels within a company. Our goal is to uncover key patterns, identify outliers, and provide insights that can support decisions related to compensation and workforce management.
Step 1: Data Collection and Preparation
Data Sources: The dataset typically includes employee ID, name, department, position, years of experience, salary, and additional compensation (bonuses, stock options, etc.).
Data Cleaning: We use Pandas to handle missing or incomplete data, remove duplicates, and standardize formats. Example: df.dropna() to handle missing salary information, and df.drop_duplicates() to eliminate duplicate entries.
Step 2: Data Exploration and Descriptive Statistics
Exploratory Data Analysis (EDA): Using Pandas to calculate basic statistics such as mean, median, mode, and standard deviation for employee salaries. Example: df['salary'].describe() provides an overview of the distribution of salaries.
Data Visualization: Leveraging tools like Matplotlib or Seaborn to visualize salary distributions, box plots to detect outliers, and bar charts for department-wise salary breakdowns. Example: sns.boxplot(x='department', y='salary', data=df) provides a visual representation of salary variation by department.
Step 3: Analysis Using NumPy
Calculating Salary Ranges: NumPy can be used to calculate the range, variance, and percentiles of salary data to identify the spread and skewness of the salary distribution. Example: np.percentile(df['salary'], [25, 50, 75]) helps identify salary quartiles.
Correlation Analysis: Identify the relationship between variables such as experience and salary using NumPy to compute correlation coefficients. Example: np.corrcoef(df['years_of_experience'], df['salary']) reveals whether experience is a significant factor in salary determination.
Step 4: Grouping and Aggregation
Salary by Department and Position: Using Pandas' groupby function, we can summarize salary information for different departments and job titles to identify trends or inequalities. Example: df.groupby('department')['salary'].mean() calculates the average salary per department.
Step 5: Salary Forecasting (Optional)
Predictive Analysis: Using tools such as Scikit-learn, we could build a regression model to predict future salary increases based on factors like experience, education level, and performance ratings.
Step 6: Insights and Recommendations
Outlier Identification: Detect employees earning significantly more or less than the average, which could signal inequities or high performers.
Salary Discrepancies: Highlight any salary discrepancies between departments or genders that may require further investigation.
Compensation Planning: Based on the analysis, suggest potential changes to the salary structure or bonus allocations to ensure fair compensation across the organization.
Tools Used:
Pandas: For data manipulation, grouping, and descriptive analysis.
NumPy: For numerical operations such as percentiles and correlations.
Matplotlib/Seaborn: For data visualization to highlight key patterns and trends.
Scikit-learn (Optional): For building predictive models if salary forecasting is included in the analysis.
This approach ensures a comprehensive analysis of employee salaries, providing actionable insights for human resource planning and compensation strategy.
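The core of Steps 2-4 can be combined into a short, runnable sketch; the column names match those used in the examples above, and the rows are illustrative sample data, not a real payroll.

```python
import numpy as np
import pandas as pd

# Illustrative sample data (the cleaned output of Step 1)
df = pd.DataFrame({
    "department": ["Engineering", "Engineering", "Sales", "Sales"],
    "years_of_experience": [2, 8, 3, 10],
    "salary": [70000, 110000, 50000, 90000],
})

# Step 2: descriptive statistics for the salary column
summary = df["salary"].describe()

# Step 3: quartiles and the experience-salary correlation
q25, q50, q75 = np.percentile(df["salary"], [25, 50, 75])
corr = np.corrcoef(df["years_of_experience"], df["salary"])[0, 1]

# Step 4: average salary per department
dept_mean = df.groupby("department")["salary"].mean()
```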
Roads and bridges are vulnerable to a range of stressors, such as flooding, heat waves, and other extreme events. The probability of these stressors impacting roads and bridges cannot be exactly calculated due to various uncertainties related to the scientific understanding of future environmental conditions. Resilient design methods find ways to account for the uncertainty in various stressors. This data set provides temperature and precipitation variables that can be used to help transportation professionals better characterize risks to transportation assets and provide more resilient designs. We applied daily climate projections to calculate 19 variables related to resilient roadway design. The source data set is the statistically downscaled CMIP6-LOCA2 (Localized Constructed Analogs, v20240915 version, Pierce et al. 2023), which includes temperature and precipitation projections from the Climate Model Intercomparison Program Phase 6 (CMIP6) for 27 models under the ssp245, ssp370, and ssp585 scenarios. The “unsplit Livneh” (Pierce et al., 2021) is used as the training data set for LOCA2. We adopt the v20240915 version of CMIP6-LOCA2 as it includes recent changes to the downscaling methodology to improve the representation of precipitation extreme events. The Python xclim (v0.56) library was used to process daily temperature and precipitation from CMIP6-LOCA2. These data are provided as climatology and percentile maps for the 1981-2010, 2025-2049, 2050-2074, and 2075-2099 periods. County-level time series from 1950-2100 are provided, as well as climatology and percentile summaries for the 1981-2010, 2025-2049, 2050-2074, and 2075-2099 periods. Users interested in 6 km grids are referred to the home pages of each of the respective sources. The county-level data sets are spatially averaged using the 2023 United States Census Bureau TIGER/Line Shapefiles (https://www.census.gov/geographies/mapping-files/time-series/geo/tiger-line-file.html).
The NetCDF time series files herein can be linked to the shapefile geometry using the “GEOID” field. The variables included are:
- Minimum daily minimum temperature (TN_min, units=degF, freq=monthly/annual)
- Minimum 7-day minimum temperature (TN7day_min, units=degF, freq=monthly/annual)
- Maximum daily maximum temperature (TX_max, units=degF, freq=monthly/annual)
- Maximum 7-day maximum temperature (TX7day_max, units=degF, freq=monthly/annual)
- Maximum number of consecutive days with maximum daily temperature above 95 degF (maximum_consecutive_warm_days_95F, units=d, freq=annual)
- Maximum number of consecutive days with maximum daily temperature above 100 degF (maximum_consecutive_warm_days_100F, units=d, freq=annual)
- Maximum number of consecutive days with maximum daily temperature above 105 degF (maximum_consecutive_warm_days_105F, units=d, freq=annual)
- Maximum number of consecutive days with maximum daily temperature above 110 degF (maximum_consecutive_warm_days_110F, units=d, freq=annual)
- Maximum Near-Surface Air Temperature (95th percentile) (TX95p_per, units=degF, freq=time window)
- Maximum Near-Surface Air Temperature (99th percentile) (TX99p_per, units=degF, freq=time window)
- Minimum Near-Surface Air Temperature (1st percentile) (TN01p_per, units=degF, freq=time window)
- Minimum Near-Surface Air Temperature (5th percentile) (TN05p_per, units=degF, freq=time window)
- Number of days with daily precipitation at or above 0.01 in/day (wetdays, units=d, freq=monthly/annual)
- Number of days with daily precipitation at or above 0.5 in/day (intense_wetdays, units=d, freq=monthly/annual)
- Maximum 1-day total precipitation (rx1day, units=in/d, freq=annual)
- Maximum 1-day total precipitation (50th percentile) (rx1day_50p_per, units=in/d, freq=time window, notes=See Processing Step 4 for details)
- Maximum 1-day total precipitation (90th percentile) (rx1day_90p_per, units=in/d, freq=time window, notes=See Processing Step 4 for details)
- Maximum 1-day total precipitation (estimated 90th percentile) (rx1day_90p_per_est, units=in/d, freq=time window, notes=See Processing Step 4 for details)
- Maximum 1-day total precipitation (96th percentile) (rx1day_96p_per, units=in/d, freq=time window, notes=See Processing Step 4 for details)
The 27 included CMIP6 GCMs are: ACCESS-CM2, ACCESS-ESM1-5, AWI-CM-1-1-MR, BCC-CSM2-MR, CESM2-LENS, CNRM-CM6-1, CNRM-CM6-1-HR, CNRM-ESM2-1, CanESM5, EC-Earth3, EC-Earth3-Veg, FGOALS-g3, GFDL-CM4, GFDL-ESM4, HadGEM3-GC31-LL, HadGEM3-GC31-MM, INM-CM4-8, INM-CM5-0, IPSL-CM6A-LR, KACE-1-0-G, MIROC6, MPI-ESM1-2-HR, MPI-ESM1-2-LR, MRI-ESM2-0, NorESM2-LM, NorESM2-MM, TaiESM1
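As an illustration of how a run-length variable such as maximum_consecutive_warm_days_95F is defined, here is a sketch of the per-gridcell metric (this is the definition only, not the xclim implementation used to produce the dataset):

```python
import numpy as np

def max_consecutive_above(tmax_degf, threshold_degf):
    """Longest run of consecutive days with daily max temperature above a threshold."""
    hot = np.asarray(tmax_degf) > threshold_degf
    best = run = 0
    for is_hot in hot:
        run = run + 1 if is_hot else 0  # extend the current run, or reset it
        best = max(best, run)
    return best
```

xclim exposes equivalent indicators that operate lazily on full xarray datasets; the loop above is just the scalar definition applied at one grid cell.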
The dataset provides the median, 25th percentile, and 75th percentile of carbon monoxide (CO) concentrations in Delhi, measured in moles per square meter and vertically integrated over a 9-day mean period. This data offers insights into the distribution and variability of CO levels over time.
The data, collected from July 10, 2018, to August 10, 2024, is sourced from the Tropomi Explorer.
CO is a harmful gas that can significantly impact human health. High levels of CO can lead to respiratory issues, cardiovascular problems, and even be life-threatening in extreme cases. Forecasting CO levels helps in predicting and managing air quality to protect public health.
CO is often emitted from combustion processes, such as those in vehicles and industrial activities. Forecasting CO levels can help in monitoring the impact of these sources and evaluating the effectiveness of emission control measures.
Accurate CO forecasts can assist in urban planning and pollution control strategies, especially in densely populated areas where air quality issues are more pronounced.
Columns and Data Description:
- system:time_start: the date when the CO measurements were taken.
- p25: likely the 25th percentile value of CO levels for the given date, providing insight into the lower range of the distribution.
- Median: the median CO level for the given date; the middle value of the dataset, representing a typical value.
- IQR: the Interquartile Range, which measures the spread of the middle 50% of the data. It is calculated as the difference between the 75th percentile (p75) and the 25th percentile (p25) values.
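Because only p25, the median, and the IQR are published, the 75th percentile can be recovered directly as p75 = p25 + IQR. A quick sketch with made-up values in the column layout described above:

```python
import pandas as pd

# Illustrative rows in the shape described above (values are made up)
co = pd.DataFrame({
    "system:time_start": ["2018-07-10", "2018-07-19"],
    "p25": [0.030, 0.032],
    "Median": [0.035, 0.036],
    "IQR": [0.010, 0.008],
})

# Recover the 75th percentile from the published columns
co["p75"] = co["p25"] + co["IQR"]
```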
The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is a set of raster tidal statistics for the Australian region at a 1/32 degree resolution derived from the EOT20 global tidal model. This dataset provides rasters for Lowest Predicted Tide (LPT), Highest Predicted Tide (HPT), Mean Low Spring Water (MLSW), Mean High Spring Water (MHSW), tidal range (HPT-LPT), tidal percentiles (1, 2, 5, 10, 20, 50, 80, 90, 95, 98, 99) and monthly climatologies (median over all simulated years for a given month) of LPT, Mean, and HPT.
Lowest Predicted Tide is a proxy for Lowest Astronomical Tide (LAT) and Highest Predicted Tide is a proxy for Highest Astronomical Tide (HAT) estimated over a shorter simulation period of typically 5 years.
The tidal modelling statistics are all represented as GeoTiff images to allow easy use in subsequent analysis and visualisation in GIS tools. The monthly climatology datasets (LPT, Mean and HPT) are stored as multi-band images with one band per month. The tidal percentiles are also stored as multi-band images with one band per percentile value.
This dataset also includes a comparison between the EOT20 tidal model and 70 tide gauges around Australia. This includes a comparison of the monthly min, mean and maximum over the last 19 years of data, and a monthly climatology over the full tidal record. Generated plots of the 70 stations comparisons are available in the data download section.
All dataset products and validation analysis were performed using Python scripts, allowing this dataset to be fully reproduced. The source code is available from GitHub.
Limitations:
This modelling is limited by the accuracy of the EOT20 global model. Tidal statistics were calculated using a 30-minute time increment over a 5-year period (2020-2025). (Note: the initial release covered only a single year, 2023.) A simulation period of 19 years is needed to capture the full lunar cycle; the calculation period was shortened due to the high computational cost of processing the full 19-year cycle. Based on reviewing the tide gauge data and the matching tidal predictions, we find that over the last 19 years the largest tidal ranges occurred in 2004-2006 and 2021-2024, and the lowest tidal ranges were in the middle of this period, from 2012-2014. By modelling 2020-2025, the statistics are therefore based on the part of the cycle with the highest tidal ranges.
The EOT20 tidal model provides tidal constituents on a grid with 1/8 degree resolution. Land pixels are excluded from the model grid, which means that nearshore areas, particularly those associated with river mouths or bays, are excluded from the model. In this dataset we infill these areas with model parameters extrapolated using nearest-neighbour interpolation. This will result in increased error in the tidal estimates in these locations.
This dataset is not suitable as a tidal datum for administrative and jurisdictional extents. This dataset has not gone through enough validation for its use in critical decisions. It was developed to assist in understanding tidal conditions experienced by shallow marine environments.
Format:
GeoTiff raster files (EPSG:4326). One file per statistic. Percentiles contain 11 bands corresponding to 1, 2, 5, 10, 20, 50, 80, 90, 95, 98, and 99 percentile of time exposure. Monthly_LPT, Monthly_Mean, and Monthly_HPT each have 12 bands corresponding to months of the year, where band 1 is January, band 2 is February, etc.
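When loading these multi-band rasters, the band order carries the meaning. A small helper for mapping percentile values and month names to 1-based band indices, assuming the band ordering stated above:

```python
PERCENTILES = [1, 2, 5, 10, 20, 50, 80, 90, 95, 98, 99]
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def percentile_band(p):
    """1-based band index of a given percentile in the percentile raster."""
    return PERCENTILES.index(p) + 1

def month_band(name):
    """1-based band index of a month in the Monthly_LPT/Mean/HPT rasters."""
    return MONTHS.index(name) + 1
```

With rasterio, for example, the median tide surface would then be read as `rasterio.open(path).read(percentile_band(50))`.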
Dataset relevance:
This section aims to improve the discoverability of the dataset by highlighting key areas where this dataset is relevant and what it shows.
This dataset provides plots of the monthly lowest, mean, and highest tide as a time series and as a climatology (where all years are overlaid and the monthly result is the median of all values for that month) for 70 tide gauges around Australia (based on data made available through the BOM website), compared with the same tidal values predicted by the EOT20 tidal model. This includes the following locations:
- Queensland: Cape Ferguson, Rosslyn Bay, Booby Island, Bowen, Brisbane Bar, Bundaberg (Burnett Heads), Cairns, Gladstone, Gold Coast Operations Base, Goods Island, Hay Point, Ince Point, Karumba, Lucinda (Offshore), Mackay Outer Harbour, Mooloolaba, Nardana Patches, Mourilyan Harbour, Port Alma, Port Douglas, Shute Harbour, Townsville, Turtle Head, Urangan, Weipa (Humbug Point), Thursday Island
- New South Wales: Port Kembla, Botany Bay, Eden, Lord Howe Island, Newcastle, Norfolk Island, Fort Denison (Sydney), Yamba
- Victoria: Portland, Stony Point, Lorne
- South Australia: Port Stanvac, Thevenard, Port Adelaide (Outer Harbor), Port Giles, Port Lincoln, Port Pirie, Victor Harbor, Wallaroo, Whyalla
- Western Australia: Esperance, Hillarys, Broome, Albany, Bunbury (Inner), Cape Lambert, Carnarvon, Exmouth, Fremantle, Geraldton, King Bay, Onslow, Port Hedland, Wyndham
- Tasmania: Burnie, Spring Bay, Low Head, Mersey River (Devonport)
- Northern Territory: Darwin, Milner Bay - Groote Eylandt
- Indian Ocean: Cocos - Keeling Islands (Home Island)
The tidal range product shows that the strongest tides occur in the Kimberley, in the northern portions of the Pilbara along Eighty Mile Beach, in Joseph Bonaparte Gulf, between Darwin and the Tiwi Islands, and in Broad Sound in Queensland. Areas that have a low tidal range include the coast south of Ningaloo Reef and much of the Gulf of Carpentaria.
References:
Bishop-Taylor, R., Sagar, S., Phillips, C., & Newey, V. (2024). eo-tides: Tide modelling tools for large-scale satellite earth observation analysis. https://github.com/GeoscienceAustralia/eo-tides
Sutterley, T. C., Alley, K., Brunt, K., Howard, S., Padman, L., Siegfried, M. (2017) pyTMD: Python-based tidal prediction software. 10.5281/zenodo.5555395
Hart-Davis Michael, Piccioni Gaia, Dettmering Denise, Schwatke Christian, Passaro Marcello, Seitz Florian (2021). EOT20 - A global Empirical Ocean Tide model from multi-mission satellite altimetry. SEANOE. https://doi.org/10.17882/79489
Hart-Davis Michael G., Piccioni Gaia, Dettmering Denise, Schwatke Christian, Passaro Marcello, Seitz Florian (2021). EOT20: a global ocean tide model from multi-mission satellite altimetry. Earth System Science Data, 13 (8), 3869-3884. https://doi.org/10.5194/essd-13-3869-2021
Change log:
As updates to this dataset are published, the changes will be recorded here.
- 2025-02-27 v1: Initial release of the dataset that is based on the northern-au-test.yaml. This has a limited spatial extent and 1-year simulation. This is only a spatial subset of the full dataset. This initial release has been archived (https://nextcloud.eatlas.org.au/apps/sharealias/a/AU_NESP-MaC-3-17_AIMS_EOT20-tidal-stats_v1).
- 2025-03-18 v1-1: Release of the full Australian geographic scope calculated over 5 years.
https://opensource.org/license/MIT
Algorithm (.php) for retrieving the co-citation set of a scholarly output by DOI and calculating CPR for it. Configuration, database operations, and input-sanitizing code are omitted. Also includes example data and statistical analyses used in Seppänen et al. (2020). For context see: Seppänen et al. (2020), "Co-Citation Percentile Rank and JYUcite: a new network-standardized output-level citation influence metric", https://oscsolutions.cc.jyu.fi/jyucite