39 datasets found
  1. Find Outliers Minnesota Hospitals

    • umn.hub.arcgis.com
    Updated May 6, 2020
    Cite
    University of Minnesota (2020). Find Outliers Minnesota Hospitals [Dataset]. https://umn.hub.arcgis.com/maps/UMN::find-outliers-minnesota-hospitals
    Explore at:
    Dataset updated
    May 6, 2020
    Dataset authored and provided by
    University of Minnesota
    Area covered
    Description

    The following report outlines the workflow used to optimize your Find Outliers result:

    Initial Data Assessment
    - There were 137 valid input features.
    - There were 4 outlier locations; these were not used to compute the polygon cell size.

    Incident Aggregation
    - The polygon cell size was 49251.0000 meters.
    - The aggregation process resulted in 72 weighted areas.
    - Incident count properties: Min 1.0000; Max 21.0000; Mean 1.9028; Std. Dev. 2.4561.

    Scale of Analysis
    - The optimal fixed distance band was selected based on peak clustering found at 94199.9365 meters.

    Outlier Analysis
    - The random reference distribution was created with 499 permutations.
    - 3 output features are statistically significant based on an FDR correction for multiple testing and spatial dependence.
    - 2 features are statistically significant high outliers; 0 are statistically significant low outliers.
    - 1 feature is part of a statistically significant high cluster; 0 features are part of statistically significant low clusters.

    Output
    - Pink output features are part of a cluster of high values.
    - Light blue output features are part of a cluster of low values.
    - Red output features represent high outliers within a cluster of low values.
    - Blue output features represent low outliers within a cluster of high values.
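    The colour legend above encodes the four quadrants of a local Moran scatterplot (Anselin's Local Moran's I, the cluster-and-outlier statistic behind these categories). A minimal sketch of the quadrant classification, assuming a precomputed neighbour list; the 499-permutation significance test and the FDR correction from the report are omitted for brevity:

```python
# Sketch of Local Moran's I quadrant classification (HH/LL/HL/LH),
# the logic behind the pink/light-blue/red/blue legend above.
# Hypothetical inputs; the real tool also runs a permutation test
# with an FDR correction before labelling features, omitted here.
from statistics import mean, pstdev

def local_moran_categories(values, neighbors):
    """values: list of floats; neighbors: dict index -> list of indices."""
    mu, sigma = mean(values), pstdev(values)
    z = [(v - mu) / sigma for v in values]
    cats = []
    for i, zi in enumerate(z):
        # Spatial lag: average z-score of the neighbours.
        lag = mean(z[j] for j in neighbors[i])
        if zi >= 0 and lag >= 0:
            cats.append("HH")  # pink: cluster of high values
        elif zi < 0 and lag < 0:
            cats.append("LL")  # light blue: cluster of low values
        elif zi >= 0:
            cats.append("HL")  # red: high outlier among low values
        else:
            cats.append("LH")  # blue: low outlier among high values
    return cats

# Toy data: three high values clustered together, two lows, one mix.
vals = [9.0, 8.5, 9.2, 1.0, 1.2, 8.8]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [1, 2], 4: [0, 5], 5: [0, 4]}
cats = local_moran_categories(vals, nbrs)
```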

  2. Data from: Aircraft Proximity Maps Based on Data-Driven Flow Modeling

    • catalog.data.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    • +1more
    Updated Apr 10, 2025
    Cite
    Dashlink (2025). Aircraft Proximity Maps Based on Data-Driven Flow Modeling [Dataset]. https://catalog.data.gov/dataset/aircraft-proximity-maps-based-on-data-driven-flow-modeling
    Explore at:
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    Dashlink
    Description

    With the forecast increase in air traffic demand over the next decades, it is imperative to develop tools to provide traffic flow managers with the information required to support decision making. In particular, decision-support tools for traffic flow management should aid in limiting controller workload and complexity, while supporting increases in air traffic throughput. While many decision-support tools exist for short-term traffic planning, few have addressed the strategic needs for medium- and long-term planning for time horizons greater than 30 minutes. This paper seeks to address this gap through the introduction of 3D aircraft proximity maps that evaluate the future probability of presence of at least one or two aircraft at any given point of the airspace. Three types of proximity maps are presented: presence maps that indicate the local density of traffic; conflict maps that determine locations and probabilities of potential conflicts; and outlier maps that evaluate the probability of conflict due to aircraft not belonging to dominant traffic patterns.

    These maps provide traffic flow managers with information relating to the complexity and difficulty of managing an airspace. The intended purpose of the maps is to anticipate how aircraft flows will interact, and how outliers impact the dominant traffic flow for a given time period. This formulation is able to predict which "critical" regions may be subject to conflicts between aircraft, thereby requiring careful monitoring. These probabilities are computed using a generative aircraft flow model. Time-varying flow characteristics, such as geometrical configuration, speed, and probability density function of aircraft spatial distribution within the flow, are determined from archived Enhanced Traffic Management System data, using a tailored clustering algorithm. Aircraft not belonging to flows are identified as outliers.

  3. Spatial detection of outlier loci with Moran eigenvector maps (MEM)

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jan 9, 2017
    Cite
    Helene H. Wagner; Mariana Chávez-Pesqueira; Brenna R. Forester (2017). Spatial detection of outlier loci with Moran eigenvector maps (MEM) [Dataset]. http://doi.org/10.5061/dryad.b12kk
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 9, 2017
    Dataset provided by
    Duke University
    University of Toronto
    Authors
    Helene H. Wagner; Mariana Chávez-Pesqueira; Brenna R. Forester
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    The spatial signature of microevolutionary processes structuring genetic variation may play an important role in the detection of loci under selection. However, the spatial location of samples has not yet been used to quantify this. Here, we present a new two-step method of spatial outlier detection at the individual and deme levels using the power spectrum of Moran eigenvector maps (MEM). The MEM power spectrum quantifies how the variation in a variable, such as the frequency of an allele at a SNP locus, is distributed across a range of spatial scales defined by MEM spatial eigenvectors. The first step (Moran spectral outlier detection: MSOD) uses genetic and spatial information to identify outlier loci by their unusual power spectrum. The second step uses Moran spectral randomization (MSR) to test the association between outlier loci and environmental predictors, accounting for spatial autocorrelation. Using simulated data from two published papers, we tested this two-step method in different scenarios of landscape configuration, selection strength, dispersal capacity and sampling design. Under scenarios that included spatial structure, MSOD alone was sufficient to detect outlier loci at the individual and deme levels without the need for incorporating environmental predictors. Follow-up with MSR generally reduced (already low) false-positive rates, though in some cases led to a reduction in power. The results were surprisingly robust to differences in sample size and sampling design. Our method represents a new tool for detecting potential loci under selection with individual-based and population-based sampling by leveraging spatial information that has hitherto been neglected.
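    The two-step method can be sketched in miniature: the MEM power spectrum of a locus is the vector of squared correlations between its (centred) allele frequencies and each orthonormal MEM eigenvector, and MSOD flags loci whose spectrum deviates from the across-locus average. A simplified, hypothetical sketch assuming precomputed MEM eigenvectors; the published method derives them from the spatial weights matrix and adds significance calibration plus the MSR follow-up, all omitted here:

```python
# Simplified sketch of Moran spectral outlier detection (MSOD).
# `mems` are assumed precomputed, orthonormal spatial eigenvectors
# (one value per sampling location); names and data are hypothetical.
import math

def power_spectrum(values, mems):
    """Squared correlation of a centred variable with each MEM axis."""
    mu = sum(values) / len(values)
    centred = [v - mu for v in values]
    norm = math.sqrt(sum(c * c for c in centred))
    return [(sum(c * a for c, a in zip(centred, axis)) / norm) ** 2
            for axis in mems]

def msod_scores(loci, mems):
    """Deviation of each locus's spectrum from the mean spectrum."""
    spectra = {name: power_spectrum(v, mems) for name, v in loci.items()}
    k = len(mems)
    mean_spec = [sum(s[i] for s in spectra.values()) / len(spectra)
                 for i in range(k)]
    return {name: sum(abs(s[i] - mean_spec[i]) for i in range(k))
            for name, s in spectra.items()}

# Toy data: 4 locations, two orthonormal MEM axes; locus "sel" tracks
# the broad-scale axis while the neutral loci track the fine-scale one.
mems = [[0.5, 0.5, -0.5, -0.5], [0.5, -0.5, 0.5, -0.5]]
loci = {"sel": [0.9, 0.9, 0.1, 0.1],
        "n1": [0.5, 0.4, 0.5, 0.4],
        "n2": [0.6, 0.5, 0.6, 0.5]}
scores = msod_scores(loci, mems)
```

    The locus with the most unusual power spectrum ("sel" in this toy example) is the outlier candidate that would then be passed to the MSR association test.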

  4. Mapping Clusters: Introduction to Statistical Cluster Analysis

    • hub.arcgis.com
    Updated Nov 7, 2019
    Cite
    State of Delaware (2019). Mapping Clusters: Introduction to Statistical Cluster Analysis [Dataset]. https://hub.arcgis.com/documents/2d93a3a8530b4bbb94e614f7a3a8f8d6
    Explore at:
    Dataset updated
    Nov 7, 2019
    Dataset authored and provided by
    State of Delaware
    Description

    In this course, you are introduced to the Hot Spot Analysis tools and the Cluster and Outlier Analysis tools. You will discover how these analysis tools can help you make smarter decisions. You will also learn the foundational skills and concepts required to begin your analysis and interpret your results.

    Goals
    - Explain how statistical cluster analysis can help you make smarter decisions.
    - Describe key concepts related to statistical cluster analysis.
    - Describe the Hot Spot Analysis and Cluster and Outlier Analysis tools.

  5. Find Outliers Percent of households with income below the Federal Poverty...

    • uscssi.hub.arcgis.com
    Updated Dec 5, 2021
    Cite
    Spatial Sciences Institute (2021). Find Outliers Percent of households with income below the Federal Poverty Level [Dataset]. https://uscssi.hub.arcgis.com/maps/USCSSI::find-outliers-percent-of-households-with-income-below-the-federal-poverty-level
    Explore at:
    Dataset updated
    Dec 5, 2021
    Dataset authored and provided by
    Spatial Sciences Institute
    Area covered
    Description

    The following report outlines the workflow used to optimize your Find Outliers result:

    Initial Data Assessment
    - There were 1684 valid input features.
    - POVERTY properties: Min 0.0000; Max 91.8000; Mean 18.9902; Std. Dev. 12.7152.
    - There were 22 outlier locations; these were not used to compute the optimal fixed distance band.

    Scale of Analysis
    - The optimal fixed distance band was based on the average distance to 30 nearest neighbors: 3709.0000 meters.

    Outlier Analysis
    - The random reference distribution was created with 499 permutations.
    - 1155 output features are statistically significant based on an FDR correction for multiple testing and spatial dependence.
    - 68 features are statistically significant high outliers; 84 are statistically significant low outliers.
    - 557 features are part of statistically significant low clusters; 446 are part of statistically significant high clusters.

    Output
    - Pink output features are part of a cluster of high POVERTY values.
    - Light blue output features are part of a cluster of low POVERTY values.
    - Red output features represent high outliers within a cluster of low POVERTY values.
    - Blue output features represent low outliers within a cluster of high POVERTY values.
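    The "FDR correction for multiple testing" named in these reports follows the false discovery rate idea. A generic Benjamini-Hochberg sketch of that procedure; the ArcGIS implementation additionally accounts for spatial dependence, which is not reproduced here:

```python
# Generic Benjamini-Hochberg FDR correction, the family of procedure
# referenced by "FDR correction for multiple testing" in the report.
# (ArcGIS additionally adjusts for spatial dependence; omitted here.)

def fdr_significant(p_values, alpha=0.05):
    """Return a boolean per p-value: significant under BH at level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    threshold_rank = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            threshold_rank = rank
    # ... then everything at or below that rank is declared significant.
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[i] = True
    return significant

# Hypothetical pseudo p-values from a batch of local tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
significant = fdr_significant(pvals)
```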

  6. Data from: Localizing FST outliers on a QTL map reveals evidence for large...

    • data.niaid.nih.gov
    • search.dataone.org
    • +2more
    zip
    Updated Aug 17, 2012
    Cite
    Sara Via; Gina Conte; Casey Mason-Foley; Kelly Mills (2012). Localizing FST outliers on a QTL map reveals evidence for large genomic regions of reduced gene exchange during speciation-with-gene-flow [Dataset]. http://doi.org/10.5061/dryad.9cf75
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 17, 2012
    Dataset provided by
    University of Maryland, College Park
    Authors
    Sara Via; Gina Conte; Casey Mason-Foley; Kelly Mills
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Area covered
    New York, North America
    Description

    Populations that maintain phenotypic divergence in sympatry typically show a mosaic pattern of genomic divergence, requiring a corresponding mosaic of genomic isolation (reduced gene flow). However, mechanisms that could produce the genomic isolation required for divergence-with-gene-flow have barely been explored, apart from the traditional localized effects of selection and reduced recombination near centromeres or inversions. By localizing FST outliers from a genome scan of wild pea aphid host races on a Quantitative Trait Locus (QTL) map of key traits, we test the hypothesis that between-population recombination and gene exchange are reduced over large ‘divergence hitchhiking’ (DH) regions. As expected under divergence hitchhiking, our map confirms that QTL and divergent markers cluster together in multiple large genomic regions. Under divergence hitchhiking, the nonoutlier markers within these regions should show signs of reduced gene exchange relative to nonoutlier markers in genomic regions where ongoing gene flow is expected. We use this predicted difference among nonoutliers to perform a critical test of divergence hitchhiking. Results show that nonoutlier markers within clusters of FST outliers and QTL resolve the genetic population structure of the two host races nearly as well as the outliers themselves, while nonoutliers outside DH regions reveal no population structure, as expected if they experience more gene flow. These results provide clear evidence for divergence hitchhiking, a mechanism that may dramatically facilitate the process of speciation-with-gene-flow. They also show the power of integrating genome scans with genetic analyses of the phenotypic traits involved in local adaptation and population divergence.

  7. Heat flow maps and supporting data for the Great Basin, USA

    • data.usgs.gov
    • gimi9.com
    • +1more
    Updated Jan 23, 2025
    Cite
    Jacob DeAngelo; Erick Burns; Emilie Gentry; Joseph Batir; Cary Lindsey; Stanley Mordensky (2025). Heat flow maps and supporting data for the Great Basin, USA [Dataset]. http://doi.org/10.5066/P9BZPVUC
    Explore at:
    Dataset updated
    Jan 23, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Jacob DeAngelo; Erick Burns; Emilie Gentry; Joseph Batir; Cary Lindsey; Stanley Mordensky
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Time period covered
    Jun 30, 2022
    Area covered
    Great Basin, United States
    Description

    Geothermal well data from Southern Methodist University (SMU, 2021) and the U.S. Geological Survey (Sass et al., 2005) were used to create maps of estimated background conductive heat flow across the greater Great Basin region of the western US. The heat flow maps in this data release were created using a process that sought to remove hydrothermal convective influence from predictions of background conductive heat flow. The maps were constructed with a custom iterative weighted-regression process, where convectively influenced outliers were de-emphasized by assigning lower weights to measurements that are very different from the estimated local trend (e.g., local convective influence). The weighted-regression algorithm is 2D LOESS (locally estimated scatterplot smoothing; Cleveland et al., 1992), which was used for local linear regression, and smoothness was controlled by varying the number of nearby points used for each local interpolation. Three maps are i ...
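    The iterative de-emphasis of convectively influenced outliers described above is the robust-LOESS idea: fit locally, downweight points with large residuals, refit. A hypothetical one-dimensional sketch (a local weighted mean with tricube distance weights and bisquare robustness weights), not the USGS implementation:

```python
# 1-D sketch of robust locally weighted smoothing: points far from the
# local estimate get bisquare-downweighted, so outliers (standing in for
# convectively influenced heat-flow values) barely move the fit.
# Hypothetical simplification of the 2D LOESS workflow described above.

def robust_local_mean(x, y, x0, bandwidth, iterations=3):
    # Tricube distance weights around the evaluation point x0.
    w = [max(0.0, 1 - (abs(xi - x0) / bandwidth) ** 3) ** 3 for xi in x]
    robustness = [1.0] * len(y)
    est = 0.0
    for _ in range(iterations):
        wt = [wi * ri for wi, ri in zip(w, robustness)]
        est = sum(wi * yi for wi, yi in zip(wt, y)) / sum(wt)
        resid = [yi - est for yi in y]
        s = sorted(abs(r) for r in resid)[len(resid) // 2] or 1e-12
        # Bisquare: residuals beyond 6x the median residual get weight 0.
        robustness = [max(0.0, 1 - (r / (6 * s)) ** 2) ** 2 for r in resid]
    return est

x = [0, 1, 2, 3, 4, 5]
y = [10.0, 10.2, 9.9, 10.1, 10.0, 60.0]  # last point: a convective outlier
smooth = robust_local_mean(x, y, x0=2.5, bandwidth=10)
```

    After a few reweighting passes the estimate settles near the background level of about 10, instead of being dragged toward the outlier.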

  8. AUS Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith...

    • demo.dev.magda.io
    • researchdata.edu.au
    • +1more
    zip
    Updated Apr 13, 2022
    Cite
    Bioregional Assessment Program (2022). AUS Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3" resolution) - Release 2 [Dataset]. https://demo.dev.magda.io/dataset/ds-dga-15a5b103-3e7d-4548-8c03-5cd1c393ae8c
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 13, 2022
    Dataset provided by
    Bioregional Assessment Program
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    Abstract

    This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied.

    This is Version 2 of the Australian Soil Depth of Regolith product of the Soil and Landscape Grid of Australia (produced 2015-06-01). The Soil and Landscape Grid of Australia has produced a range of digital soil attribute products. The digital soil attribute maps are in raster format at a resolution of 3 arc sec (~90 x 90 m pixels).

    Attribute definition: The regolith is the in situ and transported material overlying unweathered bedrock; Units: metres; Spatial prediction method: data mining using piecewise linear regression; Period (temporal coverage, approximate): 1900-2013; Spatial resolution: 3 arc seconds (approx. 90 m); Total number of gridded maps for this attribute: 3; Number of pixels with coverage per layer: 2007M (49200 * 40800); Total size before compression: about 8 GB; Total size after compression: about 4 GB; Data license: Creative Commons Attribution 3.0 (CC BY); Variance explained (cross-validation): R^2 = 0.38; Target data standard: GlobalSoilMap specifications; Format: GeoTIFF.

    Dataset History

    The methodology consisted of the following steps: (i) drillhole data preparation, (ii) compilation and selection of the environmental covariate raster layers, and (iii) model implementation and evaluation.

    Drillhole data preparation: Drillhole data were sourced from the National Groundwater Information System (NGIS) database. This spatial database holds nationally consistent information about bores that were drilled as part of the Bore Construction Licensing Framework (http://www.bom.gov.au/water/groundwater/ngis/). The database contains 357,834 bore locations with associated lithology, bore construction and hydrostratigraphy records. This information was loaded into a relational database to facilitate analysis.

    Regolith depth extraction: The first step was to recognise and extract the boundary between the regolith and bedrock within each drillhole record. This was done using a key-word look-up table of bedrock or lithology related words from the record descriptions; 1,910 unique descriptors were discovered. Using this list of standardised terms, the drillholes were analysed, and the depth value associated with the word in the description that unequivocally pointed to reaching fresh bedrock material was extracted from each record using a tool developed in C# code. The second step of regolith depth extraction involved removal of drillhole bedrock depth records, deemed necessary because of the "noisiness" in depth records resulting from inconsistencies found in drilling and description standards identified in the legacy database. On completion of the filtering and removal of outliers, the drillhole database used in the model comprised 128,033 depth sites.

    Selection and preparation of environmental covariates: The environmental correlations style of DSM applies environmental covariate datasets to predict target variables, here regolith depth. Strongly performing environmental covariates operate as proxies for the factors that control regolith formation, including climate, relief, parent material, organisms and time (Jenny, 1941).

    Depth modelling was implemented using the PC-based R statistical software (R Core Team, 2014) and relied on the R Cubist package (Kuhn et al., 2013). To generate modelling uncertainty estimates, the following procedures were followed: (i) random withholding of a subset comprising 20% of the whole depth record dataset for external validation; (ii) bootstrap sampling of the remaining dataset 100 times to produce repeated model training datasets. The Cubist model was then run repeatedly to produce a unique rule set for each of these training sets. Repeated model runs using different training sets, a procedure referred to as bagging or bootstrap aggregating, is a machine learning ensemble procedure designed to improve the stability and accuracy of the model. The Cubist rule sets generated were then evaluated and applied spatially, calculating a mean predicted value (i.e. the final map). The 5% and 95% confidence intervals were estimated for each grid cell (pixel) in the prediction dataset by combining the variance from the bootstrapping process and the variance of the model residuals. Version 2 differs from Version 1 in that the modelling of depths was performed on the log scale to better conform to assumptions of normality used in calculating the confidence intervals, and the method to estimate the confidence intervals was improved to better represent the full range of variability in the modelling process (Wilford et al., in press).

    Dataset Citation

    CSIRO (2015) AUS Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3" resolution) - Release 2. Bioregional Assessment Source Dataset. Viewed 22 June 2018, http://data.bioregionalassessments.gov.au/dataset/c28597e8-8cfc-4b4f-8777-c9934051cce2.
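    The bagging-and-uncertainty recipe described above (bootstrap the training data, refit per sample, average the predictions, and derive 5%/95% bounds from the ensemble spread) can be sketched generically. A toy sketch in which a least-squares line stands in for the Cubist rule-based model; everything here is illustrative, not the Programme's code:

```python
# Toy sketch of the bagging / uncertainty recipe described above:
# resample the training data with replacement, refit a model on each
# sample, average the predictions, and take percentile bounds from the
# ensemble spread. A least-squares line stands in for the Cubist model;
# all names and numbers here are hypothetical.
import random

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def bagged_predict(xs, ys, x0, n_boot=100, seed=42):
    rng = random.Random(seed)
    preds = []
    while len(preds) < n_boot:
        idx = [rng.randrange(len(xs)) for _ in xs]  # bootstrap resample
        sample_x = [xs[i] for i in idx]
        if len(set(sample_x)) < 2:  # degenerate resample: cannot fit a line
            continue
        a, b = fit_line(sample_x, [ys[i] for i in idx])
        preds.append(a * x0 + b)
    preds.sort()
    mean_pred = sum(preds) / n_boot          # the "final map" value
    return mean_pred, preds[int(0.05 * n_boot)], preds[int(0.95 * n_boot) - 1]

xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]             # noiseless toy relationship
mean_pred, lo, hi = bagged_predict(xs, ys, x0=5.0)
```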

  9. Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3"...

    • researchdata.edu.au
    • data.csiro.au
    datadownload
    Updated Aug 28, 2024
    Cite
    Mike Grundy; Mark Thomas; Ross Searle; John Wilford; Searle, Ross (2024). Soil and Landscape Grid National Soil Attribute Maps - Depth of Regolith (3" resolution) - Release 2 [Dataset]. http://doi.org/10.4225/08/55C9472F05295
    Explore at:
    Available download formats: datadownload
    Dataset updated
    Aug 28, 2024
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Mike Grundy; Mark Thomas; Ross Searle; John Wilford; Searle, Ross
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1900 - Dec 31, 2013
    Area covered
    Description

    This is Version 2 of the Depth of Regolith product of the Soil and Landscape Grid of Australia (produced 2015-06-01).

    The Soil and Landscape Grid of Australia has produced a range of digital soil attribute products. The digital soil attribute maps are in raster format at a resolution of 3 arc sec (~90 x 90 m pixels).

    Attribute Definition: The regolith is the in situ and transported material overlying unweathered bedrock; Units: metres; Spatial prediction method: data mining using piecewise linear regression; Period (temporal coverage; approximately): 1900-2013; Spatial resolution: 3 arc seconds (approx 90m); Total number of gridded maps for this attribute:3; Number of pixels with coverage per layer: 2007M (49200 * 40800); Total size before compression: about 8GB; Total size after compression: about 4GB; Data license : Creative Commons Attribution 4.0 (CC BY); Variance explained (cross-validation): R^2 = 0.38; Target data standard: GlobalSoilMap specifications; Format: GeoTIFF. Lineage: The methodology consisted of the following steps: (i) drillhole data preparation, (ii) compilation and selection of the environmental covariate raster layers and (iii) model implementation and evaluation.

    Drillhole data preparation: Drillhole data was sourced from the National Groundwater Information System (NGIS) database. This spatial database holds nationally consistent information about bores that were drilled as part of the Bore Construction Licensing Framework (http://www.bom.gov.au/water/groundwater/ngis/). The database contains 357,834 bore locations with associated lithology, bore construction and hydrostratigraphy records. This information was loaded into a relational database to facilitate analysis.

    Regolith depth extraction: The first step was to recognise and extract the boundary between the regolith and bedrock within each drillhole record. This was done using a key word look-up table of bedrock or lithology related words from the record descriptions. 1,910 unique descriptors were discovered. Using this list of new standardised terms analysis of the drillholes was conducted, and the depth value associated with the word in the description that was unequivocally pointing to reaching fresh bedrock material was extracted from each record using a tool developed in C# code.

    The second step of regolith depth extraction involved removal of drillhole bedrock depth records, deemed necessary because of the "noisiness" in depth records resulting from inconsistencies found in drilling and description standards identified in the legacy database.

    On completion of the filtering and removal of outliers, the drillhole database used in the model comprised 128,033 depth sites.

    Selection and preparation of environmental covariates: The environmental correlations style of DSM applies environmental covariate datasets to predict target variables, here regolith depth. Strongly performing environmental covariates operate as proxies for the factors that control regolith formation, including climate, relief, parent material, organisms and time.

    Depth modelling was implemented using the PC-based R statistical software (R Core Team, 2014) and relied on the R Cubist package (Kuhn et al., 2013). To generate modelling uncertainty estimates, the following procedures were followed: (i) random withholding of a subset comprising 20% of the whole depth record dataset for external validation; (ii) bootstrap sampling of the remaining dataset 100 times to produce repeated model training datasets. The Cubist model was then run repeatedly to produce a unique rule set for each of these training sets. Repeated model runs using different training sets, a procedure referred to as bagging or bootstrap aggregating, is a machine learning ensemble procedure designed to improve the stability and accuracy of the model. The Cubist rule sets generated were then evaluated and applied spatially, calculating a mean predicted value (i.e. the final map). The 5% and 95% confidence intervals were estimated for each grid cell (pixel) in the prediction dataset by combining the variance from the bootstrapping process and the variance of the model residuals. Version 2 differs from Version 1 in that the modelling of depths was performed on the log scale to better conform to assumptions of normality used in calculating the confidence intervals. The method to estimate the confidence intervals was improved to better represent the full range of variability in the modelling process (Wilford et al., in press).
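    The switch to log-scale modelling in Version 2 matters because depths are positive and right-skewed: a symmetric interval built on log(depth) back-transforms to an asymmetric interval in metres that cannot go negative. A minimal illustration with hypothetical numbers:

```python
# Why model depth on the log scale: a symmetric interval on log(depth)
# back-transforms to an asymmetric, strictly positive interval in metres.
# The depth values below are hypothetical, purely illustrative.
import math

def log_scale_interval(depths_m, z=1.645):  # z approximates 5%/95% bounds
    logs = [math.log(d) for d in depths_m]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (n - 1))
    # Symmetric bounds on the log scale, back-transformed to metres.
    return math.exp(mu - z * sd), math.exp(mu), math.exp(mu + z * sd)

lo, mid, hi = log_scale_interval([2.0, 5.0, 8.0, 20.0, 45.0])
```

    The lower bound stays positive and the upper tail is wider than the lower one, matching the skewed distribution of regolith depths.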

  10. Snow Depth Mapping | gimi9.com

    • gimi9.com
    Updated Feb 3, 2019
    Cite
    (2019). Snow Depth Mapping | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_32acdacc-98dd-4fa7-87e8-449278bd24f0-envidat
    Explore at:
    Dataset updated
    Feb 3, 2019
    Description

    The available datasets are snow depth maps with a spatial resolution of 2 m, generated from image matching of ADS 80/100 data. Image acquisition took place at the peak of winter (the time when the thickest snowpack is expected). The snow depth maps are the difference between a summer DSM and the winter DSM of the corresponding date. The summer DSM used is a product of image matching of ADS 80 data from summer 2013. In the available products, buildings, vegetation and outliers were masked (set to NoData). For the elimination of buildings, the TLM layer (swisstopo) was used; because this layer might not represent exactly the state of infrastructure at the time of image acquisition, it is possible that, mainly in dense settlement, some buildings were not successfully masked. For the relevant area above the treeline, the masking of buildings showed good results. Vegetation was masked for a height above ground > 1 m and was detected in a combination of summer and winter datasets. Unrealistic snow depths caused by a failure of the image matching algorithm were considered outliers. Snow depths > 15m and smaller than
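    The product described above is, at its core, winter DSM minus summer DSM with masked cells set to NoData. A small grid sketch under hypothetical values: the NoData sentinel, the sample grids, and the 0 m lower bound are assumptions (the source's lower outlier threshold is truncated), while the 15 m upper bound comes from the description:

```python
# Sketch of the snow-depth map recipe: winter DSM minus summer DSM, with
# masked cells (buildings/vegetation) and outlier depths set to NoData.
# The arrays, the NoData sentinel, and the 0 m lower bound are
# hypothetical; the 15 m upper bound is the threshold from the source.
NODATA = -9999.0

def snow_depth(winter_dsm, summer_dsm, mask, max_depth=15.0):
    """Grids are lists of rows; mask True = building/vegetation cell."""
    out = []
    for wrow, srow, mrow in zip(winter_dsm, summer_dsm, mask):
        row = []
        for w, s, masked in zip(wrow, srow, mrow):
            depth = w - s
            # Implausible depths indicate image-matching failures.
            if masked or depth < 0.0 or depth > max_depth:
                row.append(NODATA)
            else:
                row.append(depth)
        out.append(row)
    return out

winter = [[502.3, 503.1], [520.0, 501.2]]  # elevations in metres
summer = [[500.0, 500.5], [500.0, 500.3]]
bldg   = [[False, False], [False, True]]   # bottom-right cell: building
depths = snow_depth(winter, summer, bldg)
```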

  11. GABATLAS - Cadna-owie - Hooray Aquifer Total Dissolved Solids map: Data

    • demo.dev.magda.io
    • researchdata.edu.au
    • +2more
    zip
    Updated Dec 4, 2022
    Cite
    Bioregional Assessment Program (2022). GABATLAS - Cadna-owie - Hooray Aquifer Total Dissolved Solids map: Data [Dataset]. https://demo.dev.magda.io/dataset/ds-dga-3857111d-cae6-409e-8447-78686cec2040
    Explore at:
    zipAvailable download formats
    Dataset updated
    Dec 4, 2022
    Dataset provided by
    Bioregional Assessment Program
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    Abstract

    This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied.

    Data used to produce the predicted Total Dissolved Solids map for the Cadna-owie - Hooray Aquifer in the Hydrogeological Atlas of the Great Artesian Basin (Ransley et al., 2014). There are four layers in the Cadna-owie - Hooray Aquifer Total Dissolved Solids map data:

    A. Location of hydrochemistry samples (point data, shapefile)
    B. Predicted concentration (filled contours, shapefile)
    C. Predicted concentration contours (contours, shapefile)
    D. Prediction standard error (filled contours, shapefile)

    The predicted values provide a regional estimate and may be associated with considerable error. It is recommended that the predicted values are read together with the prediction standard error map, which provides an estimate of the absolute standard error associated with the predicted values at any point within the map. Please note this is not a relative error map, and the concentration of a parameter needs to be considered when interpreting it: predicted standard error values are low where the concentration is low and there is a high density of samples; they can be high where the concentration is high and there is moderate variability between nearby samples, or where there is a paucity of data. Concentrations are Total Dissolved Solids in mg/L. The coordinate system is Lambert conformal conic GDA 1994, with central meridian 134 degrees longitude and standard parallels at -18 and -36 degrees latitude.

    The Cadna-owie - Hooray Aquifer Total Dissolved Solids map is one of 14 hydrochemistry maps for the Cadna-owie - Hooray Aquifer and 24 hydrochemistry maps in the Hydrogeological Atlas of the Great Artesian Basin (Ransley et al., 2014). This dataset and associated metadata can be obtained from www.ga.gov.au, using catalogue number 81693.

    References: Hitchon, B. and Brulotte, M. (1994): Culling criteria for 'standard' formation water analyses; Applied Geochemistry, v. 9, p. 637-645. Ransley, T., Radke, B., Feitz, A., Kellett, J., Owens, R., Bell, J. and Stewart, G., 2014. Hydrogeological Atlas of the Great Artesian Basin. Geoscience Australia, Canberra. [available from www.ga.gov.au using catalogue number 79790]

    Dataset History

    SOURCE DATA: Data was obtained from a variety of sources, as listed below:
    - Water quality data from the Queensland groundwater database, Department of Environment and Resource Management
    - Geological Society of Queensland water chemistry database (1970s to 1980s). Muller, PJ, Dale, NM (1985) Storage System for Groundwater Data Held by the Geological Survey of Queensland. GSQ Record 1985/47. Queensland.
    - Geoscience Australia GAB hydrochemistry dataset 1973-1997. Published in Radke BM, Ferguson J, Cresswell RG, Ransley TR and Habermehl MA (2000) Hydrochemistry and implied hydrodynamics of the Cadna-owie - Hooray Aquifer, Great Artesian Basin, Australia. Canberra, Bureau of Rural Sciences: xiv, 229p.
    - Feitz, A.J., Ransley, T.R., Dunsmore, R., Kuske, T.J., Hodgkinson, J., Preda, M., Spulak, R., Dixon, O. & Draper, J., 2014. Geoscience Australia and Geological Survey of Queensland Surat and Bowen Basins Groundwater Surveys Hydrochemistry Dataset (2009-2011). Geoscience Australia, Canberra, Australia
    - Water quality data from the Office of Groundwater Impact Assessment, Department of Natural Resources and Mines, Queensland Government
    - Geoscience Australia (2010) Hydrogeochemical collection. A compilation of quality-controlled groundwater data taken from well completion reports from QLD and NSW.

    BOUNDARIES: Data covers the extent of the Cadna-owie-Hooray Aquifer and Equivalents as defined in the Great Artesian Basin - Cadna-owie-Hooray Aquifer and Equivalents - Thickness and Extent dataset (available from www.ga.gov.au using catalogue number 81678).

    METHOD: Groundwater chemistry data was compiled from the data sources listed above. Data was imported into ESRI ArcGIS (ArcMap 10) as point datasets and used to create a predicted-values surface using an ordinary kriging method within the Geostatistical Analyst extension. A log transform was applied to the alkalinity, TDS, Na, SO4, Mg, Ca, K, F, Cl and Cl36 data prior to kriging; no transform was applied to the 13C, 18O, 2H and pH data. The geostatistical model was optimized using cross validation. The search neighbourhood was extended to a 1 degree radius, comprising 4 sectors (N, S, E and W) with a minimum and maximum of 3 and 8 neighbours, respectively, per sector. The predicted-values surface was exported to a vector format (shapefile) and clipped to the aquifer boundaries.

    QAQC: Prior to data analysis, all hydrochemistry data was assessed for reliability by Quality Assurance/Quality Control (QA/QC) procedures. A data audit and verification were performed using various quality-checking procedures, including identification and verification of outliers. The ionic balance of each analysis was checked, and where the ionic charge balance differed by greater than 10%, the analysis was deemed unacceptable and was not considered for further analysis. Data that passed the initial QA/QC procedures were checked against borehole construction and stratigraphic records to determine aquifer intercepts. Data were discarded in cases where there was no recorded location information or screen interval/depth information (to cross-reference with borehole stratigraphy). One exception was chemistry data obtained from the NSW Government's Triton database: groundwater chemistry data obtained from bore records in the Triton database that were also identified as GAB bores in the NSW Government's Pinneena database were assumed to be in the Pilliga Sandstone and were allocated to the Cadna-owie Hooray equivalent aquifer, despite many not recording depth information. Groundwater chemistry data was sourced from multiple studies, government databases and companies; many of the studies used subsets of the same data, and all duplicates were removed before mapping and analysis. The differences between data sources had to be reconciled to ensure that maximum value of the data was retained and errors in transcription were avoided; this precluded any automated processing system. Random checks were routinely made against the source data to ensure quality of the process. Some source data was in the form of thousands of consecutive rows and required Python scripts or detailed table manipulations to correctly re-format the information and reproduce records with all the well data, its location and hydrochemical data for a particular sample date on one row in the collated Excel spreadsheet. Alkalinity measurements, in particular, were often reported differently between studies, and even within the same database, and required conversion to a common unit. All data before 1960 was discarded. The study uses a data collection compiled from petroleum well completion reports from QLD and NSW; this data underwent a thorough QC process to ensure that drilling-mud-contaminated samples were excluded, based on the procedure described by Hitchon and Brulotte (1994).
Less than 5% of the samples compiled passed the QC procedure, but these provide invaluable insight into the chemistry of very deep parts of the aquifers (typically 1 - 2km deep). Where multiple samples have been taken at the same well, an average of the analyses was used in the kriging but outliers were removed. Outliers were identified by looking for large differences between predicted and measured samples. Excessively high values compared to predicted values and typical measurements at the same bore were discarded. Dataset Citation Geoscience Australia (2015) GABATLAS - Cadna-owie - Hooray Aquifer Total Dissolved Solids map: Data. Bioregional Assessment Source Dataset. Viewed 11 April 2016, http://data.bioregionalassessments.gov.au/dataset/5044a067-35d1-4d6d-98a6-17974aa9226a.
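The per-bore averaging with outlier screening described above can be sketched in a few lines. This is a minimal stand-in, not the programme's actual processing: the report screened outliers against kriging predictions and typical per-bore values, whereas this sketch uses a simple median/MAD rule, and the 3x cutoff is an assumed illustration.

```python
from statistics import mean, median

def collapse_well_samples(samples, k=3.0):
    """Average repeat analyses per bore after dropping outlying values.

    `samples` maps bore_id -> list of TDS analyses (mg/L). Values deviating
    from the bore median by more than k * MAD are discarded before averaging
    (an illustrative criterion, not the documented GA procedure).
    """
    collapsed = {}
    for bore, vals in samples.items():
        med = median(vals)
        mad = median(abs(v - med) for v in vals) or 1e-9  # guard: identical repeats
        kept = [v for v in vals if abs(v - med) <= k * mad]
        collapsed[bore] = mean(kept)
    return collapsed
```

For a bore with repeats [1000, 1020, 980, 9000] mg/L, the 9000 mg/L analysis is dropped and the remaining three are averaged to 1000 mg/L.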

  12. Visualize A Space Time Cube in 3D

    • gemelo-digital-en-arcgis-gemelodigital.hub.arcgis.com
    • hub.arcgis.com
    Updated Dec 3, 2020
    Society for Conservation GIS (2020). Visualize A Space Time Cube in 3D [Dataset]. https://gemelo-digital-en-arcgis-gemelodigital.hub.arcgis.com/maps/acddde8dae114381889b436fa0ff4b2f
    Explore at:
    Dataset updated
    Dec 3, 2020
    Dataset authored and provided by
    Society for Conservation GIS
    Description

    Stamp Out COVID-19

    "An apple a day keeps the doctor away." Linda Angulo Lopez, December 3, 2020

    https://theconversation.com/coronavirus-where-do-new-viruses-come-from-136105

    SNAP Participation Rates data was explored and analysed in ArcGIS Pro; the results can help decision makers set up further SNAP-D initiatives. In the USA, foods are stored in every State and U.S. territory and may be used by state agencies or local disaster relief organizations to provide food to shelters or people who are in need.

    US Food Stamp Program has been Extended

    The Supplemental Nutrition Assistance Program (SNAP) is a state-organized food stamp program in the USA and was put in place to help individuals and families during this exceptional time. State agencies may request to operate a Disaster Supplemental Nutrition Assistance Program (D-SNAP); see the D-SNAP Interactive Dashboard. Almost all States have set up food relief programs in response to COVID-19.

    SNAP Participation Analysis

    Initial results of yearly participation rates by geography show statistically significant trends. To get acquainted with the results, explore the following 3D time cube map: Visualize A Space Time Cube in 3D, https://arcg.is/1q8LLP

    netCDF Results

    WORKFLOW: A space-time cube was generated as a netCDF structure with the ArcGIS Pro space-time pattern mining tool Create Space Time Cube from Defined Locations. Other tools were then used to incorporate the spatial and temporal aspects of the SNAP county participation rate feature to reveal and render statistically significant trends about nutrition assistance in the USA.

    Hot Spot Analysis

    Explore the results in 2D or 3D.

    2D Hot Spots: https://arcg.is/1Pu5WH0

    WORKFLOW: Hot spot analysis with the Hot Spot Analysis tool shows various trends across the USA; for instance, the Southeastern States have a mixture of consecutive, intensifying, and oscillating hot spots.

    3D Hot Spots: https://arcg.is/1b41T4

    These trends over time are expanded in the 3D map; by inspecting the stacked columns you can see the trends over time that give rise to the overall hot spot results. Not all counties have significant trends; these are symbolized as Never Significant in the space-time cubes.

    Space-Time Pattern Mining Analysis

    The north-central areas of the USA have mostly diminishing cold spots.

    2D Space-Time Mining: https://arcg.is/1PKPj0

    WORKFLOW: Analysis with the Emerging Hot Spot Analysis tool shows various trends across the USA; for instance, the Southeastern States have a mixture of consecutive, intensifying, and oscillating hot spots.

    Results show that the USA has counties with persistently malnourished populations that depend on food aid.

    3D Space-Time Mining: https://arcg.is/01fTWf

    In addition to obvious planning for consistent hot-hot spot areas, areas with oscillating hot-cold and/or cold-hot spots can be identified for further analysis to mitigate the upward trend in food insecurity in the USA since 2009, which has become even worse since the outbreak of the COVID-19 pandemic.

    After Notes:

    (i) The Johns Hopkins University has an interactive dashboard of the evolution of the COVID-19 pandemic: Coronavirus COVID-19 (2019-nCoV).

    (ii) Since March 2020, in response to COVID-19, SNAP has had to extend its benefits to help people in need. The food relief is coordinated within States and by local and voluntary organizations to provide nutrition assistance to those most affected by a disaster or emergency. Visit SNAP's Interactive Dashboard. Food relief has been extended; reach out to your state SNAP office if you are in need.

    (iii) Follow these steps to build an ArcGIS Pro StoryMap:

    Step 1: [Get Data] [Open An ArcGIS Pro Project] [Run a Hot Spot Analysis] [Review analysis parameters] [Interpret the results] [Run an Outlier Analysis] [Interpret the results]

    Step 2: [Open the Space-Time Pattern Mining 2 Map] [Create a space-time cube] [Visualize a space-time cube in 2D] [Visualize a space-time cube in 3D] [Run a Local Outlier Analysis] [Visualize a Local Outlier Analysis in 3D]

    Step 3: [Communicate Analysis] [Identify your Audience & Takeaways] [Create an Outline] [Find Images] [Prepare Maps & Scenes] [Create a New Story] [Add Story Elements] [Add Maps & Scenes] [Review the Story] [Publish & Share]

    A submission for the Esri MOOC Spatial Data Science: The New Frontier in Analytics.

    Linda Angulo Lopez

    Lauren Bennett . Shannon Kalisky . Flora Vale . Alberto Nieto . Atma Mani . Kevin Johnston . Orhun Aydin . Ankita Bakshi . Vinay Viswambharan . Jennifer Bell & Nick Giner
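At its core, the space-time cube used above is point events binned into a location-by-time grid of counts; statistics such as emerging hot spots are then computed per bin series. A minimal sketch of the binning step (the function name and parameters are illustrative, not the arcpy tool's API):

```python
import numpy as np

def space_time_cube(xs, ys, ts, cell, t_step):
    """Bin point events into a (time, row, col) count cube.

    xs, ys are event coordinates, ts event times; `cell` is the spatial
    bin size and `t_step` the temporal bin size (both assumed parameters).
    """
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    ix = ((xs - xs.min()) // cell).astype(int)   # column index
    iy = ((ys - ys.min()) // cell).astype(int)   # row index
    it = ((ts - ts.min()) // t_step).astype(int) # time-slice index
    cube = np.zeros((it.max() + 1, iy.max() + 1, ix.max() + 1), dtype=int)
    np.add.at(cube, (it, iy, ix), 1)             # accumulate event counts
    return cube
```

Each cell's time series (a column through the cube) is what trend tests like Mann-Kendall are run on in the Emerging Hot Spot workflow.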

  13. GIS Shapefile - GIS Shapefile, Assessments and Taxation Database, MD...

    • search.dataone.org
    • portal.edirepository.org
    Updated Apr 5, 2019
    Cary Institute Of Ecosystem Studies; Jarlath O'Neil-Dunne; Morgan Grove (2019). GIS Shapefile - GIS Shapefile, Assessments and Taxation Database, MD Property View 2003, Baltimore City [Dataset]. https://search.dataone.org/view/https%3A%2F%2Fpasta.lternet.edu%2Fpackage%2Fmetadata%2Feml%2Fknb-lter-bes%2F349%2F610
    Explore at:
    Dataset updated
    Apr 5, 2019
    Dataset provided by
    Long Term Ecological Research Network, http://www.lternet.edu/
    Authors
    Cary Institute Of Ecosystem Studies; Jarlath O'Neil-Dunne; Morgan Grove
    Time period covered
    Jan 1, 2003 - Jan 1, 2004
    Area covered
    Description

    AT_2003_BACI_1, File Geodatabase Feature Class. MD Property View 2003 A&T Database. For more information on the A&T Database refer to the enclosed documentation. This layer was edited to remove spatial outliers in the A&T Database. Spatial outliers are those points that were not geocoded and as a result fell outside of the Baltimore City boundary; 416 spatial outliers were removed from this layer. The field BLOCKLOT2 can be used to join this layer with the Baltimore City parcel layer. There are no access and use limitations for this item. Extent: West -76.713418, East -76.526031, North 39.374429, South 39.197452.
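The spatial-outlier rule described above (points that failed to geocode inside the city were dropped) can be illustrated with the published extent. Note this rectangular-extent check is a simplification: the actual edit removed points outside the Baltimore City boundary polygon, not just its bounding box.

```python
# Extent from the metadata above (decimal degrees)
WEST, EAST = -76.713418, -76.526031
SOUTH, NORTH = 39.197452, 39.374429

def inside_extent(lon, lat):
    """True if a geocoded parcel point falls inside the published extent."""
    return WEST <= lon <= EAST and SOUTH <= lat <= NORTH

def drop_spatial_outliers(points):
    """Keep only (lon, lat) pairs inside the extent, mirroring the edit described."""
    return [p for p in points if inside_extent(*p)]
```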

  14. Hutton Aquifer and equivalents Total Dissolved Solids map: Data

    • researchdata.edu.au
    • data.gov.au
    • +2more
    Updated Mar 23, 2016
    Bioregional Assessment Program (2016). Hutton Aquifer and equivalents Total Dissolved Solids map: Data [Dataset]. https://researchdata.edu.au/hutton-aquifer-equivalents-map-data/2992987
    Explore at:
    Dataset updated
    Mar 23, 2016
    Dataset provided by
    data.gov.au
    Authors
    Bioregional Assessment Program
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    Abstract

    This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied. Data used to produce the predicted Total Dissolved Solids map for the Hutton Aquifer and equivalents in the Hydrogeological Atlas of the Great Artesian Basin (Ransley et.al., 2014).

    There are four layers in the Hutton Aquifer and equivalents Total Dissolved Solids map data

    A. Location of hydrochemistry samples (Point data, Shapefile)

    B. Predicted Concentration (Filled contours , Shapefile)

    C. Predicted Concentration Contours (Contours, Shapefile)

    D. Prediction Standard Error (Filled contours , Shapefile)

    The predicted values provide a regional based estimate and may be associated with considerable error. It is recommended that the predicted values are read together with the predicted error map, which provides an estimate of the absolute standard error associated with the predicted values at any point within the map.

    The predicted standard error map provides an absolute standard error associated with the predicted values at any point within the map. Please note this is not a relative error map and the concentration of a parameter needs to be considered when interpreting the map. Predicted standard error values are low where the concentration is low and there is a high density of samples. Predicted standard errors values can be high where the concentration is high and there is moderate variability between nearby samples or where there is a paucity of data.

    Concentrations are Total Dissolved Solids mg/L.

    Coordinate system is Lambert conformal conic GDA 1994, with central meridian 134 degrees longitude, standard parallels at -18 and -36 degrees latitude.

    The Hutton Aquifer and equivalents Total Dissolved Solids map is one of four hydrochemistry maps for the Hutton Aquifer and equivalents and 24 hydrochemistry maps in the Hydrogeological Atlas of the Great Artesian Basin (Ransley et.al., 2014).

    This dataset and associated metadata can be obtained from www.ga.gov.au, using catalogue number 81709.


    References:

    Hitchon, B. and Brulotte, M. (1994): Culling criteria for ‘standard’ formation water analyses; Applied Geochemistry, v. 9, p. 637–645

    Ransley, T., Radke, B., Feitz, A., Kellett, J., Owens, R., Bell, J. and Stewart, G., 2014. Hydrogeological Atlas of the Great Artesian Basin. Geoscience Australia. Canberra. [available from www.ga.gov.au using catalogue number 79790]

    Dataset History

    SOURCE DATA:

    Data was obtained from a variety of sources, as listed below:

    1. Water quality data from the Queensland groundwater database, Department of Environment and Resource Management

    2. Geological Society of Queensland water chemistry database (1970s to 1980s). Muller, PJ, Dale, NM (1985) Storage System for Groundwater Data Held by the Geological Survey of Queensland. GSQ Record 1985/47. Queensland.

    3. Geoscience Australia GAB hydrochemistry dataset 1973-1997. Published in Radke BM, Ferguson J, Cresswell RG, Ransley TR and Habermehl MA (2000) Hydrochemistry and implied hydrodynamics of the Cadna-owie - Hooray Aquifer, Great Artesian Basin, Australia. Canberra, Bureau of Rural Sciences: xiv, 229p.

    4. Feitz, A.J., Ransley, T.R., Dunsmore, R., Kuske, T.J., Hodgkinson, J., Preda, M., Spulak, R., Dixon, O. & Draper, J., 2014. Geoscience Australia and Geological Survey of Queensland Surat and Bowen Basins Groundwater Surveys Hydrochemistry Dataset (2009-2011). Geoscience Australia, Canberra Australia

    5. Water quality data from the Office of Groundwater Impact Assessment, Department of Natural Resources and Mines, Queensland Government

    6. Geoscience Australia (2010) Hydrogeochemical collection. A compilation of quality controlled groundwater data taken from well completion reports from QLD and NSW.

    7. Water quality data from the Office of Groundwater Impact Assessment, Department of Natural Resources and Mines, Queensland Government

    BOUNDARIES:

    Data covers the extent of the Hutton Aquifer and equivalents as defined in Great Artesian Basin - Hutton Aquifer and equivalents - Thickness and Extent dataset (Available from www.ga.gov.au using catalogue number 81682).

    METHOD:

    Groundwater chemistry data was compiled from the data sources listed above. Data was imported into ESRI ArcGIS (ArcMap 10) as data point sets and used to create a predicted values surface using an ordinary kriging method within the Geostatistical Analyst extension. A log transform was applied to the Alkalinity, TDS, Na, SO4, Mg, Ca, K, F, Cl, Cl36 data prior to kriging. No transform was applied to the 13C, 18O, 2H, pH data prior to kriging. The geostatistical model was optimized using cross validation. The search neighbourhood was extended to a 1 degree radius, comprising of 4 sectors (N, S, E and W) with a minimum and maximum of 3 and 8 neighbours, respectively, per sector. The predicted values surface was exported to a vector format (Shapefile) and clipped to the aquifer boundaries and clipped further where there was no data within 100 km.
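As a rough illustration of the ordinary-kriging step, the sketch below solves the ordinary-kriging system for a single prediction point with a spherical semivariogram. Everything here is simplified relative to the Geostatistical Analyst workflow: the variogram model and its parameters are placeholders, the sectored search neighbourhood is omitted, and for log-transformed variables such as TDS one would krige log values and back-transform (noting that a naive exp back-transform is biased for the mean).

```python
import numpy as np

def spherical_gamma(h, nugget, sill, rng):
    """Spherical semivariogram model (an illustrative choice, not the fitted model)."""
    h = np.asarray(h, dtype=float)
    g = np.where(
        h < rng,
        nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, g)  # gamma(0) = 0 makes kriging an exact interpolator

def ordinary_krige(xy, z, xy0, nugget=0.0, sill=1.0, rng=1.0):
    """Ordinary-kriging prediction at point xy0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with Lagrange row/column
    A[:n, :n] = spherical_gamma(d, nugget, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(xy - xy0, axis=1), nugget, sill, rng)
    w = np.linalg.solve(A, b)[:n]        # weights sum to 1 via the constraint
    return float(w @ z)
```

With a zero nugget, predicting at a sample location returns the sample value exactly, which is a quick sanity check on the solver.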

    QAQC:

    Prior to data analysis all hydrochemistry data was assessed for reliability by Quality Assurance/Quality Control (QA/QC) procedures. A data audit and verification were performed using various quality checking procedures including identification and verification of outliers.

    The ionic balance of each analysis was checked, and where the ionic charge balance differed by greater than 10%, these analyses were deemed unacceptable and were not considered for future analysis.
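The ionic charge-balance screen can be sketched as follows. The equivalent weights are standard values for the major ions and the 10% cutoff comes from the text above; the function itself is an illustration, not the programme's actual QA/QC code.

```python
EQ_WEIGHT_G_PER_EQ = {  # molar mass / |charge|
    "Na": 22.99, "K": 39.10, "Ca": 40.08 / 2, "Mg": 24.31 / 2,  # cations
    "Cl": 35.45, "SO4": 96.06 / 2, "HCO3": 61.02,               # anions
}
CATIONS = ("Na", "K", "Ca", "Mg")
ANIONS = ("Cl", "SO4", "HCO3")

def charge_balance_error_pct(sample_mg_per_l):
    """Charge-balance error (%) for a dict mapping ion -> concentration in mg/L."""
    meq = {ion: sample_mg_per_l.get(ion, 0.0) / w
           for ion, w in EQ_WEIGHT_G_PER_EQ.items()}
    cations = sum(meq[i] for i in CATIONS)
    anions = sum(meq[i] for i in ANIONS)
    return 100.0 * (cations - anions) / (cations + anions)

def passes_ionic_balance(sample_mg_per_l, tolerance_pct=10.0):
    """Apply the >10% rejection rule described in the QA/QC section."""
    return abs(charge_balance_error_pct(sample_mg_per_l)) <= tolerance_pct
```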

    Data that passed the initial QA/QC procedures were checked against borehole construction and stratigraphic records to determine aquifer intercepts. Data were discarded in cases where there was no recorded location information or screen interval/depth information (to cross reference with borehole stratigraphy).

    Groundwater chemistry data was sourced from multiple studies, government databases, and companies. Many of the studies used sub-sets of the same data. All duplicates were removed before mapping and analysis. The differences between data sources had to be reconciled to ensure that maximum value of the data was retained and for errors in the transcription to be avoided. This precluded any automated processing system. Random checks were routinely made against the source data to ensure quality of the process. Some source data was in the form of thousands of consecutive rows and required python scripts or detailed table manipulations to correctly re-format the information and re-produce records with all the well data, its location and hydrochemical data for a particular sample date on one row in the collated Excel spreadsheet. Alkalinity measurements, in particular, were often reported differently between studies and even within the same database and required conversion to a common unit. All data before 1960 was discarded.

    The study uses a data collection compiled from petroleum well completion reports from QLD and NSW. This data underwent a thorough QC process to ensure that drilling mud contaminated samples were excluded, based on the procedure described by Hitchon, B. & Brulotte, M. (1994). Less than 5% of the samples compiled passed the QC procedure, but these provide invaluable insight into the chemistry of very deep parts of the aquifers (typically 1 - 2km deep).

    Where multiple samples have been taken at the same well, an average of the analyses was used in the kriging but outliers were removed. Outliers were identified by looking for large differences between predicted and measured samples. Excessively high values compared to predicted values and typical measurements at the same bore were discarded.

    Dataset Citation

    Geoscience Australia (2015) Hutton Aquifer and equivalents Total Dissolved Solids map: Data. Bioregional Assessment Source Dataset. Viewed 11 April 2016, http://data.bioregionalassessments.gov.au/dataset/f5f16389-d97e-46b3-bd43-83255acf257d.

  15. Snow Depth Mapping

    • envidat.ch
    • opendata.swiss
    not available, pdf +1
    Updated May 29, 2025
    Mauro Marty; Yves Bühler; Christian Ginzler (2025). Snow Depth Mapping [Dataset]. http://doi.org/10.16904/envidat.62
    Explore at:
    not available, tiff, pdf
    Available download formats
    Dataset updated
    May 29, 2025
    Dataset provided by
    Swiss Federal Institute for Forest, Snow and Landscape Research
    WSL Institute for Snow and Avalanche Research SLF
    Authors
    Mauro Marty; Yves Bühler; Christian Ginzler
    License

    Open Database License (ODbL) v1.0, https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2010 - May 1, 2016
    Area covered
    Switzerland
    Description

    The available datasets are snow depth maps with a spatial resolution of 2 m generated from image matching of ADS 80/100 data. Image acquisition took place at the peak of winter (the time when the thickest snowpack is expected). The snow depth maps are the difference of a summer DSM from the winter DSM of the corresponding date. The summer DSM used is a product of image matching of ADS 80 data from summer 2013. In the available products, buildings, vegetation and outliers were masked (set to NoData). For the elimination of buildings the TLM layer (swisstopo) was used; because this layer might not represent exactly the state of infrastructure at the time of image acquisition, it is possible that, mainly in dense settlements, some buildings were not successfully masked. For the relevant area above the treeline the masking of buildings showed good results. Vegetation was masked for a height above ground > 1 m and was detected from a combination of summer and winter datasets. Unrealistic snow depths caused by a failure of the image matching algorithm were considered outliers: snow depths > 15 m and smaller than -15 m were classified as outliers. Negative snow depths were kept because of an uncertainty in image orientation accuracy. It is expected that in regions with negative snow depths, positive snow depths are also underestimated by the same amount, which means that an estimation of snow volume should be carried out by summing the absolute values of snow depth (including the negative ones). For volume estimation in small regions the user has to take into account that the orientation accuracy of the images is roughly 1-2 GSD (30 cm), which propagates directly to the snow depth product. Areas which are not covered by snow were assigned a snow depth value of 0. The work is published in: Bühler, Y.; Marty, M.; Egli, L.; Veitinger, J.; Jonas, T.; Thee, P.; Ginzler, C. (2015). Snow depth mapping in high-alpine catchments using digital photogrammetry. Cryosphere, 9 (1), 229-243. doi: 10.5194/tc-9-229-2015
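The differencing and masking rules above amount to a few array operations. A minimal sketch (the 2 m cell size and the 15 m outlier cutoff come from the description; the `mask` input stands in for the TLM building and vegetation masks):

```python
import numpy as np

def snow_depth_map(winter_dsm, summer_dsm, mask=None, outlier_abs=15.0):
    """HS = winter DSM - summer DSM, with |HS| > 15 m and masked cells set to NoData."""
    hs = winter_dsm - summer_dsm
    hs = np.where(np.abs(hs) > outlier_abs, np.nan, hs)  # image-matching failures
    if mask is not None:                                  # buildings / vegetation
        hs = np.where(mask, np.nan, hs)
    return hs

def snow_volume(hs, cell_size=2.0):
    """Volume estimate summing |HS| (negative depths included), as recommended above."""
    return np.nansum(np.abs(hs)) * cell_size ** 2
```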

  16. Data from: DELINEATION OF HOMOGENEOUS ZONES BASED ON GEOSTATISTICAL MODELS...

    • figshare.com
    • scielo.figshare.com
    jpeg
    Updated May 30, 2023
    DANILO PEREIRA BARBOSA; EDUARDO LEONEL BOTTEGA; DOMINGOS SÁRVIO MAGALHÃES VALENTE; NERILSON TERRA SANTOS; WELLINGTON DONIZETE GUIMARÃES (2023). DELINEATION OF HOMOGENEOUS ZONES BASED ON GEOSTATISTICAL MODELS ROBUST TO OUTLIERS [Dataset]. http://doi.org/10.6084/m9.figshare.8986808.v1
    Explore at:
    jpeg
    Available download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    SciELO journals
    Authors
    DANILO PEREIRA BARBOSA; EDUARDO LEONEL BOTTEGA; DOMINGOS SÁRVIO MAGALHÃES VALENTE; NERILSON TERRA SANTOS; WELLINGTON DONIZETE GUIMARÃES
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Measures of the apparent electrical conductivity (ECa) of soil are used in many studies as indicators of spatial variability in physicochemical characteristics of production fields. Based on these measures, management zones (MZs) are delineated to improve agricultural management. However, these measures include outliers. The presence, or the incorrect identification and exclusion, of outliers affects the variogram function and results in unreliable parameter estimates. Thus, the aim of this study was to model ECa data with outliers using methods based on robust approximation theory and model-based geostatistics to delineate MZs. The robust estimators of Cressie-Hawkins, Genton and Dowd (MAD-based) were tested. The Cressie-Hawkins semivariance estimator was selected, followed by a cubic semivariogram fit using the Akaike information criterion (AIC). Robust kriging with an external drift plug-in was applied to the fitted estimates, and the fuzzy k-means classifier was applied to the resulting ECa kriging map. Models with multiple MZs were evaluated using fuzzy k-means, and a map with two MZs was selected based on the fuzzy performance index (FPI), modified partition entropy (MPE) and the Fukuyama-Sugeno and Xie-Beni indices. The defined MZs were validated based on differences between the ECa means using mixed linear models. The independent errors model was chosen for validation based on its AIC value. Thus, the results demonstrate that it is possible to delineate an MZ map without outlier exclusion, evidencing the efficacy of this methodology.
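The fuzzy k-means step that turns a kriged ECa surface into zone-membership maps can be sketched from scratch. This is a generic fuzzy c-means with fuzzifier m = 2 (an assumed value); the paper's robust kriging, index-based selection of the number of zones, and validation are not reproduced here.

```python
import numpy as np

def fuzzy_kmeans(X, k=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on X (n_samples, n_features): returns memberships U and centers C."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)            # rows are fuzzy memberships
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster centers
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12
        inv = D ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True) # standard FCM membership update
    return U, C
```

Thresholding or argmax-ing the membership columns of `U` over the kriged grid yields the management-zone map.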

  17. Detection power of different approach for multiple unlinked QTLs.

    • plos.figshare.com
    xls
    Updated Jun 8, 2023
    Md. Mamun Monir; Mita Khatun; Md. Nurul Haque Mollah (2023). Detection power of different approach for multiple unlinked QTLs. [Dataset]. http://doi.org/10.1371/journal.pone.0208234.t001
    Explore at:
    xls
    Available download formats
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    PLOS, http://plos.org/
    Authors
    Md. Mamun Monir; Mita Khatun; Md. Nurul Haque Mollah
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Detection power of different approach for multiple unlinked QTLs.

  18. Geologic Map of the Tularosa Mountains 30´ × 60´ Quadrangle, Catron...

    • dataone.org
    Updated Oct 29, 2016
    James C. Ratte (2016). Geologic Map of the Tularosa Mountains 30´ × 60´ Quadrangle, Catron County, New Mexico [Dataset]. https://dataone.org/datasets/d8b7fe95-85c3-43df-9250-0a03e2bfee5a
    Explore at:
    Dataset updated
    Oct 29, 2016
    Dataset provided by
    United States Geological Survey, http://www.usgs.gov/
    Authors
    James C. Ratte
    Area covered
    Description

    This digital map database was compiled from previously published and unpublished data by the author and USGS colleagues, and from published maps by others, as indicated in figure 3 on the map sheet. A pamphlet included with the map provides a brief discussion of the geology of the quadrangle, a description of map units, and references cited.

  19. Summary results of dataset, with outliers, of the horizontal position error...

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Taeyoon Lee; Pete Bettinger; Chris J. Cieszewski; Alba Rocio Gutierrez Garzon (2023). Summary results of dataset, with outliers, of the horizontal position error of the GPS watch and mapping-grade GNSS receiver at the Whitehall Forest GPS Test Site in Athens, Georgia (USA). [Dataset]. http://doi.org/10.1371/journal.pone.0231532.t002
    Explore at:
    xls
    Available download formats
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Taeyoon Lee; Pete Bettinger; Chris J. Cieszewski; Alba Rocio Gutierrez Garzon
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Georgia, United States, Whitehall Forest, Athens
    Description

    Summary results of dataset, with outliers, of the horizontal position error of the GPS watch and mapping-grade GNSS receiver at the Whitehall Forest GPS Test Site in Athens, Georgia (USA).

  20. Per-Cloud Pixelated Map Result Tables (Machine Readable)

    • dataverse.harvard.edu
    Updated Feb 2, 2019
    Catherine Zucker (2019). Per-Cloud Pixelated Map Result Tables (Machine Readable) [Dataset]. http://doi.org/10.7910/DVN/74Y5KU
    Explore at:
    Croissant
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Feb 2, 2019
    Dataset provided by
    Harvard Dataverse
    Authors
    Catherine Zucker
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    A machine readable version of the pixelated map results for each cloud listed in Table 1. The results for each cloud are listed in a separate file, labeled by cloud name. For each model parameter, we report the 16th, 50th, and 84th percentile of the samples from our dynesty chain, which should be regarded as the statistical uncertainties. An additional systematic uncertainty of 5% should be added to the distances. The column headings are as follows:

    'name' is the cloud coincident with the sightline
    'l' is the Galactic longitude of the sightline (in degrees)
    'b' is the Galactic latitude of the sightline (in degrees)
    'n' is the normalization parameter
    'f' is the foreground extinction parameter (in mag)
    'm' is the cloud distance modulus parameter (in mag)
    'd' is the cloud distance (derived from m) in pc
    'p' is the outlier fraction parameter
    'sfore' is the foreground smoothing parameter
    'sback' is the background smoothing parameter

    See Section 3.2 for a complete description of the model parameters.
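Summarizing such a chain reduces to taking percentiles and converting the distance modulus; the modulus-to-parsec relation d = 10^(m/5 + 1) is the standard one (and matches 'd' being derived from 'm'), while the 5% systematic term follows the note above. Field names in the returned dict are illustrative, not the table's actual column names.

```python
import numpy as np

def summarize_distance(m_samples):
    """16th/50th/84th percentiles of a distance-modulus chain, plus distance in pc.

    Converts the median modulus via d = 10**(m/5 + 1) and reports the 5%
    systematic distance uncertainty described in the dataset notes.
    """
    m16, m50, m84 = np.percentile(m_samples, [16, 50, 84])
    d_pc = 10 ** (m50 / 5 + 1)
    return {"m16": m16, "m50": m50, "m84": m84,
            "d_pc": d_pc, "d_sys_pc": 0.05 * d_pc}
```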
