CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is the data repository for the PLOS ONE Manuscript: "Meeting radiation dosimetry capacity requirements of population-scale exposures by geostatistical sampling". This repository contains the following data:
1. "State-and-Subdivision-Boundary-Files.Edited-for-ArcMap-10.4.KML-Format.zip":
This file contains modified U.S. state and sub-division boundary files [in KML format], which can be imported into ArcMap using its KMLtoLayer function. These files have been modified to prevent sub-division naming issues that we encountered when importing boundary data into ArcMap: A) state sub-divisions with identical names are treated as a single sub-division by ArcMap (corrected by appending a letter to each sub-division of the same name, e.g. CenterA, CenterB, etc.); and B) ArcMap would identify a sub-division only by its first word if the sub-division name contained spaces (corrected by converting all spaces into dashes).
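For illustration, a minimal Python sketch of the two renaming rules described above (letter suffixes for duplicate sub-division names, dashes in place of spaces). The function name and input list are hypothetical; the actual processing in this repository was done with the Perl scripts listed in item 5.

```python
from collections import Counter
import string

def disambiguate_names(names):
    """Apply the two renaming rules described above:
    1) append a letter (A, B, ...) to sub-divisions that share a name,
    2) replace spaces with dashes so ArcMap keeps the full name."""
    counts = Counter(names)
    seen = Counter()
    fixed = []
    for name in names:
        if counts[name] > 1:
            suffix = string.ascii_uppercase[seen[name]]
            seen[name] += 1
            name = name + suffix
        fixed.append(name.replace(" ", "-"))
    return fixed

# Hypothetical example: two sub-divisions named "Center" and one multi-word name
print(disambiguate_names(["Center", "Center", "Eagle River"]))
# -> ['CenterA', 'CenterB', 'Eagle-River']
```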
2. "HPAC-Plumes.Processed.zip" and "HPAC-Plumes.Unprocessed.zip":
These files contain HPAC plume coordinate (WGS1984) and dose (in cGy) values for all scenarios discussed in the manuscript. We provide "processed" and "unprocessed" HPAC plume data files. The "unprocessed" HPAC plume data are provided in their original XML format, which cannot be imported into ArcMap directly. The "processed" HPAC plumes are provided in tab-delimited X,Y,Z format (Latitude, Longitude, and Dose). We have also added a "0 cGy" contour surrounding the HPAC plume in the "processed" files, as the presence of unirradiated data points adjacent to the plume proved crucial for accurate kriging: these points serve as its boundary.
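As a rough illustration of the "processed" format, the sketch below writes plume points as tab-delimited X,Y,Z (latitude, longitude, dose) and appends a surrounding ring of 0 cGy points as a kriging boundary. The spacing, margin, and file handling are hypothetical choices; the real conversion from HPAC XML was performed by the Perl scripts in item 5.

```python
import numpy as np

def write_processed_plume(points, out_path, margin=0.05, spacing=0.01):
    """points: iterable of (lat, lon, dose_cGy) tuples from an already-parsed HPAC plume.
    Writes tab-delimited X,Y,Z and adds a rectangular ring of 0 cGy points
    around the plume to serve as a kriging boundary (spacing/margin are
    illustrative values, not those used in the manuscript)."""
    pts = np.asarray(points, dtype=float)
    lat_min, lon_min = pts[:, 0].min() - margin, pts[:, 1].min() - margin
    lat_max, lon_max = pts[:, 0].max() + margin, pts[:, 1].max() + margin
    ring = []
    for lon in np.arange(lon_min, lon_max + spacing, spacing):
        ring += [(lat_min, lon, 0.0), (lat_max, lon, 0.0)]
    for lat in np.arange(lat_min, lat_max + spacing, spacing):
        ring += [(lat, lon_min, 0.0), (lat, lon_max, 0.0)]
    with open(out_path, "w") as fh:
        for lat, lon, dose in list(map(tuple, pts)) + ring:
            fh.write(f"{lat}\t{lon}\t{dose}\n")
```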
3. "Final-Derived-Plumes.Data-Points.zip":
This file contains geostatistically derived plume coordinate (WGS1984) and dose (in cGy) values for all scenarios discussed in the manuscript. Data are in comma-delimited format (Latitude, Longitude, and Dose). Data points consist of a set of initial coordinates generated at random locations within each Census sub-division using the ArcMap tool ‘CreateRandomPoints_management’, and subsequent points generated by densification (the geostatistical procedure that targets and localizes an additional small cohort of irradiated individuals to mitigate uncertainty in environmental measurements). These data points were assigned radiation level values corresponding to the adjacent outer HPAC contour by a script that compares each sample's location with the HPAC plume of the same scenario.
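A minimal sketch of the initial random-sampling step using the ArcPy tool named above. The paths, output names, and points-per-subdivision count are placeholders, and the dose assignment against the HPAC contours was handled separately by the project's Perl script.

```python
# Run inside the ArcMap/ArcPy Python environment.
import arcpy

arcpy.env.overwriteOutput = True

subdivisions = r"C:\data\census_subdivisions.shp"   # placeholder path
out_gdb      = r"C:\data\sampling.gdb"              # placeholder path

# One random point per Census sub-division polygon (an illustrative count;
# the manuscript's sampling density may differ).
arcpy.CreateRandomPoints_management(
    out_path=out_gdb,
    out_name="initial_samples",
    constraining_feature_class=subdivisions,
    number_of_points_or_field=1,
)
```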
4. “Intermediate-Derived-Plumes.Data-Points.zip”
This archive contains coordinate data (WGS1984) and dose values (in cGy) for all intermediary steps of plume development (using our geostatistical method) for all scenarios. As in (3), the data are comma-delimited (Latitude, Longitude, and Dose) and were assigned radiation level values by a script comparing sampling locations with the location of the HPAC plume of the same scenario. Scenario replicate folders contain text files for each iteration step of the plume derivation process, including a file containing just the initial random sampling (“Iteration-1”), a file containing the initial sampling and the sampling locations selected by the first densification step (“Iteration-2”), a file containing the initial sampling and the sampling locations selected by the first and second densification steps (“Iteration-3”), and so on.
This archive also contains a table (“Progression-of-New-Densification-Selected-Sampling-Locations-For-All-Scenarios.xslx”) that provides a categorical breakdown of how many unique densification-selected sampling locations occur within the irradiated region (i.e. overlap the HPAC plume) for each iteration of every scenario replicate. The proportion of irradiated to unirradiated sampling locations varies among scenarios and among individual replicates of the same scenario. Our analysis shows that these results depend on the population densities and on the exact topography of the HPAC plume, which differs among scenarios.
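A sketch of the kind of per-iteration tally summarized in that table, assuming the plume boundary is available as a polygon and that each iteration file holds comma-delimited latitude, longitude, dose rows with no header. The file layout and the use of shapely are illustrative, not the project's actual Java/Perl tooling.

```python
import csv
from shapely.geometry import Point, Polygon

def irradiated_count(iteration_csv, plume_polygon):
    """Count sampling locations in one iteration file that fall inside the
    HPAC plume polygon (i.e. are 'irradiated')."""
    inside = total = 0
    with open(iteration_csv, newline="") as fh:
        for lat, lon, dose in csv.reader(fh):  # assumes no header row
            total += 1
            if plume_polygon.contains(Point(float(lon), float(lat))):
                inside += 1
    return inside, total

# Hypothetical plume outline and iteration files
plume = Polygon([(-77.0, 38.8), (-76.9, 38.8), (-76.9, 38.9), (-77.0, 38.9)])
for i in (1, 2, 3):
    n_in, n_all = irradiated_count(f"Iteration-{i}.txt", plume)
    print(f"Iteration-{i}: {n_in}/{n_all} locations overlap the plume")
```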
5. "Geostatistical-Sampling-Project.All-Scripts.zip"
This archive contains all programs required for this project. This includes Python scripts meant to be run within the ArcMap software environment (for random point generation and data extraction), and Perl scripts used to process HPAC and U.S. state and sub-division boundary files and to assign radiation values to sample locations based on a modified HPAC plume. A Java program, “CompareReplicates.jar”, compares the overlapping areas of a pair of polygons that overlap one another; it uses the ArcMap software environment and requires access to the ArcGIS Runtime SDK (https://developers.arcgis.com/arcgis-runtime/).
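For readers without access to the ArcGIS Runtime SDK, the overlap comparison performed by CompareReplicates.jar can be approximated with shapely, as in the sketch below. This is a deliberate substitution for illustration, not the project's implementation, and the coordinates are hypothetical.

```python
from shapely.geometry import Polygon

def overlap_stats(poly_a, poly_b):
    """Return the intersection area and the fraction of each polygon covered
    by the other (a rough stand-in for the CompareReplicates.jar output)."""
    inter = poly_a.intersection(poly_b).area
    return inter, inter / poly_a.area, inter / poly_b.area

a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
b = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])
print(overlap_stats(a, b))  # (1.0, 0.25, 0.25)
```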
Geostatistics has attracted the attention of many earth scientists and engineers who need better modeling tools for natural gas reservoirs. Two years ago Correlations Company responded to this need through the peer-reviewed DOE Small Business Innovation Research program to develop a fractal algorithm for interpolating between measurements and mapping the consequences. During the two-year research period, Correlations Company combined geostatistical modeling with high-quality graphics to produce Gviz. This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. Until recently, geostatistical modeling was only available to the limited number of earth scientists familiar with UNIX-based platforms. Gviz runs on any PC with the Windows 95 or Windows NT operating system. The Gviz pre-processing module reads LAS and ASCII files and facilitates selection of the stratigraphic units prior to processing by a nearest neighbor, kriging and co-kriging, conditional simulation, or fractal module. A user-friendly GUI simplifies the examination of the statistical data and the geostatistical analyses using isotropic and anisotropic variograms. After completing the analyses, the post-processing unit can generate 1D models of well logs, 2D models such as cross-sections, or a 3D model of any petrophysical property. Post-processing includes the display of reservoir slices, multiple cross-sections, rotation along any axis, and identification of geobodies (to visually inspect the effect of porosity cutoffs on connected pore volume). The post-processor includes an up-scaling module to transform a fine-scale grid into a reservoir simulation grid, which can then be exported in an Eclipse format. Gviz emphasizes a self-explanatory GUI and visually oriented help pages, which guide even a novice through the process of generating realistic, two- to five-million-cell 3D reservoir models. Beta testing of Gviz will finish in April 1997, and a working version of the PC software package, at one-fifth of the cost of a comparable UNIX system, will be available to domestic gas and oil producers in mid-1997.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of the generalized linear geostatistical model distributions implemented in MBGapp: The conditional distribution of Yi given the random effects S(xi) and Zi, including its expectation, variance function and link function.
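For orientation, a generic statement of the generalized linear geostatistical model that such a table summarizes (a standard Diggle-Ribeiro-style formulation; the exact notation, link functions, and variance functions implemented in MBGapp may differ):

```latex
% Generic generalized linear geostatistical model (GLGM); MBGapp's notation may differ.
\begin{align*}
  Y_i \mid S(x_i), Z_i &\;\sim\; \text{mutually independent, with mean } \mu_i,\\
  g(\mu_i) &= d(x_i)^{\top}\beta + S(x_i) + Z_i,\\
  S(\cdot) &\sim \text{zero-mean Gaussian process}, \qquad Z_i \overset{iid}{\sim} \mathcal{N}(0,\tau^{2}),
\end{align*}
where $g$ is the link function and $\operatorname{Var}(Y_i \mid \mu_i) = \phi\, v(\mu_i)$ for a variance function $v$.
```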
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The analysis of 87Sr/86Sr has become a robust tool for identifying non-local individuals at archeological sites. The 87Sr/86Sr in human bioapatite reflects the geological signature of food and water consumed during tissue development. Modeling relationships between 87Sr/86Sr in human environments, food webs, and archeological human tissues is critical for moving from identifying non-locals to determining their likely provenience. In the Andes, obstacles to sample geolocation include overlapping 87Sr/86Sr of distant geographies and a poor understanding of mixed strontium sources in food and drink. Here, water is investigated as a proxy for bioavailable strontium in archeological human skeletal and dental tissues. This study develops a water 87Sr/86Sr isoscape from 262 samples (220 new and 42 published samples), testing the model with published archeological human skeletal 87Sr/86Sr trimmed of probable non-locals. Water 87Sr/86Sr and prediction error between the predicted and measured 87Sr/86Sr for the archeological test set are compared by elevation, underlying geology, and watershed size. Across the Peruvian Andes, water 87Sr/86Sr ranges from 0.7049 to 0.7227 (M = 0.7081, SD = 0.0027). Water 87Sr/86Sr is higher in the highlands, in areas overlying older bedrock, and in larger watersheds, characteristics that are geographically correlated. The spatial outliers identified are from canals, wells, and one stream, suggesting those sources could show non-representative 87Sr/86Sr. The best-fit water 87Sr/86Sr isoscape achieves prediction errors for archeological samples ranging from 0.0017–0.0031 (M = 0.0012, n = 493). The water isoscape explains only 7.0% of the variation in archeological skeletal 87Sr/86Sr (R2 = 0.07), but 90.0% of archeological skeletal 87Sr/86Sr values fall within the site isoscape prediction ± site prediction standard error. Due to lower sampling density and higher geological variability in the highlands, the water 87Sr/86Sr isoscape is more useful for ruling out geographic origins for lowland dwellers than for highlanders. Baseline studies are especially needed in the highlands and poorly sampled regions. Because the results demonstrate that a geostatistical water model is insufficient for fully predicting human 87Sr/86Sr variation, future work will incorporate additional substrates like plants, fauna, soils, and dust, aiming to eventually generate a regression and process-based mixing model for the probabilistic geolocation of Andean samples.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract One of the basic factors in mine operational optimization is knowledge regarding mineral deposit features, which allows prediction of the deposit's behavior. This can be achieved by conditional geostatistical simulation, which allows evaluation of deposit variability (uncertainty band) and its impact on project economics. However, a large number of realizations can be computationally expensive when applied in a transfer function; the transfer function used in this study was the net present value (NPV). Hence, there is a need to reduce the number of realizations obtained by conditional geostatistical simulation in order to make the process more dynamic while maintaining the uncertainty band. This study made use of machine-learning techniques, such as multidimensional scaling and hierarchical cluster analysis, to reduce the number of realizations based on the Euclidean distance between simulation grids. The approach was tested by generating 100 realizations with the sequential Gaussian simulation method on a database. The results show that similar uncertainty analysis results can be obtained from a smaller number of simulations previously selected by the methodology described in this study, when compared to using all simulations.
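A minimal sketch of that selection idea: embed realizations by their pairwise Euclidean distances with multidimensional scaling, cluster them hierarchically, and keep one representative per cluster. The grid sizes, number of clusters, and the centroid-based choice of representative are illustrative assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
realizations = rng.normal(size=(100, 50 * 50))  # 100 simulated grids, flattened

# Pairwise Euclidean distances between simulation grids
dist = squareform(pdist(realizations, metric="euclidean"))

# 2-D MDS embedding of the realizations (useful for inspection/plotting)
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dist)

# Hierarchical clustering on the same distances; keep, e.g., 10 clusters
Z = linkage(pdist(realizations), method="ward")
labels = fcluster(Z, t=10, criterion="maxclust")

# One representative per cluster: the realization closest to its cluster's centroid
selected = []
for k in np.unique(labels):
    idx = np.flatnonzero(labels == k)
    centroid = realizations[idx].mean(axis=0)
    selected.append(idx[np.argmin(np.linalg.norm(realizations[idx] - centroid, axis=1))])
print(sorted(selected))  # indices of the reduced realization set
```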
The Tagus River Alluvial banks are located in the central area of the Tagus Basin in Portugal. Due to their porous hydrological formations and their hydraulic connection to the Tagus River, the alluvial banks show good water productivity at the national scale, but their strategic geographical location is promoting overexploitation of their sources for public water supply, agriculture, and industry. This study aims to characterize the geometry of the Tagus River Alluvial banks by estimating their depth. The geometry, combined with their hydraulic behaviour, provides a way to predict their capacity to respond under different scenarios of water exploitation. To build the model we followed three main steps: data mining and processing of 60 drilling logs, spatial exploratory data analysis, and geostatistical methods. The proposed model is validated against the literature and shows, as previous studies do, that the alluvial banks are deeper in the downstream sector.
https://spdx.org/licenses/CC0-1.0.html
The intensity and spatial extent of tropical cyclone precipitation (TCP) often shapes the risk posed by landfalling storms. Here we provide a comprehensive climatology of landfalling TCP characteristics as a function of tropical cyclone strength, using daily precipitation station data and Atlantic US landfalling tropical cyclone tracks from 1900-2017. We analyze the intensity and spatial extent of ≥ 1 mm/day TCP (Z1) and ≥ 50 mm/day TCP (Z50) over land. We show that the highest median intensity and largest median spatial extent of Z1 and Z50 occur for major hurricanes that have weakened to tropical storms, indicating greater flood risk despite weaker wind speeds. We also find some signs of TCP change in recent decades. In particular, for major hurricanes that have weakened to tropical storms, Z50 intensity has significantly increased, indicating possible increases in flood risk to coastal communities in more recent years.
Methods
1. Station precipitation and tropical cyclone tracks
We use daily precipitation data from the Global Historical Climatology Network (GHCN)-Daily station dataset (Menne et al., 2012) and TC tracks archived in the revised HURricane DATabase (HURDAT2) database (Landsea & Franklin, 2013). HURDAT2 is a post-storm reanalysis that uses several datasets, including land observations, aircraft reconnaissance, ship logs, radiosondes, and satellite observations to determine tropical cyclone track locations, wind speeds and central pressures (Jarvinen et al., 1984; Landsea & Franklin, 2013). We select 1256 US stations from the GHCN-Daily dataset that have observations beginning no later than 1900 and ending no earlier than 2017 (though most station records are not continuous throughout that period). These 1256 land-based stations are well distributed over the southeastern US and Atlantic seaboard (see Supporting Figure S1).
We use the HURDAT2 Atlantic database to select locations and windspeeds of TC tracks that originated in the North Atlantic Ocean, Gulf of Mexico and Caribbean Sea, and made landfall over the continental US. Though tracks are determined at 6-hourly time steps for each storm (with additional timesteps that indicate times of landfall, and times and values of maximum intensity), we limit our analysis to track points recorded at 1200 UTC, in order to match the daily temporal resolution and times of observation of the GHCN-Daily precipitation dataset (Menne et al., 2012), as well as the diurnal cycle of TCP (Gaona & Villarini, 2018). Although this temporal matching technique may omit high values of precipitation from the analysis, it reduces the possibility of capturing precipitation that is not associated with a TC.
For each daily point in the tropical cyclone track, we use the maximum sustained windspeed to place the storm into one of three Extended Saffir-Simpson categories: tropical storms (“TS”; 34-63 knots), minor hurricanes (“Min”; categories 1 and 2; 64-95 knots), and major hurricanes (“Maj”; categories 3 to 5; > 96 knots) (Schott et al., 2012). Additionally, for each track, we record the category of the lifetime maximum intensity (LMI), based on the maximum windspeed found along the whole lifetime of the track (i.e., using all available track points). LMI is a standard tropical cyclone metric, and is considered a robust measure of track intensity through time and across different types of data integrated into the HURDAT2 reanalysis (Elsner et al., 2008; Kossin et al., 2013, 2014). Therefore, for each track point, a dual category is assigned: the first portion of the classification denotes the category of the storm for a given point (hereafter “point category”), while the second denotes the LMI category. The combination of the two can thus be considered a “point-LMI category”. For example, the point on August 27, 2017 at 1200 UTC along Hurricane Harvey’s track is classified as TS-Maj because it is a tropical storm (TS) at this point but falls along a major hurricane LMI track (see starred location in Supporting Figure S2a). Given that the LMI category for a given point cannot be weaker than the point category itself, the set of possible point-LMI category combinations for each track point is TS-TS, TS-Min, TS-Maj, Min-Min, Min-Maj, and Maj-Maj. This dual classification allows us to explore climatological TCP spatial extents and intensities during the tropical cyclone lifetime. Our dual classification does not account for the timing of the point category relative to the LMI category for a given point along a track (i.e., the time-lag between the LMI and point in consideration). However, the majority of points selected in our analysis occur after the TC has reached its LMI and are in the weakening stage (see Supporting Table S1 for more details). This could be expected, as our analysis is focused on land-based precipitation stations, and TCs weaken over land. However, a small fraction of TC points analyzed occur over the ocean before making landfall, but are close enough to land for precipitation gauges to be impacted.
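A small sketch of this dual classification, assuming windspeeds in knots; the category boundaries are taken from the text, and the track data structure is hypothetical.

```python
def point_category(wind_kt):
    """Extended Saffir-Simpson bins used in the text (windspeed in knots)."""
    if wind_kt >= 96:
        return "Maj"
    if wind_kt >= 64:
        return "Min"
    if wind_kt >= 34:
        return "TS"
    return None  # below tropical-storm strength

def point_lmi_categories(track_winds_kt):
    """Assign each track point a 'point-LMI' label, e.g. 'TS-Maj'."""
    lmi = point_category(max(track_winds_kt))
    return [f"{point_category(w)}-{lmi}" for w in track_winds_kt
            if point_category(w) is not None]

# Hypothetical track that peaks as a major hurricane, then weakens over land
print(point_lmi_categories([40, 70, 100, 115, 80, 45]))
# -> ['TS-Maj', 'Min-Maj', 'Maj-Maj', 'Maj-Maj', 'Min-Maj', 'TS-Maj']
```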
Moving neighborhood method for TCP spatial extent and intensity
We first find the distribution of tropical cyclone precipitation (TCP) intensity using all daily land precipitation values from all available stations in a 700 km-radius neighborhood around each point over land on each tropical cyclone track (Figure 1a and Supporting Figure S2). We then create two new binary station datasets, Z1(x) and Z50(x), which indicate whether or not a station meets or exceeds the 1 mm/day or 50 mm/day precipitation threshold, respectively, on a given day. The 50 mm/day threshold is greater than the 75th percentile of TCP across all tropical cyclone categories (Figure 1a), allowing us to capture the characteristics of heavy TCP while retaining a robust sample size. The 1 mm/day threshold captures the extent of the overall TCP around the TC track point.
We use the relaxed moving neighborhood and semivariogram framework developed by Touma et al. (2018) to quantify the spatial extent of Z1 and Z50 TCP for each track point. Using a neighborhood with a 700 km radius around each track point, we select all station pairs that meet two criteria: at least one station has to exhibit the threshold precipitation on that given day (Z(x) = 1; blue and pink stations in Supporting Figure S2b), and at least one station has to be inside the neighborhood (black and pink stations in Supporting Figure S2b). We then calculate the indicator semivariogram, g(h), for each station pair selected for that track point (Eq. 1):
γ(h) = 0.5*[Z(x+h) - Z(x)]², Eq. 1
where h is the separation distance between the stations in the station pair. The indicator semivariogram is a function of the separation distance, and has two possible outcomes: all pairs with two threshold stations (Z(x) = Z(x+h) = 1) have a semivariogram value of 0, and all pairs with one threshold station and one non-threshold station (Z(x) = 1 and Z(x+h) = 0) have a semivariogram value of 0.5.
We then average the semivariogram values for all station pairs for equal intervals of separation distances (up to 1000 km) to obtain the experimental semivariogram (Supporting Figure S2c). To quantify the shape of the experimental semivariogram, we fit three parameters of the theoretical spherical variogram (nugget, partial sill, and practical range) to the experimental semivariogram (Eq. 2):
γ(h) = 0, for h = 0
γ(h) = c + b*((3/2)(h/a) - (1/2)(h/a)³), for 0 < h ≤ a
γ(h) = c + b, for h ≥ a, Eq. 2
where c is the nugget, b is the partial sill, and a is the practical range (Goovaerts, 2015). The nugget quantifies measurement errors or microscale variability, and the partial sill is the maximum value reached by the spherical semivariogram (Goovaerts, 2015). The practical range is the separation distance at which the semivariogram asymptotes (Supporting Figure S2c). At this separation distance, station pairs are no longer likely to exhibit the threshold precipitation (1 mm/day or 50 mm/day) simultaneously (Goovaerts, 2015; Touma et al., 2018). Therefore, as in Touma et al. (2018), we define the length scale – or spatial extent – of TCP for that given track point as the practical range.
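A compact sketch of the experimental indicator semivariogram and the spherical fit described by Eqs. 1-2, using numpy and scipy. The synthetic station layout, binning, and fitting details are illustrative rather than the exact Touma et al. (2018) implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, c, b, a):
    """Spherical model of Eq. 2: nugget c, partial sill b, practical range a."""
    g = c + b * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
    return np.where(h >= a, c + b, g)

def experimental_indicator_semivariogram(xy_km, z, bins):
    """Average g(h) = 0.5*(Z(x+h)-Z(x))^2 over station pairs in each distance bin.
    xy_km: (n, 2) station coordinates in km; z: 0/1 threshold-exceedance indicator."""
    d = np.linalg.norm(xy_km[:, None, :] - xy_km[None, :, :], axis=-1)
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)                   # each pair once
    keep = (z[:, None] | z[None, :]).astype(bool)[iu]   # >= 1 threshold station in the pair
    d, g = d[iu][keep], g[iu][keep]
    idx = np.digitize(d, bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    gamma = np.array([g[idx == i + 1].mean() if np.any(idx == i + 1) else np.nan
                      for i in range(len(centers))])
    return centers, gamma

# Synthetic example: stations on a 1000 km domain, threshold exceeded inside a ~200 km blob
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(300, 2))
z = (np.linalg.norm(xy - 500, axis=1) < 200).astype(int)
h, gamma = experimental_indicator_semivariogram(xy, z, np.linspace(0, 1000, 21))
ok = ~np.isnan(gamma)
(c, b, a), _ = curve_fit(spherical, h[ok], gamma[ok], p0=[0.0, 0.25, 300.0],
                         bounds=([0, 0, 1], [0.5, 0.5, 1000]))
print(f"practical range (length scale) ~ {a:.0f} km")
```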
There are some subjective choices in the moving neighborhood and semivariogram framework, including the 700 km neighborhood radius (Touma et al., 2018). Previous studies found that 700 km is sufficient to capture the extent to which tropical cyclones influence precipitation (e.g., Barlow (2011), Daloz et al. (2010), Hernández Ayala & Matyas (2016), Kim et al. (2014), Knaff et al. (2014), Knutson et al. (2010), and Matyas (2010)). Additionally, Touma et al. (2018) showed that although the neighborhood size can slightly affect the magnitude of length scales, it has little impact on their relative spatial and temporal variations.
We use Mood’s median test (Desu & Raghavarao, 2003) to test for differences in the median TCP intensity and spatial extent among point-LMI categories, adjusting p-values to account for multiple simultaneous comparisons (Benjamini & Hochberg, 1995; Holm, 1979; Sheskin, 2003). To test for changes in TCP characteristics over time, we divide our century-scale dataset into two halves, 1900-1957 and 1958-2017. First, the quartile boundaries are established using the distributions of the earlier period (1900-1957), with one-quarter of that distribution falling in each quartile. Then, we find the fraction of points in each quartile in the later period (1958-2017) to determine changes in the distribution. We also report the p-values of the Kolmogorov-Smirnov test.
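A brief sketch of the kind of comparison described: Mood's median test between groups via scipy, with Benjamini-Hochberg adjustment across multiple comparisons via statsmodels. The sample data and the set of compared categories are placeholders.

```python
import numpy as np
from scipy.stats import median_test
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
# Placeholder TCP length-scale samples (km) for three point-LMI categories
groups = {
    "TS-TS":   rng.gamma(4.0, 50.0, 200),
    "TS-Maj":  rng.gamma(5.0, 55.0, 150),
    "Maj-Maj": rng.gamma(4.5, 52.0, 100),
}

pairs, pvals = [], []
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        stat, p, med, table = median_test(groups[names[i]], groups[names[j]])
        pairs.append((names[i], names[j]))
        pvals.append(p)

# Adjust for the multiple simultaneous comparisons (Benjamini-Hochberg FDR)
reject, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for pair, p, r in zip(pairs, p_adj, reject):
    print(pair, f"adjusted p = {p:.3f}", "different medians" if r else "")
```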
A statistical analysis was performed with overburden characterization data obtained from a US Forest Service study site in the Powder River Basin, Wyoming. The drilling and overburden characterization program had been performed during 1977 and 1978, and this information was provided to the Laramie Energy Technology Center by the US Forest Service. Three basic goals were accomplished during this study. First, determine how overburden data obtained from drill cuttings compare with overburden data obtained from core samples. Second, determine the basic chemical and physical characteristics of the overburden. Third, determine the minimum drill hole spacing required to adequately characterize the overburden. The R-squared statistic was used as a measure of correlation between drill cutting samples and core samples. Most R-squared values were less than 50%; it was therefore concluded that geostatistical structure cannot be predicted accurately in an overburden study when drill cuttings are used. Principal component R-mode factor analysis with varimax rotation was used to characterize the overburden. Thirty-one variables were used in the factor analysis, which yielded twelve distinct factors that explained ninety percent of the total variation. A two-stage sequential drilling procedure was developed that moves in a stepwise manner toward a predetermined level of accuracy until that level is reached. Thus, the desired level of accuracy can be reached without over-drilling an area. 7 figures, 11 tables.
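For illustration only, a minimal factor-analysis-with-varimax sketch in the spirit of the overburden characterization described above. The original study used R-mode principal component factor analysis on 31 variables; the synthetic data, factor count, and scikit-learn maximum-likelihood implementation here are substitutions for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Synthetic stand-in for an overburden geochemistry table: 200 samples x 31 variables
latent = rng.normal(size=(200, 5))
loadings_true = rng.normal(size=(5, 31))
X = latent @ loadings_true + 0.3 * rng.normal(size=(200, 31))

# Standardize, then fit a factor model with varimax rotation
Xz = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0).fit(Xz)

# Rotated loadings (variables x factors) and the variance each variable shares with the factors
loadings = fa.components_.T
communalities = (loadings ** 2).sum(axis=1)
print("mean communality:", round(communalities.mean(), 2))
```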
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Avalanches represent a very high risk to residential areas, road infrastructure, the environment, and the economy, and can have fatal consequences if no preventive action is taken. Advances in geospatial technology and access to spatial data have enabled spatial analysis to assist decision-making regarding spatial planning in avalanche-prone locations. Determining locations with snow avalanche discharge potential is a crucial step in the avalanche zoning process.
This research deals with delineating areas with snow avalanche potential, based mainly on topographic factors followed by meteorological ones. Topographic factors were determined mainly through morphometric techniques implemented in geographic information systems (GIS), and meteorological factors from statistical data and various processing of spatial and non-spatial data. The spatial analyses are also supported by the geostatistical methods Fuzzy Logic and AHP, which, in interaction with GIS, enabled the purpose of this paper to be achieved. The results of the spatial analysis were verified using comparison methods such as the ROC method, applied during the final phase; this verification showed that the methods used in this research gave satisfactory results. As the main result, we obtained maps of areas with snow avalanche discharge potential in the study area for the two methods.
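A small sketch of the AHP weighting step mentioned above: derive factor weights from a pairwise comparison matrix via its principal eigenvector and check the consistency ratio. The factor names and comparison values are hypothetical, not those used in the study.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for three avalanche factors:
# slope, aspect, snow depth.  A[i, j] = importance of factor i relative to factor j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio CR = CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty's table)
lambda_max = eigvals.real[k]
ci = (lambda_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```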
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Climate change is making intense DANA-type rains (depresión aislada en niveles altos, an isolated upper-level depression) a more frequent phenomenon in Mediterranean basins. This trend, combined with the transformation of the territory caused by diffuse anthropization processes, has created an explosive cocktail for many coastal towns in the form of flooding events. To evaluate this problem and the impact of its main guiding parameters, a geostatistical analysis of the territory based on GIS indicators and an NDVI (Normalized Difference Vegetation Index) analysis is developed. The proposed methodology is assessed on the case study of the Campo de Cartagena watershed, located around the Mar Menor, a Mediterranean coastal lagoon in Southeastern Spain. This area suffered three catastrophic floods derived from the DANA phenomenon between 2016 and 2019. The results show that, apart from the effects derived from climate change, the real issue amplifying the damage caused by floods is the diffuse anthropization process in the area, which has caused the loss of the natural hydrographic network that traditionally existed in the basin.
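Since the analysis relies on NDVI, a short reminder of how it is computed from red and near-infrared reflectance; this is a generic sketch, and the band sources and preprocessing used in the study are not specified here.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)  # clip avoids division by zero

# Hypothetical 2x2 reflectance patches: vegetated pixels give NDVI near 1
print(ndvi([[0.5, 0.6], [0.4, 0.3]], [[0.1, 0.1], [0.2, 0.25]]))
```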
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT Sandy-textured soils are naturally more vulnerable to the erosion process, and their exploitation, although possible, is often performed inappropriately, favoring their degradation. In this context, this study aimed to classify the rainfall erosivity in a region of sandy soils in order to identify critical situations of soil and water loss, and to correlate it with rainfall data to assess, using geostatistical techniques, whether this variable shows temporal dependence. The potential for alternative and sustainable production systems to be used in regions with sandy soils was also analyzed. Historical precipitation data for the study region were analyzed to determine the average monthly and annual erosivity indices, which were classified, and their temporal dependence was assessed by applying geostatistics. NDVI data from satellite images were used to investigate the soil cover pattern in different production systems. Geostatistics was adequate for the analysis of rainfall erosivity, which showed moderate to strong temporal dependence. Erosivity was classified as strong to very strong and was highly dependent on precipitation, with events of higher erosion potential between October and March in the studied region. The vicious circle of degradation of sandy soils, such as those of the Bolsão region of Mato Grosso do Sul, Brazil, can be modified by adopting alternative and sustainable production systems that maximize soil cover.
Sleepsites_1997_oddshift.csv
SleepsitesAA1990-95_bjbshift.csv

The file Sleepsites_1997_oddshift.csv is from the first season in which electronic devices (Psion Observers) were used to record data in the field (though these were still supplemented with field notes having hand-drawn maps to help us find sleep sites the following morning, as this was our first field season working with RR group, so its home range had not yet been mapped). The file 1993AAsubsetData.xlsx is a chunk of the larger file SleepsitesAA1990-95_bjbshift.csv that is mentioned in the R code. This is from the first 3 years of the project when we focused exclusively on the AA group, recording data on microcassette recorders and writing additional notes about sleep site locations in field notebooks. Originally there were more columns, including (a) values for a 50 m grid system (which we abandoned when it became clear that we could rarely pinpoint the precise grid cell at that level of granularity), (b) columns indicating whether we saw the monkeys curl up in their sleep trees, or at least were with them during the last hour before sundown, (c) extensive comments about how certain we were about the specific grid cell assigned and listing other possible values (when we were considering listing all possible grid cell values), and (d) comments containing notes from Susan about which sleep sites needed to be double checked via older versions of our maps, and then checked in the field by finding those points and taking a GPS reading. When this became possible, we corrected the Grid100 value. The versions presented here use only the columns that are used in the present analysis, after we had refined our data processing workflow as described in the manuscript.

* date: the date representing the evening on which we watched the monkeys settle into their sleep tree for the night.
* location: a description of where the monkeys slept.
* group: the group of monkeys monitored on that date.
* Grid100: the number assigned to the grid cell in the 100 m2 grid. Note that the numbers in the entire grid published in the code repository have been systematically altered to protect these threatened monkeys, so that their sleep sites cannot be easily located by non-researchers.
* certainty?: As described in the manuscript, a score of 0 indicates that we can confidently assign the sleep site to that grid cell. A score of 1 means that the sleep site lies within a radius of 1 away from the grid cell assigned, i.e. could be within that cell or one of the 8 neighboring grid cells (corresponding to 300 meters squared). A score of 2 means that the sleep site was within 2 grid cells of the one chosen (corresponding to 500 meters squared), etc.

#### Demographic data: annual_group_sizes.csv
This file reports the group size (including members of all age-sex classes) for each social group in each year, derived from the Lomas Barbudal Monkey Project demographic database, using only census days in which trained observers were present in the group for at least 6 hours.

#### Raw GPS data:

For conservation reasons, i.e., to protect the animals from the pet trade and poachers, these data will not be made public. They are archived at Movebank, and can be accessed here, pending permission of the curator (Susan Perry): https://www.movebank.org/cms/webapp?gwt_fragment=page=studies,path=study3389013696

## Description of the relationship between the data and the code used to analyze it:

To analyze these data, please refer to the code found at the following location: https://doi.org/10.17617/3.HUKMS6 The descriptions of sleep sites were transformed to location data using the quadrant centroids in which they fell, using a map that had a 100x100m2 grid. The code for this and other procedures described below can be found in 01_georeferencing/scripts/00_hist_slp_data_processing.R.
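A rough Python sketch of the grid-to-centroid idea described above (the project's actual implementation is the R script named above). The grid origin, cell numbering scheme, and coordinate system here are purely hypothetical, especially since the published grid numbers have been deliberately altered.

```python
def grid_cell_centroid(grid100, origin_x=0.0, origin_y=0.0, ncols=40, cell=100.0):
    """Map a Grid100 cell number to the centroid of its 100 m x 100 m quadrant,
    assuming (hypothetically) row-major numbering starting at 1 from a known
    south-west origin in a projected coordinate system (metres)."""
    row, col = divmod(grid100 - 1, ncols)
    x = origin_x + col * cell + cell / 2.0
    y = origin_y + row * cell + cell / 2.0
    return x, y

print(grid_cell_centroid(42))  # hypothetical cell 42 -> (150.0, 150.0)
```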
All location data (georeferenced data, GPS-tracking data, and GPS sleep-site data) were used to estimate home ranges using the ctmm package in R. To do this, one can follow 02_validation/scripts for annotated steps within the 01 and 02 scripts. Next, the home ranges using sleep site data were validated, using the scripts 03-06, by com...

Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Proposal distributions for RJMCMC within-model moves for each analysis.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This study provides a risk assessment and risk maps related to the consumption of water contaminated by Al, Ba, Fe and Pb in an industrial area in the Brazilian Amazon. A total of 120 samples of drinking water were collected from 26 locations in the municipality of Barcarena, Pará State. Multiple elements were analyzed by inductively coupled plasma optical emission spectrometry; the quantifiable elements in the samples were Al, Ba, Fe and Pb. Risk assessment was performed according to U.S. Environmental Protection Agency (USEPA) procedures. Results indicate that the highest potential risk of non-carcinogenic adverse health effects for Al was on São João Island; for Ba, Fe and Pb, the highest risks (hazard quotient (HQ) > 1) were in the Porto da Balsa community, the city of Barcarena, and the Distrito Industrial community, respectively. Maps showed that areas located near Barcarena’s industrial complex are the most affected by water contamination. Therefore, these populations are at higher risk of non-carcinogenic problems, especially children and the elderly, since the majority of the population resides in these areas. Geospatial analysis contributed to delimiting and analyzing risk-change trends in the region, expanding the scope of results to a decision-making process.
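A short sketch of the USEPA-style hazard quotient calculation underlying such maps: chronic daily intake from drinking water divided by the oral reference dose. The exposure parameters and reference dose below are generic illustrative placeholders, not the values used in the study.

```python
def hazard_quotient(conc_mg_l, rfd_mg_kg_day, intake_l_day=2.0, ef_days=350,
                    ed_years=30, bw_kg=70.0):
    """HQ = CDI / RfD, with CDI = (C * IR * EF * ED) / (BW * AT) and AT = ED * 365 days.
    Parameter defaults are generic USEPA-style adult assumptions, used here only
    for illustration."""
    at_days = ed_years * 365
    cdi = (conc_mg_l * intake_l_day * ef_days * ed_years) / (bw_kg * at_days)
    return cdi / rfd_mg_kg_day

# Hypothetical: 0.05 mg/L Ba against an assumed oral RfD of 0.2 mg/kg-day -> HQ well below 1
print(round(hazard_quotient(0.05, 0.2), 3))
```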