This dataset consists of raster GeoTIFF outputs of 30-year average annual land use and land cover transition probabilities for the California Central Valley, modeled for the period 2011-2101 across five future scenarios. The full methods and results of this research are described in detail in “Integrated modeling of climate, land use, and water availability scenarios and their impacts on managed wetland habitat: A case study from California’s Central Valley” (2021). Land-use and land-cover change for California's Central Valley was modeled using the LUCAS model, and five different scenarios were simulated from 2011 to 2101 across the entirety of the valley. The five future scenario projections originated from the four scenarios developed as part of the Central Valley Landscape Conservation Project (http://climate.calcommons.org/cvlcp). The four original scenarios are Bad-Business-As-Usual (BBAU; high water availability, poor management), California Dreamin’ (DREAM; high water availability, good management), Central Valley Dustbowl (DUST; low water availability, poor management), and Everyone Equally Miserable (EEM; low water availability, good management). These scenarios represent alternative plausible futures, capturing a range of climate variability, land management activities, and habitat restoration goals. We parameterized our models based on close interpretation of these four scenario narratives to best reflect stakeholder interests, adding a baseline Historical Business-As-Usual scenario (HBAU) for comparison. The TGAP raster maps represent the average annual transition probability of a cell over a specified time period for a specified land use transition group and type. Each filename has the associated scenario ID (scn418 = DUST, scn419 = DREAM, scn420 = HBAU, scn421 = BBAU, and scn426 = EEM), transition group (e.g.
FALLOW, URBANIZATION), transition type, model iteration (it0 in all cases, as only one Monte Carlo simulation was modeled and no iteration data were used in the calculation of the probability value), and timestep of the 30-year transition summary end date (ts2041 = average annual 30-year transition probability from modeled timesteps 2012 to 2041, ts2071 = average annual 30-year transition probability from modeled timesteps 2042 to 2071, and ts2101 = average annual 30-year transition probability from modeled timesteps 2072 to 2101). For example, the filename “scn418.tgap_URBANIZATION_ Grass_Shrub to Developed [Type].it0.ts2041.tif” represents the 30-year cumulative URBANIZATION transition group, for the Grass/Shrub to Developed transition type, for the 2011 to 2041 model period. More information about the LUCAS model can be found here: https://geography.wr.usgs.gov/LUCC/the_lucas_model.php. For more information on the specific parameter settings used in the model, contact Tamara S. Wilson (tswilson@usgs.gov).
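The filename convention described above can be split into its labeled components programmatically. The following is a minimal sketch in Python; the regular expression is an assumption inferred from the single example filename, not part of the published dataset documentation, and may need adjusting for other transition groups or types.

```python
import re

# Scenario IDs as documented in the dataset description.
SCENARIOS = {"scn418": "DUST", "scn419": "DREAM", "scn420": "HBAU",
             "scn421": "BBAU", "scn426": "EEM"}

# Pattern assumed from the example filename:
# "scn418.tgap_URBANIZATION_ Grass_Shrub to Developed [Type].it0.ts2041.tif"
PATTERN = re.compile(
    r"(?P<scn>scn\d+)\.tgap_(?P<group>[A-Z]+)_\s*(?P<type>.+?)\s*\[Type\]"
    r"\.(?P<iter>it\d+)\.ts(?P<ts>\d+)\.tif"
)

def parse_tgap_filename(name):
    """Split a TGAP raster filename into scenario, group, type, iteration, timestep."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"unrecognized filename: {name}")
    d = m.groupdict()
    d["scenario"] = SCENARIOS.get(d["scn"], "unknown")
    return d

info = parse_tgap_filename(
    "scn418.tgap_URBANIZATION_ Grass_Shrub to Developed [Type].it0.ts2041.tif")
print(info["scenario"], info["group"], info["type"], info["ts"])
# DUST URBANIZATION Grass_Shrub to Developed 2041
```

A parser like this makes it straightforward to batch-sort the rasters by scenario and summary period before loading them into a GIS.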
This dataset consists of raster GeoTIFF outputs from a series of modeling simulations for the California Central Valley. The full methods and results of this research are described in detail in “Integrated modeling of climate, land use, and water availability scenarios and their impacts on managed wetland habitat: A case study from California’s Central Valley” (2021). Land-use and land-cover change for California's Central Valley was modeled using the LUCAS model, and five different scenarios were simulated from 2011 to 2101 across the entirety of the valley. The five future scenario projections originated from the four scenarios developed as part of the Central Valley Landscape Conservation Project (http://climate.calcommons.org/cvlcp). The four original scenarios are Bad-Business-As-Usual (BBAU; high water availability, poor management), California Dreamin’ (DREAM; high water availability, good management), Central Valley Dustbowl (DUST; low water availability, poor management), and Everyone Equally Miserable (EEM; low water availability, good management). These scenarios represent alternative plausible futures, capturing a range of climate variability, land management activities, and habitat restoration goals. We parameterized our models based on close interpretation of these four scenario narratives to best reflect stakeholder interests, adding a baseline Historical Business-As-Usual scenario (HBAU) for comparison. The flood probability raster maps represent the average annual flooding probability of a cell over a specified time period for a specified land use and land cover group and type.
Each filename has the associated scenario ID (scn418 = DUST, scn419 = DREAM, scn420 = HBAU, scn421 = BBAU, and scn426 = EEM), the flooding probability per pixel per month over a 30-year period, model iteration (it0 in all cases, as only one Monte Carlo simulation was modeled and no iteration data were used in the calculation of the probability value), and timestep of the 30-year transition summary end date (ts2041 = average annual 30-year flooding probability from modeled timesteps 2012 to 2041, ts2071 = average annual 30-year flooding probability from modeled timesteps 2042 to 2071, and ts2101 = average annual 30-year flooding probability from modeled timesteps 2072 to 2101). The filename also includes one of the 12 monthly flooding designations (e.g. Apr = April; Nov = November). For example, the filename “scn418_DUST_tgapFLOODING_30yr_Apr_2041.tif” represents the 30-year average annual flooding probability for the month of April, for the modeled scenario 418 (DUST), over the 2011 to 2041 model period. More information about the LUCAS model can be found here: https://geography.wr.usgs.gov/LUCC/the_lucas_model.php. For more information on the specific parameter settings used in the model, contact Tamara S. Wilson (tswilson@usgs.gov).
This is a peer-reviewed article in IEEE Computer Graphics and Applications 27(2). Despite much published research on its deficiencies, the rainbow colour map is prevalent in the visualization community. The authors present survey results showing that the rainbow colour map continues to appear in more than half of the relevant papers in IEEE Visualization Conference proceedings. Its use is encouraged by its selection as the default colour map in most visualization toolkits that the authors inspected. The visualization community must do better. In this article, the authors reiterate the characteristics that make the rainbow colour map a poor choice, provide examples that clearly illustrate these deficiencies even on simple data sets, and recommend better colour maps for several categories of display.
Website: https://www.researchgate.net/publication/6419747_Rainbow_Color_Map_Still_Considered_Harmful
Production of the Prussian Urmesstischblätter began in 1822 for the entire territory of Prussia. The maps were hand-drawn, unique pieces at a scale of 1:25,000. They were never published; they were merely intended to form the basis for smaller-scale maps. Their content and design were specified by the "Instruction for the Topographical Works of the Royal Prussian General Staff" of 1821 and the "Explanatory Notes to the Sample Sheets for the Topographical Works of the Royal Prussian General Staff". The Urmesstischblätter mark the beginning of a topographic cartography that has evolved in various stages but is still based on these roots. The maps are available flat (plano), mainly as prints. Individual map sheets have had their color scheme reworked and are thus even closer to the original; these are available as a high-quality plot.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Access National Hydrography Products

The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system. NHD data was originally developed at 1:100,000-scale and exists at that scale for the whole country. This high-resolution NHD, generally developed at 1:24,000/1:12,000 scale, adds detail to the original 1:100,000-scale NHD. (Data for Alaska, Puerto Rico and the Virgin Islands was developed at high-resolution, not 1:100,000 scale.) Local resolution NHD is being developed where partners and data exist. The NHD contains reach codes for networked features, flow direction, names, and centerline representations for areal water bodies. Reaches are also defined on waterbodies and the approximate shorelines of the Great Lakes, the Atlantic and Pacific Oceans and the Gulf of Mexico. The NHD also incorporates the National Spatial Data Infrastructure framework criteria established by the Federal Geographic Data Committee.

The NHD is a national framework for assigning reach addresses to water-related entities, such as industrial discharges, drinking water supplies, fish habitat areas, and wild and scenic rivers. Reach addresses establish the locations of these entities relative to one another within the NHD surface water drainage network, much like addresses on streets. Once linked to the NHD by their reach addresses, the upstream/downstream relationships of these water-related entities--and any associated information about them--can be analyzed using software tools ranging from spreadsheets to geographic information systems (GIS). GIS can also be used to combine NHD-based network analysis with other data layers, such as soils, land use and population, to help understand and display their respective effects upon one another.
Furthermore, because the NHD provides a nationally consistent framework for addressing and analysis, water-related information linked to reach addresses by one organization (national, state, local) can be shared with other organizations and easily integrated into many different types of applications to the benefit of all.

Statements of attribute accuracy are based on accuracy statements made for U.S. Geological Survey Digital Line Graph (DLG) data, which is estimated to be 98.5 percent. One or more of the following methods were used to test attribute accuracy: manual comparison of the source with hardcopy plots; symbolized display of the DLG on an interactive computer graphic system; selected attributes that could not be visually verified on plots or on screen were interactively queried and verified on screen. In addition, software validated feature types and characteristics against a master set of types and characteristics, checked that combinations of types and characteristics were valid, and that types and characteristics were valid for the delineation of the feature. Feature types, characteristics, and other attributes conform to the Standards for National Hydrography Dataset (USGS, 1999) as of the date they were loaded into the database. All names were validated against a current extract from the Geographic Names Information System (GNIS). The entry and identifier for the names match those in the GNIS. The association of each name to reaches has been interactively checked; however, operator error could in some cases apply a name to the wrong reach.

Points, nodes, lines, and areas conform to topological rules. Lines intersect only at nodes, and all nodes anchor the ends of lines. Lines do not overshoot or undershoot other lines where they are supposed to meet. There are no duplicate lines. Lines bound areas and lines identify the areas to the left and right of the lines. Gaps and overlaps among areas do not exist.
All areas close.

The completeness of the data reflects the content of the sources, which most often are the published USGS topographic quadrangle and/or the USDA Forest Service Primary Base Series (PBS) map. The USGS topographic quadrangle is usually supplemented by Digital Orthophoto Quadrangles (DOQs). Features found on the ground may have been eliminated or generalized on the source map because of scale and legibility constraints. In general, streams longer than one mile (approximately 1.6 kilometers) were collected. Most streams that flow from a lake were collected regardless of their length. Only definite channels were collected, so not all swamp/marsh features have stream/rivers delineated through them. Lake/ponds having an area greater than 6 acres were collected. Note, however, that these general rules were applied unevenly among maps during compilation. Reach codes are defined on all features of type stream/river, canal/ditch, artificial path, coastline, and connector. Waterbody reach codes are defined on all lake/pond and most reservoir features. Names were applied from the GNIS database. Detailed capture conditions are provided for every feature type in the Standards for National Hydrography Dataset available online through https://prd-wret.s3-us-west-2.amazonaws.com/assets/palladium/production/atoms/files/NHD%201999%20Draft%20Standards%20-%20Capture%20conditions.PDF.

Statements of horizontal positional accuracy are based on accuracy statements made for U.S. Geological Survey topographic quadrangle maps. These maps were compiled to meet National Map Accuracy Standards. For horizontal accuracy, this standard is met if at least 90 percent of points tested are within 0.02 inch (at map scale) of the true position. Additional offsets to positions may have been introduced where feature density is high to improve the legibility of map symbols.
In addition, the digitizing of maps is estimated to contain a horizontal positional error of less than or equal to 0.003 inch standard error (at map scale) in the two component directions relative to the source maps. Visual comparison between the map graphic (including digital scans of the graphic) and plots or digital displays of points, lines, and areas is used as control to assess the positional accuracy of digital data. Digital map elements along the adjoining edges of data sets are aligned if they are within a 0.02 inch tolerance (at map scale). Features with like dimensionality (for example, features that all are delineated with lines), with or without like characteristics, that are within the tolerance are aligned by moving the features equally to a common point. Features outside the tolerance are not moved; instead, a feature of type connector is added to join the features.

Statements of vertical positional accuracy for elevation of water surfaces are based on accuracy statements made for U.S. Geological Survey topographic quadrangle maps. These maps were compiled to meet National Map Accuracy Standards. For vertical accuracy, this standard is met if at least 90 percent of well-defined points tested are within one-half contour interval of the correct value. Elevations of water surface printed on the published map meet this standard; the contour intervals of the maps vary. These elevations were transcribed into the digital data; the accuracy of this transcription was checked by visual comparison between the data and the map.
Production of the Prussian Urmesstischblätter began in 1822 for the entire territory of Prussia. The maps were hand-drawn, unique pieces at a scale of 1:25,000. They were not published; they were merely intended to form the basis for maps of smaller scales. Their content and design were specified by the "Instruction for the Topographical Works of the Royal Prussian General Staff" of 1821 and the "Explanations to the Sample Sheets for the Topographical Works of the Royal Prussian General Staff". The Urmesstischblätter mark the beginning of a topographical cartography that has evolved in various stages but is still based on these roots today. The maps are available flat (plano), mainly as prints. Individual map sheets have had their color scheme reworked and are thus even more similar to the original; these are available as a high-quality plot.
We refined a suite of hydrodynamic and individual-based models to understand how climate change may impact red king crab (Paralithodes camtschaticus) recruitment in Bristol Bay, Alaska. We coupled a biophysical individual-based model (IBM) and a Regional Ocean Modeling System (ROMS) circulation model to estimate connectivity between the location of red king crab larval release and benthic settlement location in the eastern Bering Sea, including Bristol Bay. We conducted ROMS hindcasts for two representative years, 1999 (cold) and 2005 (warm), and a forecast for a predicted warm year, 2037. Scientific output includes ROMS model files, IBM data files, and a red king crab habitat map. Data for each habitat sample used to qualify habitat type, along with definitions of each habitat type, are included in the “Habitat Data File” folder. Data for the habitat map were divided into physical (sediments, rocks, and shells) and biological (epibenthos) categories using various data sources, including published and unpublished digital data, paper data sheets, and cruise logbooks. Each location in the habitat database was assigned to a cell of the habitat grid, and each cell was then classified as good habitat or bad habitat. Cells containing both good and bad habitat were classified as good habitat, while cells in the grid containing no samples were classified as unknown habitat. The “Connectivity Zones”, “Habitat Grid”, and “Habitat Map” folders contain ArcMap shapefiles for the habitat map grid, habitat type designations within the grid, connectivity zones, and images of the habitat map. The grid for the habitat information had a cell size of 37 km x 37 km. The ROMS grid used for the circulation model was too finely divided (2 km x 2 km) for the scale of the habitat data; however, the habitat grid was based on a regularized version of the ROMS grid.
The connectivity grid of polygons (“zones”) of various shapes and sizes was assembled to quantify rates of connectivity and retention in areas of interest.
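The cell-classification rule described above (any "good" sample makes the cell good, only "bad" samples make it bad, no samples leave it unknown) can be sketched in a few lines of Python. Names and data structures here are illustrative, not the project's actual code:

```python
from collections import defaultdict

def classify_cells(samples):
    """Classify habitat grid cells from point samples.

    samples: iterable of (cell_id, quality) pairs, quality in {"good", "bad"}.
    Returns a function mapping a cell_id to "good", "bad", or "unknown".
    """
    by_cell = defaultdict(set)
    for cell, quality in samples:
        by_cell[cell].add(quality)

    def label(cell):
        if cell not in by_cell:
            return "unknown"          # no samples fell in this cell
        # cells with mixed good/bad samples count as good, per the rule above
        return "good" if "good" in by_cell[cell] else "bad"

    return label

label = classify_cells([(1, "good"), (1, "bad"), (2, "bad")])
print(label(1), label(2), label(3))  # good bad unknown
```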
This is an ArcGIS StoryMap Collection compiled from the Esri Maps for Public Policy site to show successful examples of policy maps. Browse each item to see examples of different types of policy maps, and learn how each map clearly shows areas to intervene. Items included:

Where are schools that fall within areas of poor broadband/internet?
Black or African American Population without Health Insurance
Including Transportation Costs in Location Affordability
Which areas with poor air quality also have higher populations of people of color?
Grocery Store Access
School District Characteristics and Socioeconomic Information
What is the most frequently occurring fire risk?
Up and Down COVID-19 Trends
Where are the highest and lowest incomes in the US?
Top 10 Most Job Accessible Cities in the U.S.
Los Angeles County Homelessness & Housing Map
How the Age of Housing Impacts Affordability
Student Loans or Mortgage? Young Adults Can't Afford Both.
You Can Get a Bachelor's at Some Community Colleges
As part of the Maine Beach Mapping Program (MBMAP), MGS surveys annual alongshore shoreline positions (see Beach_Mapping_Shorelines). Shoreline change statistics are calculated from these shoreline positions using guidance from the USGS Digital Shoreline Analysis System (DSAS). DSAS is referenced as Thieler, E.R., Himmelstoss, E.A., Zichichi, J.L., and Ergul, Ayhan, 2009, Digital Shoreline Analysis System (DSAS) version 4.0—An ArcGIS extension for calculating shoreline change: U.S. Geological Survey Open-File Report 2008-1278. For more information on DSAS and the methodology DSAS employs, please see: https://woodshole.er.usgs.gov/project-pages/DSAS/. The supporting DSAS User Guide, which describes how DSAS works and how statistics are calculated, is available here: http://www.maine.gov/dacf/mgs/hazards/beach_mapping/DSAS_manual.pdf. MGS wrote a database procedure following protocols outlined in DSAS that allows for the calculation of different shoreline change rates and supporting statistics. This was done so that MGS no longer needed to depend on USGS updates to the DSAS software to keep current with ArcGIS software updates. The script casts shoreline-perpendicular transects at a set spacing (in this case, 10-m intervals along the shoreline) from a preset baseline (located landward of the monitored shorelines), and calculates a range of shoreline change statistics, including:

Process Time: The time when the statistics were calculated.
TransectID: The ID of the transect (including the group or line section ID; for example, 1-1 is line 1, transect 1).
SCE: Shoreline Change Envelope. The distance, in meters, between the shorelines farthest from and closest to the baseline at each transect.
NSM: Net Shoreline Movement. The distance, in meters, between the oldest and youngest shorelines for each transect.
EPR: End Point Rate. A shoreline change rate, in meters/year, calculated by dividing the NSM by the time elapsed between the oldest and youngest shorelines at each transect.
LRR: Linear Regression Rate. A shoreline change rate, in meters/year, calculated by fitting a least-squares regression line to all of the shoreline points for a particular transect. The distance from the baseline, in meters, is plotted against the shoreline date, and the slope of the best-fit line is the LRR.
LR2: The R-squared statistic, or coefficient of determination. The proportion of variance in the data that is explained by the regression, in this case the LRR. It is a dimensionless index that ranges from 1.0 (a perfect fit, with the best-fit line explaining all variation) to 0.0 (a poor fit, with the best-fit line explaining little to no variation) and measures how successfully the best-fit line accounts for variation in the data.
LCI95: Standard error of the slope at the 95% confidence interval. Calculated by multiplying the standard error, or standard deviation, of the slope by the two-tailed test statistic at the user-specified confidence percentage. For example, if a reported LRR is 1.34 m/yr and a calculated LCI95 is 0.50, the band of confidence around the LRR is +/- 0.50. In other words, you can be 95% confident that the true rate of change is between 0.84 and 1.84 m/yr.
LRR_ft: The Linear Regression Rate, converted to feet/year.
LCI95_ft: The LCI95, converted to feet.
EPR_ft: The End Point Rate, converted to feet/year.
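The definitions above translate directly into simple arithmetic on (date, distance-from-baseline) pairs for a single transect. The sketch below is illustrative only, not the MGS database procedure itself; function and key names are assumptions:

```python
import numpy as np

def transect_stats(years, dists):
    """Shoreline-change statistics for one transect.

    years: decimal year of each surveyed shoreline.
    dists: distance (m) from the baseline to that shoreline at this transect.
    """
    years, dists = np.asarray(years, float), np.asarray(dists, float)
    order = np.argsort(years)                       # oldest to youngest
    years, dists = years[order], dists[order]
    sce = dists.max() - dists.min()                 # Shoreline Change Envelope (m)
    nsm = dists[-1] - dists[0]                      # Net Shoreline Movement (m)
    epr = nsm / (years[-1] - years[0])              # End Point Rate (m/yr)
    slope, intercept = np.polyfit(years, dists, 1)  # LRR is the fitted slope (m/yr)
    pred = slope * years + intercept
    ss_res = ((dists - pred) ** 2).sum()
    ss_tot = ((dists - dists.mean()) ** 2).sum()
    lr2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return {"SCE": sce, "NSM": nsm, "EPR": epr, "LRR": slope, "LR2": lr2}

# four hypothetical shorelines surveyed over 15 years
stats = transect_stats([2000, 2005, 2010, 2015], [10.0, 12.0, 15.0, 16.0])
print(stats)
```

For these example values the EPR is 6 m / 15 yr = 0.4 m/yr, while the LRR (0.42 m/yr) differs slightly because it weighs all four shorelines rather than only the endpoints.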
http://dcat-ap.de/def/licenses/geonutz/20130319
Sheet Fulda shows part of the Hessian Buntsandstein landscape, which is bordered in the west by foothills of the Rhenish Slate Mountains and in the north by the subsidence of the North Hessian Tertiary Depression. The young volcanic areas of the Vogelsberg and the Rhön are recorded in the southern part of the map. The Hessian Buntsandstein landscape is formed by mostly flat-lying sedimentary layers of red sandstone. The sandstones, including mudstones and conglomerates, were deposited extensively in a continental basin that covered large parts of Central Europe. The area is traversed by a large number of Saxonian grabens in which younger sediments (Muschelkalk, Keuper, Lias) have been preserved. Larger outcrops of Muschelkalk and Keuper can be found, for example, on the eastern edge of the map sheet near Hünfeld and in the Ringgau. The young volcanic areas of the Vogelsberg, Rhön and Knüllgebirge rise above the Buntsandstein basement. With an area of around 2,500 square kilometers, the Vogelsberg is one of the largest contiguous basalt areas in Central Europe. It consists of a multitude of superimposed sheets of basalts, tholeiites and trachytes that were emplaced during the Miocene. Basalt and basalt-like, alkali-rich rocks (phonolites, nephelinites) are also found in the Rhön and in the Knüll (south of Homberg). In the depressions and lowlands of the volcanic areas, Pleistocene covers of debris, solifluction soils and loess are widespread. In the area of the North Hessian Tertiary Depression, Eocene, Oligocene and Pliocene unconsolidated sediments lie on top of the Buntsandstein and are partly covered by Pleistocene deposits (fluvial and aeolian sands). Folded and tilted Paleozoic rocks (Devonian and Carboniferous) characterize the foothills of the Rhenish Slate Mountains on the map sheet, with sedimentary rocks (sandstone, greywacke, and slate) of the Lower Carboniferous dominating.
In the Kellerwald, between Frankenau and Bad Wildungen, sandstone and argillaceous slate of the Middle and Upper Devonian are exposed over a large area. Lower Carboniferous volcanic rocks (diabases) adjoin them along fault zones. Zechstein sediments surround the basement outcrops of the Rhenish Slate Mountains. In addition to the legend, which provides information about the age, genesis, and petrography of the units shown, two geological sections provide insight into the structure of the subsurface. Profile 1 crosses the Paleozoic of the Rhenish Slate Mountains, the Buntsandstein landscape of the Frankenberg Bay, and the Lower Hessian Tertiary Depression. Profile 2 runs from the Taunus in the west via the Wetterau, the Vogelsberg, and the Hessian Buntsandstein to the Rhön.
As part of a larger collaboration between the USGS, USAID, and partners in Jordan and Lebanon, we developed an open-source and interactive web application that allows users to classify, weight, and combine layers to produce suitability maps easily and transparently. The user can choose how to make suitability classifications within each spatial layer, choose how to apply relative weights to different spatial layers, and observe how those changes affect the resulting suitability map and the distribution of suitability scores across the landscape. The application has two pre-loaded spatial layers describing modeled runoff and surface slope and uses a simplified version of suitability mapping. Values within each input layer are classified as having either “Good” or “Poor” suitability, based on a user-supplied threshold value chosen using interactive sliders. Those layers are then weighted based on user-supplied weights and linearly aggregated to create a final suitability map. The application is not meant as a substitute for more formal suitability mapping techniques. Rather, the web application is presented as a tool aimed at end-users and stakeholders as a way to increase transparency and process-understanding throughout the development of suitability mapping. We use example data from the Jordan Valley, a subset of our full project region, to demonstrate the capabilities of the application. The web application was written in R (v 4.0.3) with the shiny package (v 1.0.6). This resource will be updated with a link to the full project when the project report is published.
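The classify-weight-aggregate logic described above can be sketched in a few lines. The layer values, thresholds, and weights below are invented for illustration (the actual application is an interactive R/shiny tool):

```python
import numpy as np

# Two toy input layers; values are invented for illustration.
runoff = np.array([[5.0, 12.0],
                   [20.0, 3.0]])    # modeled runoff
slope = np.array([[2.0, 8.0],
                  [15.0, 1.0]])     # surface slope

# Step 1: classify each layer as Good (1) or Poor (0) against a
# user-supplied threshold.
runoff_good = (runoff >= 10.0).astype(float)   # more runoff is Good
slope_good = (slope <= 5.0).astype(float)      # gentler slope is Good

# Step 2: apply user-supplied weights and linearly aggregate
# into a final suitability map.
w_runoff, w_slope = 0.6, 0.4
suitability = w_runoff * runoff_good + w_slope * slope_good
```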
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
An essential aspect for adequate predictions of chemical properties by machine learning models is the database used for training them. However, studies that analyze how the content and structure of the training databases impact prediction quality are scarce. In this work, we analyze and quantify the relationships learned by a machine learning model (a neural network) trained on five different reference databases (QM9, PC9, ANI-1E, ANI-1, and ANI-1x) to predict tautomerization energies of molecules in Tautobase. For this, characteristics such as the number of heavy atoms in a molecule, the number of atoms of a given element, the bond composition, and the initial geometry are considered for their effect on prediction quality. The results indicate that training on a chemically diverse database is crucial for obtaining good results, and also that conformational sampling can partly compensate for limited coverage of chemical diversity. The overall best-performing reference database (ANI-1x) performs on average 1 kcal/mol better than PC9, which, however, contains about 2 orders of magnitude fewer reference structures. On the other hand, PC9 is chemically more diverse by a factor of ∼5, as quantified by the number of atom-in-molecule-based fragments (amons) it contains, compared with the ANI family of databases. A quantitative measure for deficiencies is the Kullback–Leibler divergence between reference and target distributions. It is explicitly demonstrated that when certain types of bonds need to be covered in the target database (Tautobase) but are undersampled in the reference databases, the resulting predictions are poor. Examples include the poor performance of all analyzed databases in predicting C(sp2)–C(sp2) double bonds close to heteroatoms and azoles containing N–N and N–O bonds.
Analysis of the results with a Tree MAP algorithm provides a deeper understanding of specific deficiencies of the reference datasets in predicting tautomerization energies due to inadequate coverage of chemical space. This information can be used either to improve existing databases or to generate new databases of sufficient diversity for a range of machine learning (ML) applications in chemistry.
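The Kullback–Leibler divergence used above as a deficiency measure can be sketched as follows; the bond-type categories and counts are invented for illustration:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions, in nats."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    q = np.clip(q, eps, None)        # guard bins unsampled in q
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Invented bond-type counts: a target distribution, a reference that
# covers it well, and a reference that undersamples two bond types.
target   = [30, 25, 20, 15, 10]
good_ref = [28, 26, 22, 14, 10]
poor_ref = [45, 40, 13, 1, 1]
```

A reference whose bond-type distribution matches the target yields a small divergence; undersampling of bond types needed by the target inflates it.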
GIS dataset includes surveyed shoreline positions for most of the larger beach systems along the southern to mid-coast Maine coastline in York, Cumberland, and Sagadahoc counties. Data were collected using a Leica GS-15 network Real Time Kinematic Global Positioning System (RTK-GPS) and, in areas with poor cellular coverage, an Ashtech Z-Xtreme RTK-GPS. Both systems typically have horizontal and vertical accuracies of less than 5 cm. In general, surveys are repeated in approximately the same month of each consecutive survey year where possible; however, this is not always achievable. As a result, the number of available shoreline positions may vary by beach.
The line feature class includes the following attributes:
BEACH_NAME: The name of the beach where a shoreline was surveyed.
SURVEY_DATE: The date (year, month, day; for example, 20160901 would be September 1, 2016) upon which a shoreline was surveyed.
SURVEY_YEAR: The year (e.g., 2016) within which a shoreline was surveyed.
SHAPE_LENGTH: The length, in meters, of the surveyed shoreline.
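The SURVEY_DATE encoding described above (YYYYMMDD) can be parsed directly, for example in Python:

```python
from datetime import datetime

# Parse a SURVEY_DATE value (YYYYMMDD) into a date object.
survey_date = datetime.strptime("20160901", "%Y%m%d").date()
survey_year = survey_date.year   # matches the SURVEY_YEAR attribute
```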
Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0)https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
In High Arctic northern Greenland, future responses to climatic changes are poorly understood on a landscape scale. Here, we present a study of the geomorphology and cryostratigraphy of the Zackenberg Valley in NE Greenland (74°N), comprising a geomorphological map and a simplified geocryological map combined with analyses of 13 permafrost cores and two exposures. Cores from a solifluction sheet, alluvial fans, and an emerged delta were studied with regard to cryostructures, ice and total carbon contents, grain size distribution, and pore water electrical conductivity, and the samples were AMS 14C dated. The near-surface permafrost on slopes and alluvial fans is ice-rich, as opposed to the ice-poor epigenetic permafrost in the emerged delta. Ground ice and carbon distribution are closely linked to sediment transport processes, which largely depend on lithology and topography. Holocene alluvial fans, covering 12% of the lowermost hillslopes, represent paleoenvironmental archives. During the contrasting climates of the Holocene (the warmer early Holocene Optimum, the colder late Holocene, and the subsequent warming), the alluvial fans continued to aggrade, at an average rate of 0.45 mm a-1. This aggradation is driven by three factors (sedimentation, ground ice aggradation, and vegetation growth), as reflected by AMS 14C dating and continuously alternating cryostructures. Sedimentation rates at the alluvial fans were highly variable in space and time. This is reflected by lenticular and microlenticular cryostructures, indicating syngenetic permafrost aggradation during sedimentation, alternating with suspended and organic-matrix cryostructures, indicating quasi-syngenetic permafrost aggradation in response to vegetation growth during periods with reduced or no sedimentation. Over time, this causes organic matter to become buried, indicating that alluvial fans represent effective carbon sinks that have previously been overlooked.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
Childhood stunting is a global public health concern, associated with both short- and long-term consequences, including high child morbidity and mortality, poor development and learning capacity, and increased vulnerability to infectious and non-infectious disease. The prevalence of stunting varies significantly across Ethiopian regions. Therefore, this study aimed to assess the geographical variation in predictors of stunting among children under the age of five in Ethiopia using the 2019 Ethiopian Demographic and Health Survey.
Method
The current analysis was based on data from the 2019 mini Ethiopian Demographic and Health Survey (EDHS). A weighted sample of 5,490 children under the age of five was included. Descriptive and inferential analyses were done using STATA 17; for the spatial analysis, ArcGIS 10.7 was used. Spatial regression was used to identify the variables associated with stunting hotspots, and the adjusted R2 and corrected Akaike Information Criterion (AICc) were used to compare the models. As the prevalence of stunting was over 10%, a multilevel robust Poisson regression was conducted. Variables with a p-value < 0.2 in the bivariable analysis were considered for the multivariable analysis. In the multivariable multilevel robust Poisson regression analysis, the adjusted prevalence ratio (APR) with its 95% confidence interval is presented to show the statistical significance and strength of the associations.
Result
The prevalence of stunting was 33.58% (95%CI: 32.34%, 34.84%), with a clustered geographic pattern (Moran’s I = 0.40, p < 0.001). A preceding birth interval greater than 40 months was associated with lower stunting prevalence (APR = 0.74, 95%CI: 0.55, 0.99). Children whose mothers had secondary (APR = 0.74, 95%CI: 0.60, 0.91) or higher (APR = 0.61, 95%CI: 0.44, 0.84) educational status, household wealth status (APR = 0.87, 95%CI: 0.76, 0.99), and child age of 6–23 months (APR = 1.87, 95%CI: 1.53, 2.28) were also significantly associated with stunting.
Conclusion
In Ethiopia, stunting among under-five children exhibits a spatially clustered pattern. Maternal education, wealth index, birth interval, and child age were determining factors of the spatial variation in stunting. A detailed map of stunting hotspots and their determinants among children under the age of five can therefore aid program planners and decision-makers in designing targeted public health measures.
Hazardous substances are transported by road, rail, water, and pipeline. Things can go wrong during transport, causing the dangerous cargo to ignite or explode or, for example, allowing toxic gases to escape through a leak or rupture. In general, the risk map lists those roads where the legal standard is known to have been exceeded. This standard is the local risk of 10⁻⁶.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CLAMATO 2017 Data Release 1
Public release: 2017 October 9
Uploaded to Zenodo on 2018 June 19th after acceptance for publication in ApJS
By Khee-Gan Lee (kglee@lbl.gov) and collaborators
Supporting paper: https://arxiv.org/abs/1710.02894
These are data products associated with the first data release of the COSMOS Lyman-Alpha Mapping And Tomography Observations (CLAMATO) survey with the Keck-I telescope, which mapped 3D Lyman-alpha forest absorption at 2.05 < z < 2.55 in the COSMOS field.
The following is the summary of the main products:
- Source catalog (CL2017_VALUEADDED_RELEASE_20171009.TXT)
- Reduced spectra, in /spec_v0/ (blue) and /spec_v0_red (red) sub-directories
- Continuum-fitted 2.05 < z < 2.55 Lyman-alpha forest pixel data (PIXEL_DATA_v4.BIN)
Versions:
v0 (not public): Initial rough extraction for 2.15
Redshift Catalog and Spectra
We provide our redshift catalog and reduced spectra obtained with Keck-I/LRIS
The source catalog is provided in the ASCII file CL2017_VALUEADDED_RELEASE_20171009.TXT, with the following columns:
- BLUE_SPEC: Blue spectrum filename (in /spec_v0/ sub-directory)
- TOMO_ID: CLAMATO ID number
- GMAG: g-magnitude (AB) per Capak et al 2007 photometric catalog
- CONF: Redshift confidence grade: see https://arxiv.org/abs/1710.02894
- ZSPEC: Spectroscopic redshift as determined from CLAMATO spectrum
- QSO: QSO flag (1 if QSO, 0 if non-QSO)
- RA: R.A. in degrees (J2000)
- DEC: Dec in degrees (J2000)
- S/N_1: Estimated Lya-forest S/N at 2.05
The tarballs spec_v0.tar.gz and spec_v0_red.tar.gz include all the reduced spectra from LRIS-Blue and LRIS-Red, respectively.
The individual LRIS spectra are provided in FITS format, with the following HDU Extensions:
- HDU0: Object spectral flux density, in units of 10^{-17} ergs/s/cm^2/angstrom
- HDU1: Noise standard deviation
- HDU2: Pixel Wavelengths in angstroms
Pixel Data
The binary file PIXEL_DATA_v4.BIN stores the concatenated Lyman-alpha forest pixels at 2.05 < z < 2.55.
The first value in the binary is a 32-bit integer specifying the number of pixels (64332), followed by 5 double-precision floating point (64-bit) vectors storing the x, y, z, sigma_f, and delta_f of the pixels.
An example Python script to read the pixel data is as follows:

import numpy as np

# First 4 bytes give the pixel count; the remainder holds five
# concatenated float64 vectors (x, y, z, sigma_f, delta_f).
with open('CLAMATO2017_public/pixel_data_v4.bin', 'rb') as f:
    npix = int(np.fromfile(f, dtype=np.int32, count=1)[0])
    pixel_data = np.fromfile(f, dtype=np.float64).reshape((5, npix)).T
LIST_TOMO_INPUT_2017.TXT is a summary file corresponding to PIXEL_DATA_v4.BIN, listing the [x, y, z] positions of the sightlines that contributed to the file as well as, in the final two columns, the index range that can be used to extract the relevant pixels from the concatenated pixel list.
Tomographic Map
The Wiener-reconstructed map covers the 2.05 < z < 2.55 Lyman-alpha forest absorption.
The reconstructed map is MAP_2017_V4.BIN, a 60x48x876 = 2522880 pixel double-precision binary file. The dimension that changes fastest is the z-dimension (876 pixels), followed by the y-dimension (48 pixels) and the x-dimension (60 pixels).
Each map pixel represents a 0.5Mpc/h comoving voxel of the Ly-alpha forest absorption. See the Appendix of https://arxiv.org/abs/1710.02894 for the conversion factors to assume to switch between pixel/voxel and [RA, Dec, redshift].
The file MAP_2017_V4_SM2.0.BIN is the same map, but smoothed with a R=2Mpc/h Gaussian kernel.
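Given the layout just described (z varies fastest, then y, then x), either map file can be read with a C-order reshape in NumPy. A minimal sketch, with the helper name and default path handling of my choosing:

```python
import numpy as np

def read_clamato_map(path, nx=60, ny=48, nz=876):
    """Read a CLAMATO binary map; z varies fastest, so a C-order
    reshape to (nx, ny, nz) recovers the (x, y, z) grid."""
    flat = np.fromfile(path, dtype=np.float64)
    assert flat.size == nx * ny * nz
    return flat.reshape((nx, ny, nz))
```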
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The project leads for the collection of most of this data were Heiko Wittmer, Christopher Wilmers, Bogdan Cristescu, Pete Figura, David Casady, and Julie Garcia. Mule deer (82 adult females) from the Siskiyou herd were captured and equipped with GPS collars (Survey Globalstar, Vectronic Aerospace, Germany; Vertex Plus Iridium, Vectronic Aerospace, Germany), transmitting data from 2015-2020. The Siskiyou herd migrates from winter ranges primarily north and east of Mount Shasta (i.e., Shasta Valley, Red Rock Valley, Sheep Camp Butte, Sardine Flat, Long Prairie, and Little Hot Spring Valley) to sprawling summer ranges scattered between Mount Shasta in the west and the Burnt Lava Flow Geological Area to the east. A small percentage of the herd were residents. GPS locations were fixed at 1-2 hour intervals in the dataset. To improve the quality of the data set, as per Bjørneraas et al. (2010), the GPS data were filtered prior to analysis to remove locations that were: i) farther from either the previous or the subsequent point than an individual deer is able to travel in the elapsed time, ii) forming spikes in the movement trajectory based on outgoing and incoming speeds and turning angles sharper than a predefined threshold, or iii) fixed in 2D space and visually assessed as a bad fix by the analyst. The methodology used for this migration analysis allowed for the mapping of winter ranges and the identification and prioritization of migration corridors. Brownian Bridge Movement Models (BBMMs; Sawyer et al. 2009) were constructed with GPS collar data from 67 migrating deer, including 167 migration sequences, using location, date, time, and average location error as inputs in Migration Mapper. The average migration time and average migration distance for deer were 12.09 days and 41.33 km, respectively. Corridors and stopovers were prioritized based on the number of animals moving through a particular area.
BBMMs were produced at a spatial resolution of 50 m using a sequential fix interval of less than 27 hours. Because BBMMs often produced variance rates greater than 8000, separate models using BBMMs and fixed motion variances of 1000 were produced for each migration sequence and visually compared across the entire dataset, with the best models combined prior to population-level analyses (62 percent of sequences selected with BBMM). Winter range analyses were based on data from 66 individual deer and 111 wintering sequences using a fixed motion variance of 1000. Winter range designations for this herd may expand with a larger sample, filling in some of the gaps between winter range polygons in the map. Large water bodies were clipped from the final outputs. Corridors are visualized based on deer use per cell, with greater than or equal to 1 deer, greater than or equal to 4 deer (10 percent of the sample), and greater than or equal to 7 deer (20 percent of the sample) representing migration corridors, medium-use corridors, and high-use corridors, respectively. Stopovers were calculated as the top 10 percent of the population-level utilization distribution during migrations and can be interpreted as high-use areas. Stopover polygons with areas less than 20,000 m2 were removed; remaining small stopovers may be interpreted as short-term resting sites, likely based on a small concentration of points from an individual animal. Winter range is visualized as the 50th percentile contour of the winter range utilization distribution.
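The corridor-class thresholds above amount to a simple reclassification of a per-cell deer-use raster; a minimal sketch with invented counts:

```python
import numpy as np

# Hypothetical per-cell counts of deer using each cell.
deer_count = np.array([[0, 2, 5],
                       [8, 1, 7],
                       [3, 0, 4]])

# Reclassify using the documented thresholds: >=1 deer = migration
# corridor, >=4 deer (10% of sample) = medium use, >=7 deer (20% of
# sample) = high use.
corridor_class = np.zeros_like(deer_count)
corridor_class[deer_count >= 1] = 1
corridor_class[deer_count >= 4] = 2
corridor_class[deer_count >= 7] = 3
```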