The package_converter extension for CKAN enhances its functionality by enabling the export of CKAN package metadata into various formats. The extension aims to simplify the process of converting metadata to formats like DataCite and OAI_DC and allows defining and reusing custom converters. While the provided documentation is somewhat sparse, it highlights the extension's compatibility with other useful CKAN extensions.
Key Features: Package Metadata Export: Enables the export of CKAN package metadata. Multiple Format Support: Supports exporting to formats like DataCite and OAI_DC. Custom Converter Definitions: Allows users to define custom converters to tailor the export process to their specific requirements. Converter Reusability: Supports the reuse of existing converters, promoting efficiency and consistency. Compatibility with other Extensions: Claims compatibility with ckanext-scheming, ckanext-repeating, ckanext-composite, and ckanext-spatial.
Use Cases (Based on likely functionality): Metadata Harmonization: Organizations needing to share their datasets with other platforms or repositories using different metadata standards can use this extension to convert their existing CKAN metadata into the required formats. DataCite Integration: Institutions using CKAN to manage research datasets can leverage the DataCite export feature to generate metadata records compliant with the DataCite standard, simplifying the process of assigning DOIs. OAI-PMH Support: Libraries and archives operating CKAN instances can utilize the OAI_DC format to facilitate metadata harvesting through OAI-PMH, making their datasets discoverable by a wider audience.
Technical Integration: The extension likely integrates with CKAN as a plugin, adding functionality for managing and triggering package metadata conversions. Configuration details are sparse, but custom converters can reportedly be added either directly in code or via the configuration file. The extension requires adding "package_converter" to the ckan.plugins setting in the CKAN configuration file (typically production.ini).
Benefits & Impact: By enabling package metadata conversion, the package_converter extension reduces the manual effort required to share CKAN datasets with external parties using different metadata standards. It enhances the interoperability of CKAN installations and allows organizations to easily participate in broader data sharing initiatives. The ability to define custom converters provides the flexibility to support a wide range of metadata formats.
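As a minimal configuration sketch, enabling the plugin would look roughly like the snippet below. The plugin name comes from the extension's own instructions; the custom-converter option is a purely hypothetical placeholder, since the readme does not document the exact key.

```ini
# production.ini -- enable the plugin alongside whatever plugins are already configured
ckan.plugins = ... existing plugins ... package_converter

# Hypothetical placeholder only: the readme states custom converters can be added
# via the config file, but does not name the option, so this key is illustrative.
# package_converter.converters = my_extension.converters:MyDataCiteConverter
```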
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In digital musicology, the widely adopted de facto standard for creating digital critical editions is the encoding format defined by the Music Encoding Initiative. However, the usual workflow to arrive at the desired encoding involves the use of well-established WYSIWYG music notation software.
Scores are mostly prepared in the conventional way and then exported to MusicXML format to be converted to MEI afterwards. While all programs handle the export of basic features like notes, measures, clefs and key signatures quite well, there are decided differences when it comes to more complex notational features like ornaments.
We have thoroughly investigated which export features the most popular programs offer and provide here our test files.
The iso19115 extension for CKAN facilitates the export of datasets into the ISO 19115 XML format, an international standard for geospatial metadata. This extension provides a mechanism for converting CKAN datasets into a structured XML representation compliant with the ISO 19115 standard, enabling interoperability and exchange of geospatial metadata across different systems and organizations. Compatible with CKAN versions 2.9 and 2.10, it helps in aligning data sharing practices with established geospatial standards.
Key Features: ISO 19115 XML Export: Specifically designed to export CKAN datasets as ISO 19115 compliant XML documents. This conversion is crucial for organizations needing to adhere to geospatial metadata standards. Customizable Mapping via Interface Implementation: Offers the IIso19115 interface that allows developers to implement custom mapping logic from CKAN dataset formats to the ISO 19115 standard. JsonML Export: Supports exporting dataset metadata into ISO 19115 JsonML (JSON Markup Language), providing an alternative, flexible representation for data exchange. Validation Check: Enables the validation of datasets to assess whether they can be rendered as a valid ISO 19115 document. This feature enhances data quality and ensures compliance with established standards. Converter Base Class: Offers a Converter base class that extension developers can extend and tailor, simplifying customization of the ISO 19115 mapping. Configurable Cache Directory: Allows users to configure a storage path for pre-compiled schema definitions, useful for managing performance and ensuring schema definitions are readily available.
Technical Integration: The iso19115 extension integrates with CKAN through plugins and a defined interface, allowing seamless metadata management and export capabilities. The extension adds new API actions (iso19115_package_show and iso19115_package_check) accessible via CKAN's API, enhancing CKAN's capabilities in geospatial data management. The provided IIso19115 interface guides developers in implementing custom converters, ensuring proper data transformation according to specific needs.
Benefits & Impact: By implementing the iso19115 extension, organizations can efficiently manage geospatial metadata within their CKAN instances, adhering to the ISO 19115 standard. This ensures better data interoperability, enabling broader data sharing and enhanced compliance. Through custom mapping implementations, it offers flexibility in adapting to particular data transformations, ensuring the metadata accurately reflects the underlying datasets.
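As a rough illustration, the registered actions could be called through CKAN's standard action API along the lines shown below. The instance URL and dataset name are placeholders, and the "id" parameter is assumed to identify the dataset as it does for core package_show.

```python
# A minimal sketch (not taken from the extension's docs): calling the actions the
# extension registers through CKAN's standard /api/3/action endpoint.
import requests

CKAN_URL = "https://demo.ckan.org"   # placeholder CKAN instance
DATASET = "example-dataset"          # placeholder dataset name or id

# Render the dataset metadata via the extension's export action
resp = requests.get(
    f"{CKAN_URL}/api/3/action/iso19115_package_show",
    params={"id": DATASET},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"])

# Check whether the dataset can be rendered as a valid ISO 19115 document
check = requests.get(
    f"{CKAN_URL}/api/3/action/iso19115_package_check",
    params={"id": DATASET},
    timeout=30,
)
print(check.json())
```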
Notice: this is not the latest Heat Island Severity image service.
This layer contains the relative heat severity for every pixel for every city in the United States, including Alaska, Hawaii, and Puerto Rico. Heat Severity is a reclassified version of the Heat Anomalies raster, which is also published on this site. This data is generated from 30-meter Landsat 8 imagery band 10 (ground-level thermal sensor) from the summer of 2023.
To explore previous versions of the data, visit the links below:
Heat Severity - USA 2022
Heat Severity - USA 2021
Heat Severity - USA 2020
Heat Severity - USA 2019
Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable—children, the elderly, and those with preexisting conditions.
The purpose of this layer is to show where certain areas of cities are hotter than the average temperature for that same city as a whole. Severity is measured on a scale of 1 to 5, with 1 being a relatively mild heat area (slightly above the mean for the city), and 5 being a severe heat area (significantly above the mean for the city). The absolute heat above mean values are classified into these 5 classes using the Jenks Natural Breaks classification method, which seeks to reduce the variance within classes and maximize the variance between classes. Knowing where areas of high heat are located can help a city government plan for mitigation strategies.
This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day, for example, from building shadows moving. The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is strong correlation between surface temperature and air temperature, they are not the same. We believe that this is useful at the national level, and for cities that don't have the ability to conduct their own hyper-local temperature survey. Where local data is available, it may be more accurate than this dataset.
Dataset Summary
This dataset was developed using proprietary Python code developed at Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.
What can you do with this layer?
This layer has query, identify, and export image services available. Since it is served as an image service, it is not necessary to download the data; the service itself is data that can be used directly in any Esri geoprocessing tool that accepts raster data as input. In order to click on the image service and see the raw pixel values in a map viewer, you must be signed in to ArcGIS Online, then Enable Pop-Ups and Configure Pop-Ups.
Using the Urban Heat Island (UHI) Image Services
The data is made available as an image service.
There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro or ArcGIS Desktop, or in QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in Pro.
In ArcGIS Pro, in a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas. Search on the acronyms "tpl" and "uhi". The results returned will be the UHI image services. Right-click on a result and select "Add to current map" from the context menu. When the image service is added to the map, right-click on it in the map view, and select Properties. In the Properties window, select Processing Templates. On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, and select "None", then "OK". Now you will have the actual pixel values displayed in the map, and available to any geoprocessing tool that takes a raster as input. Below is a screenshot of ArcGIS Pro with a UHI image service loaded, color ramp removed, and symbology changed back to a yellow-to-red ramp (a classified renderer can also be used).
A typical operation at this point is to clip out your area of interest. To do this, add your polygon shapefile or feature class to the map view, and use the Clip Raster tool to export your area of interest as a GeoTIFF raster (file extension ".tif"). In the Environments tab for the Clip Raster tool, click the dropdown for "Extent", select "Same as Layer:", and select the name of your polygon. If you then need to convert the output raster to a polygon shapefile or feature class, run the Raster to Polygon tool, and select "Value" as the field.
Other Sources of Heat Island Information
Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
EPA's Heat Island Resource Center
Dr. Ladd Keith, University of Arizona
Dr. Ben McMahan, University of Arizona
Dr. Jeremy Hoffman, Science Museum of Virginia
Dr. Hunter Jones, NOAA
Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency
Disclaimer/Feedback
With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so Trust for Public Land checked a statistically significant sample size for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (see figure below). Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
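For users who prefer scripting the same workflow, a hedged arcpy sketch of the clip and raster-to-polygon steps might look like the following. Paths and file names are placeholders, and the locally saved copy of the UHI raster (with the processing template removed) is an assumption rather than part of the published service.

```python
# A rough arcpy sketch of the clip-and-convert workflow described above.
import arcpy

arcpy.env.overwriteOutput = True

uhi_raster = r"C:\data\uhi_severity.tif"      # UHI pixel values, processing template removed
aoi = r"C:\data\my_area_of_interest.shp"      # polygon area of interest

# Clip the severity raster to the area of interest (extent taken from the polygon)
arcpy.management.Clip(
    in_raster=uhi_raster,
    rectangle="#",
    out_raster=r"C:\data\uhi_clip.tif",
    in_template_dataset=aoi,
    clipping_geometry="ClippingGeometry",
)

# Optionally convert the clipped raster to polygons on the "Value" field
arcpy.conversion.RasterToPolygon(
    in_raster=r"C:\data\uhi_clip.tif",
    out_polygon_features=r"C:\data\uhi_polygons.shp",
    simplify="NO_SIMPLIFY",
    raster_field="Value",
)
```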
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about book subjects. It has 2 rows and is filtered where the books is Mk1 passenger coaches : a complete listing of numbers, conversions, renumberings, preservation, departmental, exports and disposals. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.
Success.ai SaaS Platform: Revolutionizing B2B Lead Generation & Email Outreach
Pricing Success.ai offers unparalleled value with a transparent pricing model. Start for free and explore the platform’s robust features, including unlimited access to 700M+ verified B2B leads. Affordable upgrade plans ensure you get the best value for your business growth.
Login Easily log in to your account and access your personalized dashboard. Seamlessly manage your leads, campaigns, and outreach strategies all in one place.
Get Started for FREE Success.ai allows you to begin your journey at no cost. Test the platform’s powerful capabilities with no credit card required. Experience features like AI-driven lead search, email crafting, and outreach optimization before committing to a plan.
Book a Demo Curious about how Success.ai can transform your business? Book a demo to see the platform in action. Learn how to streamline your lead generation process, maximize ROI, and scale your outreach efforts with ease.
Why Success.ai? 700M+ Professionals Success.ai provides access to the largest verified database of over 700 million global professional contacts. Every lead is rigorously verified to ensure accuracy, enabling you to target decision-makers with precision.
Find and Win Your Ideal Customers The platform’s advanced search features let you locate prospects by name, company, or email. Whether you're targeting CEOs, sales managers, or industry-specific professionals, Success.ai helps you find and connect with your ideal audience.
AI-Powered Capabilities Success.ai leverages AI to enhance every aspect of your sales process. From crafting hyper-personalized cold emails to filtering leads by industry, revenue, or company size, the platform ensures your outreach efforts are efficient and effective.
Solutions for Every Business Need Sales Leaders Accelerate your sales cycle with tools designed to seamlessly book new deals and drive revenue growth.
Startups Find, contact, and win clients globally with the power of Success.ai. Tailored tools help startups scale quickly with minimal resources.
Marketing Agencies Grow your client base and enhance your campaigns with targeted lead generation and cold email strategies.
Lead Generation Agencies Unlock the potential of your campaigns with access to the world’s largest verified B2B database. Drive conversions and client satisfaction with precision-targeted outreach.
Unmatched Features for Growth
Unlimited B2B Leads: Access 700M+ verified contacts to fuel your pipeline.
AI-Powered Writer: Craft personalized emails effortlessly, improving engagement and response rates.
Unlimited Email Warmup: Ensure your emails land in inboxes, avoiding spam folders.
Unified CRM: Manage leads, campaigns, and responses in one streamlined platform.
24/7 Live Support: Dedicated support ensures your success at every step.
What Users Say
Success.ai has received glowing reviews from over 10,000 satisfied companies. From startups to established enterprises, users praise the platform’s ease of use, robust features, and significant ROI.
For example, Muhammad Sulaiman says, “This tool has made filling our sales pipeline easier than ever. The AI writer and extensive database have been game-changers.”
Get Started Today Join the ranks of businesses achieving hypergrowth with Success.ai. With unlimited access to the largest verified B2B database, advanced AI tools, and unmatched affordability, Success.ai is the ultimate platform for sales success.
Start for FREE or Book a Demo today to see how Success.ai can transform your lead generation efforts!
Register on the platform: app.success.ai
See our prices: https://www.success.ai/pricing
Book a demo: https://calendly.com/d/cmh7-chj-pcz/success-ai-demo-session?
This data was collected by the U.S. Bureau of Land Management (BLM) in New Mexico at both the New Mexico State Office and at the various field offices. This dataset is meant to depict the surface owner or manager of the land parcels. In the vast majority of land parcels, they will be one and the same. However, there are instances where the owner and manager of the land surface are not the same. When this occurs, the manager of the land is usually indicated. BLM's Master Title Plats are the official land records of the federal government and serve as the primary data source for depiction of all federal lands. Information from the State of New Mexico is the primary source for the depiction of all state lands. Auxiliary sources are also referenced for the depiction of all lands. Collection of this dataset began in the 1980s using the BLM's ADS software to digitize information at the 1:24,000 scale. In the mid-to-late 1990s the data was converted from ADS to ArcInfo software and merged into tiles of one degree of longitude by one half degree of latitude. These tiles were regularly updated. The tiles were merged into a statewide coverage. The source geodatabase for this shapefile was created by loading the merged ArcInfo coverage into a personal geodatabase. The geodatabase data were snapped to a more accurate GCDB-derived land network, where available. In areas where GCDB was not available the data were snapped to digitized PLSS. In 2006, the personal geodatabase was loaded into an enterprise geodatabase (SDE). This shapefile has been created by exporting the feature class from SDE.
Feature class that compares the elevations of seawall crests (extracted from available LiDAR datasets from 2010 and 2013) with published FEMA Base Flood Elevations (BFEs) from preliminary FEMA DFIRMs (panels issued in 2018 and 2019) in coastal York and Cumberland counties (up through Willard Beach in South Portland). The dataset included the development of an inventory of coastal armor structures from a range of different datasets. Steps to create the dataset included:
1. Shoreline structures from the most recent NOAA EVI LANDWARD_SHORETYPE feature class were extracted using the boundaries of York and Cumberland counties. This included 1B: Exposed, Solid Man-Made Structures; 8B: Sheltered, Solid Man-Made Structures; 6B: Riprap; and 8C: Sheltered Riprap. This resulted in the creation of Cumberland_ESIL_Structures and York_ESIL_Structures. Note that ESIL uses the MHW line as the feature base.
2. Shoreline structures from the work by Rice (2015) were extracted using the York and Cumberland county boundaries. This resulted in the creation of Cumberland_Rice_Structures and York_Rice_Structures.
3. Additional feature classes (Slovinsky_York_Structures and Slovinsky_Cumberland_Structures) were created for York and Cumberland county structures that were missed. Google Earth imagery was inspected while additional structures were being added to the GIS. 2012 York and Cumberland County imagery was used as the basemap, and structures were classified as bulkheads, rip rap, or dunes (if known). Whether or not the structure was in contact with the 2015 HAT was also noted.
4. MEDEP was consulted to determine which permit data (both PBR and Individual Permit, IP, data) could be used to help determine where shoreline stabilization projects may have been conducted adjacent to or on coastal bluffs. A file was received for IP data and brought into GIS (DEP_Licensing_Points). This is a point file for shoreline stabilization permits under NRPA.
5. Clip GISVIEW.MEDEP.Permit_By_Rule_Locations to the boundaries of the study area and output DEP_PBR_Points.
6. Join GISVIEW.sde > GISVIEW.MEDEP.PBR_ACTIVITY to DEP_PBR_Points using the PBR_ID field, then export this file as DEP_PBR_Points2. Using the new ACTIVITY_DESC field, select only those activities that relate to shoreline stabilization projects:
PBR_ACTIVITY | ACTIVITY_DESC
02 | Act. Adjacent to a Protected Natural Resource
04 | Maint Repair & Replacement of Structure
08 | Shoreline Stabilization
Use Select by Attributes > PBR_ACTIVITY IN ('02', '04', '08') to select only those activities likely to be related to shoreline stabilization, and export the selected data as DEP_PBR_Points3. Then delete the first two files (DEP_PBR_Points and DEP_PBR_Points2), and rename this final product DEP_PBR_Points.
7. Visually inspect the Licensing and PBR files against ArcMap 2012 and 2013 imagery, along with Google Earth imagery, to determine the extents of armoring along the shoreline.
8. Using EVI and Rice data as indicators, manually inspect and digitize sections of the coastline that are armored. Classify the seaward shoreline type (beach, mudflat, channel, dune, etc.) and the armor type (wall or bulkhead). Bring in the HAT line and, using that and visual indicators, identify whether or not the armored sections are in contact with HAT. Use Google Earth at the same time as digitizing in order to help constrain areas. Merge digitized armoring into Cumberland_York_Merged.
9. Bring in the preliminary FEMA DFIRM data and use "intersect" to assign the different flood zones and elevations to the digitized armored sections. This was done first for Cumberland, then for York counties. Delete ancillary attributes as needed. The resulting layers are Cumberland_Structure_FloodZones and York_Structure_FloodZones.
10. Go to NOAA Digital Coast Data Layers and download the newest LiDAR data for York and Cumberland county beach, dune, and just-inland areas. This includes 2006 data and newer topobathy data available from 2010 (entire coast), and selected areas from 2013 and 2014 (Wells, Scarborough, Kennebunk).
11. Mosaic the 2006, 2010, 2013, and 2014 data (with the 2013 and 2014 data laid on top of the 2010 data), then mosaic this dataset into the sacobaydem_ftNAVD raster (from the MEGIS bare-earth model). This covers almost all of the study area except for armor along several areas in York, resulting in LidAR206_2010_2013_Mosaic.tif.
12. Using the LiDAR data as a proxy, create a "seaward crest" line feature class that follows along the coast and extracts the approximate highest point (cliff, bank, dune) along the shoreline. This will be used to extract LiDAR data and compare with preliminary flood zone information. The line is called Dune_Crest.
13. Using the added tool Points Along Line, create points at 5 m spacing along each of the armored shoreline feature lines and the dune crest lines. Call the outputs PointsonLines and PointsonDunes.
14. Using Spatial Analyst, extract LiDAR elevations to the points using the 2006_2010_2013 mosaic first; call this LidarPointsonLines1. Select those points which have NULL values and export them as LiDARPointsonLines2, then rerun Extract Values to Points using just the selected data and the state MEGIS DEM. Convert RASTERVALU to feet by multiplying by 3.2808 (and rename it Elev_ft). Select by Attributes, find all NULL values, and in an edit session delete them from LiDARPointsonLines. Then merge the two datasets and call the result LidarPointsonLines. Do the same as above with the dune lines and create LidarPointsonDunes.
15. Use the Cumberland and York flood zone layers to intersect the points with the appropriate flood zone data, creating ...CumbFIRM and ...YorkFIRM files for the dunes and lines.
16. Select those points from the Dunes feature class that are within the X zone; these will NOT have an associated BFE for comparison with the LiDAR data. Export the dune points as Cumberland_York_Dunes_XZone. Run NEAR using the merged flood zone feature class (with only V, AE, and AO zones selected). Then join the flood zone data to the feature class using FID (from the feature class) and OBJECTID (from the flood zone feature class). Export as Cumberland_York_Dunes_XZone_Flood. Delete ancillary columns of data, leaving the original FLD_ZONE (X), Elev_ft, NEAR_DIST (distance, in m, to the nearest flood zone), FLD_ZONE_1 (the near flood zone), and STATIC_BFE_1 (the nearest static BFE).
17. Do the same as above with the Structures file (Cumberland_York_Structures_Lidar_DFIRM_Merged), but also select those features that are within the X zone and OPEN WATER. Export the points as Cumberland_York_Structures_XZone. Again, run NEAR using the merged flood zone with only AE, VE, and AO zones selected. Export the file as Cumberland_York_Structures_XZone_Flood.
18. Merge the above feature classes with the original feature classes and add a field BFE_ELEV_COMPARE. Select all those features whose attributes have a VE or AE flood zone and use the field calculator to calculate the difference between Elev_ft and the BFE (subtracting STATIC_BFE from Elev_ft). Positive values mean the maximum wall elevation is higher than the BFE, while negative values mean the maximum is below the BFE. Then select the remaining values with Switch Selection and calculate the same value using NEAR_STATIC_BFE instead. Select by Attributes > FLD_ZONE = AO, and use the DEPTH value to enter into the above created field as negative values. Delete ancillary attribute fields, leaving those listed in the _FINAL feature classes described above the process steps section.
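As an illustration only, the final BFE comparison step could be scripted in arcpy roughly as below, assuming the field names (Elev_ft, FLD_ZONE, STATIC_BFE, NEAR_STATIC_BFE, DEPTH) match those described above; the feature class path is a placeholder, not part of the published data.

```python
# A condensed sketch of populating BFE_ELEV_COMPARE on the merged point feature class.
import arcpy

fc = r"C:\data\Cumberland_York_Structures_FINAL.gdb\structure_points"  # placeholder path
arcpy.management.AddField(fc, "BFE_ELEV_COMPARE", "DOUBLE")

fields = ["FLD_ZONE", "Elev_ft", "STATIC_BFE", "NEAR_STATIC_BFE", "DEPTH", "BFE_ELEV_COMPARE"]
with arcpy.da.UpdateCursor(fc, fields) as cursor:
    for zone, elev, bfe, near_bfe, depth, _ in cursor:
        if zone in ("VE", "AE") and bfe is not None:
            compare = elev - bfe            # positive: crest/wall above the BFE
        elif zone == "AO" and depth is not None:
            compare = -depth                # AO zones use DEPTH, entered as a negative value
        elif near_bfe is not None:
            compare = elev - near_bfe       # X-zone points use the nearest static BFE
        else:
            compare = None
        cursor.updateRow([zone, elev, bfe, near_bfe, depth, compare])
```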
This data was collected by the U.S. Bureau of Land Management (BLM) in New Mexico at both the New Mexico State Office and at the various field offices. This dataset is meant to depict the federal mineral (or subsurface) interest of land parcels within New Mexico. No attempt is made to depict the mineral interest of non-federal entities. BLM's Master Title Plats are the official land records of the federal government and serve as the primary data source for depiction of federal mineral interest lands. Auxiliary sources are also referenced for the depiction of federal mineral interest. Collection of this dataset began in the 1980s using the BLM's ADS software to digitize information at the 1:24,000 scale. In the mid-to-late 1990s the data was converted from ADS to ArcInfo software and merged into tiles of one degree of longitude by one half degree of latitude. These tiles were regularly updated. The tiles were merged into a statewide coverage. The source geodatabase for this shapefile was created by loading the merged ArcInfo coverage into a personal geodatabase. The geodatabase data were snapped to a more accurate GCDB-derived land network, where available. In areas where GCDB was not available the data were snapped to digitized PLSS. This shapefile has been created by exporting the geodatabase feature class.
This data was collected by the U.S. Bureau of Land Management (BLM) in New Mexico at both the New Mexico State Office and at the various field offices. This dataset is meant to depict the surface owner or manager of the land parcels. In the vast majority of land parcels, they will be one and the same. However, there are instances where the owner and manager of the land surface are not the same. When this occurs, the manager of the land is usually indicated. BLM's Master Title Plats are the official land records of the federal government and serve as the primary data source for depiction of all federal lands. Information from the State of New Mexico is the primary source for the depiction of all state lands. Auxiliary sources are also referenced for the depiction of all lands. Collection of this dataset began in the 1980s using the BLM's ADS software to digitize information at the 1:24,000 scale. In the mid-to-late 1990s the data was converted from ADS to ArcInfo software and merged into tiles of one degree of longitude by one half degree of latitude. These tiles were regularly updated. The tiles were merged into a statewide coverage. The source geodatabase for this shapefile was created by loading the merged ArcInfo coverage into a personal geodatabase. The geodatabase data were snapped to a more accurate GCDB-derived land network, where available. In areas where GCDB was not available the data were snapped to digitized PLSS. In 2006, the personal geodatabase was loaded into an enterprise geodatabase (SDE). This shapefile has been created by exporting the feature class from SDE.
The ckanext-dsactions extension enhances CKAN by adding an "Actions" tab on a dataset's view page, visible to users with editing permissions for that dataset. This tab provides a central location for performing actions related to the dataset, with the default functionality including a dataset cloning feature. The extension is designed to be extensible, enabling administrators to add other custom actions relevant to dataset management.
Key Features: Dataset Actions Tab: Introduces a dedicated "Actions" tab on dataset pages, providing a user interface for performing specific operations on a dataset. Dataset Cloning: Includes a built-in "clone" feature, allowing authorized users to create a copy of a dataset efficiently. The readme does not specify exactly what is cloned (metadata only, resources, etc.), but the intent is efficient duplication. Extensible Design: The "Actions" tab is designed to be easily extended, allowing CKAN administrators to add custom actions and functionalities tailored to specific organizational needs. It provides flexibility in adding new scripts associated with dataset management. Database Export: Facilitates exporting the entire CKAN database using a paster command. This feature is valuable for backups, migrations, or analysis purposes.
Use Cases: Data Governance: Facilitates easier copying/cloning of datasets for staging changes or for testing purposes prior to pushing changes to production. Custom Workflows: Enables custom actions such as triggering QA processes upon dataset updates, or initiating data transformation scripts.
Technical Integration: The extension integrates into CKAN by adding a new tab to the dataset view page based on user permissions, and it provides a database export shell command triggered with paster. Activation requires adding dsactions to CKAN's .ini configuration file. Further configuration details and instructions for adding custom actions are not explicitly provided in the readme but would likely involve custom plugin development.
Benefits & Impact: By centralizing dataset actions and providing a mechanism for extending functionality, ckanext-dsactions streamlines dataset management for CKAN users with editing permissions. The clone feature saves time and effort compared to manually recreating datasets, and the extensibility allows organizations to customize CKAN to their specific data management workflows. The database export functionality provides a relatively straightforward method for backing up or migrating the CKAN database using a command-line tool. As a developer-focused extension, the benefits are primarily directed toward ease of development and operational activities around data management.
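For reference, activation would look something like the line below in the CKAN .ini file. The readme does not name the paster export command, so it is not shown here.

```ini
# CKAN configuration file (e.g. production.ini)
ckan.plugins = ... existing plugins ... dsactions
```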
Recent studies in snowmelt-dominated catchments have documented changes in nitrogen (N) retention over time, such as declines in watershed exports of N, though there is a limited understanding of the controlling processes driving these trends. Working in the mountainous headwater East River Colorado watershed, we explored the effects of riparian hollows as N-cycling hotspots and as important small-scale controls on observed watershed trends. Using a modeling-based approach informed by remote sensing and in situ observations, we simulated the N-retention capacity of riparian hollows with seasonal and yearly hydrobiogeochemical perturbations imposed as drivers. We then implemented a scaling approach to quantify the relative contribution of riparian hollows to the total river corridor N budget. We found that riparian hollows primarily serve as N sinks, with N-transformation rates significantly limited by periods of enhanced groundwater upwelling and promoted at the onset of rainfall events. Given these observed hydrologic controls, we expect that the nitrate (NO3-) sink capacity of riparian hollows will increase in magnitude with future climatic perturbations, specifically the shift to more frequent rainfall events and fewer snowmelt events, as projected for many mountainous headwater catchments. Our current estimates suggest that while riparian hollows provision ~5–20% of NO3- to the river network, they functionally act as inhibitors to upland NO3- reaching the stream. Our work linking transient hydrological conditions to numerical biogeochemical simulations is an important step in assessing N-retaining features relative to the watershed N budget and better understanding the role of small-scale features within watersheds.
Urbanization dramatically alters watershed ecosystem processes. Land-use change and anthropogenic activities contribute to increased inputs of nutrients and other materials, while changes to land cover alter hydrology and the corresponding movement of materials. These changes have ramifications for both watershed processes and downstream systems. The impacts of urbanization on aquatic systems are well-studied, and frequently encapsulated in the ‘urban stream syndrome’ (Walsh et al. 2005) that describes, among others, increased nutrient loading and stream flashiness. However, there is some evidence that aridland cities behave differently (Grimm et al. 2004, 2005), and the complex dynamics among catchment characteristics, storm attributes, and runoff in highly urbanized settings of the arid Southwest remain poorly understood.
To enhance our understanding of stormwater dynamics and watershed functioning in aridland, urban environments, the Central Arizona–Phoenix Long-Term Ecological Research (CAP LTER) program began monitoring stormwater runoff at the outflow of the Indian Bend Wash (IBW) in 2008. The IBW is a tributary to the Salt River in central Arizona and is a major drainage within the greater Phoenix metropolitan area, encompassing much of the City of Scottsdale. A model of soft engineering, the IBW as it runs through much of the City of Scottsdale consists largely of a series of artificial lakes, parks, paths, golf courses, ball fields, and other non-structural elements designed with the dual roles of providing outdoor amenities to City residents while serving as an effective flood water conveyance feature. The unique biogeochemistry of this novel system is detailed by Roach et al. (2008) and Roach and Grimm (2011).
Data and expertise garnered by stormwater monitoring near the outflow of the IBW helped pave the way for a more expansive stormwater research effort facilitated by a leveraged grant from the National Science Foundation (DEB-0918457, NSF Ecosystems, 2009-13). Through the Stormwater Nitrogen in Arizona (SNAZ) project, hierarchically nested urban stormwater catchments in Scottsdale, and another in Tempe, Arizona were instrumented with automated stormwater samplers (ISCO® 6700 automated pump samplers). A subset of those locations were fitted with bubbler modules (ISCO® 720 bubbler modules) for quantifying water height (and subsequently discharge), and tipping-bucket rain gauges (ISCO® 674). The study catchments differed in type of stormwater infrastructure, spanning a continuum from highly engineered stormwater infrastructure in older residential areas to non-engineered washes in the desert, but generally not in land-use type (land use in all study catchments is predominantly residential). Discrete stormwater samples were collected from most runoff-generating storms at the outflow of the study catchments from the fall of 2010 through the summer 2012. Rainfall samples were collected at a subset of the locations during several storms to provide data that would contribute to an assessment of sources of materials in runoff. Results of this study are detailed by Hale et al. (2014, 2015). Sampling at most locations ceased at the end of the SNAZ award period, but the CAP LTER continues its long-term monitoring of runoff at several locations along the IBW, as well as other locations in the greater Phoenix metropolitan area. Data from the CAP LTER's long-term stormwater monitoring program are available through the CAP LTER website and the Environmental Data Initiative data portal.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Part of TraCES. From Translation to Creation: Changes in Ethiopic Style and Lexicon from Late Antiquity to the Middle Ages. Besides trying to parse a string based on logic and tabular data, the script queries a TEI feature-structure export of annotations produced with the GETA tool (by Cristina Vertan). As a base for validation it also uses an extract of the data from Dillmann's Lexicon Linguae Aethiopicae.
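A minimal sketch of reading such a TEI feature-structure export is shown below. This is not the TraCES script itself; the file name and the exact markup produced by the GETA tool are assumptions, and only the standard TEI <fs>/<f> elements are used.

```python
# Sketch: iterate over TEI feature structures (<fs>) and collect their <f> name/value pairs.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}
tree = etree.parse("geta_annotations.xml")   # placeholder export file

for fs in tree.iterfind(".//tei:fs", TEI_NS):
    features = {
        f.get("name"): "".join(f.itertext()).strip()
        for f in fs.iterfind("tei:f", TEI_NS)
    }
    print(fs.get("type"), features)
```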
Feature class that compares the elevations of sand dune crests (extracted from available LiDAR datasets from 2010 and 2013) with published FEMA Base Flood Elevations (BFEs) from preliminary FEMA DFIRMs (panels issued in 2018 and 2019) in coastal York and Cumberland counties (up through Willard Beach in South Portland). Steps to create the dataset included:
1. Shoreline structures from the most recent NOAA EVI LANDWARD_SHORETYPE feature class were extracted using the boundaries of York and Cumberland counties. This included 1B: Exposed, Solid Man-Made Structures; 8B: Sheltered, Solid Man-Made Structures; 6B: Riprap; and 8C: Sheltered Riprap. This resulted in the creation of Cumberland_ESIL_Structures and York_ESIL_Structures. Note that ESIL uses the MHW line as the feature base.
2. Shoreline structures from the work by Rice (2015) were extracted using the York and Cumberland county boundaries. This resulted in the creation of Cumberland_Rice_Structures and York_Rice_Structures.
3. Additional feature classes (Slovinsky_York_Structures and Slovinsky_Cumberland_Structures) were created for York and Cumberland county structures that were missed. Google Earth imagery was inspected while additional structures were being added to the GIS. 2012 York and Cumberland County imagery was used as the basemap, and structures were classified as bulkheads, rip rap, or dunes (if known). Whether or not the structure was in contact with the 2015 HAT was also noted.
4. MEDEP was consulted to determine which permit data (both PBR and Individual Permit, IP, data) could be used to help determine where shoreline stabilization projects may have been conducted adjacent to or on coastal bluffs. A file was received for IP data and brought into GIS (DEP_Licensing_Points). This is a point file for shoreline stabilization permits under NRPA.
5. Clip GISVIEW.MEDEP.Permit_By_Rule_Locations to the boundaries of the study area and output DEP_PBR_Points.
6. Join GISVIEW.sde > GISVIEW.MEDEP.PBR_ACTIVITY to DEP_PBR_Points using the PBR_ID field, then export this file as DEP_PBR_Points2. Using the new ACTIVITY_DESC field, select only those activities that relate to shoreline stabilization projects:
PBR_ACTIVITY | ACTIVITY_DESC
02 | Act. Adjacent to a Protected Natural Resource
04 | Maint Repair & Replacement of Structure
08 | Shoreline Stabilization
Use Select by Attributes > PBR_ACTIVITY IN ('02', '04', '08') to select only those activities likely to be related to shoreline stabilization, and export the selected data as DEP_PBR_Points3. Then delete the first two files (DEP_PBR_Points and DEP_PBR_Points2), and rename this final product DEP_PBR_Points.
7. Visually inspect the Licensing and PBR files against ArcMap 2012 and 2013 imagery, along with Google Earth imagery, to determine the extents of armoring along the shoreline.
8. Using EVI and Rice data as indicators, manually inspect and digitize sections of the coastline that are armored. Classify the seaward shoreline type (beach, mudflat, channel, dune, etc.) and the armor type (wall or bulkhead). Bring in the HAT line and, using that and visual indicators, identify whether or not the armored sections are in contact with HAT. Use Google Earth at the same time as digitizing in order to help constrain areas. Merge digitized armoring into Cumberland_York_Merged.
9. Bring in the preliminary FEMA DFIRM data and use "intersect" to assign the different flood zones and elevations to the digitized armored sections. This was done first for Cumberland, then for York counties. Delete ancillary attributes as needed. The resulting layers are Cumberland_Structure_FloodZones and York_Structure_FloodZones.
10. Go to NOAA Digital Coast Data Layers and download the newest LiDAR data for York and Cumberland county beach, dune, and just-inland areas. This includes 2006 data and newer topobathy data available from 2010 (entire coast), and selected areas from 2013 and 2014 (Wells, Scarborough, Kennebunk).
11. Mosaic the 2006, 2010, 2013, and 2014 data (with the 2013 and 2014 data laid on top of the 2010 data), then mosaic this dataset into the sacobaydem_ftNAVD raster (from the MEGIS bare-earth model). This covers almost all of the study area except for armor along several areas in York, resulting in LidAR206_2010_2013_Mosaic.tif.
12. Using the LiDAR data as a proxy, create a "seaward crest" line feature class that follows along the coast and extracts the approximate highest point (cliff, bank, dune) along the shoreline. This will be used to extract LiDAR data and compare with preliminary flood zone information. The line is called Dune_Crest.
13. Using the added tool Points Along Line, create points at 5 m spacing along each of the armored shoreline feature lines and the dune crest lines. Call the outputs PointsonLines and PointsonDunes.
14. Using Spatial Analyst, extract LiDAR elevations to the points using the 2006_2010_2013 mosaic first; call this LidarPointsonLines1. Select those points which have NULL values and export them as LiDARPointsonLines2, then rerun Extract Values to Points using just the selected data and the state MEGIS DEM. Convert RASTERVALU to feet by multiplying by 3.2808 (and rename it Elev_ft). Select by Attributes, find all NULL values, and in an edit session delete them from LiDARPointsonLines. Then merge the two datasets and call the result LidarPointsonLines. Do the same as above with the dune lines and create LidarPointsonDunes.
15. Use the Cumberland and York flood zone layers to intersect the points with the appropriate flood zone data, creating ...CumbFIRM and ...YorkFIRM files for the dunes and lines.
16. Select those points from the Dunes feature class that are within the X zone; these will NOT have an associated BFE for comparison with the LiDAR data. Export the dune points as Cumberland_York_Dunes_XZone. Run NEAR using the merged flood zone feature class (with only V, AE, and AO zones selected). Then join the flood zone data to the feature class using FID (from the feature class) and OBJECTID (from the flood zone feature class). Export as Cumberland_York_Dunes_XZone_Flood. Delete ancillary columns of data, leaving the original FLD_ZONE (X), Elev_ft, NEAR_DIST (distance, in m, to the nearest flood zone), FLD_ZONE_1 (the near flood zone), and STATIC_BFE_1 (the nearest static BFE).
17. Do the same as above with the Structures file (Cumberland_York_Structures_Lidar_DFIRM_Merged), but also select those features that are within the X zone and OPEN WATER. Export the points as Cumberland_York_Structures_XZone. Again, run NEAR using the merged flood zone with only AE, VE, and AO zones selected. Export the file as Cumberland_York_Structures_XZone_Flood.
18. Merge the above feature classes with the original feature classes and add a field BFE_ELEV_COMPARE. Select all those features whose attributes have a VE or AE flood zone and use the field calculator to calculate the difference between Elev_ft and the BFE (subtracting STATIC_BFE from Elev_ft). Positive values mean the maximum crest elevation is higher than the BFE, while negative values mean the maximum is below the BFE. Then select the remaining values with Switch Selection and calculate the same value using NEAR_STATIC_BFE instead. Select by Attributes > FLD_ZONE = AO, and use the DEPTH value to enter into the above created field as negative values. Delete ancillary attribute fields, leaving those listed in the _FINAL feature classes described above the process steps section.
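As a small illustration of the LiDAR extraction and unit-conversion step described above, an arcpy sketch might look like the following. Paths and layer names are placeholders, and a Spatial Analyst license is assumed.

```python
# Sketch: attach LiDAR elevations to the 5 m points and convert meters to feet.
import arcpy
from arcpy.sa import ExtractValuesToPoints

arcpy.CheckOutExtension("Spatial")

points = r"C:\data\PointsonDunes.shp"                # 5 m points along the dune crest line
lidar = r"C:\data\LidAR206_2010_2013_Mosaic.tif"     # mosaicked LiDAR surface (meters)
out_points = r"C:\data\LidarPointsonDunes.shp"

# Attach the raster value (RASTERVALU) to each point
ExtractValuesToPoints(points, lidar, out_points)

# Convert the extracted elevation to feet (1 m = 3.2808 ft) into a new Elev_ft field
arcpy.management.AddField(out_points, "Elev_ft", "DOUBLE")
arcpy.management.CalculateField(out_points, "Elev_ft", "!RASTERVALU! * 3.2808", "PYTHON3")
```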
New-ID: NBI18
The Africa Major Infrastructure and Human Settlements Dataset
Files: TOWNS2.E00 (code 100022-002), ROADS2.E00 (code 100021-002)
Vector Members: The E00 files are in Arc/Info Export format and should be imported with the Arc/Info command IMPORT COVER <in-filename> <out-filename>.
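For example, assuming the default output coverage names (the exact syntax may vary by Arc/Info version), the imports would look roughly like this at the Arc prompt:

```
Arc: IMPORT COVER TOWNS2.E00 TOWNS2
Arc: IMPORT COVER ROADS2.E00 ROADS2
```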
The Africa major infrastructure and human settlements dataset forms part of the UNEP/FAO/ESRI Database project that covers the entire world but focuses here on Africa. The maps were prepared by the Environmental Systems Research Institute (ESRI), USA. Most data for the database were provided by the Soil Resources, Management and Conservation Service, Land and Water Development Division of the Food and Agriculture Organization (FAO), Italy. This dataset was developed in collaboration with the United Nations Environment Program (UNEP), Kenya. The base maps used were the UNESCO/FAO Soil Map of the World (1977) in Miller Oblated Stereographic projection, the DMA Global Navigation and Planning charts for Africa (various dates: 1976-1982) and the Rand-McNally New International Atlas (1982). All sources were re-registered to the basemap by comparing known features on the basemap with those of the source maps. The digitizing was done with a spatial resolution of 0.002 inches. The maps were then transformed from inch coordinates to latitude/longitude degrees. The transformation was done using an unpublished algorithm of the US Geological Survey and ESRI to create coverages for one-degree graticules. The population centers were selected based upon their inclusion in the list of major cities and populated areas in the Rand McNally New International Atlas.
Contact: UNEP/GRID-Nairobi, P.O. Box 30552, Nairobi, Kenya; FAO, Soil Resources, Management and Conservation Service, 00100, Rome, Italy; ESRI, 380 New York Street, Redlands, CA 92373, USA.
The ROADS2 file shows major roads of the African continent. The TOWNS2 file shows human settlements and airports for the African continent.
References:
ESRI. Final Report UNEP/FAO World and Africa GIS data base (1984). Internal Publication by ESRI, FAO and UNEP
FAO. UNESCO Soil Map of the World (1977). Scale 1:5000000. UNESCO, Paris
Defence Mapping Agency. Global Navigation and Planning charts for Africa (various dates: 1976-1982). Scale 1:5000000. Washington DC.
Grosvenor. National Geographic Atlas of the World (1975). Scale 1:850000. National Geographic Society Washington DC.
DMA. Topographic Maps of Africa (various dates). Scale 1:2000000. Washington DC.
Rand-McNally. The New International Atlas (1982). Scale 1:6,000,000. Rand McNally & Co., Chicago.
Source: FAO Soil Map of the World. Scale 1:5000000
Publication Date: Dec 1984
Projection: Miller
Type: Points
Format: Arc/Info export, non-compressed
Related Datasets: All UNEP/FAO/ESRI Datasets; ADMINLL (100012-002) administrative boundaries; AFURBAN (100082) urban percentage coverage
Comments: There is no outline of Africa
Use the Sidebar template to include a set of tools and options that appear in a side panel next to the map. You can enable editing tools to allow users to add and update features in the map. Configure filters that app users can use to gain more information about your data. Include bookmarks to guide your users to important regions and add essential map tools for exploring the map.
Examples: Showcase a detailed map of population data with supplementary text for further explanation. Allow data reviewers to investigate and update records with editing tools. Present public services in a map that your audience can filter for the types of services they need.
Data requirements
The Sidebar template has no specific data requirements. To use the Oriented imagery tool, the web map must have an oriented imagery layer.
Key app capabilities
Cover page - Include a cover page with custom text and logos to establish the purpose of the app.
Edit tools - Provide options to add and update features in editable layers. Users can turn on snapping for more efficient and precise editing.
Attribute filter - Configure map filter options that are available to app users.
Bookmarks - Allow users to zoom and pan to a collection of preset extents that are saved in the map.
Export - Print or export the search results or selected features as a .pdf, .jpg, or .png file that includes the pop-up content of returned features and an option to include the map.
Measurement tools - Provide tools that measure distance and area and find and convert coordinates.
Language switcher - Provide translations for custom text and create a multilingual app.
Home, Zoom controls, Legend, Layer List, Search
Supportability
This web app is designed responsively to be used in browsers on desktops, mobile phones, and tablets. We are committed to ongoing efforts towards making our apps as accessible as possible. Please feel free to leave a comment on how we can improve the accessibility of our apps for those who use assistive technologies.
This layer illustrates the Sacramento-San Joaquin Delta boundary (version 2002.4). It is a duplicate of a layer available on the CNRA Open Data website; it is provided here because the REST service for the authoritative CNRA copy is occasionally down. It delineates the legal Delta established under the Delta Protection Act (Section 12220 of the Water Code), passed in 1959. This boundary file has been reviewed by a variety of relevant professionals and can be considered acceptable for mapping at 1:24000. The original legal boundary maps obtained from the Delta Protection Commission were compiled by DWR Land & Right of Way sometime in the early 1980s, with one revision made to the original maps in the vicinity of Point Pleasant.
Additional notes: The exact accuracy is somewhat uncertain, but can be considered acceptable for mapping within 7.5 Minute USGS map accuracy standards (1:24000 scale). The original topographic maps containing the drawn Delta border were scanned from the Department of Water Resources. Images were registered to 1:24000 USGS DRGs in ArcView (ESRI) utilizing the ImageWarp extension. The Delta boundary was digitized from the registered images. The original legal boundary maps were based on the legal description in Section 12220 of the Water Code, with ambiguities in the Code addressed by the individuals involved in the mapping project at that time. One revision was made to the original maps in the vicinity of Point Pleasant, and is the only difference between this and the 4.2001 version of the legal Delta boundary Arc/INFO coverage.
Published to the DWR Spatial Data Library 2/21/2003, as an export to a geodatabase feature class output. Source is the DWR Delta Levees Program. These data are distributed as part of the DWR Spatial Data Library. Please advise the dataset administrator of any improvements or suggestions for these data, or if additional metadata can be contributed. The State of California, the Department of Water Resources, the Programs, and the individuals working in support of any of the preceding shall have no legal responsibility for providing data to the DWR Spatial Data Library, and shall have no responsibility for any errors or omissions, or for the use or results obtained from the use of this information. User acknowledges and accepts these terms upon receipt or display of any of the contents of any of the files associated with these data.
Received from Chico State by the DWR Delta Levees Program 5/31/2001. Converted from shapefile into coverage format, converted from Teale Albers into Geographic/NAD83, and rebuilt topology using ArcGIS 8.2, double-precision, by Joel Dudas, DWR Delta Levees Program, 2/2003. The revision between the 4.2001 and 4.2002 versions reflects a change in the vicinity of Point Pleasant in the east Delta, as shown on modified Delta Protection Commission maps. The line was moved south to the township boundary line, as appropriate, using ArcGIS 8.1 software. During 2001 and early 2002 every effort was made to identify any errors in the underlying data sources, including water district, reclamation district, road, and other boundaries. While certain features could not be 100% certified, this coverage can be considered as accurate as possible based on all of the information available at this time. These uncertainties principally involve obscurity in some of the ancestral source data.
A Hosted Feature Service representing Chatham County Flood Risk to Building Footprints in 2015. This map is used in the Flood Footprints Map and the Flood Risk Swipe Instant Application for the Chatham County Performance Hub.
Analytical Steps to Produce Data:
1. Select By Location: All Building Footprints that "intersect" Flood Hazard Areas
2. Export Footprint Selection to FC, then convert Polygons to Points (Feature to Point Tool)
3. Calculate Number of Footprint Points within Each Census Tract & Append Data to Tracts (Select by Location or Summarize Within)
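A hedged arcpy sketch of these three steps is shown below; the geodatabase paths and layer names are placeholders, not the county's actual data sources.

```python
# Sketch of the selection, point conversion, and per-tract count described above.
import arcpy

arcpy.env.overwriteOutput = True

footprints = r"C:\data\chatham.gdb\building_footprints"
flood_zones = r"C:\data\chatham.gdb\flood_hazard_areas"
tracts = r"C:\data\chatham.gdb\census_tracts"

# 1. Select building footprints that intersect flood hazard areas
fp_layer = arcpy.management.MakeFeatureLayer(footprints, "fp_layer")
arcpy.management.SelectLayerByLocation(fp_layer, "INTERSECT", flood_zones)

# 2. Export the selection and convert the polygons to points
arcpy.management.CopyFeatures(fp_layer, r"C:\data\chatham.gdb\flood_footprints")
arcpy.management.FeatureToPoint(
    r"C:\data\chatham.gdb\flood_footprints",
    r"C:\data\chatham.gdb\flood_footprint_points",
    "INSIDE",
)

# 3. Count footprint points within each census tract and append the count to the tracts
arcpy.analysis.SummarizeWithin(
    tracts,
    r"C:\data\chatham.gdb\flood_footprint_points",
    r"C:\data\chatham.gdb\tracts_with_flood_counts",
)
```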