100+ datasets found
  1. Classification of web-based Digital Humanities projects leveraging...

    • zenodo.org
    • data-staging.niaid.nih.gov
    csv, tsv
    Updated Nov 10, 2025
    Cite
    Tommaso Battisti; Tommaso Battisti (2025). Classification of web-based Digital Humanities projects leveraging information visualisation techniques [Dataset]. http://doi.org/10.5281/zenodo.14192758
    Explore at:
    tsv, csv (available download formats)
    Dataset updated
    Nov 10, 2025
    Dataset provided by
    Zenodo
    Authors
    Tommaso Battisti; Tommaso Battisti
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description


    This dataset contains a list of 186 Digital Humanities projects leveraging information visualisation techniques. Each project has been classified according to visualisation and interaction methods, narrativity and narrative solutions, domain, methods for the representation of uncertainty and interpretation, and the employment of critical and custom approaches to visually represent humanities data.

    Classification schema: categories and columns

    The project_id column contains unique internal identifiers assigned to each project. Meanwhile, the last_access column records the most recent date (in DD/MM/YYYY format) on which each project was reviewed based on the web address specified in the url column.
    The remaining columns can be grouped into descriptive categories aimed at characterising projects according to different aspects:
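    Since last_access uses DD/MM/YYYY, a day-first parse is needed when loading the file. A minimal sketch with pandas (sample values hypothetical; the real file would be read with pd.read_csv(..., sep="\t")):

```python
import pandas as pd

# Hypothetical sample rows standing in for the downloaded tsv/csv
df = pd.DataFrame({"project_id": [1, 2],
                   "last_access": ["05/11/2024", "28/02/2025"]})

# DD/MM/YYYY: give the format explicitly to avoid US-style month-first misreads
df["last_access"] = pd.to_datetime(df["last_access"], format="%d/%m/%Y")
```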

    Narrativity. It reports the presence of information visualisation techniques employed within narrative structures. Here, the term narrative encompasses both author-driven linear data stories and more user-directed experiences where the narrative sequence is determined by user exploration [1]. Two columns identify projects using visualisation techniques in narrative or non-narrative sections; both can be true for projects employing visualisations in both contexts. Columns:

    • non_narrative (boolean)

    • narrative (boolean)

    Domain. The humanities domain to which the project is related. We rely on [2] and the chapters of the first part of [3] to abstract a set of general domains. Column:

    • domain (categorical):

      • History and archaeology

      • Art and art history

      • Language and literature

      • Music and musicology

      • Multimedia and performing arts

      • Philosophy and religion

      • Other: both extra-list domains and cases of collections without a unique or specific thematic focus.

    Visualisation of uncertainty and interpretation. Building upon the frameworks proposed by [4] and [5], a set of categories was identified, highlighting a distinction between precise and impressional communication of uncertainty. Precise methods explicitly represent quantifiable uncertainty such as missing, unknown, or uncertain data, precisely locating and categorising it using visual variables and positioning. Two sub-categories are interactive distinction, when uncertain data is not visually distinguishable from the rest of the data but can be dynamically isolated or included/excluded categorically through interaction techniques (usually filters); and visual distinction, when uncertainty visually “emerges” from the representation by means of dedicated glyphs and spatial or visual cues and variables. On the other hand, impressional methods communicate the constructed and situated nature of data [6], exposing the interpretative layer of the visualisation and indicating more abstract and unquantifiable uncertainty using graphical aids or interpretative metrics. Two sub-categories are: ambiguation, when the use of graphical expedients—like permeable glyph boundaries or broken lines—visually conveys the ambiguity of a phenomenon; and interpretative metrics, when expressive, non-scientific, or non-punctual metrics are used to build a visualisation. Column:

    • uncertainty_interpretation (categorical):

      • Interactive distinction

      • Visual distinction

      • Ambiguation

      • Interpretative metrics

    Critical adaptation. We identify projects in which, for at least one visualisation, the following criteria are fulfilled: 1) avoiding the repurposing of prepackaged, generic-use, or ready-made solutions; 2) being tailored and unique to reflect the peculiarities of the phenomena at hand; 3) avoiding simplifications in order to embrace and depict complexity, promoting time-consuming visualisation-based inquiry. Column:

    • critical_adaptation (boolean)

    Non-temporal visualisation techniques. We adopt and partially adapt the terminology and definitions from [7]. A column is defined for each type of visualisation and accounts for its presence within a project, also including stacked layouts and more complex variations. Columns and inclusion criteria:

    • plot (boolean): visual representations that map data points onto a two-dimensional coordinate system.

    • cluster_or_set (boolean): sets or cluster-based visualisations used to unveil possible inter-object similarities.

    • map (boolean): geographical maps used to show spatial insights. While we do not specify the variants of maps (e.g., pin maps, dot density maps, flow maps, etc.), we make an exception for maps where each data point is represented by another visualisation (e.g., a map where each data point is a pie chart) by accounting for the presence of both in their respective columns.

    • network (boolean): visual representations highlighting relational aspects through nodes connected by links or edges.

    • hierarchical_diagram (boolean): tree-like structures such as tree diagrams, radial trees, but also dendrograms. They differ from networks for their strictly hierarchical structure and absence of closed connection loops.

    • treemap (boolean): still hierarchical, but highlighting quantities expressed by means of area size. It also includes circle packing variants.

    • word_cloud (boolean): clouds of words, where each instance’s size is proportional to its frequency in a related context.

    • bars (boolean): includes bar charts, histograms, and variants. It coincides with “bar charts” in [7] but with a more generic term to refer to all bar-based visualisations.

    • line_chart (boolean): the display of information as sequential data points connected by straight-line segments.

    • area_chart (boolean): similar to a line chart but with a filled area below the segments. It also includes density plots.

    • pie_chart (boolean): circular graphs divided into slices which can also use multi-level solutions.

    • plot_3d (boolean): plots that use a third dimension to encode an additional variable.

    • proportional_area (boolean): representations used to compare values through area size. Typically, using circle- or square-like shapes.

    • other (boolean): it includes all other types of non-temporal visualisations that do not fall into the aforementioned categories.
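    With one boolean column per technique, counting how often each visualisation type occurs across projects reduces to summing the columns. A sketch with pandas over a hypothetical three-project sample (column names from the schema above):

```python
import pandas as pd

viz_cols = ["plot", "map", "network", "bars", "line_chart"]

# Hypothetical mini-sample; the real dataset has one row per project
df = pd.DataFrame([
    {"project_id": 1, "plot": True,  "map": True,  "network": False, "bars": True,  "line_chart": False},
    {"project_id": 2, "plot": False, "map": True,  "network": True,  "bars": False, "line_chart": False},
    {"project_id": 3, "plot": True,  "map": False, "network": False, "bars": True,  "line_chart": True},
])

# Summing booleans counts the projects using each technique
counts = df[viz_cols].sum().sort_values(ascending=False)
```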

    Temporal visualisations and encodings. In addition to non-temporal visualisations, a group of techniques to encode temporality is considered in order to enable comparisons with [7]. Columns:

    • timeline (boolean): the display of a list of data points or spans in chronological order. They include timelines working either with a scale or simply displaying events in sequence. As in [7], we also include structured solutions resembling Gantt chart layouts.

    • temporal_dimension (boolean): to report when time is mapped to any dimension of a visualisation, with the exclusion of timelines. We use the term “dimension” and not “axis” as in [7] as more appropriate for radial layouts or more complex representational choices.

    • animation (boolean): temporality is perceived through an animation changing the visualisation according to time flow.

    • visual_variable (boolean): another visual encoding strategy is used to represent any temporality-related variable (e.g., colour).

    Interactions. A set of categories to assess affordable interactions based on the concept of user intent [8] and user-allowed perceptualisation data actions [9]. The following categories roughly match the manipulative subset of methods of the “how” an interaction is performed in the conception of [10]. Only interactions that affect the aspect of the visualisation or the visual representation of its data points, symbols, and glyphs are taken into consideration. Columns:

    • basic_selection (boolean): the demarcation of an element either for the duration of the interaction or more permanently until the occurrence of another selection.

    • advanced_selection (boolean): the demarcation involves both the selected element and connected elements within the visualisation or leads to brush and link effects across views. Basic selection is tacitly implied.

    • navigation (boolean): interactions that allow moving, zooming, panning, rotating, and scrolling the view but only when applied to the visualisation and not to the web page. It also includes “drill” interactions (to navigate through different levels or portions of data detail, often generating a new view that replaces or accompanies the original) and “expand” interactions generating new perspectives on data by expanding and collapsing nodes.

    • arrangement (boolean): the organisation of visualisation elements (symbols, glyphs, etc.) or multi-visualisation layouts spatially through drag and drop or
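    Putting the schema together, typical queries on these columns look like the following sketch (sample rows hypothetical; the real tsv/csv would be loaded from the Zenodo record):

```python
import pandas as pd

# Hypothetical sample standing in for the 186-project table
df = pd.DataFrame({
    "project_id": [1, 2, 3, 4],
    "narrative": [True, True, False, False],
    "non_narrative": [False, True, True, False],
    "domain": ["History and archaeology", "Art and art history",
               "Language and literature", "Other"],
    "critical_adaptation": [True, False, True, False],
})

# Projects using visualisations in both narrative and non-narrative contexts
both_contexts = df[df["narrative"] & df["non_narrative"]]

# Count of critically adapted projects per humanities domain
critical_by_domain = df.groupby("domain")["critical_adaptation"].sum()
```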

  2. NHD HUC8 Shapefile: Rappahannock - 02080103

    • noaa.hub.arcgis.com
    Updated Mar 28, 2024
    + more versions
    Cite
    NOAA GeoPlatform (2024). NHD HUC8 Shapefile: Rappahannock - 02080103 [Dataset]. https://noaa.hub.arcgis.com/maps/7abd0b06b9f74022b92389217f3d84dd
    Explore at:
    Dataset updated
    Mar 28, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration, http://www.noaa.gov/
    Authors
    NOAA GeoPlatform
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Area covered
    Description

    Access National Hydrography Products

    The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system. NHD data was originally developed at 1:100,000 scale and exists at that scale for the whole country. This high-resolution NHD, generally developed at 1:24,000/1:12,000 scale, adds detail to the original 1:100,000-scale NHD. (Data for Alaska, Puerto Rico and the Virgin Islands was developed at high resolution, not 1:100,000 scale.) Local-resolution NHD is being developed where partners and data exist. The NHD contains reach codes for networked features, flow direction, names, and centerline representations for areal water bodies. Reaches are also defined on waterbodies and the approximate shorelines of the Great Lakes, the Atlantic and Pacific Oceans, and the Gulf of Mexico. The NHD also incorporates the National Spatial Data Infrastructure framework criteria established by the Federal Geographic Data Committee.

    The NHD is a national framework for assigning reach addresses to water-related entities, such as industrial discharges, drinking water supplies, fish habitat areas, and wild and scenic rivers. Reach addresses establish the locations of these entities relative to one another within the NHD surface water drainage network, much like addresses on streets. Once linked to the NHD by their reach addresses, the upstream/downstream relationships of these water-related entities--and any associated information about them--can be analyzed using software tools ranging from spreadsheets to geographic information systems (GIS). GIS can also be used to combine NHD-based network analysis with other data layers, such as soils, land use and population, to help understand and display their respective effects upon one another. Furthermore, because the NHD provides a nationally consistent framework for addressing and analysis, water-related information linked to reach addresses by one organization (national, state, local) can be shared with other organizations and easily integrated into many different types of applications to the benefit of all.

    Statements of attribute accuracy are based on accuracy statements made for U.S. Geological Survey Digital Line Graph (DLG) data, which is estimated to be 98.5 percent. One or more of the following methods were used to test attribute accuracy: manual comparison of the source with hardcopy plots; symbolized display of the DLG on an interactive computer graphic system; selected attributes that could not be visually verified on plots or on screen were interactively queried and verified on screen. In addition, software validated feature types and characteristics against a master set of types and characteristics, checked that combinations of types and characteristics were valid, and that types and characteristics were valid for the delineation of the feature. Feature types, characteristics, and other attributes conform to the Standards for National Hydrography Dataset (USGS, 1999) as of the date they were loaded into the database. All names were validated against a current extract from the Geographic Names Information System (GNIS). The entry and identifier for the names match those in the GNIS. The association of each name to reaches has been interactively checked; however, operator error could in some cases apply a name to a wrong reach.

    Points, nodes, lines, and areas conform to topological rules. Lines intersect only at nodes, and all nodes anchor the ends of lines. Lines do not overshoot or undershoot other lines where they are supposed to meet. There are no duplicate lines. Lines bound areas and lines identify the areas to the left and right of the lines. Gaps and overlaps among areas do not exist. All areas close.

    The completeness of the data reflects the content of the sources, which most often are the published USGS topographic quadrangle and/or the USDA Forest Service Primary Base Series (PBS) map. The USGS topographic quadrangle is usually supplemented by Digital Orthophoto Quadrangles (DOQs). Features found on the ground may have been eliminated or generalized on the source map because of scale and legibility constraints. In general, streams longer than one mile (approximately 1.6 kilometers) were collected. Most streams that flow from a lake were collected regardless of their length. Only definite channels were collected, so not all swamp/marsh features have stream/rivers delineated through them. Lake/ponds having an area greater than 6 acres were collected. Note, however, that these general rules were applied unevenly among maps during compilation. Reach codes are defined on all features of type stream/river, canal/ditch, artificial path, coastline, and connector. Waterbody reach codes are defined on all lake/pond and most reservoir features. Names were applied from the GNIS database. Detailed capture conditions are provided for every feature type in the Standards for National Hydrography Dataset, available online at https://prd-wret.s3-us-west-2.amazonaws.com/assets/palladium/production/atoms/files/NHD%201999%20Draft%20Standards%20-%20Capture%20conditions.PDF.

    Statements of horizontal positional accuracy are based on accuracy statements made for U.S. Geological Survey topographic quadrangle maps. These maps were compiled to meet National Map Accuracy Standards. For horizontal accuracy, this standard is met if at least 90 percent of points tested are within 0.02 inch (at map scale) of the true position. Additional offsets to positions may have been introduced where feature density is high to improve the legibility of map symbols. In addition, the digitizing of maps is estimated to contain a horizontal positional error of less than or equal to 0.003 inch standard error (at map scale) in the two component directions relative to the source maps. Visual comparison between the map graphic (including digital scans of the graphic) and plots or digital displays of points, lines, and areas is used as control to assess the positional accuracy of digital data. Digital map elements along the adjoining edges of data sets are aligned if they are within a 0.02 inch tolerance (at map scale). Features with like dimensionality (for example, features that all are delineated with lines), with or without like characteristics, that are within the tolerance are aligned by moving the features equally to a common point. Features outside the tolerance are not moved; instead, a feature of type connector is added to join the features.

    Statements of vertical positional accuracy for elevation of water surfaces are based on accuracy statements made for U.S. Geological Survey topographic quadrangle maps. These maps were compiled to meet National Map Accuracy Standards. For vertical accuracy, this standard is met if at least 90 percent of well-defined points tested are within one-half contour interval of the correct value. Elevations of water surface printed on the published map meet this standard; the contour intervals of the maps vary. These elevations were transcribed into the digital data; the accuracy of this transcription was checked by visual comparison between the data and the map.
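    The 8-digit hydrologic unit code in the title ("02080103" for the Rappahannock) nests four 2-digit USGS levels, so the coarser units can be read directly off the string. A small sketch of that decomposition:

```python
def huc_levels(huc8: str) -> dict:
    """Split an 8-digit hydrologic unit code into its nested USGS levels."""
    return {
        "region": huc8[:2],     # 2-digit region
        "subregion": huc8[:4],  # 4-digit subregion
        "basin": huc8[:6],      # 6-digit basin (accounting unit)
        "subbasin": huc8[:8],   # 8-digit subbasin (cataloging unit)
    }

levels = huc_levels("02080103")
```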

  3. Data from: Map data showing concentration of landslides caused by Hurricane...

    • catalog.data.gov
    • data.usgs.gov
    • +5 more
    Updated Nov 21, 2025
    Cite
    U.S. Geological Survey (2025). Map data showing concentration of landslides caused by Hurricane Maria in Puerto Rico [Dataset]. https://catalog.data.gov/dataset/map-data-showing-concentration-of-landslides-caused-by-hurricane-maria-in-puerto-rico
    Explore at:
    Dataset updated
    Nov 21, 2025
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Puerto Rico
    Description

    On September 20, 2017, Hurricane Maria hit the U.S. territory of Puerto Rico as a category 4 storm. Heavy rainfall caused landslides in mountainous regions throughout the territory. This data release presents geospatial data describing the concentration of landslides generated by Hurricane Maria in Puerto Rico. We used post-hurricane satellite and aerial imagery collected between September 26, 2017 and October 8, 2017 to visually estimate the number of landslides over nearly the whole territory. This was done by dividing the territory into a grid with 4 km2 cells (2 km x 2 km). Each 4 km2 grid cell was classified as either containing no landslides, fewer than 25 landslides/km2, or more than 25 landslides/km2. We used 12 WorldView satellite images (~0.5 m-resolution) available from Digital Globe, Inc. and ~0.15 m-resolution aerial imagery collected by Sanborn and QuantumSpatial (http://www.arcgis.com/home/item.html?id=b1949283c1084b0daf2987d896392ac2). Because leaves were stripped from much of the vegetation, landslide scars were readily visible in both sources of imagery. We assume that the majority of landslides were triggered by rainfall from Hurricane Maria, but rainfall from Hurricane Irma during the first week of September and rainfall from thunderstorms after Hurricane Maria may have also initiated landslides. During this investigation, we visually examined a total area of 8103 km2, which encompassed most of the territory and nearly all the mountainous areas. Approximately 846 km2 of the land area of the territory was not examined because either 1) imagery was unavailable or 2) the area was obscured by cloud cover. Approximately 61% of the examined area was unaffected by landslides. Landslides were observed in the remaining 39% of the examined area, but, for the most part, the landslide density was less than 25 landslides/km2 (37% of the examined area). 
Landslide density was greater than 25 landslides/km2 in about 2% of the examined area (156 km2), which includes parts of the Añasco, Mayagüez, Las Marías, Maricao, Lares, Utuado, Adjuntas, and Jayuya municipalities. Based on visual examination of imagery, the municipality of Utuado appears to have been the most severely impacted, with about 40% of the municipality having a density of landslides greater than 25 landslides/km2. This preliminary assessment serves to inform response and recovery efforts and as a basis for more detailed studies on the impacts of landslides in Puerto Rico caused by Hurricane Maria.
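    The reported area fractions can be cross-checked with a few lines of arithmetic (all figures taken from the description above):

```python
examined_km2 = 8103.0     # total area visually examined
high_density_km2 = 156.0  # area with > 25 landslides/km^2
cell_km2 = 2.0 * 2.0      # each grid cell is 2 km x 2 km = 4 km^2

# Share of the examined area in the highest density class (~2%, as stated)
high_density_share = high_density_km2 / examined_km2
```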

  4. Geospatial data for the Vegetation Mapping Inventory Project of Palo Alto...

    • catalog.data.gov
    Updated Nov 25, 2025
    Cite
    National Park Service (2025). Geospatial data for the Vegetation Mapping Inventory Project of Palo Alto Battlefield National Historical Park [Dataset]. https://catalog.data.gov/dataset/geospatial-data-for-the-vegetation-mapping-inventory-project-of-palo-alto-battlefield-nati
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    National Park Service, http://www.nps.gov/
    Description

    The files linked to this reference are the geospatial data created for the baseline vegetation inventory project of the NPS park unit. The current format is an ArcGIS file geodatabase, but older versions may exist as shapefiles. The GIS included a visual portrayal of field site and GCP locations that were hotlinked to map identifiers overlain on 1999 mosaic photography coverage. Clicking on a map identifier retrieved a tabular and visual display of all data and photography available for each field site and GCP location. The GIS project containing GCP locations and descriptions (some with photography) and field site locations and descriptions, including photography, was delivered to PAAL personnel on October 11, 2000.

  5. NHD HUC8 Shapefile: James- 02080207

    • noaa.hub.arcgis.com
    Updated Mar 29, 2024
    + more versions
    Cite
    NOAA GeoPlatform (2024). NHD HUC8 Shapefile: James- 02080207 [Dataset]. https://noaa.hub.arcgis.com/maps/77aad52768f84830bedf7f9d7043f0b6
    Explore at:
    Dataset updated
    Mar 29, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration, http://www.noaa.gov/
    Authors
    NOAA GeoPlatform
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Area covered
    Description

    (The description for this dataset is identical to the NHD description given under dataset 2 above.)

  6. HERE Map Rendering - by MBI Geodata with 5 million map updates per day

    • datarade.ai
    Updated Sep 24, 2020
    Cite
    MBI Geodata (2020). HERE Map Rendering - by MBI Geodata with 5 million map updates per day [Dataset]. https://datarade.ai/data-products/here-map-rendering
    Explore at:
    Dataset updated
    Sep 24, 2020
    Dataset provided by
    Michael Bauer International GmbH
    Authors
    MBI Geodata
    Area covered
    France
    Description

    Highly accurate, professionally designed, enterprise-grade maps available worldwide.

    Maps that receive 5 million updates on average per day across the globe for reliable navigation and data visualization.

    Vector Tile API: Use the freshest, daily updated HERE map data through tiles containing vector data, and customize the map style to support your user needs.

    Personalize your maps: Configure the look and feel of your map by changing color, icon size, width, length and other properties of objects such as buildings, land features and roads. Display it all at the desired zoom level.

    Pre-rendered map images: Map images in multiple styles, such as base and aerial, optimized for various devices and operating systems. Request an image around a specific area, or at a specified location and zoom level.

    Map Tile API: Display server-rendered, raster 2D map tiles at different zoom levels, display options, views and schemes. Request tiles that highlight congestion and environmental zones.

    Built-in fleet maps: Integrate maps designed especially for fleet management applications, with accentuated country borders and highways, toll roads within congestion charging zones, and highway exits along routes.

    Truck attributes layer: Provide simple visual cues so that areas with truck restrictions are easily identifiable. Display truck restrictions such as height, weight or environmental restrictions on a variety of map styles.

    Map Feedback: Offer your users the possibility to edit the HERE map or report errors.

  7. Add Spatial Data to Create a Map

    • teachwithgis.co.uk
    Updated Feb 11, 2025
    Cite
    Esri UK Education (2025). Add Spatial Data to Create a Map [Dataset]. https://teachwithgis.co.uk/datasets/add-spatial-data-to-create-a-map
    Explore at:
    Dataset updated
    Feb 11, 2025
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri UK Education
    Description

    The final aim for this practical is to create a 3D model to visualise how a flood depth of 1m might impact buildings within flood alert areas in Shrewsbury, including a potential new building we are going to create a simple 3D model for. By the end of the exercises in this practical you should be able to use Arc Online Apps to create such a 3D model. The learning objectives for making this model are as follows:
    - Be able to open and navigate in the Map Viewer
    - Be able to find and add suitable data into Map Viewer
    - Be able to create datasets that allow you to perform visual analysis to understand why areas may have been identified as flood risk areas
    - Be able to build a query to identify and extract building data for buildings within the Flood Alert Area
    - Be able to create a model to represent a potential new building
    - Be able to use Scene Viewer to put this all together in a 3D model that allows you to visualise this data

  8. Driver Technologies | Traffic Lights Map Video Data | North America and UK |...

    • datarade.ai
    .json
    Updated Aug 29, 2024
    Cite
    Driver Technologies, Inc​ (2024). Driver Technologies | Traffic Lights Map Video Data | North America and UK | Real-time and historical traffic information [Dataset]. https://datarade.ai/data-products/driver-technologies-traffic-lights-map-video-data-north-a-driver-technologies-inc
    Explore at:
    .jsonAvailable download formats
    Dataset updated
    Aug 29, 2024
    Dataset provided by
    Driver Technologies Inc.
    Authors
    Driver Technologies, Inc​
    Area covered
    United Kingdom, United States
    Description

    At Driver Technologies, we specialize in collecting high-quality, highly anonymized driving data crowdsourced through our dash cam app. Our Traffic Light Map Video Data is built from the millions of miles of driving data captured and is optimized for training whatever computer vision models you need, enhancing various applications in transportation and safety.

    What Makes Our Data Unique? What sets our Traffic Light Map Video Data apart is its comprehensive approach to road object detection. By leveraging advanced computer vision models, we analyze the captured video to identify and classify various road objects encountered during an end user's trip. This includes road signs, pedestrians, vehicles, traffic signs, and road conditions, resulting in rich, annotated datasets that can be used for a range of applications.

    How Is the Data Generally Sourced? Our data is sourced directly from users who utilize our dash cam app, which harnesses the smartphone’s camera and sensors to record during a trip. This direct sourcing method ensures that our data is unbiased and represents a wide variety of conditions and environments. The data is not only authentic and reflective of current road conditions but is also abundant in volume, offering millions of miles of recorded trips that cover diverse scenarios.

    Primary Use-Cases and Verticals The Traffic Light Map Video Data is tailored for various sectors, particularly those involved in transportation, urban planning, and autonomous vehicle development. Key use cases include:

    Training Computer Vision Models: Clients can utilize our annotated data to develop and refine their own computer vision models for applications in autonomous vehicles, ensuring better object detection and decision-making capabilities in complex road environments.

    Urban Planning and Infrastructure Development: Our data helps municipalities understand road usage patterns, enabling them to make informed decisions regarding infrastructure improvements, safety measures, and traffic light placement. Our data can also aid in making sure municipalities have an accurate count of signs in their area.

    Integration with Our Broader Data Offering The Traffic Light Map Video Data is a crucial component of our broader data offerings at Driver Technologies. It complements our extensive library of driving data collected from various vehicles and road users, creating a comprehensive data ecosystem that supports multiple verticals, including insurance, automotive technology, and computer vision models.

    In summary, Driver Technologies' Traffic Light Map Video Data provides a unique opportunity for data buyers to access high-quality, actionable insights that drive innovation across mobility. By integrating our Traffic Light Map Video Data with other datasets, clients can gain a holistic view of transportation dynamics, enhancing their analytical capabilities and decision-making processes.

  9. Post World War II Areas

    • catalog.data.gov
    • data.tempe.gov
    • +9more
    Updated Sep 20, 2024
    Cite
    City of Tempe (2024). Post World War II Areas [Dataset]. https://catalog.data.gov/dataset/post-world-war-ii-areas-37523
    Explore at:
    Dataset updated
    Sep 20, 2024
    Dataset provided by
    City of Tempe
    Area covered
    World
    Description

    The contents of this feature layer provide a visual aid for homes constructed between 1945 and 1960. Data supporting the visual aids list which neighborhood these post-World War II homes reside in, the style of the homes, and their condition and integrity. The Historic Preservation Office works with the community to preserve these homes by enhancing archaeological, prehistoric, and historic resources throughout the City of Tempe. This work includes a wide range of partnerships with local homeowners, neighborhoods, developers/architects, boards/commissions, state and national agencies, as well as volunteer and non-profit preservation groups.
    Contact: Will Duke
    Contact E-Mail: will_duke@tempe.gov
    Contact Phone: N/A
    Link: N/A
    Data Source: SQL Server/ArcGIS Server
    Data Source Type: Geospatial
    Preparation Method: N/A
    Publish Frequency: As information changes
    Publish Method: Automatic
    Data Dictionary

  10. Data from: Motion of animated streamlets appears to surpass their graphical...

    • tandf.figshare.com
    mp4
    Updated May 30, 2023
    Cite
    Pyry Kettunen; Juha Oksanen (2023). Motion of animated streamlets appears to surpass their graphical alterations in human visual detection of vector field maxima [Dataset]. http://doi.org/10.6084/m9.figshare.7571075.v1
    Explore at:
    mp4Available download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Pyry Kettunen; Juha Oksanen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Animations have become a frequently utilized illustration technique on maps but changes in their graphical loading remain understudied in empirical geovisualization and cartographic research. Animated streamlets have gained attention as an illustrative animation technique and have become popular on widely viewed maps. We conducted an experiment to investigate how altering four major animation parameters of animated streamlets affects people’s reading performance of field maxima on vector fields. The study involved 73 participants who performed reaction-time tasks on pointing maxima on vector field stimuli. Reaction times and correctness of answers changed surprisingly little between visually different animations, with only a few occasional statistical significances. The results suggest that motion of animated streamlets is such a strong visual cue that altering graphical parameters makes only little difference when searching for the maxima. This leads to the conclusion that, for this kind of a task, animated streamlets on maps can be designed relatively freely in graphical terms and their style fitted to other contents of the map. In the broader visual and geovisual analytics context, the results can lead to more generally hypothesizing that graphical loading of animations with continuous motion flux could be altered without severely affecting their communicative power.

  11. Geospatial data for the Vegetation Mapping Inventory Project of Coronado...

    • catalog.data.gov
    • datasets.ai
    Updated Nov 25, 2025
    Cite
    National Park Service (2025). Geospatial data for the Vegetation Mapping Inventory Project of Coronado National Memorial [Dataset]. https://catalog.data.gov/dataset/geospatial-data-for-the-vegetation-mapping-inventory-project-of-coronado-national-memorial
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Description

    The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. Current format is ArcGIS file geodatabase but older formats may exist as shapefiles. The vector (polygon) map is in digital format within a geodatabase structure that allows for complex relationships to be established between spatial and tabular data, and allows much of the data to be accessed concurrently. Strict nomenclature was enforced for polygons and a unique name was assigned to each polygon. These reflected the verified physiognomic formation type by a prefix of representative letters (e.g., W = Woodland, SS = shrub savanna), followed by a number. Using ArcMap, polygon boundaries were buffered and excluded based on the distance equal to the radius of the selected plot size, positional accuracy of the map, and positional error of the GPS to be used by the assessment crew (Lea and Curtis 2010). The resulting polygons were converted to raster format and points were distributed using the “distribute spatially balanced points” function in ArcToolbox. This function uses the RRQRR algorithm (Theobald et al. 2007) to distribute spatially balanced points throughout the raster. Next, each point was buffered using the radius of the assigned plot size to create a circular area (see Figure 3-1) that was later used as a visual aid to delineate the survey area. These circular plot areas (polygons, essentially) and the plot centroids for all map classes were merged and assigned a unique identifier. All information was removed that could give an assessor any indication as to which class it belonged.
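The "buffer and exclude" step described above can be sketched in simplified form. This is only an illustration of the idea, not the NPS workflow (which used ArcGIS, raster conversion, and the RRQRR spatially balanced sampling algorithm); the function name is an assumption, and the polygon is reduced to an axis-aligned rectangle so a circular plot centred on any sampled point is guaranteed to stay inside it.

```python
import random

def sample_plot_centroids(xmin, ymin, xmax, ymax, plot_radius, n, seed=0):
    """Sample n random plot centres at least plot_radius inside the rectangle.

    Shrinking the rectangle inward by the plot radius mirrors the interior
    buffering of polygon boundaries before points are distributed.
    """
    rng = random.Random(seed)
    x0, y0 = xmin + plot_radius, ymin + plot_radius
    x1, y1 = xmax - plot_radius, ymax - plot_radius
    if x0 >= x1 or y0 >= y1:
        return []  # polygon too small to contain a whole plot
    return [(rng.uniform(x0, x1), rng.uniform(y0, y1)) for _ in range(n)]
```

Each sampled centre could then be buffered by the plot radius to recover the circular survey area used as a visual aid for the assessment crew.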

  12. Surging Seas: Risk Zone Map

    • amerigeo.org
    • data.amerigeoss.org
    • +1more
    Updated Feb 18, 2019
    Cite
    AmeriGEOSS (2019). Surging Seas: Risk Zone Map [Dataset]. https://www.amerigeo.org/datasets/surging-seas-risk-zone-map
    Explore at:
    Dataset updated
    Feb 18, 2019
    Dataset authored and provided by
    AmeriGEOSS
    Description

    Introduction

    Climate Central’s Surging Seas: Risk Zone map shows areas vulnerable to near-term flooding from different combinations of sea level rise, storm surge, tides, and tsunamis, or to permanent submersion by long-term sea level rise. Within the U.S., it incorporates the latest, high-resolution, high-accuracy lidar elevation data supplied by NOAA (exceptions: see Sources), displays points of interest, and contains layers displaying social vulnerability, population density, and property value. Outside the U.S., it utilizes satellite-based elevation data from NASA in some locations, and Climate Central’s more accurate CoastalDEM in others (see Methods and Qualifiers). It provides the ability to search by location name or postal code.

    The accompanying Risk Finder is an interactive data toolkit available for some countries that provides local projections and assessments of exposure to sea level rise and coastal flooding tabulated for many sub-national districts, down to cities and postal codes in the U.S. Exposure assessments always include land and population, and in the U.S. extend to over 100 demographic, economic, infrastructure and environmental variables using data drawn mainly from federal sources, including NOAA, USGS, FEMA, DOT, DOE, DOI, EPA, FCC and the Census.

    This web tool was highlighted at the launch of The White House's Climate Data Initiative in March 2014. Climate Central's original Surging Seas was featured on NBC, CBS, and PBS U.S. national news, the cover of The New York Times, in hundreds of other stories, and in testimony for the U.S. Senate. The Atlantic Cities named it the most important map of 2012. Both the Risk Zone map and the Risk Finder are grounded in peer-reviewed science.

    Methods and Qualifiers

    This map is based on analysis of digital elevation models mosaicked together for near-total coverage of the global coast. Details and sources for U.S. and international data are below. Elevations are transformed so they are expressed relative to local high tide lines (Mean Higher High Water, or MHHW). A simple elevation threshold-based “bathtub method” is then applied to determine areas below different water levels, relative to MHHW. Within the U.S., areas below the selected water level but apparently not connected to the ocean at that level are shown in a stippled green (as opposed to solid blue) on the map. Outside the U.S., due to data quality issues and data limitations, all areas below the selected level are shown as solid blue, unless separated from the ocean by a ridge at least 20 meters (66 feet) above MHHW, in which case they are shown as not affected (no blue).

    Areas using lidar-based elevation data: U.S. coastal states except Alaska

    Elevation data used for parts of this map within the U.S. come almost entirely from ~5-meter horizontal resolution digital elevation models curated and distributed by NOAA in its Coastal Lidar collection, derived from high-accuracy laser-rangefinding measurements. The same data are used in NOAA’s Sea Level Rise Viewer. (High-resolution elevation data for Louisiana, southeast Virginia, and limited other areas comes from the U.S. Geological Survey (USGS).)

    Areas using CoastalDEM™ elevation data: Antigua and Barbuda, Barbados, Corn Island (Nicaragua), Dominica, Dominican Republic, Grenada, Guyana, Haiti, Jamaica, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, San Blas (Panama), Suriname, The Bahamas, Trinidad and Tobago

    CoastalDEM™ is a proprietary high-accuracy bare earth elevation dataset developed especially for low-lying coastal areas by Climate Central. Use our contact form to request more information.

    Warning for areas using other elevation data (all other areas)

    Areas of this map not listed above use elevation data on a roughly 90-meter horizontal resolution grid derived from NASA’s Shuttle Radar Topography Mission (SRTM). SRTM provides surface elevations, not bare earth elevations, causing it to commonly overestimate elevations, especially in areas with dense and tall buildings or vegetation. Therefore, the map under-portrays areas that could be submerged at each water level, and exposure is greater than shown (Kulp and Strauss, 2016). However, SRTM includes error in both directions, so some areas showing exposure may not be at risk. SRTM data do not cover latitudes farther north than 60 degrees or farther south than 56 degrees, meaning that sparsely populated parts of Arctic Circle nations are not mapped here, and may show visual artifacts.

    Areas of this map in Alaska use elevation data on a roughly 60-meter horizontal resolution grid supplied by the U.S. Geological Survey (USGS). This data is referenced to a vertical reference frame from 1929, based on historic sea levels, and with no established conversion to modern reference frames. The data also do not take into account subsequent land uplift and subsidence, widespread in the state. As a consequence, low confidence should be placed in Alaska map portions.

    Flood control structures (U.S.)

    Levees, walls, dams or other features may protect some areas, especially at lower elevations. Levees and other flood control structures are included in this map within but not outside of the U.S., due to poor and missing data. Within the U.S., data limitations, such as an incomplete inventory of levees, and a lack of levee height data, still make assessing protection difficult. For this map, levees are assumed high and strong enough for flood protection. However, it is important to note that only 8% of monitored levees in the U.S. are rated in “Acceptable” condition (ASCE). Also note that the map implicitly includes unmapped levees and their heights, if broad enough to be effectively captured directly by the elevation data.

    For more information on how Surging Seas incorporates levees and elevation data in Louisiana, view our Louisiana levees and DEMs methods PDF. For more information on how Surging Seas incorporates dams in Massachusetts, view the Surging Seas column of the web tools comparison matrix for Massachusetts.

    Error

    Errors or omissions in elevation or levee data may lead to areas being misclassified. Furthermore, this analysis does not account for future erosion, marsh migration, or construction. As is general best practice, local detail should be verified with a site visit. Sites located in zones below a given water level may or may not be subject to flooding at that level, and sites shown as isolated may or may not be so. Areas may be connected to water via porous bedrock geology, and may also be connected via channels, holes, or passages for drainage that the elevation data fails to or cannot pick up. In addition, sea level rise may cause problems even in isolated low zones during rainstorms by inhibiting drainage.

    Connectivity

    At any water height, there will be isolated, low-lying areas whose elevation falls below the water level, but which are protected from coastal flooding by either man-made flood control structures (such as levees), or the natural topography of the surrounding land. In areas using lidar-based elevation data or CoastalDEM (see above), elevation data is accurate enough that non-connected areas can be clearly identified and treated separately in analysis (these areas are colored green on the map). In the U.S., levee data are complete enough to factor levees into determining connectivity as well. However, in other areas, elevation data is much less accurate, and noisy error often produces “speckled” artifacts in the flood maps, commonly in areas that should show complete inundation. Removing non-connected areas in these places could greatly underestimate the potential for flood exposure. For this reason, in these regions, the only areas removed from the map and excluded from analysis are separated from the ocean by a ridge of at least 20 meters (66 feet) above the local high tide line, according to the data, so coastal flooding would almost certainly be impossible (e.g., the Caspian Sea region).

    Data Layers

    Water Level | Projections | Legend | Social Vulnerability | Population | Ethnicity | Income | Property | Landmarks

    Water Level

    Water level means feet or meters above the local high tide line (“Mean Higher High Water”) instead of standard elevation. Methods described above explain how each map is generated based on a selected water level. Water can reach different levels in different time frames through combinations of sea level rise, tide and storm surge. Tide gauges shown on the map show related projections (see just below). The highest water levels on this map (10, 20 and 30 meters) provide reference points for possible flood risk from tsunamis, in regions prone to them.
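The threshold-plus-connectivity logic described above can be sketched on a small grid. This is a minimal illustration of the general "bathtub" technique, not Climate Central's implementation; the grid, water level, and ocean seed cells are invented, and connectivity is resolved with a simple breadth-first flood fill.

```python
from collections import deque

def bathtub(dem, water_level, ocean_seeds):
    """Classify grid cells: 0 = dry, 1 = flooded (ocean-connected),
    2 = low-lying but isolated from the ocean (the stippled-green case).

    dem is a list of rows of elevations relative to local high tide (MHHW);
    ocean_seeds are (row, col) cells known to be open ocean.
    """
    rows, cols = len(dem), len(dem[0])
    low = [[dem[r][c] <= water_level for c in range(cols)] for r in range(rows)]
    out = [[2 if low[r][c] else 0 for c in range(cols)] for r in range(rows)]
    queue = deque(s for s in ocean_seeds if low[s[0]][s[1]])
    for r, c in queue:
        out[r][c] = 1
    while queue:  # flood fill outward from the ocean through low cells
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and low[rr][cc] and out[rr][cc] == 2:
                out[rr][cc] = 1
                queue.append((rr, cc))
    return out
```

A cell behind a high ridge stays classified 2 even though it lies below the water level, mirroring the isolated low areas the map draws in green.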

  13. Statistics on the datasets.

    • plos.figshare.com
    xls
    Updated Jun 5, 2023
    Cite
    Marco Piccirilli; Gianfranco Doretto; Donald Adjeroh (2023). Statistics on the datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0166749.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Marco Piccirilli; Gianfranco Doretto; Donald Adjeroh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    (S denotes the stature).

  14. Data for: Predicting habitat suitability for Townsend’s big-eared bats...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Dec 12, 2022
    Cite
    Natalie Hamilton; Michael Morrison; Leila Harris; Joseph Szewczak; Scott Osborn (2022). Data for: Predicting habitat suitability for Townsend’s big-eared bats across California in relation to climate change [Dataset]. http://doi.org/10.5061/dryad.4j0zpc8f1
    Explore at:
    zipAvailable download formats
    Dataset updated
    Dec 12, 2022
    Dataset provided by
    California Department of Fish and Wildlife
    California State Polytechnic University
    Texas A&M University
    University of California, Davis
    Authors
    Natalie Hamilton; Michael Morrison; Leila Harris; Joseph Szewczak; Scott Osborn
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    California
    Description

Aim: Effective management decisions depend on knowledge of species distribution and habitat use. Maps generated from species distribution models are important in predicting previously unknown occurrences of protected species. However, if populations are seasonally dynamic or locally adapted, failing to consider population level differences could lead to erroneous determinations of occurrence probability and ineffective management. The study goal was to model the distribution of a species of special concern, Townsend’s big-eared bats (Corynorhinus townsendii), in California. We incorporate seasonal and spatial differences to estimate the distribution under current and future climate conditions. Methods: We built species distribution models using all records from statewide roost surveys and by subsetting data to seasonal colonies, representing different phenological stages, and to Environmental Protection Agency Level III Ecoregions to understand how environmental needs vary based on these factors. We projected species’ distribution for 2061-2080 in response to low and high emissions scenarios and calculated the expected range shifts. Results: The estimated distribution differed between the combined (full dataset) and phenologically-explicit models, while ecoregion-specific models were largely congruent with the combined model. Across the majority of models, precipitation was the most important variable predicting the presence of C. townsendii roosts. Under future climate scenarios, the distribution of C. townsendii is expected to contract throughout the state; however, suitable areas will expand within some ecoregions. Main conclusion: Comparison of phenologically-explicit models with combined models indicates the combined models better predict the extent of the known range of C. townsendii in California. However, life history-explicit models aid in understanding the different environmental needs and distribution of their major phenological stages. 
Differences between ecoregion-specific and statewide predictions of habitat contractions highlight the need to consider regional variation when forecasting species’ responses to climate change. These models can aid in directing seasonally explicit surveys and predicting regions most vulnerable under future climate conditions. Methods Study area and survey data The study area covers the U.S. state of California, which has steep environmental gradients that support an array of species (Dobrowski et al. 2011). Because California is ecologically diverse, with regions ranging from forested mountain ranges to deserts, we examined local environmental needs by modeling at both the state-wide and ecoregion scale, using U.S. Environmental Protection Agency (EPA) Level III ecoregion designations and there are thirteen Level III ecoregions in California (Table S1.1) (Griffith et al. 2016). Species occurrence data used in this study were from a statewide survey of C. townsendii in California conducted by Harris et al. (2019). Briefly, methods included field surveys from 2014-2017 following a modified bat survey protocol to create a stratified random sampling scheme. Corynorhinus townsendii presence at roost sites was based on visual bat sightings. From these survey efforts, we have visual occurrence data for 65 maternity roosts, 82 hibernation roosts (hibernacula), and 91 active-season non-maternity roosts (transition roosts) for a total of 238 occurrence records (Figure 1, Table S1.1). Ecogeographical factors We downloaded climatic variables from WorldClim 2.0 bioclimatic variables (Fick & Hijmans, 2017) at a resolution of 5 arcmin for broad-scale analysis and 30 arcsec for our ecoregion-specific analyses. To calculate elevation and slope, we used a digital elevation model (USGS 2022) in ArcGIS 10.8.1 (ESRI, 2006). 
The chosen set of environmental variables reflects knowledge on climatic conditions and habitat relevant to bat physiology, phenology, and life history (Rebelo et al. 2010, Razgour et al. 2011, Loeb and Winters 2013, Razgour 2015, Ancillotto et al. 2016). To trim the global environmental variables to the same extent (the state of California), we used the R package “raster” (Hijmans et al. 2022). We performed a correlation analysis on the raster layers using the “layerStats” function and removed variables with a Pearson’s coefficient > 0.7 (see Table 1 for final model variables). For future climate conditions, we selected three general circulation models (GCMs) based on previous species distribution models of temperate bat species (Razgour et al. 2019) [Hadley Centre Global Environment Model version 2 Earth Systems model (HadGEM3-GC31_LL; Webb, 2019), Institut Pierre-Simon Laplace Coupled Model 6th Assessment Low Resolution (IPSL-CM6A-LR; Boucher et al., 2018), and Max Planck Institute for Meteorology Earth System Model Low Resolution (MPI-ESM1-2-LR; Brovkin et al., 2019)] and two contrasting greenhouse concentration trajectories (Shared Socio-economic Pathways (SSPs): a steady decline pathway with CO2 concentrations of 360 ppmv (SSP1-2.6) and an increasing pathway with CO2 reaching around 2,000 ppmv (SSP5-8.5) (IPCC6). We modeled distribution for present conditions future (2061-2080) time periods. Because one aim of our study was to determine the consequences of changing climate, we changed only the climatic data when projecting future distributions, while keeping the other variables constant over time (elevation, slope). Species distribution modeling We generated distribution maps for total occurrences (maternity + hibernacula + transition, hereafter defined as “combined models”), maternity colonies , hibernacula, and transition roosts. To estimate the present and future habitat suitability for C. 
townsendii in California, we used the maximum entropy (MaxEnt) algorithm in the “dismo” R package (Hijmans et al. 2021) through the advanced computing resources provided by Texas A&M High Performance Research Computing. We chose MaxEnt to aid in the comparisons of state-wide and ecoregion-specific models as MaxEnt outperforms other approaches when using small datasets (as is the case in our ecoregion-specific models). We created 1,000 background points from random points in the environmental layers and performed a 5-fold cross validation approach, which divided the occurrence records into training (80%) and testing (20%) datasets. We assessed the performance of our models by measuring the area under the receiver operating characteristic curve (AUC; Hanley & McNeil, 1982), where values >0.5 indicate that the model is performing better than random, values 0.5-0.7 indicating poor performance, 0.7-0.9 moderate performance and values of 0.9-1 excellent performance (BCCVL, Hallgren et al., 2016). We also measured the maximum true skill statistic (TSS; Allouche, Tsoar, & Kadmon, 2006) to assess model performance. The maxTSS ranges from -1 to +1: values <0.4 indicate a model that performs no better than random, 0.4-0.55 indicates poor performance, 0.55-0.7 moderate performance, 0.7-0.85 good performance, and values >0.80 indicate excellent performance (Samadi et al. 2022). Final distribution maps were generated using all occurrence records for each region (rather than the training/testing subset), and the models were projected onto present and future climate conditions. Additionally, because the climatic conditions of the different ecoregions of California vary widely, we generated separate models for each ecoregion in an attempt to capture potential local effects of climate change. 
A general rule in species distribution modeling is that the occurrence points should be 10 times the number of predictors included in the model, meaning that we would need 50 occurrences in each ecoregion. One common way to overcome this limitation is through the ensemble of small models (ESMs) (Breiner et al. 2015, 2018; Virtanen et al. 2018; Scherrer et al. 2019; Song et al. 2019) included in the ecospat R package (references). For our ESMs we implemented MaxEnt modeling, and the final ensemble model was created by averaging individual bivariate models by weighted performance (AUC > 0.5). We also used null model significance testing to evaluate the performance of our ESMs (Raes and Ter Steege 2007). To perform null model testing we compared AUC scores from 100 null models using randomly generated presence locations equal to the number used in the developed distribution model. All ecoregion models outperformed the null expectation (p<0.002). Estimating range shifts For each of the three GCMs and each RCP scenario, we converted the probability distribution map into a binary map (0=unsuitable, 1=suitable) using the threshold that maximizes sensitivity and specificity (Liu et al. 2016). To create the final maps for each SSP scenario, we summed the three binary GCM layers and took a consensus approach, meaning climatically suitable areas were pixels where at least two of the three models predicted species presence (Araújo and New 2007, Piccioli Cappelli et al. 2021). We combined the future binary maps (fmap) and the present binary maps (pmap) following the formula fmap x 2 + pmap (from Huang et al., 2017) to produce maps with values of 0 (areas not suitable), 1 (areas that are suitable in the present but not the future), 2 (areas that are not suitable in the present but suitable in the future), and 3 (areas currently suitable that will remain suitable) using the raster calculator function in QGIS. 
We then calculated the total area of suitability, area of maintenance, area of expansion, and area of contraction for each binary model using the “BIOMOD_RangeSize” function in R package “biomod2” (Thuiller et al. 2021).
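The fmap x 2 + pmap combination described above can be sketched on toy rasters (the real maps were combined in the QGIS raster calculator; the values here are illustrative):

```python
def combine(pmap, fmap):
    """Combine present/future binary suitability rasters as fmap*2 + pmap.
    0 = never suitable, 1 = suitable now but lost, 2 = gained, 3 = stable."""
    return [[f * 2 + p for p, f in zip(prow, frow)]
            for prow, frow in zip(pmap, fmap)]

# Toy 2x3 binary rasters:
pmap = [[1, 1, 0], [0, 1, 0]]  # present suitability
fmap = [[1, 0, 0], [1, 1, 0]]  # future suitability
print(combine(pmap, fmap))  # [[3, 1, 0], [2, 3, 0]]
```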

  15. Estimating the Global Distribution of Field Size using Crowdsourcing

    • data.europa.eu
    • data.niaid.nih.gov
    unknown
    Updated Jul 3, 2025
    Cite
    Zenodo (2025). Estimating the Global Distribution of Field Size using Crowdsourcing [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-6651481?locale=da
    Explore at:
    unknown(622745)Available download formats
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    Zenodohttp://zenodo.org/
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    There is increasing evidence that smallholder farms contribute substantially to food production globally, yet spatially explicit data on agricultural field sizes are currently lacking. Automated field size delineation using remote sensing and the estimation of average farm size at subnational level using census data are two approaches that have been used, but both have limitations, e.g. limited geographical coverage by remote sensing or coarse spatial resolution when using census data. This paper demonstrates another approach to quantifying and mapping field size globally using crowdsourcing. A campaign was run in June 2017 where participants were asked to visually interpret very high resolution satellite imagery from Google Maps and Bing using the Geo-Wiki application. During the campaign, participants collected field size data for 130K unique locations around the globe. Using this sample, we have produced an improved global field size map (over the previous version) and estimated the percentage of different field sizes, ranging from very small to very large, in agricultural areas at global, continental and national levels. The results show that smallholder farms occupy no more than 40% of agricultural areas, which means that, potentially, there are many more smallholder farms than the current global estimate of 12% suggests. The global field size map and the crowdsourced data set are openly available and can be used for integrated assessment modelling, comparative studies of agricultural dynamics across different contexts, and contributions to SDG 2, among many others.
    The dataset (global field sizes.zip) contains:

      • map of dominant field sizes (dominant_field_size_categories.tif) and description of legend items (legend_items.txt)

      • table with all submissions by participants (those who completed more than 10 classifications) and table description

      • table with quality scores of all the participants and table description

      • table with estimated dominant field sizes at each location and table description

  16. Satellite images and road-reference data for AI-based road mapping in Equatorial Asia

    • data.niaid.nih.gov
    • dataone.org
    • +1more
    zip
    Updated Apr 4, 2024
    + more versions
    Cite
    Sean Sloan; Raiyan Talkhani; Tao Huang; Jayden Engert; William Laurance (2024). Satellite images and road-reference data for AI-based road mapping in Equatorial Asia [Dataset]. http://doi.org/10.5061/dryad.bvq83bkg7
    Explore at:
    zipAvailable download formats
    Dataset updated
    Apr 4, 2024
    Dataset provided by
    James Cook University
    Vancouver Island University
    Authors
    Sean Sloan; Raiyan Talkhani; Tao Huang; Jayden Engert; William Laurance
    License

    https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html

    Area covered
    Asia
    Description

    For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image ‘tiles’ and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).

    Methods

    1. INPUT 200 SATELLITE IMAGES

    The main dataset shared here was derived from a set of 200 input satellite images, also provided here. These 200 images are effectively ‘screenshots’ (i.e., reduced-resolution copies) of high-resolution true-colour satellite imagery (~0.5-1m pixel resolution) observed using the Elvis Elevation and Depth spatial data portal (https://elevation.fsdf.org.au/), which here is functionally equivalent to the more familiar Google Earth. Each of these original images was initially acquired at a resolution of 1920x886 pixels. Actual image resolution was coarser than the native high-resolution imagery. Visual inspection of these 200 images suggests a pixel resolution of ~5 meters, given the number of pixels required to span features of familiar scale, such as roads and roofs, as well as the ready discrimination of specific land uses, vegetation types, etc. These 200 images generally spanned either forest-agricultural mosaics or intact forest landscapes with limited human intervention. Sloan et al. (2023) present a map indicating the various areas of Equatorial Asia from which these images were sourced.
    IMAGE NAMING CONVENTION

    A common naming convention applies to satellite images’ file names: XX##.png where:

    XX – denotes the geographical region / major island of Equatorial Asia of the image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])

    ## – denotes the ith image for a given geographical region / major island amongst the original 200 images, e.g., bo1, bo2, bo3…

    2. INTERPRETING ROAD FEATURES IN THE IMAGES

    For each of the 200 input satellite images, its road features were visually interpreted and manually digitized to create a reference image dataset by which to train, validate, and test AI road-mapping models, as detailed in Sloan et al. (2023). The reference dataset of road features was digitized using the ‘pen tool’ in Adobe Photoshop. The pen’s ‘width’ was held constant over varying scales of observation (i.e., image ‘zoom’) during digitization. Consequently, at relatively small scales at least, digitized road features likely incorporate vegetation immediately bordering roads. The resultant binary (Road / Not Road) reference images were saved as PNG images with the same image dimensions as the original 200 images.

    3. IMAGE TILES AND REFERENCE DATA FOR MODEL DEVELOPMENT

    The 200 satellite images and the corresponding 200 road-reference images were both subdivided (aka ‘sliced’) into thousands of smaller image ‘tiles’ of 256x256 pixels each. Subsequent to image subdivision, subdivided images were also rotated by 90, 180, or 270 degrees to create additional, complementary image tiles for model development. In total, 8904 image tiles resulted from image subdivision and rotation. These 8904 image tiles are the main data of interest disseminated here. Each image tile entails the true-colour satellite image (256x256 pixels) and a corresponding binary road reference image (Road / Not Road).
    Of these 8904 image tiles, Sloan et al. (2023) randomly selected 80% for model training (during which a model ‘learns’ to recognize road features in the input imagery), 10% for model validation (during which model parameters are iteratively refined), and 10% for final model testing (during which the final accuracy of the output road map is assessed). Here we present these data in two folders accordingly:

    'Training’ – contains 7124 image tiles used for model training in Sloan et al. (2023), i.e., 80% of the original pool of 8904 image tiles. ‘Testing’– contains 1780 image tiles used for model validation and model testing in Sloan et al. (2023), i.e., 20% of the original pool of 8904 image tiles, being the combined set of image tiles for model validation and testing in Sloan et al. (2023).
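The 80/10/10 partition could be reproduced along these lines (a sketch; the seed and the helper function are illustrative, not code from Sloan et al. 2023):

```python
import random

def split_tiles(names, seed=42):
    """Shuffle tile names, then split 80/10/10 into train/validation/test."""
    rng = random.Random(seed)      # illustrative fixed seed for reproducibility
    shuffled = list(names)
    rng.shuffle(shuffled)
    n_hold = len(shuffled) // 10   # 10% each for validation and final testing
    train = shuffled[: len(shuffled) - 2 * n_hold]
    val = shuffled[len(train) : len(train) + n_hold]
    test = shuffled[len(train) + n_hold :]
    return train, val, test

train, val, test = split_tiles([f"tile_{i}" for i in range(8904)])
print(len(train), len(val), len(test))  # 7124 890 890
```

With 8904 tiles this yields 7124 training tiles and 1780 held-out tiles (890 validation + 890 testing), matching the folder counts above.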

    IMAGE TILE NAMING CONVENTION A common naming convention applies to image tiles’ directories and file names, in both the ‘training’ and ‘testing’ folders: XX##_A_B_C_DrotDDD where

    XX – denotes the geographical region / major island of Equatorial Asia of the original input 1920x886 pixel image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])

    ## – denotes the ith image for a given geographical region / major island amongst the original 200 images, e.g., bo1, bo2, bo3…

    A, B, C and D – can all be ignored. These values, which are one of 0, 256, 512, 768, 1024, 1280, 1536, and 1792, are effectively ‘pixel coordinates’ in the corresponding original 1920x886-pixel input image. They were recorded within image tiles’ sub-directory and file names merely to ensure that names were unique.

    rot – implies an image rotation. Not all image tiles are rotated, so ‘rot’ will appear only occasionally.

    DDD – denotes the degree of image-tile rotation, e.g., 90, 180, 270. Not all image tiles are rotated, so ‘DDD’ will appear only occasionally.

    Note that the designator ‘XX##’ is directly equivalent to the filenames of the corresponding 1920x886-pixel input satellite images, detailed above. Therefore, each image tile can be ‘matched’ with its parent full-scale satellite image. For example, in the ‘training’ folder, the subdirectory ‘Bo12_0_0_256_256’ indicates that the image tile therein (also named ‘Bo12_0_0_256_256’) was sourced from the full-scale image ‘Bo12.png’.
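A small helper (hypothetical, not shipped with the dataset) can recover the parent image and rotation from a tile name following the XX##_A_B_C_D(rotDDD) convention:

```python
import re

# Parse tile names such as "Bo12_0_0_256_256" or "su3_512_0_768_256rot90".
TILE_RE = re.compile(
    r"(?P<region>[a-z]{2})(?P<index>\d+)"             # XX##: region code + image number
    r"_(?P<a>\d+)_(?P<b>\d+)_(?P<c>\d+)_(?P<d>\d+)"   # pixel coordinates A_B_C_D
    r"(?:rot(?P<rot>\d+))?$",                         # optional rotation suffix
    re.IGNORECASE,
)

def parse_tile(name):
    """Return the parent full-scale image filename and the tile's rotation."""
    m = TILE_RE.match(name)
    if not m:
        raise ValueError(f"not a tile name: {name}")
    d = m.groupdict()
    return {
        "source_image": f"{d['region']}{d['index']}.png",  # parent 1920x886 image
        "rotation": int(d["rot"]) if d["rot"] else 0,
    }

print(parse_tile("Bo12_0_0_256_256"))        # {'source_image': 'Bo12.png', 'rotation': 0}
print(parse_tile("su3_512_0_768_256rot90"))  # {'source_image': 'su3.png', 'rotation': 90}
```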

  17. Population and single cell receptive field maps from mouse visual cortex

    • search.kg.ebrains.eu
    Cite
    Enny H. van Beest; Sreedeep Mukherjee; Lisa Kirchberger; Pieter R. Roelfsema; Matthew W. Self, Population and single cell receptive field maps from mouse visual cortex [Dataset]. http://doi.org/10.25493/VKV1-X9C
    Explore at:
    Authors
    Enny H. van Beest; Sreedeep Mukherjee; Lisa Kirchberger; Pieter R. Roelfsema; Matthew W. Self
    Description

    In this study we use population receptive-field (pRF) mapping techniques, which allow estimates to be made of aggregate receptive field sizes, to map the visual cortex of mice using wide-field calcium imaging. We include data relating the position of the pRF to its size across visual cortex. These maps show a region of mouse visual cortex in which pRFs are considerably smaller. We investigated the source of the smaller pRFs by recording receptive-fields from multi-units in the different layers of V1 using laminar electrodes and by using two-photon imaging to tile almost the entirety of V1 at the single-cell level. We also examine the relationship between RF position and size in the higher visual areas that surround V1 including LM, RL, AL, Am and PM.

  18. Virtual Random dataset, correlation matrix.

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Marco Piccirilli; Gianfranco Doretto; Donald Adjeroh (2023). Virtual Random dataset, correlation matrix. [Dataset]. http://doi.org/10.1371/journal.pone.0166749.t002
    Explore at:
    xlsAvailable download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Marco Piccirilli; Gianfranco Doretto; Donald Adjeroh
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    All subjects at θ = 0° ϕ = 0°.

  19. Vegetation - Whiskeytown National Recreation Area Vegetation Mapping Project...

    • data.ca.gov
    • data.cnra.ca.gov
    • +3more
    Updated Nov 18, 2025
    Cite
    California Department of Fish and Wildlife (2025). Vegetation - Whiskeytown National Recreation Area Vegetation Mapping Project [ds986] [Dataset]. https://data.ca.gov/dataset/vegetation-whiskeytown-national-recreation-area-vegetation-mapping-project-ds986
    Explore at:
    arcgis geoservices rest api, geojson, zip, kml, html, csvAvailable download formats
    Dataset updated
    Nov 18, 2025
    Dataset authored and provided by
    California Department of Fish and Wildlifehttps://wildlife.ca.gov/
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Whiskeytown
    Description

    This work was done by Humboldt State University Sponsored Programs. The vegetation map adheres to the 2008 National Vegetation Classification Standard (NVCS) and the Manual of California Vegetation (MCV) and covers around 42,000 acres of the NRA land. The Association-Alliance map offers maximum classification detail with 38 classes (31 plant associations, 5 alliances, 1 Disturbed class and a Barren class), with a minimum mapping unit of 0.5 hectares. Three source images were acquired by the Ikonos satellite on July 25, 2003, and processed with a spatial pattern recognition software package called Feature Analyst (Visual Learning Systems 2006) to identify groups of pixels associated with various vegetation categories. The previously developed vegetation classification at Whiskeytown National Recreation Area served as the basis for making the vegetation map. More information can be found in the project report, found here: https://nrm.dfg.ca.gov/FileHandler.ashx?DocumentID=16313

  20. Three types of data used for UFZ mapping in the GBA

    • scidb.cn
    Updated Apr 10, 2025
    Cite
    Jingru Hong (2025). Three types of data used for UFZ mapping in the GBA [Dataset]. http://doi.org/10.57760/sciencedb.23362
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Jingru Hong
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Three types of data are used to support UFZ mapping in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA): POI data, HSR images and road networks. They share the same projected coordinate system, WGS84 Web Mercator. The POI data were extracted from a Chinese online map platform, AutoNavi Map (https://lbs.amap.com/), in January 2024, totalling 1,927,094 records. Each record includes latitude, longitude, and category attribute information. The HSR images include Jilin-1 satellite images and Google Earth images, which were collected around January 2024 and cover all cities in the GBA. All HSR images have a spatial resolution of 1 meter and include red, blue and green bands. The road networks were downloaded from OpenStreetMap as vector data, which include hierarchical information arranged in descending order as: 'primary roads', 'secondary roads', 'tertiary roads', 'minor roads', and 'other roads'. HSR images and POI data provide visual and socio-economic information, which is crucial for UFZ mapping; manual features or deep features can be extracted from them. Road vectors are used to generate UFZ spatial units, which feature a multi-level structure. In practice, the road networks can be used to generate UFZ units of different scales.

Cite
Tommaso Battisti; Tommaso Battisti (2025). Classification of web-based Digital Humanities projects leveraging information visualisation techniques [Dataset]. http://doi.org/10.5281/zenodo.14192758

Classification of web-based Digital Humanities projects leveraging information visualisation techniques

Explore at:
3 scholarly articles cite this dataset
tsv, csvAvailable download formats
Dataset updated
Nov 10, 2025
Dataset provided by
Zenodo
Authors
Tommaso Battisti; Tommaso Battisti
License

Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description

This dataset contains a list of 186 Digital Humanities projects leveraging information visualisation techniques. Each project has been classified according to visualisation and interaction methods, narrativity and narrative solutions, domain, methods for the representation of uncertainty and interpretation, and the employment of critical and custom approaches to visually represent humanities data.

Classification schema: categories and columns

The project_id column contains unique internal identifiers assigned to each project. Meanwhile, the last_access column records the most recent date (in DD/MM/YYYY format) on which each project was reviewed based on the web address specified in the url column.
The remaining columns can be grouped into descriptive categories aimed at characterising projects according to different aspects:

Narrativity. It reports the presence of information visualisation techniques employed within narrative structures. Here, the term narrative encompasses both author-driven linear data stories and more user-directed experiences where the narrative sequence is determined by user exploration [1]. We define 2 columns to identify projects using visualisation techniques in narrative, or non-narrative sections. Both conditions can be true for projects employing visualisations in both contexts. Columns:

  • non_narrative (boolean)

  • narrative (boolean)

Domain. The humanities domain to which the project is related. We rely on [2] and the chapters of the first part of [3] to abstract a set of general domains. Column:

  • domain (categorical):

    • History and archaeology

    • Art and art history

    • Language and literature

    • Music and musicology

    • Multimedia and performing arts

    • Philosophy and religion

    • Other: both extra-list domains and cases of collections without a unique or specific thematic focus.

Visualisation of uncertainty and interpretation. Building upon the frameworks proposed by [4] and [5], a set of categories was identified, highlighting a distinction between precise and impressional communication of uncertainty. Precise methods explicitly represent quantifiable uncertainty such as missing, unknown, or uncertain data, precisely locating and categorising it using visual variables and positioning. Two sub-categories are interactive distinction, when uncertain data is not visually distinguishable from the rest of the data but can be dynamically isolated or included/excluded categorically through interaction techniques (usually filters); and visual distinction, when uncertainty visually “emerges” from the representation by means of dedicated glyphs and spatial or visual cues and variables. On the other hand, impressional methods communicate the constructed and situated nature of data [6], exposing the interpretative layer of the visualisation and indicating more abstract and unquantifiable uncertainty using graphical aids or interpretative metrics. Two sub-categories are: ambiguation, when the use of graphical expedients, like permeable glyph boundaries or broken lines, visually conveys the ambiguity of a phenomenon; and interpretative metrics, when expressive, non-scientific, or non-punctual metrics are used to build a visualisation. Column:

  • uncertainty_interpretation (categorical):

    • Interactive distinction

    • Visual distinction

    • Ambiguation

    • Interpretative metrics

Critical adaptation. We identify projects in which, with regards to at least a visualisation, the following criteria are fulfilled: 1) avoid repurposing of prepackaged, generic-use, or ready-made solutions; 2) being tailored and unique to reflect the peculiarities of the phenomena at hand; 3) avoid simplifications to embrace and depict complexity, promoting time-consuming visualisation-based inquiry. Column:

  • critical_adaptation (boolean)

Non-temporal visualisation techniques. We adopt and partially adapt the terminology and definitions from [7]. A column is defined for each type of visualisation and accounts for its presence within a project, also including stacked layouts and more complex variations. Columns and inclusion criteria:

  • plot (boolean): visual representations that map data points onto a two-dimensional coordinate system.

  • cluster_or_set (boolean): sets or cluster-based visualisations used to unveil possible inter-object similarities.

  • map (boolean): geographical maps used to show spatial insights. While we do not specify the variants of maps (e.g., pin maps, dot density maps, flow maps, etc.), we make an exception for maps where each data point is represented by another visualisation (e.g., a map where each data point is a pie chart) by accounting for the presence of both in their respective columns.

  • network (boolean): visual representations highlighting relational aspects through nodes connected by links or edges.

  • hierarchical_diagram (boolean): tree-like structures such as tree diagrams, radial trees, but also dendrograms. They differ from networks for their strictly hierarchical structure and absence of closed connection loops.

  • treemap (boolean): still hierarchical, but highlighting quantities expressed by means of area size. It also includes circle packing variants.

  • word_cloud (boolean): clouds of words, where each instance’s size is proportional to its frequency in a related context.

  • bars (boolean): includes bar charts, histograms, and variants. It coincides with “bar charts” in [7] but with a more generic term to refer to all bar-based visualisations.

  • line_chart (boolean): the display of information as sequential data points connected by straight-line segments.

  • area_chart (boolean): similar to a line chart but with a filled area below the segments. It also includes density plots.

  • pie_chart (boolean): circular graphs divided into slices which can also use multi-level solutions.

  • plot_3d (boolean): plots that use a third dimension to encode an additional variable.

  • proportional_area (boolean): representations used to compare values through area size. Typically, using circle- or square-like shapes.

  • other (boolean): it includes all other types of non-temporal visualisations that do not fall into the aforementioned categories.

Temporal visualisations and encodings. In addition to non-temporal visualisations, a group of techniques to encode temporality is considered in order to enable comparisons with [7]. Columns:

  • timeline (boolean): the display of a list of data points or spans in chronological order. They include timelines working either with a scale or simply displaying events in sequence. As in [7], we also include structured solutions resembling Gantt chart layouts.

  • temporal_dimension (boolean): to report when time is mapped to any dimension of a visualisation, with the exclusion of timelines. We use the term “dimension” and not “axis” as in [7] as more appropriate for radial layouts or more complex representational choices.

  • animation (boolean): temporality is perceived through an animation changing the visualisation according to time flow.

  • visual_variable (boolean): another visual encoding strategy is used to represent any temporality-related variable (e.g., colour).
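For instance, the boolean columns could be tallied with Python's standard csv module (a sketch: the in-memory sample below stands in for the shipped CSV/TSV table, whose filename is not assumed here):

```python
import csv
import io

def count_true(rows, columns):
    """Count rows whose given boolean columns hold a truthy value."""
    counts = {c: 0 for c in columns}
    for row in rows:
        for c in columns:
            if str(row.get(c, "")).strip().lower() in ("true", "1", "yes"):
                counts[c] += 1
    return counts

# In-memory stand-in for the dataset's table (two hypothetical projects):
sample = io.StringIO("timeline,animation\nTrue,False\nTrue,True\n")
print(count_true(csv.DictReader(sample), ["timeline", "animation"]))
# {'timeline': 2, 'animation': 1}
```

For the TSV copy of the dataset, pass `delimiter="\t"` to `csv.DictReader`.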

Interactions. A set of categories to assess afforded interactions based on the concept of user intent [8] and user-allowed perceptualisation data actions [9]. The following categories roughly match the manipulative subset of methods of the “how” an interaction is performed in the conception of [10]. Only interactions that affect the aspect of the visualisation or the visual representation of its data points, symbols, and glyphs are taken into consideration. Columns:

  • basic_selection (boolean): the demarcation of an element either for the duration of the interaction or more permanently until the occurrence of another selection.

  • advanced_selection (boolean): the demarcation involves both the selected element and connected elements within the visualisation or leads to brush and link effects across views. Basic selection is tacitly implied.

  • navigation (boolean): interactions that allow moving, zooming, panning, rotating, and scrolling the view but only when applied to the visualisation and not to the web page. It also includes “drill” interactions (to navigate through different levels or portions of data detail, often generating a new view that replaces or accompanies the original) and “expand” interactions generating new perspectives on data by expanding and collapsing nodes.

  • arrangement (boolean): the organisation of visualisation elements (symbols, glyphs, etc.) or multi-visualisation layouts spatially through drag and drop or
