This technical assistance brief provides information on mapping similarities and differences between NYTD and AFCARS 2020 from the NYTD perspective. Metadata-only record linking to the original dataset.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Author: Ann Wurst, NGS Teacher Consultant
Grade/Audience: grade 6, grade 7, grade 8, high school, AP Human Geography, post-secondary, professional development
Resource type: activity
Subject topic(s): cartography, maps, regional geography
Region: world
Standards: Texas TEKS (19) Social studies skills. The student applies critical-thinking skills to organize and use information acquired through established research methodologies from a variety of valid sources, including technology. The student is expected to: (A) analyze information by sequencing, categorizing, identifying cause-and-effect relationships, comparing, contrasting, finding the main idea, summarizing, making generalizations and predictions, and drawing inferences and conclusions; (B) create a product on a contemporary government issue or topic using critical methods of inquiry; (D) analyze and evaluate the validity of information, arguments, and counterarguments from primary and secondary sources for bias, propaganda, point of view, and frame of reference.
Objectives: Students will keep a list of the toolkit 'helpers' in their notebook and use the elements to process and apply information in various formats, such as short-answer responses, tickets out the door, and setting up writing samples for world geography, AP Human Geography, and other courses involving the study of geographic concepts.
Summary: Students can use these 'hooks' in their study of cartography and map making; they can be applied in every unit where map skills are needed and help further critical-thinking skills.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Constructing maps suitable for autonomous vehicles (AVs) is a critical research focus in autonomous driving and AI, extending cartography’s challenges. Building on cartographic principles, we propose the concept of a road scene map along with its modeling method that incorporates dynamic/static traffic elements with geometric/semantic features. Current limitations include unclear road scene graph relationships and a lack of integration among 3D traffic entity detection, map element detection, and scene relation extraction. To address these issues, we propose a method for constructing road scene maps: (1) A multi-task detection model identifies traffic entities and map elements directly in bird’s-eye-view (BEV) space, providing precise location, geometry, and attribute data; (2) A unified road scene relation pattern enables rule-based spatial/semantic relationship extraction. Experiments on nuScenes demonstrate improvements: the detection model achieves 1.5% and 1.9% accuracy gains in traffic entity and map element detection over state-of-the-art methods, while the relation extraction method covers broader perceptual ranges and more complex interactions. Results confirm the effective integration of 3D object detection, map element recognition, and scene relation extraction into a unified map. This integration delivers critical environmental information (locations, geometries, attributes, and spatial/semantic relationships) to AVs, significantly enhancing their perception and reasoning in dynamic road scenarios.
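To give a flavor of what rule-based spatial relation extraction could look like in practice, the following minimal Python sketch assigns an "on_lane" relation to a detected entity when its BEV center lies close to a lane centerline. The data layout, names, and distance threshold are illustrative assumptions, not details taken from the paper.

import numpy as np

def point_polyline_distance(point, polyline):
    # Minimum distance from a 2D point to a polyline given as a sequence of vertices.
    p = np.asarray(point, dtype=float)
    pts = np.asarray(polyline, dtype=float)
    dists = []
    for a, b in zip(pts[:-1], pts[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
        dists.append(np.linalg.norm(p - (a + t * ab)))
    return min(dists)

def extract_on_lane_relations(entities, lanes, max_dist=1.5):
    # entities: {entity_id: (x, y)} BEV centers; lanes: {lane_id: [(x, y), ...]} centerlines.
    relations = []
    for eid, center in entities.items():
        for lid, centerline in lanes.items():
            if point_polyline_distance(center, centerline) < max_dist:
                relations.append((eid, "on_lane", lid))
    return relations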
Mapping of research questions to research elements.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
UI Elements Learn O0nqe 95%mAP is a dataset for object detection tasks - it contains UI Elements FkF7 annotations for 4,752 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These data correspond to the set of problems used for the evaluation of the proposal "What Are You Gazing At? An Approach to Use Eye-tracking for Robotic Process Automation."
Each problem consists of a set of 10 screenshots with the same look and feel but different data values for the fields that can be entered or modified by the user. Each problem has its associated gaze fixation data, and in each problem there is a key UI element that primarily attracts the attention of the user.
The evaluation is based on a set of images which resemble realistic screenshots of activities in the administrative domain. More precisely, five different sets of screenshots (S) are generated, each with a different level of complexity, measured in terms of the number of UI elements per screenshot. The sets are:
S1 Mockup-based email view. Represents the activity of viewing an email to check if it contains an attachment. In this case, the key UI element that receives the attention is the attachment inside the email.
S2 Mockup-based CRM user details. Represents a user's detail viewing activity within a Client Relationship Management (CRM) platform. The key UI element is the checkbox that indicates whether the user has all their invoices paid.
S3 Real screenshot email view. Analogous to S1 but with real screenshots. It represents the activity of viewing an e-mail to check if it contains an attachment. In this case, the key UI element to which attention is paid is the attachment contained in the e-mail.
S4 Real screenshot CRM user details. Analogous to S2 but with real screenshots. It represents a user's detail viewing activity within a CRM platform. The key UI element is the checkbox indicating whether the user has all their invoices paid.
S5 Real screenshot CRM user details. Represents the split-screen display of two applications: on the left side, a PDF viewer showing a COVID vaccination certificate; on the right side, a human resources management system (a basic recreation of the real system, for privacy reasons) displaying the detail view of the employee to whom the certificate on the left corresponds. Because these screenshots contain two applications, they have two key UI elements: in the PDF viewer, the name of the certificate holder, and in the human resources management system, the name of the employee whose detail view is displayed. The activity being carried out is verifying that the received COVID certificate corresponds to that of an employee.
Two types of filters based on the gaze fixation data are applied to these sets of screenshots: pre-filtering and post-filtering, corresponding to applying the filtering before and after detecting UI components in the screenshots, respectively. The structure of the data packages is divided into two folders, input and output. The input folder is organized as follows:
input/
screenshots/: contains the screenshots. The sets of screenshots are easily identifiable: they are named following the pattern SX_screenshot_DDDD.jpeg, where X indicates which of the sets described in the previous list the screenshot belongs to, and DDDD is a unique identifier for each screenshot. Each group consists of 10 screenshots, 50 in total.
fixation.json: a JSON file that contains a key for each of the screenshots. For each screenshot, a "fixation_points" key stores information about the fixations that occurred on that screenshot. Here's an example:
"S5_screenshot_0050.jpeg": {
"fixation_points": {
"334.25#497.166666666667": {
"#events": 6,
"start_index": 33224,
"ms_start": 553962.1467,
"ms_end": 554061.9899,
"duration": 99.8432000001194,
"imotions_dispersion": 0.300325967868111,
"last_index": 33229,
"dispersion": 14.044275227531914
},
"1258.80769230769#507.576923076923": {
"#events": 13,
"start_index": 33234,
"ms_start": 554128.5427,
"ms_end": 554345.3595,
...
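As a reading aid, here is a minimal Python sketch that loads fixation.json and unpacks the fixation points; the dictionary keys encode each fixation centroid as "x#y", as shown in the example above. The path is relative to the input folder described above.

import json

with open("input/fixation.json") as f:
    fixations = json.load(f)

for screenshot, info in fixations.items():
    for key, fp in info["fixation_points"].items():
        x, y = (float(v) for v in key.split("#"))  # key encodes the fixation centroid as "x#y"
        print(screenshot, x, y, fp["#events"], fp["duration"])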
The output folder is organized into three subfolders: the first contains the information of the non-filtered screenshots (i.e., without any filtering or processing applied), and the next two contain the information resulting from pre-filtering and post-filtering.
output/
non-filter/
borders/: screenshots with highlighted borders of all UI components detected in them.
components_json/: a collection of JSON files with the same names as the screenshots, each containing an "img_shape" key with a list of the screen resolution and the number of layers the image has, e.g., [1080, 1920, 3], and a "compos" key with a list of all UI components representing the Screen Object Model.
pre-filter/ and post-filter/
borders/: screenshots with the borders of the relevant UI components. In the case of pre-filtering, the detection of components is only performed on the parts of the screenshot that have received attention. In post-filtering, the complete screenshot is shown, with only the borders of the relevant UI components highlighted.
components_json/: a collection of JSON files with the same names as the screenshots, containing the following keys:
"img_shape": A list representing the screen resolution and the number of layers in the image, e.g., [1080, 1920, 3].
"compos": A list of all UI components representing the Screen Object Model (SOM). During post-filtering, each UI component is augmented with an additional property called "relevant." If this property is set to true, it indicates that the respective UI component has received attention.
(pre)/(post)filter_attention_maps/: represent the attention maps. In the case of pre-filtering, any surface of the screen that has not received attention is shown in black. In the case of post-filtering, the areas of attention are shown as red circles, and the UI components whose area intersects the areas of attention by more than 25% are shown in yellow.
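As an illustration, a minimal Python sketch for consuming a post-filtering components_json file with the layout described above; the file name is a placeholder following the SX_screenshot_DDDD naming pattern.

import json

with open("output/post-filter/components_json/S1_screenshot_0001.json") as f:
    som = json.load(f)

height, width, layers = som["img_shape"]
# Keep only the UI components flagged as having received attention.
relevant = [c for c in som["compos"] if c.get("relevant", False)]
print(f"{len(relevant)} of {len(som['compos'])} components are relevant ({width}x{height} screenshot)")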
In conclusion, the described data package consists of sets of screenshots, together with pre-filtering and post-filtering based on gaze fixation data, enabling the identification of relevant UI components. The data packages comprise input and output folders, where the output folder offers processed screenshots, UI component information, and attention maps. This resource provides valuable insights into user attention and interaction with UI elements in different types of scenarios.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Niobe Aphrodite Map Area covers over 25% of the surface of Venus and extends from 57° N to 57° S and from 60° E to 180° E. The structural-element map presented here is derived from the 1:10M-scale geologic maps of Niobe Planitia (U.S. Geological Survey I-2467) and Aphrodite Terra (U.S. Geological Survey I-2476). Both maps are in various stages of review and revision overseen by the U.S. Geological Survey on behalf of NASA.
Here we present a Geographic Information System (GIS) that contains the different structural elements of the area (deformation structures and lithodemic units) and can be used to analyze relationships between and among suites of structural elements across this large portion of Venus' surface.
Base images and data on which the structural-element determination is based can be accessed and downloaded directly in GIS-ready formats through the USGS Map a Planet website (https://astrogeology.usgs.gov/tools/map-a-planet-2).
Additional file 2: Supplementary tables and associated data (Data S1) of mapping responsive genomic elements to heat stress in a maize diversity panel. Tables S1-S11 include additional associated results, and Data S1 represents MOA-seq TF footprints of B73 detected in either control or heat conditions.
The project "Basic Geological Map of Sardinia at the 1:25,000 scale" aims to create a geological map that is homogeneous and extends to the whole island, adapted to the planning objectives of the Regional Landscape Plan (PPR) and in accordance with the indications of the Geological Survey of Italy. The geology is represented at 1:25,000, a compromise scale between the unevenness of the basic data and the need for a unique and homogeneous cartography for the entire island (58 sheets at the 1:50,000 scale, comprising 197 sections at the 1:25,000 scale).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here.
Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
Python tools to read, generate, and visualize the dataset,
3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank the following interns for their exceptional contributions to our work.
Benjamin Sertolli: Major contributions to our DevKit during his master thesis
Niels Maier: Measurement campaign for data collection and data preparation
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
The Dataset directory contains the training, validation, and test set definitions (train.json, val.json, test.json) used in our publications. The respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
Map items are stored as lists of items in JSON format. In particular, we provide:
traffic signs,
traffic lights,
pole-like objects,
construction site locations,
construction site obstacles (point-like such as cones, and line-like such as fences),
line-shaped markings (solid, dashed, etc.),
polygon-shaped markings (arrows, stop lines, symbols, etc.),
lanes (ordinary and temporary),
relations between elements (only for construction sites, e.g., sign to lane association).
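Since each item category is provided as a JSON list, loading one is straightforward. In the sketch below, the directory and file names are hypothetical placeholders for illustration; the actual layout is defined by the dataset and DevKit.

import json

# Hypothetical path; the actual directory and file names follow the dataset layout.
with open(r"E:\3DHD_CityScenes\HD_Map\traffic_signs.json") as jf:
    traffic_signs = json.load(jf)  # a list of traffic sign items
print(len(traffic_signs), "traffic signs loaded")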
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definition using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shape file" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information in a different format used in our Python API.
The high-density point cloud tiles (directory HD_PointCloud_Tiles) are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays: first all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
x-coordinates: 4 byte integer
y-coordinates: 4 byte integer
z-coordinates: 4 byte integer
intensity of reflected beams: 2 byte unsigned integer
ground classification flag: 1 byte unsigned integer
After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk  # for visualization, as mentioned above

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
# The original snippet is truncated at this point; the dtypes below are an
# assumption reconstructed from the encoding described above.
type_list = [np.int32, np.int32, np.int32, np.uint16, np.uint8]
with open(file_path, "rb") as f:
    num_points = int(np.fromfile(f, dtype=np.int32, count=1)[0])  # first 4 bytes
    for key, dtype in zip(key_list, type_list):
        pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Author: J. Cain, educator, Minnesota Alliance for Geographic Education
Grade/Audience: grade 4
Resource type: lesson
Subject topic(s): maps
Region: United States
Standards: Minnesota Social Studies Standards
Standard 2: People use geographic representations and geospatial technologies to acquire, process, and report information within a spatial context.
Objectives: Students will be able to:
Explore a variety of maps.
Become acquainted with the elements of maps referred to as TODALS:
Title
Orientation
Date
Author
Legend (Key)
Scale
Locate and interpret TODALS from a variety of maps.
Compare and contrast elements of given maps while looking for bias.
Reflect on the importance of knowing TODALS when understanding and interpreting maps.
Summary: Basic mapping terminology is essential for understanding and interpreting various types of maps. Knowing where to find these essential elements and interpreting their meaning are critical to the development of a 4th grader's knowledge of geography.
A Concept Map resource is a statement of relationships from one set of concepts to one or more other concepts, either concepts in code systems, data elements/data element concepts, or classes in class models.
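For orientation, a minimal FHIR (R4) ConceptMap instance mapping a single code between two code systems might look like the following; the URIs and codes are illustrative placeholders.

{
  "resourceType": "ConceptMap",
  "status": "draft",
  "group": [{
    "source": "http://example.org/fhir/CodeSystem/source-codes",
    "target": "http://example.org/fhir/CodeSystem/target-codes",
    "element": [{
      "code": "A1",
      "target": [{ "code": "B7", "equivalence": "equivalent" }]
    }]
  }]
}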
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
When measuring chemical information in biological fluids, challenges of cross-reactivity arise, especially in sensing applications where no biological recognition elements exist. An understanding of the cross-reactions involved in these complex matrices is necessary to guide the design of appropriate sensing systems. This work presents a methodology for investigating cross-reactions in complex fluids. First, a systematic screening of matrix components is demonstrated in buffer-based solutions. Second, to account for the effect of the simultaneous presence of these species in complex samples, the responses of buffer-based simulated mixtures of these species were characterized using an arrayed sensing system. We demonstrate that the sensor array, consisting of electrochemical sensors with varying input parameters, generated differential responses that provide synergistic information about the sample. By mapping the sensing-array response onto multidimensional heat maps, characteristic signatures were compared across sensors in the array and across different matrices. Lastly, the arrayed sensing system was applied to complex biological samples to discern and match characteristic signatures between the simulated mixtures and the complex sample responses. As an example, this methodology was applied to screen interfering species relevant to schizophrenia management. Specifically, blood serum measurement of the antipsychotic clozapine and antioxidant species can provide useful information regarding therapeutic efficacy and psychiatric symptoms. This work proposes an investigational tool that can guide multi-analyte sensor design, chemometric modeling, and biomarker discovery.
Two active landslides at and near the retreating front of Barry Glacier at the head of Barry Arm Fjord in southern Alaska could generate tsunamis if they failed rapidly and entered the water of the fjord. Landslide A, at the front of the glacier, is the largest, with a total volume estimated at 455 M m3. Historical photographs from Barry Arm indicate that Landslide A initiated in the mid-twentieth century, but there was a large pulse of movement between 2010 and 2017, when Barry Glacier thinned and retreated from about half of the toe of Landslide A. Interferometric synthetic aperture radar (InSAR) investigations of the area between May and November 2020 revealed a second, smaller landslide (referred to as Landslide B) on the south-facing slope about 2 km up the glacier from Landslide A. Landslide-generated tsunami modeling in 2020 used a worst-case scenario in which the entire mass of Landslide A (about 455 M m3) would rapidly enter the water. The use of multiple landslide volume scenarios in future tsunami modeling efforts would be beneficial in evaluating tsunami risk to communities in the Prince William Sound region.
Herein, we present a map of landslide structures and kinematic elements within, and adjacent to, Landslides A and B. This map could form at least a partial basis for discriminating multiple volume scenarios (for example, a separate scenario for each kinematic element). We mapped landslide structures and kinematic elements at a scale of 1:1,000 using high-resolution lidar data acquired by the Alaska Division of Geological and Geophysical Surveys (DGGS) on June 26, 2020, and high-resolution bathymetric data acquired by the National Oceanic and Atmospheric Administration (NOAA) in August 2020. The predominant structures in both landslides are uphill- and downhill-facing normal fault scarps. Uphill-facing scarps dominate in areas where downslope extension from sliding has been relatively low; downhill-facing scarps dominate in areas where downslope extension from sliding has been relatively high. Strike-slip and oblique-slip faults form the boundaries of major kinematic elements. Four major kinematic elements, herein named the Kite, the Prow, the Core, and the Tail, are within, or adjacent to, Landslide A. One major kinematic element, herein named the Wedge, forms Landslide B. Kinematic element boundaries are a result of cumulative, differential patterns and amounts of movement that began at the inception of the landslides. Elements and/or their boundaries may change location as the landslides continue to evolve, and kinematic elements mapped in 2020 may or may not reflect patterns of historical short-term, episodic movement, or patterns of movement in the future. We were not able to field check our mapping in 2020 because of travel restrictions due to the COVID-19 pandemic; we hope to field check the mapping in the summer of 2021.
In this data release, we include GIS files for the structural and kinematic map; metadata files for mapped structural features; and portable document files (PDFs) of a location map and of the structural and kinematic map at a scale of 1:5,000. Lidar and bathymetric data used to map landslide structures will be released by DGGS and NOAA in 2021.
Geological map at the 1:25,000 scale from the CARG project (Geological Cartography 1:50,000). The layers present in this service are only those relating to geological units, tectonic-structural elements, and stratimetric information. For simplicity of representation, units are grouped by age; the other elements use standard mapping symbols and are visible at different scales.
Linear elements of the 2008 Land Use Map. The linear entities represent hydrographic and road elements less than 25 m wide. The layer was created following the update of the land use map produced in 2003.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
FlowMapper.org is a web-based framework for automated production and design of origin-destination flow maps. FlowMapper has four major features that contribute to the advancement of existing flow mapping systems. First, users can upload and process their own data to design and share customized flow maps. The ability to save data, cartographic design and map elements in a project file allows users to easily share their data and/or cartographic design with others. Second, users can generate customized flow symbols to support different flow map reading tasks such as comparing flow magnitudes and directions and identifying flow and location clusters that are strongly connected with each other. Third, FlowMapper supports supplementary layers such as node symbols, choropleth, and base maps to contextualize flow patterns with location references and characteristics. Finally, the web-based architecture of FlowMapper supports server-side computational capabilities to process and normalize large flow data and reveal natural patterns of flows.
Geo-referenced vector database containing geomorphological and anthropogenic elements in linear form, collected as part of the national geological mapping project (CARG) at the 1:25,000 acquisition scale and reviewed at the regional level. The geographical area covered comprises the 1:50,000-scale sheets in which the regional territory falls.
Geomorphological elements of the area and elements of gravitational tectonics. PURPOSE: To describe the geological-structural setting of the region.