100+ datasets found
  1. Road generalization data & code

    • figshare.com
    application/x-rar
    Updated Mar 5, 2024
    Cite
    Name No (2024). Road generalization data & code [Dataset]. http://doi.org/10.6084/m9.figshare.25330414.v2
    Explore at:
    Available download formats: application/x-rar
    Dataset updated
    Mar 5, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Name No
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    As a complex decision-making process, road network simplification involves stroke recognition, relative preservation of mesh density, and abstraction of the network structure. Such multi-factor decision and scaling operations have traditionally been handled by rule-based methods. Constructing and adjusting these rules involves many hand-set parameters and conditions, which ties the generalized results closely to the cartographer's experience and habits. Moreover, existing methods tend to treat individual structures (strokes, meshes, graph networks, etc.) separately in different algorithms, lacking a solution that combines the advantages of these pattern-structure treatments. To address these problems, this study designs a simplification method using the Mesh-Line Structure Unit (MLSU) to account for polyline and polygon properties simultaneously. A graph-based deep learning network is built to realize road selection decisions in a data-driven way. The MLSU model extracts 22 kinds of polyline features, 5 kinds of polygon features, and 3 interactive features. To make generalization decisions, a model based on a graph convolutional network is constructed and trained with real data from parts of the southern United States, realizing automatic generalization of the road network. The experimental results show that the proposed method effectively automates the generalization of road data, and the simplified results perform better than other methods in terms of visual representation, quantity maintenance, and average connectivity. This study also demonstrates the advantages and potential of graph deep learning techniques for map generalization problems.

  2. Large Dataset of Generalization Patterns in the Number Game

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Aug 10, 2018
    Cite
    Eric J. Bigelow; Steven T. Piantadosi (2018). Large Dataset of Generalization Patterns in the Number Game [Dataset]. http://doi.org/10.7910/DVN/A8ZWLF
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 10, 2018
    Dataset provided by
    Harvard Dataverse
    Authors
    Eric J. Bigelow; Steven T. Piantadosi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    272,700 two-alternative forced choice responses in a simple numerical task modeled after Tenenbaum (1999, 2000), collected from 606 Amazon Mechanical Turk workers. Subjects were shown sets of 1 to 4 numbers from the range 1 to 100 (e.g. {12, 16}), and asked what other numbers were likely to belong to that set (e.g. 1, 5, 2, 98). Their generalization patterns reflect both rule-like (e.g. “even numbers,” “powers of two”) and distance-based (e.g. numbers near 50) generalization. This data set is available for further analysis of these simple and intuitive inferences, for the development of hands-on modeling instruction, and for attempts to understand how probability and rules interact in human cognition.
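
    The contrast between rule-like and distance-based generalization described above can be sketched as a toy hypothesis comparison. This is a minimal illustration with a hypothetical hypothesis space and similarity width, not the authors' analysis code:

```python
# Toy illustration (not the authors' analysis code) of the two kinds of
# generalization the dataset exhibits: rule-like hypotheses over 1..100
# versus distance-based similarity. Hypothesis space and width are
# hypothetical.
def rule_hypotheses():
    return {
        "even": {n for n in range(1, 101) if n % 2 == 0},
        "powers_of_two": {2 ** k for k in range(1, 7)},  # 2 .. 64
        "multiples_of_10": {n for n in range(1, 101) if n % 10 == 0},
    }

def consistent(hyps, observed):
    """Names of rule hypotheses whose extension contains every observed number."""
    return {name for name, ext in hyps.items() if set(observed) <= ext}

def near(probe, observed, width=10):
    """Distance-based generalization: is the probe close to any observed number?"""
    return any(abs(probe - x) <= width for x in observed)

# Shown {12, 16}: only the "even" rule survives (12 is not a power of two),
# while distance-based generalization favors probes near 12 and 16.
observed = [12, 16]
```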

  3. Data from: Area aggregation in map generalisation by mixed-integer programming

    • tandf.figshare.com
    pdf
    Updated May 30, 2023
    Cite
    Jan-Henrik Haunert; Alexander Wolff (2023). Area aggregation in map generalisation by mixed-integer programming [Dataset]. http://doi.org/10.6084/m9.figshare.825637.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Jan-Henrik Haunert; Alexander Wolff
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Topographic databases normally contain areas of different land cover classes, commonly defining a planar partition, that is, gaps and overlaps are not allowed. When reducing the scale of such a database, some areas become too small for representation and need to be aggregated. This unintentionally but unavoidably results in changes of classes. In this article we present an optimisation method for the aggregation problem. This method aims to minimise changes of classes and to create compact shapes, subject to hard constraints ensuring aggregates of sufficient size for the target scale. To quantify class changes we apply a semantic distance measure. We give a graph theoretical problem formulation and prove that the problem is NP-hard, meaning that we cannot hope to find an efficient algorithm. Instead, we present a solution by mixed-integer programming that can be used to optimally solve small instances with existing optimisation software. In order to process large datasets, we introduce specialised heuristics that allow certain variables to be eliminated in advance and a problem instance to be decomposed into independent sub-instances. We tested our method for a dataset of the official German topographic database ATKIS with input scale 1:50,000 and output scale 1:250,000. For small instances, we compare results of this approach with optimal solutions that were obtained without heuristics. We compare results for large instances with those of an existing iterative algorithm and an alternative optimisation approach by simulated annealing. These tests allow us to conclude that, with the defined heuristics, our optimisation method yields high-quality results for large datasets in modest time.
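
    The objective and hard constraint can be illustrated by brute force on a toy instance. Sizes, classes, and the semantic distance below are hypothetical; the paper itself solves realistic instances with mixed-integer programming and heuristics:

```python
from itertools import product

# Toy aggregation instance: four areas along a path (area i is adjacent to
# area i+1). Numbers are hypothetical, chosen only to illustrate the
# objective (minimize size-weighted class change) and the hard minimum-size
# constraint.
sizes = [30, 5, 40, 25]
classes = ["forest", "water", "forest", "urban"]
MIN_SIZE = 20  # minimum aggregate size at the target scale

def dist(a, b):
    """Semantic distance between land-cover classes (symmetric)."""
    if a == b:
        return 0
    return 1 if {a, b} == {"forest", "water"} else 2

def partitions_of_path(n):
    """All partitions of a path 0..n-1 into contiguous (connected) groups."""
    for cuts in product([0, 1], repeat=n - 1):
        groups, start = [], 0
        for i, cut in enumerate(cuts, 1):
            if cut:
                groups.append(range(start, i))
                start = i
        groups.append(range(start, n))
        yield groups

def cost(groups):
    """Size-weighted class change with the best target class per group,
    or None if some group violates the minimum-size constraint."""
    total = 0
    for g in groups:
        if sum(sizes[i] for i in g) < MIN_SIZE:
            return None
        total += min(sum(sizes[i] * dist(classes[i], c) for i in g)
                     for c in set(classes))
    return total

best = min((c, [list(g) for g in gs])
           for gs in partitions_of_path(len(sizes))
           if (c := cost(gs)) is not None)
# Optimal cost is 5: keep areas 0 and 3 as-is and absorb the small water
# area 1 into a forest aggregate with area 2.
```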

  4. Data from: Neural tuning functions underlie both generalization and interference

    • search.dataone.org
    • datadryad.org
    Updated Apr 2, 2025
    Cite
    Ian S. Howard; David W. Franklin (2025). Neural tuning functions underlie both generalization and interference [Dataset]. http://doi.org/10.5061/dryad.gr487
    Explore at:
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Ian S. Howard; David W. Franklin
    Time period covered
    Jan 1, 2016
    Description

    In sports, the role of backswing is considered critical for generating a good shot, even though it plays no direct role in hitting the ball. We recently demonstrated the scientific basis of this phenomenon by showing that immediate past movement affects the learning and recall of motor memories. This effect occurred regardless of whether the past contextual movement was performed actively, passively, or shown visually. In force field studies, it has been shown that motor memories generalize locally and that the level of compensation decays as a function of movement angle away from the trained movement. Here we examine if the contextual effect of past movement exhibits similar patterns of generalization and whether it can explain behavior seen in interference studies. Using a single force-field learning task, the directional tuning curves of both the prior contextual movement and the subsequent force field adaptive movements were measured. The adaptation movement direction showed strong ...

  5. Source data selection for out-of-domain generalization - Dataset - LDM

    • service.tib.eu
    Updated Dec 3, 2024
    Cite
    (2024). Source data selection for out-of-domain generalization - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/source-data-selection-for-out-of-domain-generalization
    Explore at:
    Dataset updated
    Dec 3, 2024
    Description

    Source data selection for out-of-domain generalization

  6. Data from: Constraining generalisation in language learning: a rational learning approach

    • datacatalogue.cessda.eu
    Updated May 27, 2025
    Cite
    Wonnacott, E (2025). Constraining generalisation in language learning: a rational learning approach [Dataset]. http://doi.org/10.5255/UKDA-SN-851081
    Explore at:
    Dataset updated
    May 27, 2025
    Dataset provided by
    University of Warwick
    Authors
    Wonnacott, E
    Time period covered
    Jan 16, 2012 - May 15, 2013
    Area covered
    United Kingdom
    Variables measured
    Individual
    Measurement technique
    These are experimentally collected data. Full methods can be seen in publications - please contact E.A.Wonnacott@warwick.ac.uk for more details.
    Description

    Successful language acquisition relies on generalisation, yet many 'sensible' generalisations are actually ungrammatical (e.g. 'John carried me teddy.'). This grant explores how language learners balance generalisation and exception learning using the Artificial Language Learning (ALL) methodology, i.e. experiments in which participants learn, and are tested on, novel experimenter-designed languages. Earlier research (Wonnacott et al. 2008) had used only adults, an important limitation given evidence for maturational differences in language learning (Newport, 1990). This grant therefore consists of a series of ALL experiments conducted with both child and adult participants, designed to address the following questions: (i) Do children, like adults in previous studies, use distributional statistics (e.g. word and construction frequency) to determine which words should generalise and which are exceptions? (ii) How do learners weigh such information against other sources of information such as semantics (e.g. whether words with similar meanings tend to behave similarly)? (iii) Do these processes differ between adults and children? (iv) Are there any factors that predict the extent of generalisation/exception learning for individual learners (e.g. working memory)? The long-term goal is to shed light on why language learning is generally more successful when it begins in childhood, and on the loci of individual differences in learning.

  7. Data from: Automatic Adaptive Signature Generalization in R

    • iro.uiowa.edu
    • data.mendeley.com
    Updated Aug 12, 2021
    + more versions
    Cite
    Matthew Dannenberg; Christopher R Hakkenberg; Conghe Song (2021). Automatic Adaptive Signature Generalization in R [Dataset]. https://iro.uiowa.edu/esploro/outputs/dataset/Automatic-Adaptive-Signature-Generalization-in-R/9983983647802771
    Explore at:
    Dataset updated
    Aug 12, 2021
    Dataset provided by
    Mendeley Ltd.
    Authors
    Matthew Dannenberg; Christopher R Hakkenberg; Conghe Song
    Time period covered
    2017
    Description

    The automatic adaptive signature generalization (AASG) algorithm overcomes many of the limitations associated with classification of multitemporal imagery. By locating stable sites between two images and using them to adapt class spectral signatures from a high-quality reference classification to a new image, AASG mitigates the impacts of radiometric and phenological differences between images and ensures that class definitions remain consistent between the two classifications. Here, I provide source code (in the R programming environment), as well as a comprehensive user guide, for the AASG algorithm. See Dannenberg, Hakkenberg and Song (2016) for details of the algorithm.

  8. Data from: BERTs of a feather do not generalize together: Large variability...

    • zenodo.org
    zip
    Updated Jan 11, 2021
    Cite
    R. Thomas McCoy; Junghyun Min; Tal Linzen; R. Thomas McCoy; Junghyun Min; Tal Linzen (2021). BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance [Dataset]. http://doi.org/10.5281/zenodo.4110593
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 11, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    R. Thomas McCoy; Junghyun Min; Tal Linzen; R. Thomas McCoy; Junghyun Min; Tal Linzen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Zenodo repository contains 100 copies of the model BERT fine-tuned on the MNLI dataset, created for the paper "BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance." Please see the project GitHub page for more details about using these models and how to cite any such usage: https://github.com/tommccoy1/hans/tree/master/berts_of_a_feather

  9. Data to support the MPhil Thesis: Towards an understanding of generalisation in deep learning: an analysis of the transformation of information in convolutional neural networks

    • eprints.soton.ac.uk
    Updated Jun 13, 2025
    Cite
    Belcher, Dominic (2025). Data to support the MPhil Thesis: Towards an understanding of generalisation in deep learning: an analysis of the transformation of information in convolutional neural networks [Dataset]. http://doi.org/10.5258/SOTON/D3540
    Explore at:
    Dataset updated
    Jun 13, 2025
    Dataset provided by
    University of Southampton
    Authors
    Belcher, Dominic
    Description

    Full results of the simulations detailed in the thesis: Belcher, D 2025, 'Towards an understanding of generalisation in deep learning: an analysis of the transformation of information in convolutional neural networks', Master of Philosophy, University of Southampton, Southampton, UK. All results are in jsonlines format. No specialist software is required to read this data; any software for parsing JSON or jsonlines data is sufficient.
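
    Since the results are plain jsonlines, they can be read with nothing but the standard library. A minimal sketch (the record fields in the sample are hypothetical, not taken from the dataset):

```python
import json
from io import StringIO

# Minimal jsonlines reader using only the standard library. The field
# names in the sample below are hypothetical.
def read_jsonl(fh):
    """Yield one parsed JSON object per non-empty line."""
    for line in fh:
        line = line.strip()
        if line:
            yield json.loads(line)

sample = StringIO('{"epoch": 1, "acc": 0.72}\n\n{"epoch": 2, "acc": 0.81}\n')
records = list(read_jsonl(sample))
```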

  10. Data and code underlying the publication: Neural oscillators for generalization of physics-informed machine learning

    • data.4tu.nl
    zip
    Updated Jun 5, 2024
    Cite
    Taniya Kapoor (2024). Data and code underlying the publication: Neural oscillators for generalization of physics-informed machine learning [Dataset]. http://doi.org/10.4121/da6fadb7-843b-4c86-a231-884d88e64868.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 5, 2024
    Dataset provided by
    4TU.ResearchData
    Authors
    Taniya Kapoor
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ### Research Objective


    The primary objective of this research is to enhance the generalization of physics-informed machine learning (PIML) models by integrating them with neural oscillators. The goal is to improve the accuracy of these models in predicting solutions to partial differential equations (PDEs) beyond the training domain.


    ### Type of Research


    This research is applied and experimental. It focuses on developing and validating a new methodological approach to enhance the generalization capabilities of PIML models through a series of numerical experiments on various nonlinear and high-order PDEs.


    ### Method of Data Collection


    The data used to validate the numerical experiments are closed-form analytic solutions, and a physics-informed method is used to simulate the dataset; both are explicitly mentioned in the Python notebooks. The experiments are conducted on time-dependent nonlinear PDEs, including the viscous Burgers equation, Allen-Cahn equation, nonlinear Schrödinger equation, Euler-Bernoulli beam equation, and a 2D Kovasznay flow.


    ### Type of Data/codes


    1. All implementations are provided as Jupyter notebooks (.ipynb) or Python scripts (.py).

    2. The .mat files contain analytical solutions generated using PINN simulation.

    3. The .jpeg and .pdf files are figures used in the main manuscript.


  11. Data from: Stacked generalization of random forest and decision tree techniques for library data visualization

    • openacessjournal.primarydomain.in
    Updated May 20, 2022
    Cite
    Open access journals (2022). Stacked generalization of random forest and decision tree techniques for library data visualization [Dataset]. https://www.openacessjournal.primarydomain.in/abstract/1069
    Explore at:
    Dataset updated
    May 20, 2022
    Dataset authored and provided by
    Open access journals
    Description

    The huge amount of library data stored in the modern research and statistics centers of organizations grows on a daily basis. As these databases grow exponentially in size over time, it becomes exceptionally difficult to understand the behavior of the data and to interpret the relationships that exist between attributes. This exponential growth of data poses new organizational c…

  12. Data underpinning "Generalization Capabilities of Machine Learning-based PDM Equalization"

    • rdr.ucl.ac.uk
    xlsx
    Updated Oct 31, 2023
    Cite
    Sam Lennard; Filipe Marques Ferreira; Fabio Aparecido Barbosa (2023). Data underpinning "Generalization Capabilities of Machine Learning-based PDM Equalization" [Dataset]. http://doi.org/10.5522/04/24461179.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Oct 31, 2023
    Dataset provided by
    University College London
    Authors
    Sam Lennard; Filipe Marques Ferreira; Fabio Aparecido Barbosa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    "graph_data.xlsx" is an Excel spreadsheet containing the graph data. There are two sheets: "Nonlinearities", which contains the data in Fig 2a, and "Dispersion", which contains the data in Fig 2b. In each sheet, the first column holds the X values and the further columns the Y values.

  13. Data from: Constructive causal generalization

    • osf.io
    Updated Oct 15, 2023
    Cite
    Bonan Zhao; Christopher Lucas; Neil Bramley (2023). Constructive causal generalization [Dataset]. https://osf.io/9awhj
    Explore at:
    Dataset updated
    Oct 15, 2023
    Dataset provided by
    Center for Open Science (https://cos.io/)
    Authors
    Bonan Zhao; Christopher Lucas; Neil Bramley
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    No description was included in this Dataset collected from the OSF

  14. Processed data for the paper "Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data"

    • explore.openaire.eu
    Updated Oct 2, 2022
    Cite
    Yaoqing Yang (2022). Processed data for the paper "Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data" [Dataset]. http://doi.org/10.5281/zenodo.7134118
    Explore at:
    Dataset updated
    Oct 2, 2022
    Authors
    Yaoqing Yang
    Description

    This is the data used to reproduce the results from "Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data".

  15. Paderborn Domain Generalization Version

    • researchdata.ntu.edu.sg
    Updated Oct 7, 2022
    Cite
    DR-NTU (Data) (2022). Paderborn Domain Generalization Version [Dataset]. http://doi.org/10.21979/N9/UCIK2K
    Explore at:
    Available download formats: bin(252822409), application/x-ipynb+json(7016), text/x-matlab(3379), text/x-matlab(3023), application/x-ipynb+json(13442), bin(279456809), text/x-matlab(2188), text/x-matlab(3422), text/x-matlab(451), text/x-matlab(4092), text/x-matlab(2580), application/x-ipynb+json(2660), application/x-ipynb+json(12838), text/x-matlab(797)
    Dataset updated
    Oct 7, 2022
    Dataset provided by
    DR-NTU (Data)
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Paderborn
    Dataset funded by
    Agency for Science, Technology and Research (A*STAR)
    Description

    This dataset was generated at the KAT data center at Paderborn University with a sampling rate of 64 kHz (Lessmeier et al. 2016). The damages were produced both artificially and naturally: an electric discharge machine (EDM), drilling, and electric engraving were used to produce the artificial faults, while the natural damages were caused by accelerated run-to-failure tests. For both types of damage, i.e., artificial and real, data were collected under working conditions with different operating parameters such as loading torque, rotational speed, and radial force. In total, the Paderborn data were collected under 6 different operating conditions: 3 with artificial damages (denoted as domains I, J, and K) and 3 with real damages (denoted as domains L, M, and N). For example, the loading torque varies from 0.1 to 0.7 Nm and the radial force from 400 to 1000 N, while the rotational speed is fixed at 1500 RPM. Each operating condition (i.e., domain) contains three classes: healthy, inner fault (IF), and outer fault (OF). To prepare the data samples, we adopted sliding windows with a fixed length of 5,120 and a shift of 4,096 (Ragab et al. 2021), generating 12,340 samples for each artificial domain (I, J, and K) and 13,640 samples for each real domain (L, M, and N).
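
    The windowing scheme described above (window length 5,120, shift 4,096, values from the dataset description) can be sketched as follows; the signal here is synthetic:

```python
# Sketch of the sliding-window sample preparation described above. Window
# length 5,120 and shift 4,096 come from the dataset description; the
# signal itself is synthetic.
def sliding_windows(signal, length=5120, shift=4096):
    """Cut a 1-D signal into fixed-length, partially overlapping windows."""
    return [signal[start:start + length]
            for start in range(0, len(signal) - length + 1, shift)]

signal = list(range(64_000))  # one second of samples at the 64 kHz rate
windows = sliding_windows(signal)
# floor((64000 - 5120) / 4096) + 1 = 15 windows, each of length 5120
```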

  16. The generalization by simplification operator with Sester’s method of objects representing groups of buildings in Kartuzy district - scale 1:10000. Data from OSM

    • mostwiedzy.pl
    zip
    Updated May 30, 2021
    + more versions
    Cite
    Adam Inglot (2021). The generalization by simplification operator with Sester’s method of objects representing groups of buildings in Kartuzy district - scale 1:10000. Data from OSM [Dataset]. http://doi.org/10.34808/h88t-1060
    Explore at:
    Available download formats: zip(3868425)
    Dataset updated
    May 30, 2021
    Authors
    Adam Inglot
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The process of automatic generalization is one of the elements of preparing spatial data for creating digital cartographic products. The presented data cover part of the process of generalizing building groups obtained from the OpenStreetMap (OSM) database [1].

  17. Experimental data of "Methodology for Evaluating the Generalization of ResNet"

    • scidb.cn
    Updated Jul 5, 2023
    Cite
    du an an (2023). Experimental data of "Methodology for Evaluating the Generalization of ResNet" [Dataset]. http://doi.org/10.57760/sciencedb.space.00802
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 5, 2023
    Dataset provided by
    Science Data Bank
    Authors
    du an an
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This repository contains images from the thesis and the experimental result data. The experimental results are recorded in an Excel sheet with three columns containing the values of the model's training accuracy, testing accuracy, and generalizability assessment metrics, including IoU, IoU-B, Spectral Norm, EI, and Nuclear Norm.
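
    Two of the listed metrics can be computed with a short sketch. This is a minimal pure-Python illustration, not the thesis code:

```python
# Pure-Python sketch of two of the metrics named above (not the thesis
# code): the spectral norm via power iteration on A^T A, and, for a
# diagonal matrix, the nuclear norm as the sum of the singular values.
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def spectral_norm(A, iters=200):
    """Largest singular value of A, by power iteration on A^T A."""
    At = transpose(A)
    x = [1.0] * len(A[0])
    for _ in range(iters):
        y = matvec(At, matvec(A, x))
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    Ax = matvec(A, x)
    return sum(v * v for v in Ax) ** 0.5

A = [[3.0, 0.0], [0.0, 1.0]]
# Singular values of this diagonal matrix are 3 and 1, so the spectral
# norm is 3 and the nuclear norm (their sum) is 4.
```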

  18. Data from: GenCodeSearchNet

    • zenodo.org
    bin
    Updated Sep 2, 2023
    Cite
    Andor Diera; Andor Diera (2023). GenCodeSearchNet [Dataset]. http://doi.org/10.5281/zenodo.8310891
    Explore at:
    Available download formats: bin
    Dataset updated
    Sep 2, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Andor Diera; Andor Diera
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A new dataset for natural language code search evaluating different types of generalization

  19. Data from: Training data composition determines machine learning...

    • zenodo.org
    bin
    Updated Jun 1, 2025
    Cite
    Eugen Ursu; Eugen Ursu; Aygul Minnegalieva; Aygul Minnegalieva (2025). Training data composition determines machine learning generalization and biological rule discovery [Dataset]. http://doi.org/10.5281/zenodo.11191740
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 1, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eugen Ursu; Eugen Ursu; Aygul Minnegalieva; Aygul Minnegalieva
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Aug 2024
    Description
  20. Data from: Reinforcement learning with associative or discriminative generalization across states and actions: fMRI at 3 T and 7 T

    • neurovault.org
    zip
    Updated Sep 14, 2022
    Cite
    (2022). Reinforcement learning with associative or discriminative generalization across states and actions: fMRI at 3 T and 7 T [Dataset]. http://identifiers.org/neurovault.collection:12536
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 14, 2022
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    A collection of 49 brain maps. Each brain map is a 3D array of values representing properties of the brain at different locations.
