License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Abstract: This paper presents the results of an analysis of statistical graphs, in relation to the curricular guidelines and their implementation, in eighteen primary-education mathematics textbooks in Peru, corresponding to three complete series from different publishers. Through a content analysis, we examined the sections where graphs appear, identifying the type of activity posed, the reading level demanded, and the semiotic complexity of the task involved. The textbooks are only partially aligned with the curricular guidelines regarding the presentation of graphs by educational level, and the number of activities proposed by the three publishers is similar. The activities most frequently required are calculating and building. The study also finds a predominance of bar graphs, a basic reading level, and graphs representing a univariate data distribution.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This dataset contains an open and curated scholarly graph we built as a training and test set for data discovery, data connection, author disambiguation, and link prediction tasks. The graph represents the European Marine Science (MES) community included in the OpenAIRE Graph. The nodes of the released graph represent publications, datasets, software, and authors; edges interconnecting research products always have a publication as source and a dataset/software as target. In addition, edges are labeled with semantics that outline whether the publication is referencing, citing, documenting, or supplementing the related outcome. To curate and enrich node metadata and edge semantics, we relied on information extracted from the PDFs of the publications and from the dataset/software webpages, respectively. We curated the authors so as to remove duplicated nodes representing the same person. The released resource counts 4,047 publications, 5,488 datasets, 22 software, 21,561 authors, and 9,692 edges connecting publications to datasets/software. This graph is in the curated_MES folder. We provide this resource as:
- a property graph: a dump that can be imported into Neo4j;
- 5 jsonl files containing publications, datasets, software, authors, and relationships, respectively. Each line of a jsonl file is a JSON object holding the metadata of one node (or one relationship).
We provide two additional scholarly graphs:
- The curated MES graph with the removed edges. During curation we removed some edges because they were labeled with inconsistent or imprecise semantics. This graph includes the same nodes and edges as the previous one and, in addition, contains the edges removed during the curation pipeline, marked as Removed. This graph is in the curated_MES_with_removed_semantics folder.
- The original MES community of OpenAIRE. It represents the MES community extracted from the OpenAIRE Research Graph. This graph has not been curated, and the metadata and semantics are those of the OpenAIRE Research Graph. This graph is in the original_MES_community folder.
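As a sketch of how the jsonl release could be consumed, the snippet below parses a few hand-written lines; the field names (id, source, target, semantics) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical jsonl lines; the actual field names in the released files may differ.
publication_lines = [
    '{"id": "pub-1", "type": "publication", "title": "A study on plankton"}',
]
relationship_lines = [
    '{"source": "pub-1", "target": "data-7", "semantics": "cites"}',
    '{"source": "pub-1", "target": "sw-2", "semantics": "documents"}',
]

publications = [json.loads(line) for line in publication_lines]
edges = [json.loads(line) for line in relationship_lines]

# Edges always go from a publication to a dataset/software, labeled with
# one of the semantics: references, cites, documents, supplements.
citing = [e for e in edges if e["semantics"] == "cites"]
print(len(publications), len(edges), len(citing))
```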
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Graphs are a representative type of fundamental data structure, capable of representing complex association relationships in diverse domains. For large-scale graph processing, stream graphs have become efficient tools for processing dynamically evolving graph data. When processing stream graphs, subgraph counting is a key technique, and it faces significant computational challenges due to its #P-complete nature. This work introduces StreamSC, a novel framework that efficiently estimates subgraph counting results on stream graphs through two key innovations: (i) it is the first learning-based framework to address the subgraph counting problem on stream graphs; and (ii) it addresses the challenges arising from dynamic changes of the data graph caused by the insertion or deletion of edges. Experiments on 5 real-world graphs show the superiority of StreamSC in both accuracy and efficiency.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This dataset is structured as a graph, where nodes represent users and edges capture their interactions, including tweets, retweets, replies, and mentions. Each node provides detailed user attributes, such as unique ID, follower and following counts, and verification status, offering insights into each user's identity, role, and influence in the mental health discourse. The edges illustrate user interactions, highlighting engagement patterns and types of content that drive responses, such as tweet impressions. This interconnected structure enables sentiment analysis and public reaction studies, allowing researchers to explore engagement trends and identify the mental health topics that resonate most with users.
The dataset consists of three files:
1. Edges Data: contains graph data essential for social network analysis, including fields for UserID (Source), UserID (Destination), Post/Tweet ID, and Date of Relationship. This file enables analysis of user connections without including tweet content, maintaining compliance with Twitter/X's data-sharing policies.
2. Nodes Data: offers user-specific details relevant to network analysis, including UserID, Account Creation Date, Follower and Following counts, Verified Status, and Date Joined Twitter. This file allows researchers to examine user behavior (e.g., identifying influential users or spam-like accounts) without direct reference to tweet content.
3. Twitter/X Content Data: contains only the raw tweet text as a single-column dataset, without associated user identifiers or metadata. By isolating the text, we ensure alignment with anonymization standards observed in similar published datasets, safeguarding user privacy in compliance with Twitter/X's data guidelines. This content is crucial for addressing the research focus on mental health discourse in social media. (References to prior Data in Brief publications involving Twitter/X data informed the dataset's structure.)
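A minimal sketch of how the Edges Data file supports engagement analysis; the rows and user IDs below are toy values, not dataset records:

```python
from collections import Counter

# Hypothetical rows mirroring the Edges Data fields:
# (UserID (Source), UserID (Destination), Post/Tweet ID, Date of Relationship)
edges = [
    ("u1", "u2", "t100", "2023-01-05"),
    ("u1", "u3", "t101", "2023-01-06"),
    ("u3", "u2", "t102", "2023-01-07"),
]

out_degree = Counter(src for src, _, _, _ in edges)  # interactions initiated
in_degree = Counter(dst for _, dst, _, _ in edges)   # interactions received

# A simple influence proxy: the user most often on the receiving end.
most_mentioned, count = in_degree.most_common(1)[0]
print(most_mentioned, count)
```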
Terms: https://paper.erudition.co.in/terms
Question paper solutions for the chapter "Diagrammatic and Graphical Representation of Numerical Data" of Numerical and Statistical Methods, 5th semester, Bachelor of Computer Application, 2020-2021.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Description. The NetVote dataset contains the outputs of the NetVote program when applied to voting data coming from VoteWatch (http://www.votewatch.eu/).
These results were used in the following conference papers:
Source code. The NetVote source code is available on GitHub: https://github.com/CompNet/NetVotes.
Citation. If you use our dataset or tool, please cite the following article:
@InProceedings{Mendonca2015,
  author    = {Mendonça, Israel and Figueiredo, Rosa and Labatut, Vincent and Michelon, Philippe},
  title     = {Relevance of Negative Links in Graph Partitioning: A Case Study Using Votes From the {E}uropean {P}arliament},
  booktitle = {2\textsuperscript{nd} European Network Intelligence Conference ({ENIC})},
  year      = {2015},
  pages     = {122-129},
  address   = {Karlskrona, SE},
  publisher = {IEEE Publishing},
  doi       = {10.1109/ENIC.2015.25},
}
-------------------------
Details. This archive contains the following folders:
-------------------------
License. These data are shared under a Creative Commons 0 license.
Contact. Vincent Labatut <vincent.labatut@univ-avignon.fr> & Rosa Figueiredo <rosa.figueiredo@univ-avignon.fr>
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
DBPedia Classes
DBpedia is a knowledge graph extracted from Wikipedia, providing structured data about real-world entities and their relationships. DBpedia Classes are the core building blocks of this knowledge graph, representing different categories or types of entities.
Key Concepts:
Entity: a real-world object, such as a person, place, thing, or concept.
Class: a group of entities that share common properties or characteristics.
Instance: a specific member of a class.
Examples of DBPedia Classes:
Person: represents individuals, e.g., "Barack Obama," "Albert Einstein."
Place: represents locations, e.g., "Paris," "Mount Everest."
Organization: represents groups, institutions, or companies, e.g., "Google," "United Nations."
Event: represents occurrences, e.g., "World Cup," "French Revolution."
Artwork: represents creative works, e.g., "Mona Lisa," "Star Wars."
Hierarchy and Relationships:
DBpedia classes often have a hierarchical structure, where subclasses inherit properties from their parent classes. For example, the class "Person" might have subclasses like "Politician," "Scientist," and "Artist."
Relationships between classes are also important. For instance, a "Person" might have a "birthPlace" relationship with a "Place," or an "Artist" might have a "hasArtwork" relationship with an "Artwork."
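The hierarchy and relationships above can be sketched with a toy, dictionary-based model; the class and property names follow the examples in this text, not the full DBpedia ontology:

```python
# Toy class hierarchy: subclasses point to their parent class.
subclass_of = {
    "Politician": "Person",
    "Scientist": "Person",
    "Artist": "Person",
}

# Toy instances and one relationship triple (entity, property, value).
instance_of = {"Barack Obama": "Politician", "Mona Lisa": "Artwork"}
triples = [("Barack Obama", "birthPlace", "Honolulu")]

def is_a(entity, cls):
    """Walk up the subclass chain to test class membership."""
    current = instance_of.get(entity)
    while current is not None:
        if current == cls:
            return True
        current = subclass_of.get(current)
    return False

print(is_a("Barack Obama", "Person"))  # True: Politician inherits from Person
```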
Applications of DBPedia Classes:
Semantic Search: DBPedia classes can be used to enhance search results by understanding the context and meaning of queries.
Knowledge Graph Construction: DBPedia classes form the foundation of knowledge graphs, which can be used for various applications like question answering, recommendation systems, and data integration.
Data Analysis: DBPedia classes can be used to analyze and extract insights from large datasets.
Data feed representing the executive relationships and influence network as a graph.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Nomographs (or nomograms, or alignment charts) are graphical representations of mathematical relationships (extending to empirical relationships of data). They are used by simply laying a straightedge across the plot through points on scales representing the independent variables; the straightedge then crosses the corresponding datum point for the dependent variable. The choice of independent and dependent variables is arbitrary, so each variable may be determined in terms of the others. Examples of nomographs in common current use compute the lift available for a hot-air balloon, the boiling points of solvents under reduced pressure in the chemistry laboratory, and the relative forces in a centrifuge in a biochemical laboratory. Sundials are another ancient yet widely familiar example. With the advent and ready accessibility of the computer, printed mathematical tables, slide rules and nomographs became generally redundant. However, nomographs provide insight into mathematical relationships, are useful for rapid and repeated application even in the absence of calculational facilities, and can reliably be used in the field. Many nomographs for various purposes may be found online. This paper describes the origins and development of nomographs, illustrating their use with some relevant examples. A supplementary interactive Excel file demonstrates their application for some simple mathematical operations.
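As an illustration of the underlying mathematics, the standard parallel-scale construction for a relation of the form $f_1(u) + f_2(v) = f_3(w)$ (a textbook nomography result, not taken from the paper itself) places two outer scales a distance $d$ apart with scale moduli $m_1$ and $m_2$:

```latex
% Outer scales and the middle (dependent-variable) scale:
\[
  \text{scale 1: } \bigl(0,\; m_1 f_1(u)\bigr), \qquad
  \text{scale 2: } \bigl(d,\; m_2 f_2(v)\bigr), \qquad
  \text{middle scale: } \left( \frac{m_1 d}{m_1 + m_2},\;
    \frac{m_1 m_2}{m_1 + m_2}\, f_3(w) \right).
\]
% A straightedge through the two outer points crosses the middle scale
% exactly where the relation holds, by linear interpolation:
\[
  y\!\left(\tfrac{m_1 d}{m_1+m_2}\right)
  = m_1 f_1(u) + \frac{m_1}{m_1+m_2}\bigl(m_2 f_2(v) - m_1 f_1(u)\bigr)
  = \frac{m_1 m_2}{m_1+m_2}\bigl(f_1(u) + f_2(v)\bigr).
\]
```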
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Dataset Construction
This dataset captures the temporal network of Bitcoin (BTC) flows exchanged between entities at the finest time resolution (UNIX timestamps). It was built from the blockchain, covering the period from January 3, 2009 to January 25, 2021. The blockchain was extracted using the bitcoin-etl Python package (https://github.com/blockchain-etl/bitcoin-etl). The entity-entity network is built by aggregating Bitcoin addresses using the common-input heuristic [1] as well as popular Bitcoin users' addresses provided by https://www.walletexplorer.com/
[1] M. Harrigan and C. Fretter, "The Unreasonable Effectiveness of Address Clustering," 2016 Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld), Toulouse, France, 2016, pp. 368-373, doi: 10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0071.
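A minimal sketch of the common-input heuristic using union-find: all input addresses of a transaction are assumed to belong to the same entity. The transactions below are invented; the real pipeline operates on bitcoin-etl output:

```python
# Union-find over addresses; clusters approximate entities.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Hypothetical transactions: each dict lists the input addresses co-spent.
transactions = [
    {"inputs": ["addr1", "addr2"]},  # addr1 and addr2: same entity
    {"inputs": ["addr2", "addr3"]},  # chains addr3 into that cluster
    {"inputs": ["addr4"]},           # singleton cluster
]

for tx in transactions:
    first = tx["inputs"][0]
    for addr in tx["inputs"][1:]:
        union(first, addr)

print(find("addr1") == find("addr3"))  # True
print(find("addr1") == find("addr4"))  # False
```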
Dataset Description
Bitcoin Activity Temporal Coverage: From 03 January 2009 to 25 January 2021
Overview:
This dataset provides a comprehensive representation of Bitcoin exchanges between entities over a significant temporal span, spanning from the inception of Bitcoin to recent years. It encompasses various temporal resolutions and representations to facilitate Bitcoin transaction network analysis in the context of temporal graphs.
All dates were derived from block UNIX timestamps, in the GMT timezone.
Contents:
The dataset is distributed across several compressed archives:
All data are stored in the Apache Parquet file format, a columnar storage format optimized for analytical queries; it can be read with, for example, the pyspark Python package.
orbitaal-stream_graph.tar.gz:
The root directory is STREAM_GRAPH/
Contains a stream graph representation of Bitcoin exchanges at the finest temporal scale, corresponding to the validation time of each block (averaging approximately 10 minutes).
The stream graph is divided into 13 files, one for each year
Files format is parquet
Name format is orbitaal-stream_graph-date-[YYYY]-file-id-[ID].snappy.parquet, where [YYYY] is the corresponding year and [ID] is an integer from 1 to N (the number of files) such that sorting by increasing [ID] corresponds to sorting by increasing year
These files are in the subdirectory STREAM_GRAPH/EDGES/
orbitaal-snapshot-all.tar.gz:
The root directory is SNAPSHOT/
Contains the snapshot network representing all transactions aggregated over the whole dataset period (from Jan. 2009 to Jan. 2021).
Files format is parquet
Name format is orbitaal-snapshot-all.snappy.parquet.
These files are in the subdirectory SNAPSHOT/EDGES/ALL/
orbitaal-snapshot-year.tar.gz:
The root directory is SNAPSHOT/
Contains the yearly resolution of snapshot networks
Files format is parquet
Name format is orbitaal-snapshot-date-[YYYY]-file-id-[ID].snappy.parquet, where [YYYY] is the corresponding year and [ID] is an integer from 1 to N (the number of files) such that sorting by increasing [ID] corresponds to sorting by increasing year
These files are in the subdirectory SNAPSHOT/EDGES/year/
orbitaal-snapshot-month.tar.gz:
The root directory is SNAPSHOT/
Contains the monthly resolution snapshot networks
Files format is parquet
Name format is orbitaal-snapshot-date-[YYYY]-[MM]-file-id-[ID].snappy.parquet, where [YYYY] and [MM] are the corresponding year and month, and [ID] is an integer from 1 to N (the number of files) such that sorting by increasing [ID] corresponds to sorting by increasing year and month
These files are in the subdirectory SNAPSHOT/EDGES/month/
orbitaal-snapshot-day.tar.gz:
The root directory is SNAPSHOT/
Contains the daily resolution snapshot networks
Files format is parquet
Name format is orbitaal-snapshot-date-[YYYY]-[MM]-[DD]-file-id-[ID].snappy.parquet, where [YYYY], [MM], and [DD] are the corresponding year, month, and day, and [ID] is an integer from 1 to N (the number of files) such that sorting by increasing [ID] corresponds to sorting by increasing year, month, and day
These files are in the subdirectory SNAPSHOT/EDGES/day/
orbitaal-snapshot-hour.tar.gz:
The root directory is SNAPSHOT/
Contains the hourly resolution snapshot networks
Files format is parquet
Name format is orbitaal-snapshot-date-[YYYY]-[MM]-[DD]-[hh]-file-id-[ID].snappy.parquet, where [YYYY], [MM], [DD], and [hh] are the corresponding year, month, day, and hour, and [ID] is an integer from 1 to N (the number of files) such that sorting by increasing [ID] corresponds to sorting by increasing year, month, day, and hour
These files are in the subdirectory SNAPSHOT/EDGES/hour/
orbitaal-nodetable.tar.gz:
The root directory is NODE_TABLE/
Contains two files in parquet format: the first gives information on the nodes present in the stream graph and snapshots, such as their period of activity and associated global Bitcoin balance; the second contains the list of all associated Bitcoin addresses.
Small samples in CSV format
orbitaal-stream_graph-2016_07_08.csv and orbitaal-stream_graph-2016_07_09.csv
These two CSV files are stream graph representations covering the Bitcoin halving of July 2016.
orbitaal-snapshot-2016_07_08.csv and orbitaal-snapshot-2016_07_09.csv
These two CSV files are daily snapshot representations covering the Bitcoin halving of July 2016.
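The naming scheme above can be captured in a small helper; the zero-padding of [MM], [DD], and [hh] is an assumption inferred from the stated patterns, so verify against the actual archives:

```python
def snapshot_name(file_id, year, month=None, day=None, hour=None):
    """Build an ORBITAAL snapshot file name for the given resolution.

    Omitted components select a coarser resolution (year -> month -> day -> hour).
    """
    parts = [f"{year:04d}"]
    for component in (month, day, hour):
        if component is not None:
            parts.append(f"{component:02d}")
    date = "-".join(parts)
    return f"orbitaal-snapshot-date-{date}-file-id-{file_id}.snappy.parquet"

print(snapshot_name(3, 2016, 7))     # monthly file for July 2016
print(snapshot_name(1, 2016, 7, 8))  # daily file for 8 July 2016
```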
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This is the pre-print version of a paper accepted at the Open Repositories Conference in Brisbane, Australia, June 2017. Abstract: Research Graph is an open collaborative project that builds the capability for connecting researchers, publications, research grants and research datasets (data in research). VIVO is an open source, semantic web platform and a set of ontologies for representing scholarship. To provide interoperability between Research Graph data and VIVO systems, we modelled the Research Graph metamodel using the VIVO Integrated Semantic Framework. To evaluate the mapping, we used the model to connect figshare RDF records to data collections in Research Data Australia using the Research Graph API. In addition, we are working toward loading Research Graph data into a VIVO instance. VIVO provides a search capability, and pages for the first-class entities in the Research Graph model: researcher, dataset, grant, and publication. The result provides a visualisation solution for co-authors, co-funding, timelines, and a capability map for finding expertise related to concepts of interest. The resulting linked open data will be made freely available and can be used in other tools for additional discovery.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Modeling dependence in high-dimensional systems has become an increasingly important topic. Most approaches rely on the assumption of a multivariate Gaussian distribution such as statistical models on directed acyclic graphs (DAGs). They are based on modeling conditional independencies and are scalable to high dimensions. In contrast, vine copula models accommodate more elaborate features like tail dependence and asymmetry, as well as independent modeling of the marginals. This flexibility comes however at the cost of exponentially increasing complexity for model selection and estimation. We show a novel connection between DAGs with limited number of parents and truncated vine copulas under sufficient conditions. This motivates a more general procedure exploiting the fast model selection and estimation of sparse DAGs while allowing for non-Gaussian dependence using vine copulas. By numerical examples in hundreds of dimensions, we demonstrate that our approach outperforms the standard method for vine structure selection. Supplementary material for this article is available online.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
The network was generated using email data from a large European research institution. For a period from October 2003 to May 2005 (18 months) we have anonymized information about all incoming and outgoing email of the research institution. For each sent or received email message we know the time, the sender and the recipient of the email. Overall we have 3,038,531 emails between 287,755 different email addresses. Note that we have a complete email graph for only 1,258 email addresses that come from the research institution. Furthermore, there are 34,203 email addresses that both sent and received email within the span of our dataset. All other email addresses are either non-existing, mistyped or spam.
Given a set of email messages, each node corresponds to an email address. We create a directed edge between nodes i and j, if i sent at least one message to j.
Enron email communication network covers all the email communication within a dataset of around half million emails. This data was originally made public, and posted to the web, by the Federal Energy Regulatory Commission during its investigation. Nodes of the network are email addresses and if an address i sent at least one email to address j, the graph contains an undirected edge from i to j. Note that non-Enron email addresses act as sinks and sources in the network as we only observe their communication with the Enron email addresses.
The Enron email data was originally released by William Cohen at CMU.
Wikipedia is a free encyclopedia written collaboratively by volunteers around the world. Each registered user has a talk page, that she and other users can edit in order to communicate and discuss updates to various articles on Wikipedia. Using the latest complete dump of Wikipedia page edit history (from January 3 2008) we extracted all user talk page changes and created a network.
The network contains all the users and discussion from the inception of Wikipedia till January 2008. Nodes in the network represent Wikipedia users and a directed edge from node i to node j represents that user i at least once edited a talk page of user j.
The dynamic face-to-face interaction networks represent the interactions that happen during discussions between groups of participants playing the Resistance game. This dataset contains networks extracted from 62 games. Each game is played by 5-8 participants and lasts between 45 and 60 minutes. We extract dynamically evolving networks from the free-form discussions using the ICAF algorithm. The extracted networks are used to characterize and detect group deceptive behavior using the DeceptionRank algorithm.
The networks are weighted, directed and temporal. Each node represents a participant. At each 1/3 second, a directed edge from node u to v is weighted by the probability of participant u looking at participant v or the laptop. Additionally, we also provide a binary version where an edge from u to v indicates participant u looks at participant v (or the laptop).
Stanford Network Analysis Platform (SNAP) is a general purpose, high performance system for analysis and manipulation of large networks. Graphs consist of nodes and directed/undirected/multiple edges between the graph nodes. Networks are graphs with data on nodes and/or edges of the network.
The core SNAP library is written in C++ and optimized for maximum performance and compact graph representation. It easily scales to massive networks with hundreds of millions of nodes, and billions of edges. It efficiently manipulates large graphs, calculates structural properties, generates regular and random graphs, and supports attributes on nodes and edges. Besides scalability to large graphs, an additional strength of SNAP is that nodes, edges and attributes in a graph or a network can be changed dynamically during the computation.
SNAP was originally developed by Jure Leskovec in the course of his PhD studies. The first release was made available in November 2009. SNAP uses GLib, a general purpose STL (Standard Template Library)-like library developed at the Jozef Stefan Institute. SNAP and GLib are actively developed and used in numerous academic and industrial projects.
These data were used to examine grammatical structures and patterns within a set of geospatial glossary definitions. The objectives of our study were to analyze the semantic structure of the input definitions, use this information to build triple structures of RDF graph data, upload our lexicon to a knowledge-graph software, and perform SPARQL queries on the data. Upon completion of this study, SPARQL queries proved effective at conveying graph triples of semantic significance. These data represent and characterize the lexicon of our input text, which is used to form graph triples. The data were collected in 2024 by passing text through multiple Python programs utilizing spaCy (a natural language processing library) and its pre-trained English transformer pipeline. Before the data were processed by the Python programs, input definitions were first rewritten as natural language and formatted as tabular data. Passages were then tokenized and characterized by their part-of-speech, tag, dependency relation, dependency head, and lemma. Each word within the lexicon was tokenized. A stop-words list was utilized only to remove punctuation and symbols from the text, excluding hyphenated words (e.g., bowl-shaped), which remained as such. The tokens' lemmas were then aggregated and totaled to find their recurrences within the lexicon. This procedure was repeated to tokenize noun chunks from the same glossary definitions.
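A minimal illustration of querying such graph triples, using a SPARQL-like pattern match in plain Python; the terms are invented, not drawn from the actual glossary:

```python
# Triples as (subject, predicate, object) tuples.
triples = [
    ("basin", "isA", "landform"),
    ("basin", "hasShape", "bowl-shaped"),
    ("plateau", "isA", "landform"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard (like ?var)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogue of: SELECT ?s WHERE { ?s isA landform }
landforms = [s for s, _, _ in match(p="isA", o="landform")]
print(landforms)
```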
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
For K4 and Km-e graphs, a coloring of type (K4,Km-e;n) is an edge coloring of the complete graph Kn that contains neither a K4 subgraph in the first color (represented by non-edges of the graph) nor a Km-e subgraph in the second color (represented by edges of the graph). Km-e denotes the complete graph Km with one edge removed. The Ramsey number R(K4,Km-e) is the smallest natural number n such that every edge coloring of Kn contains a subgraph isomorphic to K4 in the first color (non-edges) or to Km-e in the second color (edges). Colorings of type (K4,Km-e;n) exist for n < R(K4,Km-e). The dataset consists of:
a) 5 files containing all non-isomorphic graphs that are colorings of type (K4,K3-e;n) for 1<n<7,
b) 9 files containing all non-isomorphic graphs that are colorings of type (K4,K4-e;n) for 1<n<11.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
For K3 and Km-e graphs, a coloring of type (K3,Km-e;n) is an edge coloring of the complete graph Kn that contains neither a K3 subgraph in the first color (represented by non-edges of the graph) nor a Km-e subgraph in the second color (represented by edges of the graph). Km-e denotes the complete graph Km with one edge removed. The Ramsey number R(K3,Km-e) is the smallest natural number n such that every edge coloring of Kn contains a subgraph isomorphic to K3 in the first color (non-edges) or to Km-e in the second color (edges). Colorings of type (K3,Km-e;n) exist for n < R(K3,Km-e). The dataset consists of:
a) 3 files containing all non-isomorphic graphs that are colorings of type (K3,K3-e;n) for 1<n<5,
b) 5 files containing all non-isomorphic graphs that are colorings of type (K3,K4-e;n) for 1<n<7,
c) 9 files containing all non-isomorphic graphs that are colorings of type (K3,K5-e;n) for 1<n<11,
d) 15 files containing all non-isomorphic graphs that are colorings of type (K3,K6-e;n) for 1<n<17.
All graphs are saved in Graph6 format (https://users.cecs.anu.edu.au/~bdm/data/formats.html). The Nauty package by Brendan D. McKay was used to check graph isomorphism (http://users.cecs.anu.edu.au/~bdm/nauty/). We recommend the survey article by S. Radziszowski containing the most important results on Ramsey numbers: S. Radziszowski, Small Ramsey Numbers, Electron. J. Comb., Dyn. Surv. DS1, revision #15, Mar 3, 2017 (https://doi.org/10.37236/21).
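A brute-force checker for small cases of the definition above, specialized to (K3, Km-e; n) colorings: the graph's edges are the second color and its non-edges the first, so we test that the complement is triangle-free and that no m vertices span at least C(m,2)-1 edges:

```python
from itertools import combinations

def is_coloring_type(n, m, edge_list):
    """Return True iff the graph on vertices 0..n-1 is a (K3, Km-e; n) coloring."""
    edges = {frozenset(e) for e in edge_list}
    # No K3 among non-edges (first color): every triple must touch an edge.
    for tri in combinations(range(n), 3):
        if all(frozenset(p) not in edges for p in combinations(tri, 2)):
            return False
    # No Km-e among edges (second color): no m-subset with >= C(m,2)-1 edges.
    threshold = m * (m - 1) // 2 - 1
    for sub in combinations(range(n), m):
        if sum(frozenset(p) in edges for p in combinations(sub, 2)) >= threshold:
            return False
    return True

# A perfect matching on 4 vertices works for (K3, K3-e; 4): its complement C4 is
# triangle-free, and no vertex has degree 2, so it contains no K3-e (a path P3).
print(is_coloring_type(4, 3, [(0, 1), (2, 3)]))                        # True
# On 5 vertices no such coloring exists (R(K3, K3-e) = 5); C5 fails:
print(is_coloring_type(5, 3, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))  # False
```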
License: proprietary (https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt)
Yearly citation counts for the publication titled "A Graph-Based Approach for Representing Wastewater Networks from GIS Data: Ensuring Connectivity and Consistency".
License: custom license (https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/KOAMK4)
This dataset contains source code and data used in the PhD thesis "Learning Neural Graph Representations in Non-Euclidean Geometries". The dataset is split into four repositories:
figet: source code to run the experiments for chapter 6, "Constructing and Exploiting Hierarchical Graphs".
hyfi: source code to run the experiments for chapter 7, "Inferring the Hierarchy with a Fully Hyperbolic Model".
sympa: source code to run the experiments for chapter 8, "A Framework for Graph Embeddings on Symmetric Spaces".
gyroSPD: source code to run the experiments for chapter 9, "Representing Multi-Relational Graphs on SPD Manifolds".
Privacy policy: https://dataintelo.com/privacy-and-policy
According to our latest research, the global Graph Data Integration Platform market size reached USD 2.47 billion in 2024, demonstrating robust momentum across key verticals. The market is expected to expand at a remarkable CAGR of 18.9% from 2025 to 2033, reaching a forecasted value of USD 12.13 billion by 2033. This rapid growth is primarily driven by the increasing adoption of graph-based technologies to manage complex, interconnected data and the rising demand for advanced analytics capabilities across industries.
The surge in demand for graph data integration platforms is fundamentally linked to the exponential growth of data volumes and the increasing complexity of enterprise data environments. Organizations today are dealing with vast, diverse, and highly interconnected datasets that traditional relational databases struggle to handle efficiently. Graph-based solutions, by contrast, excel at representing and querying complex relationships, making them indispensable for applications such as fraud detection, recommendation engines, and network analysis. As digital transformation accelerates and businesses seek to extract deeper insights from their data, the need for robust graph data integration platforms is only expected to intensify.
Another vital growth factor for the graph data integration platform market is the expanding application of artificial intelligence and machine learning technologies. These advanced analytics tools rely heavily on the ability to process and analyze large volumes of interconnected data in real time. Graph data integration platforms enable organizations to seamlessly integrate disparate data sources, enhance data quality, and facilitate advanced analytics. This capability is particularly valuable in sectors such as BFSI, healthcare, and retail, where timely insights can drive competitive advantage and operational efficiency. The convergence of AI, machine learning, and graph data integration is poised to unlock new opportunities and fuel sustained market growth throughout the forecast period.
The growing emphasis on data governance, security, and compliance is also propelling the adoption of graph data integration platforms. As regulatory requirements become more stringent and organizations face increasing scrutiny over data privacy and integrity, the ability to track, manage, and audit complex data relationships becomes critical. Graph-based solutions offer unparalleled visibility into data lineage and dependencies, enabling organizations to meet compliance mandates more effectively. This, coupled with the rising threat of cyber-attacks and data breaches, is prompting enterprises to invest in advanced data integration solutions that can ensure both security and compliance.
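The lineage-and-audit capability mentioned above reduces to a graph reachability question: which upstream datasets does a given artifact ultimately depend on? The sketch below assumes a hypothetical dependency map (the dataset names are invented) and walks it with a depth-first traversal; a compliance audit of "what feeds this report?" is exactly this query at scale.

```python
# Hypothetical lineage edges: derived dataset -> list of its direct sources.
derived_from = {
    "quarterly_report": ["sales_clean"],
    "sales_clean": ["sales_raw", "fx_rates"],
    "fx_rates": [],
    "sales_raw": [],
}

def upstream(dataset):
    """All datasets `dataset` ultimately depends on (transitive sources, via DFS)."""
    seen = set()
    stack = list(derived_from.get(dataset, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(derived_from.get(d, []))
    return seen

print(sorted(upstream("quarterly_report")))  # ['fx_rates', 'sales_clean', 'sales_raw']
```

Because the dependencies are stored as explicit edges, the same structure answers the reverse question (impact analysis: "what breaks if this source changes?") by traversing the edges in the opposite direction.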
From a regional perspective, North America continues to dominate the graph data integration platform market, accounting for the largest share in 2024, followed closely by Europe and Asia Pacific. The United States, in particular, has witnessed widespread adoption of graph technologies across sectors such as finance, healthcare, and telecommunications. Meanwhile, Asia Pacific is emerging as a high-growth region, driven by rapid digitalization, expanding IT infrastructure, and increasing investments in big data and analytics. As organizations worldwide recognize the strategic value of graph data integration, the market is expected to witness significant growth across all major regions.
The component segment of the graph data integration platform market is bifurcated into software and services, each playing a pivotal role in the overall ecosystem. The software component, which includes graph databases, integration tools, and visualization solutions, accounted for the largest share in 2024. This dominance is attributed to the continuous innovation in graph database technologies and the increasing demand for scalable, high-performance solutions that can handle complex data relationships. Leading vendors are investing heavily in R&D to enhance the capabilities of their software offerings, introducing features such as real-time analytics, automated data mapping, and advanced visualization tools. These advancements are enabling organizations to unlock deeper insights from their data and drive more informed decision-making.
On the services front, the market is likewise witnessing robust growth as organizations seek outside expertise to deploy and operate these platforms.