Data model and generic query templates for translating and integrating a set of related CSV event logs into a single event graph, as used in https://dx.doi.org/10.1007/s13740-021-00122-1
Provides input data for 5 datasets (BPIC14, BPIC15, BPIC16, BPIC17, BPIC19)
Provides Python scripts to prepare and import each dataset into a Neo4j database instance through Cypher queries, representing behavioral information not globally (as in an event log), but locally per entity and per relation between entities.
Provides Python scripts to retrieve event data from a Neo4j database instance and render it using Graphviz dot.
The data model and queries are described in detail in: Stefan Esser, Dirk Fahland: Multi-Dimensional Event Data in Graph Databases (2020) https://arxiv.org/abs/2005.14552 and https://dx.doi.org/10.1007/s13740-021-00122-1
Fork the query code from Github: https://github.com/multi-dimensional-process-mining/graphdb-eventlogs
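The "locally per entity" representation described above can be sketched as a Cypher template generated from Python, in the spirit of the cited paper's event-graph model (labels :Event and :Entity, relationships :CORR and :DF). This is a minimal sketch only; the property names (EntityType, timestamp) are assumptions, and the actual query templates shipped in the repository may differ.

```python
# Minimal sketch: build a Cypher query that derives directly-follows (:DF)
# edges per entity, i.e. behavioral information stored locally per entity
# rather than globally as in a flat event log. Labels and relationship
# types follow the cited paper; property names are assumptions.

def df_query_for_entity_type(entity_type: str) -> str:
    """Cypher to connect consecutive events of each entity of one type."""
    return (
        f"MATCH (n:Entity {{EntityType: '{entity_type}'}})<-[:CORR]-(e:Event)\n"
        "WITH n, e ORDER BY e.timestamp\n"
        "WITH n, collect(e) AS events\n"
        "UNWIND range(0, size(events)-2) AS i\n"
        "WITH n, events[i] AS e1, events[i+1] AS e2\n"
        f"MERGE (e1)-[:DF {{EntityType: '{entity_type}'}}]->(e2)"
    )

print(df_query_for_entity_type("Case"))
```

The same template can be instantiated once per entity type (e.g. cases, resources, documents), which is what makes the per-entity, per-relation view possible.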
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Graph of links between entities and news items. (In this case no neo4j-import command was prepared, but that tool is recommended over the LOAD CSV option for large datasets.) The data are the same as in dataset 03 b, but on import they are reorganized differently, generating one graph node per news item. Import instructions for Neo4j:
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
MERGE (e:PER {_id: row._id, text: row.text});
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///orgs.csv' AS row
MERGE (e:ORG {_id: row._id, text: row.text});
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///locations.csv' AS row
MERGE (e:LOC {_id: row._id, text: row.text});
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///misc.csv' AS row
MERGE (e:MISC {_id: row._id, text: row.text});
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///news.csv' AS row
MERGE (n:NEWS {_id: row._id, title: row.title});
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM 'file:///connections_1.csv' AS row
MERGE (e1 {_id: row._id1})
MERGE (e2 {_id: row._id2})
WITH row, e1, e2
MERGE (e1)-[:rel{weight: toInteger(row.weight)}]-(e2);
For more information see: https://github.com/msramalho/desarquivo/blob/master/DATASETS.md
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
We present three defect rediscovery datasets mined from Bugzilla. The datasets capture data for three groups of open source software projects: Apache, Eclipse, and KDE. They contain information about approximately 914,000 defect reports over a period of 18 years (1999-2017) and capture the inter-relationships among duplicate defects.
File Descriptions
apache.csv - Apache Defect Rediscovery dataset
eclipse.csv - Eclipse Defect Rediscovery dataset
kde.csv - KDE Defect Rediscovery dataset
apache.relations.csv - Inter-relations of rediscovered defects of Apache
eclipse.relations.csv - Inter-relations of rediscovered defects of Eclipse
kde.relations.csv - Inter-relations of rediscovered defects of KDE
create_and_populate_neo4j_objects.cypher - Populates the Neo4j graph DB by importing all the data from the CSV files. Note that you have to set the dbms.import.csv.legacy_quote_escaping configuration setting to false to load the CSV files, as per https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/#config_dbms.import.csv.legacy_quote_escaping
create_and_populate_mysql_objects.sql - Populates MySQL RDBMS by importing all the data from the CSV files
rediscovery_db_mysql.zip - For your convenience, we also provide full backup of the MySQL database
neo4j_examples.txt - Sample Neo4j queries
mysql_examples.txt - Sample MySQL queries
rediscovery_eclipse_6325.png - Output of Neo4j example #1
distinct_attrs.csv - Distinct values of bug_status, resolution, priority, severity for each project
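The *.relations.csv files link rediscovered defects to one another. As a minimal sketch of working with them (the column names duplicate_id and original_id are hypothetical; check the actual CSV headers before use), duplicate chains can be grouped into rediscovery clusters with a small union-find pass:

```python
import csv
import io

# Minimal sketch: group rediscovered (duplicate) defects into clusters.
# The column names 'duplicate_id' and 'original_id' are assumptions for
# illustration only; verify them against the actual *.relations.csv headers.

def rediscovery_clusters(csv_text: str) -> list[set[str]]:
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for row in csv.DictReader(io.StringIO(csv_text)):
        a, b = find(row["duplicate_id"]), find(row["original_id"])
        if a != b:
            parent[a] = b  # union the two rediscovery chains

    groups: dict[str, set[str]] = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

sample = "duplicate_id,original_id\n101,100\n102,100\n201,200\n"
clusters = rediscovery_clusters(sample)
```

On the sample above this yields two clusters, one per original defect; the same pass over the full relations files gives the rediscovery groups the Neo4j and MySQL example queries operate on.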
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Graph of links between entities used in the current version of desarquivo, available at https://msramalho.github.io/desarquivo/
To import the data use:
neo4j-admin import --id-type=STRING --nodes=import/i_entities.csv --relationships=rel=import/i_connections.csv
For more information see: https://github.com/msramalho/desarquivo/blob/master/DATASETS.md
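neo4j-admin import reads its node and relationship files in a specific header layout (fields such as :ID, :LABEL, :START_ID, :END_ID). As a minimal sketch of producing compatible rows (the concrete columns _id, text and weight are assumptions mirroring the other desarquivo files, not the documented i_entities.csv layout):

```python
# Minimal sketch: rows in the CSV layout neo4j-admin import expects.
# Header fields ':ID', ':LABEL', ':START_ID', ':END_ID' follow the Neo4j
# import tool conventions; the concrete columns (_id, text, weight) are
# assumptions for illustration.

def to_import_rows(entities, connections):
    """Return (node_rows, rel_rows) ready to be written with csv.writer.

    With --id-type=STRING the :ID values are plain strings. The relationship
    type 'rel' is supplied by the --relationships=rel=... prefix in the
    command above, so the relationship file needs no :TYPE column.
    """
    node_rows = [["_id:ID", "text", ":LABEL"]]
    node_rows += [[_id, text, label] for _id, text, label in entities]
    rel_rows = [[":START_ID", ":END_ID", "weight:int"]]
    rel_rows += [[src, dst, str(w)] for src, dst, w in connections]
    return node_rows, rel_rows
```

Writing each list with csv.writer then yields files that the command above can ingest; see the DATASETS.md link for the authoritative column layout.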
This MS174 dataset is the first dataset made public by the China Human Trafficking and Slaving graph Database project (CHTSDB). CHTSDB is based on a versatile action-centric model and is implemented in a graph database structure. For an overview of the project, please have a look at the README.md file.
The project is also publicly available on Github.
It is the result of an exploration of the first 174 rolls of the official Annals of the Ming Dynasty (the Mingshi 明史). It is based on the edition of the History of the Ming published by Wikisource under a CC BY-SA 4.0 license. A very state-centric source focusing on the higher social strata, with little interest in recording the lived experiences of the common people, the Annals of the Ming Dynasty are probably the worst source one could think of to start the CHTSDB project. This first exploration nonetheless yielded an interesting result, shedding light on the extended scope and enduring presence of war capture under the Ming. Although the source provides very little numerical data, this first dataset still allows a first estimate of 150,000 captives, in all likelihood only the tip of the iceberg.
This dataset contains the following: