Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This repository contains the reference code for our ACM MM 2019 paper. Its GitHub link is https://github.com/ysyscool/SGDNet
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here we provide a large dataset of neuronal responses in the mouse auditory system to a range of simple sounds. These data were initially published in the paper by Bagur, Lebourg et al.
The "Data" files are organized as follows :
The "Codes" folder contains matlab code showing how to access the data and illustrating how it is organized as well as code to perform basic population level analysis from Bagur, Lebourg et al, illustrating how to calculate noise-free correlation between popultion vector
"Packages" are open source code from github written by other members of the scientific community and saved here as a reference :
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
・All .mat files are the source code of the article.
Custom license: https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/KOAMK4
This dataset contains source code and data used in the PhD thesis "Learning Neural Graph Representations in Non-Euclidean Geometries". The dataset is split into four repositories:
figet: Source code to run experiments for Chapter 6, "Constructing and Exploiting Hierarchical Graphs".
hyfi: Source code to run experiments for Chapter 7, "Inferring the Hierarchy with a Fully Hyperbolic Model".
sympa: Source code to run experiments for Chapter 8, "A Framework for Graph Embeddings on Symmetric Spaces".
gyroSPD: Source code to run experiments for Chapter 9, "Representing Multi-Relational Graphs on SPD Manifolds".
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Neural-Story-v1 Dataset
Overview
The Neural-Story-v1 dataset is a curated collection of short stories featuring a rich variety of genres and plot settings. Carefully assembled by NeuralNovel, this dataset aims to serve as a valuable resource for testing and fine-tuning small language models using LoRA.
Data Source
The dataset content is the result of a combination of automated generation by Mixtral 8x7B and manual refinement.
Purpose
Designed… See the full description on the dataset page: https://huggingface.co/datasets/NeuralNovel/Neural-Story-v1.
Custom license: https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-2052
Source code of our visual analytics system for the interpretation of hidden states in recurrent neural networks. This project contains source code for preprocessing data and for the visual analytics system. Additionally, we added precomputed data for immediate use in the visual analytics system. The subdirectories contain the following:
dataPreparation: Python scripts to prepare data for analysis. In these scripts, Long Short-Term Memory (LSTM) models are trained and the data for our visual analytics system is exported.
visualAnalytics: The source code of our visual analytics system to explore hidden states.
demonstrationData: Data files for use with our visual analytics system. The same data can also be generated with the data preparation scripts.
We provide two scripts to generate data for analysis in our visual analytics system, one each for the IMDB and Reuters datasets as available in Keras. The output files can then be loaded into our visual analytics system; their locations have to be specified in userData.toml of the visual analytics system. The output files of our data preparation scripts, or the ones provided for demonstration, can be loaded into our visual analytics system for visualization and analysis. Since we provide input files, you do not have to run the preprocessing steps and can use our visual analytics system immediately. Please have a look at the respective README files for more details.
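As a rough illustration of the kind of data preparation step described above (a minimal sketch only; the layer sizes, sequence length, and .npy output format are assumptions, not the project's actual dataPreparation scripts), one could train a small LSTM on the Keras IMDB dataset and export its per-timestep hidden states for later visual analysis:

```python
# Minimal sketch: train a small LSTM on the Keras IMDB dataset and export
# per-timestep hidden states. Layer sizes and the .npy output are assumptions,
# not the project's actual dataPreparation scripts.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

max_words, max_len = 10000, 200
(x_train, y_train), _ = keras.datasets.imdb.load_data(num_words=max_words)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)

inputs = keras.Input(shape=(max_len,))
embedded = layers.Embedding(max_words, 32)(inputs)
# return_sequences=True exposes the hidden state at every timestep
states = layers.LSTM(64, return_sequences=True)(embedded)
outputs = layers.Dense(1, activation="sigmoid")(layers.GlobalMaxPooling1D()(states))

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)

# Export hidden states for a subset of reviews for inspection in a visualization tool.
state_model = keras.Model(inputs, states)
hidden_states = state_model.predict(x_train[:100])   # shape: (100, max_len, 64)
np.save("hidden_states.npy", hidden_states)
```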
Custom license: https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/ZBNUCG
This dataset contains source code and data used in the PhD thesis "Linguistically-Inspired Neural Coherence Modeling". The dataset is split into five repositories:
StruSim: Source code to run experiments for Chapter 4, "Document Structure Similarity-Enhanced Coherence Modeling".
ConnRel: Source code to run experiments for Chapter 5, "Annotation-inspired Implicit Discourse Relation Classification".
Exp2Imp: Source code to run experiments for Chapter 6, "Explicit to Implicit Discourse Relation Classification".
RelCoh: Source code to run experiments for Chapter 7, "Discourse Relation-Enhanced Coherence Modeling".
EntyRelCoh: Source code to run experiments for Chapter 8, "Coherence Modeling Using Entities and Discourse Relations".
The data used in the experiments can be downloaded from the Linguistic Data Consortium (https://www.ldc.upenn.edu/):
PDTB 2.0: https://catalog.ldc.upenn.edu/LDC2008T05
PDTB 3.0: https://catalog.ldc.upenn.edu/LDC2019T05
TOEFL Dataset: https://catalog.ldc.upenn.edu/LDC2014T06
GCDC: https://github.com/aylai/GCDC-corpus
CoheSentia: https://github.com/AviyaMn/CoheSentia
Custom license: https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/ERDJDI
Abstract: Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope. We explore a CNN architecture for classifying modal sense in English and German. We show that CNNs are superior to manually designed feature-based classifiers and a standard NN classifier. We analyze the feature maps learned by the CNN and identify known and previously unattested linguistic features. We benchmark the CNN on a standard WSD task, where it compares favorably to models using sense-disambiguated target vectors. (Marasović and Frank, 2016)
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This is the readme for the supplemental data for our ICDAR 2019 paper.
You can read our paper via IEEE here: https://ieeexplore.ieee.org/document/8978202
If you found this dataset useful, please consider citing our paper:
@inproceedings{DBLP:conf/icdar/MorrisTE19,
author = {David Morris and
Peichen Tang and
Ralph Ewerth},
title = {A Neural Approach for Text Extraction from Scholarly Figures},
booktitle = {2019 International Conference on Document Analysis and Recognition,
{ICDAR} 2019, Sydney, Australia, September 20-25, 2019},
pages = {1438--1443},
publisher = {{IEEE}},
year = {2019},
url = {https://doi.org/10.1109/ICDAR.2019.00231},
doi = {10.1109/ICDAR.2019.00231},
timestamp = {Tue, 04 Feb 2020 13:28:39 +0100},
biburl = {https://dblp.org/rec/conf/icdar/MorrisTE19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
This work was financially supported by the German Federal Ministry of Education and Research (BMBF) and European Social Fund (ESF) (InclusiveOCW project, no. 01PE17004).
We used different sources of data for testing, validation, and training. Our testing set was assembled from the work by Böschen et al. that we cited. We excluded the DeGruyter dataset from it and used it as our validation dataset.
These datasets contain a readme with license information. Further information about the associated project can be found in the authors' published work we cited: https://doi.org/10.1007/978-3-319-51811-4_2
The DeGruyter dataset does not include the labeled images due to license restrictions. As of writing, the images can still be downloaded from DeGruyter via the links in the readme. Note that depending on what program you use to strip the images out of the PDF they are provided in, you may have to re-number the images.
We used label_generator's generated dataset, which the author made available in a requester-pays Amazon S3 bucket. We also used the Multi-Type Web Images dataset, which is mirrored here.
We have made our code available in code.zip. We will upload code, announce further news, and field questions via the GitHub repo.
Our text detection network is adapted from Argman's EAST implementation. The EAST/checkpoints/ours subdirectory contains the trained weights we used in the paper.
We used a Tesseract script to run text extraction on the detected text rows. It is included in our code archive code.tar as text_recognition_multipro.py.
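As a hedged illustration of such a step (this is not the paper's text_recognition_multipro.py; the directory name and Tesseract options are assumptions), running Tesseract over cropped text-row images might look like this:

```python
# Minimal sketch: run Tesseract OCR over cropped text-row images.
# Not the paper's text_recognition_multipro.py; paths and options are assumptions.
import glob
from PIL import Image
import pytesseract

for row_path in sorted(glob.glob("detected_rows/*.png")):
    row_image = Image.open(row_path)
    # --psm 7 tells Tesseract to treat the image as a single line of text,
    # which matches the detected text rows.
    text = pytesseract.image_to_string(row_image, config="--psm 7")
    print(row_path, text.strip())
```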
We used a Java evaluation tool provided by Falk Böschen, adapted to our file structure. We included this as evaluator.jar.
Parameter sweeps are automated by param_sweep.rb. This file also shows how to invoke all of these components.
Custom license: https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/FVZOI9
The source code of pRVFLN, "Parsimonious random vector functional link network for data streams".
Body perception plays a fundamental role in social cognition. Yet, the neural mechanisms underlying this process in humans remain elusive given the spatiotemporal constraints of functional imaging. Here we present for the first time intracortical recordings of single- and multi-unit spiking activity in two epilepsy surgery patients in or near the extrastriate body area (EBA), a critical region for body perception. Our recordings revealed a strong preference for human bodies over a large range of control stimuli. Notably, body selectivity was driven by a distinct selectivity for body parts. The observed body selectivity generalized to non-photographic depictions of bodies, including silhouettes and stick figures. Overall, our study provides unique neural data that bridge the gap between human neuroimaging and macaque electrophysiology studies, laying a solid foundation for computational models of human body processing.
All data were collected using 96-channel Utah arrays implanted in human visual cortex and recorded with a Neuroport system (Blackrock Neurotech).
# Supplementary Materials for [Your Scientific Paper Title]
This repository contains the supplementary data and source code used in our study. The materials are organized into two main sections: Data and Source Code. Below, you will find detailed information on how the data is structured and how to utilize the provided source code for analysis.
The data is organized into folders corresponding to different experiments. Each experiment folder contains subfolders for each...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset of 92 valid eye-tracking sessions from 25 participants working in VS Code and answering 15 different code-understanding questions (e.g., what is the output, side effects, algorithmic complexity, concurrency, etc.) on source code written in 3 programming languages: Python, C++, and C#.
Curated database of published models so that they can be openly accessed, downloaded, and tested to support computational neuroscience. Provides an accessible location for storing and efficiently retrieving computational neuroscience models. Coupled with NeuronDB. Models can be coded in any language for any environment. Model code can be viewed before downloading, and browsers can be set to auto-launch the models. The model source code has to be available from a publicly accessible online repository or WWW site. Original source code is used to generate the simulation results from which the authors derived their published insights and conclusions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The comparison data from the experiments of the paper "Extracting Meaningful Attention on Source Code: An Empirical Study of Developer and Neural Model Code Exploration"; it is used to reproduce the plots in the paper.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset contains the program source code, model file, and experimental data for the analysis in the paper "Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Included are the training, validation, and testing data sets for synthetic holograms (netCDF), the HOLODEC data set containing the RF07 examples (netCDF), and the two splits of manually labeled HOLODEC image tiles (numpy arrays). The source code for using the data sets can be found at https://github.com/NCAR/holodec-ml
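A minimal sketch of inspecting one of these netCDF files with xarray (the file name below is a placeholder and the variable layout is an assumption; the actual readers are in the holodec-ml repository):

```python
# Minimal sketch: open one of the netCDF data sets and inspect its contents.
# The file name is a placeholder; the actual loading code lives in
# https://github.com/NCAR/holodec-ml.
import xarray as xr

ds = xr.open_dataset("synthetic_holograms_training.nc")  # placeholder file name
print(ds)            # dimensions, coordinates, and variables
print(ds.data_vars)  # variables (e.g., hologram images and labels, assuming that layout)
```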
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset contains partial program source code and the analysis data for the paper "Fingerprinting Deep Neural Networks - a DeepFool Approach".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset accompanies the Master of Science (Computer Science) thesis by B.R. Kane titled "Ergo: A Gesture-Based Computer Interaction Device". It contains the raw sensor recordings in CSV format (train/), the pre-processed training-validation and testing datasets trn_20_10.npz and tst_20_10.npz, and the dataset of the literature in BibTeX as well as CSV format.
The code to train machine learning models on the raw sensor data is available on GitHub: https://github.com/beyarkay/masters-code/
The code to analyse the gesture recognition is also available on GitHub (along with the source code of the thesis): https://github.com/beyarkay/masters-thesis/
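A minimal sketch of loading the pre-processed splits (the array names stored inside the archives are not documented here, so the sketch only lists them; see the masters-code repository for the exact format):

```python
# Minimal sketch: load the pre-processed training-validation and testing splits
# and list the arrays they contain. The array names inside the archives are not
# documented here, so inspect .files before indexing.
import numpy as np

train_val = np.load("trn_20_10.npz")
test = np.load("tst_20_10.npz")
print("training-validation arrays:", train_val.files)
print("testing arrays:", test.files)
```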
CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
V1
I have created artificial intelligence software that can predict emotion from the text you have written, using a semi-supervised learning method and the RC algorithm. I used very simple code and focused the software on solving the problem. I aim to create the second version of the software using an RNN (Recurrent Neural Network). I hope this can serve as an example for you to use in your theses and projects.
V2
I decided to apply a technique I had developed to the emotion dataset that I had previously used with semi-supervised machine learning methods. This technique is produced according to Quantum5 laws. I developed smart artificial intelligence software that can predict emotion with Quantum5 neuronal networks. I am sharing this software with all humanity as open source on Kaggle. It is my first open-source NLP project with Quantum technology. Developing an NLP system with Quantum technology is very exciting!
Happy learning!
Emirhan BULUT
Head of AI and AI Inventor
Emirhan BULUT. (2022). Emotion Prediction with Quantum5 Neural Network AI [Data set]. Kaggle. https://doi.org/10.34740/KAGGLE/DS/2129637
Python 3.9.8
Keras
Tensorflow
NumPy
Pandas
Scikit-learn (SKLEARN)
Name-Surname: Emirhan BULUT
Contact (Email) : emirhan@isap.solutions
LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/
Kaggle: https://www.kaggle.com/emirhanai
Official Website: https://www.emirhanbulut.com.tr
The following is the README of the original repository.
=======================================
This is an implementation of the training, inference, and evaluation scripts for OpenGlue, released under an open-source license. Our paper: OpenGlue: Open Source Graph Neural Net Based Pipeline for Image Matching.
SuperGlue is a method for learning feature matching with a graph neural network, proposed by a team from Magic Leap (Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich). Official full paper: SuperGlue: Learning Feature Matching with Graph Neural Networks.
We present OpenGlue: a free, open-source framework for image matching that uses a Graph Neural Network-based matcher inspired by SuperGlue. We show that including additional geometrical information, such as local feature scale, orientation, and affine geometry, when available (e.g., for SIFT features), significantly improves the performance of the OpenGlue matcher. We study the influence of various attention mechanisms on accuracy and speed. We also present a simple architectural improvement by combining local descriptors with context-aware descriptors.
This repo is based on the PyTorch Lightning framework and enables the user to train, predict, and evaluate the model.
For local feature extraction, our interface supports Kornia detectors and descriptors along with our version of SuperPoint.
We provide instructions on how to launch training on the MegaDepth dataset and test the trained models on the Image Matching Challenge.
This code is licensed under the MIT License. Modifications, distribution, commercial and academic uses are permitted. More information in LICENSE file.
1) Create a folder MegaDepth, where your dataset will be stored.
mkdir MegaDepth && cd MegaDepth
2) Download and unzip MegaDepth_v1.tar.gz from the official link.
You should now be able to see MegaDepth/phoenix directory.
3) We provide the lists of pairs for training and validation (link to download). Each line corresponds to one pair and has the following structure:
path_image_A path_image_B exif_rotationA exif_rotationB [KA_0 ... KA_8] [KB_0 ... KB_8] [T_AB_0 ... T_AB_15] overlap_AB
overlap_AB is the value of overlap between two images of the same scene; it indicates how close (in terms of position transformation) the two images are.
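Based only on the field layout above (this is a sketch, not the repository's own loader), one line can be parsed as follows:

```python
# Minimal sketch: parse one line of the pairs list into its fields.
# Based only on the field layout described above, not on OpenGlue's own loader.
import numpy as np

def parse_pair_line(line):
    tokens = line.split()
    path_a, path_b = tokens[0], tokens[1]
    exif_rot_a, exif_rot_b = int(tokens[2]), int(tokens[3])
    K_a = np.array(tokens[4:13], dtype=np.float64).reshape(3, 3)    # intrinsics of image A
    K_b = np.array(tokens[13:22], dtype=np.float64).reshape(3, 3)   # intrinsics of image B
    T_ab = np.array(tokens[22:38], dtype=np.float64).reshape(4, 4)  # relative transform A -> B
    overlap_ab = float(tokens[38])
    return path_a, path_b, exif_rot_a, exif_rot_b, K_a, K_b, T_ab, overlap_ab
```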
The resulting directory structure should be as follows:
MegaDepth/
- pairs/
| - 0000/
| | - sparse-txt/
| | | pairs.txt
...
- phoenix/S6/zl548/MegaDepth_v1/
| - 0000/
| | - dense0/
| | | - depths/
| | | | id.h5
...
| | | - images/
| | | | id.jpg
...
| | - dense1/
...
...
We also release open-source weights for OpenGlue pretrained on this dataset.
This repository is divided into several modules:
* config - configuration files with training hyperparameters
* data - preprocessing and dataset for MegaDepth
* examples - code and notebooks with examples of applications
* models - module with OpenGlue architecture and detector/descriptors methods
* utils - losses, metrics and additional training utils
For all necessary modules refer to requirements.txt
pip3 install -r requirements.txt
This code is compatible with:
* Python >= 3.6.9
* PyTorch >= 1.10.0
* PyTorch Lightning >= 1.4.9
* Kornia >= 0.6.1
* OpenCV >= 4.5.4
There are two options for feature extraction:
1) Extract features during training. No additional steps are required before launching training.
2) Extract and save features before training. We suggest using this approach, since training time is decreased immensely with pre-extracted features...