100+ datasets found
  1. Replication Data for: SGDNet: An End-to-End Saliency-Guided Deep Neural Network for No-Reference Image Quality Assessment

    • researchdata.ntu.edu.sg
    Updated Oct 26, 2020
    Cite
    Sheng Yang (2020). Replication Data for: SGDNet: An End-to-End Saliency-Guided Deep Neural Network for No-Reference Image Quality Assessment [Dataset]. http://doi.org/10.21979/N9/H38R0Z
    Explore at:
    application/matlab-mat(4893402), text/x-python(13153), application/matlab-mat(3092842), application/matlab-mat(1483131), bin(3836), text/markdown(2098), application/matlab-mat(523728), text/x-python(4239), application/matlab-mat(14102287), application/matlab-mat(463363), text/x-python(15118)
    Available download formats
    Dataset updated
    Oct 26, 2020
    Dataset provided by
    DR-NTU (Data)
    Authors
    Sheng Yang
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    Ministry of Education (MOE)
    Description

    This repository contains the reference code for our ACM MM 2019 paper. Its GitHub link is https://github.com/ysyscool/SGDNet

  2. Data from: A spatial code for temporal information is necessary for efficient sensory learning

    • zenodo.org
    bin, zip
    Updated Oct 16, 2024
    Cite
    Sophie Bagur; Brice Bathellier (2024). A spatial code for temporal information is necessary for efficient sensory learning [Dataset]. http://doi.org/10.5281/zenodo.13941450
    Explore at:
    bin, zip
    Available download formats
    Dataset updated
    Oct 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sophie Bagur; Brice Bathellier
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Here we provide a large dataset of neuronal responses in the mouse auditory system to a range of simple sounds. These data were initially published in the paper by Bagur, Lebourg et al.

    The "Data" files are organized as follows:

    • Data_XX.mat : one file with the responses of all ROIs/units to all 140 sounds for each area. AC data was uploaded in two files, each containing half of the neural data set, due to file size limitations
    • Clusters_XX.mat : one file with the responses of the clustered ROIs from the calcium imaging data to all 140 sounds
    • AnatInfo_AC/ICE.mat : localisation of each ROI from the AC and ICE data
    • Correlation_Decoding.mat : the results of the core analyses from the paper, provided to accelerate plotting with the "BasicCorrelationDecodingAnalysis.mlx" code

    The "Codes" folder contains matlab code showing how to access the data and illustrating how it is organized as well as code to perform basic population level analysis from Bagur, Lebourg et al, illustrating how to calculate noise-free correlation between popultion vector

    "Packages" are open source code from github written by other members of the scientific community and saved here as a reference :

  3. Source code of the "Immediate glucose signaling transmitted via the vagus nerve in gut–brain neural communication"

    • data.mendeley.com
    Updated Dec 19, 2024
    Cite
    serika yamada (2024). Source code of the "Immediate glucose signaling transmitted via the vagus nerve in gut–brain neural communication" [Dataset]. http://doi.org/10.17632/n2v7bw8rg6.1
    Explore at:
    Dataset updated
    Dec 19, 2024
    Authors
    serika yamada
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    All .mat files are the source code of the article.

  4. Source code and data for the PhD Thesis "Learning Neural Graph Representations in Non-Euclidean Geometries"

    • heidata.uni-heidelberg.de
    zip
    Updated Apr 18, 2023
    Cite
    Federico Lopez (2023). Source code and data for the PhD Thesis "Learning Neural Graph Representations in Non-Euclidean Geometries" [Dataset]. http://doi.org/10.11588/DATA/KOAMK4
    Explore at:
    zip(4382878), zip(259028), zip(4900874), zip(2971518)
    Available download formats
    Dataset updated
    Apr 18, 2023
    Dataset provided by
    heiDATA
    Authors
    Federico Lopez
    License

    Custom license: https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/KOAMK4

    Description

    This dataset contains source code and data used in the PhD thesis "Learning Neural Graph Representations in Non-Euclidean Geometries". The dataset is split into four repositories:
    • figet: Source code to run experiments for chapter 6 "Constructing and Exploiting Hierarchical Graphs".
    • hyfi: Source code to run experiments for chapter 7 "Inferring the Hierarchy with a Fully Hyperbolic Model".
    • sympa: Source code to run experiments for chapter 8 "A Framework for Graph Embeddings on Symmetric Spaces".
    • gyroSPD: Source code to run experiments for chapter 9 "Representing Multi-Relational Graphs on SPD Manifolds".

  5. Neural-Story-v1

    • huggingface.co
    Updated Jan 8, 2024
    Cite
    Lee Jackson (2024). Neural-Story-v1 [Dataset]. https://huggingface.co/datasets/NeuralNovel/Neural-Story-v1
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 8, 2024
    Authors
    Lee Jackson
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Neural-Story-v1 Dataset

      Overview
    

    The Neural-Story-v1 dataset is a curated collection of short stories featuring a rich variety of genres and plot settings. Carefully assembled by NeuralNovel, this dataset aims to serve as a valuable resource for testing and fine-tuning small language models using LoRA.

      Data Source
    

    The dataset content is a result of a combination of automated generation by Mixtral 8x7b and manual refinement.

      Purpose
    

    Designed… See the full description on the dataset page: https://huggingface.co/datasets/NeuralNovel/Neural-Story-v1.
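
    For quick exploration, the dataset can presumably be loaded with the Hugging Face datasets library; the repository id comes from the URL above, but the split layout is an assumption, so inspect the returned object first:

        from datasets import load_dataset

        # Repository id taken from the dataset URL above.
        ds = load_dataset("NeuralNovel/Neural-Story-v1")
        print(ds)                # inspect available splits and columns
        # print(ds["train"][0])  # the "train" split name is an assumption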

  6. Visual Analytics System for Hidden States in Recurrent Neural Networks

    • darus.uni-stuttgart.de
    Updated Sep 10, 2021
    Cite
    Tanja Munz; Rafael Garcia; Daniel Weiskopf (2021). Visual Analytics System for Hidden States in Recurrent Neural Networks [Dataset]. http://doi.org/10.18419/DARUS-2052
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 10, 2021
    Dataset provided by
    DaRUS
    Authors
    Tanja Munz; Rafael Garcia; Daniel Weiskopf
    License

    Custom license: https://darus.uni-stuttgart.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18419/DARUS-2052

    Dataset funded by
    DFG
    Description

    Source code of our visual analytics system for the interpretation of hidden states in recurrent neural networks. This project contains source code for preprocessing data and for the visual analytics system itself, plus precomputed data for immediate use in the visual analytics system. The subdirectories contain the following:
    • dataPreparation: Python scripts to prepare data for analysis. In these scripts, Long Short-Term Memory (LSTM) models are trained and data for our visual analytics system is exported.
    • visualAnalytics: The source code of our visual analytics system to explore hidden states.
    • demonstrationData: Data files for use with our visual analytics system. The same data can also be generated with the data preparation scripts.
    We provide two scripts to generate data for analysis in our visual analytics system, one each for the IMDB and Reuters datasets as available in Keras. The output files can then be loaded into our visual analytics system; their locations have to be specified in userData.toml of the visual analytics system. The output files of our data preparation scripts, or the ones provided for demonstration, can be loaded into the system for visualization and analysis. Since we provide input files, you do not have to run the preprocessing steps and can use our visual analytics system immediately. Please see the respective README files for more details.

  7. Source code and data for the PhD Thesis "Linguistically-Inspired Neural Coherence Modeling"

    • heidata.uni-heidelberg.de
    txt, zip
    Updated Sep 3, 2025
    Cite
    Wei Liu (2025). Source code and data for the PhD Thesis "Linguistically-Inspired Neural Coherence Modeling" [Dataset]. http://doi.org/10.11588/DATA/ZBNUCG
    Explore at:
    txt(889), zip(41966), zip(573748), zip(41290), zip(356954), zip(51456)
    Available download formats
    Dataset updated
    Sep 3, 2025
    Dataset provided by
    heiDATA
    Authors
    Wei Liu
    License

    Custom license: https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/ZBNUCG

    Dataset funded by
    Klaus Tschira Foundation
    Description

    This dataset contains source code and data used in the PhD thesis "Linguistically-Inspired Neural Coherence Modeling". The dataset is split into five repositories:
    • StruSim: Source code to run experiments for Chapter 4 "Document Structure Similarity-Enhanced Coherence Modeling".
    • ConnRel: Source code to run experiments for Chapter 5 "Annotation-inspired Implicit Discourse Relation Classification".
    • Exp2Imp: Source code to run experiments for Chapter 6 "Explicit to Implicit Discourse Relation Classification".
    • RelCoh: Source code to run experiments for Chapter 7 "Discourse Relation-Enhanced Coherence Modeling".
    • EntyRelCoh: Source code to run experiments for Chapter 8 "Coherence Modeling Using Entities and Discourse Relations".
    The data used in the experiments can be downloaded from the Linguistic Data Consortium (https://www.ldc.upenn.edu/):
    • PDTB 2.0: https://catalog.ldc.upenn.edu/LDC2008T05
    • PDTB 3.0: https://catalog.ldc.upenn.edu/LDC2019T05
    • TOEFL Dataset: https://catalog.ldc.upenn.edu/LDC2014T06
    • GCDC: https://github.com/aylai/GCDC-corpus
    • CoheSentia: https://github.com/AviyaMn/CoheSentia

  8. Data from: Multilingual Modal Sense Classification using a Convolutional Neural Network

    • heidata.uni-heidelberg.de
    zip
    Updated Oct 7, 2019
    Cite
    Ana Marasović (2019). Multilingual Modal Sense Classification using a Convolutional Neural Network [Source Code] [Dataset]. http://doi.org/10.11588/DATA/ERDJDI
    Explore at:
    zip(3094680)
    Available download formats
    Dataset updated
    Oct 7, 2019
    Dataset provided by
    heiDATA
    Authors
    Ana Marasović
    License

    Custom license: https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.11588/DATA/ERDJDI

    Description

    Abstract: Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope. We explore a CNN architecture for classifying modal sense in English and German. We show that CNNs are superior to manually designed feature-based classifiers and a standard NN classifier. We analyze the feature maps learned by the CNN and identify known and previously unattested linguistic features. We benchmark the CNN on a standard WSD task, where it compares favorably to models using sense-disambiguated target vectors. (Marasović and Frank, 2016)

  9. Data from: A Neural Approach for Text Extraction from Scholarly Figures

    • data.uni-hannover.de
    zip
    Updated Jan 20, 2022
    Cite
    TIB (2022). A Neural Approach for Text Extraction from Scholarly Figures [Dataset]. https://data.uni-hannover.de/dataset/a-neural-approach-for-text-extraction-from-scholarly-figures
    Explore at:
    zip
    Available download formats
    Dataset updated
    Jan 20, 2022
    Dataset authored and provided by
    TIB
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Description

    A Neural Approach for Text Extraction from Scholarly Figures

    This is the readme for the supplemental data for our ICDAR 2019 paper.

    You can read our paper via IEEE here: https://ieeexplore.ieee.org/document/8978202

    If you found this dataset useful, please consider citing our paper:

    @inproceedings{DBLP:conf/icdar/MorrisTE19,
      author    = {David Morris and Peichen Tang and Ralph Ewerth},
      title     = {A Neural Approach for Text Extraction from Scholarly Figures},
      booktitle = {2019 International Conference on Document Analysis and Recognition, {ICDAR} 2019, Sydney, Australia, September 20-25, 2019},
      pages     = {1438--1443},
      publisher = {{IEEE}},
      year      = {2019},
      url       = {https://doi.org/10.1109/ICDAR.2019.00231},
      doi       = {10.1109/ICDAR.2019.00231},
      timestamp = {Tue, 04 Feb 2020 13:28:39 +0100},
      biburl    = {https://dblp.org/rec/conf/icdar/MorrisTE19.bib},
      bibsource = {dblp computer science bibliography, https://dblp.org}
    }

    This work was financially supported by the German Federal Ministry of Education and Research (BMBF) and European Social Fund (ESF) (InclusiveOCW project, no. 01PE17004).

    Datasets

    We used different sources of data for testing, validation, and training. Our testing set was assembled from the work by Böschen et al. that we cite. We excluded the DeGruyter dataset from it and used that as our validation dataset.

    Testing

    These datasets contain a readme with license information. Further information about the associated project can be found in the authors' published work, which we cite: https://doi.org/10.1007/978-3-319-51811-4_2

    Validation

    The DeGruyter dataset does not include the labeled images due to license restrictions. As of writing, the images can still be downloaded from DeGruyter via the links in the readme. Note that depending on what program you use to strip the images out of the PDF they are provided in, you may have to re-number the images.

    Training

    We used label_generator's generated dataset, which the author made available on a requester-pays Amazon S3 bucket. We also used the Multi-Type Web Images dataset, which is mirrored here.

    Code

    We have made our code available in code.zip. We will upload code, announce further news, and field questions via the GitHub repo.

    Our text detection network is adapted from Argman's EAST implementation. The EAST/checkpoints/ours subdirectory contains the trained weights we used in the paper.

    We used a Tesseract script to run text extraction on the detected text rows. This is included in our code archive as text_recognition_multipro.py.
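
    text_recognition_multipro.py itself is not reproduced here; as a rough illustration of the underlying step, a single-image Tesseract call via pytesseract (an assumption - the script may invoke tesseract differently) could look like this:

        from PIL import Image
        import pytesseract

        # OCR a single cropped text row; "detected_row.png" is a hypothetical
        # crop taken from the EAST text-detection output.
        row = Image.open("detected_row.png")
        text = pytesseract.image_to_string(row, config="--psm 7")  # single-line mode
        print(text)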

    We used a Java tool provided by Falk Böschen, adapted to our file structure. We included this as evaluator.jar.

    Parameter sweeps are automated by param_sweep.rb. This file also shows how to invoke all of these components.

  10. The source codes of pRVFLN

    • researchdata.ntu.edu.sg
    Updated Jun 24, 2022
    Cite
    Mahardhika Pratama; Plamen P. Angelov; Edwin Lughofer; Meng Joo Er (2022). The source codes of pRVFLN [Dataset]. http://doi.org/10.21979/N9/FVZOI9
    Explore at:
    text/markdown(14610), text/x-matlab(919), text/plain; charset=us-ascii(10), text/x-matlab(1918), text/x-matlab(912), application/matlab-mat(492809), text/markdown(1895), text/x-matlab(214), text/x-matlab(1299), text/x-matlab(253), p(7600), txt(1691), text/x-matlab(1292), text/plain; charset=us-ascii(8), p(7548), txt(288)
    Available download formats
    Dataset updated
    Jun 24, 2022
    Dataset provided by
    DR-NTU (Data)
    Authors
    Mahardhika Pratama; Plamen P. Angelov; Edwin Lughofer; Meng Joo Er
    License

    Custom license: https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/FVZOI9

    Dataset funded by
    Nanyang Technological University
    Austrian COMET-K2 program
    Description

    The source code of pRVFLN ("Parsimonious random vector functional link network for data streams").

  11. Fine-grained neural coding of bodies and body parts in human visual cortex

    • search.dataone.org
    • data.niaid.nih.gov
    • +1more
    Updated Nov 21, 2024
    Cite
    Peter Janssen (2024). Fine-grained neural coding of bodies and body parts in human visual cortex [Dataset]. http://doi.org/10.5061/dryad.2v6wwpzz0
    Explore at:
    Dataset updated
    Nov 21, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    Peter Janssen
    Description

    Body perception plays a fundamental role in social cognition. Yet, the neural mechanisms underlying this process in humans remain elusive given the spatiotemporal constraints of functional imaging. Here we present for the first time intracortical recordings of single- and multi-unit spiking activity in two epilepsy surgery patients in or near the extrastriate body area (EBA), a critical region for body perception. Our recordings revealed a strong preference for human bodies over a large range of control stimuli. Notably, body selectivity was driven by a distinct selectivity for body parts. The observed body selectivity generalized to non-photographic depictions of bodies, including silhouettes and stick figures. Overall, our study provides unique neural data that bridge the gap between human neuroimaging and macaque electrophysiology studies, laying a solid foundation for computational models of human body processing.

    All data were collected using 96-channel Utah arrays implanted in human visual cortex and recorded with a Neuroport system (Blackrock Neurotech).

    Supplementary Materials for [Your Scientific Paper Title]

    This repository contains the supplementary data and source code used in our study. The materials are organized into two main sections: Data and Source Code. Below, you will find detailed information on how the data is structured and how to utilize the provided source code for analysis.

    Table of Contents

    Data

    Directory Structure

    The data is organized into folders corresponding to different experiments. Each experiment folder contains subfolders for each...

  12. Follow-up Attention: An Empirical Study of Developer and Neural Model Code Exploration - Raw Eye Tracking Data

    • figshare.com
    zip
    Updated Aug 15, 2024
    Cite
    Matteo Paltenghi (2024). Follow-up Attention: An Empirical Study of Developer and Neural Model Code Exploration - Raw Eye Tracking Data [Dataset]. http://doi.org/10.6084/m9.figshare.23599251.v1
    Explore at:
    zip
    Available download formats
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Matteo Paltenghi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset of 92 valid eye-tracking sessions of 25 participants working in VS Code and answering 15 different code-understanding questions (e.g., what is the output, side effects, algorithmic complexity, concurrency, etc.) on source code written in 3 programming languages: Python, C++, and C#.

  13. Data from: ModelDB

    • dknet.org
    • scicrunch.org
    • +1more
    Updated Sep 1, 2024
    Cite
    (2024). ModelDB [Dataset]. http://identifiers.org/RRID:SCR_007271
    Explore at:
    Dataset updated
    Sep 1, 2024
    Description

    Curated database of published models so that they can be openly accessed, downloaded, and tested to support computational neuroscience. Provides an accessible location for storing and efficiently retrieving computational neuroscience models. Coupled with NeuronDB. Models can be coded in any language for any environment. Model code can be viewed before downloading, and browsers can be set to auto-launch the models. The model source code has to be available from a publicly accessible online repository or website. The original source code is used to generate the simulation results from which authors derived their published insights and conclusions.

  14. Follow-up Attention: An Empirical Study of Developer and Neural Model Code Exploration - Experimental Data

    • figshare.com
    zip
    Updated Aug 15, 2024
    Cite
    Matteo Paltenghi (2024). Follow-up Attention: An Empirical Study of Developer and Neural Model Code Exploration - Experimental Data [Dataset]. http://doi.org/10.6084/m9.figshare.23599233.v1
    Explore at:
    zip
    Available download formats
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Matteo Paltenghi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The comparison data from the experiments of the paper "Extracting Meaningful Attention on Source Code: An Empirical Study of Developer and Neural Model Code Exploration", used to reproduce the plots in the paper.

  15. Replication Data for: Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection

    • researchdata.ntu.edu.sg
    application/gzip
    Updated Feb 28, 2020
    Cite
    DR-NTU (Data) (2020). Replication Data for: Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection [Dataset]. http://doi.org/10.21979/N9/WCIL7X
    Explore at:
    application/gzip(1072557183), application/gzip(13466)
    Available download formats
    Dataset updated
    Feb 28, 2020
    Dataset provided by
    DR-NTU (Data)
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    Ministry of Education (MOE)
    Description

    This dataset contains the program source code, model file, and experimental data for the analysis in the paper "Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection".

  16. Data sets used in "Neural network processing of holographic images"

    • zenodo.org
    bin, nc
    Updated Mar 12, 2022
    Cite
    John S. Schreck; Gabrielle Gantos; Matthew Hayman; Aaron Bensemer; David John Gagne (2022). Data sets used in "Neural network processing of holographic images" [Dataset]. http://doi.org/10.5281/zenodo.6347222
    Explore at:
    nc, bin
    Available download formats
    Dataset updated
    Mar 12, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    John S. Schreck; Gabrielle Gantos; Matthew Hayman; Aaron Bensemer; David John Gagne
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Included are the training, validation, and testing data sets for synthetic holograms (netCDF), the HOLODEC data set containing the RF07 examples (netCDF), and the two splits of manually labeled HOLODEC image tiles (numpy arrays). The source code for using the data sets can be found at https://github.com/NCAR/holodec-ml
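
    Since the main files are netCDF, a minimal Python sketch for inspecting one of them might use xarray; the filename and variable names below are assumptions, so list the data variables first:

        import xarray as xr

        # Open one of the netCDF hologram files; the filename is an assumption.
        ds = xr.open_dataset("synthetic_holograms_training.nc")
        print(ds)                  # dimensions, coordinates and data variables
        print(list(ds.data_vars))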

  17. Related Data for: Fingerprinting Deep Neural Networks - a DeepFool Approach

    • researchdata.ntu.edu.sg
    bin, png +4
    Updated Oct 16, 2024
    Cite
    Chip Hong Chang (2024). Related Data for: Fingerprinting Deep Neural Networks - a DeepFool Approach [Dataset]. http://doi.org/10.21979/N9/ZDWQLI
    Explore at:
    bin(44816447), bin(1172783), bin(44816191), bin(44829135), text/x-python(2771), bin(1149359), bin(2251383), bin(1140531), text/x-python(2352), text/x-python(1540), bin(2727066), text/markdown(2577), bin(44829007), bin(2298231), txt(729), bin(1190975), bin(5165), text/x-python(5697), bin(1144127), png(122673), text/x-python(2774), bin(417), text/x-python(1126), bin(1117131), bin(2703666), text/x-python(169), bin(1933773), text/x-python(225), bin(1910373), bin(3520524), bin(3497124), tsv(2320), bin(44816255)
    Available download formats
    Dataset updated
    Oct 16, 2024
    Dataset provided by
    DR-NTU (Data)
    Authors
    Chip Hong Chang
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    National Research Foundation (NRF)
    Description

    This dataset contains partial program source code and the data for analysis for the paper "Fingerprinting Deep Neural Networks - a DeepFool Approach".

  18. Ergo: A Gesture-Based Computer Interaction Device

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    Updated Nov 27, 2023
    Cite
    Kane, Boyd Robert (2023). Ergo: A Gesture-Based Computer Interaction Device [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10209418
    Explore at:
    Dataset updated
    Nov 27, 2023
    Dataset provided by
    Stellenbosch University
    Authors
    Kane, Boyd Robert
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset accompanies the Master of Science (Computer Science) thesis by B.R. Kane titled "Ergo: A Gesture-Based Computer Interaction Device". It contains the raw sensor recordings in CSV format (train/), the pre-processed training/validation and testing datasets trn_20_10.npz and tst_20_10.npz, and the dataset of the literature in BibTeX as well as CSV format. The code to train machine learning models on the raw sensor data is available on GitHub: https://github.com/beyarkay/masters-code/ The code to analyse the gesture recognition is also available on GitHub (along with the source code of the thesis): https://github.com/beyarkay/masters-thesis/
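
    A minimal sketch for inspecting the pre-processed archives with NumPy; the array names stored inside the .npz files are not documented here, so print them first:

        import numpy as np

        # Load the pre-processed training/validation archive named above.
        archive = np.load("trn_20_10.npz")
        print(archive.files)                 # list the stored array names
        # X, y = archive["X"], archive["y"]  # hypothetical key names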

  19. Emotion Prediction with Quantum5 Neural Network AI

    • kaggle.com
    zip
    Updated Oct 19, 2025
    Cite
    EMİRHAN BULUT (2025). Emotion Prediction with Quantum5 Neural Network AI [Dataset]. https://www.kaggle.com/datasets/emirhanai/emotion-prediction-with-semi-supervised-learning
    Explore at:
    zip(2332683 bytes)
    Available download formats
    Dataset updated
    Oct 19, 2025
    Authors
    EMİRHAN BULUT
    License

    Public Domain (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Emotion Prediction with Quantum5 Neural Network AI Machine Learning - By Emirhan BULUT

    V1

    I created artificial intelligence software that can predict emotion from text you have written, using a semi-supervised learning method and the RC algorithm. I used very simple code and focused on solving the problem. I aim to create the 2nd version of the software using an RNN (Recurrent Neural Network). I hope this serves as an example you can use in your theses and projects.

    V2

    I decided to apply a technique I had developed to the emotion dataset on which I had previously used semi-supervised machine learning methods. This technique is produced according to Quantum5 laws. I developed smart artificial intelligence software that can predict emotion with Quantum5 neural networks. I share this software with all humanity as open source on Kaggle. It is my first open-source NLP project with Quantum technology. Developing an NLP system with Quantum technology is very exciting! (A generic illustrative sketch follows the library list below.)

    Happy learning!

    Emirhan BULUT

    Head of AI and AI Inventor

    Emirhan BULUT. (2022). Emotion Prediction with Quantum5 Neural Network AI [Data set]. Kaggle. https://doi.org/10.34740/KAGGLE/DS/2129637

    The coding language used:

    Python 3.9.8

    Libraries Used:

    Keras

    Tensorflow

    NumPy

    Pandas

    Scikit-learn (SKLEARN)
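
    The RC algorithm itself is not documented here; purely as a generic illustration of semi-supervised text classification with the listed libraries (not the author's method), one might write:

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.semi_supervised import SelfTrainingClassifier

        texts = ["i am so happy today", "this is terrible", "what a great day"]
        labels = np.array([1, 0, -1])  # -1 marks the unlabeled example

        X = TfidfVectorizer().fit_transform(texts)
        clf = SelfTrainingClassifier(LogisticRegression())  # self-training wrapper
        clf.fit(X, labels)
        print(clf.predict(X))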

    Project images:

    https://raw.githubusercontent.com/emirhanai/Emotion-Prediction-with-Semi-Supervised-Learning-of-Machine-Learning-Software-with-RC-Algorithm---By/main/Quantum%205.png

    https://raw.githubusercontent.com/emirhanai/Emotion-Prediction-with-Semi-Supervised-Learning-of-Machine-Learning-Software-with-RC-Algorithm---By/main/Emotion%20Prediction%20with%20Semi%20Supervised%20Learning%20of%20Machine%20Learning%20Software%20with%20RC%20Algorithm%20-%20By%20Emirhan%20BULUT.png

    Developer Information:

    Name-Surname: Emirhan BULUT

    Contact (Email) : emirhan@isap.solutions

    LinkedIn : https://www.linkedin.com/in/artificialintelligencebulut/

    Kaggle: https://www.kaggle.com/emirhanai

    Official Website: https://www.emirhanbulut.com.tr

  20. OpenGlue

    • kaggle.com
    zip
    Updated Apr 26, 2022
    Cite
    k_s (2022). OpenGlue [Dataset]. https://www.kaggle.com/datasets/ksork6s4/openglue
    Explore at:
    zip(340106 bytes)
    Available download formats
    Dataset updated
    Apr 26, 2022
    Authors
    k_s
    Description

    This dataset is a clone of the original repository.

    The following is the README of the original repository.

    =======================================

    OpenGlue - Open Source Pipeline for Image Matching


    This is an implementation of the training, inference, and evaluation scripts for OpenGlue under an open-source license. Our paper: OpenGlue: Open Source Graph Neural Net Based Pipeline for Image Matching.

    Overview

    SuperGlue is a method for learning feature matching using a graph neural network, proposed by a team (Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich) from Magic Leap. Official full paper: SuperGlue: Learning Feature Matching with Graph Neural Networks.

    We present OpenGlue: a free open-source framework for image matching that uses a Graph Neural Network-based matcher inspired by SuperGlue. We show that including additional geometrical information, such as local feature scale, orientation, and affine geometry, when available (e.g. for SIFT features), significantly improves the performance of the OpenGlue matcher. We study the influence of the various attention mechanisms on accuracy and speed. We also present a simple architectural improvement by combining local descriptors with context-aware descriptors.

    This repo is based on the PyTorch Lightning framework and enables users to train, predict, and evaluate the model.

    For local feature extraction, our interface supports Kornia detectors and descriptors along with our version of SuperPoint.

    We provide instructions on how to launch training on the MegaDepth dataset and test the trained models on the Image Matching Challenge.

    License

    This code is licensed under the MIT License. Modification, distribution, and commercial and academic uses are permitted. More information is in the LICENSE file.

    Data

    Steps to prepare MegaDepth dataset for training

    1) Create a folder MegaDepth, where your dataset will be stored:
        mkdir MegaDepth && cd MegaDepth
    2) Download and unzip MegaDepth_v1.tar.gz from the official link. You should now be able to see the MegaDepth/phoenix directory.
    3) We provide the lists of pairs for training and validation (link to download). Each line corresponds to one pair and has the following structure:
        path_image_A path_image_B exif_rotationA exif_rotationB [KA_0 ... KA_8] [KB_0 ... KB_8] [T_AB_0 ... T_AB_15] overlap_AB
    overlap_AB is a value of overlap between two images of the same scene; it shows how close (in position transformation) the two images are. A parsing sketch follows the directory layout below.

    The resulting directory structure should be as follows:
        MegaDepth/
        - pairs/
        | - 0000/
        | | - sparse-txt/
        | | | pairs.txt
        ...
        - phoenix/S6/zl548/MegaDepth_v1/
        | - 0000/
        | | - dense0/
        | | | - depths/
        | | | | id.h5
        ...
        | | | - images/
        | | | | id.jpg
        ...
        | | - dense1/
        ...
        ...
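
    A small Python sketch of how one pair line could be parsed under the field layout above (assuming whitespace-separated values and that the brackets shown in the description are not literally present in the file):

        import numpy as np

        def parse_pair_line(line):
            # 2 paths, 2 EXIF rotations, two 3x3 intrinsics (KA, KB),
            # a flattened 4x4 transform T_AB, and the overlap value: 39 fields.
            tok = line.split()
            path_a, path_b = tok[0], tok[1]
            rot_a, rot_b = int(tok[2]), int(tok[3])
            K_a = np.array(tok[4:13], dtype=float).reshape(3, 3)
            K_b = np.array(tok[13:22], dtype=float).reshape(3, 3)
            T_ab = np.array(tok[22:38], dtype=float).reshape(4, 4)
            overlap = float(tok[38])
            return path_a, path_b, rot_a, rot_b, K_a, K_b, T_ab, overlap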

    Steps to prepare Oxford-Paris dataset for pre-training

    We also release the open-source weights for a pretrained OpenGlue on this dataset.

    Usage

    This repository is divided into several modules:
    • config - configuration files with training hyperparameters
    • data - preprocessing and dataset for MegaDepth
    • examples - code and notebooks with examples of applications
    • models - module with OpenGlue architecture and detector/descriptor methods
    • utils - losses, metrics and additional training utils

    Dependencies

    For all necessary modules refer to requirements.txt:
        pip3 install -r requirements.txt

    This code is compatible with:
    • Python >= 3.6.9
    • PyTorch >= 1.10.0
    • PyTorch Lightning >= 1.4.9
    • Kornia >= 0.6.1
    • OpenCV >= 4.5.4

    Training

    Extracting features

    There are two options for feature extraction:

    1) Extract features during training. No additional steps are required before launching training.

    2) Extract and save features before training. We suggest using this approach, since training time is decreased immensely with pre-extracted features...
