8 datasets found
  1. Auto Encoder MNIST Dataset

    • kaggle.com
    zip
    Updated Jan 31, 2019
    Cite
    Anil Thota (2019). Auto Encoder MNIST Dataset [Dataset]. https://www.kaggle.com/datasets/athota1/mnist-data
    Available download formats: zip (0 bytes)
    Dataset updated
    Jan 31, 2019
    Authors
    Anil Thota
    Description

    This dataset was created by Anil Thota.

  2. Deep MNIST classifiers

    • figshare.com
    application/gzip
    Updated May 31, 2023
    Cite
    Eric Hunsberger (2023). Deep MNIST classifiers [Dataset]. http://doi.org/10.6084/m9.figshare.1446129.v2
    Available download formats: application/gzip
    Dataset updated
    May 31, 2023
    Dataset provided by
    figshare
    Authors
    Eric Hunsberger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Deep classifiers trained on MNIST, to be run as spiking networks. See the linked GitHub repository, which creates and runs these files. Includes some additional images (e.g. letters) for training the Spaun visual system.

  3. Behavior of the simulated spiking network for the MNIST dataset.

    • plos.figshare.com
    tiff
    Updated Jun 2, 2023
    Cite
    Kendra S. Burbank (2023). Behavior of the simulated spiking network for the MNIST dataset. [Dataset]. http://doi.org/10.1371/journal.pcbi.1004566.g006
    Available download formats: tiff
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS Computational Biology
    Authors
    Kendra S. Burbank
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    a: Behavior of the network during a typical image presentation; compare with Fig 1b. The time period of external stimulation is shown by the grey bar. The raster plot includes all neurons that fired at least one spike during the presentation. The spikes of the visible neurons are in the bottom row, with those of the hidden neurons directly above. The top two rows, in grey, show the spikes of the inhibitory pools at each layer. Although each training presentation ran for 65 ms, all spikes occurred before 30 ms, so the raster plot is truncated there. b: The reconstruction loss function (black dots; defined in the text) decreases over time, as does the sparsity loss function (red; note the log scale on the y-axis). c: The trained network's attempted reconstructions of representative training images. Each image shows the ON cell values minus the OFF cell values. The first row shows the inputs to the network; the second row shows the attempted reconstructions.

  4. Feedforward weights after training for the MNIST and natural image patch datasets.

    • figshare.com
    tiff
    Updated May 30, 2023
    Cite
    Kendra S. Burbank (2023). Feedforward weights after training for the MNIST and natural image patch datasets. [Dataset]. http://doi.org/10.1371/journal.pcbi.1004566.g004
    Available download formats: tiff
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS Computational Biology
    Authors
    Kendra S. Burbank
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    a: Weights learned from the MNIST dataset. Each square in the grid represents the incoming weights to a single hidden unit; weights to the first 100 hidden units are shown. Weights from visible neurons which receive OFF inputs are subtracted from the weights from visible neurons which receive ON inputs. Then, weights to each neuron are normalized by dividing by the largest absolute value. b: Same as (a), but for the natural image patch dataset.
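The visualization procedure this caption describes (subtract the OFF-pathway weights from the ON-pathway weights, then normalize each hidden unit by its largest absolute value) can be sketched as follows. This is a generic illustration, not the author's code; the array names and shapes are assumptions:

```python
import numpy as np

def visualize_hidden_weights(w_on, w_off):
    """Combine ON/OFF feedforward weights into displayable images.

    w_on, w_off: arrays of shape (n_hidden, n_pixels) holding the
    weights from ON- and OFF-input visible neurons to each hidden unit.
    Returns an (n_hidden, n_pixels) array with each row scaled to [-1, 1].
    """
    # Subtract the OFF-pathway weights from the ON-pathway weights.
    w = w_on - w_off
    # Normalize each hidden unit's weights by its largest absolute value,
    # so every unit's image fills the same display range.
    scale = np.abs(w).max(axis=1, keepdims=True)
    return w / np.maximum(scale, 1e-12)  # guard against all-zero rows
```

Each row of the result can then be reshaped to the image dimensions (e.g. 28×28 for MNIST) and rendered as one square of the grid.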

  5. Learned hidden unit weights for different target activation rates ρ.

    • plos.figshare.com
    tiff
    Updated Jun 11, 2023
    Cite
    Kendra S. Burbank (2023). Learned hidden unit weights for different target activation rates ρ. [Dataset]. http://doi.org/10.1371/journal.pcbi.1004566.g010
    Available download formats: tiff
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    PLOS Computational Biology
    Authors
    Kendra S. Burbank
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    a: Learned weights for the MNIST dataset with ρ = 0.001. b: Learned weights for the natural image patch dataset with ρ = 0.001. c: Learned weights for the MNIST dataset with ρ = 0.3.

  6. Hidden unit correlations after training.

    • plos.figshare.com
    tiff
    Updated May 30, 2023
    Cite
    Kendra S. Burbank (2023). Hidden unit correlations after training. [Dataset]. http://doi.org/10.1371/journal.pcbi.1004566.g008
    Available download formats: tiff
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Kendra S. Burbank
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Neither the incoming weights nor the spiking activity is uncorrelated between hidden units. a: Correlations of the final trained synaptic weights between every pair of hidden units in the MNIST network. b: Correlations of the spike numbers from 1,000 stimulus presentations between every pair of hidden neurons for MNIST. c–d: Same, for the natural image network.
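Both kinds of pairwise correlations the caption describes (between the hidden units' incoming-weight vectors, and between their spike counts across stimulus presentations) can be computed with `numpy.corrcoef`. A minimal sketch, assuming the array shapes below; this is not the paper's code:

```python
import numpy as np

def hidden_unit_correlations(weights, spike_counts):
    """Pairwise correlations between hidden units.

    weights:      (n_hidden, n_inputs) incoming weights per hidden unit
    spike_counts: (n_hidden, n_presentations) spikes fired per stimulus
    Returns two (n_hidden, n_hidden) correlation matrices.
    """
    # np.corrcoef treats each row as one variable, so with rows as
    # hidden units, entry (i, j) is the correlation between units i and j.
    weight_corr = np.corrcoef(weights)
    spike_corr = np.corrcoef(spike_counts)
    return weight_corr, spike_corr
```

Each returned matrix is symmetric with ones on the diagonal; off-diagonal structure is what panels a–d visualize.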

  7. Performance comparison of our method MoE-Sim-VAE with several published methods on MNIST.

    • plos.figshare.com
    xls
    Updated Jun 10, 2023
    Cite
    Andreas Kopf; Vincent Fortuin; Vignesh Ram Somnath; Manfred Claassen (2023). Performance comparison of our method MoE-Sim-VAE with several published methods on MNIST. [Dataset]. https://plos.figshare.com/articles/dataset/Performance_comparison_of_our_method_MoE-Sim-VAE_with_several_published_methods_on_MNIST_/14887791
    Available download formats: xls
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Andreas Kopf; Vincent Fortuin; Vignesh Ram Somnath; Manfred Claassen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The table is mainly extracted from [1, 21] and complemented with results of interest. (“-”: metric not reported).

  8. Comparison of MoE-Sim-VAE performance to competitor methods in defining cell type composition in CyTOF measurements.

    • figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Andreas Kopf; Vincent Fortuin; Vignesh Ram Somnath; Manfred Claassen (2023). Comparison of MoE-Sim-VAE performance to competitor methods in defining cell type composition in CyTOF measurements. [Dataset]. http://doi.org/10.1371/journal.pcbi.1009086.t003
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS Computational Biology
    Authors
    Andreas Kopf; Vincent Fortuin; Vignesh Ram Somnath; Manfred Claassen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The results in the table are extracted from the review paper of [42], where 18 methods are compared on four different datasets. Our model outperforms the baselines on three of the four datasets.

  9. Not seeing a result you expected?
    Learn how you can add new datasets to our index.

Auto Encoder MNIST Dataset: 4 scholarly articles cite this dataset (via Google Scholar).