Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Deep classifiers trained on MNIST, to be run as spiking networks. The linked GitHub repository creates and runs these files. Also includes some additional images (e.g., letters) for training the Spaun visual system.
This dataset was created by Anil Thota
a: Weights learned from the MNIST dataset. Each square in the grid represents the incoming weights to a single hidden unit; the weights to the first 100 hidden units are shown. Weights from visible neurons that receive OFF inputs are subtracted from weights from visible neurons that receive ON inputs; the weights to each hidden unit are then normalized by dividing by their largest absolute value. b: Same as (a), but for the natural image patch dataset.
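The ON-minus-OFF subtraction and per-unit normalization described above can be sketched as follows. This is a minimal illustration, not the paper's code: the array names (`W_on`, `W_off`) and the random stand-in data are assumptions; only the subtraction and max-absolute-value normalization follow the caption.

```python
import numpy as np

# Stand-in data: incoming weights from ON/OFF visible neurons to 100 hidden
# units over 28x28 = 784 pixels. Shapes and names are illustrative assumptions.
rng = np.random.default_rng(0)
W_on = rng.normal(size=(100, 784))
W_off = rng.normal(size=(100, 784))

# Subtract OFF-input weights from ON-input weights (per the caption).
W = W_on - W_off

# Normalize each hidden unit's weights by its largest absolute value,
# so every row lies in [-1, 1].
W_norm = W / np.abs(W).max(axis=1, keepdims=True)

# Reshape each row into a 28x28 tile for display in a grid.
tiles = W_norm.reshape(100, 28, 28)
```

Normalizing per unit (rather than globally) keeps the grid readable when different hidden units learn weights of very different magnitudes.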
a: Behavior of the network during a typical image presentation; compare with Fig 1b. The time period of external stimulation is shown by the grey bar. The raster plot includes all neurons that fired at least one spike during the presentation. The spikes of the visible neurons are in the bottom row, and those of the hidden neurons are directly above. The top two rows, in grey, show the spikes of the inhibitory pools at each layer. Although each training presentation ran for 65 ms, all spikes occurred before 30 ms, so the raster plot is truncated there. b: The reconstruction loss (black dots, defined in the text) decreases over time, as does the sparsity loss (red; note the log scale on the y-axis). c: The trained network's attempted reconstructions of representative training images. Each image shows the ON cell values minus the OFF cell values. The first row shows the inputs to the network; the second row shows the attempted reconstructions.
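The displayed images in panel (c) are formed by subtracting OFF cell values from ON cell values at each pixel. A minimal sketch of that display step, assuming ON/OFF activity is summarized as per-pixel spike counts over the presentation window (the variable names and Poisson stand-in data are assumptions, not the paper's code):

```python
import numpy as np

# Stand-in per-pixel spike counts for ON and OFF visible cells during one
# 65 ms presentation (names and data are illustrative assumptions).
rng = np.random.default_rng(2)
on_counts = rng.poisson(2.0, size=(28, 28))
off_counts = rng.poisson(2.0, size=(28, 28))

# The displayed image is the ON cell values minus the OFF cell values,
# recovering a signed intensity per pixel.
recon = on_counts.astype(float) - off_counts
```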
a: Learned weights for the MNIST dataset with ρ = 0.001. b: Learned weights for the natural image patch dataset with ρ = 0.001. c: Learned weights for the MNIST dataset with ρ = 0.3.
The table is mainly extracted from [1, 21] and complemented with results of interest. (“-”: metric not reported).
Neither the incoming weights nor the spiking activity is uncorrelated across hidden units. a: Correlations of the final trained synaptic weights between every pair of hidden units in the MNIST network. b: Correlations of the spike counts, over 1,000 stimulus presentations, between every pair of hidden neurons for MNIST. c–d: Same, for the natural image network.
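The pairwise correlation matrices described above can be sketched with `numpy.corrcoef`. This is an illustrative outline with assumed array names and random stand-in data, not the analysis code: weight correlations treat each hidden unit's incoming weight vector as one variable, and spike-count correlations treat each unit's counts across the 1,000 presentations as one variable.

```python
import numpy as np

# Stand-in data (shapes are illustrative assumptions):
# weights: one row of incoming weights per hidden unit.
# spikes: spike counts, one row per stimulus presentation, one column per unit.
rng = np.random.default_rng(1)
weights = rng.normal(size=(100, 784))
spikes = rng.poisson(3.0, size=(1000, 100))

# Pairwise correlations of trained weights between every pair of hidden units
# (np.corrcoef treats each row as a variable).
weight_corr = np.corrcoef(weights)

# Pairwise correlations of spike counts across presentations: transpose so
# each hidden unit's counts form a row.
spike_corr = np.corrcoef(spikes.T)
```

Both matrices are 100 x 100, symmetric, with ones on the diagonal; off-diagonal structure indicates correlated units.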
The results in the table are extracted from the review paper of [42], in which 18 methods are compared on four different datasets. Our model outperforms the baselines on three of the four datasets.