Database of microarray analyses of twelve major classes of fluorescently labeled neurons within the adult mouse forebrain, providing the first comprehensive view of gene expression differences among them. The publicly available datasets demonstrate a profound molecular heterogeneity among neuronal subtypes, represented disproportionately by gene paralogs, and begin to reveal the genetic programs underlying the fundamental divisions between neuronal classes, including that between glutamatergic and GABAergic neurons. Five of the 12 populations were chosen from cingulate cortex and included several subtypes of GABAergic interneurons and pyramidal neurons. The remaining seven were derived from the somatosensory cortex, hippocampus, amygdala and thalamus. Using these expression profiles, the authors were able to construct a taxonomic tree that reflected the expected major relationships between these populations, such as the distinction between cortical interneurons and projection neurons. The taxonomic tree indicated highly heterogeneous gene expression even within a single region. This dataset should be useful for the classification of unknown neuronal subtypes, the investigation of specifically expressed genes and the genetic manipulation of specific neuronal circuit elements.
Datasets:
* Full: query gene expression results for the neuronal populations.
* Strain: query the same expression results as under the Full checkbox, with one additional population (CT6-CG2) included as a control for the effects of mouse strain. This population is identical to CT6-CG (YFPH) except that the neurons were derived from wild-type mice of three distinct strains: G42, G30, and GIN.
* Arlotta: query the same expression results as under the Full checkbox, with nine additional populations from the dataset of Arlotta et al., 2005. These populations were purified by FACS after retrograde labeling with fluorescent microspheres.
Populations are designated by the prefix ACS for corticospinal neurons, ACC for corticocallosal neurons and ACT for corticotectal neurons, followed by the suffix E18 for gestational age 18 embryos, or P3, P6 and P14 for postnatal day 3, 6 and 14 pups.
For each successful gene query the following information is returned:
# Signal level line plot: signal level is plotted on the Y-axis (log base 2) for each sample. Samples include the thirty-six representing the twelve populations profiled in Sugino et al. In addition, six samples from homogenized (dissociated but not sorted) cortex are included, representing two different strains: G42-HO is homogenate from strain G42, GIN-HO is homogenate from strain GIN.
# Signal level raster plots: signal level is represented by color (dark red is low, bright yellow is high) for all samples. The color scale is set to match the minimum (dark red) and maximum (bright yellow) signal levels within the displayed set of probe sets.
# Scaled signal level raster plots: same as 2), except that the color scale is adjusted separately for each gene according to its maximum and minimum signal level.
# Table: basic information about the returned probe sets:
* Affymetrix affyid of the probe set
* NCBI gene symbol, NCBI gene name
* NCBI geneID
* P-value from ANOVA for each gene, if available (_anv column). The p-value represents the probability that there is no difference in expression across cell types.
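The difference between the plain and scaled raster views is just a normalization choice; a minimal sketch of both (our own illustration, not the site's code):

```python
import numpy as np

def raster_scale(signal, per_gene=False):
    """Map log2 signal levels to [0, 1] for a color raster.

    signal: 2-D array, rows = genes (probe sets), columns = samples.
    per_gene=False mimics the plain raster plot (one color scale for
    the whole displayed set of probe sets); per_gene=True mimics the
    scaled raster plot (each gene normalized by its own min and max).
    """
    s = np.asarray(signal, dtype=float)
    if per_gene:
        lo = s.min(axis=1, keepdims=True)
        hi = s.max(axis=1, keepdims=True)
    else:
        lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo)

# Two genes with very different absolute expression levels:
levels = np.log2([[100, 200, 400], [10000, 20000, 40000]])
print(raster_scale(levels, per_gene=True))  # per-gene scaling: both rows span 0..1
```

With `per_gene=False` the low-expression gene would appear almost uniformly dark, which is exactly why the site offers the scaled view.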
A database to support research on drugs for the treatment of different neurological disorders. It contains agents that act on neuronal receptors and signal transduction pathways in the normal brain and in nervous disorders. It enables searches for drug actions at the level of key molecular constituents, cell compartments and individual cells, with links to models of these actions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Details of five sEMG benchmark databases.
This page describes the contents of a database of 1.7 million model neurons. This database is available for interested researchers after contacting the creators, but is not web accessible. The construction and analysis of the database are described in detail in Prinz AA, Billimoria CP, Marder E (2003). Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J Neurophysiol 90: 3998-4015. Because of its size (over 6 GB even in the zipped version), it is not practicable to download the database over the internet. Instead, we have made multiple copies of the database on sets of two DVDs each. We are happy to send a set of DVDs to anybody who is interested upon e-mail request to Astrid Prinz.
Abstract: The objective of this work was to compare methods of obtaining the site index for eucalyptus (Eucalyptus spp.) stands, as well as to evaluate their impact on the stability of this index in databases with and without outliers. Three methods were tested, using linear regression, quantile regression, and an artificial neural network. Twenty-two permanent plots from a continuous forest inventory were used, measured in trees with ages from 23 to 83 months. The outliers were identified using a boxplot graphic. The artificial neural network showed better results than the linear and quantile regressions, both for dominant height and site index estimates. The stability obtained for the site index classification by the artificial neural network was also better than the one obtained by the other methods, regardless of the presence or absence of outliers in the database. This shows that the artificial neural network is a robust modelling technique in the presence of outliers. When the cause of the outliers in the database is not known, they can be kept in it if techniques such as artificial neural networks or quantile regression are used.
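The boxplot criterion mentioned above is typically the Tukey fence rule (points beyond 1.5 interquartile ranges from the quartiles are drawn beyond the whiskers). A minimal sketch of that rule, as our own illustration rather than the authors' code:

```python
def boxplot_outliers(values, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR],
    the same rule a boxplot uses to plot points beyond its whiskers."""
    xs = sorted(values)

    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (len(xs) - 1)
        lo, frac = int(pos), pos - int(pos)
        return xs[lo] + frac * (xs[min(lo + 1, len(xs) - 1)] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [x for x in values if x < q1 - k * iqr or x > q3 + k * iqr]

# Hypothetical dominant heights (m) with one suspicious measurement:
heights = [18.2, 19.1, 19.5, 20.0, 20.3, 20.8, 35.0]
print(boxplot_outliers(heights))  # → [35.0]
```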
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Official dataset can be found here: https://springernature.figshare.com/articles/dataset/A_pediatric_ECG_database_with_disease_diagnosis_covering_11643_children/27078763
license: cc
IAM Handwriting Database
The IAM Handwriting Database contains forms of handwritten English text which can be used to train and test handwritten text recognizers and to perform writer identification and verification experiments.
The database was first published in [1] at ICDAR 1999. Using this database, an HMM-based recognition system for handwritten sentences was developed and published in [2] at ICPR 2000. The segmentation scheme used in the second version of the database is documented in [3], published at ICPR 2002. The IAM-database as of October 2002 is described in [4]. We use the database extensively in our own research; see publications for further details.
The database contains forms of unconstrained handwritten text, which were scanned at a resolution of 300dpi and saved as PNG images with 256 gray levels. The figure below provides samples of a complete form, a text line and some extracted words.
Characteristics
The IAM Handwriting Database 3.0 is structured as follows:
* 657 writers contributed samples of their handwriting
* 1'539 pages of scanned text
* 5'685 isolated and labeled sentences
* 13'353 isolated and labeled text lines
* 115'320 isolated and labeled words
The words have been extracted from pages of scanned text using an automatic segmentation scheme and were verified manually. The segmentation scheme was developed at our institute [3].
All form, line and word images are provided as PNG files. The corresponding form label files, including segmentation information and a variety of estimated parameters (from the preprocessing steps described in [2]), are provided as meta-information in XML format, documented in the accompanying XML file format definition (DTD).
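Consuming such XML label files follows the usual parsing pattern; a generic sketch with `xml.etree` in which the element and attribute names are hypothetical — the real ones are defined by the DTD shipped with the database:

```python
import xml.etree.ElementTree as ET

# Hypothetical form label file; consult the distributed DTD for the
# actual tag and attribute names.
xml_text = """
<form id="a01-000u" writer="000">
  <line id="a01-000u-00" text="A MOVE to stop Mr. Gaitskell">
    <word id="a01-000u-00-00" text="A" x="408" y="768" w="27" h="51"/>
    <word id="a01-000u-00-01" text="MOVE" x="507" y="766" w="213" h="56"/>
  </line>
</form>
"""

root = ET.fromstring(xml_text)
for line in root.iter("line"):
    # Collect (id, transcription) pairs plus bounding boxes per word.
    words = [(w.get("id"), w.get("text")) for w in line.iter("word")]
    print(line.get("id"), "->", words)
```

The per-word bounding-box attributes are what make writer-identification and word-recognition experiments possible without re-segmenting the scans.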
References
[1] U. Marti and H. Bunke. A full English sentence database for off-line handwriting recognition. In Proc. of the 5th Int. Conf. on Document Analysis and Recognition, pages 705 - 708, 1999.
[2] U. Marti and H. Bunke. Handwritten Sentence Recognition. In Proc. of the 15th Int. Conf. on Pattern Recognition, Volume 3, pages 467 - 470, 2000.
[3] M. Zimmermann and H. Bunke. Automatic Segmentation of the IAM Off-line Database for Handwritten English Text. In Proc. of the 16th Int. Conf. on Pattern Recognition, Volume 4, pages 35 - 39, 2002.
[4] U. Marti and H. Bunke. The IAM-database: An English Sentence Database for Off-line Handwriting Recognition. Int. Journal on Document Analysis and Recognition, Volume 5, pages 39 - 46, 2002.
[5] S. Johansson, G.N. Leech and H. Goodluck. Manual of Information to accompany the Lancaster-Oslo/Bergen Corpus of British English, for use with digital Computers. Department of English, University of Oslo, Norway, 1978.
According to our latest research, the global Neural Search Platforms market size reached USD 4.8 billion in 2024, demonstrating robust growth driven by surging demand for advanced search capabilities across industries. The market is projected to expand at a CAGR of 24.7% from 2025 to 2033, reaching an estimated USD 41.2 billion by 2033. This remarkable growth trajectory is propelled by the increasing adoption of artificial intelligence and machine learning technologies to enhance information retrieval, relevance, and user experience in digital ecosystems. The proliferation of unstructured data, coupled with the need for semantic and context-aware search functionality, is fundamentally transforming how organizations interact with and extract value from their data assets.
One of the primary growth factors fueling the neural search platforms market is the exponential surge in data volumes across enterprises. Organizations are generating vast amounts of unstructured data from sources such as emails, social media, documents, and multimedia, making traditional keyword-based search solutions increasingly inadequate. Neural search platforms leverage deep learning and natural language processing to understand context, intent, and semantics, delivering more accurate and relevant results. This capability is particularly critical in industries like e-commerce, healthcare, and BFSI, where precise information retrieval can significantly enhance decision-making and customer engagement. The growing realization of these benefits is prompting enterprises to invest in neural search solutions, driving market expansion.
Another significant driver for the neural search platforms market is the ongoing digital transformation initiatives undertaken by businesses worldwide. As organizations strive to offer personalized experiences and improve operational efficiency, the need for intelligent search solutions that can interpret complex queries and deliver tailored results is becoming paramount. Neural search platforms enable enterprises to bridge the gap between user intent and available information, facilitating seamless navigation through vast data repositories. Furthermore, advancements in AI algorithms, increased computational power, and the availability of scalable cloud infrastructure have made the deployment of neural search technologies more accessible and cost-effective, further accelerating market growth.
The expanding application landscape of neural search platforms is also contributing to market momentum. Beyond traditional enterprise search, these platforms are being integrated into customer support systems, recommendation engines, knowledge management tools, and research databases. In sectors such as healthcare, neural search platforms assist clinicians in retrieving relevant medical literature and patient records, thereby improving diagnostic accuracy and patient outcomes. Similarly, in e-commerce, these platforms power intelligent product search and recommendation features, enhancing user satisfaction and driving sales conversions. The versatility of neural search technology is opening new avenues for innovation and adoption across diverse industry verticals.
Regionally, North America continues to dominate the neural search platforms market, accounting for the largest revenue share in 2024. The presence of leading technology providers, early adoption of advanced AI solutions, and a mature digital infrastructure underpin the region’s leadership. However, Asia Pacific is emerging as the fastest-growing market, fueled by rapid digitalization, expanding internet penetration, and increased investments in AI-driven technologies by enterprises and governments. Europe also demonstrates significant growth potential, particularly in sectors like BFSI and healthcare, where regulatory compliance and data security are critical. Latin America and the Middle East & Africa are gradually catching up, driven by growing awareness and digital transformation initiatives.
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
The recognition of Covid-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the big efforts made in this field since the appearance of the Covid-19 disease (2019), the field still suffers from two drawbacks. First, the available X-ray scans labeled as Covid-19 infected are relatively few. Second, the works that have been made in the field are separate; there is no unified data, set of classes, or evaluation protocol.
Source: https://github.com/Edo2610/Covid-19_X-ray_Two-proposed-Databases Paper: https://www.mdpi.com/1424-8220/21/5/1742
In this work, based on public and newly collected data, we propose two X-ray Covid-19 databases, one with three classes and one with five classes.
In the three-class database we use the most common classes for this task:
- Covid-19
- Pneumonia
- Normal
With the five-class database we aimed to create a more complete database with the following classes:
We make our databases of Covid-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies.
@Article{s21051742,
  AUTHOR = {Vantaggiato, Edoardo and Paladini, Emanuela and Bougourzi, Fares and Distante, Cosimo and Hadid, Abdenour and Taleb-Ahmed, Abdelmalik},
  TITLE = {COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases},
  JOURNAL = {Sensors},
  VOLUME = {21},
  YEAR = {2021},
  NUMBER = {5},
  ARTICLE-NUMBER = {1742},
  URL = {https://www.mdpi.com/1424-8220/21/5/1742},
  ISSN = {1424-8220},
  DOI = {10.3390/s21051742}
}
Database used for the bibliometric study on neurosciences and education (or pedagogy) in order to find the links with complex thinking.
Roadmap on chaos‑inspired imaging technologies (CI2‑Tech) database
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Here, we employ GengNet [26], and the sliding window length is fixed at 200 ms for all experiments.
Database of interactive neural computation computer models at levels ranging from simple linear filters to large-scale networks of spiking units. Interface tools are provided for browsing and exploring the models.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
1.0 Introduction
The Deep Shape From Template Dataset (DSfTD) is a multimodal database (depth, registration and RGB data) of synthetically created recordings of objects being deformed, observed from a frontal position. It was designed to fulfil the following objectives:
The reconstruction and registration tasks can also be extended to practical applications such as augmented reality, retail or non-invasive surgery.
To give you an idea of what to expect, you can have a look at the following video we prepared from similar data (https://www.youtube.com/watch?v=VvYj-FnuVp0).
2.0 Database Info
DSfTD is composed of sequences comprising a broad variety of conditions:
The RGB information is stored in 8-bit images (.png), with pixel values between 0 and 255.
The depth and warp (registration) information is stored in 16-bit images (.png), with pixel values normalized using three different normalizations, which are provided in the database's example image code.
File naming conventions:
To ease adapting the experimental setup for specific tasks, we have designed a (verbose) naming convention for the file names and folders.
Filename extensions: all distributed files use the PNG image extension (.png), a widely supported, generic file type.
Depth camera specifications:
The first camera used in our emulations is a Kinect v2 device, with the following intrinsic parameters:
cx_K = 947.64 / 4;
cy_K = 530.38 / 4;
fy_K = 1064 / 4;
fx_K = 1057.8 / 4;
All the images of the database are resized to 270x480, which implies rescaling the intrinsic parameters as well, dividing them by a factor of 4.
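The factor-of-4 rescaling above can be written out explicitly; a sketch of building the resized pinhole camera matrix from the quoted full-resolution intrinsics (our own illustration, not the dataset's code):

```python
import numpy as np

# Full-resolution Kinect v2 intrinsics quoted above, before division.
cx, cy = 947.64, 530.38
fy, fx = 1064.0, 1057.8

def intrinsic_matrix(fx, fy, cx, cy, scale=1.0):
    """3x3 pinhole camera matrix K. Dividing the image size by `scale`
    divides the focal lengths and the principal point by the same factor."""
    s = float(scale)
    return np.array([[fx / s, 0.0,    cx / s],
                     [0.0,    fy / s, cy / s],
                     [0.0,    0.0,    1.0   ]])

K = intrinsic_matrix(fx, fy, cx, cy, scale=4)  # matches the 270x480 images
print(K)
```

A pixel (u, v) then back-projects at depth d as d * inv(K) @ [u, v, 1], which is how the depth channel pairs with the RGB channel geometrically.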
If you make use of this database and/or its related documentation, you are kindly requested to cite the paper:
Deep Shape-from-Template: Wide-Baseline, Dense and Fast Registration and Deformable Reconstruction from a Single Image, David Fuentes-Jimenez, David Casillas-Perez, Daniel Pizarro, Toby Collins, Adrien Bartoli, 2018, (https://arxiv.org/abs/1811.07791).
BibTeX:
@misc{fuentesjimenez2018deep,
  title={Deep Shape-from-Template: Wide-Baseline, Dense and Fast Registration and Deformable Reconstruction from a Single Image},
  author={David Fuentes-Jimenez and David Casillas-Perez and Daniel Pizarro and Toby Collins and Adrien Bartoli},
  year={2018},
  eprint={1811.07791},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
1.0 Introduction
From Images to 3D Shapes (FI3S) is a multimodal database (depth, registration and RGB data) of synthetically created recordings of objects being deformed, observed from a frontal position. It was designed to fulfil the following objectives:
The reconstruction and registration tasks can also be extended to practical applications such as augmented reality, retail or non-invasive surgery.
To give you an idea of what to expect, you can have a look at the following video we prepared from similar data (https://www.youtube.com/watch?v=VvYj-FnuVp0).
2.0 Database Info
FI3S is composed of sequences comprising a broad variety of conditions:
The RGB information is stored in 8-bit images (.png), with pixel values between 0 and 255.
The depth and warp (registration) information is stored in 16-bit images (.png), with pixel values normalized using three different normalizations, which are provided in the database's example image code.
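Recovering metric values from the 16-bit images amounts to inverting the stored normalization. A sketch assuming a simple linear min-max normalization — the bounds here are made-up placeholders; the real constants and formulas ship with the database's example image code:

```python
import numpy as np

def decode_depth(img_u16, d_min, d_max):
    """Invert a linear min-max normalization of depth into uint16.

    img_u16: HxW uint16 array as read from the .png file.
    d_min, d_max: normalization bounds in metres. These are assumptions
    for illustration; the database provides the actual constants in its
    example image code.
    """
    frac = img_u16.astype(np.float64) / 65535.0
    return d_min + frac * (d_max - d_min)

# Round-trip check with made-up bounds of 0.5-4.5 m:
raw = np.array([[0, 32768, 65535]], dtype=np.uint16)
print(decode_depth(raw, 0.5, 4.5))
```

The same inversion pattern applies to the warp (registration) channels, each with its own normalization constants.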
File naming conventions:
To ease adapting the experimental setup for specific tasks, we have designed a (verbose) naming convention for the file names and folders.
Filename extensions: all distributed files use the PNG image extension (.png), a widely supported, generic file type.
Depth camera specifications:
The first camera used in our emulations is a Kinect v2 device, with the following intrinsic parameters:
cx_K = 947.64 / 4;
cy_K = 530.38 / 4;
fy_K = 1064 / 4;
fx_K = 1057.8 / 4;
All the images of the database are resized to 270x480, which implies rescaling the intrinsic parameters as well, dividing them by a factor of 4.
If you make use of this database and/or its related documentation, you are kindly requested to cite the following papers:
Deep Shape-from-Template: Wide-Baseline, Dense and Fast Registration and Deformable Reconstruction from a Single Image, David Fuentes-Jimenez, David Casillas-Perez, Daniel Pizarro, Toby Collins, Adrien Bartoli, 2018, (https://arxiv.org/abs/1811.07791).
D. Fuentes-Jimenez, D. Pizarro, D. Casillas-Perez, T. Collins and A. Bartoli, "Texture-Generic Deep Shape-From-Template," in IEEE Access, vol. 9, pp. 75211-75230, 2021, doi: 10.1109/ACCESS.2021.3082011.
BibTeX:
@ARTICLE{9435325,
author={Fuentes-Jimenez, David and Pizarro, Daniel and Casillas-Perez, David and Collins, Toby and Bartoli, Adrien},
journal={IEEE Access},
title={Texture-Generic Deep Shape-From-Template},
year={2021},
volume={9},
number={},
pages={75211-75230},
doi={10.1109/ACCESS.2021.3082011}}
@misc{fuentesjimenez2018deep,
  title={Deep Shape-from-Template: Wide-Baseline, Dense and Fast Registration and Deformable Reconstruction from a Single Image},
  author={David Fuentes-Jimenez and David Casillas-Perez and Daniel Pizarro and Toby Collins and Adrien Bartoli},
  year={2018},
  eprint={1811.07791},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This item contains a description of the Wadaba dataset, which was used to develop an image-recognition model based on the MobileNetV3 architecture. The dataset contains 4000 images of plastic objects varying in colour and transparency, separated into 20 different sets of 200 images. The data used were secondary data collected from the Wadaba Institute's database, available at: https://wadaba.pcz.pl/.
Version 1.0 of the database for neuro-endocrine-immune interactions (dbNEI) is a web-based knowledge resource specific to the NEI systems. It provides a knowledge environment for understanding the main regulatory systems of NEI at the molecular level. dbNEI collects 1,058 NEI-related signal molecules, their 940 interactions and 72 affiliated tissues from the Cell Signaling Networks database, and manually curates 982 NEI papers from PubMed. NEI-related information, such as signal transduction, regulation and control subunits, is integrated. In particular, dbNEI provides graphic visualization, through which control subunits can be automatically obtained according to the queried issues. Version 2.0 updates the database in four aspects: 1. recruiting new NEI genes and compounds; 2. adding KEGG, HPRD, transcription factor and microRNA target relations; 3. collecting drug-gene and disease-gene relations; 4. building a multi-layer network for drug-NEI-disease.
A database of quantum mechanical calculations on organic photovoltaic candidate molecules. Related publication: Peter C. St. John, Caleb Phillips, Travis W. Kemper, A. Nolan Wilson, Michael F. Crowley, Mark R. Nimlos, Ross E. Larsen. (2018) Message-passing neural networks for high-throughput polymer screening. arXiv:1807.10363.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This repository contains all processed data, scripts and statistical analyses used to generate the results of the manuscript entitled "The neural representation of an auditory spatial cue in primate cortex".
Data structure
The raw data is compressed in multiple zip files within the raw_data_zip folder. The scripts_and_data directory contains sub-directories with the scripts, data, and figures pertinent to the modality defined by the directory name. All the statistical analyses can be found in the 'statistical_analyses' folder, in which the analysis for each specific modality is given by the folder name.
All .sqlite databases are structured hierarchically by the tables 'subjects', 'measurement_info', and 'stimuli'. All other tables share the same level of hierarchy, with specific rows linked to each subject, measurement, and stimulus by the columns 'id_subject', 'id_measurement', and 'id_stimuli'. All R, Python, and Matlab scripts access the different databases (or data files) directly to generate the different figures and analyses.
Details on how to run the code can be found at the following link: https://gitlab.com/jundurraga/meg_eeg_behavioural
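The hierarchical .sqlite layout described above can be queried with ordinary joins. A toy sketch: only the table names and the id_subject/id_measurement/id_stimuli columns come from the description; every other table column here is a placeholder of ours:

```python
import sqlite3

# Build a toy in-memory database mirroring the described hierarchy:
# 'subjects', 'measurement_info', 'stimuli', plus one data table that
# links back via id_subject / id_measurement / id_stimuli.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subjects (id INTEGER PRIMARY KEY, code TEXT);
CREATE TABLE measurement_info (id INTEGER PRIMARY KEY, modality TEXT);
CREATE TABLE stimuli (id INTEGER PRIMARY KEY, itd_us REAL);
CREATE TABLE responses (
    id_subject INTEGER, id_measurement INTEGER, id_stimuli INTEGER,
    amplitude REAL);
INSERT INTO subjects VALUES (1, 'S01');
INSERT INTO measurement_info VALUES (1, 'EEG');
INSERT INTO stimuli VALUES (1, 500.0);
INSERT INTO responses VALUES (1, 1, 1, 0.42);
""")

# Resolve each data row back to its subject, measurement and stimulus.
rows = con.execute("""
    SELECT s.code, m.modality, st.itd_us, r.amplitude
    FROM responses r
    JOIN subjects s ON s.id = r.id_subject
    JOIN measurement_info m ON m.id = r.id_measurement
    JOIN stimuli st ON st.id = r.id_stimuli
""").fetchall()
print(rows)  # → [('S01', 'EEG', 500.0, 0.42)]
```

The same join pattern works from R or Matlab, which is what lets the repository's scripts read the databases directly.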
A database of virtually generated, anatomically plausible neurons for several morphological classes, including cerebellar Purkinje cells, hippocampal pyramidal and granule cells, and spinal cord motoneurons. It presently contains 542 cells. In its traced-neurons collection the database contains the Amaral cell archive, NeuroMorpho reconstructions, and mouse alpha motoneurons. The collection of generated neurons includes motoneurons, Purkinje cells, and hippocampal pyramidal cells.