Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was obtained at the Queensland Brain Institute, Australia, using a 64-channel Biosemi EEG system. 21 healthy participants completed an auditory oddball paradigm, as described in Garrido et al. (2017): Garrido, M.I., Rowe, E.G., Halasz, V., & Mattingley, J. (2017). Bayesian mapping reveals that attention boosts neural responses to predicted and unpredicted stimuli. Cerebral Cortex, 1-12. DOI: 10.1093/cercor/bhx087. If you use this dataset, please cite its DOI as well as the associated methods paper: Harris, C.D., Rowe, E.G., Randeniya, R., & Garrido, M.I. (2018). Bayesian Model Selection Maps for group studies using M/EEG data. For scripts to analyse the data, please see: https://github.com/ClareDiane/BMS4EEG
This data release provides remotely sensed data, field measurements, and MATLAB code associated with an effort to produce image-derived velocity maps for a reach of the Sacramento River in California's Central Valley. Data collection occurred from September 16-19, 2024, and involved cooperators from the Intelligent Robotics Group at the National Aeronautics and Space Administration (NASA) Ames Research Center and the National Oceanic and Atmospheric Administration (NOAA) Southwest Fisheries Science Center. The remotely sensed data were obtained from an Uncrewed Aircraft System (UAS) and are stored in Robot Operating System (ROS) .bag files. Within these files, the various data types are organized into ROS topics, including images from a thermal camera, measurements of the distance from the UAS down to the water surface made with a laser range finder, and position and orientation data recorded by a Global Navigation Satellite System (GNSS) receiver and Inertial Measurement Unit (IMU) during the UAS flights. This instrument suite is part of an experimental payload called the River Observing System (RiOS) designed for measuring streamflow; further detail is provided in the metadata file associated with this data release. For the September 2024 test flights, the RiOS payload was deployed from a DJI Matrice M600 Pro hexacopter hovering approximately 270 m above the river. At this altitude, the thermal images have a pixel size of approximately 0.38 m but are not geo-referenced. Two types of ROS .bag files are provided in separate zip folders. The first, Baguettes.zip, contains "baguettes" that include 15-second subsets of data with a reduced sampling rate for the GNSS and IMU. The second, FullBags.zip, contains the full set of ROS topics recorded by RiOS, subset to include only the time ranges during which the UAS was hovering in place over one of 11 cross sections along the reach. The start times are included in the .bag file names as portable operating system interface (POSIX) time stamps. To view the data within ROS .bag files, the Foxglove Studio program linked below is freely available and provides a convenient interface. Note that to view the thermal images, the contrast will need to be adjusted to minimum and maximum values around 12,000 to 15,000, though some further refinement of these values might be necessary to enhance the display. To enable geo-referencing of the thermal images in a post-processing mode, another M600 hexacopter equipped with a standard visible camera was deployed along the river to acquire images from which an orthophoto was produced: 20240916_SacramentoRiver_Ortho_5cm.tif. This orthophoto has a spatial resolution of 0.05 m and is in the Universal Transverse Mercator (UTM) coordinate system, Zone 10. To assess the accuracy of the orthophoto, 21 circular aluminum ground control targets visible in both thermal and RGB (red, green, blue) images were placed in the field and their locations surveyed with a Real-Time Kinematic (RTK) GNSS receiver. The coordinates of these control points are provided in the file SacGCPs20240916.csv. Please see the metadata for additional information on the camera, the orthophoto production process, and the RTK GNSS survey. The thermal images were used as input to Particle Image Velocimetry (PIV) algorithms to infer surface flow velocities throughout the reach.
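For users new to the .bag naming convention and the thermal display settings mentioned above, the following MATLAB sketch (not part of the data release) converts a POSIX time stamp to a readable date and displays a thermal frame with the suggested contrast limits. The time stamp value and the frame are placeholders, and imshow requires the Image Processing Toolbox.

```matlab
% Convert a POSIX time stamp parsed from a .bag file name to a readable UTC time.
posixStamp = 1726500000;                                  % placeholder value taken from a .bag file name
hoverStart = datetime(posixStamp, 'ConvertFrom', 'posixtime', 'TimeZone', 'UTC');
disp(hoverStart)                                          % readable UTC date and time

% Display a thermal frame with the contrast limits suggested in the description
% (roughly 12,000 to 15,000 digital numbers); refine these values as needed.
thermalFrame = uint16(12000 + 3000*rand(512, 640));       % placeholder for a frame exported from the .bag
imshow(thermalFrame, [12000 15000]);
title('Thermal frame, contrast limited to [12000, 15000]');
```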
To assess the accuracy of the resulting image-derived velocity estimates, field measurements of flow velocity were obtained using a SonTek M9 acoustic Doppler current profiler (ADCP). These data were acquired along a series of 11 cross sections oriented perpendicular to the primary downstream flow direction and spaced approximately 150 m apart. At each cross section, the boat from which the ADCP was deployed made four passes across the channel, and the resulting data were then aggregated into mean cross sections using the Velocity Mapping Toolbox (VMT) referenced below (Parsons et al., 2013). The VMT output was further processed as described in the metadata and ultimately led to a single comma-delimited text file, SacAdcp20240918.csv, with cross section numbers, spatial coordinates (UTM Zone 10N), cross-stream distances, velocity vector components, and water depths. To assess the sensitivity of thermal image velocimetry to environmental conditions, air and water temperatures were recorded using a pair of Onset HOBO U20 pressure transducer data loggers set to record pressure and temperature. Deploying one data logger in the air and one in the water also provided information on variations in water level during the test flights. The resulting temperature and water level time series are provided in the file HoboDataSummary.csv with a one-minute sampling interval. These datasets were used to develop and test a new framework for mapping flow velocities in river channels in approximately real time using images from a UAS as they are acquired. Prototype code for implementing this approach was developed in MATLAB and is included in the data release as a zip folder called VelocityMappingCode.zip. Further information on the individual functions (*.m files) included within this folder is available in the metadata file associated with this data release. The code is provided as is and is intended for research purposes only. Users are advised to thoroughly read the metadata file associated with this data release to understand the appropriate use and limitations of the data and code provided herein.
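As a starting point for working with the tabular deliverables described above, the following MATLAB sketch loads the two CSV files with readtable. The column names used for plotting are hypothetical and should be checked against the actual headers in the files.

```matlab
% Load the ADCP and HOBO summary tables (base MATLAB only).
adcp = readtable('SacAdcp20240918.csv');
hobo = readtable('HoboDataSummary.csv');
disp(adcp.Properties.VariableNames);   % confirm the actual column names before plotting
disp(hobo.Properties.VariableNames);

% Example: map of depth-averaged velocity magnitude along the reach, assuming
% hypothetical column names "Easting", "Northing", "Vx", and "Vy".
speed = hypot(adcp.Vx, adcp.Vy);
scatter(adcp.Easting, adcp.Northing, 10, speed, 'filled');
axis equal; colorbar;
xlabel('Easting (m, UTM Zone 10N)'); ylabel('Northing (m, UTM Zone 10N)');
title('ADCP velocity magnitude (hypothetical column names)');
```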
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2021 Competition.
Contents:
IPIN2021_Track03_TechnicalAnnex_V1-02.pdf: Technical annex describing the competition
01-Logfiles: This folder contains a subfolder with the 105 training logfiles (80 of them single-floor indoors, 10 in outdoor areas, 10 in the indoor auditorium with floor transitions, and 5 in floor-transition zones), a subfolder with the 20 validation logfiles, and a subfolder with the 3 blind evaluation logfiles as provided to competitors.
02-Supplementary_Materials: This folder contains the MATLAB/Octave parser, the raster maps, and the files for the MATLAB tools and the trajectory visualization.
03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over the 82 evaluation points. It requires the MATLAB Mapping Toolbox. The ground truth is also provided as 3 CSV files. Since results must be reported at a 2 Hz frequency starting from apptimestamp 0, the GT files include the closest timestamp matching the timing provided by competitors for the 3 evaluation logfiles. The folder also contains samples of reported estimations and the corresponding results.
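The core of this metric can be illustrated outside the official scripts. Below is a minimal MATLAB sketch, assuming the competitor estimates have already been matched to the ground-truth timestamps; the values and variable names are placeholders, and the scripts in 03-Evaluation (which also handle floor mismatches) remain the reference implementation.

```matlab
% Minimal sketch of the competition metric: 75th percentile of positioning error.
gt  = [10.2 20.1; 11.0 20.5; 12.3 21.0];    % ground-truth positions (m), one row per evaluation point
est = [10.6 20.3; 11.4 19.9; 13.0 21.4];    % competitor estimates matched to the same timestamps

err = sqrt(sum((est - gt).^2, 2));          % 2-D positioning error at each evaluation point
metric = prctile(err, 75);                  % requires the Statistics and Machine Learning Toolbox
fprintf('75th percentile error: %.2f m\n', metric);
```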
Please cite the following works when using the datasets included in this package:
Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2021 Competition Track 3 (Smartphone-based, off-site). http://dx.doi.org/10.5281/zenodo.5948678
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Accurate anatomical localization of intracranial electrodes is important for identifying the seizure foci in patients with epilepsy and for interpreting effects from cognitive studies employing intracranial electroencephalography. Localization is typically performed by coregistering postimplant computed tomography (CT) with preoperative magnetic resonance imaging (MRI). Electrodes are then detected in the CT, and the corresponding brain region is identified using the MRI. Many existing software packages for electrode localization chain together separate preexisting programs or rely on command line instructions to perform the various localization steps, making them difficult to install and operate for a typical user. Further, many packages provide solutions for some, but not all, of the steps needed for confident localization. We have developed software, Locate electrodes Graphical User Interface (LeGUI), that consists of a single interface to perform all steps needed to localize both surface and depth/penetrating intracranial electrodes, including coregistration of the CT to MRI, normalization of the MRI to the Montreal Neurological Institute template, automated electrode detection for multiple types of electrodes, electrode spacing correction and projection to the brain surface, electrode labeling, and anatomical targeting. The software is written in MATLAB, core image processing is performed using the Statistical Parametric Mapping toolbox, and standalone executable binaries are available for Windows, Mac, and Linux platforms. LeGUI was tested and validated on 51 datasets from two universities. The total user and computational time required to process a single dataset was approximately 1 h. Automatic electrode detection correctly identified 4362 of 4695 surface and depth electrodes with only 71 false positives. Anatomical targeting was verified by comparing electrode locations from LeGUI to locations that were assigned by an experienced neuroanatomist. LeGUI showed a 94% match with the 482 neuroanatomist-assigned locations. LeGUI combines all the features needed for fast and accurate anatomical localization of intracranial electrodes into a single interface, making it a valuable tool for intracranial electrophysiology research.
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:
1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.
MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m and the figure included on this landing page provides a flow chart illustrating the four different neural network-based depth retrieval methods. As examples of the resulting models, MATLAB *.mat data files containing the best-performing neural network model for each site are provided below, along with a file that lists the PlanetScope image identifiers for the images that were used for each site. To develop and test this new NNDR approach, the method was applied to satellite images from three rivers across the U.S.: the American, Colorado, and Potomac. For each site, field measurements of water depth available through other data releases were used for training and validation.
The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: X_mean-spec.tif, X_mean-depth.tif, X_NN-depth.tif, and X-single-image.tif, where X denotes the site name. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
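To make the distinction between the first two ensembling strategies concrete, the following minimal MATLAB sketch contrasts them using a trivial stand-in for the depth retrieval network; the anonymous function nndr_predict and the array sizes are hypothetical, and the released NN_depth_ensembling.m function is the authoritative implementation.

```matlab
% imageStack is rows x cols x bands x time (placeholder SuperDove time series).
imageStack = rand(100, 100, 4, 12);
nndr_predict = @(img) squeeze(mean(img, 3));        % stand-in for the trained NNDR (hypothetical)

% 1. Mean-spec: average the images over time, then apply the NNDR once.
meanImage = mean(imageStack, 4);
depthMeanSpec = nndr_predict(meanImage);

% 2. Mean-depth: apply the NNDR to each image, then average the resulting depth maps.
nT = size(imageStack, 4);
depths = zeros(100, 100, nT);
for k = 1:nT
    depths(:, :, k) = nndr_predict(imageStack(:, :, :, k));
end
depthMeanDepth = mean(depths, 3);
```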
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:
1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.
MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m, available on the main landing page for the data release of which this is a child item, along with a flow chart illustrating the four different neural network-based depth retrieval methods. To develop and test this new NNDR approach, the method was applied to satellite images from the American River near Fair Oaks, CA, acquired in October 2020. Field measurements of water depth available through another data release (Legleiter, C.J., and Harrison, L.R., 2022, Field measurements of water depth from the American River near Fair Oaks, CA, October 19-21, 2020: U.S. Geological Survey data release, https://doi.org/10.5066/P92PNWE5) were used for training and validation.
The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: American_mean-spec.tif, American_mean-depth.tif, American_NN-depth.tif, and American-single-image.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
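As a quick check of these deliverables, the depth maps can be loaded and displayed in MATLAB. The sketch below assumes the Mapping Toolbox (readgeoraster, mapshow) and treats non-positive pixels as no-data, which is an assumption rather than a documented convention of these files.

```matlab
% Load one of the depth maps listed above; pixel values are water depth in meters
% at 3 m resolution, in UTM coordinates.
[depth, R] = readgeoraster('American_NN-depth.tif', 'OutputType', 'double');
depth(depth <= 0) = NaN;                     % assumption: non-positive values mark no-data pixels
mapshow(depth, R, 'DisplayType', 'surface');
axis equal tight; colorbar;
title('American River depth estimate, NN-depth method (m)');
```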
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2021 Competition.
Contents:
- IPIN2021_Track4_CallForCompetition_v2.1.pdf: Call for competition including the technical annex describing the competition
- 01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
- IPIN2021_T4_xxx.csv: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS, POSI frames
- IPIN2021_T4_xxx_gnss_ephem.nav: GNSS ephemeris data for trajectory estimation.
- 02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 12 hours that can be used for sensor bias estimation via Allan variance (see the sketch after this list), and a logfile of about 1 minute that can be used to calibrate the magnetometer.
- 03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over all evaluation points. It requires the MATLAB Mapping Toolbox. We also provide the ground truth as 1 CSV file, along with samples of reported estimations and the corresponding results. Just run script_Eval_IPIN2021.mat.
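As referenced in the list above, the 12-hour static logfile is intended for Allan-variance-based bias estimation. The following is a minimal MATLAB sketch of a non-overlapping Allan deviation computed from a placeholder static gyro signal; the sampling rate and data layout are assumptions, and the provided parser should be used to extract the actual ROTA samples first.

```matlab
% Non-overlapping Allan deviation from a static gyroscope recording.
fs = 100;                               % assumed sampling rate (Hz)
omega = 0.01*randn(3.6e6, 1);           % placeholder static gyro signal (rad/s), ~10 h at 100 Hz

taus = unique(round(logspace(0, 4, 30)));   % cluster sizes in samples
adev = zeros(size(taus));
for i = 1:numel(taus)
    m = taus(i);
    K = floor(numel(omega)/m);                           % number of full clusters
    y = mean(reshape(omega(1:K*m), m, K), 1);            % cluster averages
    adev(i) = sqrt(0.5*mean(diff(y).^2));                % Allan deviation at tau = m/fs
end
loglog(taus/fs, adev); grid on;
xlabel('\tau (s)'); ylabel('Allan deviation (rad/s)');
```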
We provide additional information on the competition at: https://evaal.aaloa.org/2021/call-for-competitions
Citation Policy:
Please cite the following works when using the datasets included in this package:
Ortiz, M.; Zhu, N.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2021 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2021. https://doi.org/10.5281/zenodo.10497732
Check the citation policy at: https://doi.org/10.5281/zenodo.10497732
Contact:
For any further questions about the database and this competition track, please contact:
Miguel Ortiz (miguel.ortiz@univ-eiffel.fr), Université Gustave Eiffel, France.
Ni Zhu (ni.zhu@univ-eiffel.fr), Université Gustave Eiffel, France.
Acknowledgements:
We thank all the staff of Atlantis Le Centre (Shopping Mall in Nantes) for their invaluable support throughout our collection days.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
# A large-scale fMRI dataset in response to short naturalistic facial expression videos
The Naturalistic Facial Expressions Dataset (NFED) is a large-scale dataset of whole-brain functional magnetic resonance imaging (fMRI) responses to 1,320 short (3 s) facial expression video clips. NFED offers researchers fMRI data that enables them to investigate the neural mechanisms involved in processing emotional information conveyed by facial expression videos in real-world environments.
The dataset contains raw data, pre-processed volume data, pre-processed surface data, and surface-based analysis results.
For more details, please refer to the paper at {website} and the dataset at https://openneuro.org/datasets/ds005047.
## Preprocess procedure
The MRI data were preprocessed using the approach of Kay et al., combining code written in MATLAB with tools from FreeSurfer, SPM, and FSL (http://github.com/kendrickkay). We used FreeSurfer (http://surfer.nmr.mgh.harvard.edu) to construct the pial and white surfaces of each participant from the T1 volume. Additionally, we established an intermediate gray matter surface between the pial and white surfaces for all participants.
**code: ./volume_pre-process/**
Detailed usage notes are available in the code; please read them carefully and modify the variables to match your own environment.
## GLM of main experiment
We performed a single-trial GLM with GLMsingle, an advanced MATLAB denoising toolbox used to improve single-trial BOLD response estimates, to model the surface-format time-series data from the pre-processed fMRI data of each participant. Three response amplitudes (i.e., beta values) were estimated by modeling the BOLD response relative to each video onset from 1 to 3 seconds in 1-second steps. Using GLMsingle in this manner, the BOLD responses evoked by each video were estimated within each run of each session. In total, we extracted 2 (repetitions) x 3 (seconds) beta estimates for each video condition in the training set, and 10 (repetitions) x 3 (seconds) beta estimates for each video condition in the testing set.
**code: ./GLMsingle-main-experiment/matlab/examples.m**
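For orientation only, the sketch below illustrates one way the single-trial beta estimates described above could be aggregated across repeated presentations of each video; the array sizes and variable names are assumptions, and this is not part of the released pipeline.

```matlab
% Average single-trial betas over repetitions, separately for each 1-s time bin.
nVertices = 1000; nVideos = 50; nReps = 2; nSeconds = 3;   % e.g., training-set layout
betas = randn(nVertices, nVideos, nReps, nSeconds);         % placeholder betas reorganized by condition

betaMean = squeeze(mean(betas, 3));    % average over repetitions -> vertices x videos x seconds
disp(size(betaMean));                  % 1000 x 50 x 3
```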
#### retinotopic mapping
The fMRI data from the population receptive field experiment were analyzed with a pRF model implemented in the analyzePRF toolbox (http://cvnlab.net/analyzePRF/) to characterize individual retinotopic representations. Make sure to download the required software mentioned in the code.
**code: ./Functional-localizer-experiment-analysis/s4a_analysis_prf.m**
#### fLoc experiment
We used GLMdenoise, a data-driven denoising method, to analyze the pre-processed fMRI data from the fLoc experiment. We used a "condition-split" strategy to code the 10 stimulus categories, splitting the trials related to each category into individual conditions in each run. Six response estimates (beta values) for each category were produced using the six condition-splits. To quantify selectivity for the various categories and domains, we computed t-values from the GLM beta values after fitting the GLM. The category-selective regions of interest for each participant were defined using the resulting maps.
**code: ./Functional-localizer-experiment-analysis/s4a_analysis_floc.m**
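As an illustration of the condition-split contrast described above, the following MATLAB sketch computes a per-vertex selectivity t-value across the six splits, contrasting one category against the mean of the others; array sizes and names are assumptions, and the released script remains the reference.

```matlab
% Per-vertex selectivity t-value from condition-split betas.
nVertices = 1000; nCategories = 10; nSplits = 6;
betas = randn(nVertices, nCategories, nSplits);       % placeholder condition-split beta estimates

target = 1;                                           % index of the category of interest
others = setdiff(1:nCategories, target);
contrast = squeeze(betas(:, target, :) - mean(betas(:, others, :), 2));  % vertices x splits
tval = mean(contrast, 2) ./ (std(contrast, 0, 2) ./ sqrt(nSplits));      % one-sample t across splits
```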
## Validation
### Basic quality control
**code: ./validation/FD/FD.py**
**code: ./validation/tSNR/tSNR.py**
### Noise ceiling
The data are available at https://openneuro.org/datasets/ds005047. "./validation/code/noise_celling/sub-xx" stores the intermediate files required for running the program.
**code: ./validation/noise_celling/Noise_Ceiling.py**
### Correspondence between human brain and DCNN
The data are available at https://openneuro.org/datasets/ds005047. We combined the data from the main experiment and the functional localizer experiments to build an encoding model to replicate the hierarchical correspondence of representations between the brain and the DCNN. The encoding models were built to map artificial representations from each layer of the pre-trained VideoMAEv2 to neural representations from each area of the human visual cortex as defined in the multimodal parcellation atlas.
**code: ./validation/dnnbrain/**
### Semantic metadata of action and expression labels reveal that NFED can encode temporal and spatial stimulus features in the brain
The data are available at https://openneuro.org/datasets/ds005047. "./validation/code/semantic_metadata/xx_xx_semantic_metadata" stores the intermediate files required for running the program.
**code: ./validation/semantic_metadata/**
### GLMsingle-main-experiment
The data are available at https://openneuro.org/datasets/ds005047. "./validation/code/noise_celling/sub-xx" stores the intermediate files required for running the program.
**code: ./validation/noise_celling/Noise_Ceiling.py**
## Results
The results can be viewed at "https://openneuro.org/datasets/ds005047/derivatives/validation/results/brain_map_individual".
## Whole-brain mapping
The whole-brain results obtained from the technical validation are mapped onto the cerebral cortex.
**code: ./show_results_allbrain/Showresults.m**
## Manually prepared environment
We provide *requirements.txt* to install the Python packages used by this code. However, some components, such as the *GLM* and *pre-processing* steps, require external dependencies, and we have provided these packages in the corresponding files.
## Stimuli
The video stimuli used in the NFED experiment are saved in the "stimuli_1" and "stimuli_2" folders.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2023 Competition.
Contents:
- IPIN2023_Track4_CallForCompetition_v2.2.pdf: Call for competition including the technical annex describing the competition
- 01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
- IPIN2023_T4_xxx.txt: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS, POSI frames
- IPIN2023_T4_xxx_gnss_ephem.nav: GNSS ephemeris data for trajectory estimation.
- 02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 12 hours that can be used for sensor bias estimation (Allan variance), and a logfile of about 1 minute that can be used to calibrate the magnetometer.
- 03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over all evaluation points. It requires the MATLAB Mapping Toolbox. We also provide the ground truth of the 2 scoring trials as 2 MAT and KML files, along with samples of reported estimations and the corresponding results. Just run script_Eval_IPIN2023.mat.
We provide additional information on the competition at: https://evaal.aaloa.org/2023/call-for-competition
Citation Policy: Please cite the following works when using the datasets included in this package:
Ortiz, M.; Zhu, N.; Ziyou L.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2023 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2023. https://doi.org/10.5281/zenodo.8399764
Check the citation policy at: https://doi.org/10.5281/zenodo.8399764
Contact: For any further questions about the database and this competition track, please contact:
Miguel Ortiz (miguel.ortiz@univ-eiffel.fr), Université Gustave Eiffel, France.
Ni Zhu (ni.zhu@univ-eiffel.fr), Université Gustave Eiffel, France.
Acknowledgements: We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer ISS, as well as Joaquín Torres-Sospedra from Universidade do Minho and Francesco Potortì and Antonino Crivello from ISTI-CNR Pisa, for their support in collecting the datasets.
We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2023 Competition.
Contents
We provide additional information on the competition at: https://evaal.aaloa.org/2023/call-for-competition
Citation Policy
Please cite the following works when using the datasets included in this package:
Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2023 Competition Track 3 (Smartphone-based, off-site), Zenodo 2023. http://dx.doi.org/10.5281/zenodo.8362205
Check the updated citation policy at: http://dx.doi.org/10.5281/zenodo.8362205
Contact
For any further questions about the database and this competition track, please contact:
Joaquín Torres-Sospedra
Centro ALGORITMI,
Universidade do Minho, Portugal
info@jtorr.es - jtorres@algoritmi.uminho.pt
Antonio R. Jiménez
Centre of Automation and Robotics (CAR)-CSIC/UPM, Spain
antonio.jimenez@csic.es
Antoni Pérez-Navarro
Faculty of Computer Sciences, Multimedia and Telecommunication, Universitat Oberta de Catalunya, Barcelona, Spain
aperezn@uoc.edu
Acknowledgements
We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer ISS, as well as Miguel Ortiz and Ziyou Li at Université Gustave Eiffel, for their invaluable support in collecting the datasets. Last but certainly not least, we thank Antonino Crivello and Francesco Potortì for their huge effort in georeferencing the competition venue and evaluation points.
We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days.
We are also grateful to Francesco Potortì, the ISTI-CNR team (Paolo, Michele & Filippo), and the Fraunhofer IIS team (Chris, Tobi, Max, ...) for their invaluable commitment to organizing and promoting the IPIN competition.
This work and competition belong to the IPIN 2023 Conference in Nuremberg (Germany).
Parts of this work received financial support from the following projects and grants: