4 datasets found
  1. Ultra high-density 255-channel EEG-AAD dataset

    • data.niaid.nih.gov
    Updated Jun 13, 2024
    Cite
    Zink, Rob (2024). Ultra high-density 255-channel EEG-AAD dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4518753
    Dataset updated
    Jun 13, 2024
    Dataset provided by
    Mundanad Narayanan, Abhijith
    Zink, Rob
    Bertrand, Alexander
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    If using this dataset, please cite both the paper below and the current Zenodo repository: A. Mundanad Narayanan, R. Zink, and A. Bertrand, "EEG miniaturization limits for stimulus decoding with EEG sensor networks", Journal of Neural Engineering, vol. 18, 2021, doi: 10.1088/1741-2552/ac2629.

    Experiment

    This dataset contains 255-channel electroencephalography (EEG) data collected during an auditory attention decoding (AAD) experiment. The EEG was recorded with a SynAmps RT device (Compumedics, Australia) at a sampling rate of 1 kHz, using active Ag/Cl electrodes placed on the head according to the international 10-5 (5%) system. Thirty normal-hearing male subjects between 22 and 35 years old participated in the experiment. All of them signed an informed consent form approved by the KU Leuven ethics committee.

    Two Dutch stories, narrated by different male speakers and each divided into two parts of 6 minutes, were used as the stimuli in the experiment [1]. A single trial consisted of presenting two of these parts (one from each story) to the subject through insert earphones (Etymotic ER3A) at 60 dBA. The speech stimuli were filtered with a head-related transfer function (HRTF) so that the stories appeared to arrive from two distinct spatial locations, left and right of the subject, with 180 degrees of separation. In each trial, the subjects were asked to attend to one ear while ignoring the other. Four trials of 6 minutes each were carried out, in which each story part was used twice. The order of presentation was randomized and balanced over subjects. In total, approximately 24 minutes of EEG data were recorded per subject.

    File organization and details

    The EEG data of each of the 30 subjects are uploaded as a compressed archive named Sx.tar.gzip, where x = 0, 1, 2, ..., 29. When an archive is extracted, the EEG data are in their original raw format as recorded by the CURRY software [2]. Each recording consists of four files with the same name but different extensions: .dat, .dap, .rs3 and .ceo. The file names follow the convention Sx_AAD_P, with P taking one of the following values: 1L, 1R, 2L or 2R.

    The letter 'L' or 'R' in P indicates the attended direction in that recording: left or right, respectively. A MATLAB function to read the data files is provided in the directory called scripts. A Python function to read the files is available in this GitHub repository [3]. The original versions of the stimuli presented to the subjects, i.e. without HRTF filtering, can be found after extracting the stimuli.zip file, in WAV format. There are 4 WAV files, corresponding to the two parts of each of the two stories, sampled at 44.1 kHz. The order of presentation of these WAV files is given in the table below.

    Stimuli presentation and attention information of files:

    Trial (P)   Stimuli: Left-ear    Stimuli: Right-ear   Attention
    1L          part1_track1_dry     part1_track2_dry     Left
    1R          part1_track1_dry     part1_track2_dry     Right
    2L          part2_track2_dry     part2_track1_dry     Left
    2R          part2_track2_dry     part2_track1_dry     Right

    Additional files (after extracting scripts.zip and misc.zip):

    scripts/sample_script.m: Demonstrates reading an EEG-AAD recording and extracting the start and end of the experiment.

    misc/channel-layout.jpeg: The 255-channel EEG cap layout

    misc/eeg255ch_locs.csv: The channel names, numbers and their spherical (theta and phi) scalp coordinates.
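
    For quick inspection outside the provided MATLAB and Python scripts, the recordings can also be loaded with general-purpose tools. The sketch below is only an illustration: it assumes MNE-Python's CURRY reader and pandas, and the file paths are hypothetical placeholders for one extracted recording.

        # Illustrative sketch (not part of the dataset's own scripts):
        # load one CURRY recording and the channel-location table.
        import mne
        import pandas as pd

        # Placeholder path for an extracted recording of subject 0, trial 1L (attended: left).
        raw = mne.io.read_raw_curry("S0/S0_AAD_1L.dat", preload=True)
        print(raw.info)  # expect a 1 kHz sampling rate and 255 EEG channels

        # Channel names, numbers and spherical (theta, phi) scalp coordinates.
        locs = pd.read_csv("misc/eeg255ch_locs.csv")
        print(locs.head())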

    [1] Radioboeken voor kinderen, http://radioboeken.eu/kinderradioboeken.php?lang=NL, 2007 (Accessed: 8 Feb 2021)

    [2] CURRY 8 X – Data Acquisition and Online Processing, https://compumedicsneuroscan.com/product/curry-data-acquisition-online-processing-x/ (Accessed: 8 Feb 2021)

    [3] Abhijith Mundanad Narayanan, "EEG analysis in python", 2021. https://github.com/mabhijithn/eeg-analyse (Accessed: 8 Feb 2021)

  2. Travel time to cities and ports in the year 2015

    • figshare.com
    tiff
    Updated May 30, 2023
    Cite
    Andy Nelson (2023). Travel time to cities and ports in the year 2015 [Dataset]. http://doi.org/10.6084/m9.figshare.7638134.v4
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Andy Nelson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5

    If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD

    The following text is a summary of the information in the above Data Descriptor.

    The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated and validated land-based travel time to the nearest city and nearest port, for a range of city and port sizes.

    The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.

    These maps provide a unique global picture of physical access to the essential services offered by cities and ports.

    travel_time_to_cities_x.tif (where x ranges from 1 to 12): the value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in the year 2015 (see PDF report).

    travel_time_to_ports_x.tif (where x ranges from 1 to 5): the value of each pixel is the estimated travel time in minutes to the nearest port in 2015. There are 5 data layers based on different port sizes.

    Format: Raster dataset, GeoTIFF, LZW compressed

    Unit: Minutes

    Data type: 16-bit unsigned integer

    No-data value: 65535

    Flags: None

    Spatial resolution: 30 arc seconds

    Spatial extent: Upper left (-180, 85); Lower left (-180, -60); Upper right (180, 85); Lower right (180, -60)

    Spatial Reference System (SRS): EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)

    Temporal resolution: 2015

    Temporal extent: Updates may follow for future years, but these depend on the availability of updated inputs on travel times and on city locations and populations.
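
    As a quick orientation to these specifications, the sketch below reads a window of one layer and masks the no-data value. It is only an illustration: it assumes the rasterio and numpy Python packages, and the file name simply follows the pattern described above.

        # Illustrative sketch: read part of one travel-time layer and mask no-data.
        import numpy as np
        import rasterio
        from rasterio.windows import Window

        with rasterio.open("travel_time_to_cities_1.tif") as src:
            print(src.crs)         # expected: EPSG:4326
            print(src.nodatavals)  # expected: (65535,)
            # Read a 1024 x 1024 window; the full global raster is large.
            # The window position here is arbitrary.
            minutes = src.read(1, window=Window(0, 0, 1024, 1024)).astype("float32")

        minutes[minutes == 65535] = np.nan   # treat the no-data value as missing
        print("median travel time (min):", np.nanmedian(minutes))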

    Methodology Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to and (ii) a transition matrix that represents the cost or time to travel across a surface.

    The set of locations was based on populated urban areas in the 2016 version of the Joint Research Centre's Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016), which represent low-density (LDC) urban clusters and high-density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points spaced at 1 km intervals around the perimeter of each urban area.

    Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.

    The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
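
    The accumulated-cost idea can be illustrated with a small, self-contained example. This is not the authors' gdistance/R pipeline; it is a Python analogue on a toy friction grid, using scikit-image's minimum-cost-path routine in place of accCost.

        # Toy analogue of the accumulated-cost computation (not the original R code):
        # accumulate travel time outward from target points over a friction surface.
        import numpy as np
        from skimage.graph import MCP_Geometric

        friction = np.full((5, 5), 2.0)   # cost (e.g. minutes) to cross each cell
        friction[2, :] = 0.5              # a fast "road" along the middle row

        targets = [(0, 0), (4, 4)]        # grid indices standing in for city/port points
        mcp = MCP_Geometric(friction)
        travel_time, _ = mcp.find_costs(targets)

        print(np.round(travel_time, 1))   # accumulated cost to the nearest target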

    Code The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.

    Validation The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes within an interquartile range of 48 to 143 minutes while the median journey time estimated by the Google API was 106 minutes within an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes within an interquartile range of −35.5 to 2.0 minutes while the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9% within an interquartile range of −30.6% to 2.7% while the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.

    This process and results are included in the validation zip file.

    Usage Notes The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.

    The nine layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent travel time to settlements with populations of 5,000 to 10,000, 10,000 to 20,000, and 20,000 to 50,000 people.
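
    A minimal sketch of that combination step, assuming the rasterio and numpy Python packages; the layer indices below are placeholders, so the PDF report should be checked for the population range that each x actually corresponds to.

        # Combine several layers by the per-pixel minimum travel time.
        import numpy as np
        import rasterio

        layers = ["travel_time_to_cities_9.tif",    # placeholder indices; see the
                  "travel_time_to_cities_10.tif",   # PDF report for the population
                  "travel_time_to_cities_11.tif"]   # range behind each layer

        combined, profile = None, None
        for path in layers:
            with rasterio.open(path) as src:
                band = src.read(1)        # uint16; 65535 marks no-data
                profile = src.profile     # georeferencing reused for the output
            combined = band if combined is None else np.minimum(combined, band)

        # 65535 is both the no-data value and the largest uint16 value, so the
        # per-pixel minimum keeps any valid travel time and stays 65535 only
        # where every input layer is no-data.
        profile.update(compress="lzw")
        with rasterio.open("travel_time_5k_to_50k.tif", "w", **profile) as dst:
            dst.write(combined, 1)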

    The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.

    The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface accumulate through the cost function, so locations far from any target are likely to show a greater divergence from a plausible travel time than locations close to a target. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although the layers will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys use the fastest mode of transport and take the shortest path.

  3. Laboratory effect perception during virtual stages auralization Dataset

    • paperswithcode.com
    Updated May 25, 2025
    Cite
    Ernesto Accolti (2025). Laboratory effect perception during virtual stages auralization Dataset [Dataset]. https://paperswithcode.com/dataset/laboratory-effect-perception-during-virtual
    Dataset updated
    May 25, 2025
    Authors
    Ernesto Accolti
    Description

    Introduction

    These audio files accompany the preprint by Accolti (2025), which presents a preliminary study on the effect of the acoustical conditions of three different rooms on the perception of virtual stages for music. The three laboratory rooms include:

    - An anechoic room, which represents an ideal recording condition.
    - A custom-made hearing booth 1 with insufficient sound absorption, likely representing the worst-case scenario.
    - A custom-made hearing booth 2 with better, achievable absorption, serving as a compromise scenario.

    The aim of the study is to assess how these environments affect the perception of virtual stages for music.

    Description of the virtual stages

    Two virtual stage configurations were simulated:

    - A small stage: 12 m (width) × 10 m (depth) × 6 m (height)
    - A large stage: 24 m (width) × 10 m (depth) × 12 m (height)

    Both stages share a common audience area with dimensions: 41.5 m (length) × 23 m (width) × 19 m (height). Refer to the virtual room model in Accolti [2025].

    The surface properties used were:

    - Audience area: absorption coefficient = 0.80, scattering coefficient = 0.70
    - Remaining surfaces: absorption coefficient = 0.20, scattering coefficient = 0.10

    TABLE I: Main conditions of the three laboratory rooms

    Room             Width   Length   Height   α (absorption coefficient)
    Anechoic room    3.5 m   4.5 m    2.5 m    0.99
    Hearing booth 1  2.0 m   2.0 m    2.0 m    0.50
    Hearing booth 2  2.1 m   3.0 m    2.5 m    0.97

    Soundfield simulation

    Simulations were carried out using Raven [Schröder & Vorländer, 2011], a software based on image-source and ray-tracing methods. Raven allows the direct sound to be skipped, making it possible to simulate only the reflections in the room.

    A violist was placed at the center of each virtual stage. The directivity of the sound source and the listener's head-related transfer function (HRTF) were modeled using public databases [Ackermann & Brinkmann, 2024; Brinkmann et al., 2017].

    The anechoic recording used is the first 6 seconds of the third movement of Summer from Vivaldi’s Four Seasons (RV315), sourced from the Sorbonne University database [Thery & Katz, 2019].

    Audio file codes

    Each audio file is named using the format:

    R

    (empty) → no lab effect included

    v: default (anechoic rendering in the concert hall)

    u: lab effect only (the simulated coloration due to the room)

    T: combined (anechoic rendering + lab effect)

    How to listen and compare

    You can aurally compare the pure virtual stage simulation (e.g., Rl_v) with the colored versions due to each laboratory room:

    Rl99_T: Anechoic room

    Rl50_T: Hearing booth 1

    Rl97_T: Hearing booth 2

    Alternatively, load Rl_v into a DAW and add the isolated coloration (Rl99_u, Rl50_u, Rl97_u) as a second track. You may use mute/unmute for A/B comparisons.

    Similar comparisons can be made with the small virtual hall using:

    Rs_v vs Rs99_T, Rs50_T, Rs97_T

    Rs_v in one track and Rs99_u, Rs50_u, Rs97_u in another track
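
    The same A/B comparison can also be assembled outside a DAW. The sketch below is only an illustration: it assumes the files are WAV with a shared sample rate, and that the names follow the codes listed above.

        # Illustrative sketch: mix the pure virtual-stage rendering with the isolated
        # lab-room coloration, approximating the corresponding combined (_T) file.
        import numpy as np
        import soundfile as sf

        dry, fs = sf.read("Rl_v.wav")          # pure large-stage rendering
        colour, fs2 = sf.read("Rl99_u.wav")    # isolated coloration of the anechoic room
        assert fs == fs2, "sample rates must match"

        n = min(len(dry), len(colour))         # align lengths before summing
        mix = dry[:n] + colour[:n]             # rendering + lab effect (compare with Rl99_T)

        sf.write("Rl99_mix_check.wav", mix, fs)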

    References

    Accolti, E. (2025). Effect of laboratory conditions on the perception of virtual stages for music. arXiv preprint. https://arxiv.org/abs/2505.20552

    Ackermann, D., & Brinkmann, F. (2024). A database with directivities of musical instruments. J. Audio Eng. Soc., 72(3).

    Brinkmann, F., Lindau, A., Weinzierl, S., Van De Par, S., Müller-Trapet, M., Opdam, R., & Vorländer, M. (2017). A high resolution and full-spherical head-related transfer function database for different head-above-torso orientations. Journal of the Audio Engineering Society, 65(10), 841–848.

    Schröder, D., & Vorländer, M. (2011). Raven: A real-time framework for the auralization of interactive virtual environments. In: Forum Acusticum, pp. 1541–1546.

    Thery, D., & Katz, B. F. G. (2019). Anechoic audio and 3D-video content database of small ensemble performances for virtual concerts. In: International Congress on Acoustics (ICA 2019).

  4. Number of neural dimensions returned by LocaNMF for each region/hemisphere in the two-view dataset

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Matthew R. Whiteway; Dan Biderman; Yoni Friedman; Mario Dipoppa; E. Kelly Buchanan; Anqi Wu; John Zhou; Niccolò Bonacchi; Nathaniel J. Miska; Jean-Paul Noel; Erica Rodriguez; Michael Schartner; Karolina Socha; Anne E. Urai; C. Daniel Salzman; John P. Cunningham; Liam Paninski (2023). Number of neural dimensions returned by LocaNMF for each region/hemsiphere in the two-view dataset. [Dataset]. http://doi.org/10.1371/journal.pcbi.1009439.t001
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Matthew R. Whiteway; Dan Biderman; Yoni Friedman; Mario Dipoppa; E. Kelly Buchanan; Anqi Wu; John Zhou; Niccolò Bonacchi; Nathaniel J. Miska; Jean-Paul Noel; Erica Rodriguez; Michael Schartner; Karolina Socha; Anne E. Urai; C. Daniel Salzman; John P. Cunningham; Liam Paninski
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Number of neural dimensions returned by LocaNMF for each region/hemisphere in the two-view dataset.
