26 datasets found
  1. EEG_Auditory_Oddball_Preprocessed_Data

    • figshare.com
    bin
    Updated Jan 31, 2019
    + more versions
    Cite
    Clare D Harris; Elise G Rowe; Roshini Randeniya; Marta I Garrido (2019). EEG_Auditory_Oddball_Preprocessed_Data [Dataset]. http://doi.org/10.6084/m9.figshare.5812764.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Jan 31, 2019
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Clare D Harris; Elise G Rowe; Roshini Randeniya; Marta I Garrido
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was obtained at the Queensland Brain Institute, Australia, using a 64-channel Biosemi EEG system. 21 healthy participants completed an auditory oddball paradigm, as described in:

    Garrido, M.I., Rowe, E.G., Halasz, V., & Mattingley, J. (2017). Bayesian mapping reveals that attention boosts neural responses to predicted and unpredicted stimuli. Cerebral Cortex, 1-12. DOI: 10.1093/cercor/bhx087

    If you use this dataset, please cite its DOI as well as the associated methods paper:

    Harris, C.D., Rowe, E.G., Randeniya, R. and Garrido, M.I. (2018). Bayesian Model Selection Maps for group studies using M/EEG data.

    For scripts to analyse the data, please see: https://github.com/ClareDiane/BMS4EEG

  2. SamSrf v5.84 (pRF mapping toolbox) - OUT OF DATE!

    • figshare.com
    zip
    Updated Sep 25, 2024
    Cite
    SamPenDu (2024). SamSrf v5.84 (pRF mapping toolbox) - OUT OF DATE! [Dataset]. http://doi.org/10.6084/m9.figshare.1344765.v25
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    SamPenDu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is an out-of-date version of our pRF mapping toolbox and is no longer supported! While this version should be stable, USE IT AT YOUR OWN RISK! We instead recommend our new version, SamSrf X, which will continue to be updated with bugfixes and new features. You can also use the new version for further analysis of maps from the old version. SamSrf X is available for download at: http://osf.io/2rgsm

    Version: 5.84 (18-09-2017)

    Our MATLAB toolbox for pRF mapping analysis. It uses SPM8 or SPM12 and FreeSurfer functionality for preprocessing, and also requires the Statistics Toolbox, the Optimization Toolbox, and the Curve Fitting Toolbox (not strictly necessary) for MATLAB. An extensive documentation "cookbook" is included. Please contact Sam (sampendu.wordpress.com) with any questions, but be advised that we are not able to provide tech support for people we don't collaborate with. As of version 5.63, we include a new tutorial explaining how to delineate visual areas using the DelineationTool in MATLAB and giving advice on what to do with tricky retinotopic maps.

  3. Data from: Remotely sensed data, field measurements, and MATLAB code used to produce image-derived velocity maps for a reach of the Sacramento River near Glenn, CA, September 16-19, 2024

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Nov 20, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Remotely sensed data, field measurements, and MATLAB code used to produce image-derived velocity maps for a reach of the Sacramento River near Glenn, CA, September 16-19, 2024 [Dataset]. https://catalog.data.gov/dataset/remotely-sensed-data-field-measurements-and-matlab-code-used-to-produce-image-derived-v-19
    Explore at:
    Dataset updated
    Nov 20, 2025
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Glenn, Sacramento River
    Description

    This data release provides remotely sensed data, field measurements, and MATLAB code associated with an effort to produce image-derived velocity maps for a reach of the Sacramento River in California's Central Valley. Data collection occurred from September 16-19, 2024, and involved cooperators from the Intelligent Robotics Group of the National Aeronautics and Space Administration (NASA) Ames Research Center and the National Oceanic and Atmospheric Administration (NOAA) Southwest Fisheries Science Center.

    The remotely sensed data were obtained from an Uncrewed Aircraft System (UAS) and are stored in Robot Operating System (ROS) .bag files. Within these files, the various data types are organized into ROS topics, including: images from a thermal camera, measurements of the distance from the UAS down to the water surface made with a laser range finder, and position and orientation data recorded by a Global Navigation Satellite System (GNSS) receiver and Inertial Measurement Unit (IMU) during the UAS flights. This instrument suite is part of an experimental payload called the River Observing System (RiOS) designed for measuring streamflow; further detail is provided in the metadata file associated with this data release. For the September 2024 test flights, the RiOS payload was deployed from a DJI Matrice M600 Pro hexacopter hovering approximately 270 m above the river. At this altitude, the thermal images have a pixel size of approximately 0.38 m but are not geo-referenced.

    Two types of ROS .bag files are provided in separate zip folders. The first, Baguettes.zip, contains "baguettes" that include 15-second subsets of data with a reduced sampling rate for the GNSS and IMU. The second, FullBags.zip, contains the full set of ROS topics recorded by RiOS, subset to include only the time ranges during which the UAS was hovering in place over one of 11 cross sections along the reach. The start times are included in the .bag file names as portable operating system interface (posix) time stamps. To view the data within ROS .bag files, the Foxglove Studio program linked below is freely available and provides a convenient interface. Note that to view the thermal images, the contrast will need to be adjusted to minimum and maximum values around 12,000 to 15,000, though some further refinement of these values might be necessary to enhance the display.

    To enable geo-referencing of the thermal images in a post-processing mode, another M600 hexacopter equipped with a standard visible camera was deployed along the river to acquire images from which an orthophoto was produced: 20240916_SacramentoRiver_Ortho_5cm.tif. This orthophoto has a spatial resolution of 0.05 m and is in the Universal Transverse Mercator (UTM) coordinate system, Zone 10. To assess the accuracy of the orthophoto, 21 circular aluminum ground control targets visible in both thermal and RGB (red, green, blue) images were placed in the field and their locations surveyed with a Real-Time Kinematic (RTK) GNSS receiver. The coordinates of these control points are provided in the file SacGCPs20240916.csv. Please see the metadata for additional information on the camera, the orthophoto production process, and the RTK GNSS survey.

    The thermal images were used as input to Particle Image Velocimetry (PIV) algorithms to infer surface flow velocities throughout the reach. To assess the accuracy of the resulting image-derived velocity estimates, field measurements of flow velocity were obtained using a SonTek M9 acoustic Doppler current profiler (ADCP). These data were acquired along a series of 11 cross sections oriented perpendicular to the primary downstream flow direction and spaced approximately 150 m apart. At each cross section, the boat from which the ADCP was deployed made four passes across the channel and the resulting data were then aggregated into mean cross sections using the Velocity Mapping Toolbox (VMT) referenced below (Parsons et al., 2013). The VMT output was further processed as described in the metadata and ultimately led to a single comma-delimited text file, SacAdcp20240918.csv, with cross section numbers, spatial coordinates (UTM Zone 10N), cross-stream distances, velocity vector components, and water depths.

    To assess the sensitivity of thermal image velocimetry to environmental conditions, air and water temperatures were recorded using a pair of Onset HOBO U20 pressure transducer data loggers set to record pressure and temperature. Deploying one data logger in the air and one in the water also provided information on variations in water level during the test flights. The resulting temperature and water level time series are provided in the file HoboDataSummary.csv with a one-minute sampling interval.

    These data sets were used to develop and test a new framework for mapping flow velocities in river channels in approximately real time using images from a UAS as they are acquired. Prototype code for implementing this approach was developed in MATLAB and is included in the data release as a zip folder called VelocityMappingCode.zip. Further information on the individual functions (*.m files) included within this folder is available in the metadata file associated with this data release. The code is provided as is and is intended for research purposes only. Users are advised to thoroughly read the metadata file associated with this data release to understand the appropriate use and limitations of the data and code provided herein.
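    The .bag filenames embed their start times as posix stamps, and the ADCP summary ships as a plain CSV; a quick MATLAB sketch for working with both (the numeric stamp below is a placeholder, not a value from this release):

      ts = 1726500000;                              % placeholder posix stamp [s]
      t  = datetime(ts, 'ConvertFrom', 'posixtime', 'TimeZone', 'America/Los_Angeles')
      adcp = readtable('SacAdcp20240918.csv');      % cross sections, UTM coordinates, velocities, depths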

  4. Data set for a comprehensive tutorial on the SOM-RPM toolbox for MATLAB

    • opal.latrobe.edu.au
    • researchdata.edu.au
    hdf
    Updated Aug 22, 2024
    Cite
    Sarah Bamford; Wil Gardner; Paul Pigram; Ben Muir; David Winkler; Davide Ballabio (2024). Data set for a comprehensive tutorial on the SOM-RPM toolbox for MATLAB [Dataset]. http://doi.org/10.26181/25648905.v2
    Explore at:
    Available download formats: hdf
    Dataset updated
    Aug 22, 2024
    Dataset provided by
    La Trobe
    Authors
    Sarah Bamford; Wil Gardner; Paul Pigram; Ben Muir; David Winkler; Davide Ballabio
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This data set is uploaded as supporting information for the publication entitled "A Comprehensive Tutorial on the SOM-RPM Toolbox for MATLAB". The attached file 'case_study' includes the following:

    X: data from a ToF-SIMS hyperspectral image; a stage raster containing 960 x 800 pixels with 963 associated m/z peaks.

    pk_lbls: the m/z label for each of the 963 m/z peaks.

    mdl and mdl_masked: SOM-RPM models created using the SOM-RPM tutorial provided within the cited article.

    Additional details about the datasets can be found in the published article. V2 contains modified peak lists that show intensity-weighted m/z rather than the peak midpoint. If you use this data set in your work, please cite our work as follows: [LINK TO BE ADDED TO PAPER ONCE DOI RECEIVED]
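    A minimal sketch for a first look at the case-study file, assuming it is a MATLAB-readable MAT/HDF file holding the variables named above and that X is stored as a 960 x 800 x 963 array (reshape first if it is flattened):

      S = load('case_study.mat');        % hypothetical filename for the 'case_study' file
      X = S.X;                           % 960 x 800 pixels, 963 m/z peaks
      total = sum(X, 3);                 % summed intensity across all peaks
      imagesc(total), axis image, colorbar
      title('ToF-SIMS stage raster: total intensity over 963 m/z peaks')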

  5. Data from: Datasets and Supporting Materials for the IPIN 2021 Competition Track 3 (Smartphone-based, off-site)

    • data.niaid.nih.gov
    • recerca.uoc.edu
    Updated Jun 14, 2022
    Cite
    Joaquin Torres-Sospedra; Fernando Aranda Polo; Felipe Parralejo; Vladimir Bellavista Parent; Fernando Alvarez; Antoni Pérez-Navarro; Antonio R. Jimenez; Fernando Seco (2022). Datasets and Supporting Materials for the IPIN 2021 Competition Track 3 (Smartphone-based, off-site) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5948677
    Explore at:
    Dataset updated
    Jun 14, 2022
    Dataset provided by
    Universitat Oberta de Catalunya
    Universidad de Extremadura
    Centro Superior de Investigaciones Científicas
    UBIK Geospatial Solutions, SL
    Authors
    Joaquin Torres-Sospedra; Fernando Aranda Polo; Felipe Parralejo; Vladimir Bellavista Parent; Fernando Alvarez; Antoni Pérez-Navarro; Antonio R. Jimenez; Fernando Seco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This package contains the datasets and supplementary materials used in the IPIN 2021 Competition.

    Contents:

    IPIN2021_Track03_TechnicalAnnex_V1-02.pdf: Technical annex describing the competition

    01-Logfiles: This folder contains a subfolder with the 105 training logfiles (80 single-floor indoor, 10 in outdoor areas, 10 in the indoor auditorium with floor transitions, and 5 in floor-transition zones), a subfolder with the 20 validation logfiles, and a subfolder with the 3 blind evaluation logfiles as provided to competitors.

    02-Supplementary_Materials: This folder contains the MATLAB/Octave parser, the raster maps, and the files for the MATLAB tools and the trajectory visualization.

    03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over the 82 evaluation points; it requires the MATLAB Mapping Toolbox. The ground truth is also provided as 3 CSV files. Since results must be reported at a 2 Hz frequency starting from apptimestamp 0, the GT files include the closest timestamp matching the timing provided by competitors for the 3 evaluation logfiles. The folder also contains samples of reported estimations and the corresponding results.
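    A minimal sketch of this scoring metric in MATLAB (the error vector below is a placeholder; the official scripts in 03-Evaluation are authoritative):

      % Hypothetical per-point positioning errors in metres for the evaluation points.
      err = [2.1 0.8 3.4 1.9 5.2 2.7 1.1 4.0];
      score = prctile(err, 75)   % 75th-percentile error (Statistics and Machine Learning Toolbox)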

    Please cite the following works when using the datasets included in this package:

    Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2021 Competition Track 3 (Smartphone-based, off-site). http://dx.doi.org/10.5281/zenodo.5948678

  6. Transit shipboard bathymetry from the ODEMAR cruise (Pourquoi Pas?, 2013)

    • seanoe.org
    bin
    Updated Nov 20, 2023
    Cite
    Javier Escartin (2023). Transit shipboard bathymetry from the ODEMAR cruise (Pourquoi Pas?, 2013) [Dataset]. http://doi.org/10.17882/97230
    Explore at:
    Available download formats: bin
    Dataset updated
    Nov 20, 2023
    Dataset provided by
    SEANOE
    Authors
    Javier Escartin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Nov 15, 2013 - Nov 19, 2013
    Area covered
    Variables measured
    Bathymetry and Elevation
    Description

    Multibeam bathymetry data, gridded at ~100 m, acquired during the transit towards the Mid-Atlantic Ridge axis (13°20'N and 13°30'N oceanic detachments area) during the ODEMAR cruise (2013). Bathymetric data were acquired by the Pourquoi pas? multibeam system (Reson SeaBat 7150). Data are provided as GeoTIFFs (WGS84), float32, PackBits-compressed, produced with MATLAB 9.14 and Mapping Toolbox 5.5; pixel size is ~0.001° in all grids.

    odm_transit_16nov.tif: extent -27.8376 to -26.3756 (lon), 16.3530 to 16.6860 (lat); 1463 x 334 cells; depth statistics (m): min -4866.74, mean -4612.48, max -4139.56; 29.85% valid cells.

    odm_transit_17nov.tif: extent -32.4956 to -27.8196 (lon), 15.5934 to 16.4454 (lat); 4677 x 853 cells; depth statistics (m): min -5562.19, mean -5144.80, max -4458.82; 50.04% valid cells.

    odm_transit_18nov.tif: extent -36.9280 to -32.4730 (lon), 14.8264 to 15.7054 (lat); 4456 x 880 cells; depth statistics (m): min -5946.16, mean -5712.17, max -5171.58; 7.57% valid cells.

    odm_transit_19nov.tif: extent -41.3188 to -36.8828 (lon), 14.0844 to 14.9494 (lat); 4437 x 866 cells; depth statistics (m): min -5426.12, mean -5176.30, max -4540.53; 7.67% valid cells.

    odm_transit_20nov.tif: extent -44.9630 to -41.3030 (lon), 13.5019 to 14.1829 (lat); 3661 x 682 cells; depth statistics (m): min -4821.66, mean -4347.30, max -3173.79; 42.31% valid cells.
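    A minimal MATLAB sketch for loading and plotting one of these grids (requires the Mapping Toolbox, R2020a or later; the nodata handling is an assumption, so check georasterinfo on your copy):

      [Z, R] = readgeoraster('odm_transit_16nov.tif', 'OutputType', 'double');
      info = georasterinfo('odm_transit_16nov.tif');
      if ~isempty(info.MissingDataIndicator)
          Z = standardizeMissing(Z, info.MissingDataIndicator);   % mask nodata cells
      end
      geoshow(Z, R, 'DisplayType', 'surface'), demcmap(Z)
      title('ODEMAR transit bathymetry, 16 Nov 2013')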

  7. Data from: Maps of water depth derived from satellite images of selected reaches of the American, Colorado, and Potomac Rivers acquired in 2020 and 2021 (ver. 2.0, September 2024)

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Nov 20, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Maps of water depth derived from satellite images of selected reaches of the American, Colorado, and Potomac Rivers acquired in 2020 and 2021 (ver. 2.0, September 2024) [Dataset]. https://catalog.data.gov/dataset/maps-of-water-depth-derived-from-satellite-images-of-selected-reaches-of-the-american-colo
    Explore at:
    Dataset updated
    Nov 20, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Colorado, United States
    Description

    Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques.

    The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:

    1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
    2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
    3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
    4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.

    MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m, and the figure included on this landing page provides a flow chart illustrating the four different neural network-based depth retrieval methods. As examples of the resulting models, MATLAB *.mat data files containing the best-performing neural network model for each site are provided below, along with a file that lists the PlanetScope image identifiers for the images that were used for each site. To develop and test this new NNDR approach, the method was applied to satellite images from three rivers across the U.S.: the American, Colorado, and Potomac. For each site, field measurements of water depth available through other data releases were used for training and validation.

    The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: X_mean-spec.tif, X_mean-depth.tif, X_NN-depth.tif, and X-single-image.tif, where X denotes the site name. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
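    As a toy illustration of the Mean-spec idea described above (this is not the release's NN_depth_ensembling.m; the array shapes, training indices, and network size are all assumptions, and the Deep Learning Toolbox is required):

      imgs = rand(100, 120, 8, 12);                 % stand-in H x W x band x date image stack
      meanImg = mean(imgs, 4);                      % Mean-spec: average over time
      X = reshape(meanImg, [], size(meanImg, 3))';  % bands x pixels
      trainIdx = randperm(size(X, 2), 500);         % hypothetical pixels with field depths
      d = 5*rand(1, 500);                           % placeholder depths [m]
      net = feedforwardnet(10);                     % small regression network
      net = train(net, X(:, trainIdx), d);
      depthMap = reshape(net(X), size(meanImg, 1), size(meanImg, 2));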

  8. Maps of water depth derived from satellite images of the American River acquired in October 2020

    • catalog.data.gov
    • s.cnmilf.com
    Updated Nov 26, 2025
    Cite
    U.S. Geological Survey (2025). Maps of water depth derived from satellite images of the American River acquired in October 2020 [Dataset]. https://catalog.data.gov/dataset/maps-of-water-depth-derived-from-satellite-images-of-the-american-river-acquired-in-octobe
    Explore at:
    Dataset updated
    Nov 26, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    American River, United States
    Description

    Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques.

    The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:

    1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
    2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
    3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
    4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.

    MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m available on the main landing page for the data release of which this is a child item, along with a flow chart illustrating the four different neural network-based depth retrieval methods. To develop and test this new NNDR approach, the method was applied to satellite images from the American River near Fair Oaks, CA, acquired in October 2020. Field measurements of water depth available through another data release (Legleiter, C.J., and Harrison, L.R., 2022, Field measurements of water depth from the American River near Fair Oaks, CA, October 19-21, 2020: U.S. Geological Survey data release, https://doi.org/10.5066/P92PNWE5) were used for training and validation.

    The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: American_mean-spec.tif, American_mean-depth.tif, American_NN-depth.tif, and American-single-image.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
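    Once downloaded, a depth map can be checked against independent field depths; a hedged sketch (Mapping Toolbox assumed; 'field_depths.csv' and its columns are hypothetical, the actual validation data live in the field-measurement release cited above):

      [Z, R] = readgeoraster('American_NN-depth.tif');   % one of the four maps
      T = readtable('field_depths.csv');                 % hypothetical file with x, y, depth columns
      zHat = mapinterp(double(Z), R, T.x, T.y);          % sample raster at field points
                                                         % (use geointerp if R is geographic)
      rmse = sqrt(mean((zHat - T.depth).^2, 'omitnan'))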

  9. Data from: Linear Supervised Transfer Learning Toolbox

    • pub.uni-bielefeld.de
    Updated Dec 21, 2018
    Cite
    Benjamin Paaßen; Alexander Schulz (2018). Linear Supervised Transfer Learning Toolbox [Dataset]. https://pub.uni-bielefeld.de/record/2912671
    Explore at:
    Dataset updated
    Dec 21, 2018
    Authors
    Benjamin Paaßen; Alexander Schulz
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Description

    This MATLAB toolbox provides several algorithms to learn a linear mapping from an n-dimensional source space to an m-dimensional target space, such that a classification or clustering model trained in the source space becomes applicable in the target space. The source space model is assumed to be either a vector quantization model (such as learning vector quantization and variations thereof, neural gas, or k-means) or a (labelled) mixture of Gaussians. The target space may be any vector space, but this toolbox will typically fail if the relationship between source and target space is highly nonlinear. In contrast, it is particularly effective if the difference between source and target space can be expressed in terms of simple linear transformations such as rotations and scalings.
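    To illustrate the general idea (a least-squares analogue, not the toolbox's actual algorithms or API, and it assumes corresponded source/target sample pairs):

      Xs = randn(5, 200);                 % source-space samples (n = 5)
      A  = [eye(3), 0.5*ones(3, 2)];      % ground-truth transform (unknown in practice)
      Xt = A*Xs + 0.01*randn(3, 200);     % corresponding target-space samples (m = 3)
      W  = Xt*pinv(Xs);                   % least-squares estimate of the linear map
      protoSrc = randn(5, 4);             % e.g. prototypes of a source-space LVQ model
      protoTgt = W*protoSrc;              % mapped prototypes, usable on target-space data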

  10. Datasets and Supporting Materials for the IPIN 2022 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online)

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Jan 12, 2024
    + more versions
    Cite
    Miguel Ortiz; Ni Zhu; Ziyou Li; Valérie Renaudin (2024). Datasets and Supporting Materials for the IPIN 2022 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online) [Dataset]. http://doi.org/10.5281/zenodo.10497364
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 12, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Miguel Ortiz; Ni Zhu; Ziyou Li; Valérie Renaudin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This package contains the datasets and supplementary materials used in the IPIN 2022 Competition.

    Contents:
    - IPIN2022_Track4_CallForCompetition_v1.3.pdf: Call for competition including the technical annex describing the competition

    - 01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
    - IPIN2022_T4_xxx.txt: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS, POSI frames
    - IPIN2022_T4_xxx_yyy_ephem.nav: navigation file for trajectory estimation.

    - 02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 12 hours that can be used for sensor bias estimation (Allan variance; a sketch follows this list) and a logfile of about 1 minute that can be used to calibrate the magnetometer sensor (Magnetometer Calibration).

    - 03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over all evaluation points; it requires the MATLAB Mapping Toolbox. We also provide the ground truth of the 2 scoring trials as CSV files (at the full 60 Hz rate and at the evaluation points only). The folder contains samples of reported estimations and the corresponding results. Just run script_Eval_IPIN2022.mat.
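    As referenced in the supplementary-materials item above, a minimal sketch of a non-overlapped Allan-deviation computation in MATLAB (the signal, sample rate, and cluster sizes are placeholders, not values from the release):

      fs = 100;                              % assumed IMU sample rate [Hz]
      omega = 0.002*randn(1e5, 1);           % placeholder static gyro signal [rad/s]
      m = unique(round(logspace(0, 3, 20))); % cluster sizes in samples
      ad = zeros(numel(m), 1);
      for k = 1:numel(m)
          n = floor(numel(omega)/m(k));                      % number of clusters
          cm = mean(reshape(omega(1:n*m(k)), m(k), n), 1);   % cluster means
          ad(k) = sqrt(0.5*mean(diff(cm).^2));               % Allan deviation at this tau
      end
      loglog(m/fs, ad), grid on
      xlabel('\tau [s]'), ylabel('\sigma_A(\tau)')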

    We provide additional information on the competition at: https://evaal.aaloa.org/2022/call-for-competitions

    Citation Policy:
    Please cite the following works when using the datasets included in this package:

    Ortiz, M.; Zhu, N.; Ziyou L.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2022 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2022
    https://doi.org/10.5281/zenodo.10497364

    Check the citation policy at: https://doi.org/10.5281/zenodo.10497364

    Contact:

    For any further questions about the database and this competition track, please contact:

    Miguel Ortiz (miguel.ortiz@univ-eiffel.fr) at the University Gustave Eiffel, France.
    Ni Zhu (ni.zhu@univ-eiffel.fr) at the University Gustave Eiffel, France.

    Acknowledgements:

    We thank Frederic Le-Bourhis and Aravind Ramseh from Univ-Eiffel for their support in collecting the datasets.

    We extend our appreciation to the staff at the Nantes Central station for their invaluable support throughout our collection days.

  11. Maps of water depth derived from satellite images of the Potomac River acquired in July and August of 2021

    • s.cnmilf.com
    • catalog.data.gov
    Updated Oct 1, 2025
    Cite
    U.S. Geological Survey (2025). Maps of water depth derived from satellite images of the Potomac River acquired in July and August of 2021 [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/maps-of-water-depth-derived-from-satellite-images-of-the-potomac-river-acquired-in-july-an
    Explore at:
    Dataset updated
    Oct 1, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Potomac River
    Description

    Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques.

    The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:

    1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
    2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
    3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
    4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.

    MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m available on the main landing page for the data release of which this is a child item. To develop and test this new NNDR approach, the method was applied to satellite images from the Potomac River near Brunswick, MD, acquired in July and August of 2021. Field measurements of water depth available through another data release (Duda, J.M., Greise, A.J., and Young, J.A., 2020, Potomac River ADCP Bathymetric Survey, October 2019: U.S. Geological Survey data release, https://doi.org/10.5066/P9GOZZYX) were used for training and validation.

    The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: Potomac_mean-spec.tif, Potomac_mean-depth.tif, Potomac_NN-depth.tif, and Potomac-single-image.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.

  12. How to set the input parameters: an example.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Alessandro Montalto; Luca Faes; Daniele Marinazzo (2023). How to set the input parameters: an example. [Dataset]. http://doi.org/10.1371/journal.pone.0109462.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Alessandro Montalto; Luca Faes; Daniele Marinazzo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    How to set the input parameters: an example.

  13. Datasets and Supporting Materials for the IPIN 2024 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online)

    • producciocientifica.uv.es
    • data.niaid.nih.gov
    Updated 2024
    Cite
    Ortiz, Miguel; Li, Ziyou; Zhu, Ni; Renaudin, Valerie; Potorti, Francesco; Torres-Sospedra, Joaquin (2024). Datasets and Supporting Materials for the IPIN 2024 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online) [Dataset]. https://producciocientifica.uv.es/documentos/67bc32b5478fbf5d29390b0b
    Explore at:
    Dataset updated
    2024
    Authors
    Ortiz, Miguel; Li, Ziyou; Zhu, Ni; Renaudin, Valerie; Potorti, Francesco; Torres-Sospedra, Joaquin
    Description

    This package contains the datasets and supplementary materials used in the IPIN 2024 Competition.

    Contents:

    - IPIN2024_Track4_CallForCompetition_v1.4.pdf: Call for competition including the technical annex describing the competition

    • 01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
    - IPIN2024_T4_xxx.txt: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS, POSI frames
    - IPIN2024_T4_xxx_gnss_ephem.nav: GNSS navigation files for trajectory estimation

    • 02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 14 hours that can be used for sensor bias estimation (Allan variance) and a logfile of about 1 minute that can be used to calibrate the magnetometer sensor (Magnetometer Calibration).

    • 03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over all evaluation points; it requires the MATLAB Mapping Toolbox. We also provide the ground truth of all trials (1 Testing + 2 Scoring) as 2 MAT and KML files. The folder contains samples of reported estimations and the corresponding results. Just run script_Eval_IPIN2024.mat.

    We provide additional information on the competition at: https://competition.ipin-conference.org/2024/call-for-competition

    Citation Policy: Please cite the following works when using the datasets included in this package:

    Ortiz, M.; Ziyou L.; Zhu, N.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2024 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2024. https://doi.org/10.5281/zenodo.14501047

    Check the citation policy at: https://doi.org/10.5281/zenodo.14501047

    Contact: For any further questions about the database and this competition track, please contact:

    Miguel Ortiz (miguel.ortiz@univ-eiffel.fr) at the University Gustave Eiffel, France.
    Ni Zhu (ni.zhu@univ-eiffel.fr) at the University Gustave Eiffel, France.

    Acknowledgements: We thank the staff at "La Cité des Congrès" based in Nantes for their unwavering patience and invaluable support throughout our collection days.

  14. Maps of water depth derived from satellite images of the Colorado River acquired in March and April of 2021

    • s.cnmilf.com
    • catalog.data.gov
    Updated Oct 29, 2025
    Cite
    U.S. Geological Survey (2025). Maps of water depth derived from satellite images of the Colorado River acquired in March and April of 2021 [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/maps-of-water-depth-derived-from-satellite-images-of-the-colorado-river-acquired-in-march-
    Explore at:
    Dataset updated
    Oct 29, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Colorado River
    Description

    Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques.

    The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:

    1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
    2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
    3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
    4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.

    MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m available on the main landing page for the data release of which this is a child item, along with a flow chart illustrating the four different neural network-based depth retrieval methods. To develop and test this new NNDR approach, the method was applied to satellite images from the Colorado River near Lees Ferry, AZ, acquired in March and April of 2021. Field measurements of water depth available through another data release (Legleiter, C.J., Debenedetto, G.P., and Forbes, B.T., 2022, Field measurements of water depth from the Colorado River near Lees Ferry, AZ, March 16-18, 2021: U.S. Geological Survey data release, https://doi.org/10.5066/P9HZL7BZ) were used for training and validation.

    The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: Colorado_mean-spec.tif, Colorado_mean-depth.tif, Colorado_NN-depth.tif, and Colorado-single-image.tif. In addition, to assess the robustness of the Mean-spec and NN-depth methods to the introduction of a large pulse of sediment by a flood event that occurred partway through the image time series, depth maps from before and after the flood are provided in the files Colorado_Mean-spec_after_flood.tif, Colorado_Mean-spec_before_flood.tif, Colorado_NN-depth_after_flood.tif, and Colorado_NN-depth_before_flood.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
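    To look at the flood response described above, the before/after rasters can be differenced; a short sketch (Mapping Toolbox assumed; filenames are from this release):

      [pre,  R] = readgeoraster('Colorado_NN-depth_before_flood.tif', 'OutputType', 'double');
      [post, ~] = readgeoraster('Colorado_NN-depth_after_flood.tif',  'OutputType', 'double');
      dz = post - pre;                           % positive values = deeper after the flood [m]
      mapshow(dz, R, 'DisplayType', 'surface')   % use geoshow if R is geographic
      colorbar, title('Change in estimated depth across the flood')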

  15. Example of the parameters required to define the methods for an experiment on 5 variables.

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Alessandro Montalto; Luca Faes; Daniele Marinazzo (2023). Example of the parameters required to define the methods for an experiment on 5 variables. [Dataset]. http://doi.org/10.1371/journal.pone.0109462.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Alessandro Montalto; Luca Faes; Daniele Marinazzo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Example of the parameters required to define the methods for an experiment on 5 variables. In the second column the instantaneous effects are neglected both for targets and conditioning. In the third column we set instantaneous effects for some drivers and the respective targets. For example, when the target is 1, instantaneous effects are taken into account for driver 2 (first two rows, right column, parameter idDrivers) and conditioning variable 3 (first row, right column, parameter idOtherLagZero).

  16. Datasets and Supporting Materials for the IPIN 2024 Competition Track 3 (Smartphone-based, off-site)

    • recerca.uoc.edu
    Updated 2024
    Cite
    Torres Sospedra, Joaquin; Crivello, Antonino; Stahlke, Maximilian; Potortì, Francesco; Ortiz, Miguel; Li, Ziyou; Perez-Navarro, Antoni; Jimenez Ruiz, Antonio Ramon (2024). Datasets and Supporting Materials for the IPIN 2024 Competition Track 3 (Smartphone-based, off-site) [Dataset]. https://recerca.uoc.edu/documentos/67bc32b7478fbf5d29390db7?lang=ca
    Explore at:
    Dataset updated
    2024
    Authors
    Torres Sospedra, Joaquin; Crivello, Antonino; Stahlke, Maximilian; Potortì, Francesco; Ortiz, Miguel; Li, Ziyou; Perez-Navarro, Antoni; Jimenez Ruiz, Antonio Ramon
    Description

    This package contains the datasets and supplementary materials used in the IPIN 2024 Competition.

    Contents

    Track-3_TA-2024.pdf: Technical annex describing the competition (Version 1)

    01 Logfiles: This folder contains a subfolder with the 54 training trials, a subfolder with the 4 testing trials (validation), and a subfolder with the 2 blind scoring trials (test) as provided to competitors.

    02 Supplementary_Materials: This folder contains the MATLAB/Octave parser, the raster maps, and the files for the MATLAB tools and the trajectory visualization.

    03 Evaluation: This folder contains the scripts we used to calculate the competition metric, the 75th percentile of the positioning error over the 69 evaluation points; it requires the MATLAB Mapping Toolbox. We also provide the ground truth as 2 CSV files, together with samples of reported estimations and the corresponding results.

    We provide additional information on the competition at: https://competition.ipin-conference.org/2024/call-for-competition

    Citation Policy

    Please cite the following works when using the datasets included in this package:

    Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2024 Competition Track 3 (Smartphone-based, off-site), Zenodo 2024. http://dx.doi.org/10.5281/zenodo.13931119

    Check the updated citation policy at: http://dx.doi.org/10.5281/zenodo.13931119

    Contact

    For any further questions about the database and this competition track, please contact:

    Joaquín Torres-Sospedra
    Departament d'Informatica, Universitat de València, 46100 Burjassot, Spain
    ValgrAI - Valencian Graduate School and Research Network of Artificial Intelligence, Camí de Vera s/n, 46022 Valencia, Spain
    Joaquin.Torres@uv.es - info@jtorr.es

    Antonio R. Jiménez
    Centre of Automation and Robotics (CAR)-CSIC/UPM, Spain
    antonio.jimenez@csic.es

    Antoni Pérez-Navarro
    Faculty of Computer Sciences, Multimedia and Telecommunication, Universitat Oberta de Catalunya, Barcelona, Spain
    aperezn@uoc.edu

    Acknowledgements

    We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer IIS, as well as Miguel Ortiz and Ziyou Li at Université Gustave Eiffel, for their invaluable support in collecting the datasets. Last but certainly not least, we thank Antonino Crivello and Francesco Potortì for their huge effort in georeferencing the competition venue and evaluation points.

    We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days.

    We are also grateful to Francesco Potortì, the ISTI-CNR team (Paolo, Michele & Filippo), and the Fraunhofer IIS team (Chris, Tobi, Max, ...) for their invaluable commitment to organizing and promoting the IPIN competition.

    This work and competition are part of the IPIN 2023 Conference in Nuremberg (Germany) and the IPIN 2024 Conference in Hong Kong.

    Parts of this work received the financial support received from projects and grants:

    POSITIONATE (CIDEXG/2023/17, Conselleria d’Educació, Universitats i Ocupació, Generalitat Valenciana)

    ORIENTATE (H2020-MSCA-IF-2020, Grant Agreement 101023072)

    GeoLibero (from CYTED)

    INDRI (MICINN, ref. PID2021-122642OB-C42, PID2021-122642OB-C43, PID2021-122642OB-C44, MCIU/AEI/FEDER UE)

    MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE)

    TARSIUS (TIN2015-71564-C4-2-R, MINECO/FEDER)

    SmartLoc (CSIC-PIE Ref. 201450E011)

    LORIS (TIN2012-38080-C04-04)

  17. Datasets and Supporting Materials for the IPIN 2023 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online)

    • data.niaid.nih.gov
    • producciocientifica.uv.es
    • +2more
    Updated Jan 12, 2024
    Cite
    Ortiz, Miguel; Zhu, Ni; Li, Ziyou; Renaudin, Valérie; Torres-Sospedra, Joaquin; Crivello, Antonino; Potorti, Francesco (2024). Datasets and Supporting Materials for the IPIN 2023 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8399763
    Explore at:
    Dataset updated
    Jan 12, 2024
    Dataset provided by
    University Gustave Eiffel, AME, GEOLOC
    University of Minho
    National Research Council
    Authors
    Ortiz, Miguel; Zhu, Ni; Li, Ziyou; Renaudin, Valérie; Torres-Sospedra, Joaquin; Crivello, Antonino; Potorti, Francesco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This package contains the datasets and supplementary materials used in the IPIN 2023 Competition.

    Contents:

    - IPIN2023_Track4_CallForCompetition_v2.2.pdf: Call for competition including the technical annex describing the competition

    • 01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
    - IPIN2023_T4_xxx.txt: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS, POSI frames
    - IPIN2023_T4_xxx_gnss_ephem.nav: GNSS navigation file for trajectory estimation

    • 02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 12 hours that can be used for sensor bias estimation (Allan variance) and a logfile of about 1 minute that can be used to calibrate the magnetometer sensor (Magnetometer Calibration).

    • 03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error over all evaluation points; it requires the MATLAB Mapping Toolbox. We also provide the ground truth of the 2 scoring trials as 2 MAT and KML files. The folder contains samples of reported estimations and the corresponding results. Just run script_Eval_IPIN2023.mat.

    We provide additional information on the competition at: https://evaal.aaloa.org/2023/call-for-competition

    Citation Policy: Please cite the following works when using the datasets included in this package:

    Ortiz, M.; Zhu, N.; Ziyou L.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2023 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2023. https://doi.org/10.5281/zenodo.8399764

    Check the citation policy at: https://doi.org/10.5281/zenodo.8399764

    Contact: For any further questions about the database and this competition track, please contact:

    Miguel Ortiz (miguel.ortiz@univ-eiffel.fr) at the University Gustave Eiffel, France.
    Ni Zhu (ni.zhu@univ-eiffel.fr) at the University Gustave Eiffel, France.
    

    Acknowledgements: We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer IIS, as well as Joaquín Torres-Sospedra from Universidade do Minho and Francesco Potortì and Antonino Crivello from ISTI-CNR Pisa, for their support in collecting the datasets.

    We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days.

  18. Total Viewshed of Bohemia

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jun 28, 2024
    Cite
    Alexandra Bucha Rášová; David Novák; Martin Kuna; Blažej Bucha; Filip Pružinec (2024). Total Viewshed of Bohemia [Dataset]. http://doi.org/10.5281/zenodo.5764173
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 28, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexandra Bucha Rášová; David Novák; Martin Kuna; Blažej Bucha; Filip Pružinec
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The dataset is a collection of total viewsheds (cf. Llobera et al. 2010) created for the territory of Bohemia (Czech Republic; ca. 57,000 km²). The total-viewshed calculation is based on the R2 algorithm (see Franklin and Ray 1994) and uses the viewshed function from MATLAB's Mapping Toolbox, which the authors significantly optimized for large-scale parallel computation. To reduce the computational time, individual viewsheds were calculated using every fourth cell as the observer point (cf. Rášová 2017).

    The Digital Terrain Model of the Czech Republic of the 5th Generation (DMR 5G) was used as input for the calculations. Prior to the calculation, the input DEM was cleared of modern landscape elements (e.g. embankments of railways and roads, quarries, etc.; for details see Novák – Pružinec 2022). Eight total visibility models of the territory of Bohemia were constructed using the IT4Innovations research infrastructure (https://www.it4i.cz/). Both the observer and the target heights were set at 1.5 m. The viewsheds differ in two parameters: the visibility radius and the resolution of the input grid. The base model uses a radius of 0.5 km and a cell size of 5 m; further layers are multiples of these parameters: 1 km/10 m, 2 km/20 m, 4 km/40 m, 8 km/80 m, 16 km/160 m and 32 km/320 m. The only exception is the model with a radius of 64 km, which preserves the cell size of the preceding iteration (320 m). In the individual models, the visibility values are given as percentages corresponding to the portion of visible cells within the given radius (0–100%, rounded up to the next integer).
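
    The authors' optimized parallel code is not included in this dataset, but the underlying idea — cast a sight line from each observer cell to every cell within the radius and count the visible ones — can be sketched in plain MATLAB. Everything below is illustrative: the function names are ours, a sampling-based line-of-sight test stands in for the R2 ray casting, and the brute-force loops would be far too slow for a DEM of this size.

        function P = totalViewshedSketch(Z, cs, r)
            % Z: DEM [m]; cs: cell size [m]; r: visibility radius [m].
            h  = 1.5;                          % observer and target height [m]
            rc = round(r / cs);                % radius in cells
            [nr, nc] = size(Z);
            P = nan(nr, nc);
            for i = 1:4:nr                     % every fourth cell as observer
                for j = 1:4:nc
                    vis = 0; tot = 0;
                    for ti = max(1, i-rc):min(nr, i+rc)
                        for tj = max(1, j-rc):min(nc, j+rc)
                            d = hypot(ti - i, tj - j);
                            if d == 0 || d > rc, continue; end
                            tot = tot + 1;
                            vis = vis + lineOfSight(Z, i, j, ti, tj, h);
                        end
                    end
                    P(i, j) = ceil(100 * vis / tot);   % percent visible, rounded up
                end
            end
        end

        function ok = lineOfSight(Z, i, j, ti, tj, h)
            % Visible unless the terrain rises above the straight sight line.
            n  = max(abs(ti - i), abs(tj - j)) + 1;
            ri = linspace(i, ti, n);  rj = linspace(j, tj, n);
            zt = interp2(Z, rj, ri);                         % terrain along the line
            sight = linspace(Z(i, j) + h, Z(ti, tj) + h, n); % eye-to-target line
            ok = all(zt(2:end-1) <= sight(2:end-1));
        end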

    Filenames of individual rasters correspond to the parameters set above. For further details see:

    • Kuna, M. – Novák, D. – Bucha Rašová, A. – Bucha, B. – Machová, B. – Havlice, J. – John, J. – Chvojka, O. 2022: Computing and testing extensive total viewsheds: a case of prehistoric burial mounds in Bohemia. Journal of Archaeological Science 142, 105596. https://doi.org/10.1016/j.jas.2022.105596
    • Novák, D. – Pružinec, F. 2022: Potential and Implications of Automated Pre-Processing of Lidar-Based Digital Elevation Models for Large-Scale Archaeological Landscape Analysis. Available at SSRN: http://dx.doi.org/10.2139/ssrn.4063514
  19. Data from: Datasets and Supporting Materials for the IPIN 2023 Competition Track 3 (Smartphone-based, off-site)

    • producciocientifica.uv.es
    • recerca.uoc.edu
    • +1more
    Updated 2023
    Torres-Sospedra, Joaquín; Crivello, Antonino; Stahlke, Maximilian; Potortì, Francesco; Ortiz, Miguel; Li, Ziyou; Pérez-Navarro, Antoni; Jiménez, Antonio R. (2023). Datasets and Supporting Materials for the IPIN 2023 Competition Track 3 (Smartphone-based, off-site) [Dataset]. https://producciocientifica.uv.es/documentos/67321de4aea56d4af0484e89
    Explore at:
    Dataset updated
    2023
    Authors
    Torres-Sospedra, Joaquín; Crivello, Antonino; Stahlke, Maximilian; Potortì, Francesco; Ortiz, Miguel; Li, Ziyou; Pérez-Navarro, Antoni; Jiménez, Antonio R.
    Description

    This package contains the datasets and supplementary materials used in the IPIN 2023 Competition.

    Contents:

    • Track-3_TA-2023.pdf: Technical annexe describing the competition (Version 2)

    • 01-Logfiles: This folder contains a subfolder with the 54 training trials, a subfolder with the 4 testing trials (validation), and a subfolder with the 2 blind scoring trials (test), as provided to competitors.

    • 02-Supplementary_Materials: This folder contains the Matlab/Octave parser, the raster maps, the files for the Matlab tools and the trajectory visualization.

    • 03-Evaluation: This folder contains the scripts we used to calculate the competition metric, the 75th percentile of the positioning error on the 69 evaluation points. It requires the Matlab Mapping Toolbox. We also provide the ground truth as 2 CSV files, along with samples of reported estimations and the corresponding results.

    We provide additional information on the competition at: https://evaal.aaloa.org/2023/call-for-competition

    Citation Policy: Please cite the following works when using the datasets included in this package:

    Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2023 Competition Track 3 (Smartphone-based, off-site), Zenodo 2023. http://dx.doi.org/10.5281/zenodo.8362205

    Check the updated citation policy at: http://dx.doi.org/10.5281/zenodo.8362205

    Contact: For any further questions about the database and this competition track, please contact:

    Joaquín Torres-Sospedra, Centro ALGORITMI, Universidade do Minho, Portugal (info@jtorr.es, jtorres@algoritmi.uminho.pt)
    Antonio R. Jiménez, Centre of Automation and Robotics (CAR)-CSIC/UPM, Spain (antonio.jimenez@csic.es)
    Antoni Pérez-Navarro, Faculty of Computer Sciences, Multimedia and Telecommunication, Universitat Oberta de Catalunya, Barcelona, Spain (aperezn@uoc.edu)

    Acknowledgements: We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer IIS, as well as Miguel Ortiz and Ziyou Li at Université Gustave Eiffel, for their invaluable support in collecting the datasets. And last but certainly not least, Antonino Crivello and Francesco Potortì for their huge effort in georeferencing the competition venue and evaluation points.

    We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days. We are also grateful to Francesco Potortì, the ISTI-CNR team (Paolo, Michele & Filippo), and the Fraunhofer IIS team (Chris, Tobi, Max, ...) for their invaluable commitment to organizing and promoting the IPIN competition.

    This work and competition belong to the IPIN 2023 Conference in Nuremberg (Germany). Parts of this work received financial support from the following projects and grants:

    • ORIENTATE (H2020-MSCA-IF-2020, Grant Agreement 101023072)
    • GeoLibero (CYTED)
    • INDRI (MICINN, ref. PID2021-122642OB-C42, PID2021-122642OB-C43, PID2021-122642OB-C44, MCIU/AEI/FEDER UE)
    • MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE)
    • TARSIUS (TIN2015-71564-C4-2-R, MINECO/FEDER)
    • SmartLoc (CSIC-PIE Ref. 201450E011)
    • LORIS (TIN2012-38080-C04-04)

  20. Fluvial Energy Balance Model (FLUVIAL-EB)

    • hydroshare.org
    • beta.hydroshare.org
    zip
    Updated May 20, 2024
    Erin Bray; Jeff Dozier (2024). Fluvial Energy Balance Model (FLUVIAL-EB) [Dataset]. http://doi.org/10.4211/hs.091e74a3643542258620d313960291b3
    Explore at:
    zip(1.5 GB)Available download formats
    Dataset updated
    May 20, 2024
    Dataset provided by
    HydroShare
    Authors
    Erin Bray; Jeff Dozier
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2011 - Dec 31, 2011
    Area covered
    Description

    We developed a spectral river energy balance model (FLUVIAL-EB; Bray et al., 2017) to predict energy fluxes and river temperature along a large lowland regulated river and, more specifically, a module within the model aimed at quantifying the response of river temperatures to perturbations in atmospheric variables. For the clear, flowing water that is usually present below dams, FLUVIAL-EB couples a spectral radiation balance model with turbulent energy fluxes, bed conduction, and a 1D hydraulic model applied over the longitudinal profile of a lowland river whose water depth and velocity vary with distance downstream. The dynamic component of the model accounts for the feedback between spatial and temporal variability in water temperature and changes in the atmospheric fluxes and conduction into or out of the streambed. The predicted water temperature is used to compute the latent, sensible, net longwave, bed conduction, and advective energy flux at every time step. Absorbed shortwave radiation is computed for every wavelength in the solar spectrum and then integrated across all wavelengths.

    The continuous component of the model interpolates between measurements made at meteorological stations at discrete times. Because the governing differential equation is instantaneous, the input meteorological variables must be available for any values of x and t. From hourly averages of the input data, we calculated instantaneous data by generating a cumulative sum, applying a smoothing spline, and then taking the derivative. This yields a continuous spatial and temporal field over the entire river based on hourly meteorological data and modeled steady-state hydraulic values under bankfull flow conditions, with temporal resolution up to 30 s and spatial resolution of 100 m along the river.

    The compressed model folder ('FLUVIAL-EB_BrayDozier_2023.zip') contains approximately 82 Matlab scripts and .mat files that together make up the entire FLUVIAL-EB model; all are compatible with Matlab version 2023b. The primary command-line function, used to run model simulations, is 'riverExplicitSoln.m'. To run the model for the San Joaquin River, CA, USA, you must install the following Matlab toolboxes: Computer Vision, Curve Fitting, Database, Image Processing, Mapping, Signal Processing, Statistics and Machine Learning, and the Raster Reprojection Toolbox (not on Mathworks; written by Jeff Dozier and attached below). For examples of command lines used in model simulations, see 'runmodel_baseline.m'.
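
    As a rough illustration of the hourly-to-instantaneous step described above (cumulative sum, smoothing spline, derivative), the sketch below uses csaps, fnder and fnval from the Curve Fitting Toolbox. The variable names, synthetic data and smoothing parameter are ours, not taken from the model code.

        t_hr  = (0:23)';                         % hourly time stamps [h]
        x_hr  = 20 + 5*sin(2*pi*(t_hr - 14)/24); % synthetic hourly-mean air temperature [degC]
        X_cum = cumsum(x_hr);                    % cumulative sum of the hourly means
        pp    = csaps(t_hr, X_cum, 0.999);       % smoothing spline through the cumulative series
        dpp   = fnder(pp);                       % its derivative recovers an instantaneous signal
        t_q   = (0:1/120:23)';                   % query times every 30 s
        x_q   = fnval(dpp, t_q);                 % continuous-in-time values for the solver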
