Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This data set is uploaded as supporting information for the publication entitled "A Comprehensive Tutorial on the SOM-RPM Toolbox for MATLAB".
The attached file 'case_study' includes the following:
X: Data from a ToF-SIMS hyperspectral image. A stage raster containing 960 x 800 pixels with 963 associated m/z peaks.
pk_lbls: The m/z label for each of the 963 m/z peaks.
mdl and mdl_masked: SOM-RPM models created using the SOM-RPM tutorial provided within the cited article.
Additional details about the datasets can be found in the published article.
V2 contains modified peak lists that show intensity-weighted m/z rather than the peak midpoint.
If you use this data set in your work, please cite our work as follows:
[LINK TO BE ADDED TO PAPER ONCE DOI RECEIVED]
Multibeam bathymetry data, gridded at ~100 m, acquired during the transit towards the Mid-Atlantic Ridge axis (13°20'N and 13°30'N oceanic detachments area) during the ODEMAR cruise (2013). Bathymetric data were acquired with the R/V Pourquoi pas? multibeam system (RESON SEABAT 7150). Data are provided as GeoTIFFs (WGS84).

ODM_transit_16Nov.tif
Extent: -27.8375538800000015,16.3530121699999995 : -26.3755538800000018,16.6860121699999979
Dimensions: X: 1463, Y: 334, Bands: 1
Data type: Float32 (thirty-two bit floating point); Driver: GTiff (GeoTIFF); Compression: PACKBITS
Band 1 statistics: minimum -4866.7373046875, maximum -4139.5639648438, mean -4612.4823849892, stddev 121.40234041923, valid 29.85%
Scale: 1; Offset: 0; AREA_OR_POINT=Area; Software: MATLAB 9.14, Mapping Toolbox 5.5
Origin: -27.8375538800000015,16.6860121699999979
Pixel size: 0.0009993164730006834189,-0.0009970059880239474058

ODM_transit_17Nov.tif
Extent: -32.4955601099999996,15.5934151199999995 : -27.8195601099999976,16.4454151199999998
Dimensions: X: 4677, Y: 853, Bands: 1
Data type: Float32; Driver: GTiff; Compression: PACKBITS
Band 1 statistics (approximate): minimum -5562.1850585938, maximum -4458.8168945312, mean -5144.8000932272, stddev 164.55246561258, valid 50.04%
Scale: 1; Offset: 0; AREA_OR_POINT=Area; Software: MATLAB 9.14, Mapping Toolbox 5.5
Origin: -32.4955601099999996,16.4454151199999998
Pixel size: 0.0009997861877271759391,-0.0009988276670574447458

ODM_transit_18Nov.tif
Extent: -36.9279732571747630,14.8264479347363469 : -32.4729732571747647,15.7054479347363465
Dimensions: X: 4456, Y: 880, Bands: 1
Data type: Float32; Driver: GTiff; Compression: PACKBITS
Band 1 statistics (approximate): minimum -5946.1611328125, maximum -5171.58203125, mean -5712.1738558372, stddev 139.45706418798, valid 7.565%
Scale: 1; Offset: 0; AREA_OR_POINT=Area; Software: MATLAB 9.14, Mapping Toolbox 5.5
Origin: -36.9279732571747630,15.7054479347363465
Pixel size: 0.0009997755834829438748,-0.0009988636363636358394

ODM_transit_19Nov.tif
Extent: -41.3188024399999989,14.0843680599999992 : -36.8828024399999990,14.9493680599999994
Dimensions: X: 4437, Y: 866, Bands: 1
Data type: Float32; Driver: GTiff; Compression: PACKBITS
Band 1 statistics (approximate): minimum -5426.123046875, maximum -4540.5317382812, mean -5176.2961468757, stddev 183.14492125461, valid 7.674%
Scale: 1; Offset: 0; AREA_OR_POINT=Area; Software: MATLAB 9.14, Mapping Toolbox 5.5
Origin: -41.3188024399999989,14.9493680599999994
Pixel size: 0.0009997746224926751767,-0.0009988452655889147342

ODM_transit_20Nov.tif
Extent: -44.9630377200000027,13.5018919999999998 : -41.3030377200000061,14.1828919999999989
Dimensions: X: 3661, Y: 682, Bands: 1
Data type: Float32; Driver: GTiff; Compression: PACKBITS
Band 1 statistics (approximate): minimum -4821.65625, maximum -3173.7924804688, mean -4347.3027374804, stddev 276.6463937856, valid 42.31%
Scale: 1; Offset: 0; AREA_OR_POINT=Area; Software: MATLAB 9.14, Mapping Toolbox 5.5
Origin: -44.9630377200000027,14.1828919999999989
Pixel size: 0.0009997268505872703662,-0.000998533724340174662

Important Note: This submission was initially submitted to the SEA scieNtific Open data Edition (SEANOE) publication service and received the recorded DOI. The metadata elements have been further processed (refined) in the EMODnet Ingestion Service in order to conform with the Data Submission Service specifications.
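The Origin and Pixel Size entries define the affine pixel-to-coordinate mapping of each GeoTIFF. As a minimal illustration (in Python, using the values listed for ODM_transit_16Nov.tif), stepping the full grid width and height from the origin should recover the second corner of the stated extent:

```python
# Map a pixel index (row, col) to its WGS84 coordinate using the GeoTIFF
# origin (upper-left corner) and pixel size, as listed for ODM_transit_16Nov.tif.
origin_x, origin_y = -27.83755388, 16.68601217
px_w, px_h = 0.0009993164730006834, -0.0009970059880239474  # degrees per pixel

def pixel_to_lonlat(row, col):
    """Return (lon, lat) of the upper-left corner of pixel (row, col)."""
    lon = origin_x + col * px_w
    lat = origin_y + row * px_h   # px_h is negative: rows increase southwards
    return lon, lat

# The corner after 1463 columns and 334 rows (the grid dimensions) should
# land on the second corner of the extent quoted above.
lon, lat = pixel_to_lonlat(334, 1463)
```

Any GDAL-compatible reader (or MATLAB's Mapping Toolbox, used to produce these files) exposes the same six geotransform numbers.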
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
How to set the input parameters: an example.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2021 Competition.
Contents:
IPIN2021_Track03_TechnicalAnnex_V1-02.pdf: Technical annex describing the competition
01-Logfiles: This folder contains a subfolder with the 105 training logfiles (80 of them single-floor indoors, 10 in outdoor areas, 10 in the indoor auditorium with floor transitions, and 5 in floor-transition zones), a subfolder with the 20 validation logfiles, and a subfolder with the 3 blind evaluation logfiles as provided to competitors.
02-Supplementary_Materials: This folder contains the MATLAB/Octave parser, the raster maps, and the files for the MATLAB tools and the trajectory visualization.
03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile of the positioning error on the 82 evaluation points. It requires the MATLAB Mapping Toolbox. The ground truth is also provided as 3 CSV files. Since results must be reported at a 2 Hz frequency starting from app timestamp 0, the GT files include the closest timestamp matching the timing provided by competitors for the 3 evaluation logfiles. Samples of reported estimations and the corresponding results are included.
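The competition metric described above, the 75th percentile of the point-wise positioning error, can be sketched in a few lines. This is Python with made-up error values and one common percentile convention; the official MATLAB scripts in 03-Evaluation remain the authoritative definition:

```python
import math

def percentile_75(errors):
    """75th percentile of positioning errors (meters), using linear
    interpolation between closest ranks -- one common convention; the
    competition's MATLAB evaluation scripts define the official behaviour."""
    xs = sorted(errors)
    rank = 0.75 * (len(xs) - 1)          # zero-based fractional rank
    lo, hi = math.floor(rank), math.ceil(rank)
    frac = rank - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

# Hypothetical per-evaluation-point errors in meters:
score = percentile_75([1.0, 2.0, 3.0, 4.0, 10.0])
```

Note how the 75th percentile is robust to the single large outlier (10.0 m), which is precisely why the competition uses it rather than the mean.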
Please cite the following works when using the datasets included in this package:
Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2021 Competition Track 3 (Smartphone-based, off-site). http://dx.doi.org/10.5281/zenodo.5948678
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the second column, instantaneous effects are neglected for both targets and conditioning variables. In the third column, instantaneous effects are set for some drivers and the respective targets. For example, when the target is 1, instantaneous effects are taken into account for driver 2 (first two rows, right column, parameter idDrivers) and for conditioning variable 3 (first row, right column, parameter idOtherLagZero). The example shows the parameters required to define the methods for an experiment on 5 variables.
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage and the neural network-based approach takes advantage of this high density time series of information by estimating depth via one of four NNDR methods described in the manuscript: 1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR. 2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map. 3. 
NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map. 4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map. MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m and the figure included on this landing page provides a flow chart illustrating the four different neural network-based depth retrieval methods. As examples of the resulting models, MATLAB *.mat data files containing the best-performing neural network model for each site are provided below, along with a file that lists the PlanetScope image identifiers for the images that were used for each site. To develop and test this new NNDR approach, the method was applied to satellite images from three rivers across the U.S.: the American, Colorado, and Potomac. For each site, field measurements of water depth available through other data releases were used for training and validation. The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: X_mean-spec.tif, X_mean-depth.tif, X_NN-depth.tif, and X-single-image.tif, where X denotes the site name. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
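Mean-spec and Mean-depth differ only in where the temporal averaging happens relative to depth retrieval. A schematic sketch follows, with a toy stand-in for the depth-retrieval step (the real NNDRs are trained neural networks, and the linear mapping here is purely hypothetical):

```python
def mean_over_time(series):
    """Average a time series of per-pixel values (list of per-image lists)."""
    n = len(series)
    return [sum(vals) / n for vals in zip(*series)]

def nndr(image):
    """Stand-in for a trained depth-retrieval network: a hypothetical
    linear reflectance-to-depth mapping, for illustration only."""
    return [0.5 * v + 1.0 for v in image]

# Toy time series of three single-band images, four pixels each.
images = [[1.0, 2.0, 3.0, 4.0],
          [2.0, 3.0, 4.0, 5.0],
          [3.0, 4.0, 5.0, 6.0]]

# 1. Mean-spec: average the images first, then retrieve depth once.
mean_spec = nndr(mean_over_time(images))

# 2. Mean-depth: retrieve depth per image, then average the depth maps.
mean_depth = mean_over_time([nndr(img) for img in images])
```

With a linear stand-in the two orderings coincide exactly; with a real (nonlinear) NNDR they generally differ, which is part of what the comparison in the manuscript probes.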
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage and the neural network-based approach takes advantage of this high density time series of information by estimating depth via one of four NNDR methods described in the manuscript: 1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR. 2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map. 3. 
NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map. 4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map. MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m available on the main landing page for the data release of which this is a child item, along with a flow chart illustrating the four different neural network-based depth retrieval methods. To develop and test this new NNDR approach, the method was applied to satellite images from the Colorado River near Lees Ferry, AZ, acquired in March and April of 2021. Field measurements of water depth available through another data release (Legleiter, C.J., Debenedetto, G.P., and Forbes, B.T., 2022, Field measurements of water depth from the Colorado River near Lees Ferry, AZ, March 16-18, 2021: U.S. Geological Survey data release, https://doi.org/10.5066/P9HZL7BZ) were used for training and validation. The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: Colorado_mean-spec.tif, Colorado_mean-depth.tif, Colorado_NN-depth.tif, and Colorado-single-image.tif. 
In addition, to assess the robustness of the Mean-spec and NN-depth methods to the introduction of a large pulse of sediment by a flood event that occurred partway through the image time series, depth maps from before and after the flood are provided in the files Colorado_Mean-spec_after_flood.tif, Colorado_Mean-spec_before_flood.tif, Colorado_NN-depth_after_flood.tif, and Colorado_NN-depth_before_flood.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage and the neural network-based approach takes advantage of this high density time series of information by estimating depth via one of four NNDR methods described in the manuscript: 1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR. 2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map. 3. 
NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map. 4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map. MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m available on the main landing page for the data release of which this is a child item, along with a flow chart illustrating the four different neural network-based depth retrieval methods. To develop and test this new NNDR approach, the method was applied to satellite images from the American River near Fair Oaks, CA, acquired in October 2020. Field measurements of water depth available through another data release (Legleiter, C.J., and Harrison, L.R., 2022, Field measurements of water depth from the American River near Fair Oaks, CA, October 19-21, 2020: U.S. Geological Survey data release, https://doi.org/10.5066/P92PNWE5) were used for training and validation. The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: American_mean-spec.tif, American_mean-depth.tif, American_NN-depth.tif, and American-single-image.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
This package contains the datasets and supplementary materials used in the IPIN 2022 Competition.
Contents:
Track-3_TA-2022.pdf: Technical annex describing the competition (Version 2)
01-Logfiles: This folder contains a subfolder with the 89 training trials, a subfolder with the 24 testing trials (validation), and a subfolder with the 3 blind scoring trials (test) as provided to competitors.
02-Supplementary_Materials: This folder contains the MATLAB/Octave parser, the raster maps, and the files for the MATLAB tools and the trajectory visualization.
03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile on the 31|61|61 evaluation points. It requires the MATLAB Mapping Toolbox. The ground truth is also provided as 3 CSV files. Since results must be reported at a 2 Hz frequency starting from app timestamp 0, the GT files include the closest timestamp matching the timing provided by competitors for the 3 evaluation logfiles. Samples of reported estimations and the corresponding results are included.
Please cite the following works when using the datasets included in this package:
Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2022 Competition Track 3 (Smartphone-based, off-site), Zenodo 2022. http://dx.doi.org/10.5281/zenodo.7612915
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The dataset represents a collection of total viewsheds (cf. Llobera et al. 2010) created for the territory of Bohemia (Czech Republic; ca. 57,000 km2). The total-viewshed calculation was based on the R2 algorithm (see Franklin and Ray 1994) and uses the viewshed function from MATLAB’s Mapping Toolbox that was significantly optimized by the authors for large-scale parallel computations. To reduce the computational time, we calculated single viewsheds using every fourth cell as the observing point (cf. Rášová 2017).
The Digital Terrain Model of the Czech Republic of the 5th Generation (DMR 5G) was used as input for the calculations. Prior to the calculation, the input DEM was cleared of modern landscape elements (e.g. embankments of railways and roads, quarries, etc.; for details see Novák – Pružinec 2022). Eight total visibility models of the territory of Bohemia were constructed using the IT4Innovations research infrastructure (https://www.it4i.cz/). Both the observer and the target heights were set at 1.5 m. The viewsheds differ in two parameters: the visibility radius and the resolution of the input grid. As the basic radius, we have set 0.5 km and the cell size of 5 m; further layers are conceived as multiples of these parameters: 1 km/10 m, 2 km/20 m, 4 km/40 m, 8 km/80 m, 16 km/160 m and 32 km/320 m. The only exception is the model with a radius of 64 km, where we preserved the cell size of the preceding iteration (320 m). In the individual models, the visibility values are indicated in percentages corresponding to the portion of visible cells in the given radius (0–100%; rounded up to the next complete value).
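The percentage encoding of the rasters can be illustrated with a toy visibility count (the cell numbers below are hypothetical; the actual values come from running the R2 viewshed algorithm over the DMR 5G grid):

```python
import math

def visibility_percent(visible_cells, total_cells):
    """Percentage of visible cells within the radius, rounded up to the
    next whole value, matching the 0-100% encoding of the rasters."""
    return math.ceil(100 * visible_cells / total_cells)

# e.g. 317 of 1257 cells within the radius are visible from the observer
pct = visibility_percent(317, 1257)
```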
Filenames of individual rasters correspond to the parameters set above. For further details see:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Accurate anatomical localization of intracranial electrodes is important for identifying the seizure foci in patients with epilepsy and for interpreting effects from cognitive studies employing intracranial electroencephalography. Localization is typically performed by coregistering postimplant computed tomography (CT) with preoperative magnetic resonance imaging (MRI). Electrodes are then detected in the CT, and the corresponding brain region is identified using the MRI. Many existing software packages for electrode localization chain together separate preexisting programs or rely on command line instructions to perform the various localization steps, making them difficult to install and operate for a typical user. Further, many packages provide solutions for some, but not all, of the steps needed for confident localization. We have developed software, Locate electrodes Graphical User Interface (LeGUI), that consists of a single interface to perform all steps needed to localize both surface and depth/penetrating intracranial electrodes, including coregistration of the CT to MRI, normalization of the MRI to the Montreal Neurological Institute template, automated electrode detection for multiple types of electrodes, electrode spacing correction and projection to the brain surface, electrode labeling, and anatomical targeting. The software is written in MATLAB, core image processing is performed using the Statistical Parametric Mapping toolbox, and standalone executable binaries are available for Windows, Mac, and Linux platforms. LeGUI was tested and validated on 51 datasets from two universities. The total user and computational time required to process a single dataset was approximately 1 h. Automatic electrode detection correctly identified 4362 of 4695 surface and depth electrodes with only 71 false positives. Anatomical targeting was verified by comparing electrode locations from LeGUI to locations that were assigned by an experienced neuroanatomist. 
LeGUI showed a 94% match with the 482 neuroanatomist-assigned locations. LeGUI combines all the features needed for fast and accurate anatomical localization of intracranial electrodes into a single interface, making it a valuable tool for intracranial electrophysiology research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2023 Competition.
Contents:
IPIN2023_Track4_CallForCompetition_v2.2.pdf: Call for competition including the technical annex describing the competition
01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
- IPIN2023_T4_xxx.txt: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS, and POSI frames
- IPIN2023_T4_xxx_gnss_ephem.nav: GNSS ephemeris file for trajectory estimation.
02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 12 hours that can be used for sensor bias estimation (Allan variance), and a logfile of about 1 minute that can be used to calibrate the magnetometer (magnetometer calibration).
03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile on all evaluation points. It requires the MATLAB Mapping Toolbox. We also provide the ground truth of the 2 scoring trials as 2 MAT and KML files. Samples of reported estimations and the corresponding results are included. Just run script_Eval_IPIN2023.mat.
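The frame-per-line structure of the logfiles (ACCE, ROTA, MAGN, PRES, ... frames) can be read with a few lines of code. The sketch below is Python rather than the provided MATLAB/Octave tooling, assumes the usual IPIN layout of one semicolon-separated frame per line with the frame type first, and uses invented sample values; check the parser in 02-Supplementary_Materials for the authoritative format:

```python
from collections import defaultdict

def parse_logfile(lines):
    """Group logfile rows by frame type (ACCE, ROTA, MAGN, ...).

    Assumes semicolon-separated frames with the frame type first and
    numeric fields after, and '%' marking header/comment rows.
    """
    frames = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line or line.startswith('%'):
            continue
        fields = line.split(';')
        frames[fields[0]].append([float(f) for f in fields[1:]])
    return frames

# Toy excerpt with hypothetical values:
log = ["% header line",
       "ACCE;0.01;0.01;0.12;-0.03;9.78",
       "PRES;0.02;0.02;1013.25"]
frames = parse_logfile(log)
```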
We provide additional information on the competition at: https://evaal.aaloa.org/2023/call-for-competition
Citation Policy: Please cite the following works when using the datasets included in this package:
Ortiz, M.; Zhu, N.; Ziyou, L.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2023 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2023. https://doi.org/10.5281/zenodo.8399764
Check the citation policy at: https://doi.org/10.5281/zenodo.8399764
Contact: For any further questions about the database and this competition track, please contact:
Miguel Ortiz (miguel.ortiz@univ-eiffel.fr) at the University Gustave Eiffel, France.
Ni Zhu (ni.zhu@univ-eiffel.fr) at the University Gustave Eiffel, France.
Acknowledgements: We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer IIS, as well as Joaquín Torres-Sospedra from Universidade do Minho and Francesco Potortì and Antonino Crivello from ISTI-CNR Pisa, for their support in collecting the datasets.
We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days.
The Fields2Benchmark dataset is a collection of 350 agricultural fields in vector format, manually selected to test agricultural coverage path planning algorithms.
The files in this dataset are organized into two folders:
imgs/: contains the satellite images of the fields.
wkt/: contains the vector data of the fields.
A file with the same name in both folders corresponds to the same field. The first two letters in the name of the field indicate the country the field belongs to.
The fields were extracted from the EuroCrops dataset (specifically from the Netherlands, Estonia, and Lithuania), each field was transformed into its own file, and the files were converted to Well-Known Text (WKT). Satellite images of the fields are also provided; they were created using the Mapping Toolbox from MATLAB.
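As a sketch of how the WKT files can be consumed downstream, the snippet below parses a simple polygon and computes its area with the shoelace formula. It is pure Python with a made-up square field in projected coordinates; a real pipeline would use a geometry library such as shapely and the EuroCrops coordinate reference system:

```python
import re

def parse_wkt_polygon(wkt):
    """Extract the outer ring of a simple 'POLYGON ((x y, x y, ...))' string.

    A minimal reader for illustration only; it ignores holes and
    multi-polygons.
    """
    coords = re.search(r"\(\(([^)]*)\)\)", wkt).group(1)
    return [tuple(map(float, pt.split())) for pt in coords.split(",")]

def shoelace_area(ring):
    """Unsigned polygon area via the shoelace formula (ring must be closed)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical 100 m x 100 m square field:
ring = parse_wkt_polygon("POLYGON ((0 0, 100 0, 100 100, 0 100, 0 0))")
area = shoelace_area(ring)
```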
Funding: This dataset is provided as part of the project "Fields2Cover: Robust and efficient coverage paths for autonomous agricultural vehicles" (project number ENPPS.LIFT.019.019 of the research programme Science PPP Fund for the top sectors), which is (partly) financed by the Dutch Research Council (NWO).
@article{Mier_Fields2Cover_An_open-source_2023,
  author  = {Mier, Gonzalo and Valente, João and de Bruin, Sytze},
  journal = {IEEE Robotics and Automation Letters},
  title   = {Fields2Cover: An Open-Source Coverage Path Planning Library for Unmanned Agricultural Vehicles},
  year    = {2023},
  volume  = {8},
  number  = {4},
  pages   = {2166-2172},
  doi     = {10.1109/LRA.2023.3248439}
}
This package contains the datasets and supplementary materials used in the IPIN 2023 Competition.
Contents:
Track-3_TA-2023.pdf: Technical annex describing the competition (Version 2)
01-Logfiles: This folder contains a subfolder with the 54 training trials, a subfolder with the 4 testing trials (validation), and a subfolder with the 2 blind scoring trials (test) as provided to competitors.
02-Supplementary_Materials: This folder contains the MATLAB/Octave parser, the raster maps, and the files for the MATLAB tools and the trajectory visualization.
03-Evaluation: This folder contains the scripts we used to calculate the competition metric, the 75th percentile on the 69 evaluation points. It requires the MATLAB Mapping Toolbox. We also provide the ground truth as 2 CSV files. Samples of reported estimations and the corresponding results are included.
We provide additional information on the competition at: https://evaal.aaloa.org/2023/call-for-competition
Citation Policy: Please cite the following works when using the datasets included in this package:
Torres-Sospedra, J.; et al. Datasets and Supporting Materials for the IPIN 2023 Competition Track 3 (Smartphone-based, off-site), Zenodo 2023. http://dx.doi.org/10.5281/zenodo.8362205
Check the updated citation policy at: http://dx.doi.org/10.5281/zenodo.8362205
Contact: For any further questions about the database and this competition track, please contact:
Joaquín Torres-Sospedra, Centro ALGORITMI, Universidade do Minho, Portugal. info@jtorr.es - jtorres@algoritmi.uminho.pt
Antonio R. Jiménez, Centre of Automation and Robotics (CAR)-CSIC/UPM, Spain. antonio.jimenez@csic.es
Antoni Pérez-Navarro, Faculty of Computer Sciences, Multimedia and Telecommunication, Universitat Oberta de Catalunya, Barcelona, Spain. aperezn@uoc.edu
Acknowledgements: We thank Maximilian Stahlke and Christopher Mutschler at Fraunhofer IIS, as well as Miguel Ortiz and Ziyou Li at Université Gustave Eiffel, for their invaluable support in collecting the datasets.
And last but certainly not least, we thank Antonino Crivello and Francesco Potortì for their huge effort in georeferencing the competition venue and evaluation points. We extend our appreciation to the staff at the Museum for Industrial Culture (Museum Industriekultur) for their unwavering patience and invaluable support throughout our collection days. We are also grateful to Francesco Potortì, the ISTI-CNR team (Paolo, Michele & Filippo), and the Fraunhofer IIS team (Chris, Tobi, Max, ...) for their invaluable commitment to organizing and promoting the IPIN competition. This work and competition belong to the IPIN 2023 Conference in Nuremberg (Germany). Parts of this work received financial support from the following projects and grants:
ORIENTATE (H2020-MSCA-IF-2020, Grant Agreement 101023072) GeoLibero (from CYTED) INDRI (MICINN, ref. PID2021-122642OB-C42, PID2021-122642OB-C43, PID2021-122642OB-C44, MCIU/AEI/FEDER UE) MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE) TARSIUS (TIN2015-71564-C4-2-R, MINECO/FEDER) SmartLoc(CSIC-PIE Ref.201450E011) LORIS (TIN2012-38080-C04-04)
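The competition metric named above, the 75th percentile of the positioning error over the evaluation points, is simple to reproduce outside Matlab. A minimal Python sketch follows; the function name is hypothetical, and the official scripts may additionally penalise floor errors, so this covers only the horizontal component:

```python
import numpy as np

def competition_score(est_xy, gt_xy):
    """75th percentile of the 2-D positioning error at the evaluation points.

    est_xy, gt_xy: (N, 2) arrays of estimated and ground-truth coordinates,
    already matched row-by-row at the evaluation points.
    """
    errors = np.linalg.norm(np.asarray(est_xy) - np.asarray(gt_xy), axis=1)
    return float(np.percentile(errors, 75))

# Toy example: four evaluation points with errors of 1, 2, 3 and 4 m.
est = [[1.0, 0.0], [0.0, 2.0], [3.0, 0.0], [0.0, 4.0]]
gt = [[0.0, 0.0]] * 4
print(competition_score(est, gt))  # 3.25
```

Because the score is a single quantile, a few large outliers matter less than consistent accuracy across most evaluation points.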
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains the datasets and supplementary materials used in the IPIN 2022 Competition.
Contents:
- IPIN2022_Track4_CallForCompetition_v1.3.pdf: Call for competition, including the technical annex describing the competition
- 01-Logfiles: This folder contains 2 files for each trial (Testing, Scoring01, Scoring02):
  - IPIN2022_T4_xxx.txt: data file containing ACCE, ROTA, MAGN, PRES, TEMP, GSBS, GOBS and POSI frames
  - IPIN2022_T4_xxx_yyy_ephem.nav: ephemeris file for trajectory estimation
- 02-Supplementary_Materials: This folder contains the datasheet files of the different sensors, a static logfile of about 12 hours that can be used for sensor bias estimation (Allan variance), and a logfile of about 1 minute that can be used to calibrate the magnetometer (magnetometer calibration).
- 03-Evaluation: This folder contains the scripts used to calculate the competition metric, the 75th percentile over all evaluation points. It requires the Matlab Mapping Toolbox. We also provide the ground truth of the 2 scoring trials as CSV files (at the full 60 Hz rate and at the evaluation points only), along with samples of reported estimations and the corresponding results. Just run the script script_Eval_IPIN2022.mat.
We provide additional information on the competition at: https://evaal.aaloa.org/2022/call-for-competitions
Citation Policy:
Please cite the following works when using the datasets included in this package:
Ortiz, M.; Zhu, N.; Li, Z.; Renaudin, V. Datasets and Supporting Materials for the IPIN 2022 Competition Track 4 (Foot-Mounted IMU based Positioning, offsite-online), Zenodo 2022. https://doi.org/10.5281/zenodo.10497364
Check the citation policy at: https://doi.org/10.5281/zenodo.10497364
Contact:
For any further questions about the database and this competition track, please contact:
Miguel Ortiz (miguel.ortiz@univ-eiffel.fr), Université Gustave Eiffel, France.
Ni Zhu (ni.zhu@univ-eiffel.fr), Université Gustave Eiffel, France.
Acknowledgements:
We thank Frederic Le-Bourhis and Aravind Ramseh from Univ-Eiffel for their support in collecting the datasets.
We extend our appreciation to the staff at the Nantes Central station for their invaluable support throughout our collection days.
The AA4528 corridor dataset contains the Matlab scripts for the corridor algorithm, the ice shelf locations, and the file extensions. The corridor algorithm is designed to calculate the parts of the ocean that can directly propagate swell onto an exposed ice shelf. It achieves this as an expansion of the coastal exposure algorithm (Reid and Massom, 2021), with the details of the algorithm's inner workings presented in the paper attached with this dataset. Corridors can be used to calculate the frequency with which swell reaches an ice shelf each year, and can be combined with hindcasts to extract the wave data relevant to an ice shelf for modelling or data analysis purposes.
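The core idea, finding the open-ocean cells from which swell can propagate directly to the shelf front, can be pictured as a flood fill over a sea ice concentration grid. The sketch below is a simplified stand-in for the published Matlab implementation; the 15% open-water threshold and 4-connectivity are illustrative assumptions, not the published settings:

```python
import numpy as np
from collections import deque

def corridor_mask(sic, shelf_cells, threshold=0.15):
    """Flood-fill sketch of a swell corridor.

    sic: 2-D sea ice concentration grid (0-1).
    shelf_cells: list of (row, col) cells at the shelf front.
    A cell belongs to the corridor if it is open water (sic < threshold)
    and is 4-connected to the shelf front through open water.
    """
    open_water = sic < threshold
    corridor = np.zeros_like(open_water, dtype=bool)
    queue = deque(c for c in shelf_cells if open_water[c])
    for c in queue:
        corridor[c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < sic.shape[0] and 0 <= nc < sic.shape[1]
                    and open_water[nr, nc] and not corridor[nr, nc]):
                corridor[nr, nc] = True
                queue.append((nr, nc))
    return corridor
```

Summing the mask (times the cell area) gives a corridor area for that day, which is the kind of per-day quantity the dataset aggregates.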
The corridor algorithm requires sea ice concentration data, which was provided by the NSIDC Sea Ice Concentrations from Nimbus-7 SMMR and DMSP SSM/I-SSMIS Passive Microwave Data, Version 1 (https://nsidc.org/data/nsidc-0051). Ice shelf coordinates were extracted from the gfsc_25s.msk file that comes with the sea ice data, with the aid of the Antarctic Mapping Toolbox (Greene et al., 2017), and were attached separately to make editing more consistent. As the algorithm is designed to use daily sea ice data from the 1st of January 1979 onwards, the sea ice files for the off-days (from the period when sea ice data was acquired every second day) are also attached. The file extensions script was also included to switch through the off-day files and to handle the changes that occur in the NSIDC file format.
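The off-day handling amounts to mapping each requested date onto the nearest date with data during the every-second-day (SMMR) era. The actual file extensions script is in Matlab; the Python sketch below only illustrates the fallback logic, and the cutover date and epoch used here are illustrative assumptions, not values taken from the dataset:

```python
from datetime import date, timedelta

# Assumed end of the every-second-day acquisition era (illustrative only).
EVERY_OTHER_DAY_UNTIL = date(1987, 8, 20)

def acquisition_day(requested, epoch=date(1979, 1, 1)):
    """Map a requested day to the nearest day with sea ice data."""
    if requested > EVERY_OTHER_DAY_UNTIL:
        return requested                      # daily data available
    offset = (requested - epoch).days
    if offset % 2 == 0:
        return requested                      # falls on an acquisition day
    return requested - timedelta(days=1)      # fall back to the previous day
```

A filename builder would then format `acquisition_day(...)` into whatever naming and extension convention the relevant NSIDC era uses.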
The ocean hindcast that the corridor algorithm was built around is the CAWCR Wave Hindcast – Aggregated Collection (https://data.csiro.au/collections/collection/CI39819v005). The corridor algorithm uses daily data, to be consistent with the sea ice data, and calculates the maximum significant wave height for each cell present in the hindcast. The data extracted were the maximum daily significant wave height recorded in the corridor and the wave direction at that cell. Data were taken from 01/09/1979 to 31/08/2019, giving 40 years of data, which accounts for the seasonality of corridors.
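The per-day extraction step described above, finding the corridor cell with the largest significant wave height and reading off its direction, can be sketched as follows. The function and variable names are illustrative and do not match the CAWCR file layout:

```python
import numpy as np

def daily_corridor_max(hs, direction, corridor):
    """Daily maximum significant wave height inside the corridor, plus
    the wave direction at the cell where that maximum occurred.

    hs, direction: (days, rows, cols) hindcast arrays.
    corridor: boolean (rows, cols) mask of corridor cells.
    """
    days = hs.shape[0]
    max_hs = np.empty(days)
    max_dir = np.empty(days)
    for d in range(days):
        vals = np.where(corridor, hs[d], -np.inf)  # ignore non-corridor cells
        idx = np.unravel_index(np.argmax(vals), vals.shape)
        max_hs[d] = hs[d][idx]
        max_dir[d] = direction[d][idx]
    return max_hs, max_dir
```

Counting the days where `max_hs >= 6.0` then gives the large-wave (LW) statistic reported in the spreadsheet.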
The attached Excel spreadsheet contains the relevant corridor data for each ice shelf with an area greater than 500 km^2. Area was determined from either the supplementary files of Rignot et al. (2013) or the ice shelf areas from the Antarctic Mapping Toolbox (Greene et al., 2017). Angle1 and Angle2 are the angles used in the direction filter; a comment in the filter explains how it handles the case where Angle1 is greater than Angle2 and vice versa. Ac is the corridor area; PA is the potential corridor area (i.e. the absolute maximum it could be with the settings we used); Ac_max is the maximum corridor area; D_cor is the number of days on which corridors were present; Hs is significant wave height; and LW (large waves) counts the days per year on which significant wave height was greater than or equal to 6 m (Morim et al., 2021).
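The Angle1/Angle2 case distinction mentioned above amounts to a wraparound test on compass bearings. A minimal sketch, assuming degrees in [0, 360); the published Matlab filter may use a different convention:

```python
def in_direction_window(bearing, angle1, angle2):
    """True if a wave direction (degrees) lies inside the filter window.

    When angle1 <= angle2 the window is the plain interval [angle1, angle2];
    when angle1 > angle2 the window wraps through north, e.g. 300 deg to
    60 deg accepts northerly directions on either side of 0/360.
    """
    bearing %= 360.0
    if angle1 <= angle2:
        return angle1 <= bearing <= angle2
    return bearing >= angle1 or bearing <= angle2
```

For example, a window of (300, 60) accepts a 30 deg bearing but rejects 180 deg, which is the behaviour the in-filter comment documents.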
Refs:
Greene, C. A., Gwyther, D. E. and Blankenship, D. D. (2017) 'Antarctic Mapping Tools for MATLAB', Computers and Geosciences, 104, pp. 151–157. doi: 10.1016/j.cageo.2016.08.003.
Morim, J. et al. (2021) 'Global-scale changes to extreme ocean wave events due to anthropogenic warming', Environmental Research Letters, 16(7), p. 074056. doi: 10.1088/1748-9326/ac1013.
Reid, P. and Massom, R. (2021) 'Change and Variability in Antarctic Coastal Exposure, 1979–2020'. Preprint (https://assets.researchsquare.com/files/rs-636839/v1/02002d0b-2c6c-402b-8e14-7f77075d8f90.pdf?c=1631885736).
Rignot, E. et al. (2013) 'Ice-shelf melting around Antarctica', Science, 341(6143), pp. 266–270. doi: 10.1126/science.1235798.