This dataset contains the KITTI Object Detection Benchmark, created by Andreas Geiger, Philipp Lenz and Raquel Urtasun and presented in the 2012 CVPR paper "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite". It covers the object detection part of the different datasets they published for autonomous driving, and contains a set of images with their bounding-box labels and Velodyne point clouds. For more information, visit the website on which they published the data (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d).
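Each Velodyne scan in the KITTI benchmark is distributed as a flat binary file of float32 records. The sketch below assumes the standard (x, y, z, reflectance) layout and uses a hypothetical file path; it shows one way to load a scan with NumPy.

```python
import numpy as np

def load_velodyne_scan(path):
    """Read a KITTI Velodyne .bin scan, assumed to hold float32 (x, y, z, reflectance) records."""
    points = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    return points[:, :3], points[:, 3]

# Hypothetical path inside an extracted KITTI object-detection split.
xyz, reflectance = load_velodyne_scan("training/velodyne/000000.bin")
print(xyz.shape, reflectance.min(), reflectance.max())
```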
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This .las file contains sample LiDAR point cloud data collected by the National Ecological Observatory Network's (NEON) Airborne Observation Platform. The .las format is a commonly used file format for storing LiDAR point cloud data. This teaching dataset is used for several tutorials on the NEON website (neonscience.org). The dataset is for educational purposes; data for research purposes can be obtained from the NEON Data Portal (data.neonscience.org).
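As a quick orientation for the tutorials, a .las file like this one can be inspected with any general-purpose LAS reader; the sketch below uses the laspy library (a tooling choice, not a NEON requirement) and a hypothetical file name.

```python
import numpy as np
import laspy  # pip install laspy

# Hypothetical file name; substitute the NEON sample .las file you downloaded.
las = laspy.read("NEON_sample.las")

print("points:", las.header.point_count)
print("point format:", las.header.point_format.id)

# Scaled x, y, z coordinates, copied into plain NumPy arrays.
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
print("elevation range:", z.min(), "to", z.max())
```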
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A 3-D image sensor.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The proposed dataset
3D point cloud representing all physical features (e.g. buildings, trees and terrain) across the City of Melbourne. The data has been encoded into a .las file format containing geospatial coordinates and RGB values for each point. The download is a zip file containing compressed .las files for tiles across the city area.
The geospatial data has been captured in the Map Grid of Australia (MGA) Zone 55 projection, which is reflected in the xyz coordinates within each .las file.
Also included are RGB (Red, Green, Blue) attributes to indicate the colour of each point.
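For example, the xyz coordinates and RGB attributes of one tile can be pulled into arrays as in the sketch below. The laspy reader and the tile name are assumptions; note that LAS colour channels are stored as 16-bit values, so they may need rescaling for display.

```python
import numpy as np
import laspy  # pip install laspy

# Hypothetical tile name; use any .las tile extracted from the downloaded zip.
las = laspy.read("melbourne_tile.las")

# MGA Zone 55 eastings/northings and AHD heights.
xyz = np.column_stack((las.x, las.y, las.z))

# Colour attributes; LAS stores 16-bit channels, scaled here to 0-1 for display.
rgb = np.column_stack((las.red, las.green, las.blue)) / 65535.0
print(xyz.shape, rgb[:3])
```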
Capture Information
- Capture Date: May 2018
- Capture Pixel Size: 7.5 cm ground sample distance
- Map Projection: MGA Zone 55 (MGA55)
- Vertical Datum: Australian Height Datum (AHD)
- Spatial Accuracy (XYZ): Supplied survey control used for control (Madigan Surveying) – 25 cm absolute accuracy
Limitations:
While every effort is made to provide the data as accurately as possible, the content may not be free from errors, omissions or defects.
Sample Data:
For an interactive sample of the data please see the link below.
https://cityofmelbourne.maps.arcgis.com/apps/webappviewer3d/index.html?id=b3dc1147ceda46ffb8229117a2dac56d
Download:
A zip file containing the .las files representing tiles of point cloud data across City of Melbourne area.
Download Point Cloud Data (4GB)
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains high-resolution point cloud data of nine reinforced concrete bridges located in rural areas of Japan, collected in December 2023 using the Matterport Pro3 terrestrial laser scanner. The scanner features a 360° horizontal field of view (FOV) and a 295° vertical FOV, operating with a 904 nm wavelength laser beam. It achieves a measurement accuracy of ±20 mm at a distance of 10 m and captures up to 100,000 points per second.
Key characteristics of the dataset:
- Data Format: LAS
- Coordinate System: Local, without georeferencing
- Resolution: Coordinate scale value of 1 mm
This dataset was created to support research on automated dimension estimation of bridge components using semantic segmentation and geometric analysis. It can be utilized by researchers and practitioners in structural engineering, computer vision, and infrastructure management for tasks such as semantic segmentation, structural analysis, and digital twin development.
This data collection of the 3D Elevation Program (3DEP) consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected, with the original spatial reference and units preserved. These data may have been used as the source of updates to the 1/3-arcsecond, 1-arcsecond, and 2-arcsecond seamless 3DEP Digital Elevation Models (DEMs). The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Lidar (Light detection and ranging) discrete-return point cloud data are available in LAZ format. The LAZ format is a lossless compressed version of the American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format. Point cloud data can be converted from LAZ to LAS or LAS to LAZ without the loss of any information. Either format stores 3-dimensional point cloud data and point attributes along with header information and variable length records specific to the data. Millions of data points are stored as a 3-dimensional data cloud as a series of geo-referenced x, y coordinates and z (elevation), as well as other attributes for each point. Additional information about the LAS file format can be found here: https://www.asprs.org/divisions-committees/lidar-division/laser-las-file-format-exchange-activities. All 3DEP products are public domain.
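Because LAZ is a lossless compression of LAS, decompressing a tile is just a read-and-rewrite. The sketch below uses laspy with a LAZ backend (an assumed tooling choice; laszip or PDAL work equally well) and a hypothetical tile name.

```python
import laspy  # pip install "laspy[lazrs]" for LAZ read/write support

# Hypothetical 3DEP tile name; LPC tiles are distributed as .laz files.
las = laspy.read("USGS_LPC_sample_tile.laz")   # decompresses on read

# The header, VLRs, and point records carry the original spatial reference and units.
print("point count:", las.header.point_count)
print("LAS version:", las.header.version)

# LAZ -> LAS is lossless: write the same points back out uncompressed.
las.write("USGS_LPC_sample_tile.las")
```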
This data set provides 3 m gridded, bare-earth elevations (excluding trees) that are used as the baseline for the Airborne Snow Observatory (ASO) snow-on products. The data were collected during snow-free conditions as part of the NASA/JPL ASO aircraft survey campaigns.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a thin cloud removal dataset (NUAA-CR4L8/9) for Landsat 8 and 9 images. If you find this useful, consider citing our work:
[1] Li, J., Wang, Y., Sheng, Q., Wu, Z., Wang, B., Ling, X., Liu, X., Du, Y., Gao, F., Camps-Valls, G., Molinier, M., 2025. CloudRuler: Rule-based transformer for cloud removal in Landsat images. Remote Sens. Environ. 328, 114913. https://doi.org/10.1016/j.rse.2025.114913
[2] Du, Y., Li, J., Sheng, Q., Zhu, Y., Wang, B., Ling, X., 2024. Dehazing Network: Asymmetric Unet Based on Physical Model. IEEE Trans. Geosci. Remote Sens. 62, 1–12. https://doi.org/10.1109/TGRS.2024.3359217
The Collection 2 Level 1 data served as the source for the NUAA-CR4L8/9 dataset. There are 20 paired images, consisting of both cloudy and cloud-free scenes from Landsat 8 and 9, acquired between 2022 and 2024 with an 8-day interval between the two acquisitions of each pair over the same region. In each image pair, if the Landsat 8 or 9 image is cloudy, the cloud-free image is chosen from the other satellite. The ratio of training to testing image pairs is set to 4:1, so 16 image pairs are used for training and four for testing. All images are located in the southeastern USA. Both the training and testing datasets contain different types of land cover, which makes the NUAA-CR4L8/9 dataset representative.
Access constraints: http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/INSPIRE_Directive_Article13_1a
This dataset contains the cloud-free products from Landsat 7 Enhanced Thematic Mapper collection acquired over Europe, North Africa and the Middle East; for each scene only one product is selected, with the minimal cloud coverage. The Landsat 7 ETM+ scenes typically cover 185 x 170 km. A standard full scene is nominally centred on the intersection between a Path and Row (the actual image centre can vary by up to 100 m). The data are system corrected.
This dataset provides estimates of maximum snow cover extent (SCE) and snow depth for each 8-day composite period from 2001 to 2017 at 1 km resolution across Alaska. The study area covers the majority of the land area of Alaska, except for areas covered by perennial ice/snow or open water. A downscaling scheme was used in which Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2) global reanalysis 0.5 degree snow depth data were interpolated to a finer 1 km spatial grid. The downscaling scheme incorporated MODIS SCE (MOD10A2) to better account for the influence of local topography on the 1 km snow distribution patterns. For MODIS cloud-contaminated pixels, persistent and patchy cloud cover conditions were improved by applying an elevation-based spatial filtering algorithm to predict snow occurrence. Cloud-free MODIS SCE data were then used to downscale the MERRA-2 snow depth data: for each snow-covered 1 km pixel indicated by the MODIS data, the snow depth was estimated from the snow depth of the neighboring MERRA-2 0.5 degree grid cell, with weights predicted using a spatial filter.
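The description above amounts to resampling a coarse snow-depth field to the fine grid and retaining it only where MODIS indicates snow. The sketch below illustrates that general idea with simple bilinear resampling; the array shapes, the random test data, and the resampling choice are illustrative assumptions and not the study's actual spatial-filter weights.

```python
import numpy as np
from scipy.ndimage import zoom

# Illustrative shapes only: a coarse MERRA-2-like snow depth field (0.5 degree)
# and a fine MODIS-like binary snow cover extent grid, here 50x finer.
coarse_depth = np.random.gamma(2.0, 0.1, size=(20, 40))      # metres
fine_sce = np.random.rand(20 * 50, 40 * 50) > 0.4             # True = snow covered

# Bilinear resampling of the coarse depth to the fine grid (a stand-in for the
# study's spatial-filter weighting of neighbouring coarse cells).
fine_depth = zoom(coarse_depth, 50, order=1)

# Snow depth is only retained where MODIS indicates snow cover.
downscaled = np.where(fine_sce, fine_depth, 0.0)
print(downscaled.shape, float(downscaled.max()))
```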
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Point-wise annotation was conducted on the input point clouds to prepare a labeled dataset for segmenting the different sorghum plant organs. Each sorghum plant's leaf, stem, and panicle points were manually labeled as 0, 1, and 2, respectively, using the segment module of the CloudCompare software.
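The sketch below shows one way to tally the per-organ labels, assuming the annotated clouds are exported from CloudCompare as ASCII files with an x, y, z, label column layout; the file name and column order are assumptions about the export, not a statement of how the dataset is packaged.

```python
import numpy as np

# Hypothetical export: an ASCII point cloud with columns x, y, z, label,
# as CloudCompare can produce after manual segmentation.
pts = np.loadtxt("sorghum_plant_labeled.txt")
labels = pts[:, 3].astype(int)

class_names = {0: "leaf", 1: "stem", 2: "panicle"}
for value, name in class_names.items():
    print(name, int((labels == value).sum()), "points")
```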
License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Landsat and Sentinel-2 acquisitions are among the most frequently used medium-resolution (i.e., 10-30 m) optical data. The data are extensively used in terrestrial vegetation applications, including but not limited to land cover and land use mapping, vegetation condition and phenology monitoring, and disturbance and change mapping. While the Landsat archives alone provide over 40 years, and counting, of continuous and consistent observations, since mid-2015 Sentinel-2 has enabled a revisit frequency of up to 2 days. Although the spatio-temporal availability of both data archives is well known at the scene level, information on the actual availability of usable (i.e., cloud-, snow-, and shade-free) observations at the pixel level needs to be explored for each study to ensure correct parametrization of the algorithms used, and thus the robustness of subsequent analyses. However, a priori data exploration is time- and resource-consuming, and thus is rarely performed. As a result, the spatio-temporal heterogeneity of usable data is often inadequately accounted for in the analysis design, risking ill-advised selection of algorithms and hypotheses, and thus inferior quality of final results. Here we present a global dataset comprising precomputed daily availability of usable Landsat and Sentinel-2 data sampled at the pixel level in a regular 0.18°-point grid. We based the dataset on the complete 1982-2024 Landsat surface reflectance data (Collection 2) and 2015-2024 Sentinel-2 top-of-atmosphere reflectance scenes (pre-Collection-1 and Collection-1). Derivation of cloud-, snow-, and shade-free observations followed the methodology developed in our recent study on data availability over Europe (Lewińska et al., 2023; https://doi.org/10.20944/preprints202308.2174.v2). Furthermore, we expanded the dataset with growing season information derived from the 2001-2019 time series of the yearly 500 m MODIS land cover dynamics product (MCD12Q2; Collection 6). As such, our dataset presents a unique overview of the spatio-temporal availability of usable daily Landsat and Sentinel-2 data at the global scale, hence offering much-needed a priori information aiding the identification of appropriate methods and challenges for terrestrial vegetation analyses at local to global scales. The dataset can be viewed using the dedicated GEE App (link in Related Works). As of February 2025, the dataset has been extended with the 2024 data.
Methods
We based our analyses on the freely and openly accessible Landsat and Sentinel-2 data archives available in Google Earth Engine (Gorelick et al., 2017). We used all Landsat surface reflectance Level 2, Tier 1, Collection 2 scenes acquired with the Thematic Mapper (TM) (Earth Resources Observation And Science (EROS) Center, 1982), Enhanced Thematic Mapper Plus (ETM+) (Earth Resources Observation And Science (EROS) Center, 1999), and Operational Land Imager (OLI) (Earth Resources Observation And Science (EROS) Center, 2013) scanners between 22nd August 1982 and 31st December 2024, and Sentinel-2 TOA reflectance Level-1C scenes (pre-Collection-1 (European Space Agency, 2015, 2021) and Collection-1 (European Space Agency, 2022)) acquired with the MultiSpectral Instrument (MSI) between 23rd June 2015 and 31st December 2024. We implemented a conservative pixel-quality screening to identify cloud-, snow-, and shade-free land pixels.
For the Landsat time series, we relied on the inherent pixel quality bands (Foga et al., 2017; Zhu & Woodcock, 2012) excluding all pixels flagged as cloud, snow, or shadow as well as pixels with the fill-in value of 20,000 (scale factor 0.0001; (Zhang et al., 2022)). Furthermore, due to the Landsat 7 orbit drift (Qiu et al., 2021) we excluded all ETM+ scenes acquired after 31st December 2020. Because Sentinel-2 Level-2A quality masks lack the desired scope and accuracy (Baetens et al., 2019; Coluzzi et al., 2018), we resorted to Level-1C scenes accompanied by the supporting Cloud Probability product. Furthermore, we employed a selection of conditions, including a threshold on Band 10 (SWIR-Cirrus), which is not available at Level‑2A. Overall, our Sentinel-2-specific cloud, shadow, and snow screening comprised:
- exclusion of all pixels flagged as clouds and cirrus in the inherent 'QA60' cloud mask band;
- exclusion of all pixels with cloud probability >50%, as defined in the corresponding Cloud Probability product available for each scene;
- exclusion of cirrus clouds (B10 reflectance >0.01);
- exclusion of clouds based on Cloud Displacement Analysis (CDI < -0.5) (Frantz et al., 2018);
- exclusion of dark pixels (B8 reflectance <0.16) within cloud shadows modelled for each scene with scene-specific sun parameters for the clouds identified in the previous steps, assuming a cloud height of 2,000 m;
- exclusion of pixels within a 40-m buffer (two pixels at 20-m resolution) around each identified cloud and cloud shadow object;
- exclusion of snow pixels identified with the snow mask branch of the Sen2Cor processor (Main-Knorn et al., 2017).
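A partial sketch of this screening in the Earth Engine Python API is shown below; it covers the QA60, cloud-probability, cirrus, CDI, and buffer steps, while the shadow projection and the Sen2Cor snow branch are omitted for brevity. The collection IDs, the join on system:index, and the buffer call are assumptions, not the exact implementation behind the dataset.

```python
import ee
ee.Initialize()

def s2_usable_mask(img):
    """Approximate cloud screening for one Sentinel-2 L1C image (shadow and snow steps omitted)."""
    scale = 1e4  # Level-1C stores reflectance * 10000

    qa = img.select('QA60')
    qa_clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))  # cloud/cirrus bits unset

    prob_ok = ee.Image(img.get('cloud_prob')).select('probability').lte(50)    # cloud probability <= 50%
    cirrus_ok = img.select('B10').lte(0.01 * scale)                            # B10 reflectance <= 0.01
    cdi_ok = ee.Algorithms.Sentinel2.CDI(img).gte(-0.5)                        # parallax-based cloud test

    cloudy = qa_clear.And(prob_ok).And(cirrus_ok).And(cdi_ok).Not()
    cloudy = cloudy.focalMax(40, 'circle', 'meters')                           # 40 m buffer around clouds
    return cloudy.Not()

# Join each Level-1C image with its cloud-probability image, then apply the mask.
s2 = ee.ImageCollection('COPERNICUS/S2_HARMONIZED')
s2_prob = ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
joined = ee.Join.saveFirst('cloud_prob').apply(
    primary=s2, secondary=s2_prob,
    condition=ee.Filter.equals(leftField='system:index', rightField='system:index'))
usable = ee.ImageCollection(joined).map(
    lambda img: ee.Image(img).updateMask(s2_usable_mask(ee.Image(img))))
```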
Through applying the data screening, we generated a collection of daily availability records for the Landsat and Sentinel-2 data archives. We next subsampled the resulting binary time series with a regular 0.18° x 0.18° point grid defined in the EPSG:4326 projection, obtaining 475,150 points located over land between -179.8867° and 179.5733° longitude and between 83.50834° and -59.05167° latitude. Owing to the substantial amount of data comprised in the Landsat and Sentinel-2 archives and the computationally demanding process of cloud-, snow-, and shade-screening, we performed the subsampling in batches corresponding to a 4° x 4° regular grid and consolidated the final data in post-processing. We derived the pixel-specific growing season information from the 2001-2019 time series of the yearly 500 m MODIS land cover dynamics product (MCD12Q2; Collection 6) available in Google Earth Engine. We only used information on the start and the end of a growing season, excluding all pixels with quality below 'best'. When a pixel went through more than one growing cycle per year, we approximated the growing season as the period between the beginning of the first growing cycle and the end of the last growing cycle. To fill in data gaps arising from low-quality data and insufficiently pronounced seasonality (Friedl et al., 2019), we used a 5x5 mean moving window filter to ensure better spatial continuity of our growing season datasets. Following Lewińska et al. (2023), we defined the start of the season as the pixel-specific 25th percentile of the 2001-2019 distribution of start-of-season dates, and the end of the season as the pixel-specific 75th percentile of the 2001-2019 distribution of end-of-season dates. Finally, we subsampled the start and end of season datasets with the same regular 0.18° x 0.18° point grid defined in the EPSG:4326 projection.
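The percentile-based season definition is straightforward to reproduce; the sketch below illustrates it on illustrative arrays (the grid size and random test data are assumptions, and a NaN-aware filter would be needed for real, gappy MCD12Q2 data).

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative arrays only: yearly start/end of season (day of year) for
# 2001-2019 on a hypothetical pixel grid.
years, rows, cols = 19, 200, 300
sos = np.random.uniform(60, 180, (years, rows, cols))   # start of season
eos = np.random.uniform(200, 330, (years, rows, cols))  # end of season

# Pixel-specific season bounds: 25th percentile of start dates and
# 75th percentile of end dates over the 2001-2019 distribution.
season_start = np.nanpercentile(sos, 25, axis=0)
season_end = np.nanpercentile(eos, 75, axis=0)

# 5x5 mean moving window for better spatial continuity.
season_start = uniform_filter(season_start, size=5)
season_end = uniform_filter(season_end, size=5)
print(season_start.shape, season_end.shape)
```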
References:
Baetens, L., Desjardins, C., & Hagolle, O. (2019). Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure. Remote Sensing, 11(4), 433. https://doi.org/10.3390/rs11040433
Coluzzi, R., Imbrenda, V., Lanfredi, M., & Simoniello, T. (2018). A first assessment of the Sentinel-2 Level 1-C cloud mask product to support informed surface analyses. Remote Sensing of Environment, 217, 426–443. https://doi.org/10.1016/j.rse.2018.08.009
Earth Resources Observation And Science (EROS) Center. (1982). Collection-2 Landsat 4-5 Thematic Mapper (TM) Level-1 Data Products [dataset]. U.S. Geological Survey. https://doi.org/10.5066/P918ROHC
Earth Resources Observation And Science (EROS) Center. (1999). Collection-2 Landsat 7 Enhanced Thematic Mapper Plus (ETM+) Level-1 Data Products [dataset]. U.S. Geological Survey. https://doi.org/10.5066/P9TU80IG
Earth Resources Observation And Science (EROS) Center. (2013). Collection-2 Landsat 8-9 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) Level-1 Data Products [dataset]. U.S. Geological Survey. https://doi.org/10.5066/P975CC9B
European Space Agency. (2015). Sentinel-2 MSI Level-1C TOA Reflectance [dataset]. European Space Agency. https://doi.org/10.5270/S2_-d8we2fl
European Space Agency. (2021). Sentinel-2 MSI Level-1C TOA Reflectance, Collection 0 [dataset]. European Space Agency. https://doi.org/10.5270/S2_-d8we2fl
European Space Agency. (2022). Sentinel-2 MSI Level-1C TOA Reflectance [dataset]. European Space Agency. https://doi.org/10.5270/S2_-742ikth
Foga, S., Scaramuzza, P. L., Guo, S., Zhu, Z., Dilley, R. D., Beckmann, T., Schmidt, G. L., Dwyer, J. L., Joseph Hughes, M., & Laue, B. (2017). Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sensing of Environment, 194, 379–390. https://doi.org/10.1016/j.rse.2017.03.026
Frantz, D., Haß, E., Uhl, A., Stoffels, J., & Hill, J. (2018). Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects. Remote Sensing of Environment, 215, 471–481. https://doi.org/10.1016/j.rse.2018.04.046
Friedl, M., Josh, G., & Sulla-Menashe, D. (2019). MCD12Q2 MODIS/Terra+Aqua Land Cover Dynamics Yearly L3 Global 500m SIN Grid V006 [dataset]. NASA EOSDIS Land Processes DAAC. https://doi.org/10.5067/MODIS/MCD12Q2.006
Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., & Moore, R. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, 18–27. https://doi.org/10.1016/j.rse.2017.06.031
Lewińska, K. E., Ernst, S., Frantz, D., Leser, U., & Hostert, P. (2024). Global Overview of Usable Landsat and Sentinel-2 Data for 1982–2023. Data in Brief, 57. https://doi.org/10.1016/j.dib.2024.111054
Main-Knorn, M., Pflug, B., Louis, J., Debaecker, V., Müller-Wilm, U., & Gascon, F. (2017). Sen2Cor for Sentinel-2. In L. Bruzzone, F. Bovolo,
License: Open Government Licence 3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/)
License information was derived automatically
The LIDAR point cloud is an archive of hundreds of millions, or sometimes billions, of highly accurate 3-dimensional x, y, z points and component attributes produced by the Environment Agency.
The Environment Agency's site-specific LIDAR DSM and DTM Time Stamped Tiles gridded raster products are derived from the point cloud. The component attributes a point cloud contains can provide valuable additional information to supplement elevation and can enable the user to make bespoke raster products, such as canopy height models or intensity rasters.
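As an illustration of such a bespoke product, the sketch below grids point intensities into a simple mean-intensity raster with NumPy; the laspy reader, the 1 m cell size, and the tile name are assumptions rather than an Environment Agency workflow.

```python
import numpy as np
import laspy  # pip install "laspy[lazrs]" for the LAZ tiles

# Hypothetical tile name; any point cloud LAZ tile from the download works.
las = laspy.read("environment_agency_tile.laz")
x = np.asarray(las.x)
y = np.asarray(las.y)
intensity = np.asarray(las.intensity, dtype=float)

# Grid points onto a 1 m raster and average intensity per cell.
res = 1.0
x_edges = np.arange(x.min(), x.max() + res, res)
y_edges = np.arange(y.min(), y.max() + res, res)
total, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges], weights=intensity)
count, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
intensity_raster = np.divide(total, count,
                             out=np.full_like(total, np.nan), where=count > 0)
print(intensity_raster.shape)
```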
Site-specific LIDAR surveys have been carried out across England since 1998, with certain areas, such as the coastal zone, being surveyed multiple times. The point cloud is available for surveys going back to 2006. Although the DSM and DTM Time Stamped Tiles products are derived from the point cloud data, there may not necessarily be a matching point cloud for each surface model due to historic data archiving processes.
During processing, the laser returns in the point cloud are classified into 'ground' and 'surface objects'. Further manual editing undertaken on the derived digital terrain model (DTM) means the classified ground points in the point cloud data will not match the final derived DTM.
Data is available in 5 km download zip files for each year of survey. Within each downloaded zip file are LAZ files aligned to the Ordnance Survey grid. The size of each tile is dependent upon the spatial resolution of the data.
Please refer to the coverage metadata files for the start and end date flown of a survey, as well as additional information on the components the point cloud contains, such as the average point density.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset describes the cloud removal dataset of the Yunnan-Guizhou Plateau in China. The dataset currently contains 19 subfolders, with numbers 1 to 18 used to distinguish specific data collection areas. Each subfolder includes pairs of TIFF images (consisting of cloud-covered and cloud-free images) and corresponding subfolders for cropped images. The folder named "ALL" contains all the image data, of which "CLOUD" and "CLEAR" each contain 4792 images. The "train" and "test" folders are split 8:2 for training and testing purposes, respectively.
ERA5 is the fifth generation of the European Centre for Medium-Range Weather Forecasts (ECMWF) Atmospheric Reanalysis, providing hourly estimates of a large number of atmospheric, land, and oceanic climate variables. The data span from 1979 to the present, covering the Earth on a 30 km grid and resolving the atmosphere using 137 levels from the surface up to a height of 80 km. A reanalysis is the "most complete picture currently possible of past weather and climate." Reanalyses are created by assimilating a wide range of data sources via numerical weather prediction (NWP) models. Meteorologically valuable variables for land and atmosphere were ingested and converted from GRIB data to Zarr (with no other modifications) to surface a cloud-optimized version of ERA5. In addition, an open-source code base is provided to show the provenance of the data as well as demonstrate common research workflows. This dataset includes both raw (GRIB) and cloud-optimized (Zarr) files.
Use cases. ERA5 data can be used in many different applications, including:
- Training ML models that predict the impact of weather on different phenomena
- Training and evaluating ML models that forecast the weather
- Computing climatologies, the average weather for a region over a given period of time
- Visualizing and studying historical weather events, such as Hurricane Sandy
Thanks to the open data policy of the Copernicus Climate Change and Atmosphere Monitoring Services and ECMWF, this dataset is available free as part of the Google Cloud Public Dataset Program. Please see below for license information.
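Reading the cloud-optimized Zarr store typically goes through xarray; the sketch below is a minimal example, with the store path and the variable name used as placeholders for whatever the dataset listing actually provides.

```python
import xarray as xr

# Hypothetical Zarr store path; substitute the cloud-optimized ERA5 store listed
# for this dataset (reading a gs:// path additionally requires gcsfs).
store = "gs://example-bucket/era5/cloud-optimized.zarr"
ds = xr.open_zarr(store)

# Example: a monthly climatology of a 2 m temperature variable over 1991-2020.
# The variable name is an assumption; check ds.data_vars for the real names.
t2m = ds["2m_temperature"].sel(time=slice("1991-01-01", "2020-12-31"))
monthly_climatology = t2m.groupby("time.month").mean("time")
print(monthly_climatology)
```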
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This article presents a cloud-free snow cover dataset with a daily temporal resolution and 0.05° spatial resolution from March 2000 to February 2017 over the contiguous United States (CONUS). The dataset was developed by completely removing clouds from NASA's original Moderate Resolution Imaging Spectroradiometer (MODIS) Snow Cover Area product (MOD10C1) through a series of spatiotemporal filters followed by the Variational Interpolation (VI) algorithm; the filters and the VI algorithm were evaluated using a bootstrapping test. The dataset was validated against Landsat 7 ETM+ snow cover maps over the same period in the Seattle, Minneapolis, Rocky Mountains, and Sierra Nevada regions. The resulting cloud-free snow cover accurately captured the dynamic changes of snow throughout the period, with average Probability of Detection (POD) and False Alarm Ratio (FAR) values of 0.955 and 0.179, respectively. The dataset provides continuous inputs of snow cover area for hydrologic studies for almost two decades. The VI algorithm can be applied in other regions, provided that a proper validation is performed.
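POD and FAR follow the usual contingency-table definitions, POD = hits / (hits + misses) and FAR = false alarms / (hits + false alarms). The short sketch below computes them from binary snow maps; the toy arrays are illustrative only.

```python
import numpy as np

def pod_far(predicted_snow, reference_snow):
    """Probability of Detection and False Alarm Ratio from binary snow maps."""
    predicted = np.asarray(predicted_snow, dtype=bool)
    reference = np.asarray(reference_snow, dtype=bool)
    hits = np.sum(predicted & reference)
    misses = np.sum(~predicted & reference)
    false_alarms = np.sum(predicted & ~reference)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Toy example with a 2x2 pixel map.
pod, far = pod_far([[1, 0], [1, 1]], [[1, 1], [0, 1]])
print(f"POD={pod:.3f}, FAR={far:.3f}")
```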
License: Open Government Licence (http://reference.data.gov.uk/id/open-government-licence)
The aim of the GRAPE project was to produce a global cloud and aerosol dataset using a state-of-the-art physical retrieval applied to the entire duration of the Along Track Scanning Radiometer 2 (ATSR-2) mission (aboard ERS-2). This dataset will be compared and contrasted with existing climatologies (based on different instruments and very different retrieval algorithms). The GRAPE project was initially funded through the Clouds, Water Vapour and Climate (CWVC) Programme, a five-year NERC directed research programme. The dataset has been developed further within the National Centre for Earth Observation (NCEO) and now includes data from the Advanced Along Track Scanning Radiometer (AATSR). The GRAPE dataset contains cloud optical depth, aerosol optical depth (cloud-free), cloud phase, cloud particle size, cloud top pressure, cloud fraction and cloud ice/water path, along with associated error measurements.
The NASA LaRC cloud and clear sky radiation properties dataset is generated using algorithms initially developed for application to TRMM and MODIS imagery within the NASA CERES program. The algorithms have been adapted to operate upon AVHRR, an instrument that has fewer spectral channels than MODIS. This dataset utilizes calibrated AVHRR reflectances from a companion FCDR. Cloud and clear-sky radiation properties are derived globally at the 4 km Global Area Coverage pixel scale during both day and night using this approach.
CDR quality variables include:
- Cloud and clear sky pixel detection (count)
- Cloud top thermodynamic phase (count)
- Cloud optical depth (count)
- Cloud particle effective radius (micrometers)
- Air pressure at effective cloud top (hPa)
- Air temperature at effective cloud top (K)
- Height at effective cloud top (km)
Other non-CDR quality variables include:
- Air pressure at cloud top (hPa)
- Air temperature at cloud top (K)
- Height at cloud top (km)
- Height at cloud base (km)
- Air pressure at cloud base (hPa)
- Overshooting cloud top detection mask (count)
- Land and sea surface temperature retrieval (K)
- Shortwave broadband albedo (unitless)
- Longwave broadband flux (W/m2)
- Snow and ice cover flag (count)
- Land and sea surface temperature retrieval quality flag (count)
- Clear sky pixel classification (count)
- Cloudy pixel classification (count)
License: Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
These sample LiDAR datasets were collected by the Hungarian State Railways with a Riegl VMX-450 high-density mobile mapping system (MMS) mounted on a railroad vehicle. The sensor was capable of recording 1.1 million points per second with an average three-dimensional range precision of 3 mm and a maximum threshold of 7 mm. Average positional accuracy was 3 cm with a maximum threshold of 5 cm. The acquired point clouds contain the georeferenced spatial information (3D coordinates) with intensity and RGB data attached to the points. The applied reference system is the Hungarian national spatial reference system, EPSG:23700.
3 datasets with different topographical regions of Hungary were selected:
1) mav_szabadszallas_csengod_665500_162600_665900_163200.laz is a curved rail track segment on flat terrain between the city of Szabadszállás and the town of Csengőd. The selected segment is ca. 600 m long, containing 51.8 million points.
2) mav_sztg_szh_439040_183444_440377_183863.laz is a curved rail track segment with varied terrain and slopes between the cities of Szentgotthárd and Szombathely. The selected segment is ca. 1500 m long, containing 58.6 million points.
3) mav_szabadszallas_csengod_666285_159100_666436_159200.laz is a curved rail track segment on flat terrain between Szabadszállás and Csengőd, 100 m long, containing 7.3 million points.
Manually annotated ground truth data for cable and rail track recognition is also attached.