Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
LiDAR_Point_Clouds, Classified. The data have been processed to conform to the Australian Height Datum (AHD) and converted from files collected as swaths into tiles. The file format is LAS.
LAS is an industry-standard format created and maintained by the American Society for Photogrammetry and Remote Sensing (ASPRS). LAS is a published standard file format for the interchange of lidar data: it allows vendors and clients to exchange data while preserving all information specific to that data. Each LAS file contains metadata of the lidar survey in a header block, followed by individual records for each laser pulse recorded. The header portion of each LAS file holds attribute information on the lidar survey itself: data extents, flight date, flight time, number of point records, number of points by return, any applied data offset, and any applied scale factor. The following lidar point attributes are maintained for each laser pulse in a LAS file: x, y, z location information, GPS time stamp, intensity, return number, number of returns, point classification values, scan angle, additional RGB values, scan direction, edge of flight line, user data, point source ID and waveform information. Every lidar point in a LAS file can have a classification code set for it. Classifying lidar data allows you to organize mass points into specific data classes while still maintaining them as a whole data collection in LAS files. Typically, these classification codes represent the type of object that has reflected the laser pulse. Point classification is usually completed by data vendors using semi-automated techniques on the point cloud to assign the feature type associated with each point. Lidar points can be classified into a number of categories including bare earth or ground, top of canopy, and water. The different classes are defined using numeric integer codes in the LAS files.
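As a sketch of how this header block is laid out, the following Python snippet reads a few public header fields (file signature, version, point data format, and legacy point count) at the byte offsets defined in the ASPRS LAS 1.0–1.3 layout. This is an illustrative partial reader, not a substitute for a full LAS library such as laspy.

```python
import struct

def read_las_header(buf: bytes) -> dict:
    """Read a few fields of a LAS 1.x public header block.

    Offsets follow the ASPRS LAS 1.0-1.3 layout; this is an
    illustrative partial reader, not a complete implementation.
    """
    if buf[0:4] != b"LASF":                              # file signature
        raise ValueError("not a LAS file")
    major, minor = struct.unpack_from("<BB", buf, 24)    # version major/minor
    fmt, rec_len = struct.unpack_from("<BH", buf, 104)   # point format / record length
    (n_points,) = struct.unpack_from("<I", buf, 107)     # legacy point record count
    return {
        "version": f"{major}.{minor}",
        "point_format": fmt,
        "record_length": rec_len,
        "num_points": n_points,
    }
```

In practice you would pass the first 227 bytes (the LAS 1.2 header size) of a .las file to this function; scale factors, offsets, and extents live at higher offsets in the same block.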
The following table contains the LAS classification codes as defined in the LAS 1.1 standard:

Class code  Classification type
0           Never classified
1           Unassigned
2           Ground
3           Low vegetation
4           Medium vegetation
5           High vegetation
6           Building
7           Noise
8           Model key
9           Water
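For code that consumes classified LAS points, the table above maps directly to a small lookup; a minimal Python sketch:

```python
# LAS 1.1 classification codes, mirroring the table above
LAS11_CLASSES = {
    0: "Never classified",
    1: "Unassigned",
    2: "Ground",
    3: "Low vegetation",
    4: "Medium vegetation",
    5: "High vegetation",
    6: "Building",
    7: "Noise",
    8: "Model key",
    9: "Water",
}

def class_name(code: int) -> str:
    # Codes outside the table (e.g. vendor-specific ones) fall through
    return LAS11_CLASSES.get(code, f"Reserved/other ({code})")
```

Later LAS revisions extend this list (e.g. bridge decks, high noise), so unknown codes should be handled rather than rejected.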
Lineage: Fugro Spatial Solutions (FSS) was awarded a contract by Geoscience Australia to carry out an aerial LiDAR survey over Kakadu National Park. The data will be used to examine the potential impacts of climate change and sea level rise on the West Alligator, South Alligator and East Alligator River systems and other minor areas. The project area was flight planned using parameters as specified. An FSS aircraft and aircrew were mobilised to site and the project area was captured using a Leica ALS60 system positioned using a DGPS base station at Darwin airport. The Darwin base station was positioned by DGPS observations from local control stations. A ground control survey was carried out by FSS surveyors to determine ground positions and heights for control and check points throughout the area. All data were returned to the FSS office in Perth and processed. The deliverable datasets were generated and supplied to Geoscience Australia with this metadata information.
NEDF Metadata
Acquisition Start Date: Saturday, 22 October 2011
Acquisition End Date: Wednesday, 16 November 2011
Sensor: LiDAR
Device Name: Leica ALS60 (S/N: 6145)
Flying Height (AGL): 1409
INS/IMU Used: uIRS-56024477
Number of Runs: 468
Number of Cross Runs: 28
Swath Width: 997
Flight Direction: Non-Cardinal
Swath (side) Overlap: 20
Horizontal Datum: GDA94
Vertical Datum: AHD71
Map Projection: MGA53
Description of Aerotriangulation Process Used: Not Applicable
Description of Rectification Process Used: Not Applicable
Spatial Accuracy Horizontal: 0.8
Spatial Accuracy Vertical: 0.3
Average Point Spacing (per sqm): 2
Laser Return Types: 4 pulses (1st, 2nd, 3rd, 4th and intensity)
Data Thinning: None
Laser Footprint Size: 0.32
Calibration Certification (Manufacturer/Cert. Company): Leica
Limitations of the Data: To project specification
Surface Type: Various
Product Type: Other
Classification Type: C0
Grid Resolution: 2
Distribution Format: Other
Processing/Derivation Lineage: Capture, Geodetic Validation
WMS: Not Applicable
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. This model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and planning climate change response. Buildings can have complex, irregular geometrical structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training/testing/validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns of common NZ building architecture.

Licensing requirements
ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro

Using the model
The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

Input
The model is trained with classified LiDAR that follows the LINZ base specification. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets.
Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is geometrically similar to, but distinct from, the trained class. When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block and extra attributes, should match those of the data originally used for training this model (see Training data section below).

Output
The model will classify the point cloud into the following classes, with their meanings as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

0 Background
6 Building

Applicable geographies
The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to the training data.

Dataset     City
Training    Auckland, Christchurch, Kapiti, Wellington
Testing     Auckland, Wellington
Validating  Hutt

Model architecture
This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.

Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.
Class             Precision  Recall    F1-score
Never Classified  0.984921   0.975853  0.979762
Building          0.951285   0.967563  0.9584

Training data
This model is trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. Train-test split percentage: {Train: 75%, Test: 25%}. This ratio was chosen based on analysis of statistics from previous epochs, which showed a decent improvement. The training data used has the following characteristics:

X, Y, and Z linear unit: meter
Z range: -137.74 m to 410.50 m
Number of returns: 1 to 5
Intensity: 16 to 65520
Point spacing: 0.2 ± 0.1
Scan angle: -17 to +17
Maximum points per block: 8192
Block size: 50 meters
Class structure: [0, 6]

Sample results
The model was used to classify a dataset with 23 pts/m density (Wellington city dataset). The model's performance is directly proportional to the dataset's point density and is best on point clouds with noise excluded. To learn how to use this model, see this story
LiDAR point cloud data for Washington, DC is available for anyone to use on Amazon S3. This dataset, managed by the Office of the Chief Technology Officer (OCTO), through the direction of the District of Columbia GIS program, contains tiled point cloud data for the entire District along with associated metadata.
The point cloud was delivered with data in the following classifications: Class 1 - Processed but Unclassified; Class 2 - Bare Earth Ground; Class 3 - Low Vegetation; Class 4 - Medium Vegetation; Class 5 - High Vegetation; Class 6 - Buildings; Class 7 - Low Point (Noise); Class 9 - Water; Class 17 - Bridge Decks; Class 18 - High Noise; Class 20 - Ignored Ground.
The classification of point cloud datasets to identify distribution wires is useful for identifying vegetation encroachment around power lines. Such workflows are important for preventing fires and power outages and are typically manual, recurring, and labor-intensive. This model is designed to extract distribution wires at the street level. Its predictions for high-tension transmission wires are less consistent with changes in geography as compared to street-level distribution wires. In the case of high-tension transmission wires, a lower 'recall' value is observed as compared to the value observed for low-lying street wires and poles.

Using the model
Follow the guide to use the model. The model can be used with ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Input
The model accepts unclassified point clouds with point geometry (X, Y and Z values). Note: The model is not dependent on any additional attributes such as Intensity, Number of Returns, etc. This model is trained to work on unclassified point clouds that are in a projected coordinate system, in which the units of X, Y and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets.
Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block and extra attributes, should match those of the data originally used for training this model (see Training data section below).

Output
The model will classify the point cloud into the following classes, with their meanings as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

Class code  Class description
0           Background
14          Distribution Wires
15          Distribution Tower/Poles

Applicable geographies
The model is expected to work within any geography. It has been seen to produce favorable results, as shown here, in many regions. However, results can vary for datasets that are statistically dissimilar to the training data.

Model architecture
This model uses the RandLANet model architecture implemented in ArcGIS API for Python.

Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.

Class                    Precision  Recall    F1-score
Background (0)           0.999679   0.999876  0.999778
Distribution Wires (14)  0.955085   0.936825  0.945867
Distribution Poles (15)  0.707983   0.553888  0.621527

Training data
This model is trained on a manually classified training dataset provided to Esri by AAM group. The training data used has the following characteristics:

X, Y, and Z linear unit: meter
Z range: -240.34 m to 731.17 m
Number of returns: 1 to 5
Intensity: 1 to 4095
Point spacing: 0.2 ± 0.1
Scan angle: -42 to +35
Maximum points per block: 20000
Extra attributes: None
Class structure: [0, 14, 15]

Sample results
Here are a few results from the model.
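The F1-scores reported in tables like the one above are the harmonic mean of precision and recall; a one-line check (using the Distribution Wires row as input) reproduces the reported value:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Distribution Wires (14): precision 0.955085, recall 0.936825
f1 = f1_score(0.955085, 0.936825)
print(round(f1, 6))  # 0.945867, matching the table
```

Note how the low recall on Distribution Poles (0.553888) pulls that class's F1 well below its precision; the harmonic mean penalizes whichever of the two is weaker.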
https://www.ontario.ca/page/open-government-licence-ontario
Many Ontario lidar point cloud datasets have been made available for direct download by the Government of Canada through the federal Open Government Portal under the LiDAR Point Clouds – CanElevation Series record. Instructions for bulk data download are available in the Download Instructions document linked from that page. To download individual tiles, zoom in on the map in GeoHub and click a tile for a pop-up containing a download link.
See the LIO Support - Large Data Ordering Instructions to obtain a copy of data for projects that are not yet available for direct download. Data can be requested by project area or a set of tiles. To determine which project contains your area of interest or to view single tiles, zoom in on the map above and click. For bulk tile orders follow the link in the Additional Documentation section below to download the tile index in shapefile format. Data sizes by project area are listed below.
The Ontario Point Cloud (Lidar-Derived) consists of points containing elevation and intensity information derived from returns collected by an airborne topographic lidar sensor. The minimum point cloud classes are Unclassified, Ground, Water, High and Low Noise. The data is structured into non-overlapping 1-km by 1-km tiles in LAZ format.
This dataset is a compilation of lidar data from multiple acquisition projects; as such, specifications, parameters, accuracy and sensors may vary by project. Some projects have additional classes, such as vegetation and buildings. See the detailed User Guide and contractor metadata reports linked below for additional information, including information about interpreting the index for placement of data orders.
Raster derivatives have been created from the point clouds. These products may meet your needs and are available for direct download. For a representation of bare earth, see the Ontario Digital Terrain Model (Lidar-Derived). For a model representing all surface features, see the Ontario Digital Surface Model (Lidar-Derived).
You can monitor the availability and status of lidar projects on the Ontario Lidar Coverage map on the Ontario Elevation Mapping Program hub page.
Additional Documentation
Ontario Classified Point Cloud (Lidar-Derived) - User Guide (DOCX)
OMAFRA Lidar 2016-18 - Cochrane - Additional Metadata (PDF)
OMAFRA Lidar 2016-18 - Peterborough - Additional Metadata (PDF)
OMAFRA Lidar 2016-18 - Lake Erie - Additional Metadata (PDF)
CLOCA Lidar 2018 - Additional Contractor Metadata (PDF)
South Nation Lidar 2018-19 - Additional Contractor Metadata (PDF)
OMAFRA Lidar 2022 - Lake Huron - Additional Metadata (PDF)
OMAFRA Lidar 2022 - Lake Simcoe - Additional Metadata (PDF)
Huron-Georgian Bay Lidar 2022-23 - Additional Metadata (Word)
Kawartha Lakes Lidar 2023 - Additional Metadata (Word)
Sault Ste Marie Lidar 2023-24 - Additional Metadata (Word)
Thunder Bay Lidar 2023-24 - Additional Metadata (Word)
Timmins Lidar 2024 - Additional Metadata (Word)
OMAFRA Lidar Point Cloud 2016-18 - Cochrane - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2016-18 - Peterborough - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2016-18 - Lake Erie - Lift Metadata (SHP)
CLOCA Lidar Point Cloud 2018 - Lift Metadata (SHP)
South Nation Lidar Point Cloud 2018-19 - Lift Metadata (SHP)
York-Lake Simcoe Lidar Point Cloud 2019 - Lift Metadata (SHP)
Ottawa River Lidar Point Cloud 2019-20 - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2022 - Lake Huron - Lift Metadata (SHP)
OMAFRA Lidar Point Cloud 2022 - Lake Simcoe - Lift Metadata (SHP)
Eastern Ontario Lidar Point Cloud 2021-22 - Lift Metadata (SHP)
DEDSFM Huron-Georgian Bay Lidar Point Cloud 2022-23 - Lift Metadata (SHP)
DEDSFM Kawartha Lakes Lidar Point Cloud 2023 - Lift Metadata (SHP)
DEDSFM Sault Ste Marie Lidar Point Cloud 2023-24 - Lift Metadata (SHP)
DEDSFM Sudbury Lidar Point Cloud 2023-24 - Lift Metadata (SHP)
DEDSFM Thunder Bay Lidar Point Cloud 2023-24 - Lift Metadata (SHP)
DEDSFM Timmins Lidar Point Cloud 2024 - Lift Metadata (SHP)
GTA 2023 - Lift Metadata (SHP)
Ontario Classified Point Cloud (Lidar-Derived) - Tile Index (SHP)
Ontario Lidar Project Extents (SHP)
Data Package Sizes
LEAP 2009 - 22.9 GB
OMAFRA Lidar 2016-18 - Cochrane - 442 GB
OMAFRA Lidar 2016-18 - Lake Erie - 1.22 TB
OMAFRA Lidar 2016-18 - Peterborough - 443 GB
GTA 2014 - 57.6 GB
GTA 2015 - 63.4 GB
Brampton 2015 - 5.9 GB
Peel 2016 - 49.2 GB
Milton 2017 - 15.3 GB
Halton 2018 - 73 GB
CLOCA 2018 - 36.2 GB
South Nation 2018-19 - 72.4 GB
York Region-Lake Simcoe Watershed 2019 - 75 GB
Ottawa River 2019-20 - 836 GB
Lake Nipissing 2020 - 700 GB
Ottawa-Gatineau 2019-20 - 551 GB
Hamilton-Niagara 2021 - 660 GB
OMAFRA Lidar 2022 - Lake Huron - 204 GB
OMAFRA Lidar 2022 - Lake Simcoe - 154 GB
Belleville 2022 - 1.09 TB
Eastern Ontario 2021-22 - 1.5 TB
Huron Shores 2021 - 35.5 GB
Muskoka 2018 - 72.1 GB
Muskoka 2021 - 74.2 GB
Muskoka 2023 - 532 GB
The Muskoka lidar projects are available in the CGVD2013 or CGVD28 vertical datums. Please specify which datum is needed when ordering data.
Digital Elevation Data to Support Flood Mapping 2022-26:
Huron-Georgian Bay 2022 - 1.37 TB
Huron-Georgian Bay 2023 - 257 GB
Huron-Georgian Bay 2023 Bruce - 95.2 GB
Kawartha Lakes 2023 - 385 GB
Sault Ste Marie 2023-24 - 1.15 TB
Sudbury 2023-24 - 741 GB
Thunder Bay 2023-24 - 654 GB
Timmins 2024 - 318 GB
GTA 2023 - 985 GB
Status Ongoing: Data is continually being updated
Maintenance and Update Frequency As needed: Data is updated as deemed necessary
Contact Ontario Ministry of Natural Resources - Geospatial Ontario, geospatial@ontario.ca
https://www.neonscience.org/data-samples/data-policies-citation
Unclassified three-dimensional point cloud by flightline and classified point cloud by 1 km tile, provided in LAZ format. Classifications follow standard ASPRS definitions. All point coordinates are provided in meters. Horizontal coordinates are referenced in the appropriate UTM zone and the ITRF00 datum. Elevations are referenced to Geoid12A.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into tree and background classes. This model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify trees is useful in applications such as high-quality 3D basemap creation, urban planning, forestry workflows, and planning climate change response. Trees can have complex, irregular geometrical structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract trees in both urban and rural areas in New Zealand. The training/testing/validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns common to NZ.

Licensing requirements
ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro

Using the model
The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

Input
The model is trained with classified LiDAR that follows the LINZ base specification. The input data should be similar to this specification. Note: The model is dependent on additional attributes such as Intensity, Number of Returns, etc., similar to the LINZ base specification. This model is trained to work on classified and unclassified point clouds that are in a projected coordinate system, in which the units of X, Y and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points.
Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block and extra attributes, should match those of the data originally used for training this model (see Training data section below).

Output
The model will classify the point cloud into the following classes, with their meanings as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

0 Background
5 Trees / High-vegetation

Applicable geographies
The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to the training data.

Dataset     City
Training    Wellington
Testing     Tawa
Validating  Christchurch

Model architecture
This model uses the PointCNN model architecture implemented in ArcGIS API for Python.

Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.
Class             Precision  Recall    F1-score
Never Classified  0.991200   0.975404  0.983239
High Vegetation   0.933569   0.975559  0.954102

Training data
This model is trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. Train-test split percentage: {Train: 80%, Test: 20%}. This ratio was chosen based on analysis of statistics from previous epochs, which showed a decent improvement. The training data used has the following characteristics:

X, Y, and Z linear unit: meter
Z range: -121.69 m to 26.84 m
Number of returns: 1 to 5
Intensity: 16 to 65520
Point spacing: 0.2 ± 0.1
Scan angle: -15 to +15
Maximum points per block: 8192
Block size: 20 meters
Class structure: [0, 5]

Sample results
The model was used to classify a dataset with 5 pts/m density (Christchurch city dataset). The model's performance is directly proportional to the dataset's point density and is best on point clouds with noise excluded. To learn how to use this model, see this story
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The classification of airborne lidar data is a relevant task in different disciplines. The information about the geometry and the full waveform can be used in order to classify the 3D point cloud. In Wadden Sea areas the classification of lidar data is of main interest for the scientific monitoring of coastal morphology and habitats, but it becomes a challenging task due to flat areas with hardly any discriminative objects. For the classification we combine a Conditional Random Fields framework with a Random Forests approach. By classifying in this way, we benefit from the consideration of context on the one hand and from the opportunity to utilise a high number of classification features on the other hand. We investigate the relevance of different features for the lidar points in coastal areas as well as for the interaction of neighbouring points.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
Unmanned aerial vehicle-based light detection and ranging (UAV-LiDAR) can quickly acquire three-dimensional information over large areas of vegetation and has been widely used in tree species classification.

Methods
UAV-LiDAR point clouds of Populus alba, Populus simonii, Pinus sylvestris, and Pinus tabuliformis from 12 sample plots, 2,622 trees in total, were obtained in North China. Training and testing sets were constructed through data pre-processing, individual tree segmentation, feature extraction, and Non-uniform Grid and Farthest Point Sampling (NGFPS), and the four tree species were then classified by two machine learning algorithms and two deep learning algorithms.

Results
Results showed that PointMLP achieved the best accuracy for identification of the tree species (overall accuracy = 96.94%), followed by RF (overall accuracy = 95.62%), SVM (overall accuracy = 94.89%) and PointNet++ (overall accuracy = 85.65%). In addition, the most suitable number of points sampled per tree is between 1,024 and 2,048 when using the NGFPS method in the two deep learning models. Furthermore, the feature elev_percentile_99th has an important influence on tree species classification, and tree species with similar crown structures may lead to a higher misidentification rate.

Discussion
The study underscores the efficiency of PointMLP as a robust and streamlined solution, which offers novel technological support for tree species classification in forestry resource management.
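The farthest-point-sampling step used to thin each segmented tree to a fixed point budget can be sketched as follows. This is a minimal greedy FPS illustration, not the authors' NGFPS implementation, which also involves a non-uniform grid:

```python
import math

def farthest_point_sampling(points, k):
    """Greedy farthest point sampling over a list of (x, y, z) tuples.

    Returns the indices of k points spread out across the cloud:
    each new pick maximizes its distance to the already-picked set.
    """
    selected = [0]                                   # seed with the first point
    dists = [math.dist(p, points[0]) for p in points]
    while len(selected) < k:
        nxt = max(range(len(points)), key=dists.__getitem__)
        selected.append(nxt)
        for i, p in enumerate(points):               # update min-distance to the set
            dists[i] = min(dists[i], math.dist(p, points[nxt]))
    return selected
```

Sampling each tree down to a budget in the 1,024–2,048 range this way preserves the crown outline better than uniform random subsampling, since distant extremities are picked early.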
Lidar point cloud data with classifications – unclassified (1), ground (2), low vegetation (3), medium vegetation (4), high vegetation (5), buildings (6), low point - noise (7), reserved – model keypoint (8), high noise (18). / Données de nuages de points Lidar avec classification : non classifié (1); sol (2); végétation basse (3); végétation moyenne (4); végétation élevée (5); bâtiment (6); point bas – bruit (7); réservé – point de repère (8); bruit élevé (18).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The annotated point clouds were generated to train the weakly supervised semantic segmentation algorithm Semantic Query Network (SQN) to classify point clouds [1]. The dataset covers 16 tiles of airborne LiDAR data over an area of 7.2 km² in Shatin, Hong Kong, China. Eleven tiles were used for training, while five tiles were used for validation. The dataset contains multiple types of construction, including high-rise residential buildings, low-rise village houses, and large public buildings. Green spaces are mainly composed of wooded areas in open spaces (e.g., parks and hills) and planted trees in residential gardens and along nearby roads. Point clouds are classified into ground, buildings, and trees.
The LiDAR data is owned by the Hong Kong government. Please visit the Spatial Data Portal, Survey Division, CEDD (https://sdportal.cedd.gov.hk/#/en/) for more details.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset is captured over the Samford Ecological Research Facility (SERF), which is located within the Samford valley in south east Queensland, Australia. The central point of the dataset is located at coordinates 27.38572°S, 152.877098°E. The Vegetation Management Act 1999 protects the vegetation on this property as it provides a refuge to native flora and fauna that are under increasing pressure caused by urbanization.

The hyperspectral image was acquired by the SPECIM AisaEAGLE II sensor on 2 February 2013. This sensor captures 252 spectral channels ranging from 400.7 nm to 999.2 nm. The last five channels, i.e., channels 248 to 252, are corrupted and can be excluded. The spatial resolution of the hyperspectral data was set to 1 m.

The airborne light detection and ranging (LiDAR) data were captured by the ALTM Leica ALS50-II sensor in 2009, comprising a total of 3,716,157 points in the study area: 2,133,050 first return points, 1,213,712 second return points, 345,736 third return points, and 23,659 fourth return points. The average flight height was 1700 meters and the average point density is two points per square meter. The laser pulse wavelength is 1064 nm with a repetition rate of 126 kHz, an average sample spacing of 0.8 m and a footprint of 0.34 m. The data were collected with up to four returns per pulse and intensity records were supplied for all pulse returns. The nominal vertical accuracy was ±0.15 m at 1 sigma and the measured vertical accuracy was ±0.05 m at 1 sigma. These values were determined from check points located on open clear ground. The measured horizontal accuracy was ±0.31 m at 1 sigma.

The obtained ground LiDAR returns were interpolated and rasterized into a 1 m × 1 m digital elevation model (DEM) provided by the LiDAR contractor, which was produced from the LiDAR ground points and interpolated coastal boundaries. The first returns of the airborne LiDAR sensor were utilized to produce the normalized digital surface model (nDSM) at 1 m spatial resolution using Las2dem. The 1 m spatial resolution intensity image was also produced using Las2dem. This software interpolates the points using triangulated irregular networks (TIN); the TINs were then rasterized into the nDSM and the intensity image with a pixel size of 1 m.

The LiDAR data were classified into "ground" and "non-ground" by the data contractor using algorithms tailored especially for the project area. In areas covered by dense vegetation, less laser pulse energy reaches the ground. Consequently, fewer ground points were available for DEM and nDSM surface interpolation in those areas, so the DEM and the nDSM tend to be less accurate there.

In order to use the datasets, please fulfill the following three requirements:
1) Giving an acknowledgement as follows:
The authors gratefully acknowledge TERN AusCover and the Remote Sensing Centre, Department of Science, Information Technology, Innovation and the Arts, QLD for providing the hyperspectral and LiDAR data, respectively.
Airborne lidar data are from http://www.auscover.org.au/xwiki/bin/view/Product+pages/Airborne+Lidar
Airborne hyperspectral data are from http://www.auscover.org.au/xwiki/bin/view/Product+pages/Airborne+Hyperspectral
2) Using the following license for LiDAR and hyperspectral data:
http://creativecommons.org/licenses/by/3.0/3) This dataset was made public by Dr. Pedram Ghamisi from German Aerospace Center (DLR) and Prof. Stuart Phinn from the University of Queensland. Please cite: In WORD:Pedram Ghamisi and Stuart Phinn, Fusion of LiDAR and Hyperspectral Data, Figshare, December 2015, https://dx.doi.org/10.6084/m9.figshare.2007723.v3In LaTex:@article{Ghamisi2015,author = "Pedram Ghamisi and Stuart Phinn",title = "{Fusion of LiDAR and Hyperspectral Data}",journal={Figshare},year = {2015},month = {12},url = "10.6084/m9.figshare.2007723.v3",
}
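The TIN-then-rasterize step performed by Las2dem can be illustrated with generic tools. The following is a minimal sketch, not the contractor's actual pipeline: it uses scipy's Delaunay-based linear interpolation (the same TIN technique) on synthetic "ground return" points to build a 1m DEM grid.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic ground-return points: x, y in metres, z a simple plane
# (real values would come from the classified LAS ground points).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))
# Add the tile corners so every 1 m cell centre lies inside the convex hull.
xy = np.vstack([xy, [[0, 0], [0, 10], [10, 0], [10, 10]]])
z = xy[:, 0] + 2 * xy[:, 1]          # planar surface, easy to verify

# 1 m x 1 m raster cell centres, as in the delivered DEM.
gx, gy = np.meshgrid(np.arange(0.5, 10, 1.0), np.arange(0.5, 10, 1.0))

# method="linear" triangulates the points (a TIN) and samples each cell.
dem = griddata(xy, z, (gx, gy), method="linear")
```

Because the synthetic surface is a plane, barycentric interpolation on the TIN reproduces it exactly, which makes the sketch easy to sanity-check.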
Binary point-cloud data were produced for the Chandeleur Islands, Louisiana, from remotely sensed, geographically referenced elevation measurements collected by Leading Edge Geomatics (LEG) using a Leica Chiroptera II Bathymetric and Topographic Sensor. Dewberry reports that the nominal pulse spacing for this project was 1 point every 0.7 meters. Dewberry used proprietary procedures to classify the LAS according to project specifications: 0-Never Classified, 1-Unclassified, 2-Ground (includes model key point bit for points identified as Model Key Point), 7-Low Noise, 17-Bridges, 18-High Noise, 40-Bathymetric point or submerged topography (includes model key point bit for points identified as Model Key Point), 41-Water Surface, and 42-Derived water surface.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The B4 Lidar Project collected lidar point cloud data of the southern San Andreas and San Jacinto Faults in southern California. Data acquisition and processing were performed by the National Center for Airborne Laser Mapping (NCALM) in partnership with the USGS and Ohio State University through funding from the EAR Geophysics program at the National Science Foundation (NSF). Optech International contributed the ALTM3100 laser scanner system. UNAVCO and SCIGN assisted in GPS ground control and continuous high rate GPS data acquisition. A group of volunteers from USGS, UCSD, UCLA, Caltech and private industry, as well as gracious landowners along the fault zones, also made the project possible. If you utilize the B4 data for talks, posters or publications, we ask that you acknowledge the B4 project. The B4 logo can be downloaded here.
A new reprocessed (classified) version of this dataset is here:
Publications associated with this dataset can be found at NCALM's Data Tracking Center
Single photon light detection and ranging (SPL LiDAR) is an active remote sensing technology for:
- mapping vegetation aspects including cover, density and height
- representing the earth's terrain and elevation contours
We acquired SPL data on an airborne acquisition platform under leaf-on conditions to support Forest Resources Inventory (FRI) development.
FRI provides:
- information to support resource management planning and land use decisions within Ontario’s Managed Forest Zone
- information on tree species, density, heights, ages and distribution
The SPL data have a minimum point density of 25 points/m². Each point represents the height of objects such as:
- ground-level terrain points
- heights of vegetation
- buildings
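The quoted minimum density (25 points/m²) can be checked per square metre with a 2-D histogram over the point coordinates. This is a sketch over hypothetical numpy coordinate arrays, not the actual SPL tiles:

```python
import numpy as np

# Hypothetical point coordinates (metres) for a 2 m x 2 m patch:
# 25 points placed inside each of the four 1 m cells.
cells = [(0, 0), (0, 1), (1, 0), (1, 1)]
rng = np.random.default_rng(1)
x = np.concatenate([cx + rng.uniform(0, 1, 25) for cx, _ in cells])
y = np.concatenate([cy + rng.uniform(0, 1, 25) for _, cy in cells])

# Count returns in 1 m x 1 m bins; each count is points per square metre.
density, _, _ = np.histogram2d(x, y, bins=[np.arange(0, 3), np.arange(0, 3)])
```

The minimum of `density` over all cells is the figure a delivery specification like "25 pts/m² minimum" refers to.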
The LiDAR was classified according to the Ontario LiDAR classification scheme. Low, medium and high vegetation points are assigned to classes 3, 4, 5 and 12.
The FRI SPL products include the following digital elevation models:
- digital terrain model
- canopy height model
- digital surface model
- intensity model (signal width to return ratio)
- forest inventory raster metrics
- forest inventory attributes
- predicted streams
- hydro break lines
- block control points
LiDAR fMVA data supports developing detailed 3D analysis of:
- forest inventory
- terrain
- hydrology
- infrastructure
- transportation
We made significant investments in Single Photon LiDAR data, now available on the Open Data Catalogue. Derivatives are available for streaming or through download.
The map reflects areas with LiDAR data available for download. Zoom in to see data tiles and download options. Select individual tiles to download the data.
You can download:
- classified point cloud data, which can also be downloaded in .laz format
- derivatives in a compressed .tiff format
- Forest Resource Inventory leaf-on LiDAR Tile Index (Download: Shapefile | File Geodatabase | GeoPackage)
Web raster services
You can access the data through our web raster services. For more information and tutorials, read the Ontario Web Raster Services User Guide.
If you have questions about how to use the Web raster services, email Geospatial Ontario (GEO) at geospatial@ontario.ca.
Note: Internal Users replace “https://ws.” with “https://intra.ws.”
- CHM - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_CHM_SPL/ImageServer
- DSM - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_DSM_SPL/ImageServer
- DTM - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Elevation/FRI_DTM_SPL/ImageServer
- T1 Imagery - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/AerialImagery/FRI_Imagery_T1/ImageServer
- T2 Imagery - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/AerialImagery/FRI_T2_Imagery/ImageServer
- Land Cover - https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/Thematic/Ontario_Land_Cover_Compilation_v2/ImageServer
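These endpoints follow the standard ArcGIS REST API for image services, so a raster extent can be requested through the `exportImage` operation. The sketch below only builds the request URL; the bounding-box coordinates are illustrative, and no request is actually sent:

```python
from urllib.parse import urlencode

def export_image_url(service_url, bbox, size=(512, 512), fmt="tiff"):
    """Build an ArcGIS REST exportImage request URL for an ImageServer."""
    params = {
        "bbox": ",".join(str(v) for v in bbox),  # xmin,ymin,xmax,ymax
        "size": f"{size[0]},{size[1]}",          # output width,height in px
        "format": fmt,
        "f": "image",                            # return the raster itself
    }
    return f"{service_url}/exportImage?{urlencode(params)}"

chm = ("https://ws.geoservices.lrc.gov.on.ca/arcgis5/rest/services/"
       "Elevation/FRI_CHM_SPL/ImageServer")
url = export_image_url(chm, bbox=(500000, 4800000, 501000, 4801000))
```

The resulting URL can be opened in a browser or fetched with any HTTP client to retrieve a canopy height model subset.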
Service Endpoint
https://services1.arcgis.com/TJH5KDher0W13Kgo/arcgis/rest/services/FRI_Data_Access/FeatureServer
Additional Documentation
Forest Resources Inventory | ontario.ca
Status
Ongoing: data is being continually updated
Maintenance and Update Frequency
As needed: data is updated as deemed necessary
Contact
Natural Resources Information Unit, Forest Resources Inventory Program, FRI@ontario.ca
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
The Delaware River Basin (DRB) covers portions of five states (Delaware, Maryland, New Jersey, New York, and Pennsylvania) and several geologic provinces, encompassing much of the complex geology of the Mid-Atlantic region. This data release focuses on the recently glaciated northern DRB, which includes portions of New Jersey, New York, and Pennsylvania. Groundwater storage is conceptualized to be greatest in the glacial surficial aquifers in the upper part of the basin, thus characterization of this critical zone is of primary importance for USGS Next Generation Water Observing System (NGWOS) modeling of baseflow to the upper Delaware River. In support of this effort, we trained four deep learning models to classify surficial materials in unique physiographic areas of the northern DRB, using previously published surficial geologic maps as training data. First, we compiled existing digital surficial geologic map data at various scales (1:100,000 to 1:24,000), with high-resolution ...
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The goal of this work was to define and map a set of repeating ecohydrological classes on Calvert and Hecate Islands using remote sensing data and an unsupervised classification technique. The resulting map provides a new tool for characterizing the extent and internal properties of different ecosystem classes, for stratifying future study designs, and for assessing the influence of terrestrial landscape characteristics on watershed processes.
"Traditionally, forest inventory and ecosystem mapping at local and regional scales rely on manual interpretation of aerial photographs, based on standardized, expert-driven classification schemes. These current approaches provide the information needed to manage forest ecosystems but limit the thematic and spatial resolution of mapping and are rarely repeated. The objective of this research was to demonstrate the utility of an unsupervised quantitative technique based on LiDAR (Light Detection And Ranging) data and multispectral satellite imagery for mapping ecosystems at the local scale across a heterogeneous landscape of forested and non-forested ecosystems. We derived a range of metrics characterizing local terrain and vegetation from LiDAR and RapidEye imagery for Calvert and Hecate Islands, British Columbia. These metrics were used in a cluster analysis to classify and quantitatively characterize the ecological units of the island. In total, 18 clusters were derived. The clusters were attributed with quantitative summary statistics from the remote sensing data inputs and contextualized by comparison with ecological units delineated by a traditional, expert-driven mapping method using aerial photographs. The 18 clusters describe ecosystems ranging from open shrub areas to dense, productive forest, and include a riparian zone and many wetter and wetland ecosystems. The clusters provide detailed, spatially explicit information for characterizing the landscape as a mosaic of units defined by topography and vegetation structure.
This study demonstrates that using diverse types of remote sensing data in a quantitative classification can provide scientists and managers with multivariate information distinct from that produced by traditional expert-based ecosystem mapping methods." - Abstract of Thompson et al. 2016.
A full explanation of the methods is available in Thompson et al. 2016, a data-driven regionalization of forested and non-forested ecosystems of coastal British Columbia using LiDAR and RapidEye imagery. The manuscript is available here: Thompson et al. 2016
A small number of data voids were present in the 2012 LiDAR coverage and were excluded from the analysis. Although the voids have since been filled by new LiDAR data acquired in 2014, the new data were not included in the analysis of Thompson et al. Other "gaps" in the spatial coverage of the final map result from the exclusion of non-vegetated areas (according to the Normalized Difference Vegetation Index (NDVI) and the provincial Freshwater Atlas (FWA): http://geobc.gov.bc.ca/base-mapping/atlas/fwa/index.html). Besides small waterbodies, these non-vegetated areas include a few small high-elevation areas that were snow-covered at the time of the RapidEye image acquisition.
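The cluster-analysis step described above can be illustrated with a minimal k-means implementation. This is a toy sketch, not the authors' actual method: Thompson et al. derived 18 clusters from real LiDAR/RapidEye metrics, whereas here two well-separated synthetic "metric" groups (e.g. canopy height and slope) are clustered with k=2.

```python
import numpy as np

def kmeans(data, k, iters=20):
    """Minimal k-means: returns (centroids, labels)."""
    # Deterministic init: evenly strided samples, one per expected cluster.
    centroids = data[:: max(1, len(data) // k)][:k].copy()
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its members.
        for j in range(k):
            members = data[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Toy per-pixel "metrics": two separated groups of 50 samples each.
rng = np.random.default_rng(1)
low = rng.normal([1.0, 1.0], 0.1, (50, 2))    # e.g. shrubby, flat
high = rng.normal([10.0, 8.0], 0.1, (50, 2))  # e.g. tall, steep
data = np.vstack([low, high])
centroids, labels = kmeans(data, k=2)
```

In the real workflow each sample would be a pixel's vector of terrain and vegetation metrics, and the resulting labels form the mapped ecological units.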
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Lidar point cloud data with classifications – unclassified (1), ground (2), low vegetation (3), medium vegetation (4), high vegetation (5), buildings (6), low point - noise (7), reserved – model keypoint (8), high noise (18).
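Filtering a classified point cloud down to one of these codes is a one-line mask in most tools. A numpy sketch with synthetic class codes (with a LAS reader such as laspy, the classification array would come from the file instead):

```python
import numpy as np

# Synthetic per-point classification codes, using the codes listed above
# (2 = ground, 5 = high vegetation, 6 = buildings, 18 = high noise).
classification = np.array([1, 2, 2, 5, 6, 2, 18, 4, 2])
z = np.array([5.0, 1.2, 1.1, 9.8, 7.5, 1.3, 40.0, 4.2, 1.2])

ground = z[classification == 2]       # keep only ground returns (for a DEM)
noise_free = z[classification != 18]  # drop high-noise points
```

The same masks extend to any per-point attribute (intensity, return number, GPS time), since LAS attributes are parallel arrays of equal length.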
Statewide 2016 Lidar points colorized with 2018 NAIP imagery as a scene created by Esri using ArcGIS Pro for the entire State of Connecticut. This service provides the colorized Lidar points in interactive 3D for visualization, interaction, and the ability to make measurements without downloading. Lidar is referenced at https://cteco.uconn.edu/data/lidar/ and can be downloaded at https://cteco.uconn.edu/data/download/flight2016/. Metadata: https://cteco.uconn.edu/data/flight2016/info.htm#metadata.
The Connecticut 2016 Lidar was captured between March 11, 2016 and April 16, 2016. It covers 5,240 sq miles and is divided into 23,381 tiles. It was acquired by the Capitol Region Council of Governments with funding from multiple state agencies. It was flown and processed by Sanborn. The delivery included classified point clouds and 1 meter QL2 DEMs. The 2016 Lidar is published on the Connecticut Environmental Conditions Online (CT ECO) website. CT ECO is the collaborative work of the Connecticut Department of Energy and Environmental Protection (DEEP) and the University of Connecticut Center for Land Use Education and Research (CLEAR) to share environmental and natural resource information with the general public. CT ECO's mission is to encourage, support, and promote informed land use and development decisions in Connecticut by providing local, state and federal agencies, and the public with convenient access to the most up-to-date and complete natural resource information available statewide.
Process used:
Extract Building Footprints from Lidar
1. Prepare Lidar
- Download 2016 Lidar from CT ECO.
- Create LAS Dataset.
2. Extract Building Footprints from Lidar
- Use the LAS Dataset in the Classify LAS Building tool in ArcGIS Pro 2.4.
Colorize Lidar
Colorizing the Lidar points means that each point in the point cloud is given a color based on the imagery color value at that exact location.
1. Prepare Imagery
- Acquire 2018 NAIP tif tiles from UConn (originally from USDA NRCS).
- Create mosaic dataset of the NAIP imagery.
2. Prepare and Analyze Lidar Points
- Change the coordinate system of each of the lidar tiles to the Projected Coordinate System CT NAD 83 (2011) Feet (EPSG 6434). This is because the downloaded tiles come into ArcGIS as a Custom Projection which cannot be published as a Point Cloud Scene Layer Package.
- Convert Lidar to zlas format and rearrange.
- Create LAS Datasets of the lidar tiles.
- Colorize Lidar using the Colorize LAS tool in ArcGIS Pro.
- Create a new LAS dataset with a division of Eastern half and Western half due to the size limitation of 500GB per scene layer package.
- Create scene layer packages (.slpk) using Create Point Cloud Scene Layer Package.
- Load each package to ArcGIS Online using Share Package.
- Publish on ArcGIS.com and delete the scene layer package to save storage cost.
Additional layers added (visit https://cteco.uconn.edu/projects/lidar3D/layers.htm for a complete list and links):
- 3D Buildings and Trees extracted by Esri from the lidar
- Shaded Relief from CT ECO
- Impervious Surface 2012 from CT ECO
- NAIP Imagery 2018 from CT ECO
- Contours (2016) from CT ECO
- Lidar 2016 Download Link derived from https://www.cteco.uconn.edu/data/download/flight2016/index.htm
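The colorize step, assigning each point the imagery color at its location, reduces to a nearest-pixel lookup given the raster's origin and cell size. A minimal numpy sketch of that idea (the actual work was done with the Colorize LAS tool in ArcGIS Pro; the toy image and points here are made up):

```python
import numpy as np

def colorize_points(x, y, rgb, origin, cell):
    """Look up the RGB value under each point.

    rgb    : (rows, cols, 3) image array
    origin : (x0, y0) of the raster's upper-left corner
    cell   : pixel size, in the same units as x and y
    """
    cols = ((x - origin[0]) // cell).astype(int)
    rows = ((origin[1] - y) // cell).astype(int)  # row index grows downward
    return rgb[rows, cols]

# 2x2 toy image with 1-unit pixels, upper-left corner at (0, 10).
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 0]]], dtype=np.uint8)
x = np.array([0.4, 1.6, 0.2])
y = np.array([9.5, 9.5, 8.3])
colors = colorize_points(x, y, rgb, origin=(0.0, 10.0), cell=1.0)
```

Each returned row is the RGB triple of the pixel directly under the corresponding point, which is exactly what gets written into the LAS red/green/blue attributes.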
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
LiDAR_Point_Clouds, Classified. The data have been processed to conform to the Australian Height Datum (AHD) and converted from files collected as swaths into tiles of data. The file format is LAS.
The following table contains the LAS classification codes as defined in the LAS 1.1 standard:

Class code | Classification type
0 | Never classified
1 | Unassigned
2 | Ground
3 | Low vegetation
4 | Medium vegetation
5 | High vegetation
6 | Building
7 | Noise
8 | Model key
9 | Water
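In code, the table maps naturally to a lookup. A short sketch hard-coding the LAS 1.1 codes above:

```python
# LAS 1.1 classification codes, as listed in the table above.
LAS11_CLASSES = {
    0: "Never classified",
    1: "Unassigned",
    2: "Ground",
    3: "Low vegetation",
    4: "Medium vegetation",
    5: "High vegetation",
    6: "Building",
    7: "Noise",
    8: "Model key",
    9: "Water",
}

def class_name(code):
    """Human-readable name for a LAS 1.1 class code."""
    return LAS11_CLASSES.get(code, f"Reserved ({code})")
```

Codes outside the table are reported as reserved rather than raising, since later LAS versions and vendor conventions define additional codes.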
Lineage: Fugro Spatial Solutions (FSS) were awarded a contract by Geoscience Australia to carry out an Aerial LiDAR Survey over the Kakadu National Park. The data will be used to examine the potential impacts of climate change and sea level rise on the West Alligator, South Alligator, East Alligator River systems and other minor areas. The project area was flight planned using parameters as specified. A FSS aircraft and aircrew were mobilised to site and the project area was captured using a Leica ALS60 system positioned using a DGPS base-station at Darwin airport. The Darwin base-station was positioned by DGPS observations from local control stations. A ground control survey was carried out by FSS surveyors to determine ground positions and heights for control and check points throughout the area. All data was returned to FSS office in Perth and processed. The deliverable datasets were generated and supplied to Geoscience Australia with this metadata information.
NEDF Metadata
Acquisition Start Date: Saturday, 22 October 2011
Acquisition End Date: Wednesday, 16 November 2011
Sensor: LiDAR
Device Name: Leica ALS60 (S/N: 6145)
Flying Height (AGL): 1409
INS/IMU Used: uIRS-56024477
Number of Runs: 468
Number of Cross Runs: 28
Swath Width: 997
Flight Direction: Non-Cardinal
Swath (side) Overlap: 20
Horizontal Datum: GDA94
Vertical Datum: AHD71
Map Projection: MGA53
Description of Aerotriangulation Process Used: Not Applicable
Description of Rectification Process Used: Not Applicable
Spatial Accuracy Horizontal: 0.8
Spatial Accuracy Vertical: 0.3
Average Point Spacing (per/sqm): 2
Laser Return Types: 4 pulses (1st, 2nd, 3rd, 4th and intensity)
Data Thinning: None
Laser Footprint Size: 0.32
Calibration certification (Manufacturer/Cert. Company): Leica
Limitations of the Data: To project specification
Surface Type: Various
Product Type: Other
Classification Type: C0
Grid Resolution: 2
Distribution Format: Other
Processing/Derivation Lineage: Capture, Geodetic Validation
WMS: Not Applicable