Tracking an animal's location from video has many applications, from providing information on health and welfare to validating sensor-based technologies. Typically, accurate location estimation from video is achieved using cameras with overhead (top-down) views, but structural and financial limitations may require mounting cameras at other angles. We describe a user-friendly solution to manually extract an animal's location from non-overhead video. Our method uses QGIS, an open-source geographic information system, to: (1) assign facility-based coordinates to pixel coordinates in non-overhead frames; (2) use the referenced coordinates to transform the non-overhead frames to an overhead view; and (3) determine facility-based x, y coordinates of animals from the transformed frames. Using this method, we could determine an object's facility-based x, y coordinates with an accuracy of 0.13 ± 0.09 m (mean ± SD; range: 0.01–0.47 m) when compared to the ground truth (coordinates manually recorded...). Please see the description in the associated research publication and the included README file.
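To illustrate steps (1) through (3) outside the QGIS interface, here is a minimal sketch using GDAL's Python bindings; the frame filename, pixel picks, and facility coordinates are hypothetical placeholders, and the thin plate spline warp stands in for the transformation the publication performs in QGIS:

from osgeo import gdal

# Step 1: tie pixel/line positions in the non-overhead frame to
# facility-based x, y coordinates in metres (all values are made up).
# Signature: gdal.GCP(x, y, z, pixel, line)
gcps = [
    gdal.GCP(0.0, 0.0, 0.0,  412.0, 903.0),
    gdal.GCP(6.0, 0.0, 0.0, 1510.0, 918.0),
    gdal.GCP(6.0, 4.0, 0.0, 1305.0, 355.0),
    gdal.GCP(0.0, 4.0, 0.0,  598.0, 341.0),
]

# Step 2: attach the control points to the frame, then warp it to an
# overhead view; a thin plate spline (tps) needs well-spread points.
gdal.Translate("frame_gcps.tif", "frame.png", GCPs=gcps)
gdal.Warp("frame_overhead.tif", "frame_gcps.tif", tps=True)

# Step 3: a pixel (col, row) picked in the overhead image converts to
# facility x, y through the raster's geotransform.
gt = gdal.Open("frame_overhead.tif").GetGeoTransform()
col, row = 820, 410                      # hypothetical animal position
x = gt[0] + col * gt[1] + row * gt[2]
y = gt[3] + col * gt[4] + row * gt[5]

More control points generally improve the fit; the accuracy figures above come from the authors' QGIS workflow, not from this sketch.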
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Land cover change map of the Kef-Siliana region, 2017-2022. The change map was produced with the SCP (Semi-Automatic Classification Plugin) in QGIS version 3.28.15. Source data coordinate system: Universal Transverse Mercator (UTM), WGS84. Service coordinate system: Web Mercator Auxiliary Sphere, WGS84 (EPSG:3857). Cell size: 10 m. Data key (class in 2017 → class in 2022): 1 Water → Water; 2 Water → Forest; 3 Water → Flooded vegetation; 4 Water → Crop; 5 Water → Built Area; 6 Water → Bare land; 7 Water → Rangeland; 8 Forest → Water; 9 Forest → Forest; 10 Forest → Flooded vegetation; 11 Forest → Crop; 12 Forest → Built Area; 13 Forest → Bare land; 14 Forest → Rangeland; 15 Flooded vegetation → Water; 16 Flooded vegetation → Forest; 17 Flooded vegetation → Flooded vegetation; 18 Flooded vegetation → Crop; 19 Flooded vegetation → Built Area; 20 Flooded vegetation → Bare land; 21 Flooded vegetation → Rangeland; 22 Crop → Water; 23 Crop → Forest; 24 Crop → Flooded vegetation; 25 Crop → Crop; 26 Crop → Built Area; 27 Crop → Bare land; 28 Crop → Rangeland; 29 Built Area → Water; 30 Built Area → Forest; 31 Built Area → Flooded vegetation; 32 Built Area → Crop; 33 Built Area → Built Area; 34 Built Area → Bare land; 35 Built Area → Rangeland; 36 Bare land → Water; 37 Bare land → Forest; 38 Bare land → Flooded vegetation; 39 Bare land → Crop; 40 Bare land → Built Area; 41 Bare land → Bare land; 42 Bare land → Rangeland; 43 Rangeland → Water; 44 Rangeland → Forest; 45 Rangeland → Flooded vegetation; 46 Rangeland → Crop; 47 Rangeland → Built Area; 48 Rangeland → Bare land; 49 Rangeland → Rangeland.
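The 49 codes follow a regular pattern: with the seven classes ordered Water (1) through Rangeland (7), code = (from_class − 1) × 7 + to_class. A small Python helper (hypothetical, not shipped with the dataset) to decode a cell value:

CLASSES = ("Water", "Forest", "Flooded vegetation", "Crop",
           "Built Area", "Bare land", "Rangeland")

def decode_change(code: int) -> tuple[str, str]:
    """Return the (2017 class, 2022 class) pair for a change code 1-49."""
    if not 1 <= code <= 49:
        raise ValueError(f"change code out of range: {code}")
    from_idx, to_idx = divmod(code - 1, 7)
    return CLASSES[from_idx], CLASSES[to_idx]

print(decode_change(12))   # ('Forest', 'Built Area'), matching the key above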
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Python script (Shape2DJI_Pilot_KML.py) scans a directory, finds all ESRI shapefiles (.shp), reprojects them to EPSG:4326 (geographic coordinate system, WGS84 ellipsoid), creates an output directory, and writes a new Keyhole Markup Language (.kml) file for every line or polygon found in the files. These new *.kml files are compatible with DJI Pilot 2 on the Smart Controller (e.g., for the M300 RTK). The *.kml files created directly by ArcGIS or QGIS are not currently compatible with DJI Pilot.
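A condensed sketch of that workflow is shown below, written with geopandas as an assumption; the actual script's internals may differ, and the directory names are placeholders:

# Hypothetical re-implementation of the Shape2DJI_Pilot_KML.py workflow.
import pathlib

import fiona
import geopandas as gpd

fiona.supported_drivers["KML"] = "rw"   # OGR's KML driver is off by default in fiona

src_dir = pathlib.Path("shapefiles")    # directory to scan (placeholder)
out_dir = src_dir / "kml_output"
out_dir.mkdir(exist_ok=True)

for shp in sorted(src_dir.glob("*.shp")):
    gdf = gpd.read_file(shp).to_crs(epsg=4326)   # reproject to WGS84 lon/lat
    for i in range(len(gdf)):
        feat = gdf.iloc[[i]]                     # single-feature GeoDataFrame
        if feat.geometry.iloc[0].geom_type in (
            "LineString", "MultiLineString", "Polygon", "MultiPolygon"
        ):
            # one KML per line/polygon, named after the source shapefile
            feat.to_file(out_dir / f"{shp.stem}_{i}.kml",
                         driver="KML", engine="fiona")

Writing one feature per file mirrors the script's stated behaviour, since DJI Pilot 2 imports each boundary or route as its own KML.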
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. Complete coverage of the Canadian territory is gradually being established. The product includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM), and other derived data. For DTM datasets, the derived data available are slope, aspect, shaded relief, color relief, and color shaded relief maps; for DSM datasets, the derived data available are shaded relief, color relief, and color shaded relief maps. The productive forest line is used to separate the northern and southern parts of the country; this line is approximate and may change based on requirements.

In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system in the corresponding zones. Datasets at a 1 m resolution cover an area of 10 km x 10 km, while datasets at a 2 m resolution cover an area of 20 km x 20 km.

In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets are derived from optical digital images. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to the WGS84 horizontal datum or the UTM NAD83 (CSRS) coordinate system, and each dataset covers an area of 50 km x 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data; in that case, they follow the same specifications as the LiDAR-derived products in the southern part of the country.

The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), now the reference standard for heights across Canada. Source data for HRDEM datasets are acquired through multiple projects with different partners. Because data are acquired project by project, no integration or edge-matching is done between projects; tiles are aligned within each project. The HRDEM product is part of the CanElevation Series, created in support of the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor in the success of the National Elevation Data Strategy; refer to the "Supporting Document" section for the list of partners, including links to their respective data.