4 datasets found
  1. Data from: Estimating animal location from non-overhead camera views

    • search.dataone.org
    • data.niaid.nih.gov
    • +2 more
    Updated Nov 30, 2023
    Cite
    Jocelyn M. Woods; Sarah J. J. Adcock (2023). Estimating animal location from non-overhead camera views [Dataset]. http://doi.org/10.5061/dryad.rr4xgxddm
    Explore at:
    Dataset updated
    Nov 30, 2023
    Dataset provided by
    Dryad Digital Repository
    Authors
    Jocelyn M. Woods; Sarah J. J. Adcock
    Time period covered
    Jan 1, 2023
    Description

    Tracking an animal's location from video has many applications, from providing information on health and welfare to validating sensor-based technologies. Typically, accurate location estimation from video is achieved using cameras with overhead (top-down) views, but structural and financial limitations may require mounting cameras at other angles. We describe a user-friendly solution to manually extract an animal's location from non-overhead video. Our method uses QGIS, an open-source geographic information system, to: (1) assign facility-based coordinates to pixel coordinates in non-overhead frames; (2) use the referenced coordinates to transform the non-overhead frames to an overhead view; and (3) determine facility-based x, y coordinates of animals from the transformed frames. Using this method, we could determine an object's facility-based x, y coordinates with an accuracy of 0.13 ± 0.09 m (mean ± SD; range: 0.01–0.47 m) when compared to the ground truth (coordinates manually recorded...). Please see the description in the associated research publication and the included README file.
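
    The steps above are carried out manually in QGIS. Purely as an illustration of the underlying pixel-to-overhead idea (not the authors' workflow), the same transform can be expressed as a planar homography from four reference points; every coordinate value below is hypothetical.

```python
# Illustrative sketch only, not the authors' QGIS method: map pixel coordinates
# from a non-overhead frame to facility-based x, y coordinates via a homography.
import numpy as np
import cv2

# Four reference points identified in the video frame (pixel coordinates) and
# their known facility-based coordinates in metres (all values hypothetical).
pixel_pts = np.float32([[412, 655], [1490, 630], [1705, 980], [230, 1010]])
facility_pts = np.float32([[0.0, 0.0], [6.0, 0.0], [6.0, 4.0], [0.0, 4.0]])

H = cv2.getPerspectiveTransform(pixel_pts, facility_pts)

# Convert a marked animal position (pixels) to facility-based x, y (metres).
animal_px = np.float32([[[900.0, 820.0]]])          # shape (1, 1, 2)
animal_xy = cv2.perspectiveTransform(animal_px, H)
print(animal_xy.ravel())
```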

  2. Land Cover change in Kef-Siliana region (2017-2022)

    • data.mel.cgiar.org
    csv, tiff
    Updated Jun 8, 2025
    Cite
    Zahra Shiri; Aymen Frija; Hichem Rejeb; Hassen Ouerghemmi; Quang Bao Le (2025). Land Cover change in Kef-Siliana region (2017-2022) [Dataset]. https://data.mel.cgiar.org/dataset.xhtml?persistentId=hdl:20.500.11766.1/FK2/YUXPQY
    Explore at:
    Available download formats: csv(2106), csv(269), tiff(14526837), csv(808), csv(143)
    Dataset updated
    Jun 8, 2025
    Dataset provided by
    MELDATA
    Authors
    Zahra Shiri; Aymen Frija; Hichem Rejeb; Hassen Ouerghemmi; Quang Bao Le
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2022 - Jan 1, 2024
    Area covered
    Siliana, Kef, Tunisia
    Dataset funded by
    CGIAR: http://cgiar.org/
    Description

    Land cover change map of the Kef-Siliana region between 2017 and 2022. This change map was produced with the SCP plugin in QGIS version 3.28.15. Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84. Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857). Cell Size: 10 meters. Data Key:
    1 from Water to Water; 2 from Water to Forest; 3 from Water to Flooded vegetation; 4 from Water to Crop; 5 from Water to Built Area; 6 from Water to Bare land; 7 from Water to Rangeland;
    8 from Forest to Water; 9 from Forest to Forest; 10 from Forest to Flooded vegetation; 11 from Forest to Crop; 12 from Forest to Built Area; 13 from Forest to Bare land; 14 from Forest to Rangeland;
    15 from Flooded vegetation to Water; 16 from Flooded vegetation to Forest; 17 from Flooded vegetation to Flooded vegetation; 18 from Flooded vegetation to Crop; 19 from Flooded vegetation to Built Area; 20 from Flooded vegetation to Bare land; 21 from Flooded vegetation to Rangeland;
    22 from Crop to Water; 23 from Crop to Forest; 24 from Crop to Flooded vegetation; 25 from Crop to Crop; 26 from Crop to Built Area; 27 from Crop to Bare land; 28 from Crop to Rangeland;
    29 from Built Area to Water; 30 from Built Area to Forest; 31 from Built Area to Flooded vegetation; 32 from Built Area to Crop; 33 from Built Area to Built Area; 34 from Built Area to Bare land; 35 from Built Area to Rangeland;
    36 from Bare land to Water; 37 from Bare land to Forest; 38 from Bare land to Flooded vegetation; 39 from Bare land to Crop; 40 from Bare land to Built Area; 41 from Bare land to Bare land; 42 from Bare land to Rangeland;
    43 from Rangeland to Water; 44 from Rangeland to Forest; 45 from Rangeland to Flooded vegetation; 46 from Rangeland to Crop; 47 from Rangeland to Built Area; 48 from Rangeland to Bare land; 49 from Rangeland to Rangeland.
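
    The key above follows a regular pattern (change code = 7 × from-class index + to-class index + 1, with classes in the order listed). A small helper, offered only as a convenience sketch with hypothetical function names, makes the codes easier to work with programmatically:

```python
# Helper for the data key above: change code = 7 * from_index + to_index + 1,
# with classes in the order used by the key. Function names are hypothetical.
CLASSES = ["Water", "Forest", "Flooded vegetation", "Crop",
           "Built Area", "Bare land", "Rangeland"]

def change_code(from_class: str, to_class: str) -> int:
    """Return the raster value encoding a 'from_class to to_class' transition."""
    return 7 * CLASSES.index(from_class) + CLASSES.index(to_class) + 1

def decode(code: int) -> tuple[str, str]:
    """Return the (from_class, to_class) pair encoded by a raster value 1-49."""
    i, j = divmod(code - 1, 7)
    return CLASSES[i], CLASSES[j]

assert change_code("Crop", "Built Area") == 26  # key: 26 from Crop to Built Area
assert decode(44) == ("Rangeland", "Forest")    # key: 44 from Rangeland to Forest
```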

  3. Shapefile to DJI Pilot KML conversion tool

    • borealisdata.ca
    • search.dataone.org
    Updated Jan 30, 2023
    Cite
    Nicolas Cadieux (2023). Shapefile to DJI Pilot KML conversion tool [Dataset]. http://doi.org/10.5683/SP3/W1QMQ9
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 30, 2023
    Dataset provided by
    Borealis
    Authors
    Nicolas Cadieux
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Python script (Shape2DJI_Pilot_KML.py) will scan a directory, find all the ESRI shapefiles (.shp), reproject to EPSG 4326 (geographic coordinate system WGS84 ellipsoid), create an output directory and make a new Keyhole Markup Language (.kml) file for every line or polygon found in the files. These new *.kml files are compatible with DJI Pilot 2 on the Smart Controller (e.g., for M300 RTK). The *.kml files created directly by ArcGIS or QGIS are not currently compatible with DJI Pilot.
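
    The published script itself is the dataset; as a rough sketch of the same workflow under the assumption that GeoPandas and Fiona are available (this is not the published Shape2DJI_Pilot_KML.py, and the directory names are hypothetical), the reproject-and-split step might look like:

```python
# Rough sketch (not the published script): scan a directory for shapefiles,
# reproject to EPSG:4326, and write one KML file per line or polygon feature.
from pathlib import Path
import fiona
import geopandas as gpd

fiona.supported_drivers["KML"] = "rw"        # enable Fiona's KML driver

in_dir = Path("shapefiles")                  # hypothetical input directory
out_dir = in_dir / "kml_output"              # hypothetical output directory
out_dir.mkdir(exist_ok=True)

for shp in in_dir.glob("*.shp"):
    gdf = gpd.read_file(shp).to_crs(epsg=4326)   # WGS84 geographic coordinates
    for i in range(len(gdf)):
        # one KML per feature, as DJI Pilot 2 expects
        out_file = out_dir / f"{shp.stem}_{i}.kml"
        gdf.iloc[[i]].to_file(out_file, driver="KML", engine="fiona")
```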

  4. High Resolution Digital Elevation Model (HRDEM) - CanElevation Series

    • open.canada.ca
    • catalogue.arctic-sdi.org
    esri rest, geotif +5
    Updated Jun 17, 2025
    + more versions
    Cite
    Natural Resources Canada (2025). High Resolution Digital Elevation Model (HRDEM) - CanElevation Series [Dataset]. https://open.canada.ca/data/en/dataset/957782bf-847c-4644-a757-e383c0057995
    Explore at:
    Available download formats: shp, geotif, html, pdf, esri rest, json, kmz
    Dataset updated
    Jun 17, 2025
    Dataset provided by
    Natural Resources Canada
    License

    Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Description

    The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. Complete coverage of the Canadian territory is gradually being established. The product includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and other derived data. For DTM datasets, the derived data available are slope, aspect, shaded relief, color relief and color shaded relief maps; for DSM datasets, the derived data available are shaded relief, color relief and color shaded relief maps. The productive forest line is used to separate the northern and southern parts of the country. This line is approximate and may change based on requirements.

    In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system and the corresponding zones. Datasets at a 1 m resolution cover an area of 10 km x 10 km, while datasets at a 2 m resolution cover an area of 20 km x 20 km.

    In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets have optical digital images as their source data. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to the WGS84 horizontal datum, or the UTM NAD83 (CSRS) coordinate system. Each dataset covers an area of 50 km x 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data; in that case, the products are generated with the same specifications as those derived from airborne LiDAR in the southern part of the country.

    The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is now the reference standard for heights across Canada. Source data for HRDEM datasets is acquired through multiple projects with different partners. Since data is acquired by project, no integration or edgematching is done between projects; tiles are aligned only within each project. The HRDEM product is part of the CanElevation Series, created in support of the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor in the success of this strategy. Refer to the "Supporting Document" section for the list of partners, including links to their respective data.
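
    A quick way to confirm the coordinate system, cell size and vertical reference described above on a downloaded HRDEM tile, assuming rasterio is installed (the filename is hypothetical):

```python
# Inspect a downloaded HRDEM GeoTIFF tile (hypothetical filename) with rasterio.
import rasterio

with rasterio.open("hrdem_dtm_1m_tile.tif") as src:
    print(src.crs)       # e.g. a NAD83(CSRS) UTM zone for southern, LiDAR-derived tiles
    print(src.res)       # cell size: (1.0, 1.0) or (2.0, 2.0) metres
    print(src.bounds)    # tile extent, e.g. a 10 km x 10 km block at 1 m resolution
    dtm = src.read(1)    # elevations referenced to CGVD2013
    print(float(dtm.min()), float(dtm.max()))
```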

