53 datasets found
1. Downsized camera trap images for automated classification

    • data.niaid.nih.gov
    • zenodo.org
    Updated Dec 1, 2022
Cite
Ewers, Robert M (2022). Downsized camera trap images for automated classification [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6627706
    Dataset updated
    Dec 1, 2022
    Dataset provided by
    Wearne, Oliver R
    Ewers, Robert M
    Heon, Sui P
    Norman, Danielle L
    Chapman, Philip M
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Description: Downsized (256x256) camera trap images used for the analyses in "Can CNN-based species classification generalise across variation in habitat within a camera trap survey?", and the dataset composition for each analysis. Note that images tagged as 'human' have been removed from this dataset. Full-size images for the BorneoCam dataset will be made available at LILA.science. The full SAFE camera trap dataset metadata is available at DOI: 10.5281/zenodo.6627707.

Project: This dataset was collected as part of the following SAFE research project: Machine learning and image recognition to monitor spatio-temporal changes in the behaviour and dynamics of species interactions.

Funding: These data were collected as part of research funded by:

NERC (NERC QMEE CDT Studentship, NE/P012345/1, http://gotw.nerc.ac.uk/list_full.asp?pcode=NE%2FP012345%2F1&cookieConsent=A)

This dataset is released under the CC-BY 4.0 licence, requiring that you cite the dataset in any outputs, but has the additional condition that you acknowledge the contribution of these funders in any outputs.

XML metadata: GEMINI compliant metadata for this dataset is available here.

Files: This dataset consists of 3 files: CT_image_data_info2.xlsx, DN_256x256_image_files.zip, DN_generalisability_code.zip

CT_image_data_info2.xlsx: This file contains dataset metadata and 1 data table:

Dataset Images (described in worksheet Dataset_images)
Description: This worksheet details the composition of each dataset used in the analyses
Number of fields: 69
Number of data rows: 270287
Fields:

filename: Root ID (Field type: id)
camera_trap_site: Site ID for the camera trap location (Field type: location)
taxon: Taxon recorded by camera trap (Field type: taxa)
dist_level: Level of disturbance at site (Field type: ordered categorical)
baseline: Label indicating whether the image is included in the baseline training, validation (val) or test set, or not included (NA) (Field type: categorical)
increased_cap: Label indicating whether the image is included in the 'increased cap' training, validation (val) or test set, or not included (NA) (Field type: categorical)
dist_individ_event_level: Label indicating whether the image is included in the 'individual disturbance level datasets split at event level' training, validation (val) or test set, or not included (NA) (Field type: categorical)
dist_combined_event_level_1 to dist_combined_event_level_5: Labels indicating whether the image is included in the training or test set of the 'disturbance level combination analysis split at event level' for the corresponding single disturbance level (1-5), or not included (NA) (Field type: categorical)
dist_combined_event_level_pair_1_2 to dist_combined_event_level_pair_4_5 (all ten pairs of disturbance levels): Labels indicating whether the image is included in the training set of the 'disturbance level combination analysis split at event level' for the corresponding pair of disturbance levels, or not included (NA) (Field type: categorical)
dist_combined_event_level_triple_1_2_3 to dist_combined_event_level_triple_3_4_5 (all ten triples of disturbance levels): Labels indicating whether the image is included in the training set of the 'disturbance level combination analysis split at event level' for the corresponding triple of disturbance levels, or not included (NA) (Field type: categorical)
dist_combined_event_level_quad_1_2_3_4 to dist_combined_event_level_quad_2_3_4_5 (all five quads of disturbance levels): Labels indicating whether the image is included in the training set of the 'disturbance level combination analysis split at event level' for the corresponding quad of disturbance levels, or not included (NA) (Field type: categorical)
dist_combined_event_level_all_1_2_3_4_5: Label indicating whether the image is included in the 'disturbance level combination analysis split at event level: disturbance levels 1, 2, 3, 4 and 5 (all)' training set, or not included (NA) (Field type: categorical)
dist_camera_level_individ_1: Label indicating whether the image is included in the 'disturbance level combination analysis split at camera level: disturbance
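As a rough illustration of how this metadata table can drive an analysis, the sketch below loads the Dataset_images worksheet with pandas and selects the images assigned to the baseline training split. The split label values ("train", "val", "test") are assumptions based on the field descriptions above, not values confirmed by the dataset.

```python
import pandas as pd

# Hypothetical sketch: read the Dataset_images worksheet and pull out the
# images assigned to the baseline training split. The split labels used here
# ("train", "val", "test") are assumed, not taken from the dataset itself.
meta = pd.read_excel("CT_image_data_info2.xlsx", sheet_name="Dataset_images")

baseline_train = meta[meta["baseline"] == "train"]
print(len(baseline_train), "images in the assumed baseline training split")
print(baseline_train[["filename", "camera_trap_site", "taxon", "dist_level"]].head())
```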

  2. Data from: A Deep Learning-Based Approach for Efficient Detection and...

    • zenodo.org
    zip
    Updated Apr 26, 2024
Cite
Dotti Prisca; Miguel Fernandez-Tenorio; Radoslav Janicek; Pablo Márquez-Neila; Marcel Wullschleger; Raphael Sznitman; Marcel Egger (2024). A Deep Learning-Based Approach for Efficient Detection and Classification of Local Ca²⁺ Release Events in Full-Frame Confocal Imaging [Dataset]. http://doi.org/10.5281/zenodo.10391727
    Dataset updated
    Apr 26, 2024
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
Dotti Prisca; Miguel Fernandez-Tenorio; Radoslav Janicek; Pablo Márquez-Neila; Marcel Wullschleger; Raphael Sznitman; Marcel Egger
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    U-Net implementation and project code: https://github.com/dottipr/sparks_project

GUI's GitHub repository: https://github.com/r-janicek/xytCalciumSignalsDetection

3. CIFAR10-DVS

    • opendatalab.com
    zip
    Updated Mar 17, 2023
Cite
Tsinghua University (2023). CIFAR10-DVS [Dataset]. http://doi.org/10.3389/fnins.2017.00309
    Dataset updated
    Mar 17, 2023
    Dataset provided by
    Tsinghua University
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

CIFAR10-DVS is an event-stream dataset for object classification. 10,000 frame-based images from the CIFAR-10 dataset are converted into 10,000 event streams with an event-based sensor whose resolution is 128×128 pixels. The dataset is of intermediate difficulty, with 10 different classes. The conversion is implemented by applying a repeated closed-loop smooth (RCLS) movement to the frame-based images; this movement produces rich local intensity changes in continuous time, which are quantized by each pixel of the event-based camera.

4. Web Image Dataset for Event Recognition (WIDER)

    • opendatalab.com
    zip
    Updated Sep 22, 2022
Cite
Chinese University of Hong Kong (2022). Web Image Dataset for Event Recognition (WIDER) [Dataset]. https://opendatalab.com/OpenDataLab/Web_Image_Dataset_for_Event_etc
Available download formats: zip (935363083 bytes)
    Dataset updated
    Sep 22, 2022
    Dataset provided by
    Chinese University of Hong Kong
    License

http://yjxiong.me/event_recog/WIDER/download_form.php

    Description

    WIDER is a dataset for complex event recognition from static images. As of v0.1, it contains 61 event categories and around 50574 images annotated with event class labels. We provide a split of 50% for training and 50% for testing.

  5. FEMA Historical Geospatial Damage Assessments

    • gis-fema.hub.arcgis.com
    Updated Feb 27, 2019
Cite
FEMA AGOL (2019). FEMA Historical Geospatial Damage Assessments [Dataset]. https://gis-fema.hub.arcgis.com/datasets/fema-historical-geospatial-damage-assessments
    Dataset updated
    Feb 27, 2019
    Dataset provided by
Federal Emergency Management Agency (http://www.fema.gov/)
    Authors
    FEMA AGOL
    License

MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

FEMA Historical Geospatial Damage Assessment Database

Methodology

For visual damage assessments using post-event imagery: Destroyed structures are classified based on a visual post-event imagery review indicating that the structure was collapsed. Affected structures were classified based on a visual post-event imagery review indicating there were missing roof segments, failure of structural elements, and visible damage. Visual imagery assessments are primarily completed using nadir ("looking straight down") imagery, so damage to the sides of buildings was not included in the visual assessments. Often, imagery was not acquired during peak flood crests on rivers or surge inundation along the coast; as a result, the visual assessments may focus on resulting wind damage, not flood impacts. There may be damage visible on the ground that was not assessed using the imagery.

For modeled damage assessments using depth grids: Damage categories (Affected, Minor, Major, Destroyed) are derived from flood depths at the structure as characterized by the best-available flood depth grid at the time of the damage assessment.

Data Dictionary

Damage Level: The damage category assigned to the structure based on modeled or visual assessment. Values: No Damage, Affected, Minor, Major, Destroyed.

Damage Type: The type of event that created the damage. Multi-event: more than one type of event created damage.

Assessment Type: The method for assigning a damage category. Field Assessed: damage category validated in the field. Modeled: damage category predicted based on modeled wind, flood or surge data. Other type: damage category predicted based on other type of geospatial analysis. Remote Sensing: damage category assigned using image processing and image validation. Unknown: damage category predicted based on unknown type of analysis.

Inundation Depth: Depth of flooding in feet. Predicted/modeled or measured/observed.

Wind Exposure Level: Severity of wind impact the structure experienced based on the Saffir-Simpson Hurricane Wind Scale or the Enhanced Fujita Rating for Tornados.

Peak Ground Acceleration: PGA experienced by the structure during an earthquake based on USGS ShakeMap GIS data.

Accessible: Indicates whether the structure is accessible or inaccessible due to debris, flooding, damage or other reason.

Production Date: Date when the damage category was assigned.

Imagery Date: Acquisition date of the image that was used to assign structural damage categories.

Event Name: Name of the natural disaster that caused the damage.

Event Date: The date of the event or natural disaster.

Data Producer: Organization that created the damage assessment data.

Disaster Number: Unique ID value assigned for natural disaster events.

USNG: The USNG grid ID that the structural damage lies within.

Access & Use Information

Public: This dataset is intended for public access and use.
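As a hedged illustration of how the data dictionary above might be used, the sketch below filters an assumed CSV export of the layer by damage level and assessment type. The file name, the exact column names, and the event name are hypothetical placeholders, not confirmed values from the service.

```python
import pandas as pd

# Hypothetical sketch: filter an assumed CSV export of the damage assessment
# layer using the fields from the data dictionary above. File name, column
# names, and the event name are placeholders.
df = pd.read_csv("fema_historical_damage_assessments.csv")

event = df[df["Event Name"] == "Hurricane Harvey"]       # hypothetical event
print(event["Damage Level"].value_counts())              # structures per damage category

# Remotely sensed assessments with more than 3 ft of modeled/observed flooding.
flooded = event[(event["Assessment Type"] == "Remote Sensing")
                & (event["Inundation Depth"] > 3)]
print(len(flooded), "remotely assessed structures with >3 ft inundation")
```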

  6. VegetationFGDB

    • usfs.hub.arcgis.com
    Updated Feb 10, 2020
Cite
U.S. Forest Service (2020). VegetationFGDB [Dataset]. https://usfs.hub.arcgis.com/datasets/367e660281a44c9983fb982f5f1a95e3
    Dataset updated
    Feb 10, 2020
    Dataset provided by
U.S. Department of Agriculture Forest Service (http://fs.fed.us/)
    Authors
    U.S. Forest Service
    Description

Fire affected vegetation varied significantly within the Tanglewood Fire perimeter. Some field mapping of the varied nature of fire affected vegetation around the built properties occurred and is documented here. However, these field-digitized polygons representing fire affected vegetation did not always correspond spatially to actual fire affected vegetation. Differences mostly reflected the field data collectors' experience with aerial photo interpretation. See Maranghides and McNamara (2016) for more details on how fire affected vegetation was documented.

    Post-fire aerial imagery was also collected within one month of the Tanglewood Fire as follows:

Amarillo Department of Public Safety (DPS): oblique aerial imagery of select portions of the Tanglewood Fire on February 28, 2011.
Pictometry International: This post-fire imagery, collected on April 1, 2011, included orthorectified nadir imagery and oblique imagery as presented in these web pages.

The post-fire imagery collected by Pictometry International was also utilized to map fire affected above ground vegetation (i.e., not grasses) across the entire incident. Finally, the National Agriculture Imagery Program (NAIP) acquired imagery of the study area in the summer of 2012.

The Pictometry International imagery was collected almost a month after the incident, and the effects of fire on burned grass were less evident at the time of this image acquisition. This information decay is shown in the image below, which compares post-fire aerial imagery from a day after the fire with the Pictometry International imagery from one month after the fire. The blackened appearance of the grass areas present immediately after the fire is more washed out in the post-fire Pictometry International imagery. The washed-out appearance makes it challenging to map fire affected grasses ubiquitously across the incident from the post-fire imagery alone.

    Post-fire imagery from Pictometry International and DPS portraying the loss of information due to changes over time.

    The Pictometry International imagery does portray some effects of fire on burned above ground vegetation (e.g., trees and shrubs). Consequently, classification of the post-fire Pictometry International imagery occurred for post-fire green vegetation, vegetation with no foliage remaining and vegetation that saw scorching from fire, as observed in the post-fire imagery.

This fire affected vegetation mapping was conducted using an object-based image classification approach in Feature Analyst™. First, several training sites representative of each of the three vegetation mapping classes listed above were identified. Then, three supervised image classifications were conducted, one for each training class: the scorched vegetation was classified first, vegetation with no foliage next, and the green vegetation last. For the last two supervised classifications, each of the previous classification results was input as a mask. Consequently, there is no overlap between the vegetation classifications.
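The sequential, mutually exclusive masking described above can be sketched as follows; the thresholding "classifiers" and the single-band array are stand-ins for the supervised Feature Analyst runs and the Pictometry imagery, not the actual workflow.

```python
import numpy as np

# Conceptual sketch of sequential, mutually exclusive classification: each
# class is mapped in turn and previously classified pixels are masked out of
# later runs, so no pixel receives two labels. The toy thresholding below is a
# placeholder for the supervised classifications described in the text.
rng = np.random.default_rng(0)
image = rng.random((200, 200))                    # stand-in single-band image

labels = np.zeros(image.shape, dtype=np.int64)    # 0 = unclassified

def classify(band, already_labelled, threshold):
    """Toy classifier: flags unmasked pixels above a threshold."""
    return (band > threshold) & ~already_labelled

scorched = classify(image, labels > 0, 0.8)
labels[scorched] = 1                              # 1 = scorched vegetation

no_foliage = classify(image, labels > 0, 0.6)
labels[no_foliage] = 2                            # 2 = vegetation with no foliage

green = classify(image, labels > 0, 0.4)
labels[green] = 3                                 # 3 = green vegetation

print(np.bincount(labels.ravel()))                # classes are mutually exclusive
```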

The Pictometry International imagery was also collected when many deciduous trees were in leaf-off conditions. Consequently, in specific locations, deciduous trees that were not affected by the fire were mapped as vegetation with no foliage remaining. The post-fire NAIP imagery from the summer of 2012 was also utilized to map green vegetation in order to identify vegetation that was green after the fire during leaf-on conditions. Again, training sites were digitized and used in a supervised classification. This classification does not distinguish between vegetation that lost its foliage initially due to the fire but later recovered and vegetation that was in leaf-off conditions and was not affected by the fire. Also, some green vegetation in the NAIP imagery from 2012 is from the growth of new vegetation after the fire.

    The above-described fire affected vegetation mapping generally portrays the extent of burned above ground vegetation more accurately than the field data. The two datasets could be combined to produce an improved mapping of fire affected vegetation, in some locations. Nonetheless, the post-fire aerial imagery mapping presented here does sometimes erroneously omit burned above ground vegetation and erroneously commit not burned above ground vegetation to the burned class, though the quantitative assessment of these errors has not occurred to date.

    The fire affected vegetation mapping presented here is only intended to highlight relatively large areas of fire affected vegetation. This information is qualitatively used in the next section to detail the varying nature of exposure at the Tanglewood Fire. The coarse quality of the mapping effort requires some cleanup for more quantitative assessments of exposure, which might be facilitated with the use of the ground data and images to improve results.

7. Data from: Characterizing multi-decadal, annual land cover change dynamics...

    • tandf.figshare.com
    docx
    Updated May 31, 2023
Cite
C.R. Hakkenberg; M.P. Dannenberg; C. Song; K.B. Ensor (2023). Characterizing multi-decadal, annual land cover change dynamics in Houston, TX based on automated classification of Landsat imagery [Dataset]. http://doi.org/10.6084/m9.figshare.7314566.v1
    Dataset updated
    May 31, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    C.R. Hakkenberg; M.P. Dannenberg; C. Song; K.B. Ensor
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Texas, Houston
    Description

    In 2017, Hurricane Harvey caused substantial loss of life and property in the swiftly urbanizing region of Houston, TX. Now in its wake, researchers are tasked with investigating how to plan for and mitigate the impact of similar events in the future, despite expectations of increased storm intensity and frequency as well as accelerating urbanization trends. Critical to this task is the development of automated workflows for producing accurate and consistent land cover maps of sufficiently fine spatio-temporal resolution over large areas and long timespans. In this study, we developed an innovative automated classification algorithm that overcomes some of the traditional trade-offs between fine spatio-temporal resolution and extent – to produce a multi-scene, 30m annual land cover time series characterizing 21 years of land cover dynamics in the 35,000 km2 Greater Houston area. The ensemble algorithm takes advantage of the synergistic value of employing all acceptable Landsat imagery in a given year, using aggregate votes from the posterior predictive distributions of multiple image composites to mitigate against misclassifications in any one image, and fill gaps due to missing and contaminated data, such as those from clouds and cloud shadows. The procedure is fully automated, combining adaptive signature generalization and spatio-temporal stabilization for consistency across sensors and scenes. The land cover time series is validated using independent, multi-temporal fine-resolution imagery, achieving crisp overall accuracies between 78–86% and fuzzy overall accuracies between 91–94%. Validated maps and corresponding areal cover estimates corroborate what census and economic data from the Greater Houston area likewise indicate: rapid growth from 1997–2017, demonstrated by the conversion of 2,040 km2 (± 400 km2) to developed land cover, 14% of which resulted from the conversion of wetlands. Beyond its implications for urbanization trends in Greater Houston, this study demonstrates the potential for automated approaches to quantifying large extent, fine resolution land cover change, as well as the added value of temporally-dense time series for characterizing higher-order spatio-temporal dynamics of land cover, including periodicity, abrupt transitions, and time lags from underlying demographic and socio-economic trends.
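A minimal sketch of the aggregate-vote idea described in the abstract is shown below: per-composite classifications of the same pixels are combined by majority vote, and a composite with missing data at a pixel (e.g. cloud-contaminated) simply does not vote there. The class codes and no-data value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy ensemble vote across per-composite classifications of the same pixels.
# NODATA marks pixels missing in a given composite (e.g. clouds or shadows).
NODATA = 255
composites = np.stack([
    np.array([1, 2, NODATA, 3]),   # classification from composite 1
    np.array([1, NODATA, 2, 3]),   # classification from composite 2
    np.array([2, 2, 2, NODATA]),   # classification from composite 3
])

def vote(pixel_labels):
    valid = pixel_labels[pixel_labels != NODATA]
    if valid.size == 0:
        return NODATA                      # gap even after combining composites
    values, counts = np.unique(valid, return_counts=True)
    return values[np.argmax(counts)]       # most frequent class wins

combined = np.apply_along_axis(vote, 0, composites)
print(combined)                            # -> [1 2 2 3]
```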

8. UAVS-FDDB: UAVs-based Forest Fire Detection Database

    • data.mendeley.com
    Updated May 10, 2024
Cite
Md. Najmul Mowla (2024). UAVS-FDDB: UAVs-based Forest Fire Detection Database [Dataset]. http://doi.org/10.17632/5m98kvdkyt.2
    Dataset updated
    May 10, 2024
    Authors
    Md. Najmul Mowla
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The UAVs-based Forest Fire Database (UAVs-FFDB) encompasses four distinct classes: 1. Pre-evening Forest Condition 2. Evening Forest Condition 3. Pre-evening Fire Incident 4. Evening Fire Incident. The images were captured using UAVs equipped with Raspberry Pi Camera V2 technology in the forested areas surrounding Adana Alparslan Türkeş Science and Technology University, Adana, Turkey. This dataset is divided into two main components: original data (raw) and augmented data, each accompanied by an annotation file. The raw data comprises 1,653 images, while the augmented dataset contains 15,560 images. Below is the distribution of images across the four classes:

Raw data:
Pre-evening Forest Condition = 222
Evening Forest Condition = 286
Pre-evening Fire Incident = 791
Evening Fire Incident = 354

Augmented data:
Pre-evening Forest Condition = 3,890
Evening Forest Condition = 3,890
Pre-evening Fire Incident = 3,890
Evening Fire Incident = 3,890

9. Bridging the Gap between Quadrats and Satellites: Assessing Utility of...

    • catalog.data.gov
    • fisheries.noaa.gov
    Updated Oct 31, 2024
Cite
Office for Coastal Management (Custodian) (2024). Bridging the Gap between Quadrats and Satellites: Assessing Utility of Drone-based Imagery to Enhance Emergent Vegetation Biomonitoring - NERRS/NSC(NERRS Science Collaborative) [Dataset]. https://catalog.data.gov/dataset/bridging-the-gap-between-quadrats-and-satellites-assessing-utility-of-drone-based-imagery-to-en1
    Dataset updated
    Oct 31, 2024
    Dataset provided by
    Office for Coastal Management (Custodian)
    Description

    Monitoring plays a central role in detecting change in coastal ecosystems. The National Estuarine Research Reserve System (NERRS) invests heavily in assessing changes in tidal wetlands through the System-wide Monitoring Program (SWMP). This monitoring is conducted in 1m2 permanent plots every 1-3 years via in situ sampling and at reserve-wide scales via airplane imagery every 5-10 years. While both approaches have strengths, important processes at intermediate spatial (i.e., marsh platform) and finer temporal (i.e., storm events) scales may be missed. Uncrewed Aerial Systems (UAS, i.e., drones) can provide high spatial resolution and coverage, with customizable sensors, at user-defined times. Based on a needs assessment and discussions with NERRS end users, we conducted a regionally coordinated effort, working in salt marshes and mangroves within six reserves in the Southeast and Caribbean to develop, assess and collaboratively refine a UAS-based tidal wetlands monitoring protocol aimed at entry-level UAS users. Using ground-based surveys for validation, we 1) assessed the efficacy of UAS-based imagery for estimating vegetation percent cover, delineating ecotones (e.g., low to high marsh), and generating digital elevation models, and 2) assessed the utility of multispectral sensors for improving products from #1 and developing vegetation indices to estimate aboveground biomass (e.g., normalized difference vegetation index, NDVI). UAS-derived elevation models and canopy height estimates were generally of insufficient accuracy to be useful when compared to field measures. Across sites, root mean squared error ranged from 0.25 to 0.59m for bare earth models, 0.15 to 1.58m for vegetation surface models, and 0.33 to 2.1m for canopy height. The accuracy of ecotones delineated from UAS imagery varied among ecotones. The average distance between image- and field-based delineations of the wetland-water ecotone was 0.18 +/- 0.01m, whereas differences of the low-high marsh ecotone were 1.25 +/- 0.11m. Overall accuracy of vegetated and unvegetated classifications among sites was 85 +/- 4%. Comparison of field- and image-based estimates of total percent vegetated cover indicated modest agreement between the two approaches, although percent cover was generally overestimated from imagery. Average differences in percent cover between approaches was ~5% at one reserve, but >25% at four reserves. Overall accuracy of species-specific classifications among reserves was 74 +/- 6% when using both orthomosaics and surface vegetation models. Comparison of field- and image-based estimates of species-specific cover indicated minimal agreement between the two approaches; the interquartile ranges of the differences were wide for all species (>40%). Aboveground biomass in monospecific Spartina alterniflora plots was highly correlated to NDVI (R2 > 0.69), although the relationship was reserve- and sensor-specific. The strength of the relationship between NDVI and biomass was weaker in mixed-species plots (R2 = 0.52). This project serves as a critical first step for improving tidal wetland monitoring conducted as part of SWMP. Furthermore, the project increased the technical capacity of end users to conduct UAS-based wetland monitoring. This research collaboration was the first of its kind in the region and has catalyzed continued collaboration to identify regional management needs and expand UAS-based monitoring to additional coastal habitats (e.g., oyster reefs).
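For readers unfamiliar with the vegetation index referenced above, the sketch below computes NDVI from near-infrared and red reflectance and fits a simple linear biomass model. The band values and biomass numbers are made up; the reserve- and sensor-specific relationships reported in the study come from the actual plots and sensors.

```python
import numpy as np

# Illustrative NDVI computation and a simple NDVI-biomass linear fit.
# All numbers below are fabricated for demonstration only.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

nir = np.array([0.42, 0.51, 0.38, 0.60, 0.47])        # plot-mean NIR reflectance
red = np.array([0.08, 0.06, 0.10, 0.05, 0.07])        # plot-mean red reflectance
biomass = np.array([310.0, 420.0, 250.0, 530.0, 380.0])  # measured biomass (g/m2)

x = ndvi(nir, red)
slope, intercept = np.polyfit(x, biomass, 1)          # simple linear model
r2 = np.corrcoef(x, biomass)[0, 1] ** 2
print(f"biomass ~ {slope:.1f} * NDVI + {intercept:.1f} (R^2 = {r2:.2f})")
```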

10. Fall Detection Dataset for Multiclass Classification

    • ieee-dataport.org
    Updated Jul 4, 2025
Cite
Senthil Kumar Thangavel (2025). Fall Detection Dataset for Multiclass Classification [Dataset]. https://ieee-dataport.org/documents/fall-detection-dataset-multiclass-classification
    Dataset updated
    Jul 4, 2025
    Authors
    Senthil Kumar Thangavel
    Description

The dataset includes 9 distinct classes of activities related to falls and non-fall events, and aims to enhance the robustness of fall detection systems by providing a diverse range of scenarios.

  11. Learning Privacy from Visual Entities - Curated data sets and pre-computed...

    • zenodo.org
    zip
    Updated May 7, 2025
Cite
Alessio Xompero; Andrea Cavallaro (2025). Learning Privacy from Visual Entities - Curated data sets and pre-computed visual entities [Dataset]. http://doi.org/10.5281/zenodo.15348506
    Dataset updated
    May 7, 2025
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
Alessio Xompero; Andrea Cavallaro
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    This repository contains the curated image privacy datasets and pre-computed visual entities used in the publication Learning Privacy from Visual Entities by A. Xompero and A. Cavallaro.

    Curated image privacy data sets

    In the article, we trained and evaluated models on the Image Privacy Dataset (IPD) and the PrivacyAlert dataset. The datasets are originally provided by other sources and have been re-organised and curated for this work.

Our curation organises the datasets in a common structure. We updated the annotations and labelled the data splits in the annotation file. This avoids having separate folders of images for each data split (training, validation, testing) and allows flexible handling of new splits, e.g. splits created with a stratified K-fold cross-validation procedure. As with the original datasets (PicAlert and PrivacyAlert), we provide the links to the images in bash scripts that download them. Another bash script re-organises the images into sub-folders with a maximum of 1000 images per folder.

Both datasets refer to images publicly available on Flickr. These images have a large variety of content, including sensitive content, seminude people, vehicle plates, documents, and private events. Images were annotated with a binary label denoting whether the content was deemed public or private. As the images are publicly available, their label is mostly public, so these datasets have a high imbalance towards the public class. Note that IPD combines two other existing datasets, PicAlert and part of VISPR, to increase the number of private images, which is already limited in PicAlert. Further details are in our corresponding publication: https://doi.org/10.48550/arXiv.2503.12464

    List of datasets and their original source:

    Notes:

    • For PicAlert and PrivacyAlert, only urls to the original locations in Flickr are available in the Zenodo record
    • Collector and authors of the PrivacyAlert dataset selected the images from Flickr under Public Domain license
• Owners of the photos on Flickr could have removed the photos from the social media platform
• Running the bash scripts to download the images can result in the "429 Too Many Requests" status code

Pre-computed visual entities

Some of the models run their pipeline end-to-end with the images as input, whereas other models require different or additional inputs. These inputs include the pre-computed visual entities (scene types and object types) represented in a graph format, e.g. for a Graph Neural Network. Re-using these pre-computed visual entities allows other researchers to build new models based on these features while avoiding re-computing them on their own or at each epoch during the training of a model (faster training).

For each image of each dataset, namely PrivacyAlert, PicAlert, and VISPR, we provide the predicted scene probabilities as a .csv file, the detected objects as a .json file in COCO data format, and the node features (visual entities already organised in graph format with their features) as a .json file. For consistency, all the files are already organised in batches following the structure of the images in the datasets folder. For each dataset, we also provide the pre-computed adjacency matrix for the graph data.
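A hedged sketch of loading these pre-computed inputs is shown below; the directory layout, file names, and the adjacency-matrix file format are assumptions, so refer to the GitHub repository for the actual structure.

```python
import json
import numpy as np
import pandas as pd

# Hypothetical sketch: load the pre-computed inputs described above for one
# batch of the PrivacyAlert dataset. Paths and file names are placeholders.
scene_probs = pd.read_csv("privacyalert/scene_probabilities/batch_000.csv")

with open("privacyalert/detections/batch_000.json") as f:
    detections = json.load(f)          # COCO-format object detections

with open("privacyalert/node_features/batch_000.json") as f:
    node_features = json.load(f)       # visual entities already in graph form

adjacency = np.load("privacyalert/adjacency_matrix.npy")  # assumed .npy export
print(scene_probs.shape, len(detections), adjacency.shape)
```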

    Note: IPD is based on PicAlert and VISPR and therefore IPD refers to the scene probabilities and object detections of the other two datasets. Both PicAlert and VISPR must be downloaded and prepared to use IPD for training and testing.

    Further details on downloading and organising data can be found in our GitHub repository: https://github.com/graphnex/privacy-from-visual-entities (see ARTIFACT-EVALUATION.md#pre-computed-visual-entitities-)

    Enquiries, questions and comments

If you have any enquiries, questions, or comments, or you would like to file a bug report or a feature request, use the issue tracker of our GitHub repository.

12. Data from: EEG Dataset for natural image recognition through Visual Stimuli

    • data.mendeley.com
    Updated Jan 20, 2025
Cite
Shamama Anwar (2025). EEG Dataset for natural image recognition through Visual Stimuli [Dataset]. http://doi.org/10.17632/g9shp2gxhy.2
    Dataset updated
    Jan 20, 2025
    Authors
    Shamama Anwar
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Electroencephalography (EEG) is a technique for measuring the electrical activity of the brain in the form of action potentials using electrodes placed on the scalp. The technique is gaining popularity for research investigations due to its non-invasive nature and ease of application. EEG exposes a wide range of human brain potentials, including event-related, sensory, and visually evoked potentials (VEPs), and helps to build complex applications.

The current dataset consists of thirty-two subjects' EEG recordings in response to visual stimuli (VEPs). The data were collected to advance visual decoding and to support EEG-based image classification and reconstruction; the primary goal is to investigate the cognitive mechanisms behind known and unknown perceptions.

The dataset was collected using a standardised experimental setup that included several experimental phases. Thirty-five adult participants took part in the data collection. They had no visual impairment and took the Vividness of Visual Imagery Questionnaire (VVIQ) test, answering sixteen questions based on their memory and imagination. Of the thirty-five participants, thirty-two cleared the test and their EEG was recorded. The data were collected using a 14-channel EPOC X EEG device, sampled at 128 Hz, with electrodes placed according to the 10-20 system. EMOTIVPro software was used for collection and annotation. The brain activity signals were recorded while the participants viewed an image displayed on a white screen; the images consist of natural objects: an apple (class A), a flower (class F), a car (class C) and a human face (class P).

The file "VVIQuestionnaire.pdf" is the questionnaire used to ascertain the visual imagination of the participants. The file "Participant_info.csv" contains the details of the participants (age, gender, image class viewed, and Participant ID) and their VVIQ score. Participant names have been removed for anonymity, and a unique participant ID has been assigned to each participant; these IDs are used to represent the EEG of the participants. Each class folder contains two subfolders: A1, A2 (for class A); C1, C2 (for class C); P1, P2 (for class P); and F1, F2 (for class F). These folders contain the data acquired from the participants who were shown the corresponding images, as CSV and EDF files. This file structure makes the data easier to access and analyse based on the class of visual stimulus images and the experimental design employed.
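A minimal sketch of walking this file structure is shown below, assuming the layout described above (class folders with participant subfolders containing CSV exports); the exact column layout of the EMOTIV CSV export is not specified here and is left as an assumption.

```python
from pathlib import Path
import pandas as pd

# Hypothetical sketch: walk the class folders (A, C, F, P) described above and
# load each participant's CSV recording. The root folder name and the CSV
# column layout of the EMOTIV export are assumptions.
root = Path("EEG_dataset")
participants = pd.read_csv(root / "Participant_info.csv")

recordings = {}
for class_dir in ["A", "C", "F", "P"]:
    for csv_path in (root / class_dir).rglob("*.csv"):
        recordings[csv_path.stem] = pd.read_csv(csv_path)  # one 14-channel, 128 Hz recording

print(len(participants), "participants listed,", len(recordings), "recordings loaded")
```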

13. Colorado State University Geometric Snowflake Classification Dataset

    • data.niaid.nih.gov
    Updated Mar 5, 2021
Cite
Cam Key (2021). Colorado State University Geometric Snowflake Classification Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4584199
    Dataset updated
    Mar 5, 2021
    Dataset provided by
    Branislav Notaros
    Cam Key
    Adam Hicks
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Colorado
    Description

    This dataset contains over 25,000 human-classified images of snowflakes sorted into the following categories:

    AG (aggregate)

    CC (columnar crystal)

    GR (graupel)

    PC (planar crystal)

    SP (small particle)

    Filenames are in the following format:

    YYYY.MM.DD_HH.MM.SS_flake_X_cam_Y_candidate_Z.png

    YYYY.MM.DD_HH.MM.SS: datetime raw image was collected

    X: trigger event

    Y: MASC system camera index that produced raw image

    Z: detection index in raw image (multiple flakes are detected in most raw images)
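A small parsing sketch for this naming convention is given below; the regular expression simply mirrors the stated pattern, and the example file name is made up.

```python
import re
from datetime import datetime

# Parse the documented filename pattern:
# YYYY.MM.DD_HH.MM.SS_flake_X_cam_Y_candidate_Z.png
PATTERN = re.compile(
    r"(?P<ts>\d{4}\.\d{2}\.\d{2}_\d{2}\.\d{2}\.\d{2})"
    r"_flake_(?P<flake>\d+)_cam_(?P<cam>\d+)_candidate_(?P<candidate>\d+)\.png"
)

name = "2019.02.14_03.25.41_flake_12_cam_1_candidate_3.png"   # made-up example
m = PATTERN.match(name)
timestamp = datetime.strptime(m.group("ts"), "%Y.%m.%d_%H.%M.%S")
print(timestamp, m.group("flake"), m.group("cam"), m.group("candidate"))
```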

    We request that all users of this dataset reference this Zenodo entry and the accompanying JTECH paper:

    Key, C. et al. (2021) Advanced Deep Learning-Based Supervised Classification of Multi-Angle Snowflake Camera Images, JTECH

14. Data from: A Collaborative Change Detection Approach on Multi-Sensor Spatial...

    • hub.arcgis.com
    Updated Mar 18, 2024
Cite
GEOAP (2024). A Collaborative Change Detection Approach on Multi-Sensor Spatial Imagery for Desert Wetland Monitoring after a Flash Flood in Southern Morocco [Dataset]. https://hub.arcgis.com/documents/geoap::a-collaborative-change-detection-approach-on-multi-sensor-spatial-imagery-for-desert-wetland-monitoring-after-a-flash-flood-in-southern-morocco/about
    Dataset updated
    Mar 18, 2024
    Dataset authored and provided by
    GEOAP
    Area covered
    Southern Provinces, Morocco
    Description

This study aims to present a technique that combines multi-sensor spatial data to monitor wetland areas after a flash-flood event in a Saharan arid region. To extract the most efficient information, seven satellite images (radar and optical) taken before and after the event were used. To achieve the objectives, this study used Sentinel-1 data to discriminate water body and soil roughness, and optical data to monitor the soil moisture after the event. The proposed method combines two approaches: one based on spectral processing, and the other based on categorical processing. The first step was to extract four spectral indices and utilize change vector analysis on multispectral diachronic images from three MSI Sentinel-2 images and two Landsat-8 OLI images acquired before and after the event. The second step was performed using pattern classification techniques, namely, linear classifiers based on support vector machines (SVM) with Gaussian kernels. The results of these two approaches were fused to generate a collaborative wetland change map. The application of co-registration and supervised classification based on textural and intensity information from Radar Sentinel-1 images taken before and after the event completes this work. The results obtained demonstrate the importance of the complementarity of multi-sensor images and a multi-approach methodology to better monitor changes to a wetland area after a flash-flood disaster.

https://doi.org/10.3390/rs11091042
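A minimal sketch of the change vector analysis step mentioned above is given below: each pixel carries a vector of spectral indices before and after the event, and the change magnitude is the Euclidean norm of their difference. The index stack, array sizes, and threshold are illustrative assumptions, not the study's values.

```python
import numpy as np

# Toy change vector analysis on a stack of spectral indices (pre/post event).
# Index choice, array sizes, and the threshold are placeholders.
before = np.random.rand(4, 100, 100)    # four spectral indices, pre-event
after = np.random.rand(4, 100, 100)     # same indices, post-event

delta = after - before
magnitude = np.sqrt((delta ** 2).sum(axis=0))   # change intensity per pixel
change_mask = magnitude > 0.5                   # hypothetical threshold
print(change_mask.mean(), "fraction of pixels flagged as changed")
```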

  15. EEG dataset

    • figshare.com
    bin
    Updated Dec 6, 2019
Cite
minho lee (2019). EEG dataset [Dataset]. http://doi.org/10.6084/m9.figshare.8091242.v1
    Dataset updated
    Dec 6, 2019
    Dataset provided by
    figshare
    Authors
    minho lee
    License

https://www.gnu.org/copyleft/gpl.html

    Description

This dataset was collected for the study "Robust Detection of Event-Related Potentials in a User-Voluntary Short-Term Imagery Task".

16. Hyperspectral imagery Research Products - Toulouse urban area 2015 (French...

    • data.niaid.nih.gov
    Updated Sep 26, 2022
Cite
Houet, Thomas (2022). Hyperspectral imagery Research Products - Toulouse urban area 2015 (French ANR HYEP project) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3611239
    Dataset updated
    Sep 26, 2022
    Dataset provided by
    Weber, Christiane
    Gadal, Sébastien
    Houet, Thomas
    Briottet, Xavier
    Mallet, Clément
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    France, Toulouse
    Description

The main goal of the HYEP project (ANR 14-CE22-0016-01) was to propose a panel of methods and processes designed for hyperspectral imaging, whose specificity makes it a valuable auxiliary for monitoring the elements of the urban area. The main results of the project can be found in the following publications:

    G. Roussel, C. Weber, X. Briottet and X. Ceamanos, "Comparison of two atmospheric correction methods for the classification of spaceborne urban hyperspectral data depending on the spatial resolution", International Journal of Remote Sensing, vol. 39(5), pp. 1593-1614, 2018.

    F. Z. Benhalouche, M. S. Karoui, Y. Deville, I. Boukerch, A. Ouamri, ``Multi-sharpening hyperspectral remote sensing data by multiplicative joint-criterion linear-quadratic nonnegative matrix factorization'', Proceedings of the 2017 IEEE International Workshop on Electronics, Control, Measurement, Signals and their application to Mechatronics (ECMSM 2017), May 24-26, 2017, Donostia - San Sebastian

Gintautas Mozgeris, Vytautė Juodkienė, Donatas Jonikavičius, Lina Straigytė, Sébastien Gadal, and Walid Ouerghemmi. Ultra-Light Aircraft-Based Hyperspectral and Colour-Infrared Imaging to Identify Deciduous Tree Species in an Urban Environment. Remote Sensing, 10(10), October 2018.

Christiane Weber, Thomas Houet, Sébastien Gadal, Rahim Aguejdad, Grzegorz Skupinski, Yannick Deville, Jocelyn Chanussot, Mauro Dalla Mura, Xavier Briottet, Clément Mallet, and Arnaud Le Bris. HYEP HYperspectral imagery for Environmental urban Planning: principaux résultats. In 7ème colloque scientifique du groupe SFPT-GH, Toulouse, France, July 2019. ONERA - SFTP.

    Christiane Weber, Rahim Aguejdad, Xavier Briottet, Josselin Aval, Sophie Fabre, Jean Demuynck, Emmanuel Zenou, Yannick Deville, Moussa Sofiane Karoui, Fatima Zohra, Sébastien Gadal, Walid Ouerghemmi, Clément Mallet, Arnaud Le Bris, and Nesrine CHEHATA. Hyperspectral Imagery for Environmental Urban Planning. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2018, pages 1628–1631, Valencia, Spain, July 2018a. IEEE.

    Christiane Weber, Rahim Aguejdad, X Briottet, J Avala, S. Fabre, J Demuynck, E Zenou, Y. Deville, M. Karoui, F Z Benhalouche, S Gadal, W Ourghemmi, C. Mallet, A. Le Bris, and N. Chehata. HYPERSPECTRAL IMAGERY FOR ENVIRONMENTAL URBAN PLANNING. In IGARSS 2018, Valencia, Spain, 2018b.

    W. Ouerghemmi, A. Le Bris, Nesrine CHEHATA, and Clément Mallet. A two-step decision fusion strategy: application to hyperspectral and multispectral images for urban classification. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, volume XLII-1/W1, pages 167–174, Hanover, Germany, May 2017. Copernicus GmbH (Copernicus Publications).

    Christiane Weber, Sébastien GADAL, Xavier Briottet, and Clément Mallet. Apport de l’imagerie hyperspectrale pour la planification urbaine. In Karine Emsellem, Diego Moreno, Christine Voiron-Canicio, and Didier Josselin, editors, SAGEO 2016 - Spatial Analysis and Geomatics, Actes de la conférence SAGEO’2016 - Spatial Analysis and GEOmatics, pages 454–462, Nice, France, December 2016.

Gintautas Mozgeris, Sébastien Gadal, Donatas Jonikavičius, Lina Straigytė, Walid Ouerghemmi, and Vytautė Juodkienė. Hyperspectral and color-infrared imaging from ultra-light aircraft: Potential to recognize tree species in urban environments. In University of California Los Angeles, editor, 8th Workshop in Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pages 542–546, Los Angeles, United States, August 2016.

Alexandre Hervieu, Arnaud Le Bris, and Clément Mallet. Fusion of hyperspectral and VHR multispectral image classifications in urban areas. In ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, volume III-3, pages 457–464, Prague, Czech Republic, July 2016.

Christiane Weber, Thomas Houet, Sébastien Gadal, Rahim Aguejdad, Grzegorz Skupinski, Aziz Serradj, Yannick Deville, Jocelyn Chanussot, Mauro Dalla Mura, Xavier Briottet, Clément Mallet, and Arnaud Le Bris. ANR HYEP (ANR 14-CE22-0016-01), Hyperspectral imagery for Environmental urban Planning (HYEP), Programme Mobilité et systèmes urbains 2014. Research report, CNRS UMR TETIS, ESPACE, LETG; ONERA; GIPSA-lab; IRAP; IGN, October 2018c.

    Josselin Aval, Sophie Fabre, Emmanuel Zenou, David Sheeren, Mathieu Fauvel & Xavier Briottet (2019) Object-based fusion for urban tree species classification from hyperspectral, panchromatic and nDSM data, International Journal of Remote Sensing, 40:14, 5339-5365, DOI: 10.1080/01431161.2019.1579937

    Charlotte Brabant, Emilien Alvarez-Vanhard, Achour Laribi, Gwenaël Morin, Kim Thanh Nguyen et al. Comparison of Hyperspectral Techniques for Urban Tree Diversity Classification Remote Sensing, MDPI, 2019, 11 (11), pp.1269. ⟨10.3390/rs11111269⟩ hal-02191084v1

    C. Brabant, Emilien Alvarez-Vanhard, Gwenaël Morin, Thanh Ngoc Nguyen, Achour Laribi et al. Evaluation of dimensional reduction methods on urban vegetation classification performance using hyperspectral data IGARSS 2018, Jul 2018, Valencia, Spain halshs-02191363v1

    Charlotte Brabant, Emilien Alvarez-Vanhard, Thomas Houet. Improving the classification of urban tree diversity from Very High Spatial Resolution hyperspectral images: comparison of multiples techniques Joint Urban Remote Sensing Event (JURSE 2019), May 2019, Vannes, France halshs-02191097v1

This dataset contains five research outputs of the project, produced on the basis of hyperspectral data obtained during an acquisition campaign over the Toulouse (France) urban area in July 2015 using the HySpex instrument, which provides 408 spectral bands spread over 0.4–2.5 μm. The flight altitude led to 2 m spatial resolution images.

Fields_samples.7z: ESRI Shape Format. Supervised SVM classification results for 600 urban trees according to a 3-level nomenclature: leaf type (5 classes), family (12 & 19 classes) and species (14 & 27 classes). The number of classes differs for the latter two as they depend on the minimum number of individuals considered (4 and 10 individuals per class, respectively). Tree positions have been acquired using differential GPS and are given with centimetric to decimetric precision. A randomly selected subset of these trees has been used to train SVM and Random Forest classification algorithms. Those algorithms were applied to hyperspectral images using a number of classes for the family (12 & 19 classes) and species (14 & 27 classes) levels defined according to the minimum number of individuals considered during the training/validation process (4 and 10 individuals per class, respectively). Global classification precision for several training subsets is given by Brabant et al., 2019 (https://www.mdpi.com/470202) in terms of averaged overall accuracy (AOA) and averaged kappa index of agreement (AKIA).

HySPex-2m.7z: full hyperspectral VNIR-SWIR ENVI standard image obtained from the co-registration of the VNIR and SWIR images through a signal aggregation process that yielded a synthetic VNIR 1.6 m spatial resolution image, with pixels exactly corresponding to those of the native SWIR image. First, a spatially resampled 1.6 m VNIR image was built, where output pixel values were calculated as the average of the VNIR 0.8 m pixel values that spatially contribute to them. Then, ground control points (GCPs) were selected over both images and the SWIR image was tied to the VNIR 1.6 m image using a bilinear resampling method in the ENVI tool. This led to a 1.6 m spatial resolution full VNIR-SWIR image.
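The 0.8 m to 1.6 m aggregation described above amounts to block-averaging 2×2 pixel neighbourhoods; a small sketch under that assumption is shown below, with an illustrative array size standing in for the real ENVI cube.

```python
import numpy as np

# Sketch of the spatial aggregation described above: each 1.6 m output pixel is
# the mean of the 2x2 block of 0.8 m VNIR pixels that falls inside it.
# The array shape is illustrative, not the real image dimensions.
vnir_08m = np.random.rand(16, 200, 200)               # (bands, rows, cols) at 0.8 m

bands, rows, cols = vnir_08m.shape
vnir_16m = vnir_08m.reshape(bands, rows // 2, 2, cols // 2, 2).mean(axis=(2, 4))
print(vnir_16m.shape)                                  # -> (16, 100, 100)
```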

HYPXIM-4m.7z, HYPXIM-8m.7z, Sentinel2-10m.7z: hyperspectral ENVI standard simulated images. The spatial and spectral configurations generated correspond to the ESA SENTINEL-2 instrument, which was launched in 2015, and the HYPXIM sensor, which was under study at that time.

17. Landsat 7 Educational Image Subsets

    • access.earthdata.nasa.gov
    • cmr.earthdata.nasa.gov
    Updated Apr 21, 2017
Cite
(2017). Landsat 7 Educational Image Subsets [Dataset]. https://access.earthdata.nasa.gov/collections/C1214609800-SCIOPS
    Dataset updated
    Apr 21, 2017
    Time period covered
    Jan 1, 1970 - Present
    Description

    EOS-WEBSTER has agreed to serve satellite image subsets for the Forest Watch ("http://www.forestwatch.sr.unh.edu") program and other educational programs which make use of satellite imagery. Forest Watch is a New England-wide environmental education activity designed to introduce teachers and students to field, laboratory, and satellite data analysis methods for assessing the state-of-health of local forest stands. One of the activities in Forest Watch involves image processing and data analysis of Landsat Thematic Mapper data (TM/ETM+) for the area around a participant's school. The image processing of local Landsat data allows the students to use their ground truth data from field-based activities to better interpret the satellite data for their own back yard. Schools use a freely available image processing software, MultiSpec ("http://dynamo.ecn.purdue.edu/%7Ebiehl/MultiSpec/"), to analyze the imagery. Value-added Landsat data, typically in a 512 x 512 pixel subset, are supplied by this collection. The Forest Watch program has supplied the data subsets in this collection based on the schools involved with their activities.

    Satellite data subsets may be searched by state or other category, and by spectral type. These images may be previewed through this system, ordered, and downloaded. Some historic Landsat 5 data subsets, which were acquired for this program, are also provided through this system. Landsat 5 subsets are multispectral data with 5 bands of data (TM bands 1-5). Landsat 7 subsets contain all bands of data and each subset has three spectral file types: 1) multispectral (ETM+ bands 1-5 and 7), 2) panchromatic (ETM+ band 8), and 3) Thermal (ETM+ band 6 high and low gain channels). Each spectral type must be ordered separately; this can be accomplished by choosing more than one spectral file type in your search parameters.

    These image subsets are served in the ERDAS Imagine (.img) format, which can be opened by newer versions of the MultiSpec program (versions released after Nov. 1999). MultiSpec can be downloaded at: "http://dynamo.ecn.purdue.edu/%7Ebiehl/MultiSpec/"
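
    For users working outside MultiSpec, the sketch below shows one possible way to read such an ERDAS Imagine subset in Python with rasterio (which reads the .img format through GDAL). The file name is a placeholder, and the band order assumes the multispectral file layout (ETM+ bands 1-5 and 7) described above.

        import rasterio

        # Multispectral subset file (placeholder name): ETM+ bands 1-5 and 7, in that order.
        with rasterio.open("example_subset_ms.img") as src:
            print("bands:", src.count, "size:", src.width, "x", src.height)
            print("CRS:", src.crs)                   # Lambert projection for most subsets
            red = src.read(3).astype(float)          # ETM+ band 3 (red)
            nir = src.read(4).astype(float)          # ETM+ band 4 (near infrared)
            ndvi = (nir - red) / (nir + red + 1e-9)  # simple NDVI, a typical classroom exercise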

    A header file is provided with most Landsat 7 subsets giving the specifics of the image.

    Please refer to the references to learn more about Forest Watch, Landsat, and the data this satellite acquires.

    In the near future we hope to release a new Satellite Interface, which would allow a user to search for satellite data from a number of platforms based on user-selected search parameters, and then subset the data and choose an appropriate output format.

    If you have any other questions regarding our Forest Watch Satellite data holdings, please contact our User Services Personnel (support@eos-webster.sr.unh.edu).

    Available Data Sets:

    Many New England subsets are available, based on the location of participating schools in the Forest Watch program. Additional scenes are also included based on historical use within the Forest Watch program. Other scenes may be added in the future. If you don't see a scene of the location you are interested in, and that location is within New England, then please contact User Services (support@eos-webster.sr.unh.edu) to see if we can custom-create a subset for you.

    Data Format

    The data are currently held in EOS-WEBSTER in ERDAS Imagine (.img) format. This format is used by newer versions of the MultiSpec program and by other image processing programs. Most of the subset scenes provided through this system have been projected to a Lambert projection so that MultiSpec can display latitude and longitude values for each image cell (see "Using Mac MultiSpec to Display Lat./Lon. Coordinates" at http://www.forestwatch.sr.unh.edu/online/).

    Data can be ordered by spectral type. For Landsat 7, three spectral types are available: 1) Multispectral (bands 1-5 & 7), 2) Panchromatic (pan), and 3) Thermal (bands 6 a&b) (see Table 2). The multispectral (ms) files contain six bands of data, the panchromatic (pan) files contain one band of data, and the thermal (therm) files contain two bands of data representing a high and low sensor gain.

    A header file is provided for most Landsat 7 subsets which have been projected in the Lambert projection. This header file provides the necessary information for importing the data into MultiSpec for Latitude/Longitude display.

  18. o

    NASA SPoRT Dust Event Labels

    • explore.openaire.eu
    Updated Mar 22, 2021
    Cite
    Nicholas Elmer; Emily Berndt; Sebastian Harkema; Angela Burke; Kevin Fuell; Caley Feemster; Robert Junod (2021). NASA SPoRT Dust Event Labels [Dataset]. http://doi.org/10.5281/zenodo.4627951
    Explore at:
    Dataset updated
    Mar 22, 2021
    Authors
    Nicholas Elmer; Emily Berndt; Sebastian Harkema; Angela Burke; Kevin Fuell; Caley Feemster; Robert Junod
    Description

    GENERAL INFORMATION
    1. Title of Dataset: SPoRT Dust Event Labels
    2. Author Information:
       A. Nicholas Elmer, NASA Postdoctoral Program, NASA Marshall Space Flight Center, Huntsville, Alabama, USA, nicholas.j.elmer@nasa.gov
       B. Emily Berndt, Earth Science Office, NASA Marshall Space Flight Center, Huntsville, Alabama, USA, emily.b.berndt@nasa.gov
    3. Date of data collection: 2018-01-14 to 2020-06-09
    4. Geographic location of data collection: Southwest United States (west longitude 126.0 W, east longitude 90.0 W, south latitude 24.0 N, north latitude 45.0 N)
    5. Funding source: Data collection was supported by the NASA Short-term Prediction Research and Transition (SPoRT) project at NASA Marshall Space Flight Center.

    DATA & FILE OVERVIEW
    1. File List: testing_dataset.txt, training_dataset.txt, validation_dataset.txt, and georeferenced polygon shapefiles (comprising .shp, .shx, .dbf, and .prj files) with timestamp {YYYY}{MM}{DD}T{HH}{MM}{SS}.
    2. Relationship between files: This dataset contains 1) georeferenced (WGS 1984) polygon shapefiles containing image classifications for airborne dust, and 2) text files listing the timestamps of the shapefiles used in the training, testing, and validation datasets of the Berndt et al. (2021) random forest dust detection model.

    METHODOLOGICAL INFORMATION
    1. Description of methods for collection: The dust labels were manually assigned by atmospheric scientists based largely on GOES-16 ABI Dust RGB imagery, supplemented by GOES-16 true color imagery, Area Forecast Discussions issued by NOAA National Weather Service Weather Forecast Offices, and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
    2. Methods for processing the data: GOES-16 ABI Dust RGB imagery was downloaded from Amazon Web Services and regridded to a 2-km rectangular grid. Feature labels were manually drawn on the imagery and classified by experts with the aid of a Python graphical user interface (GUI) based on the Tkinter package.

    DATA-SPECIFIC INFORMATION FOR SHAPEFILES
    1. Shapefile coordinate system: WGS 1984
    2. Number of shapefiles: 83
    3. Number of polygons per shapefile: varies
    4. Number of attributes per polygon: 1
    5. Attribute list: Class: 0 = No Dust, 1 = Dust, 2 = Reserved for future use, 3 = No Data Value
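
    As a usage illustration, the sketch below loads one of the timestamped label shapefiles with geopandas and maps the Class attribute to the meanings listed above; the file name is a placeholder following the {YYYY}{MM}{DD}T{HH}{MM}{SS} pattern, and the attribute is assumed to be stored under the name "Class" as listed.

        import geopandas as gpd

        CLASS_NAMES = {0: "No Dust", 1: "Dust", 2: "Reserved for future use", 3: "No Data Value"}

        labels = gpd.read_file("20200609T120000.shp")        # placeholder timestamped file name
        labels["class_name"] = labels["Class"].map(CLASS_NAMES)
        print(labels["class_name"].value_counts())           # polygon count per dust class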

  19. e

    Reconstruction monitoring in Beirut, Lebanon, following August 2020...

    • data.europa.eu
    Cite
    Joint Research Centre, Reconstruction monitoring in Beirut, Lebanon, following August 2020 explosion, for the Reform, Recovery and Reconstruction Framework (3RF) (2021-03-10) [Dataset]. https://data.europa.eu/data/datasets/aa4e3619-4e53-406d-8dec-954d9a6372d3?locale=el
    Explore at:
    esri file geodatabase
    Available download formats
    Dataset authored and provided by
    Joint Research Centre
    License

    http://data.europa.eu/eli/dec/2011/833/oj

    Area covered
    Lebanon, Beirut
    Description


    Activation date: 2021-03-10
    Event type: Industrial accident

    Activation reason:
    The explosion of a large amount of ammonium nitrate stored in a warehouse in the port of Beirut on the 4th of August 2020 had a devastating outcome not only for the port area but also for larger areas of Greater Beirut, reaching kilometres inland. The Reform, Recovery and Reconstruction Framework (3RF) has been developed by the World Bank Group, United Nations and European Union, bringing together civil society, the government and the international community in order to provide a roadmap to ensure that people's needs are addressed through a combination of socio-economic recovery and reform. EMSN087 provides the 3RF with data and information on the baseline damage assessment as of February 2021, and six subsequent monitoring assessments of the reconstruction progress on a quarterly basis, beginning in April 2021 and running until July 2022. While satellite imagery serves as the primary information source, ancillary data were made available by UN-Habitat Lebanon. UN-Habitat and the Municipalities of Beirut and Bourj Hammoud conducted a field survey for damage assessment after the explosion. The Rotary Club Beirut Cedars supported this survey by providing the GeoPal company's GeoPal application free of charge to assist the response. The field data were used to cross-check the damage assessment based on the satellite imagery from February 2021. A first reconstruction monitoring cycle was then performed to document the reconstruction progress relative to the February 2021 damage assessment, which served as the reference for the monitoring. Based on photointerpretation of satellite imagery acquired in April 2021, each damaged building/feature identified in the baseline was assigned a reconstruction status.

    Damage assessment at building level (February 2021, baseline)
    The damage assessment was conducted at building level, including a focus on industrial areas or facilities of public interest. It was based on a visual inspection of satellite images comparing the pre-event, immediate post-event and post-event (February 2021) imagery. A photointerpretation key (PIK) was developed as a guide for the imagery analysts to assign the damage grade and to allow the user to understand how the analysis was conducted. The classification is based on the following grading classes:
    Destroyed (very heavy structural damage, total or close to collapse)
    Damaged (visible structural and non-structural damage)
    Possible damage (uncertain interpretation due to reduced visibility)
    No visible damage (no visible damage in the satellite imagery data)
    Under reconstruction (already under reconstruction)
    New building (not existing before the explosion)
    Not analysed (due to e.g. cloud coverage, building shadow)
    The results of the analysis show that, for the vast majority of the buildings, no visible damage could be determined based on the visual analysis of the satellite imagery with the February 2021 reference data.

    Reconstruction monitoring at building level
    1st reconstruction monitoring analysis, April 2021
    The reconstruction monitoring was conducted at building level, with a focus on landfills, wrecks and storage waste. The monitoring was performed based on visual inspection of satellite images comparing the post-event situation as of February 2021 (used in the damage assessment) and April 2021. Photointerpretation keys (PIK) were developed to assess the reconstruction status of the buildings and features involved in the monitoring process. The classification is based on the following reconstruction status classes:
    New (structures/features not existing in the baseline damage assessment)
    Unchanged (no changes detected with respect to the damage grade identified in the baseline damage assessment of February 2021)
    Reconstruction ongoing (structures under reconstruction, e.g. cranes, scaffolding)
    Demolished (residuals of the demolishment are visible)
    Removed (structures in the damage assessment are missing in the images of April 2021)
    Reconstructed (structures detected as fully functional)
    No visible change (uncertain interpretation due to the quality of satellite imagery)
    Not analysed (due to e.g. cloud coverage, or non-visibility in the images)
    Results of the first reconstruction monitoring cycle, based on visual interpretation of the satellite imagery of April 2021, show that buildings assessed as damaged, destroyed or possible damage in February 2021 are mostly in the same condition and are therefore classified as unchanged. A small number are classified as reconstruction ongoing. Only 4 buildings are identified as reconstructed.

    2nd reconstruction monitoring analysis, July 2021
    Once again, the reconstruction monitoring was conducted at building footprint level, with a focus on landfills, wrecks and storage waste. The monitoring was performed on the bas
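
    As a usage illustration, the sketch below tabulates class values from the delivered Esri file geodatabase with geopandas; the geodatabase path, layer name and attribute name are assumptions for illustration only, since the actual product naming is not documented here.

        import fiona
        import geopandas as gpd

        gdb = "EMSN087_Beirut.gdb"                    # hypothetical path to the file geodatabase
        print(fiona.listlayers(gdb))                  # inspect the delivered layer names first
        buildings = gpd.read_file(gdb, layer="damage_assessment")   # hypothetical layer name
        print(buildings["damage_grade"].value_counts())              # hypothetical attribute name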

  20. f

    Table_1_Enhancement of Event-Related Desynchronization in Motor Imagery...

    • frontiersin.figshare.com
    docx
    Updated May 31, 2023
    Cite
    Jiaxin Xie; Maoqin Peng; Jingqing Lu; Chao Xiao; Xin Zong; Manqing Wang; Dongrui Gao; Yun Qin; Tiejun Liu (2023). Table_1_Enhancement of Event-Related Desynchronization in Motor Imagery Based on Transcranial Electrical Stimulation.DOCX [Dataset]. http://doi.org/10.3389/fnhum.2021.635351.s001
    Explore at:
    docx
    Available download formats
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Jiaxin Xie; Maoqin Peng; Jingqing Lu; Chao Xiao; Xin Zong; Manqing Wang; Dongrui Gao; Yun Qin; Tiejun Liu
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Due to individual differences in controlling brain-computer interfaces (BCIs), the applicability and accuracy of BCIs based on motor imagery (MI-BCIs) are limited. To improve BCI performance, this article examined the effect of transcranial electrical stimulation (tES) on brain activity during MI. The study designed an experimental paradigm combining tES and MI and examined the effects of tES on electroencephalogram (EEG) features of MI processing, including the power spectral density (PSD) and dynamic event-related desynchronization (ERD). Finally, we investigated the effect of tES on the accuracy of MI classification using linear discriminant analysis (LDA). The results showed that the ERD of the μ and β rhythms in the left-hand MI task was enhanced after electrical stimulation, with a significant effect in the tDCS group. The average classification accuracies of the transcranial alternating current stimulation (tACS) and transcranial direct current stimulation (tDCS) groups (88.19% and 89.93%, respectively) improved significantly compared with the pre-stimulation and pseudo-stimulation groups. These findings indicate that tES can improve the performance and applicability of BCIs and that tDCS is a potential approach for regulating brain activity and enhancing valid features during noninvasive MI-BCI processing.
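
    As a rough illustration of the quantities discussed above, the sketch below computes band-limited ERD% in the common Pfurtscheller convention (negative values indicate desynchronization) and fits an LDA classifier on band-power features. It uses placeholder data and an assumed sampling rate; it is not the authors' processing chain.

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        fs = 250  # sampling rate in Hz (assumption)

        def band_power(epoch, lo, hi):
            """Mean Welch PSD of a single-channel epoch within the [lo, hi] Hz band."""
            f, pxx = welch(epoch, fs=fs, nperseg=fs)
            return pxx[(f >= lo) & (f <= hi)].mean()

        def erd_percent(task_epoch, rest_epoch, lo=8, hi=13):
            """ERD% = (A - R) / R * 100, with A = band power during MI and R = reference (rest) power."""
            a, r = band_power(task_epoch, lo, hi), band_power(rest_epoch, lo, hi)
            return (a - r) / r * 100.0

        rest = np.random.randn(2 * fs)  # 2 s of single-channel EEG at rest (placeholder data)
        task = np.random.randn(2 * fs)  # 2 s during left-hand motor imagery (placeholder data)
        print("mu-band ERD%:", erd_percent(task, rest))

        # Toy LDA on mu/beta band-power features (random placeholders, illustration only).
        X = np.random.rand(40, 2)           # trials x [mu power, beta power]
        y = np.repeat([0, 1], 20)           # 0 = left-hand MI, 1 = right-hand MI
        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", lda.score(X, y))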
