56 datasets found
  1. Mapillary Oriented Imagery Catalog

    • gemelo-digital-en-arcgis-gemelodigital.hub.arcgis.com
    Updated Apr 15, 2023
    Cite
    asturksever_mapillary_meta (2023). Mapillary Oriented Imagery Catalog [Dataset]. https://gemelo-digital-en-arcgis-gemelodigital.hub.arcgis.com/content/6a86b887f1cd47d3a97f36ff2b27717d
    Explore at:
    Dataset updated
    Apr 15, 2023
    Dataset provided by
    Mapillary – https://mapillary.com/
    Meta – http://meta.com/
    Authors
    asturksever_mapillary_meta
    Description

    With either the Oriented Imagery add-in for ArcGIS Pro or the Oriented Imagery widget for ArcGIS Experience Builder, you can access Mapillary’s growing archive of over 2 billion images directly from your ArcGIS Pro project or Experience Builder web app. Once the Mapillary oriented imagery catalog is added to your app or project, you’ll be able to click on a point of interest on your map or scene, then explore all the images of that location in Mapillary’s open database. Learn more about Oriented Imagery. Interested in adding your own imagery to Mapillary’s archive? Learn more about making your images available as open data.

    Getting Started:

    To add Mapillary imagery to your Experience Builder app:

    1. Create a new Experience Builder app, or edit an existing app.
    2. Add the Oriented Imagery widget to your web app.
    3. Configure the Oriented Imagery widget:
       • Select the map widget you want to use.
       • Under "Choose catalog," select "From item URL."
       • Under "Item URL," copy and paste https://mapillary-meta.maps.arcgis.com/home/item.html?id=6a86b887f1cd47d3a97f36ff2b27717d
       • Select Add.
       • Optionally, under "Configure editing," you can make feature layers from your web map or web scene available as overlays in the Oriented Imagery viewer.
    4. Save and preview your app.
    5. In the app, navigate to your area of interest, select the Oriented Imagery widget, and click anywhere highlighted in green to pull up images of that location.
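    For users who prefer scripting over the widget workflow above, the same catalog item can also be looked up programmatically. The following is a minimal sketch (not part of the catalog's documentation), assuming the item remains publicly shared under the ID shown in the URL above, using the ArcGIS API for Python:

```python
# Minimal sketch: look up the Mapillary Oriented Imagery Catalog item by ID.
# Assumes the item is publicly shared; an anonymous connection is used.
from arcgis.gis import GIS

gis = GIS("https://mapillary-meta.maps.arcgis.com")          # anonymous connection
item = gis.content.get("6a86b887f1cd47d3a97f36ff2b27717d")   # item ID from the URL above
print(item.title, item.type)
```

    The Oriented Imagery widget itself is still configured in the Experience Builder UI as described in the steps above; the snippet only confirms that the catalog item is reachable.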

  2. Oriented Image Layer for Sandbox

    • v8-wide-area-search-solution-napsg.hub.arcgis.com
    Updated Aug 26, 2022
    Cite
    NAPSG Foundation (2022). Oriented Image Layer for Sandbox [Dataset]. https://v8-wide-area-search-solution-napsg.hub.arcgis.com/datasets/oriented-image-layer-for-sandbox
    Explore at:
    Dataset updated
    Aug 26, 2022
    Dataset authored and provided by
    NAPSG Foundation
    Area covered
    Description

    A point layer used to store onsite observations with oriented images.

  3. Observation OIC

    • esrifrance.hub.arcgis.com
    Updated Aug 1, 2021
    Cite
    Esri France (2021). Observation OIC [Dataset]. https://esrifrance.hub.arcgis.com/content/09a44b4a0faf49fe90cecd145fe15b11
    Explore at:
    Dataset updated
    Aug 1, 2021
    Dataset provided by
    Esri – http://esri.com/
    Authors
    Esri France
    Description

    Oriented Imagery Catalog for QuickCapture Project

  4. CamenAI Horses OIC

    • esrinederland.hub.arcgis.com
    Updated Feb 16, 2024
    + more versions
    Cite
    Esri Nederland (2024). CamenAI Horses OIC [Dataset]. https://esrinederland.hub.arcgis.com/content/a445b7c195dc469eb2da81005b5e97db
    Explore at:
    Dataset updated
    Feb 16, 2024
    Dataset provided by
    Esri – http://esri.com/
    Authors
    Esri Nederland
    Description

    Oriented imagery catalog of data collected near IJzerlo, The Netherlands

  5. EDLO2ID: An Efficient-deep-learning-and-object-oriented Image Dataset for...

    • zenodo.org
    zip
    Updated Aug 15, 2023
    Cite
    Chang Li; Pengfei Zhang; Cairun Huang (2023). EDLO2ID: An Efficient-deep-learning-and-object-oriented Image Dataset for Large-scene Mapping [Dataset]. http://doi.org/10.5281/zenodo.6961937
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 15, 2023
    Dataset provided by
    Zenodo – http://zenodo.org/
    Authors
    Chang Li; Pengfei Zhang; Cairun Huang
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    EDLO2ID: An Efficient-deep-learning-and-object-oriented Image Dataset for Large-scene Mapping

    The dataset can be unzipped and includes an image dataset and a vector dataset covering nine land use/land cover categories (cropland, orchard, forestland, grassland, construction land, transportation land, water body, bare land, and terrace) for object-oriented remote sensing image mapping using deep learning.

  6. Data from: Dataset "Privacy-aware image classification and search"

    • data.niaid.nih.gov
    • eprints.soton.ac.uk
    Updated Oct 15, 2021
    Cite
    Siersdorfer, Stefan (2021). Dataset "Privacy-aware image classification and search" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4568970
    Explore at:
    Dataset updated
    Oct 15, 2021
    Dataset provided by
    Zerr, Sergej
    Demidova, Elena
    Hare, Jonathon
    Siersdorfer, Stefan
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Modern content sharing environments such as Flickr or YouTube contain a large number of private resources such as photos showing weddings, family holidays, and private parties. These resources can be of a highly sensitive nature, disclosing many details of the users' private sphere. In order to support users in making privacy decisions in the context of image sharing and to provide them with a better overview of privacy-related visual content available on the Web, we propose techniques to automatically detect private images and to enable privacy-oriented image search. In order to classify images, we use metadata such as title and tags, and we plan to use visual features, which are described in our scientific paper. The dataset used in the paper is now available.

    • Picalet! cleaned dataset (recommended for experiments)
    • userstudy (images annotated with queries, anonymized user ID, and privacy value)
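    As a rough illustration of the metadata-based classification the description mentions, the sketch below trains a toy privacy classifier on title/tag strings with scikit-learn. The example texts and labels are made up; this is only an approximation of the setting, not the authors' pipeline:

```python
# Toy sketch of metadata-based privacy classification (title/tag text).
# Example texts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["wedding family dinner", "sunset beach landscape",
         "birthday party kids", "mountain hiking trail"]
labels = [1, 0, 1, 0]  # 1 = private, 0 = public (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["family holiday photo"]))
```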

  7. Photo OIC

    • esrinederland.hub.arcgis.com
    Updated Mar 10, 2021
    Cite
    Esri Nederland (2021). Photo OIC [Dataset]. https://esrinederland.hub.arcgis.com/content/88aa87d7517b4ce1bed5f7e752ce9a30
    Explore at:
    Dataset updated
    Mar 10, 2021
    Dataset provided by
    Esri – http://esri.com/
    Authors
    Esri Nederland
    Description

    Oriented Imagery Catalog for QuickCapture Project

  8. Coast Train--Labeled imagery for training and evaluation of data-driven...

    • data.usgs.gov
    • catalog.data.gov
    Updated Jan 22, 2025
    + more versions
    Cite
    Phillipe Wernette; Daniel Buscombe; Jaycee Favela; Sharon Fitzpatrick; Evan Goldstein; Nicholas Enwright; Erin Dunand (2025). Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation [Dataset]. http://doi.org/10.5066/P91NP87I
    Explore at:
    Dataset updated
    Jan 22, 2025
    Dataset provided by
    United States Geological Survey – http://www.usgs.gov/
    Authors
    Phillipe Wernette; Daniel Buscombe; Jaycee Favela; Sharon Fitzpatrick; Evan Goldstein; Nicholas Enwright; Erin Dunand
    License

    U.S. Government Works – https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Jan 1, 2008 - Dec 31, 2020
    Description

    Coast Train is a library of images of coastal environments, annotations, and corresponding thematic label masks (or ‘label images’) collated for the purposes of training and evaluating machine learning (ML), deep learning, and other models for image segmentation. It includes image sets from both geospatial satellite, aerial, and UAV imagery and orthomosaics, as well as non-geospatial oblique and nadir imagery. Images include a diverse range of coastal environments from the U.S. Pacific, Gulf of Mexico, Atlantic, and Great Lakes coastlines, consisting of time-series of high-resolution (≤1m) orthomosaics and satellite image tiles (10–30m). Each image, image annotation, and labelled image is available as a single NPZ zipped file. NPZ files follow the following naming convention: {datasource}_{numberofclasses}_{threedigitdatasetversion}.zip, where {datasource} is the source of the original images (for example, NAIP, Landsat 8, Sentinel 2), {numberofclasses} is the number of classes us ...
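    A minimal sketch of inspecting one of the NPZ files described above with NumPy. The file name below is hypothetical (loosely following the stated naming convention), and the array keys inside each archive are not documented here, so the sketch lists them before use:

```python
# Minimal sketch: inspect a Coast Train NPZ file. The file name is hypothetical
# and the array keys are assumptions, so they are enumerated first.
import numpy as np

npz = np.load("naip_4class_000.npz", allow_pickle=True)  # hypothetical file name
print(npz.files)  # discover the actual array keys
for key in npz.files:
    arr = npz[key]
    print(key, getattr(arr, "shape", None), getattr(arr, "dtype", None))
```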

  9. OpenStreetMap (Focused Imagery Hybrid)

    • hub.arcgis.com
    Updated Apr 25, 2024
    + more versions
    Cite
    Natrona Regional Geospatial Cooperative (NRGC) (2024). OpenStreetMap (Focused Imagery Hybrid) [Dataset]. https://hub.arcgis.com/maps/cdee077dc0d54ec2bb497fd661ea258f
    Explore at:
    Dataset updated
    Apr 25, 2024
    Dataset authored and provided by
    Natrona Regional Geospatial Cooperative (NRGC)
    Area covered
    Description

    This web map presents a vector basemap of OpenStreetMap (OSM) data hosted by Esri. Esri created this vector tile basemap from the Daylight map distribution of OSM data, which is supported by Facebook and supplemented with additional data from Microsoft. It provides a reference layer featuring map labels, boundary lines, and roads and includes imagery. The OSM Daylight map will be updated every month with the latest version of OSM Daylight data. OpenStreetMap is an open collaborative project to create a free editable map of the world. Volunteers gather location data using GPS, local knowledge, and other free sources of information and upload it. The resulting free map can be viewed and downloaded from the OpenStreetMap site: www.OpenStreetMap.org. Esri is a supporter of the OSM project and is excited to make this enhanced vector basemap available to the ArcGIS user and developer communities.

  10. Data from: Aerial imagery from UAS survey of the intertidal zone at West...

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Aerial imagery from UAS survey of the intertidal zone at West Whidbey Island, WA, 2019-06-04 [Dataset]. https://catalog.data.gov/dataset/aerial-imagery-from-uas-survey-of-the-intertidal-zone-at-west-whidbey-island-wa-2019-06-04
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Island County, West Whidbey Avenue, Washington
    Description

    This portion of the data release presents the raw aerial imagery collected during the unmanned aerial system (UAS) survey of the intertidal zone at West Whidbey Island, WA, on 2019-06-04. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. Flights using both a nadir camera orientation and an oblique camera orientation were conducted. For the nadir flights (F04, F05, F06, F07, and F08), the camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The UAS was flown on pre-programmed autonomous flight lines at an approximate altitude of 70 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.8 centimeters per pixel. The flight lines were oriented roughly shore-parallel and were spaced to provide approximately 70 percent overlap between images from adjacent lines. For the oblique orientation flights (F03, F09, F10, and F11), the camera was mounted using a fixed mount on the bottom of the UAS and oriented facing forward with a downward tilt. The UAS was flown manually in a sideways-facing orientation with the camera pointed toward the bluff. The camera was triggered at 1 Hz using a built-in intervalometer. After acquisition, the images were renamed to include flight number and acquisition time in the file name. The coordinates of the approximate image acquisition location were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 3,336 JPG images. Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. The images from each flight are provided in a zip file named with the flight number.
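    As a back-of-the-envelope check of the stated 1.8 cm ground-sample distance at 70 m AGL, the pinhole-camera relation GSD = (sensor width × altitude) / (focal length × image width) can be evaluated with nominal Ricoh GR II figures. The sensor and lens values below are assumed published specs, not taken from the data release:

```python
# Rough GSD check, not part of the data release. Camera figures are assumed
# nominal Ricoh GR II specs.
sensor_width_mm = 23.7   # APS-C sensor width (assumed)
image_width_px = 4928    # image width in pixels (assumed)
focal_length_mm = 18.3   # fixed-lens focal length (assumed)
altitude_m = 70.0        # flight altitude AGL stated in the description

# Pinhole approximation: GSD = (sensor width * altitude) / (focal length * image width)
gsd_m = (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)
print(f"approximate GSD: {gsd_m * 100:.1f} cm/pixel")  # ~1.8 cm, consistent with the text
```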

  11. Attention-figshare-Datas.xlsx

    • figshare.com
    xlsx
    Updated Sep 3, 2020
    Cite
    Isabelle Leboeuf (2020). Attention-figshare-Datas.xlsx [Dataset]. http://doi.org/10.6084/m9.figshare.12912149.v1
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Sep 3, 2020
    Dataset provided by
    Figshare – http://figshare.com/
    Authors
    Isabelle Leboeuf
    License

    CC0 1.0 Universal Public Domain Dedication – https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Abstract

    Introduction: Compassion-focused imagery (CFI), one of the psychological interventions of compassion-focused therapy, is receiving increasing attention. It is a therapeutic tool that targets the process of self-criticism by prompting individuals to imagine themselves as compassionate or to imagine receiving compassion from an ideal compassionate other. This research examines the role of self-criticism in the attentional processing of emotional stimuli, namely, critical and compassionate facial expressions. It is hypothesized that the activation of positive social emotions through CFI plays a role in broadening attention in the processing of emotional stimuli.

    Method: The McEwan Faces stimulus set, which includes critical, neutral and compassionate faces, was used to create an attentional bias task called the dot probe task. The processing of emotional faces was assessed before and after exposure to either CFI or neutral imagery, controlling for the process of sensory integration (n = 80).

    Results: Before the imagery task, participants tended to look away from critical faces, and their level of self-criticism played a role. Both types of imagery significantly reduced the bias away from critical faces when the stimuli were presented for 1200 ms. This effect was reversed in the neutral condition for participants with high levels of self-criticism but not in the CFI condition.

    Discussion: Interestingly, self-criticism impacts the attentional treatment of critical faces and the effect of imagery entailing sensory integration on this treatment. CFI seems to preserve this effect for participants with high levels of self-criticism, possibly due to the activation of positive social emotions.

  12. Pedro Bank Jamaica Benthic Habitat condensed 2014

    • geospatial.tnc.org
    • maps-tnc.hub.arcgis.com
    Updated Oct 18, 2023
    Cite
    The Nature Conservancy (2023). Pedro Bank Jamaica Benthic Habitat condensed 2014 [Dataset]. https://geospatial.tnc.org/datasets/pedro-bank-jamaica-benthic-habitat-condensed-2014
    Explore at:
    Dataset updated
    Oct 18, 2023
    Dataset authored and provided by
    The Nature Conservancy – http://www.nature.org/
    Area covered
    Description

    WorldView-2, launched October 2009 and operated by DigitalGlobe Inc., is the first high-resolution 8-band multispectral commercial satellite. Operating at an altitude of 770 km, WorldView-2 provides 0.48 meter panchromatic resolution and 1.85 meter multispectral resolution. The mosaic used in this study was acquired on April 17th, 2014. The imagery was compromised by high-altitude cloud along the north-eastern limb of the bank, but these areas could be mapped through cross-comparison with Landsat imagery and the Google Earth database. For cost savings, the WorldView-2 images used in this study lacked the panchromatic channel and contained only the blue, green, red and infrared spectral bands. The dataset was thus commensurate with WorldView’s predecessor, QuickBird. The satellite’s blue, green and red bands are useful for benthic habitat mapping, and the fourth infrared band was used to identify emergent areas.

    In collaboration with the Living Oceans Foundation and the National Coral Reef Institute, a team from TNC visited the study area March 10th-20th, 2012 to collect ground data to support the mapping effort. The field team collected 96 GPS-referenced field points using a dropcam video and 27 km tracks of depth soundings. These video samples were analyzed and interpreted, serving as training sites to classify WorldView-2 satellite images that were collected on April 17th, 2014.

    An object-oriented approach was adopted for delineating benthic habitat in the satellite imagery. In contrast to pixel-based classification methods, object-oriented image analysis, the strategy used to produce the maps in this study, segments satellite data into landscape objects that have ecologically-meaningful shapes, and classifies the objects across spatial, spectral, and textural scales. In the context of this study, we employ object-oriented classification to delineate habitat “bodies”, interpreted to be distinct patches of uniform benthic habitat. Because of the flexibility afforded by including non-spectral attributes of the imagery (e.g., texture, spatial, and contextual information) into the classification workflow, object-oriented methods have been shown to yield significant accuracy improvements over traditional pixel-based image analysis techniques (Kelly and Tuxen, 2009; Purkis and Klemas, 2011; Purkis et al., 2014a,b). The software used for mapping in this study, eCognition (v. 8.9, Trimble Inc.), tenders a suite of object-oriented image analysis algorithms having particular utility for creating thematic maps from remote sensing data, including coral reefs.
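    The eCognition workflow described above is proprietary, but the underlying object-oriented idea (segment the image into objects, then classify per-object features) can be sketched with open-source tools. The snippet below uses scikit-image SLIC segmentation on a synthetic array purely as an illustrative stand-in for the study's actual segmentation and classification:

```python
# Illustrative object-based sketch: segment an image into objects and compute
# per-object mean band values that a classifier could consume later.
# The random array stands in for a WorldView-2 subset; nothing here reproduces
# the study's eCognition rule set.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

rgb = np.random.rand(256, 256, 3)  # placeholder imagery (3 bands)
segments = slic(rgb, n_segments=200, compactness=10, start_label=1)

features = []
for region in regionprops(segments):
    mask = segments == region.label
    features.append(rgb[mask].mean(axis=0))  # mean value per band within the object
features = np.asarray(features)
print(features.shape)  # (number of objects, 3 bands)
```

    A real workflow would replace the random array with the WorldView-2 bands and feed the per-object features, together with the field training points, to a classifier.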

  13. Aerial imagery from UAS survey of the intertidal zone at Puget Creek and...

    • catalog.data.gov
    • data.usgs.gov
    • +2 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Aerial imagery from UAS survey of the intertidal zone at Puget Creek and Dickman Mill Park, Tacoma, WA, 2019-06-03 [Dataset]. https://catalog.data.gov/dataset/aerial-imagery-from-uas-survey-of-the-intertidal-zone-at-puget-creek-and-dickman-mill-par-
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Tacoma, Washington
    Description

    This portion of the data release presents the raw aerial imagery collected during an Unmanned Aerial System (UAS) survey of the intertidal zone at Puget Creek and Dickman Mill Park, Tacoma, WA, on 2019-06-03. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. The camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The UAS was flown on pre-programmed autonomous flight lines at an approximate altitude of 50 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.3 centimeters per pixel. The flight lines were oriented roughly shore-parallel and were spaced to provide approximately 70 percent overlap between images from adjacent lines. The camera was triggered at 1 Hz using a built-in intervalometer. Flight F01 covered the Puget Creek area; flight F02 covered the Dickman Mill Park area. After acquisition, the images were renamed to include the flight number and acquisition time in the file name. The coordinates of the approximate image acquisition locations were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 1,171 JPG images. Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. The images from each flight are provided in a zip file named with the flight number.
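    The description notes that approximate acquisition coordinates were written into the image EXIF. A minimal sketch of reading those GPS tags back out of one released JPG with the exifread package follows; the file name is hypothetical and the exact tag set present in the released images is an assumption:

```python
# Minimal sketch: list GPS-related EXIF tags of one released JPG image.
# The file name is hypothetical; requires the exifread package.
import exifread

with open("F01_20190603T180000Z.jpg", "rb") as f:  # hypothetical file name
    tags = exifread.process_file(f, details=False)

for key, value in tags.items():
    if key.startswith("GPS"):
        print(key, value)
```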

  14. Aerial imagery from UAS survey of the intertidal zone at Post Point,...

    • catalog.data.gov
    • data.usgs.gov
    • +2 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Aerial imagery from UAS survey of the intertidal zone at Post Point, Bellingham Bay, WA, 2019-06-06 [Dataset]. https://catalog.data.gov/dataset/aerial-imagery-from-uas-survey-of-the-intertidal-zone-at-post-point-bellingham-bay-wa-201-
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Washington, Bellingham Bay, Post Point
    Description

    This portion of the data release presents the raw aerial imagery collected during an Unmanned Aerial System (UAS) survey of the intertidal zone at Post Point, Bellingham Bay, WA, on 2019-06-06. The imagery was acquired using a Department of Interior-owned 3DR Solo quadcopter fitted with a Ricoh GR II digital camera featuring a global shutter. The camera was mounted using a fixed mount on the bottom of the UAS and oriented in an approximately nadir orientation. The UAS was flown on pre-programmed autonomous flight lines which were oriented roughly shore-parallel and were spaced to provide approximately 70 percent overlap between images from adjacent lines. Three flights (F01, F02, F03) covering the survey area were conducted at an approximate altitude of 70 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.8 centimeters per pixel. Two additional flights (F04, which was aborted early and not included in this data release, and F05) were conducted over a smaller area within the main survey area at an approximate altitude of 35 meters AGL, resulting in a nominal GSD of 0.9 centimeters per pixel. The camera was triggered at 1 Hz using a built-in intervalometer. After acquisition, the images were renamed to include the flight number and acquisition time in the file name. The coordinates of the approximate image acquisition locations were added ('geotagged') to the image metadata (EXIF) using the telemetry log from the UAS onboard single-frequency autonomous GPS. The image EXIF were also updated to include additional information related to the acquisition. Although the images were recorded in both JPG and camera raw (Adobe DNG) formats, only the JPG images are provided in this data release. The data release includes a total of 1,662 JPG images. Images from takeoff and landing sequences were not used for processing and have been omitted from the data release. The images from each flight are provided in a zip file named with the flight number.

  15. Cerebral Activations Related to Audition-Driven Performance Imagery in...

    • plos.figshare.com
    doc
    Updated Jun 1, 2023
    Cite
    Robert Harris; Bauke M. de Jong (2023). Cerebral Activations Related to Audition-Driven Performance Imagery in Professional Musicians [Dataset]. http://doi.org/10.1371/journal.pone.0093681
    Explore at:
    Available download formats: doc
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Robert Harris; Bauke M. de Jong
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). The activation paradigm implied that subjects listened to two-part polyphonic music, while either critically appraising the performance or imagining they were performing themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted to 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were quite unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were less strong compared to musicians, this overlapping distribution indicated the recruitment of a general ‘mirror-neuron’ circuitry. These two levels of sensori-motor transformations point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.

  16. Oriented Cell Dataset (OCD)

    • ieee-dataport.org
    Updated May 18, 2024
    Cite
    Lucas Kirsten (2024). Oriented Cell Dataset (OCD) [Dataset]. https://ieee-dataport.org/documents/oriented-cell-dataset-ocd
    Explore at:
    Dataset updated
    May 18, 2024
    Authors
    Lucas Kirsten
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    OCD description. Cell lines: A172 and U251 (human glioblastoma); MCF7 (human breast cancer); MRC5 (human lung fibroblast); SCC25 (human squamous cell carcinoma). Cultivation conditions: CTR (cells belonging to the control group, without the addition of chemotherapy); TMZ (cells treated with 50 μM temozolomide in some cultivation step).

    Split

  17. Data for "Schlieren and BOS velocimetry of a round turbulent helium jet in...

    • zenodo.org
    • data.niaid.nih.gov
    bin, zip
    Updated Feb 19, 2022
    Cite
    Gary Settles; Alex Liberzon (2022). Data for "Schlieren and BOS velocimetry of a round turbulent helium jet in air" [Dataset]. http://doi.org/10.5281/zenodo.6136052
    Explore at:
    Available download formats: zip, bin
    Dataset updated
    Feb 19, 2022
    Dataset provided by
    Zenodo – http://zenodo.org/
    Authors
    Gary Settles; Alex Liberzon
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a subset of the data used in our publication "Schlieren and BOS velocimetry of a round turbulent helium jet in air".

    In the four helium-jet schlieren-image datasets given below, the nozzle diameter is 1.4 mm, the camera frame rate is 6000 fps and the individual image exposure is 0.0001667 s. 3000 images are provided for each dataset, or 1/2 s in real-time, which we determined to provide statistically-adequate data. Please see the paper for more detail.

    20210125-Run1: traditional mirror-schlieren, $Re_d = 5,890$, $U_j = 436$ m/s, and the scale is 0.29 mm/pixel.

    20210206-Run1: traditional mirror-schlieren, $Re_d = 11,300$, $U_j = 682$ m/s, and the scale is 0.29 mm/pixel.

    20210419-Run1: background-oriented schlieren (BOS), $Re_d = 5,890$, $U_j = 436$ m/s, and the scale is 0.26 mm/pixel. These are raw BOS images that must be processed with a reference image in order to yield pseudo-schlieren results. The reference (flow-off, tare) image is the first image in the sequence and is so named.

    20210420-Run1: background-oriented schlieren (BOS), $Re_d = 11,300$, $U_j = 682$ m/s, and the scale is 0.26 mm/pixel. These are raw BOS images that must be processed with a reference image in order to yield pseudo-schlieren results. The reference (flow-off, tare) image is the first image in the sequence and is so named.
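    The last two entries note that the raw BOS images must be processed against the flow-off reference image. The paper's actual BOS processing cross-correlates the background pattern between frames (as in PIV); the sketch below only forms a crude absolute-difference image as a quick visual check, and the file names are hypothetical:

```python
# Crude pseudo-schlieren check from one BOS frame and the flow-off reference.
# This is only an absolute difference for visual inspection, not the paper's
# cross-correlation processing. File names are hypothetical.
import numpy as np
import imageio.v3 as iio

reference = iio.imread("20210419-Run1_reference.tif").astype(np.float64)  # flow-off (tare) image
frame = iio.imread("20210419-Run1_frame0001.tif").astype(np.float64)      # any flow-on frame

diff = np.abs(frame - reference)
out = (255 * diff / diff.max()).astype(np.uint8)  # normalise for a quick look
iio.imwrite("pseudo_schlieren_0001.png", out)
```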

  18. Data from: Evaluating the versatility of EEG models generated from motor...

    • figshare.com
    application/x-rar
    Updated Nov 8, 2017
    Cite
    Xin Zhang; Carlo Menon; Xinyi Yong (2017). Evaluating the versatility of EEG models generated from motor imagery tasks: an exploratory investigation on upper-limb elbow-centered motor imagery tasks [Dataset]. http://doi.org/10.6084/m9.figshare.5579461.v1
    Explore at:
    Available download formats: application/x-rar
    Dataset updated
    Nov 8, 2017
    Dataset provided by
    figshare
    Authors
    Xin Zhang; Carlo Menon; Xinyi Yong
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Electroencephalography (EEG) has recently been considered for use in rehabilitation of people with motor deficits. EEG data from the motor imagery of different body movements have been used, for instance, as an EEG-based control method to send commands to rehabilitation devices that assist people to perform a variety of different motor tasks. However, it is both time- and effort-consuming to go through data collection and model training for every rehabilitation task. In this paper, we investigate the possibility of using an EEG model from one type of motor imagery (e.g., elbow extension and flexion) to classify EEG from other types of motor imagery activities (e.g., opening a drawer). In order to study the problem, we focused on the elbow joint. Specifically, nine kinesthetic motor imagery tasks involving the elbow were investigated in twelve healthy individuals who participated in the study. While results reported that models from goal-oriented motor imagery tasks had higher accuracy than models from the simple joint tasks in intra-task testing (e.g., a model from the elbow extension and flexion task was tested on EEG data collected from the elbow extension and flexion task), models from simple joint tasks had higher accuracies than the others in inter-task testing (e.g., a model from the elbow extension and flexion task was tested on EEG data collected from the drawer opening task). Simple single-joint motor imagery tasks could, therefore, be considered for training models to potentially reduce the number of repetitive data acquisitions and model training in rehabilitation applications.

  19. Quantitative index of enhancement results.

    • plos.figshare.com
    xls
    Updated Jun 15, 2023
    Cite
    Hongchun Zhu; Lijie Cai; Haiying Liu; Wei Huang (2023). Quantitative index of enhancement results. [Dataset]. http://doi.org/10.1371/journal.pone.0158585.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 15, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Hongchun Zhu; Lijie Cai; Haiying Liu; Wei Huang
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Quantitative index of enhancement results.

  20. The extraction accuracy corresponding to its optimal segmentation...

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Hongchun Zhu; Lijie Cai; Haiying Liu; Wei Huang (2023). The extraction accuracy corresponding to its optimal segmentation parameters. [Dataset]. http://doi.org/10.1371/journal.pone.0158585.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Hongchun Zhu; Lijie Cai; Haiying Liu; Wei Huang
    License

    Attribution 4.0 (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The extraction accuracy corresponding to its optimal segmentation parameters.
