3 datasets found
  1. INDIGO Change Detection Reference Dataset

    • researchdata.tuwien.at
    jpeg, png, zip
    Updated Jun 25, 2024
    Cite
    Benjamin Wild; Geert Verhoeven; Rafał Muszyński; Norbert Pfeifer (2024). INDIGO Change Detection Reference Dataset [Dataset]. http://doi.org/10.48436/ayj4e-v4864
    Available download formats
    jpeg, zip, png
    Dataset updated
    Jun 25, 2024
    Dataset provided by
    TU Wien
    Authors
    Benjamin Wild; Geert Verhoeven; Rafał Muszyński; Norbert Pfeifer
    Description

    The INDIGO Change Detection Reference Dataset

    Description

    This graffiti-centred change detection dataset was developed in the context of INDIGO, a research project focusing on the documentation, analysis and dissemination of graffiti along Vienna's Donaukanal. The dataset aims to support the development and assessment of change detection algorithms.

    The dataset was collected at a test site approximately 50 metres in length along Vienna's Donaukanal on 11 days between 2022/10/21 and 2022/12/01. Various cameras with different settings were used, resulting in a total of 29 data collection sessions, or "epochs" (see "EpochIDs.jpg" for details). Each epoch is represented by 17 images rendered from one of 29 differently textured versions of the same 3D surface model. Pairing every epoch with every other epoch yields 29 × 28 / 2 = 406 epoch pairs; with 17 camera views per pair, the dataset comprises 6,902 unique image pairs in total, along with corresponding reference change maps. Additionally, exclusion masks are provided to ignore parts of the scene that might be irrelevant, such as the background.

    To summarise, the dataset, labelled as "Data.zip," includes the following:

    • Synthetic Images: These are colour images created within Agisoft Metashape Professional 1.8.4, generated by rendering views from 17 artificial cameras observing 29 differently textured versions of the same 3D surface model.
    • Change Maps: Binary images that were generated, manually and programmatically (using a Python script), from two synthetic graffiti images. These maps highlight the areas where changes have occurred (a minimal illustrative sketch follows this list).
    • Exclusion Masks: Binary images manually created from synthetic graffiti images to identify "no data" areas or irrelevant ground pixels.
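
    The authors' actual change-map generation procedure is not reproduced here. Purely to illustrate the output format described above (black = "change", white = "no change"), the following is a minimal sketch, assuming Pillow and NumPy are available, that derives a binary map from two co-registered synthetic images using a simple, hypothetical per-pixel colour-difference threshold.

    # Minimal sketch (not the authors' pipeline): derive a binary change map
    # from two co-registered synthetic images. The threshold value is hypothetical.
    import numpy as np
    from PIL import Image

    def change_map(path_old: str, path_new: str, threshold: float = 30.0) -> Image.Image:
        """Return a binary map: black (0) = change, white (255) = no change."""
        old = np.asarray(Image.open(path_old).convert("RGB"), dtype=np.float32)
        new = np.asarray(Image.open(path_new).convert("RGB"), dtype=np.float32)
        diff = np.linalg.norm(new - old, axis=-1)   # per-pixel colour difference
        return Image.fromarray(np.where(diff > threshold, 0, 255).astype(np.uint8), mode="L")

    # Example following the "X_Y.jpg" / "X_Z.jpg" naming, e.g. camera 5, epochs 3 and 7:
    # change_map("5_3.jpg", "5_7.jpg").save("5_37_estimate.png")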

    Image Acquisition

    Image acquisition involved two different camera setups. The first two datasets (ID 1 and 2; cf. "EpochIDs.jpg") were obtained with a 45.4 MP Nikon Z 7II camera paired with a Nikon NIKKOR Z 20 mm lens. The remaining datasets (ID 3-29) were acquired with a triple GoPro setup: two GoPro HERO 10 cameras and one GoPro HERO 11, all securely mounted within a frame. This triple-camera setup was used on nine different days with varying camera settings, yielding 27 image datasets in total (nine days with three datasets each).

    Data Structure

    The "Data.zip" file contains two subfolders:

    • 1_ImagesAndChangeMaps: This folder contains the primary dataset. Each subfolder corresponds to a specific epoch. Within each epoch folder resides a subfolder for every other epoch with which a distinct epoch pair can be created. It is important to note that the pairs "Epoch Y and Epoch Z" are equivalent to "Epoch Z and Epoch Y", so the latter combinations are not included in this dataset. Each sub-subfolder, organised by epoch, contains 17 more subfolders, which hold the image data. These subfolders consist of:
      • Two synthetic images rendered from the same synthetic camera ("X_Y.jpg" and "X_Z.jpg")
      • The corresponding binary reference change map depicting the graffiti-related differences between the two images ("X_YZ.png"). Black areas denote new graffiti (i.e. "change"), and white denotes "no change". "DataStructure.png" provides a visual explanation concerning the creation of the dataset.

        The filenames follow this pattern (a minimal loading sketch is given after this list):
        • X - the ID number of the synthetic camera; in total, 17 synthetic cameras were placed along the test site
        • Y - the reference epoch (i.e. the "older" epoch)
        • Z - the "new" epoch
    • 2_ExclusionMasks: This folder contains the binary exclusion masks. They were manually created from synthetic graffiti images and identify "no data" areas or areas considered irrelevant, such as "ground pixels". Two exclusion masks were generated for each of the 17 synthetic cameras:
      • "groundMasks": depict ground pixels which are usually irrelevant for the detection of graffiti
      • "noDataMasks": depict "background" for which no data is available.

    A detailed dataset description (including an in-depth explanation of how the data were created) is part of a journal paper currently in preparation. The paper will be linked here for further clarification as soon as it is available.

    Licensing

    Due to the nature of the three image types, this dataset comes with two licenses. Every synthetic image, change map and mask has its licensing information embedded as IPTC photo metadata. In addition, the images' IPTC metadata also provide a short image description, the image creator and the creator's identity (in the form of an ORCiD).

    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    If there are any questions, problems or suggestions for the dataset or the description, please do not hesitate to contact the corresponding author, Benjamin Wild.

  2. UC2018 DualMyo Hand Gesture Dataset

    • zenodo.org
    • data.niaid.nih.gov
    bin, zip
    Updated Jan 24, 2020
    Cite
    Miguel Simão; Pedro Neto; Olivier Gibaru (2020). UC2018 DualMyo Hand Gesture Dataset [Dataset]. http://doi.org/10.5281/zenodo.1320922
    Available download formats
    zip, bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Miguel Simão; Pedro Neto; Olivier Gibaru
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This is a set of data obtained from two consumer-market EMG sensors (Myo) while a subject performs 8 distinct hand gestures.

    There are a total of 110 repetitions of each class of gesture obtained across 5 recording sessions.

    Besides the dataset, which is saved in a Python pickle file, we include a Python test script to load the data, generate random synthetic sequences of gestures, and classify them with multiple models.
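
    The internal structure of the pickle file is not documented in this description, so the snippet below is only a minimal inspection sketch; the file name and the assumption that the pickle holds a dict- or array-like object are hypothetical and should be adapted to the downloaded archive (or replaced by the included test script).

    # Minimal inspection sketch; the file name and structure are assumptions.
    import pickle

    with open("dualmyo_dataset.pkl", "rb") as f:   # hypothetical file name
        data = pickle.load(f)

    # Report whatever top-level structure the pickle actually contains.
    if isinstance(data, dict):
        for key, value in data.items():
            print(key, type(value), getattr(value, "shape", None))
    else:
        print(type(data), getattr(data, "shape", None))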

    Gesture library:

    1. Rest
    2. Closed fist
    3. Open hand
    4. Wave in
    5. Wave out
    6. Double-tap
    7. Hand down
    8. Hand up

    Device placement:

    1. The two Myos are placed on the forearm with the USB port pointing outwards, palm and sensor 5 facing upwards.
    2. The Myos sit next to one another, with their middle position close to the thickest section of the forearm.
    3. The outer Myo is rotated slightly so that sensor 5 is aligned with the axis of the palmaris longus tendon.
    4. The inner Myo is rotated so that it sits at an angle of 22.5 degrees to the first Myo, clockwise from the subject's perspective.

    Acquisition protocol:

    The subjects wear the armbands according to the instructions above. The sensors are run for a few minutes to warm up.

    The subjects are requested to hold each gesture position for a few seconds while 2 seconds of data are recorded. The gestures are repeated in random order across several sessions.
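
    The included test script reportedly builds random synthetic gesture sequences from these recordings. As an illustration only, the sketch below shows one way such a sequence could be assembled by concatenating randomly ordered 2-second repetitions; the "recordings" structure and array shapes are assumptions, not the dataset's actual API.

    # Illustrative only: recordings[c] is assumed to be a list of
    # (samples x channels) NumPy arrays for gesture class c.
    import numpy as np

    def synthetic_sequence(recordings: dict, length: int, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        classes = list(recordings.keys())
        segments, labels = [], []
        for _ in range(length):
            c = rng.choice(classes)                                  # random gesture class
            rep = recordings[c][rng.integers(len(recordings[c]))]    # random repetition of it
            segments.append(rep)
            labels.append(c)
        return np.concatenate(segments, axis=0), labels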

  3. Job workload data and transaction logs for the eBlocBroker

    • zenodo.org
    application/gzip
    Updated Oct 21, 2022
    Cite
    Alimoglu (2022). Job workload data and transaction logs for the eBlocBroker [Dataset]. http://doi.org/10.5281/zenodo.7176702
    Available download formats
    application/gzip
    Dataset updated
    Oct 21, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alimoglu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    * What is eBlocBroker?

    eBlocBroker is a blockchain-based autonomous computational resource broker.

    * Job workload data and transaction logs for the eBlocBroker

    This repository contains job workload data and test results for the paper titled: "eBlocBroker: A Blockchain Based Autonomous Computational Resource Broker". eBlocBroker and its driver programs are available from the following GitHub repository: https://github.com/ebloc/ebloc-broker.

    We first deployed our eBlocBroker contract on bloxberg. We then tested eBlocBroker, and the robustness of our Python scripts that allow provider and requester nodes to interact with each other through eBlocBroker and cloud storage services, using two types of synthetic CPU workloads:

    1. This workload tests running source code against datasets that are either already cached or not yet cached on the provider. The source code is cppr (a coloured parallel push-relabel algorithm), which runs with additional datasets. Three cppr processes run one after another with different, randomly selected datasets. All four providers host the same 12 medium-size datasets, of which 3 (distinct for each provider) have lower prices. Two of the data files are the provider's registered data, and one comes from the requester's local storage.
    2. The NAS Parallel Benchmarks, a small group of programs targeting the performance evaluation of parallel supercomputers. One of the NAS serial (NPB3.3-SER) benchmarks in Class B (Block Tridiagonal solver, Scalar Pentadiagonal solver, Unstructured Adaptive mesh, or Lower-Upper Gauss-Seidel solver) is selected randomly. Since the providers' prices are the same, the calculated cost for NAS jobs is the same for all providers.

    In the test, our helper Python script maintains one hundred synthetic requesters within the requester node, which continually submit one of the workloads above, chosen at random, for 14 hours and 30 minutes.
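
    The actual helper script is part of the ebloc-broker repository linked above. As a rough illustration of the loop described here (and not the real API), the sketch below submits randomly chosen cppr or NAS jobs for the stated duration; submit_cppr_job and submit_nas_job are hypothetical stand-ins for the real submission helpers.

    # Rough illustration of the submission loop; the helpers are placeholders.
    import random
    import time

    BENCHMARKS = ["bt", "sp", "ua", "lu"]    # NAS Class B serial benchmarks
    TEST_DURATION_S = 14 * 3600 + 30 * 60    # 14 hours 30 minutes

    def submit_cppr_job(requester, provider):
        print(f"{requester} -> {provider}: cppr job with randomly selected datasets")

    def submit_nas_job(requester, provider, benchmark):
        print(f"{requester} -> {provider}: NAS {benchmark}.B job")

    def run_test(requesters, providers):
        end = time.time() + TEST_DURATION_S
        while time.time() < end:
            requester = random.choice(requesters)
            provider = random.choice(providers)
            if random.random() < 0.5:
                submit_cppr_job(requester, provider)
            else:
                submit_nas_job(requester, provider, random.choice(BENCHMARKS))
            time.sleep(random.uniform(1, 60))   # arbitrary pacing between submissions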

    This record provides logs of the clusters and clients; the results of the submitted jobs on each cluster, whether completed or failed, together with their gained and returned fees; logs of the submitted jobs and their transaction hashes; and Slurm's job submission information, all generated by the Driver programs.

    ** Transactions are taken from bloxberg (https://blockexplorer.bloxberg.org)

    - Transactions deployed on the eBlocBroker Smart Contract:
    https://blockexplorer.bloxberg.org/address/0xa0Fac3232234478E6A0d4d5564ed239c956A21f0/transactions

    - Transactions of the provider0_0x29e613B04125c16db3f3613563bFdd0BA24Cb629
    - Transactions of the provider1_0x1926b36af775e1312fdebcc46303ecae50d945af
    - Transactions of the provider2_0x4934a70Ba8c1C3aCFA72E809118BDd9048563A24
    - Transactions of the provider3_0x51e2b36469cdbf58863db70cc38652da84d20c67

    * Files

    Each provider folder contains eudat, gdrive, ipfs, and ipfs_gpg subfolders, which hold the patch results obtained from the named cloud storage service (a small sketch for walking these folders follows the tree below).

    $ tree -L 2 .
    ├── README.org
    ├── base_test_eblocbroker
    │ ├── NPB3.3-SER_source_code
    │ ├── README.md
    │ ├── _cppr
    │ ├── cppr
    │ ├── cppr_example.sh
    │ ├── datasets
    │ ├── run_cppr
    │ ├── setup.sh
    │ └── test_data
    ├── check_list.org
    ├── provider0_0x29e613B04125c16db3f3613563bFdd0BA24Cb629
    │ ├── ebloc-broker
    │ ├── eudat
    │ ├── gdrive
    │ ├── ipfs
    │ ├── ipfs_gpg
    │ ├── jobs_info_0x29e613b04125c16db3f3613563bfdd0ba24cb629.out
    │ ├── result_ipfs_hashes.txt
    │ ├── transactions_0x29e613B04125c16db3f3613563bFdd0BA24Cb629.csv
    │ └── watch_0x29e613b04125c16db3f3613563bfdd0ba24cb629.out
    ├── provider1_0x1926b36af775e1312fdebcc46303ecae50d945af
    │ ├── ebloc-broker
    │ ├── eudat
    │ ├── gdrive
    │ ├── ipfs
    │ ├── ipfs_gpg
    │ ├── jobs_info_0x1926b36af775e1312fdebcc46303ecae50d945af.out
    │ ├── result_ipfs_hashes.txt
    │ ├── transactions_0x1926b36af775e1312fdebcc46303ecae50d945af.csv
    │ └── watch_0x1926b36af775e1312fdebcc46303ecae50d945af.out
    ├── provider2_0x4934a70Ba8c1C3aCFA72E809118BDd9048563A24
    │ ├── ebloc-broker
    │ ├── eudat
    │ ├── ipfs
    │ ├── ipfs_gpg
    │ ├── jobs_info_0x4934a70ba8c1c3acfa72e809118bdd9048563a24.out
    │ ├── result_ipfs_hashes.txt
    │ ├── transactions_0x4934a70Ba8c1C3aCFA72E809118BDd9048563A24.csv
    │ └── watch_0x4934a70ba8c1c3acfa72e809118bdd9048563a24.out
    ├── provider3_0x51e2b36469cdbf58863db70cc38652da84d20c67
    │ ├── ebloc-broker
    │ ├── eudat
    │ ├── gdrive
    │ ├── ipfs
    │ ├── ipfs_gpg
    │ ├── jobs_info_0x51e2b36469cdbf58863db70cc38652da84d20c67.out
    │ ├── result_ipfs_hashes.txt
    │ ├── transactions_0x51e2b36469cdbf58863db70cc38652da84d20c67.csv
    │ └── watch_0x51e2b36469cdbf58863db70cc38652da84d20c67.out
    ├── requesters
    │ ├── ebloc-broker-logs
    │ └── gdrive
    └── transactions_contract_0xa0Fac3232234478E6A0d4d5564ed239c956A21f0.csv
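
    As a small convenience for exploring the archive, the sketch below walks the provider folders shown above and counts the rows of each transactions_*.csv. It relies only on the file names visible in the tree; the CSV column layout is not documented here and is therefore not parsed further.

    # Count transaction records per provider, using only the file names above.
    import csv
    from pathlib import Path

    root = Path(".")  # unpacked archive root
    for provider_dir in sorted(root.glob("provider*_0x*")):
        for csv_path in provider_dir.glob("transactions_*.csv"):
            with open(csv_path, newline="") as f:
                rows = sum(1 for _ in csv.reader(f))
            print(f"{provider_dir.name}: {rows} rows in {csv_path.name}")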
