This graffiti-centred change detection dataset was developed in the context of INDIGO, a research project focusing on the documentation, analysis and dissemination of graffiti along Vienna's Donaukanal. The dataset aims to support the development and assessment of change detection algorithms.
The dataset was collected at a test site approximately 50 metres in length along Vienna's Donaukanal on 11 days between 2022/10/21 and 2022/12/01. Various cameras with different settings were used, resulting in a total of 29 data collection sessions, or "epochs" (see "EpochIDs.jpg" for details). Each epoch contains 17 images, rendered from its respective 3D model (29 distinct, differently textured models in total). In total, the dataset comprises 6,902 unique image pairs, along with corresponding reference change maps. Additionally, exclusion masks are provided to ignore parts of the scene that are irrelevant for change detection, such as the background.
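The stated pair count is consistent with forming, for each of the 17 image positions, all unordered pairs across the 29 epochs. This is a reading of the numbers given above, not an official derivation from the dataset authors:

```python
from math import comb

epochs = 29           # data collection sessions ("epochs")
images_per_epoch = 17 # images rendered per epoch

# One pair per unordered combination of epochs, for each of the 17 images
pairs = images_per_epoch * comb(epochs, 2)
print(pairs)  # 6902
```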
To summarise, the dataset, labelled as "Data.zip," includes the following:
Image acquisition involved two different camera setups. The first two datasets (IDs 1 and 2; cf. "EpochIDs.jpg") were obtained using a 45.4 MP Nikon Z 7II camera paired with a Nikon NIKKOR Z 20 mm lens. For the remaining image datasets (IDs 3-29), a triple GoPro setup was employed: two GoPro HERO 10 cameras and one GoPro HERO 11, all securely mounted within a frame. This triple-camera setup was used on nine different days with varying camera settings, yielding 27 image datasets in total (nine days with three datasets each).
The "Data.zip" file contains two subfolders:
A detailed dataset description (including explanations of the data creation) is part of a journal paper currently in preparation. The paper will be linked here as soon as it is available.
Due to the nature of the three image types, this dataset comes with two licenses:
Every synthetic image, change map and mask has this licensing information embedded as IPTC photo metadata. In addition, the images' IPTC metadata also provide a short image description, the image creator and the creator's identity (in the form of an ORCiD).
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
If there are any questions, problems or suggestions for the dataset or the description, please do not hesitate to contact the corresponding author, Benjamin Wild.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This is a set of data obtained from two consumer-market EMG sensors (Myo armbands) while a subject performs 8 distinct hand gestures.
There are a total of 110 repetitions of each class of gesture obtained across 5 recording sessions.
Besides the dataset, which is saved as a Python pickle file, we include a Python test script to load the data, generate random synthetic sequences of gestures and classify them with multiple models.
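A minimal round-trip sketch of reading such a pickle file. The file name and record schema below are placeholders, not the dataset's actual layout; inspect the shipped pickle and test script before assuming a structure:

```python
import pickle

# Dummy stand-in record, NOT the real schema of the dataset's pickle file.
sample = {"gesture_id": 3, "emg": [[0] * 16] * 400}

with open("demo_gestures.pkl", "wb") as f:
    pickle.dump(sample, f)

# Loading works the same way for the real file; print/inspect the object
# first rather than assuming its internal layout.
with open("demo_gestures.pkl", "rb") as f:
    data = pickle.load(f)

print(data["gesture_id"])  # 3
```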
Gesture library:
Device placement:
Acquisition protocol:
The subjects wear the armbands according to the instructions above. The sensors are run for a few minutes to warm up.
The subjects are asked to hold each gesture for a few seconds while we record 2 seconds of data. The gestures are repeated in random order over several sessions.
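Assuming the Myo's published EMG streaming rate of 200 Hz, each 2-second recording holds about 400 samples per channel (16 channels across the two armbands). A common preprocessing step for such recordings is slicing them into overlapping windows; the window and step sizes below are purely illustrative:

```python
def sliding_windows(n_samples, win, step):
    """Return the start indices of fixed-length windows over one recording."""
    return list(range(0, n_samples - win + 1, step))

# 2 s at an assumed 200 Hz -> 400 samples; 100-sample windows, 50% overlap
starts = sliding_windows(400, win=100, step=50)
print(len(starts))  # 7 windows, starting at 0, 50, ..., 300
```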
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
* What is eBlocBroker?
eBlocBroker is a blockchain-based autonomous computational resource broker.
* Job workload data and transaction logs for the eBlocBroker
This repository contains job workload data and test results for the paper titled: "eBlocBroker: A Blockchain Based Autonomous Computational Resource Broker". eBlocBroker and its driver programs are available from the following GitHub repository: https://github.com/ebloc/ebloc-broker.
We first deployed our eBlocBroker contract on bloxberg. We then tested eBlocBroker, and the robustness of our Python scripts that allow provider and requester nodes to interact with each other through eBlocBroker and cloud storage services, using two types of synthetic CPU workloads, explained as follows:
In the test, our helper Python script maintains one hundred synthetic requesters within the requester node, which continually submit randomly chosen workloads for 14 hours and 30 minutes.
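The submission pattern described above can be sketched roughly as below. The workload names are placeholders and the loop is truncated for illustration; the actual driver scripts live in the ebloc-broker GitHub repository:

```python
import random

# Placeholder names standing in for the two synthetic CPU workload types.
WORKLOADS = ["cpu_workload_a", "cpu_workload_b"]
TEST_DURATION_S = 14 * 3600 + 30 * 60  # 14 hours 30 minutes

def pick_workload(rng):
    """Choose the next workload to submit, uniformly at random."""
    return rng.choice(WORKLOADS)

# In the real test each of the 100 synthetic requesters repeats this choice
# until the 14.5-hour deadline; here we only draw a handful of submissions.
rng = random.Random(42)
submissions = [pick_workload(rng) for _ in range(5)]
print(all(w in WORKLOADS for w in submissions))  # True
```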
This record provides logs of clusters and clients; results of the jobs submitted to each cluster, whether completed or failed, together with their gained and returned fees; logs of the submitted jobs and their transaction hashes; and Slurm's job submission information, all generated by the Driver programs.
** Transactions are taken from bloxberg (https://blockexplorer.bloxberg.org)
- Transactions deployed on the eBlocBroker Smart Contract:
https://blockexplorer.bloxberg.org/address/0xa0Fac3232234478E6A0d4d5564ed239c956A21f0/transactions
- Transactions of the provider0_0x29e613B04125c16db3f3613563bFdd0BA24Cb629
- Transactions of the provider1_0x1926b36af775e1312fdebcc46303ecae50d945af
- Transactions of the provider2_0x4934a70Ba8c1C3aCFA72E809118BDd9048563A24
- Transactions of the provider3_0x51e2b36469cdbf58863db70cc38652da84d20c67
* Files
Each provider folder contains eudat, gdrive, ipfs and ipfs_gpg subfolders, which hold the patch results obtained from the named cloud storage service.
$ tree -L 2 .
├── README.org
├── base_test_eblocbroker
│ ├── NPB3.3-SER_source_code
│ ├── README.md
│ ├── _cppr
│ ├── cppr
│ ├── cppr_example.sh
│ ├── datasets
│ ├── run_cppr
│ ├── setup.sh
│ └── test_data
├── check_list.org
├── provider0_0x29e613B04125c16db3f3613563bFdd0BA24Cb629
│ ├── ebloc-broker
│ ├── eudat
│ ├── gdrive
│ ├── ipfs
│ ├── ipfs_gpg
│ ├── jobs_info_0x29e613b04125c16db3f3613563bfdd0ba24cb629.out
│ ├── result_ipfs_hashes.txt
│ ├── transactions_0x29e613B04125c16db3f3613563bFdd0BA24Cb629.csv
│ └── watch_0x29e613b04125c16db3f3613563bfdd0ba24cb629.out
├── provider1_0x1926b36af775e1312fdebcc46303ecae50d945af
│ ├── ebloc-broker
│ ├── eudat
│ ├── gdrive
│ ├── ipfs
│ ├── ipfs_gpg
│ ├── jobs_info_0x1926b36af775e1312fdebcc46303ecae50d945af.out
│ ├── result_ipfs_hashes.txt
│ ├── transactions_0x1926b36af775e1312fdebcc46303ecae50d945af.csv
│ └── watch_0x1926b36af775e1312fdebcc46303ecae50d945af.out
├── provider2_0x4934a70Ba8c1C3aCFA72E809118BDd9048563A24
│ ├── ebloc-broker
│ ├── eudat
│ ├── ipfs
│ ├── ipfs_gpg
│ ├── jobs_info_0x4934a70ba8c1c3acfa72e809118bdd9048563a24.out
│ ├── result_ipfs_hashes.txt
│ ├── transactions_0x4934a70Ba8c1C3aCFA72E809118BDd9048563A24.csv
│ └── watch_0x4934a70ba8c1c3acfa72e809118bdd9048563a24.out
├── provider3_0x51e2b36469cdbf58863db70cc38652da84d20c67
│ ├── ebloc-broker
│ ├── eudat
│ ├── gdrive
│ ├── ipfs
│ ├── ipfs_gpg
│ ├── jobs_info_0x51e2b36469cdbf58863db70cc38652da84d20c67.out
│ ├── result_ipfs_hashes.txt
│ ├── transactions_0x51e2b36469cdbf58863db70cc38652da84d20c67.csv
│ └── watch_0x51e2b36469cdbf58863db70cc38652da84d20c67.out
├── requesters
│ ├── ebloc-broker-logs
│ └── gdrive
└── transactions_contract_0xa0Fac3232234478E6A0d4d5564ed239c956A21f0.csv
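For example, the per-provider transaction CSVs in the tree above can be summarised with a few lines of Python. The column names used in the dummy input are assumptions; inspect the real headers of the `transactions_0x....csv` files before parsing:

```python
import csv
import io

def count_transactions(csv_text):
    """Count the data rows of a transactions CSV (header excluded)."""
    return sum(1 for _ in csv.DictReader(io.StringIO(csv_text)))

# Dummy two-row CSV standing in for one provider's transactions file;
# the actual column names in the archive may differ.
demo = "tx_hash,block_number\n0xabc,100\n0xdef,101\n"
print(count_transactions(demo))  # 2
```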