http://researchdatafinder.qut.edu.au/display/n442
1.59 GB; md5sum: f87fb213c0e1c439e1b727fb258ef2cd QUT Research Data Repository Dataset Resource available for download
http://researchdatafinder.qut.edu.au/display/n1251
932 MB; md5sum: d039a0796349c5a1599a97a2a71ef1a4 QUT Research Data Repository Dataset Resource available for download
SAIVT Thermal Feature Detection
Overview
The SAIVT-Thermal Feature Detection Database contains a number of images suitable for evaluating the performance of feature detection and matching in the thermal image domain.
The database includes conditions unique to the thermal domain, such as non-uniformity noise, as well as conditions common to other domains, such as viewpoint changes, compression, and blur.
You can read our paper on eprints.
Contact Dr Simon Denman for further information.
Licensing
The SAIVT Thermal Feature Detection Database is © 2012 QUT and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Australia License.
Attribution
To attribute this database, please include the following citation:
Vidas, Stephen, Lakemond, Ruan, Denman, Simon, Fookes, Clinton B., Sridharan, Sridha, & Wark, Tim (2011) An exploration of feature detector performance in the thermal-infrared modality. In Bradley, Andrew, Jackway, Paul, Gal, Yaniv, & Salvado, Olivier (Eds.) Proceedings of the 2011 International Conference on Digital Image Computing: Techniques and Applications, IEEE, Sheraton Noosa Resort & Spa, Noosa, QLD, pp. 217-223. http://eprints.qut.edu.au/48161/
Acknowledging the database in your publications
In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications:
We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT Thermal Feature Detection Database for our research.
Installing the database
Download and unzip the following archive:
SAIVT-ThermalFeatureDetection.tar.gz (187MB, md5sum: 73565fcc95ae987adf446dd2cbc6be4c)
A copy of the publication can be found at http://eprints.qut.edu.au/48161/, and is also included in this package (Vidas 2011 - An exploration of feature detector performance in the thermal-infrared modality.pdf).
Related publications of interest may be found on the following webpages:
Stephen Vidas articles on eprints
Other articles by Stephen Vidas.
The database has the following structure:
Each of the ten environments is allocated its own directory.
Within most of these directories, thermal-infrared and visible-spectrum data is separated into the thermal and visible subdirectories respectively.
Within each of these subdirectories, a profile folder is present which contains a sequence of ideal (untransformed) images in 8-bit depth format.
The thermal subdirectories also contain a pure folder which contains identical images in their original 16-bit depth format (which is difficult to visualize).
Also within each thermal subdirectory there may be additional folders present.
Each of these folders contains images under a single, controlled image transformation, the acronyms for which are expanded at the end of this document. The level of transformation varies (generally increasing in severity) as the numerical label for each subfolder increases.
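The 16-bit 'pure' images mentioned above are hard to inspect directly, since typical viewers expect 8-bit data. A minimal min-max normalization sketch for display purposes (illustrative only, not part of the database tooling):

```python
def to_8bit(pixels):
    """Min-max normalize 16-bit intensity values to the 8-bit range [0, 255]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: avoid division by zero
        return [0] * len(pixels)
    return [(p - lo) * 255 // (hi - lo) for p in pixels]

# A 16-bit thermal image often occupies only a narrow band of the full range,
# which is why it appears almost uniformly dark without normalization.
raw = [27000, 27100, 27500, 28000]
print(to_8bit(raw))  # -> [0, 25, 127, 255]
```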
ACRONYMS:
CMP Image Compression
GAU Gaussian Noise
NRM Histogram Normalization
NUC Non-Uniformity Noise
OFB Out-of-focus Blur
QNT Quantization Noise
ROT Image Rotation
SAP Salt and Pepper Noise
TOD Time of Day Variation
VPT Viewpoint Change
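Given the folder layout described above, a transformation subfolder name can be split into its acronym and severity level. The 'acronym followed by a numeric label' pattern is an assumption made here for illustration:

```python
import re

# Transformation acronyms from the list above.
TRANSFORMS = {"CMP", "GAU", "NRM", "NUC", "OFB", "QNT", "ROT", "SAP", "TOD", "VPT"}

def parse_transform_folder(name):
    """Split a folder name like 'GAU2' into (acronym, severity level).

    The 'acronym + number' layout is assumed for illustration; returns None
    for folders (such as 'profile' or 'pure') that are not transformations.
    """
    m = re.fullmatch(r"([A-Z]{3})(\d+)", name)
    if m and m.group(1) in TRANSFORMS:
        return m.group(1), int(m.group(2))
    return None

print(parse_transform_folder("GAU2"))     # -> ('GAU', 2)
print(parse_transform_folder("profile"))  # -> None
```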
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset was collected for the assessment of a crowd counting algorithm. It is a vision dataset captured on the QUT campus and contains three challenging viewpoints, referred to as Camera A, Camera B and Camera C. The sequences contain reflections, shadows and difficult lighting fluctuations, which make crowd counting difficult. Furthermore, Camera C is positioned at a particularly low camera angle, leading to stronger occlusion than is present in other datasets.
The QUT datasets are annotated at sparse intervals: every 100 frames for Cameras B and C, and every 200 frames for Camera A, as this is a longer sequence. Testing is then performed by comparing the crowd size estimate to the ground truth at these sparse intervals, rather than at every frame. This closely resembles the intended real-world application of this technology, where an operator may periodically 'query' the system for a crowd count. Due to the difficulty of the environmental conditions in these scenes, the first 400-500 frames of each sequence are set aside for learning the background model.
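The sparse annotation scheme can be sketched as a small helper that lists the frame indices at which ground truth is available; zero-based frame indexing is an assumption here:

```python
def annotated_frames(camera, num_frames):
    """Frame indices with ground truth for the QUT crowd counting sequences:
    every 200 frames for Camera A, every 100 for Cameras B and C.
    Zero-based indexing is assumed for illustration."""
    interval = 200 if camera == "A" else 100
    return list(range(0, num_frames, interval))

print(annotated_frames("B", 450))  # -> [0, 100, 200, 300, 400]
print(annotated_frames("A", 450))  # -> [0, 200, 400]
```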
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Part of the dissertation Pitch of Voiced Speech in the Short-Time Fourier Transform: Algorithms, Ground Truths, and Evaluation Methods.
© 2020, Bastian Bechtold. All rights reserved.
Estimating the fundamental frequency of speech remains an active area of research, with varied applications in speech recognition, speaker identification, and speech compression. A vast number of algorithms for estimating this quantity have been proposed over the years, and a number of speech and noise corpora have been developed for evaluating their performance. The present dataset contains estimated fundamental frequency tracks for 25 algorithms evaluated on six speech corpora and two noise corpora at nine signal-to-noise ratios between -20 and 20 dB SNR, as well as an additional evaluation on synthetic harmonic tone complexes in white noise.
The dataset also contains pre-calculated performance measures, both novel and traditional, in reference to each speech corpus' ground truth, the algorithms' own clean-speech estimates, and our own consensus truth. It can thus serve as the basis for a comparison study, as a means to replicate existing studies from a larger dataset, or as a reference for developing new fundamental frequency estimation algorithms. All source code and data are available to download, and the results are entirely reproducible, albeit requiring about one year of processor time.
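Mixing speech with noise at a target SNR, as in the nine SNR conditions above, amounts to scaling the noise relative to the speech power. A minimal sketch, not the dissertation's actual pipeline:

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to `speech`. Equal-length sample sequences are assumed."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Target noise power is p_speech / 10^(snr_db / 10).
    scale = math.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(speech, noise)]

# At 0 dB SNR the scaled noise carries the same power as the speech.
print(mix_at_snr([1, -1, 1, -1], [0.5, 0.5, -0.5, -0.5], 0))  # -> [2.0, 0.0, 0.0, -2.0]
```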
Included Code and Data
ground truth data.zip
is a JBOF dataset of fundamental frequency estimates and ground truths of all speech files in the following corpora:
noisy speech data.zip
is a JBOF dataset of fundamental frequency estimates of speech files mixed with noise from the following corpora:
synthetic speech data.zip
is a JBOF dataset of fundamental frequency estimates of synthetic harmonic tone complexes in white noise.
noisy_speech.pkl
and synthetic_speech.pkl
are pickled Pandas dataframes of performance metrics derived from the above data for the following list of fundamental frequency estimation algorithms:
noisy speech evaluation.py
and synthetic speech evaluation.py
are Python programs to calculate the above Pandas dataframes from the above JBOF datasets. They calculate the following performance measures:
Pipfile
is a pipenv-compatible Pipfile for installing all prerequisites necessary for running the above Python programs.
The Python programs take about an hour to compute on a fast 2019 computer, and require at least 32 GB of memory.
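As an illustration of one traditional performance measure from the pitch estimation literature, gross pitch error (GPE) counts the voiced frames where an estimate deviates from the ground truth by more than 20%. This sketch is not the dissertation's implementation:

```python
def gross_pitch_error(estimates, truths, threshold=0.2):
    """Fraction of voiced frames (ground truth > 0) where the estimate
    deviates from the truth by more than `threshold` (20% is customary)."""
    voiced = [(e, t) for e, t in zip(estimates, truths) if t > 0]
    if not voiced:
        return 0.0
    errors = sum(1 for e, t in voiced if abs(e - t) / t > threshold)
    return errors / len(voiced)

# Two of the four voiced frames deviate by more than 20%:
print(gross_pitch_error([100, 210, 95, 380], [100, 100, 100, 300]))  # -> 0.5
```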
References:
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a custom-built blast database of higher plant viruses and viroids.
A challenge associated with the bioinformatics analysis of sequencing data for diagnostic purposes is the dependency on sequence databases for the taxonomic assignment of detections. Although public databases such as the GenBank database maintained at NCBI are the most up to date, the enormous size of these databases limits their portability across different computing resources. Moreover, sequencing data submitted by users to these public databases may not be accurate, and annotations provided in the GenBank record, such as the taxonomy assignment, which is crucial for accurate diagnosis, may be inaccurate and/or out of date. Additionally, the descriptors of the sequences in the public databases are not harmonized and lack taxonomic information, posing an additional challenge to validating sequence homology-based pathogen detections.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is an UNOFFICIAL host for the GTDB mash sketch based on GTDB r226
Intended use of this file is to include in the VEBA database for quicker GTDB-Tk analysis.
Created by running the following command using GTDB-Tk v2.4.1 on the S1 sample from Zenodo:7946802:
gtdbtk classify_wf --genome_dir veba_output/binning/prokaryotic/S1/output/genomes/ --out_dir test_output -x fa --cpus 1 --mash_db ./gtdb_r226.msh
Source Files:
RELEASE_NOTES.txt
Release 226.0:
--------------
GTDB release R10-RS226 comprises 732,475 genomes organised into 143,614 species clusters. Additional statistics for this release are available on the GTDB Statistics page.
Release notes:
--------------
- Post-curation cycle, we identified an updated spelling for 1 taxon and a valid name for a placeholder: g__Prometheoarchaeum (updated name: Promethearchaeum); f__MK-D1 (updated name: Promethearchaeaceae). Note that the LPSN linkouts point to the correct updated names. We encourage users to use the updated names, as these will appear in the next release.
- The QC criteria for GTDB were modified to consider both CheckM v1 and v2 completeness and contamination estimates. In order to pass QC, a genome must have completeness >=50%, contamination <5%, and quality (completeness - 5*contamination) >=50% using both the CheckM v1 and v2 estimates. The exception is that a genome comprised of <10 contigs passes QC if these criteria are met by either CheckM v1 or v2.
- Mash is no longer used as a prefilter for establishing GTDB species clusters, as this was found to be unnecessary with the prefiltering provided internally by skani (Shaw et al., Nat Methods, 2023).
- The 20% most heterogeneous sites were removed from the archaeal MSA using alignment_pruner.pl (https://github.com/novigit/broCode/blob/master/alignment_pruner.pl).
- The GTDB taxonomy tree now provides links to Sandpiper (https://sandpiper.qut.edu.au) results, which provide information about the geographic and environmental distribution of a taxon.
- We thank Jan Mares for his assistance in curating the class Cyanobacteriia, Peter Golyshin for bringing Ferroplasma acidiphilum strain Y (GCF_002078355.1) to our attention, and Brian Kemish for providing IT support to the project.
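The QC rule quoted above can be read as a small predicate; the function below is an illustrative reading of the release notes, not official GTDB tooling:

```python
def passes_qc(comp_v1, cont_v1, comp_v2, cont_v2, num_contigs):
    """GTDB r226 QC as described in the release notes: completeness >= 50,
    contamination < 5, and quality (completeness - 5*contamination) >= 50,
    under both CheckM v1 and v2. Genomes with fewer than 10 contigs need
    only satisfy one of the two estimators."""
    def ok(comp, cont):
        return comp >= 50 and cont < 5 and (comp - 5 * cont) >= 50
    if num_contigs < 10:
        return ok(comp_v1, cont_v1) or ok(comp_v2, cont_v2)
    return ok(comp_v1, cont_v1) and ok(comp_v2, cont_v2)

print(passes_qc(90, 2, 88, 3, 50))    # -> True
print(passes_qc(90, 2, 60, 4.5, 50))  # -> False (v2 quality 37.5 < 50)
print(passes_qc(90, 2, 60, 4.5, 5))   # -> True (few contigs: v1 alone suffices)
```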
If you have found this useful, please cite the original publications:
Attribution-ShareAlike 3.0 (CC BY-SA 3.0)https://creativecommons.org/licenses/by-sa/3.0/
License information was derived automatically
The SAIVT-Campus Database is an abnormal event detection database captured on a university campus, where the abnormal events are caused by the onset of a storm. Contact us for more information.
The SAIVT-Campus database is © 2012 QUT and is licensed under the Creative Commons Attribution-ShareAlike 3.0 License.
To attribute this database, please include the following citation:
Xu, Jingxin, Denman, Simon, Fookes, Clinton B., & Sridharan, Sridha (2012) Activity analysis in complicated scenes using DFT coefficients of particle trajectories. In 9th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2012), 18-21 September 2012, Beijing, China.
In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications:
We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT-Campus database for our research.
After downloading and unpacking the archive, you should have the following structure:
SAIVT-Campus
+-- LICENCE.txt
+-- README.txt
+-- test_dataset.avi
+-- training_dataset.avi
+-- Xu2012 - Activity analysis in complicated scenes using DFT coefficients of particle trajectories.pdf
The SAIVT-Campus dataset is captured at the Queensland University of Technology, Australia.
It contains two video files from real-world surveillance footage without any actors:
This dataset contains a mixture of crowd densities and it has been used in the following paper for abnormal event detection:
The normal activities include pedestrians entering or exiting the building, entering or exiting a lecture theatre (yellow door), and going to the counter at the bottom right. The abnormal events are caused by heavy rain outside, and include people running in from the rain, people walking towards the door to exit and then turning back, people wearing raincoats, loitering and standing near the door, and overcrowded scenes. The rain occurs only in the later part of the test dataset.
As a result, we assume that the training dataset only contains the normal activities. We have manually made an annotation as below:
SAIVT-DGD Database
Overview
Further information about the SAIVT-DGD database is available in our paper:
Sivapalan, Sabesan, Chen, Daniel, Denman, Simon, Sridharan, Sridha, & Fookes, Clinton B. (2011) Gait energy volumes and frontal gait recognition using depth images. In Proceedings of the International Joint Conference on Biometrics, Washington DC, USA, available at http://eprints.qut.edu.au/46382/
Licensing
The SAIVT-DGD database is © 2012 QUT, and is licensed under the Creative Commons Attribution-ShareAlike 3.0 Australia License.
Attribution
To attribute this database, please include the following citation:
Sivapalan, Sabesan, Chen, Daniel, Denman, Simon, Sridharan, Sridha, & Fookes, Clinton B. (2011) Gait energy volumes and frontal gait recognition using depth images. In Proceedings of the International Joint Conference on Biometrics, Washington DC, USA, available at http://eprints.qut.edu.au/46382/
Acknowledgement in publications
In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications:
'We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT-DGD database for our research'.
Installing the SAIVT-DGD Database
Download and unzip the following archives in the same directory:
SAIVT_DGD.tar.gz (1.6M, md5sum: da3916615b109557d5975aad9263e4a2)
SAIVT_DGD_depth_raw_sub0000_0009.tar.gz (957M, md5sum: ee25e12ed96356a516179859a9677455)
SAIVT_DGD_depth_raw_sub0010_0019.tar.gz (1.1G, md5sum: a80c3b5c5ce22a00709548486911dc9f)
SAIVT_DGD_depth_raw_sub0020_0035.tar.gz (1.7G, md5sum: e7261b6860c745f92c48fac0d900b983)
SAIVT_DGD_depth_silhouette.tar.gz (264M, md5sum: 7d297af6affa7342ea1b77442ca477e4)
SAIVT_DGD_volume.tar.gz (725M, md5sum: 99bc5b9dbf2a56474568115b257e274b)
At this point, you should have the following data structure and the SAIVT-DGD database is installed:
SAIVT-DGD
+-- DGD
+-- depth_raw
+-- depth_silhouette
+-- volume
+-- docs
The database itself is located in the DGD subdirectory. Documentation on the database, including calibration information and a copy of our paper, is included in the docs subdirectory.
http://researchdatafinder.qut.edu.au/display/n6106
QUT Research Data Repository Dataset Resource available for download
This dataset contains common speech and noise corpora for evaluating fundamental frequency estimation algorithms as convenient JBOF dataframes. Each corpus is available freely on its own, and allows redistribution:
CMU-ARCTIC (BSD license) [1]
FDA (free to download) [2]
KEELE (free for noncommercial use) [3]
MOCHA-TIMIT (free for noncommercial use) [4]
PTDB-TUG (ODBL license) [5]
NOISEX (free to download) [7]
QUT-NOISE (CC-BY-SA license) [8]
These files are published as part of my dissertation, "Pitch of Voiced Speech in the Short-Time Fourier Transform: Algorithms, Ground Truths, and Evaluation Methods", and in support of the Replication Dataset for Fundamental Frequency Estimation.
References:
John Kominek and Alan W Black. CMU ARCTIC database for speech synthesis, 2003.
Paul C Bagshaw, Steven Hiller, and Mervyn A Jack. Enhanced Pitch Tracking and the Processing of F0 Contours for Computer Aided Intonation Teaching. In EUROSPEECH, 1993.
F Plante, Georg F Meyer, and William A Ainsworth. A Pitch Extraction Reference Database. In Fourth European Conference on Speech Communication and Technology, pages 837–840, Madrid, Spain, 1995.
Alan Wrench. MOCHA MultiCHannel Articulatory database: English, November 1999.
Gregor Pirker, Michael Wohlmayr, Stefan Petrik, and Franz Pernkopf. A Pitch Tracking Corpus with Evaluation on Multipitch Tracking Scenario. page 4, 2011.
John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, and Victor Zue. TIMIT Acoustic-Phonetic Continuous Speech Corpus, 1993.
Andrew Varga and Herman J.M. Steeneken. Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems. Speech Communication, 12(3):247–251, July 1993.
David B. Dean, Sridha Sridharan, Robert J. Vogt, and Michael W. Mason. The QUT-NOISE-TIMIT corpus for the evaluation of voice activity detection algorithms. Proceedings of Interspeech 2010, 2010.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
SAIVT-BuildingMonitoring
Overview
The SAIVT-BuildingMonitoring database contains footage from 12 cameras capturing a single work day at a busy university campus building. A portion of the database has been annotated for crowd counting and pedestrian throughput estimation, and is freely available for download. Contact Dr Simon Denman for more information.
Licensing
The SAIVT-BuildingMonitoring database is © 2015 QUT, and is licensed under the Creative Commons Attribution-ShareAlike 4.0 License.
Attribution
To attribute this database, use the citation provided on our publication at eprints:
S. Denman, C. Fookes, D. Ryan, & S. Sridharan (2015) Large scale monitoring of crowds and building utilisation: A new database and distributed approach. In 12th IEEE International Conference on Advanced Video and Signal Based Surveillance, 25-28 August 2015, Karlsruhe, Germany.
Acknowledgement in publications
In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications:
'We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT-BuildingMonitoring database for our research'.
Installing the SAIVT-BuildingMonitoring Database
Download, join, and unzip the following archives:
Annotated Data
Part 1 (2GB, md5sum: 50e63a6ee394751fad75dc43017710e8)
Part 2 (2GB, md5sum: 49859f0046f0b15d4cf0cfafceb9e88f)
Part 3 (2GB, md5sum: b3c7386204930bc9d8545c1f4eb0c972)
Part 4 (2GB, md5sum: 4606fc090f6020b771f74d565fc73f6d)
Part 5 (632 MB, md5sum: 116aade568ccfeaefcdd07b5110b815a)
Full Sequences
Part 1 (2 GB, md5sum: 068ed015e057afb98b404dd95dc8fbb3)
Part 2 (2GB, md5sum: 763f46fc1251a2301cb63b697c881db2)
Part 3 (2GB, md5sum: 75e7090c6035b0962e2b05a3a8e4c59e)
Part 4 (2GB, md5sum: 34481b1e81e06310238d9ed3a57b25af)
Part 5 (2GB, md5sum: 9ef895c2def141d712a557a6a72d3bcc)
Part 6 (2GB, md5sum: 2a76e6b199dccae0113a8fd509bf8a04)
Part 7 (2GB, md5sum: 77c659ab6002767cc13794aa1279f2dd)
Part 8 (2GB, md5sum: 703f54f297b4c93e53c662c83e42372c)
Part 9 (2GB, md5sum: 65ebdab38367cf22b057a8667b76068d)
Part 10 (2GB, md5sum: bb5f6527f65760717cd819b826674d83)
Part 11 (2GB, md5sum: 01a562f7bd659fb9b81362c44838bfb1)
Part 12 (2GB, md5sum: 5e4a0d4bb99cde17158c1f346bbbdad8)
Part 13 (2GB, md5sum: 9c454d9381a1c8a4e8dc68cfaeaf4622)
Part 14 (2GB, md5sum: 8ff2b03b22d0c9ca528544193599dc18)
Part 15 (2GB, md5sum: 86efac1962e2bef3afd3867f8dda1437)
To rejoin the individual parts, use:
cat SAIVT-BuildingMonitoring-AnnotatedData.tar.gz.* > SAIVT-BuildingMonitoring-AnnotatedData.tar.gz
cat SAIVT-BuildingMonitoring-FullSequences.tar.gz.* > SAIVT-BuildingMonitoring-FullSequences.tar.gz
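After rejoining, it is worth checking the parts against the md5sums listed above. Any md5sum -c equivalent works; for example, a small helper using Python's standard library:

```python
import hashlib

def md5_matches(path, expected_hex, chunk_size=1 << 20):
    """Stream the file in chunks (the archives are ~2 GB each) and compare
    the md5 digest against the expected hex string."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()
```

For example, `md5_matches(part_path, "50e63a6ee394751fad75dc43017710e8")` should return True for Part 1 of the annotated data (the exact part filename produced by your download is not fixed here).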
At this point, you should have the following data structure and the SAIVT-BuildingMonitoring database is installed:
SAIVT-BuildingMonitoring
+-- AnnotatedData
    +-- P_Lev_4_Entry_Way_ip_107
        +-- Frames
            +-- Entry_ip107_00000.png
            +-- Entry_ip107_00001.png
            +-- ...
        +-- GroundTruth.xml
        +-- P_Lev_4_Entry_Way_ip_107-20140730-090000.avi
        +-- perspectivemap.xml
        +-- ROI.xml
    +-- P_Lev_4_external_419_ip_52
        +-- ...
    +-- P_Lev_4_External_Lift_foyer_ip_70
        +-- Frames
            +-- Entry_ip107_00000.png
            +-- Entry_ip107_00001.png
            +-- ...
        +-- GroundTruth.xml
        +-- P_Lev_4_External_Lift_foyer_ip_70-20140730-090000.avi
        +-- perspectivemap.xml
        +-- ROI.xml
        +-- VG-GroundTruth.xml
        +-- VG-ROI.xml
    +-- ...
+-- Calibration
    +-- Lev4Entry_ip107.xml
    +-- Lev4Ext_ip51.xml
    +-- ...
+-- FullSequences
    +-- P_Lev_4_Entry_Way_ip_107-20140730-090000.avi
    +-- P_Lev_4_external_419_ip_52-20140730-090000.avi
    +-- ...
+-- MotionSegmentation
    +-- Lev4Entry_ip107.avi
    +-- Lev4Entry_ip107-Full.avi
    +-- Lev4Ext_ip51.avi
    +-- Lev4Ext_ip51-Full.avi
    +-- ...
+-- Denman 2015 - Large scale monitoring of crowds and building utilisation.pdf
+-- LICENSE.txt
+-- README.txt
Data is organised into two sections, AnnotatedData and FullSequences. Additional data that may be of use is provided in Calibration and MotionSegmentation.
AnnotatedData contains the two-hour sections that have been annotated (from 11am to 1pm), alongside the ground truth and any other data generated during the annotation process. Each camera has a directory, the contents of which depend on what the camera has been annotated for.
All cameras will have:
a video file, such as P_Lev_4_Entry_Way_ip_107-20140730-090000.avi, which is the 2-hour video from 11am to 1pm
a Frames directory, which contains 120 frames taken at one-minute intervals from the sequence. These are the frames that have been annotated for crowd counting. Even if a camera has not been annotated for crowd counting (e.g. P_Lev_4_Main_Entry_ip_54), this directory is still included.
The following files exist for crowd counting cameras:
GroundTruth.xml, which contains the ground truth in the following format:
The file contains a list of annotated frames, and the location of the approximate centre of mass of any people within the frame. The interval-scale attribute indicates the distance between the annotated frames in the original video.
perspectivemap.xml, a file that defines the perspective map used to correct for perspective distortion. Parameters for a bilinear perspective map are included along with the original annotations that were used to generate the map.
ROI.xml, which defines the region of interest as follows:
This defines a polygon within the image that is used for crowd counting. Only people within this region are annotated.
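Deciding whether an annotated point falls inside the ROI polygon is a standard point-in-polygon test. A ray-casting sketch (the annotation tooling itself is not part of this release):

```python
def inside_roi(x, y, polygon):
    """Ray-casting point-in-polygon test. `polygon` is a list of (x, y)
    vertices; a horizontal ray is cast from the point and crossings of
    polygon edges are counted (an odd count means inside)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_roi(5, 5, square))   # -> True
print(inside_roi(15, 5, square))  # -> False
```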
For cameras that have been annotated with a virtual gate, the following additional files are present:
VG-GroundTruth.xml, which contains ground truth in the following format:
The ROI is repeated within the ground truth, and a direction of interest is also included, which indicates the primary direction for the gate (i.e. the direction that denotes a positive count). Each pedestrian crossing is represented by a tag, which contains the approximate frame in which the crossing occurred (when the centre of mass was at the centre of the gate region), the x and y location of the centre of mass of the person during the crossing, and the direction (0 being the primary direction, 1 being the secondary).
VG-ROI.xml, which contains the region of interest for the virtual gate.
The Calibration directory contains camera calibration for the cameras (with the exception of ip107, which has an uneven ground plane and is thus difficult to calibrate). All calibration is done using Tsai's method.
FullSequences contains the full sequences (9am - 5pm) for each of the cameras.
MotionSegmentation contains motion segmentation videos for all clips. Segmentation videos for both the full sequences and the 2-hour annotated segments are provided. Motion segmentation is done using the ViBE algorithm. Motion videos for the entire sequence have 'Full' in the file name before the extension (e.g. Lev4Entry_ip107-Full.avi).
Further information on the SAIVT-BuildingMonitoring database is available in our paper: S. Denman, C. Fookes, D. Ryan, & S. Sridharan (2015) Large scale monitoring of crowds and building utilisation: A new database and distributed approach. In 12th IEEE International Conference on Advanced Video and Signal Based Surveillance, 25-28 August 2015, Karlsruhe, Germany.
This paper is also available alongside this document in the file: 'Denman 2015 - Large scale monitoring of crowds and building utilisation.pdf'.
http://researchdatafinder.qut.edu.au/display/n85511
1.71 GB; md5sum: e7261b6860c745f92c48fac0d900b983 QUT Research Data Repository Dataset Resource available for download
Noisy MOBIO Landmarks
Overview
Face landmarks for the MOBIO database (https://www.idiap.ch/dataset/mobio) with different levels of noise, provided to evaluate face recognition in the presence of localisation noise. Contact Dr Simon Denman for further information.
Licensing
The Noisy MOBIO Landmarks are © 2014 QUT and are licensed under the Creative Commons Attribution-ShareAlike 3.0 Australia License.
Attribution
To attribute this database, include the following citation: K. Anantharajah, Z. Ge, C. McCool, S. Denman, C. Fookes, P. Corke, D. Tjondronegoro, S. Sridharan (2014) Local Inter-Session Variability Modelling for Object Classification. In IEEE Winter Conference on Applications of Computer Vision (WACV). Please note that authors should also cite and acknowledge the MOBIO database as outlined on the MOBIO website.
Downloading and using the Noisy MOBIO Landmarks database
Four sets of landmarks are provided, corresponding to added uniform noise equal to 2, 5, 10 and 20% of the average inter-eye distance:
landmarks_2.txt
landmarks_5.txt
landmarks_10.txt
landmarks_20.txt
Each file contains a list of images and new landmark points. Each line consists of (in order) the filename, right eye X coordinate, right eye Y coordinate, left eye X coordinate, and left eye Y coordinate. These landmark files should be used in place of the landmarks provided with the MOBIO database.
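Such a line can be parsed in a few lines of Python; whitespace separation and the example filename are assumptions for illustration:

```python
def parse_landmark_line(line):
    """Parse one landmark line: filename, then right-eye X/Y and left-eye X/Y.
    Whitespace separation is an assumption; adjust the split if the files
    use a different delimiter."""
    filename, rx, ry, lx, ly = line.split()
    return {"file": filename,
            "right_eye": (float(rx), float(ry)),
            "left_eye": (float(lx), float(ly))}

# Hypothetical example line, following the field order described above.
rec = parse_landmark_line("subj01_frame003.jpg 120.5 88.0 161.5 87.2")
print(rec["right_eye"])  # -> (120.5, 88.0)
```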
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset, the SAIVT-Campus Database, is an abnormal event detection database captured at the Queensland University of Technology, Australia. It contains two video files from real-world surveillance footage without any actors. Each video file is one hour in duration. The normal activities include pedestrians entering or exiting the building, entering or exiting a lecture theatre (yellow door), and going to the counter at the bottom right. The abnormal events are caused by heavy rain outside, and include people running in from the rain, people walking towards the door to exit and then turning back, people wearing raincoats, loitering and standing near the door, and overcrowded scenes. The rain occurs only in the later part of the test dataset. As a result, we assume that the training dataset contains only normal activities.
http://researchdatafinder.qut.edu.au/display/n6810
908 MB; md5sum: 79debc4b219bfc4dfe65eea06f57b458 QUT Research Data Repository Dataset Resource available for download
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes search results from various health databases, and files generated during analysis of the extracted data from the selected articles.
This dataset was produced by PhD student Tina Gingell for the thesis titled Exploring Food Security Among People with Lived Refugee Experiences using a Co-Design Approach and the project Connecting with Cultural Foods.
http://researchdatafinder.qut.edu.au/display/n14971
QUT Research Data Repository Dataset and Resources
http://researchdatafinder.qut.edu.au/display/n27416
SAIVT-SoftBio database, file 2 of 4. md5sum: fd4d7909c80c1b979309622c3ce1689b. Download all files, then join using cat SAIVT-SoftBio.tar.gz.* > SAIVT-SoftBio.tar.gz. QUT Research Data Repository Dataset Resource available for download
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Genotype sequences produced by MetaGaAP. If you use any of these sequences, please cite.