License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
This dataset was developed as part of the final project requirements for the BSc in Computer Science at the British University of Bahrain.
The Acoustic Guitar Notes Dataset consists of almost 1500 acoustic guitar notes covering every possible note on a standard 6-string guitar up to the 16th fret, with the addition of the D2 and Dsharp2 notes due to the popularity of the Drop D tuning. The notes range from D2 (the lowest) up to Gsharp5 (the highest), spanning a frequency range of 73.42 Hz to 830.61 Hz. Each note class contains at least 24 recordings.
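For reference, these endpoint frequencies agree with the standard equal-temperament tuning formula (A4 = 440 Hz). The quick check below is purely illustrative; the MIDI note numbers used (D2 = 38, Gsharp5 = 80) are not part of the dataset's labels.

```python
# Illustrative check of the stated frequency range using the standard
# equal-temperament formula f = 440 * 2**((n - 69) / 12), where n is the
# MIDI note number. Not part of the dataset itself.
def midi_to_hz(n: int) -> float:
    return 440.0 * 2 ** ((n - 69) / 12)

print(round(midi_to_hz(38), 2))  # D2      -> 73.42 Hz
print(round(midi_to_hz(80), 2))  # Gsharp5 -> 830.61 Hz
```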
The notes were recorded using two different guitars, with an equal number of samples from each. The first is a Walden G551E guitar with steel strings, while the second is a Yamaha CM-40 classical guitar with nylon strings.
The dataset was designed with training Convolutional Neural Networks in mind. Each recording is exactly 2 seconds long at a 44.1 kHz sampling rate and has been converted to mono.
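Since each clip is a fixed-length mono recording, it maps naturally onto a fixed-size time-frequency "image" for a CNN. The sketch below shows one common way to do this with librosa; the file name used is hypothetical and does not reflect the dataset's actual naming scheme.

```python
# A minimal sketch of turning one 2-second recording into a CNN-ready
# log-mel spectrogram, assuming librosa is installed; the path below is
# a placeholder, not an actual file from the dataset.
import librosa
import numpy as np

y, sr = librosa.load("D2_spn.wav", sr=44100, mono=True)   # 2 s mono clip -> 88200 samples
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)             # log-scaled spectrogram for the CNN
print(mel_db.shape)                                        # e.g. (128, 173) mel bins x frames
```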
The notes included in the dataset are all sounded and played directly, with no additional techniques (such as hammer-ons or slides). Slight variations in playing style are included to add variance to the dataset. These variations are labeled with a three-character identifier at the end of each recording's title. Note that it is not advisable to train a model on these variations, as they are not exhibited consistently enough to be learned. The identifiers are, however, still included for the sake of completeness, or in case any of these variations produces a sound that is undesirable for a particular use case. A small parsing example follows the list below.
1- The first character denotes the type of string used in the recording:
   - An 's' denotes a steel string.
   - An 'n' denotes a nylon string.

2- The second character denotes the apparatus used to pluck the string:
   - A 'p' denotes that the string was plucked with a pick (or plectrum).
   - An 'f' denotes that the string was plucked with a finger or thumb.
   - An 'n' denotes that the string was plucked with a nail.

3- The third character denotes how the note was sounded:
   - An 'n' denotes that the note was sounded normally and allowed to ring out.
   - An 'l' denotes that the note was played louder than normal.
   - An 'm' denotes that the note was muted early with the palm (usually one second after playing).
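A small helper like the one below can decode this identifier, for example to exclude a variation you do not want in training. It assumes the three-character code sits at the very end of the file name, just before the extension (e.g. "..._spn.wav"); the exact naming pattern is an assumption, so adjust the slicing to match the actual files.

```python
# Sketch: decode the three-character variation code from a file name,
# assuming it is the last three characters before the extension.
from pathlib import Path

def variation_code(path: str) -> dict:
    stem = Path(path).stem   # drop the extension
    code = stem[-3:]         # string type, plucking apparatus, sounding
    return {
        "string": {"s": "steel", "n": "nylon"}[code[0]],
        "pluck": {"p": "pick", "f": "finger", "n": "nail"}[code[1]],
        "sounding": {"n": "normal", "l": "loud", "m": "muted"}[code[2]],
    }

print(variation_code("D2_spn.wav"))
# {'string': 'steel', 'pluck': 'pick', 'sounding': 'normal'}
```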
Note from the author:
I have tried my best to make sure that this dataset has been recorded professionally, structured correctly, and appropriately preprocessed. However, as both an amateur guitarist and a fledgling data scientist, I am unsure of how useful the dataset truly is for training an artificial intelligence model to recognize naturally played guitar recordings. If this dataset is truly useful, then I am committed to improving and expanding it where I can, and I am deeply curious as to whether anyone can use it outside the scope of my small university experiment.
If you have any suggestions, observations, or questions regarding the dataset, please feel free to email me at koohejix@gmail.com, and I will respond as promptly as I can. Thank you for taking the time to look through my dataset. Happy coding!