Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the raw data used for a research study that examined university students' music listening habits while studying. The study comprises two experiments: Experiment 1 is a retrospective online survey, and Experiment 2 is a mobile experience-sampling (ESM) study. This repository contains five Microsoft Excel files with data obtained from both experiments. The files are as follows:
onlineSurvey_raw_data.xlsx
esm_raw_data.xlsx
esm_music_features_analysis.xlsx
esm_demographics.xlsx
index.xlsx

Files Description

File: onlineSurvey_raw_data.xlsx
This file contains the raw data from Experiment 1, including the (anonymised) demographic information of the sample. The sample characteristics recorded are:
- studentship
- area of study
- country of study
- type of accommodation a participant was living in
- age
- self-identified gender
- language ability (mono- or bi-/multilingual)
- (various) personality traits
- (various) musicianship
- (various) everyday music uses
- (various) music capacity

The file also contains raw data of responses to the questions about participants' music listening habits while studying in real life. These data are:
- likelihood of listening to specific music genres (rated across 23 genres) while studying and during everyday listening
- likelihood of listening to music with specific acoustic features (e.g., with/without lyrics, loud/soft, fast/slow) while studying and during everyday listening
- general likelihood of listening to music while studying in real life
- verbatim responses to the open-ended questions about participants' real-life music listening habits while studying

File: esm_raw_data.xlsx
This file contains the raw data from Experiment 2, including the following variables:
- information about the music tracks each participant was listening to during each music episode, both while studying and during everyday listening (track name, artist name and, if available, Spotify ID)
- level of arousal at the onset of music playing and at the end of the 30-minute study period
- level of valence at the onset of music playing and at the end of the 30-minute study period
- specific mood at the onset of music playing and at the end of the 30-minute study period
- whether participants were studying
- their location at that moment (if studying)
- whether they were studying alone (if studying)
- the types of study tasks (if studying)
- the perceived level of difficulty of the study task
- whether participants were planning to listen to music while studying
- (various) reasons for music listening
- (various) perceived positive and negative impacts of studying with music

Each row represents the data for a single participant. Rows with a record of a participant ID but no associated data indicate that the participant did not respond to the questionnaire (i.e., missing data).

File: esm_music_features_analysis.xlsx
This file presents the music features of each recorded music track during both the study episodes and the everyday episodes (retrieved from Spotify's "Get Track's Audio Features" API). These features are:
- energy level
- loudness
- valence
- tempo
- mode

The contextual details of the moments each track was being played are also presented here, including:
- whether the participant was studying
- their location (e.g., at home, cafe, university)
- whether they were studying alone
- the type of study tasks they were engaging with (e.g., reading, writing)
- the perceived difficulty level of the task

File: esm_demographics.xlsx
This file contains the demographics of the sample in Experiment 2 (N = 10), which are the same as in Experiment 1 (see above). Each row represents the data for a single participant. Rows with a record of a participant ID but no associated demographic data indicate that the participant did not respond to the questionnaire (i.e., missing data).

File: index.xlsx
Finally, this file contains all the abbreviations used in each document as well as their explanations.
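For readers who want to re-derive the feature columns, the following is a minimal Python sketch of how the five features above could be retrieved from the Spotify Web API endpoint the authors name. The token, track ID, and helper-function names are placeholders of my own, and no request is actually sent here; access to this endpoint is also subject to Spotify's current API terms.

```python
# Hypothetical sketch: fetching the audio features recorded in
# esm_music_features_analysis.xlsx from Spotify's Web API.
# The endpoint path follows Spotify's "Get Track's Audio Features"
# reference; the token and track ID are placeholders.
import json
import urllib.request

API_BASE = "https://api.spotify.com/v1/audio-features/"
FEATURES = ("energy", "loudness", "valence", "tempo", "mode")

def build_request(track_id: str, bearer_token: str) -> urllib.request.Request:
    """Build an authorised GET request for one track's audio features."""
    return urllib.request.Request(
        API_BASE + track_id,
        headers={"Authorization": f"Bearer {bearer_token}"},
    )

def extract_features(response_body: str) -> dict:
    """Keep only the five features used in the dataset."""
    payload = json.loads(response_body)
    return {key: payload[key] for key in FEATURES}
```

In practice the request would be passed to `urllib.request.urlopen` with a valid OAuth token obtained from Spotify's accounts service.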
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘People who usually listen to music according to the number of hours per week. (% and mean value)’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from http://data.europa.eu/88u/dataset/https-opendata-euskadi-eus-catalogo-estadistica-territorio-zona-geografica-y-dimension-municipal-musica-personas-que-escuchan-habitualmente-musica-segun-el-numero-de-horas-a-la-semana-y-valor-medio- on 17 January 2022.
--- Dataset description provided by original source is as follows ---
The Basque Observatory of Culture was created to place culture as a central element of social and economic development, with the mission of rigorously filling the information gap in the cultural field, in line with the Basque Culture Plan of which it forms part. The Observatory's scope of action focuses on traditional areas of culture: cultural heritage, artistic creation and expression, industries and cross-cutting areas. The Basque Observatory of Culture publishes and updates more than 200 statistical indicators that can be consulted at euskadi.eus along with other research and reports.
--- Original source retains full ownership of the source dataset ---
Music is part of our lives.
Many of us can't stop listening to music and spend a considerable amount of time deciding what to listen to next or, what is worse, looking for that song whose title we have forgotten! How do we go about finding that song we want to listen to but have forgotten?
We can try to remember a fragment of the lyrics and simply use a text-based search engine. What if we don't recall the lyrics or they are in a language we don't even speak? Well, we can ask for help: we hum to the song and hope that someone will recognize it - no matter how poorly we do it. Think about it. It is amazing that we can recognize a song when we listen to it. But isn't it even more amazing that we can recognize it when someone else is humming or whistling to it? Wouldn't it be great to have an audio-based search engine that did this for us?
This would truly be extreme music recognition.
The MLEnd Hums and Whistles dataset will give you an opportunity to explore the non-trivial problem of recognizing music from extreme interpretations, in our case, hums and whistles produced by people like you and me. This dataset comes with additional demographic information about our participants, so that you can explore how people with different backgrounds interpret music. A small version of this dataset can be found here.
The MLEnd datasets have been created by students at the School of Electronic Engineering and Computer Science, Queen Mary University of London. Other datasets include the MLEnd Spoken Numerals dataset, also available on Kaggle. Do not hesitate to reach out if you want to know more about how we did it.
Enjoy!
The statistic provides data on favorite music genres among consumers in the United States as of July 2018, sorted by age group. According to the source, 52 percent of respondents aged 16 to 19 years old stated that pop music was their favorite music genre, compared to 19 percent of respondents aged 65 or above.

Country music in the United States – additional information
In 2012, country music topped the list; 27.6 percent of respondents picked it among their three favorite genres. A year earlier, the result was one percent lower, which allowed classic rock to take the lead. The figures show, however, that the genre's popularity across the United States is unshakeable, and it has also been spreading abroad. This is demonstrated by the international success of (among others) Shania Twain, or by the second place the Dutch country duo "The Common Linnets" took in the Eurovision Song Contest in 2014 with "Calm After the Storm."
The genre is also widely popular among American teenagers, earning second place and 15.3 percent of votes in a survey in August 2012. First place, with more than 18 percent of votes, went to pop music; rock scored 13.1 percent and landed in fourth place. Interestingly, Christian music made it into the top five with nine percent of votes. The younger generation is also widely represented among country music performers, with such prominent names as Taylor Swift (born in 1989), the highest-paid musician of 2015, and Hunter Hayes (born in 1991).
Country music is also able to attract crowds (and large sums of money) to live performances. Luke Bryan’s tour was the most successful tour in North America in 2016 based on ticket sales as almost 1.43 million tickets were sold for his shows. Fellow country singer, Garth Brooks, came second on the list, selling 1.4 million tickets for his tour in North America in 2016.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘People who usually listen to music according to the media they use. Multi-response (%)’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from http://data.europa.eu/88u/dataset/https-opendata-euskadi-eus-catalogo-estadistica-territorio-zona-geografica-y-dimension-municipal-musica-personas-que-escuchan-habitualmente-musica-segun-los-medios-que-utilizan-multirespuesta- on 18 January 2022.
--- Dataset description provided by original source is as follows ---
The Basque Observatory of Culture was created to place culture as a central element of social and economic development, with the mission of rigorously filling the information gap in the cultural field, in line with the Basque Culture Plan of which it forms part. The Observatory's scope of action focuses on traditional areas of culture: cultural heritage, artistic creation and expression, industries and cross-cutting areas. The Basque Observatory of Culture publishes and updates more than 200 statistical indicators that can be consulted at euskadi.eus along with other research and reports.
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘People who usually listen to music according to their preferences. Multi-response (%)’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from http://data.europa.eu/88u/dataset/https-opendata-euskadi-eus-catalogo-estadistica-territorio-zona-geografica-y-dimension-municipal-musica-personas-que-escuchan-habitualmente-musica-segun-sus-preferencias-multirespuesta- on 17 January 2022.
--- Dataset description provided by original source is as follows ---
The Basque Observatory of Culture was created to place culture as a central element of social and economic development, with the mission of rigorously filling the information gap in the cultural field, in line with the Basque Culture Plan of which it forms part. The Observatory's scope of action focuses on traditional areas of culture: cultural heritage, artistic creation and expression, industries and cross-cutting areas. The Basque Observatory of Culture publishes and updates more than 200 statistical indicators that can be consulted at euskadi.eus along with other research and reports.
--- Original source retains full ownership of the source dataset ---
CAL500 (Computer Audition Lab 500) is a dataset for evaluating music information retrieval systems. It consists of 502 songs of Western popular music. The audio is represented as a time series of the first 13 Mel-frequency cepstral coefficients (and their first and second derivatives), extracted by sliding a 12 ms half-overlapping short-time window over the waveform of each song. Each song has been annotated by at least 3 people with 135 musically relevant concepts spanning six semantic categories:
- 29 instruments, annotated as present in the song or not
- 22 vocal characteristics, annotated as relevant to the singer or not
- 36 genres
- 18 emotions, rated on a scale from one to three (e.g., "not happy", "neutral", "happy")
- 15 song concepts describing the acoustic qualities of the song, artist, and recording (e.g., tempo, energy, sound quality)
- 15 usage terms (e.g., "I would listen to this song while driving, sleeping, etc.")
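The framing described above can be made concrete with a short sketch: a 12 ms window slid with 50% overlap gives a 6 ms hop, and each frame carries 13 MFCCs plus their first and second derivatives (39 values). The edge and rounding conventions below are my assumptions; the CAL500 description does not specify padding.

```python
# Sketch of the frame layout implied by the description above.
FRAME_DIM = 13 * 3  # 13 MFCCs + delta + delta-delta coefficients

def num_frames(duration_s: float, win_ms: float = 12.0, overlap: float = 0.5) -> int:
    """Count full analysis windows covering a signal of the given duration
    (no zero-padding at the edges; this convention is an assumption)."""
    win_s = win_ms / 1000.0
    hop_s = win_s * (1.0 - overlap)  # 6 ms hop for 50% overlap
    if duration_s < win_s:
        return 0
    return int((duration_s - win_s) / hop_s) + 1
```

For example, one second of audio yields 165 half-overlapping 12 ms frames under this convention.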
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘Young People Survey’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/miroslavsabo/young-people-survey on 13 February 2022.
--- Dataset description provided by original source is as follows ---
In 2013, students of the Statistics class at FSEV UK (https://fses.uniba.sk/en/) were asked to invite their friends to participate in this survey. The data file (responses.csv) consists of 1010 rows and 150 columns (139 integer and 11 categorical). See the columns.csv file if you want to match the data with the original names. The variables can be split into the following groups:
Many different techniques can be used to answer many questions, e.g.
(in Slovak) Sleziak, P. - Sabo, M.: Gender differences in the prevalence of specific phobias. Forum Statisticum Slovacum. 2014, Vol. 10, No. 6. [Differences (gender + whether people lived in village/town) in the prevalence of phobias.]
Sabo, Miroslav. Multivariate Statistical Methods with Applications. Diss. Slovak University of Technology in Bratislava, 2014. [Clustering of variables (music preferences, movie preferences, phobias) + Clustering of people w.r.t. their interests.]
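The file structure described above (1010 rows, 139 integer and 11 categorical columns) suggests a simple loading check. The helper below is hypothetical, not code shipped with the dataset: it infers each column's type from its values, demonstrated on an inline stand-in rather than the real responses.csv.

```python
# Hypothetical helper for responses.csv-style files: separate integer-coded
# columns from categorical ones by inspecting the values.
import csv
import io

def split_column_types(csv_text: str):
    """Return (integer_columns, categorical_columns) for a CSV string."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(reader)
    integer_cols, categorical_cols = [], []
    for name in reader.fieldnames:
        values = [row[name] for row in rows if row[name] != ""]
        if values and all(v.lstrip("-").isdigit() for v in values):
            integer_cols.append(name)
        else:
            categorical_cols.append(name)
    return integer_cols, categorical_cols
```

Run against the real file, the two lists should come out with 139 and 11 entries respectively if the description is accurate.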
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The present study examines the prevalence, localization, frequency, and intensity of playing-related pain (PRP) in a sample of high-performing young musicians. We also address coping behavior and communication about PRP between young musicians, teachers, parents, and other people, such as friends. The aim is to provide information on PRP among high-performing musicians in childhood and adolescence, which can serve as a basis for music education, practice, and prevention in the context of instrumental teaching and musicians' health. The study is part of a large-scale study (N = 1,143) with highly musically gifted participants (age 9–24 years; M = 15.1; SD = 2.14; 62% female) at the national level of the "Jugend musiziert" (youth making music) contest. For data analyses, we used descriptive statistics, correlations, chi-square tests, principal component analysis, Kruskal–Wallis H tests, and multivariate regression. About three-quarters (76%) of the surveyed participants stated that they had experienced pain during or after playing their instrument. Female musicians were affected significantly more frequently (79%) than male musicians (71%). With increasing age, the prevalence of PRP rises from 71 percent (9–13 years) to 85 percent (18–24 years). Regarding localization of pain, our results are in line with many other studies, with musculoskeletal problems the most common. Furthermore, the data show a clear relationship between the duration of practice and the prevalence of PRP. Our study found an average practice duration of 7:18 h/week, whereas mean values of practice duration vary considerably between different instruments. The variance in practice duration is also very large within each instrument. Thus, when researching PRP, it is necessary to consider both the differences between instrument groups in average practice duration and the very large inter-individual variation within a given instrument group.
While just over half of the young musicians (56%) felt they had been taken seriously, 32 percent felt that their complaints were not completely taken seriously, and 12 percent did not feel taken seriously at all. Therefore, it is necessary to improve communication and information about PRP to prevent PRP and counteract existing complaints.
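To make the reported gender difference concrete, here is an illustrative chi-square computation in Python. The cell counts are rough reconstructions from the reported percentages (N = 1,143, 62% female, PRP in 79% of female and 71% of male musicians), not the study's actual data, so the statistic is only indicative of the kind of test the authors describe.

```python
# Illustrative only: Pearson chi-square for a 2x2 table, applied to
# approximate counts reconstructed from the reported percentages.
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square for the table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# rows: female / male; columns: PRP / no PRP (approximate counts)
stat = chi_square_2x2(560, 149, 308, 126)
```

The resulting statistic comfortably exceeds the 3.84 critical value at p < .05 (df = 1), consistent with the significant difference the study reports.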
AI Data License Agreement: https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the Thai Wake Word & Command Dataset, meticulously designed to advance the development and accuracy of voice-activated systems. This dataset features an extensive collection of wake words and commands, essential for triggering and interacting with voice assistants and other voice-activated devices. Our dataset ensures these systems respond promptly and accurately to user inputs, enhancing their reliability and user experience.
This training dataset comprises over 20,000 audio recordings of wake words and command phrases designed to build robust and accurate voice assistant speech technology. Each participant recorded 400 recordings in diverse environments and at varying speeds. This dataset contains audio recordings of wake words, as well as wake words followed by commands.
This dataset includes recordings of various types of wake words and commands, in different environments and at different speeds, making it highly diverse.
This extensive coverage ensures the dataset includes realistic scenarios, which is essential for developing effective voice assistant speech recognition models.
The dataset provides comprehensive metadata for each audio recording and participant.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Sports Analytics: The "PersonDetection" model could be used to analyze individual athletes' performances in various sports such as skateboarding, basketball, or soccer, by detecting and tracking the movements of players.
Surveillance Security: It could be utilized in CCTV systems and security cameras. By recognizing people in real-time, it could alert security personnel when unauthorized individuals are detected in restricted areas.
Social Distancing Detection: In light of the Covid-19 pandemic, this model could be used to enforce social distancing measures by tracking the number of people and their relative distances in public spaces like parks, malls, or transportation systems.
Smart Home Management: It can be deployed in smart home devices to recognize the home occupants and adapt the environment to their preferences, such as lighting and temperature, or even play their favorite music upon entry.
Motion Capture and Gaming: In the gaming and animation industry, this model could be used for real-time motion capture, allowing developers to create more realistic and immersive human characters.
AI Data License Agreement: https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the Hindi Wake Word & Command Dataset, meticulously designed to advance the development and accuracy of voice-activated systems. This dataset features an extensive collection of wake words and commands, essential for triggering and interacting with voice assistants and other voice-activated devices. Our dataset ensures these systems respond promptly and accurately to user inputs, enhancing their reliability and user experience.
This training dataset comprises over 20,000 audio recordings of wake words and command phrases designed to build robust and accurate voice assistant speech technology. Each participant recorded 400 recordings in diverse environments and at varying speeds. This dataset contains audio recordings of wake words, as well as wake words followed by commands.
This dataset includes recordings of various types of wake words and commands, in different environments and at different speeds, making it highly diverse.
This extensive coverage ensures the dataset includes realistic scenarios, which is essential for developing effective voice assistant speech recognition models.
The dataset provides comprehensive metadata for each audio recording and participant.
This dataset contains 160 high-quality audio samples for kick, snare, toms, and overheads (40 each) in .wav format. The samples are a collection of live-recorded and simulated drum sounds; each occupies roughly 530 KB.
This dataset was used in: Chhabra, A., Singh, A. V., Srivastava, R., & Mittal, V. (2020, December). Drum Instrument Classification Using Machine Learning. In 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN) (pp. 217-221). IEEE.
People can make use of this dataset for various tasks, including but not limited to:
- augmenting the sounds with audio-processing techniques to build a wider, richer dataset for ML tasks;
- classification, as in the paper above; the same data can also be used for clustering.
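As a starting point for such tasks, here is a small standard-library sketch for inspecting one of the .wav samples before feature extraction. The file path would be supplied by the user; nothing here depends on the dataset's specific contents.

```python
# Inspect a .wav drum sample's basic properties with the stdlib wave module.
import wave

def wav_summary(path: str) -> dict:
    """Return sample rate, channel count, and duration (s) of a .wav file."""
    with wave.open(path, "rb") as wf:
        frames = wf.getnframes()
        rate = wf.getframerate()
        return {
            "sample_rate": rate,
            "channels": wf.getnchannels(),
            "duration_s": frames / rate,
        }
```

A quick pass of this function over all 160 files is a cheap sanity check that sample rates and durations are consistent before training a classifier.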
Licence Ouverte / Open Licence 1.0: https://www.etalab.gouv.fr/wp-content/uploads/2014/05/Open_Licence.pdf
License information was derived automatically
The dataset concerns aid granted since 2011 under the support scheme for current and amplified music festivals.

Beneficiaries
- Generalist venues invested in a current music distribution project.
- Venues specially dedicated to current music.
- Festival-organizing structures.
- Local authorities developing a cultural policy in favour of current music.

Eligibility conditions
Beneficiaries must comply with all of the following criteria:
- take into account the constraints linked to spatial planning, in relation to the greater or lesser diversity of the offer proposed to residents;
- have proven public or private partnerships;
- present one or more original creations, by artists in residence or not, in their programming;
- develop actions aimed at ownership of the content by the public and local populations;
- play a structuring role in the cultural field on their territory;
- limit the ecological footprint of the festival;
- propose a pricing policy that promotes accessibility for the general public, in particular young people under the age of 26 (notably high-school students and apprentices) and jobseekers;
- finally, festivals must give rise to regular editions.

Nature and modality of aid
- Maximum rate of 30% excl. tax of the project cost, up to a maximum of €50,000.
- For events with at least two public partners other than the Region: maximum rate of 20% excl. tax of the project cost, up to a maximum of €100,000.

Attention: this aid cannot be combined with aid granted under the Regional Artistic and Cultural Permanence (PAC) scheme or with aid for events.