In 2018, hip-hop and rap music accounted for 21.7 percent of total music consumption in the United States, more than double the share held by R&B. Other highly popular genres included pop and rock, whereas jazz accounted for just 1.1 percent of all music consumed in the U.S. that year.
Why are some genres more popular than others?
Whilst music is a highly subjective medium in terms of the listener’s taste and preferences, the top genres in terms of consumption tend not to fluctuate heavily. The catchiness and familiarity of pop music appeal to a wide range of music fans. Pop songs tend to be easy to listen to and remember, and they usually feature simple, snappy lyrics that avoid polarizing listeners. This makes pop less divisive than other genres: it is designed to generate mass appeal.
Conversely, religious music is by its very nature a niche genre in that it encompasses, describes, or advocates certain beliefs, which can alienate some listeners as strongly as it appeals to others, depending on their religious stance.
The hit genre of 2018 was hip-hop and rap, a music style notorious for its tendency to divide listeners. Drake arguably influenced sales within the genre that year, with ‘Scorpion’ topping the list of best-selling albums in the United States based on total streams and ‘Scary Hours’ also making the top ten. Drake came tenth in the list of most successful music tours in North America, with 79 million U.S. dollars in revenue from his live concerts; Jay-Z and Beyoncé, whose music is also strongly aligned with rap and hip-hop, ranked second with 166.4 million dollars in revenue.
Other artists in the genre who achieved significant influence in 2018 include Kendrick Lamar, Childish Gambino, Cardi B, Travis Scott and Post Malone, many of whom released songs that year which garnered hundreds of millions of audio streams. The sheer amount of hip-hop and rap music flooding the music industry has had a profound effect on the genre’s popularity, and musicians in the category tend to be prolific songwriters and active social media users. Equally, artists in the genre are arguably passionate about creating music which challenges social norms in a way that rock music has always been famous for.
This statistic shows public opinion in the United States, as of May 2018 and broken down by ethnicity, on which music genres are most representative of modern America. During the survey, 54 percent of White respondents stated that they considered country music to be representative of modern America.
The statistic provides data on favorite music genres among consumers in the United States as of July 2018, sorted by age group. According to the source, 52 percent of respondents aged 16 to 19 years old stated that pop music was their favorite music genre, compared to 19 percent of respondents aged 65 or above.
Country music in the United States – additional information
In 2012, country music topped the list; 27.6 percent of respondents picked it among their three favorite genres. A year earlier, the result was one percentage point lower, which allowed classic rock to take the lead. The figures show, however, that the genre’s popularity across the United States is unshakeable, and it has also been spreading abroad. This is demonstrated by the international success of (among others) Shania Twain, or by the second place the Dutch country duo “The Common Linnets” took in the 2014 Eurovision Song Contest with “Calm After the Storm.”
The genre is also widely popular among American teenagers, earning second place and 15.3 percent of votes in a survey in August 2012. First place, with more than 18 percent of votes, went to pop music; rock scored 13.1 percent and landed in fourth place. Interestingly, Christian music made it into the top five with nine percent of votes. The younger generation is also widely represented among country music performers, with such prominent names as Taylor Swift (born in 1989), who was the highest-paid musician in 2015, and Hunter Hayes (born in 1991).
Country music is also able to attract crowds (and large sums of money) to live performances. Luke Bryan’s tour was the most successful tour in North America in 2016 based on ticket sales, with almost 1.43 million tickets sold for his shows. Fellow country singer Garth Brooks came second on the list, selling 1.4 million tickets for his North American tour in 2016.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset is an overview of the people with whom Jay Z collaborated through his 13 solo studio albums released between 1996 and 2017.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset contains the statistics and attributes of the top 10 songs of various Spotify artists worldwide, together with their YouTube videos. The creators generated the data and uploaded it to Kaggle on February 6-7, 2023. The license for this data is "CC0: Public Domain", allowing the data to be copied, modified, distributed, and built upon without having to ask permission. The data is in numerical and textual CSV format, as attached. As described by the creators, the dataset includes 26 variables for each song collected from Spotify. These variables are briefly described next:
- Track: name of the song, as visible on the Spotify platform.
- Artist: name of the artist.
- Url_spotify: the URL of the artist.
- Album: the album in which the song is contained on Spotify.
- Album_type: indicates whether the song is released on Spotify as a single or contained in an album.
- Uri: a Spotify link used to find the song through the API.
- Danceability: describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
- Energy: a measure from 0.0 to 1.0 representing a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
- Key: the key the track is in. Integers map to pitches using standard Pitch Class notation, e.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
- Loudness: the overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.
- Speechiness: detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
- Acousticness: a confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
- Instrumentalness: predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater the likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
- Liveness: detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
- Valence: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
- Tempo: the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
- Duration_ms: the duration of the track in milliseconds.
- Stream: number of streams of the song on Spotify.
- Url_youtube: URL of the video linked to the song on YouTube, if it has one.
- Title: title of the video clip on YouTube.
- Channel: name of the channel that published the video.
- Views: number of views.
- Likes: number of likes.
- Comments: number of comments.
- Description: description of the video on YouTube.
- Licensed: indicates whether the video represents licensed content, meaning that the content was uploaded to a channel linked to a YouTube content partner and then claimed by that partner.
- official_video: boolean value indicating whether the video found is the official video of the song.

The data was last updated on February 7, 2023.
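For orientation, a minimal pandas sketch of loading and querying these columns. The file name `Spotify_Youtube.csv` is an assumption; substitute whatever name the Kaggle download uses:

```python
import pandas as pd

# Load the Spotify/YouTube top-10 dataset (file name assumed)
df = pd.read_csv("Spotify_Youtube.csv")

# Audio features described above, all bounded in [0, 1]
audio_features = ["Danceability", "Energy", "Speechiness", "Acousticness",
                  "Instrumentalness", "Liveness", "Valence"]

# Quick sanity check: summary statistics for the bounded features
print(df[audio_features].describe())

# Example query: tracks likely to mix music and speech, such as rap
# (Speechiness between 0.33 and 0.66 per the description above)
speechy = df[df["Speechiness"].between(0.33, 0.66)]
print(speechy[["Track", "Artist", "Speechiness"]].head())
```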
According to a study carried out by Deezer in May 2018, the most popular genre among Americans was rock music, with 56.8 percent of respondents stating that they were currently listening to music within this genre at the time of the survey. Pop and country music were the second and third most popular genres respectively, and 20.2 percent of respondents said they preferred jazz.
The appeal of rock and pop music
The broad appeal of rock and pop music can in part be attributed to how both genres often blend seamlessly into one another and influence other music styles. Heavy rock bands like Led Zeppelin and AC/DC are often more divisive than melodic rock groups like Bon Jovi or Genesis, just as pop music that strays into R&B territory or aligns with hip hop or EDM divides listeners differently than mainstream pop. Each has its appeal to fans with different tastes, and the versatility of rock and pop (and music which combines the two) allows such music to reach adults of all ages and backgrounds.
Rock albums also account for the majority of vinyl album sales in the United States, with pop albums ranking second. However, although the resurgence of vinyl has to a certain extent been reliant on the rock genre, this is not the case when it comes to digital music consumption. Rap and hip hop accounted for 22.8 percent of music video streams in the U.S. in 2018, whereas for rock music videos the share was just 7.1 percent. Rock fared similarly when it came to audio streams, once again losing out to rap and hip hop. Taking such data into consideration, it would seem that rock music fans are generally more drawn to traditional formats and are less inclined to enjoy their music via streaming platforms.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset is an overview of the samples contained on Jay Z's 13 solo studio albums released between 1996 and 2017.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
License information was derived automatically
Dataset Card for RapBank
RapBank is the first dataset for rap generation. The rap songs are collected from YouTube, and we provide a meticulously designed pipeline for data processing.
Dataset Details
Dataset Sources
Repository: https://github.com/NZqian/RapBank
Paper: https://arxiv.org/abs/2408.15474
Demo: https://nzqian.github.io/Freestyler/
Statistics
The RapBank dataset comprises links to a total of 94,164 songs. However, due to… See the full description on the dataset page: https://huggingface.co/datasets/zqning/RapBank.
According to the most recent data, the world's highest-earning hip-hop artist is Kanye West, who made an estimated 150 million U.S. dollars in the year leading to June 2019. Also in the ranking were Jay-Z with 81 million, Drake with 75 million, and Diddy with 70 million.
Hip-hop and rap music – additional information
Hip-hop and rap as a music genre and culture date back to the 1970s. Hip-hop started as a movement in the South Bronx in New York City, spreading among wider African American communities in the 1980s and finally gaining mainstream attention in the 2000s, with artists like OutKast and Kanye West.
Since its boom, the genre has maintained a strong presence in its home country. It is estimated that almost 6.3 million people in the U.S. attended R&B/rap/hip-hop concerts within a six-month period in 2018, a figure that was expected to grow to 6.93 million by 2020.
Canadian rapper Drake had one of the top-selling digital music albums in the U.S. in 2018 with his fifth studio album 'Scorpion', which was also the top-selling music album based on total streams that year. Despite Drake's popularity in North America, the artist lost out to the likes of Ed Sheeran, Pink and Taylor Swift when it came to a ranking of most profitable music tours in 2018.
This dataset contains audio statistics of the top 2,000 tracks on Spotify from 2000-2019. The data contains about 18 columns, each describing the track and its qualities.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Analysis of ‘Spotify Recommendation’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/bricevergnou/spotify-recommendation on 28 January 2022.
--- Dataset description provided by original source is as follows ---
(You can check how I used this dataset on my GitHub repository.)
I am basically a HUGE fan of music (mostly French rap, though with some exceptions, but I love music). One day, while browsing stuff on the Internet, I found Spotify's API. I knew I had to use it when I found out you could get information like danceability about your favorite songs just from their IDs.
Once I saw that, my machine learning instincts forced me to work on this project.
I collected 100 liked songs and 95 disliked songs.
For those I like, I made a playlist of my favorite 100 songs. It is mainly French rap, sometimes American rap, rock, or electro music.
For those I dislike, I collected songs from various kinds of music so the model would have a broader view of what I don't like.
There are:
- 25 metal songs (Cannibal Corpse)
- 20 "I don't like" rap songs (PNL)
- 25 classical songs
- 25 disco songs
I didn't include any pop songs because I'm kinda neutral about them.
From Spotify's "Get a Playlist's Items" API endpoint, I turned the playlists into JSON-formatted data which contains the ID and the name of each track (ids/yes.py and ids/no.py). NB: on the website, specify "items(track(id,name))" in the fields parameter to avoid being overwhelmed by useless data.
With a script (ids/ids_to_data.py), I turned the JSON data into a long string with each ID separated by a comma.
Then I just had to enter the strings into Spotify's "Get Audio Features for Several Tracks" endpoint and get my data files (data/good.json and data/dislike.json).
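A minimal sketch of that last step, assuming a valid OAuth token and a list of track IDs already extracted (the endpoint path is Spotify's documented Web API route; the token and IDs here are placeholders):

```python
import requests

ACCESS_TOKEN = "your-oauth-token-here"  # assumed: obtained via Spotify's OAuth flow
track_ids = ["3n3Ppam7vgaVa1iaRUc9Lp", "0VjIjW4GlUZAMYd2vXMi3b"]  # illustrative IDs

# Join IDs into the comma-separated string the endpoint expects (max 100 per call)
ids_param = ",".join(track_ids)

# "Get Audio Features for Several Tracks" endpoint
resp = requests.get(
    "https://api.spotify.com/v1/audio-features",
    params={"ids": ids_param},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

# Entries can be None for unresolvable IDs, so guard before reading
for features in resp.json()["audio_features"]:
    if features:
        print(features["id"], features["danceability"], features["energy"])
```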
From Spotify's API documentation:
And the variable that has to be predicted:
--- Original source retains full ownership of the source dataset ---
According to a study on representation and equality in the music industry, female artists were most prevalent in the pop genre between 2012 and 2023. While female artists released roughly **** percent of the highest-charting pop songs during that period, they only accounted for **** percent of top songs in the hip-hop/rap genre.
Women in the music industry
Gender inequality remains an ongoing issue across all areas of the music industry. According to the findings of a recent report, the share of female songwriters on the Billboard Top 100 charts has stood below ** percent for nearly a decade now, highlighting how little progress has been made in terms of equal representation in music over the years. Meanwhile, an even bigger representation gap can be observed when looking at the share of female producers in the United States. Women working as producers remain a rare sight, and between 2012 and 2020, only * to * percent of producers of the top-charting songs were female.
Who are the top female solo artists?
As of 2020, Nicki Minaj (Onika Maraj) and Rihanna (Robyn Fenty) were the two top-performing female solo artists in the United States, with ** songs each placing on the Billboard Top 100 charts. *** out of the top 13 performers in this ranking were women, and while **** of them fall under the pop category, two female artists (Nicki Minaj and Cardi B) release hip-hop/rap music. Considering that the latter genre is particularly hard to break into for women, the recent wave of successful female rappers (spearheaded by Nicki Minaj, Cardi B, Megan Thee Stallion, and Doja Cat, to name a few) might indicate the beginning of a new musical era.
Database Contents License (DbCL) 1.0 (http://opendatacommons.org/licenses/dbcl/1.0/)
Almost 30,000 Songs from the Spotify API. See the readme file for a formatted data dictionary table.
Data Dictionary:
variable | class | description |
---|---|---|
track_id | character | Song unique ID |
track_name | character | Song Name |
track_artist | character | Song Artist |
track_popularity | double | Song Popularity (0-100) where higher is better |
track_album_id | character | Album unique ID |
track_album_name | character | Song album name |
track_album_release_date | character | Date when album released |
playlist_name | character | Name of playlist |
playlist_id | character | Playlist ID |
playlist_genre | character | Playlist genre |
playlist_subgenre | character | Playlist subgenre |
danceability | double | Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable. |
energy | double | Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy. |
key | double | The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1. |
loudness | double | The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB. |
mode | double | Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0. |
speechiness | double | Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks. |
acousticness | double | A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic. |
instrumentalness | double | Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0. |
liveness | double | Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live. |
valence | double | A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry). |
tempo | double | The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration. |
duration_ms | double | Duration of song in milliseconds |
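As a quick illustration of working with this dictionary, a minimal pandas sketch. The file name `spotify_songs.csv` is an assumption; substitute whatever the download uses:

```python
import pandas as pd

# Load the ~30,000-song dataset (file name assumed)
songs = pd.read_csv("spotify_songs.csv")

# The release date is stored as a character column per the dictionary; parse it
songs["track_album_release_date"] = pd.to_datetime(
    songs["track_album_release_date"], errors="coerce"
)

# Example: mean popularity and danceability per playlist genre
summary = (
    songs.groupby("playlist_genre")[["track_popularity", "danceability"]]
    .mean()
    .sort_values("track_popularity", ascending=False)
)
print(summary)
```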
In Copyright (https://rightsstatements.org/page/InC/1.0/)
This research data set is related to the research project The multisemiotic construction of ’new’ ethnicities and (non)belonging in Finnish hip hop culture, funded by the Academy of Finland (2019-2022; project number 315461). The data set comprises (a) interview data from Finnish rappers and (b) media data concerning Finnish rappers.
UMD-350MB
The Universal MIDI Dataset 350MB (UMD-350MB) is a proprietary collection of 85,618 MIDI files curated for research and development within our organization. This collection is a subset sampled from a larger dataset developed for pretraining symbolic music models.
The field of symbolic music generation is constrained by limited data compared to language models. Publicly available datasets, such as the Lakh MIDI Dataset, offer large collections of MIDI files sourced from the web. While the sheer volume of musical data might appear beneficial, the actual amount of valuable data is less than anticipated, as many songs contain less desirable melodies with erratic and repetitive events.
The UMD-350MB takes a curation-focused approach to achieving more desirable output generations, centering on human-reviewed training examples of single-track melodies, chord progressions, leads, and arpeggios with an average duration of 8 bars. This was achieved by refining the dataset over 24 months, ensuring consistent quality and tempo alignment. Moreover, the dataset is normalized by setting the timing information to 120 BPM with a tick resolution (PPQ) of 96 and transposing the musical scales to C major and A minor (natural scales).
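For illustration, a minimal sketch of that kind of normalization using the `mido` library (the source does not name its tooling, so the library choice is an assumption; the semitone shift needed to reach C major or A minor would come from a separate key-detection step):

```python
import mido

def normalize_midi(path: str, out_path: str, semitone_shift: int = 0) -> None:
    """Rescale a MIDI file to PPQ 96, force 120 BPM, and transpose notes.

    Assumes the file contains a set_tempo event; otherwise the 120 BPM
    normalization would need a tempo message inserted explicitly.
    """
    mid = mido.MidiFile(path)
    scale = 96 / mid.ticks_per_beat  # tick rescale factor for the new resolution

    out = mido.MidiFile(ticks_per_beat=96)
    for track in mid.tracks:
        new_track = mido.MidiTrack()
        for msg in track:
            msg = msg.copy(time=round(msg.time * scale))
            if msg.type == "set_tempo":
                msg = msg.copy(tempo=mido.bpm2tempo(120))  # force 120 BPM
            elif msg.type in ("note_on", "note_off"):
                # Transpose, clamping to the valid MIDI note range 0-127
                msg = msg.copy(note=max(0, min(127, msg.note + semitone_shift)))
            new_track.append(msg)
        out.tracks.append(new_track)
    out.save(out_path)
```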
Melody Styles
A major portion of the dataset is composed of newly produced private data to represent modern musical styles.
Pop: 1970s to 2020s Pop music
EDM: Trance, House, Synthwave, Dance, Arcade
Jazz: Bebop, Ballad, Latin-Jazz, Bossa-Jazz, Ragtime
Soul: 80s Classic, Neo-Soul, Latin-Soul
Urban: Pop, Hip-Hop, Trap, R&B, Afrobeat
World: Latin, Bossa Nova, European
Other: Film, Cinematic, Game music and piano references
Actual MIDI files are unlabeled for unsupervised training.
Dataset Access
Please note that this is a closed-source dataset with very limited access. Considerations for access include proposals for data augmentation, chord extraction, and other enhancement methods, whether through scripts, algorithmic techniques, or manual editing in a DAW.
For inquiries about this dataset, please email us.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset was compiled by me for a personal project. It contains lyrics from 11 artists: Drake, J. Cole, Kendrick Lamar, Eminem, Nas, Skepta, Rapsody, Nicki Minaj, Dave, 2Pac, and Future.
All data was compiled using Spotify's API and Genius' API.
FEATURES
- track_name: the name of each track
- artist: the name of each artist
- raw_lyrics: raw text of lyrics scraped from the Genius website
- artist_verses: text extracted from raw_lyrics — verses performed by each artist only
NOTE: Some entries in raw_lyrics may have a different formatting structure to others, so text consistency will vary.
What can this dataset be used for?
- Text analysis
- Text pre-processing
- Text EDA
- Text classification
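For instance, a minimal sketch of loading the lyrics and computing a per-artist token count. The column names follow the feature list above; the file name `rap_lyrics.csv` is an assumption:

```python
import re
import pandas as pd

# Load the lyrics dataset (file name assumed; columns from the feature list above)
lyrics = pd.read_csv("rap_lyrics.csv")

def tokenize(text: str) -> list[str]:
    """Lowercase and split on non-word characters; a deliberately simple tokenizer."""
    return [t for t in re.split(r"\W+", str(text).lower()) if t]

# Token counts per track, using only the verses each artist performed
lyrics["n_tokens"] = lyrics["artist_verses"].map(lambda x: len(tokenize(x)))

# Distribution of verse lengths per artist
print(lyrics.groupby("artist")["n_tokens"].describe())
```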
CC0
Original Data Source: Rap Lyrics Dataset
To explore the relationship between alcohol, drugs, and crime in the electronic dance music and hip hop nightclub scenes of Philadelphia, Pennsylvania, researchers utilized a multi-faceted ethnographic approach featuring in-depth interviews with 51 respondents (Dataset 1, Initial Interview Qualitative Data) and two Web-based follow-up surveys with respondents (Dataset 2, Follow-Up Surveys Quantitative Data). Recruitment of respondents began in April of 2005 and was conducted in two ways. Slightly more than half of the respondents (n = 30) were recruited with the help of staff from two small, independent record stores. The remaining 21 respondents were recruited at electronic dance music or hip hop nightclub events. Dataset 1 includes structured and open-ended questions about the respondent's background, living situation and lifestyle, involvement and commitment to the electronic dance music and hip hop scenes, nightclub culture and interaction therein, and experiences with drugs, criminal activity, and victimization. Dataset 2 includes descriptive information on how many club events were attended, which ones, and the activities (including drug use and crime/victimization experiences) taking place therein. Dataset 3 (Demographic Quantitative Data) includes coded demographic information from the Dataset 1 interviews.
This dataset was extracted from the Spotify API and contains audio features of the songs.
The popularity of a track is a value between 0 and 100, with 100 being the most popular.
The popularity is calculated by an algorithm and is based, for the most part, on the total number of plays the track has had and how recent those plays are.
Generally speaking, songs that are being played a lot now will have a higher popularity than songs that were played a lot in the past.
This dataset can be used in many ways. Try out:
- Clustering
- Classification
- Visualization
- Exploratory Data Analysis
- Adding new features of your own
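As one starting point for the clustering idea, a minimal scikit-learn sketch. The file name `spotify_tracks.csv` and the exact feature columns are assumptions; adjust to the actual schema:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Load the dataset (file name assumed)
tracks = pd.read_csv("spotify_tracks.csv")

# Typical Spotify audio features; adjust to the columns actually present
features = ["danceability", "energy", "loudness", "speechiness",
            "acousticness", "instrumentalness", "liveness", "valence", "tempo"]

# Standardize so loudness/tempo don't dominate the bounded [0, 1] features
X = StandardScaler().fit_transform(tracks[features].dropna())

# Cluster into a handful of rough "sound profiles"
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(pd.Series(labels).value_counts())
```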
MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
This dataset provides a detailed regional overview of digital music consumption on Spotify in Brazil between 2021 and 2023. It includes acoustic features and all genres/artists that were listened to at least once in those years. The data comes from the Spotify API for Developers and Spotify Charts, which are used to collect the acoustic features and the summarized most-listened songs per city, respectively.
It covers 17 cities in 16 different Brazilian states, yielding 5,190 unique tracks, 487 different genres, and 2,056 artists. The covered cities are: Belém, Belo Horizonte, Brasília, Campinas, Campo Grande, Cuiabá, Curitiba, Florianópolis, Fortaleza, Goiânia, Manaus, Porto Alegre, Recife, Rio de Janeiro, Salvador, São Paulo and Uberlândia. Each city has 119 different weekly charts, and the week period is described by the file name.
The covered acoustic features are provided by Spotify and are described as:
- Acousticness: a measure from 0.0 to 1.0 of whether the track is acoustic; 1.0 indicates a totally acoustic song and 0.0 means a song without any acoustic element.
- Danceability: describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
- Energy: a measure from 0.0 to 1.0 representing a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
- Instrumentalness: predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater the likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
- Key: the key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
- Liveness: detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
- Loudness: the overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.
- Mode: indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
- Speechiness: detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
- Tempo: the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
- Time Signature: an estimated time signature. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). The time signature ranges from 3 to 7, indicating time signatures of "3/4" to "7/4".
- Valence: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
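To illustrate how the weekly per-city charts and these features might be combined, a small pandas sketch. The file layout (one CSV per city and week under a `charts/` directory) is inferred from the description above, and the glob pattern and column names are assumptions:

```python
import glob
import pandas as pd

# Read every weekly chart file; the per-city/per-week naming is assumed
frames = []
for path in glob.glob("charts/*.csv"):
    df = pd.read_csv(path)
    df["source_file"] = path  # keep the file name, which encodes the week period
    frames.append(df)

charts = pd.concat(frames, ignore_index=True)

# Example: average valence and energy per city across all weeks
# ("city", "valence", and "energy" are assumed column names)
print(charts.groupby("city")[["valence", "energy"]].mean())
```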
The statistic presents the preferred music genres of the average U.S. teenage internet user as of August 2012. During the survey, 15.3 percent of respondents stated they would choose country music if they could only listen to one genre for the rest of their lives. The most popular answer was pop, which 18.4 percent of respondents selected. According to more recent data, however, hip-hop/rap is the most consumed genre in the United States: in 2016, it accounted for 18.2 percent of all music consumed, with pop in second place at 15.3 percent.
Preferred music genres of U.S. teens - additional information
While popular music ultimately prevails among the 13- to 19-year-old age group, their allegiance to country music aligns with the preferences of the rest of the population, among which country music ousted classic rock as the most popular genre in the United States.
Teenagers tend to rely on their friends to discover new music, but the majority of them never actually purchase music, or purchase very little. This does not mean that teens do not discover new music often; besides traditional radio, the advent of online music streaming through platforms such as YouTube, Pandora, and Spotify has made it easier than ever for consumers to listen to music for minimal or no cost. In fact, listening to music online is one of the most common internet activities among teens and young adults in the U.S.
Music piracy is also of growing concern in the United States, especially among the younger, tech-savvy generations. Despite the availability of free music sources, almost a third of teens have pirated music or movies, and more than half of the music collections of Americans under 30 have been copied, ripped, or downloaded for free. The United States leads the world in the volume of music pirated via BitTorrent.