64 datasets found
  1. Music album consumption U.S. 2018, by genre

    • statista.com
    Updated May 29, 2024
    Cite
    Statista (2024). Music album consumption U.S. 2018, by genre [Dataset]. https://www.statista.com/statistics/310746/share-music-album-sales-us-genre/
    Explore at:
    Dataset updated
    May 29, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2018
    Area covered
    United States
    Description

    In 2018, hip-hop and rap music accounted for 21.7 percent of total music consumption in the United States, more than double the percentage of R&B music sales. Other highly popular genres included pop and rock music, whereas just 1.1 percent of all music sold in the U.S. in 2018 was jazz.

    Why are some genres more popular than others? 

    Whilst music is a highly subjective medium in terms of the listener’s taste and preferences, the top genres in terms of consumption tend not to fluctuate heavily. The catchiness and familiarity of pop music are appealing to a wide range of music fans. Pop songs tend to be easy to listen to and remember, and they usually feature simple, snappy lyrics that avoid polarizing listeners; designed to generate mass appeal, pop is overall less divisive than other genres.

    Conversely, religious music is by its very nature a niche genre in that it encompasses, describes, or advocates certain beliefs, giving it an equal ability to alienate some listeners while appealing enormously to others, depending on their religious stance.

    The hit genre of 2018 was hip-hop and rap, a music style notorious for its tendency to divide listeners. Drake arguably influenced sales within the genre that year, with ‘Scorpion’ topping the list of best-selling albums in the United States based on total streams and ‘Scary Hours’ also making the top ten. Drake came tenth in the list of most successful music tours in North America, with revenue from his live concerts amounting to 79 million U.S. dollars; second in the ranking were Jay-Z and Beyoncé, whose music is also strongly aligned with rap and hip-hop, with 166.4 million dollars in revenue.

    Other artists in the genre who achieved significant influence in 2018 include Kendrick Lamar, Childish Gambino, Cardi B, Travis Scott and Post Malone, many of whom released songs that year which garnered hundreds of millions of audio streams. The sheer amount of hip-hop and rap music flooding the music industry has had a profound effect on the genre’s popularity, and musicians in the category tend to be prolific songwriters and active social media users. Equally, artists in the genre are arguably passionate about creating music which challenges social norms in a way that rock music has always been famous for.

  2. Music genres which represent modern America in the U.S. 2018, by ethnicity

    • statista.com
    Updated May 29, 2024
    + more versions
    Cite
    Statista (2024). Music genres which represent modern America in the U.S. 2018, by ethnicity [Dataset]. https://www.statista.com/statistics/864610/music-genre-modern-america-ethnicity/
    Explore at:
    Dataset updated
    May 29, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    May 10, 2018 - May 11, 2018
    Area covered
    United States
    Description

    This statistic shows the public opinion on the music genres which are most representative of America today in the United States as of May 2018, by ethnicity. During the survey, 54 percent of White respondents stated that they considered country music to be representative of modern America.

  3. Favorite music genres in the U.S. 2018, by age

    • statista.com
    Updated May 29, 2024
    Cite
    Statista (2024). Favorite music genres in the U.S. 2018, by age [Dataset]. https://www.statista.com/statistics/253915/favorite-music-genres-in-the-us/
    Explore at:
    Dataset updated
    May 29, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Jul 2018
    Area covered
    United States
    Description

    The statistic provides data on favorite music genres among consumers in the United States as of July 2018, sorted by age group. According to the source, 52 percent of respondents aged 16 to 19 years old stated that pop music was their favorite music genre, compared to 19 percent of respondents aged 65 or above. Country music in the United States – additional information

    In 2012, country music topped the list; 27.6 percent of respondents picked it among their three favorite genres. A year earlier, the result was one percentage point lower, which allowed classic rock to take the lead. The figures show, however, that the genre’s popularity across the United States is unshakeable, and it has also been spreading abroad. This could be demonstrated by the international success of (among others) Shania Twain, or the second place the Dutch country duo “The Common Linnets” earned in the Eurovision Song Contest in 2014, singing “Calm After the Storm.”

    The genre is also widely popular among American teenagers, earning second place and 15.3 percent of votes in a survey in August 2012. First place, with more than 18 percent of votes, went to pop music; rock scored 13.1 percent and landed in fourth place. Interestingly, Christian music made it into the top five with nine percent of votes. The younger generation is also widely represented among country music performers, with such prominent names as Taylor Swift (born in 1989), who was the highest-paid musician in 2015, and Hunter Hayes (born in 1991).

    Country music is also able to attract crowds (and large sums of money) to live performances. Luke Bryan’s tour was the most successful tour in North America in 2016 based on ticket sales, with almost 1.43 million tickets sold for his shows. Fellow country singer Garth Brooks came second on the list, selling 1.4 million tickets for his North American tour in 2016.

  4. Data from: Jay Z Collaborators Dataset - Vol.1

    • dataverse.tdl.org
    csv, pdf, txt
    Updated Apr 2, 2021
    Cite
    Kenton Rambsy; Howard Rambsy II (2021). Jay Z Collaborators Dataset - Vol.1 [Dataset]. http://doi.org/10.18738/T8/IYUWGK
    Explore at:
    Available download formats: csv (58819), pdf (55513), txt (3091)
    Dataset updated
    Apr 2, 2021
    Dataset provided by
    Texas Data Repository
    Authors
    Kenton Rambsy; Howard Rambsy II
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This dataset is an overview of the people with whom Jay Z collaborated through his 13 solo studio albums released between 1996 and 2017.

  5. Spotify and Youtube

    • data.niaid.nih.gov
    Updated Dec 4, 2023
    Cite
    Guarisco, Marco (2023). Spotify and Youtube [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10253414
    Explore at:
    Dataset updated
    Dec 4, 2023
    Dataset provided by
    Sallustio, Marco
    Rastelli, Salvatore
    Guarisco, Marco
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Area covered
    YouTube
    Description

    These are the statistics for the top 10 songs of various Spotify artists and their YouTube videos. The creators listed above generated the data and uploaded it to Kaggle on February 6-7, 2023. The license for this data is "CC0: Public Domain", allowing the data to be copied, modified, distributed, and worked on without having to ask permission. The data is in numerical and textual CSV format, as attached. This dataset contains the statistics and attributes of the top 10 songs of various artists in the world. As described by the creators, it includes 26 variables for each of the songs collected from Spotify. These variables are briefly described below:

    • Track: name of the song, as visible on the Spotify platform.
    • Artist: name of the artist.
    • Url_spotify: the URL of the artist.
    • Album: the album in which the song is contained on Spotify.
    • Album_type: indicates whether the song is released on Spotify as a single or contained in an album.
    • Uri: a Spotify link used to find the song through the API.
    • Danceability: describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
    • Energy: a measure from 0.0 to 1.0 representing a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
    • Key: the key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
    • Loudness: the overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
    • Speechiness: detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
    • Acousticness: a confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
    • Instrumentalness: predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
    • Liveness: detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
    • Valence: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
    • Tempo: the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
    • Duration_ms: the duration of the track in milliseconds.
    • Stream: number of streams of the song on Spotify.
    • Url_youtube: URL of the video linked to the song on YouTube, if it has any.
    • Title: title of the video clip on YouTube.
    • Channel: name of the channel that published the video.
    • Views: number of views.
    • Likes: number of likes.
    • Comments: number of comments.
    • Description: description of the video on YouTube.
    • Licensed: indicates whether the video represents licensed content, which means that the content was uploaded to a channel linked to a YouTube content partner and then claimed by that partner.
    • official_video: boolean value that indicates whether the video found is the official video of the song.

    The data was last updated on February 7, 2023.
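
    As a quick illustration of working with these variables, the sketch below loads the CSV with pandas and compares Spotify and YouTube reach. The file name "Spotify_Youtube.csv" is an assumption about the Kaggle download, not something specified above.

```python
# Minimal exploration sketch; the file name is assumed, adjust to the download.
import pandas as pd

df = pd.read_csv("Spotify_Youtube.csv")

# Correlation between Spotify streams and YouTube engagement for the same tracks.
subset = df[["Track", "Artist", "Stream", "Views", "Likes"]].dropna()
print(subset[["Stream", "Views", "Likes"]].corr())

# Tracks whose YouTube views are largest relative to their Spotify streams.
subset["views_per_stream"] = subset["Views"] / subset["Stream"]
print(subset.nlargest(5, "views_per_stream")[["Track", "Artist", "views_per_stream"]])
```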

  6. Music genres preferred by consumers in the U.S. 2018

    • statista.com
    Updated May 29, 2024
    Cite
    Statista (2024). Music genres preferred by consumers in the U.S. 2018 [Dataset]. https://www.statista.com/statistics/442354/music-genres-preferred-consumers-usa/
    Explore at:
    Dataset updated
    May 29, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    May 2018
    Area covered
    United States
    Description

    According to a study carried out by Deezer in May 2018, the most popular genre among Americans was rock music, with 56.8 percent of respondents stating that they were listening to music within this genre as of the survey date. Pop and country music were the second and third most popular genres respectively, and 20.2 percent of respondents said they preferred jazz.

    The appeal of rock and pop music

    The broad appeal of rock and pop music can in part be attributed to how both genres often blend seamlessly into one another and influence other music styles. Heavy rock bands like Led Zeppelin and AC/DC are often more divisive than melodic rock groups like Bon Jovi or Genesis, much like pop music that strays into R&B territory or aligns with hip hop or EDM. Each style has its appeal to fans with different tastes, and the versatility of rock and pop (and music which combines the two) allows such music to reach adults of all ages and backgrounds.

    Rock albums also account for the majority of vinyl album sales in the United States, with pop albums ranking second. However, although the resurgence of vinyl has to a certain extent been reliant on the rock genre, this is not the case when it comes to digital music consumption. Rap and hip hop accounted for 22.8 percent of music video streams in the U.S. in 2018, whereas for rock music videos the share was just 7.1 percent. Rock fared similarly when it came to audio streams, once again losing out to rap and hip hop. Taking such data into consideration, it would seem that rock music fans are generally more drawn to traditional formats and are less inclined to enjoy their music via streaming platforms.

  7. Data from: Jay Z Samples Dataset - Vol.1

    • dataverse.tdl.org
    pdf, tsv, txt
    Updated Apr 2, 2021
    Cite
    Kenton Rambsy; Howard Rambsy II (2021). Jay Z Samples Dataset - Vol.1 [Dataset]. http://doi.org/10.18738/T8/XK2LEW
    Explore at:
    Available download formats: tsv (38461), txt (3076), pdf (54531)
    Dataset updated
    Apr 2, 2021
    Dataset provided by
    Texas Data Repository
    Authors
    Kenton Rambsy; Howard Rambsy II
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This dataset is an overview of the samples contained on Jay Z's 13 solo studio albums released between 1996 and 2017.

  8. RapBank

    • huggingface.co
    Updated Sep 14, 2024
    Cite
    ziqianning (2024). RapBank [Dataset]. https://huggingface.co/datasets/zqning/RapBank
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Sep 14, 2024
    Authors
    ziqianning
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
    License information was derived automatically

    Description

    Dataset Card for RapBank

    RapBank is the first dataset for rap generation. The rap songs are collected from YouTube, and we provide a meticulously designed pipeline for data processing.

      Dataset Sources

    Repository: https://github.com/NZqian/RapBank
    Paper: https://arxiv.org/abs/2408.15474
    Demo: https://nzqian.github.io/Freestyler/

      Statistics
    

    The RapBank dataset comprises links to a total of 94,164 songs. However, due to… See the full description on the dataset page: https://huggingface.co/datasets/zqning/RapBank.

  9. Earnings of the world's highest earning rappers and hip-hop artists 2019

    • ai-chatbox.pro
    • statista.com
    Updated May 29, 2024
    Cite
    Statista (2024). Earnings of the world's highest earning rappers and hip-hop artists 2019 [Dataset]. https://www.ai-chatbox.pro/?_=%2Fstatistics%2F223233%2Fforbes-ranking-of-the-worlds-richest-hip-hop-artists%2F%23XgboD02vawLZsmJjSPEePEUG%2FVFd%2Bik%3D
    Explore at:
    Dataset updated
    May 29, 2024
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Jun 2018 - Jun 2019
    Area covered
    World
    Description

    According to the most recent data, the world's highest-earning hip-hop artist is Kanye West, who made an estimated 150 million U.S. dollars in the year leading up to June 2019. Also in the ranking were Jay-Z with 81 million, Drake with 75 million, and Diddy with 70 million.

    Hip-hop and rap music – additional information

    Hip-hop and rap as a music genre and culture date back to the 1970s. Hip-hop started as a movement in the South Bronx in New York City, spreading among wider African American communities in the 1980s and finally gaining mainstream attention in the 2000s, with artists like OutKast and Kanye West.

    Since its boom, the genre has maintained a strong presence in its home country. It is estimated that almost 6.3 million people in the U.S. visited R&B/rap/hip-hop concerts within a period of six months in 2018, a figure that was expected to grow to 6.93 million by 2020.

    Canadian rapper Drake had one of the top-selling digital music albums in the U.S. in 2018 with his fifth studio album 'Scorpion', which was also the top-selling music album based on total streams that year. Despite Drake's popularity in North America, the artist lost out to the likes of Ed Sheeran, Pink and Taylor Swift when it came to a ranking of most profitable music tours in 2018.

  10. Top Hits Spotify from 2000-2019

    • kaggle.com
    Updated May 31, 2022
    Cite
    Mark Koverha (2022). Top Hits Spotify from 2000-2019 [Dataset]. https://www.kaggle.com/datasets/paradisejoy/top-hits-spotify-from-20002019
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 31, 2022
    Dataset provided by
    Kaggle
    Authors
    Mark Koverha
    Description

    Context

    This dataset contains audio statistics of the top 2000 tracks on Spotify from 2000-2019. The data contains about 18 columns, each describing the track and its qualities.

    Content

    • artist: Name of the Artist.
    • song: Name of the Track.
    • duration_ms: Duration of the track in milliseconds.
    • explicit: Whether the lyrics or content of the song or music video contain material that could be considered offensive or unsuitable for children.
    • year: Release Year of the track.
    • popularity: The higher the value the more popular the song is.
    • danceability: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
    • energy: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity.
    • key: The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
    • loudness: The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
    • mode: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
    • speechiness: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
    • acousticness: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
    • instrumentalness: Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
    • liveness: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
    • valence: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
    • tempo: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
    • genre: Genre of the track.
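
    A minimal sketch of putting these columns to use, assuming the Kaggle file is saved locally as "songs_normalize.csv" (a hypothetical name; adjust to the actual download):

```python
# Summarize audio features by genre; the file name is an assumption.
import pandas as pd

df = pd.read_csv("songs_normalize.csv")

# Average danceability, energy, and valence for each genre label.
profile = (df.groupby("genre")[["danceability", "energy", "valence"]]
             .mean()
             .sort_values("danceability", ascending=False))
print(profile.head(10))
```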
  11. ‘Spotify Recommendation’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Jan 28, 2022
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2022). ‘Spotify Recommendation’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-spotify-recommendation-3903/3a5b5131/?iid=006-678&v=presentation
    Explore at:
    Dataset updated
    Jan 28, 2022
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Analysis of ‘Spotify Recommendation’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/bricevergnou/spotify-recommendation on 28 January 2022.

    --- Dataset description provided by original source is as follows ---

    Spotify Recommendation

    (You can check how I used this dataset on my GitHub repository.)

    I am basically a HUGE fan of music (mostly French rap, with some exceptions, but I love music). And someday, while browsing stuff on the Internet, I found Spotify's API. I knew I had to use it when I found out you could get information like danceability about your favorite songs just with their IDs.


    Once I saw that, my machine learning instincts forced me to work on this project.

    1. Data Collection

    1.1 Playlist creation

    I collected 100 liked songs and 95 disliked songs.

    For those I like, I made a playlist of my favorite 100 songs. It is mainly French rap, sometimes American rap, rock, or electro music.

    For those I dislike, I collected songs from various kinds of music so the model would have a broader view of what I don't like.

    There are:
    • 25 metal songs (Cannibal Corpse)
    • 20 "I don't like" rap songs (PNL)
    • 25 classical songs
    • 25 disco songs

    I didn't include any pop songs because I'm kinda neutral about them.

    1.2 Getting the IDs

    1. From Spotify's "Get a Playlist's Items" API endpoint, I turned the playlists into JSON-formatted data which contains the ID and the name of each track (ids/yes.py and ids/no.py). NB: on the website, specify "items(track(id,name))" in the fields parameter to avoid being overwhelmed by useless data.

    2. With a script (ids/ids_to_data.py), I turned the JSON data into a long string with each ID separated by a comma.

    1.3 Getting the statistics

    Now I just had to enter the strings into the Spotify API "Get Audio Features for Several Tracks" endpoint and get my data files (data/good.json and data/dislike.json), as sketched below.
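
    A minimal sketch of this step against Spotify's audio-features endpoint (token acquisition is omitted, and the IDs below are placeholders):

```python
# Fetch audio features for a comma-separated list of track IDs.
# ACCESS_TOKEN is a placeholder; obtain one via Spotify's OAuth flow.
import requests

ACCESS_TOKEN = "..."
ids = ",".join(["<track_id_1>", "<track_id_2>"])  # placeholder IDs

resp = requests.get(
    "https://api.spotify.com/v1/audio-features",
    params={"ids": ids},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for feat in resp.json()["audio_features"]:
    print(feat["id"], feat["danceability"], feat["energy"])
```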

    2. Data features

    From Spotify's API documentation :

    • acousticness: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
    • danceability: Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
    • duration_ms: The duration of the track in milliseconds.
    • energy: Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
    • instrumentalness: Predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly “vocal”. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
    • key: The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on.
    • liveness: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
    • loudness: The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
    • mode: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
    • speechiness: Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
    • tempo: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
    • time_signature: An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure).
    • valence: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).

    And the variable that has to be predicted:

    • liked: 1 for liked songs, 0 for disliked songs
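
    A hedged sketch of the prediction task this sets up: fit a simple classifier on the audio features to predict liked. The file names follow the description above; the assumption that each JSON file holds an "audio_features" list (the shape returned by the endpoint) is mine, not the original author's.

```python
# Train a liked/disliked classifier on Spotify audio features.
# Assumes data/good.json and data/dislike.json each contain
# {"audio_features": [...]} as returned by the Spotify endpoint.
import json
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["danceability", "energy", "loudness", "speechiness",
            "acousticness", "instrumentalness", "liveness", "valence", "tempo"]

liked = pd.DataFrame(json.load(open("data/good.json"))["audio_features"])
disliked = pd.DataFrame(json.load(open("data/dislike.json"))["audio_features"])
liked["liked"] = 1
disliked["liked"] = 0

df = pd.concat([liked, disliked], ignore_index=True)
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["liked"], test_size=0.2, random_state=0)

# Scale features so loudness/tempo don't dominate the linear model.
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```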

    --- Original source retains full ownership of the source dataset ---

  12. Gender distribution of popular songs in the U.S. 2012-2023, by genre

    • statista.com
    Updated Jun 23, 2025
    Cite
    Statista (2025). Gender distribution of popular songs in the U.S. 2012-2023, by genre [Dataset]. https://www.statista.com/statistics/801266/gender-distribution-popular-songs-genre/
    Explore at:
    Dataset updated
    Jun 23, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    United States
    Description

    According to a study on representation and equality in the music industry, female artists were most prevalent in the pop genre between 2012 and 2023. While female artists released roughly **** percent of the highest-charting pop songs during that period, they only accounted for **** percent of top songs in the hip-hop/rap genre.

    Women in the music industry

    Gender inequality remains an ongoing issue across all areas of the music industry. According to the findings of a recent report, the share of female songwriters on the Billboard Top 100 charts has stood below ** percent for nearly a decade now, highlighting how little progress has been made in terms of equal representation in music over the years. Meanwhile, an even bigger representation gap can be observed when looking at the share of female producers in the United States. Women working as producers remain a rare sight, and between 2012 and 2020, only * to * percent of producers of the top-charting songs were female.

    Who are the top female solo artists?

    As of 2020, Nicki Minaj (Onika Maraj) and Rihanna (Robyn Fenty) were the two top-performing female solo artists in the United States, with ** songs that placed on the Billboard Top 100 charts each. *** out of the top 13 performers in this ranking were women, and while **** of them fall under the pop category, two female artists (Nicki Minaj and Cardi B) release hip-hop/rap music. Considering that the latter genre is particularly hard to break into for women, the recent wave of successful female rappers (spearheaded by Nicki Minaj, Cardi B, Megan Thee Stallion, and Doja Cat, to name a few) might indicate the beginning of a new musical era.

  13. 30000 Spotify Songs

    • kaggle.com
    Updated Nov 1, 2023
    Cite
    Joakim Arvidsson (2023). 30000 Spotify Songs [Dataset]. https://www.kaggle.com/datasets/joebeachcapital/30000-spotify-songs/suggestions
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Nov 1, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Joakim Arvidsson
    License

    Database Contents License (DbCL) v1.0 (http://opendatacommons.org/licenses/dbcl/1.0/)

    Description

    Almost 30,000 Songs from the Spotify API. See the readme file for a formatted data dictionary table.

    Data Dictionary:

    • track_id (character): Song unique ID
    • track_name (character): Song Name
    • track_artist (character): Song Artist
    • track_popularity (double): Song Popularity (0-100) where higher is better
    • track_album_id (character): Album unique ID
    • track_album_name (character): Song album name
    • track_album_release_date (character): Date when album released
    • playlist_name (character): Name of playlist
    • playlist_id (character): Playlist ID
    • playlist_genre (character): Playlist genre
    • playlist_subgenre (character): Playlist subgenre
    • danceability (double): Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
    • energy (double): Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
    • key (double): The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
    • loudness (double): The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
    • mode (double): Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
    • speechiness (double): Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
    • acousticness (double): A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
    • instrumentalness (double): Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
    • liveness (double): Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
    • valence (double): A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
    • tempo (double): The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
    • duration_ms (double): Duration of song in milliseconds
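
    A short sketch of putting the dictionary above to use, assuming the CSV is saved locally as "spotify_songs.csv" (the file name is an assumption):

```python
# Inspect genre coverage and popularity; the file name is assumed.
import pandas as pd

df = pd.read_csv("spotify_songs.csv")
print(df["playlist_genre"].value_counts())

# Median track popularity per genre/subgenre pair.
print(df.groupby(["playlist_genre", "playlist_subgenre"])["track_popularity"]
        .median()
        .sort_values(ascending=False)
        .head(10))
```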
  14. Ethnicity and belonging in Finnish hip hop culture in the 2020s

    • jyx.jyu.fi
    Updated May 31, 2023
    Cite
    Elina Westinen (2023). Ethnicity and belonging in Finnish hip hop culture in the 2020s [Dataset]. http://doi.org/10.17011/jyx/dataset/87335
    Explore at:
    Dataset updated
    May 31, 2023
    Authors
    Elina Westinen
    License

    In Copyright (https://rightsstatements.org/page/InC/1.0/)

    Area covered
    Finland
    Description

    This research data set is related to the research project The multisemiotic construction of ’new’ ethnicities and (non)belonging in Finnish hip hop culture, funded by the Academy of Finland (2019-2022; project number 315461). The data set comprises (a) interview data of Finnish rappers and (b) media data concerning Finnish rappers.

  15. UMD-350MB: Refined MIDI Dataset for Symbolic Music Generation

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 29, 2024
    Cite
    Patchbanks (2024). UMD-350MB: Refined MIDI Dataset for Symbolic Music Generation [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_13125873
    Explore at:
    Dataset updated
    Jul 29, 2024
    Dataset authored and provided by
    Patchbanks
    Description

    UMD-350MB

    The Universal MIDI Dataset 350MB (UMD-350MB) is a proprietary collection of 85,618 MIDI files curated for research and development within our organization. This collection is a subset sampled from a larger dataset developed for pretraining symbolic music models.

    The field of symbolic music generation is constrained by limited data compared to language models. Publicly available datasets, such as the Lakh MIDI Dataset, offer large collections of MIDI files sourced from the web. While the sheer volume of musical data might appear beneficial, the actual amount of valuable data is less than anticipated, as many songs contain less desirable melodies with erratic and repetitive events.

    The UMD-350MB employs an attention-based approach to achieve more desirable output generations by focusing on human-reviewed training examples of single-track melodies, chord progressions, leads and arpeggios with an average duration of 8 bars. This was achieved by refining the dataset over 24 months, ensuring consistent quality and tempo alignment. Moreover, the dataset is normalized by setting the timing information to 120 BPM with a tick resolution (PPQ) of 96 and transposing the musical scales to C major and A minor (natural scales).
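
    The normalization described here (120 BPM, PPQ of 96) can be reproduced on arbitrary MIDI files. The sketch below is a hedged illustration using the mido library, not the dataset's actual pipeline, and it omits the key detection needed to transpose scales to C major/A minor.

```python
# Rescale a MIDI file to PPQ 96 and force its tempo to 120 BPM (mido).
# Illustrates the normalization described above, not the authors' code.
import mido

def normalize(path_in: str, path_out: str, target_ppq: int = 96) -> None:
    mid = mido.MidiFile(path_in)
    scale = target_ppq / mid.ticks_per_beat
    for track in mid.tracks:
        for msg in track:
            msg.time = round(msg.time * scale)   # rescale delta times
            if msg.type == "set_tempo":
                msg.tempo = mido.bpm2tempo(120)  # force 120 BPM
    mid.ticks_per_beat = target_ppq
    mid.save(path_out)

normalize("input.mid", "normalized.mid")  # placeholder paths
```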

    Melody Styles

    A major portion of the dataset is composed of newly produced private data to represent modern musical styles.

    Pop: 1970s to 2020s Pop music

    EDM: Trance, House, Synthwave, Dance, Arcade

    Jazz: Bebop, Ballad, Latin-Jazz, Bossa-Jazz, Ragtime

    Soul: 80s Classic, Neo-Soul, Latin-Soul

    Urban: Pop, Hip-Hop, Trap, R&B, Afrobeat

    World: Latin, Bossa Nova, European

    Other: Film, Cinematic, Game music and piano references

    Actual MIDI files are unlabeled for unsupervised training.

    Dataset Access

    Please note that this is a closed-source dataset with very limited access. Considerations for access include proposals for data augmentation, chord extraction and other enhancement methods, whether through scripts, algorithmic techniques, manual editing in a DAW or additional processing methods.

    For inquiries about this dataset, please email us.

  16. Rap Lyrics Dataset

    • opendatabay.com
    Updated Jun 17, 2025
    Cite
    Datasimple (2025). Rap Lyrics Dataset [Dataset]. https://www.opendatabay.com/data/ai-ml/ea749641-0e75-4202-8c97-65dc7552e51a
    Explore at:
    Dataset updated
    Jun 17, 2025
    Dataset authored and provided by
    Datasimple
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Area covered
    Entertainment & Media Consumption
    Description

    This dataset was compiled by me for a personal project. It contains lyrics from 11 different artists including: Drake, J. Cole, Kendrick Lamar, Eminem, Nas, Skepta, Rapsody, Nicki Minaj, Dave, 2Pac, and Future.

    All data was compiled using Spotify's API and Genius' API.

    Features:
    • track_name: the name of each track
    • artist: the name of each artist
    • raw_lyrics: raw text of lyrics scraped from the Genius website
    • artist_verses: verses performed by each artist only, extracted from raw_lyrics

    Note: Some entries in raw_lyrics may have a different formatting structure from others, so text consistency will vary.

    What can this dataset be used for? Text analysis, text pre-processing, text EDA, and text classification; a small sketch follows.
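
    For instance, a minimal text-EDA sketch, assuming the download is saved as "rap_lyrics.csv" (a hypothetical name) with the columns listed above:

```python
# Distinct-word vocabulary per artist, using only the artist's own verses.
import pandas as pd

df = pd.read_csv("rap_lyrics.csv")  # the file name is an assumption

vocab = (df.groupby("artist")["artist_verses"]
           .apply(lambda s: len(set(" ".join(s.dropna()).lower().split()))))
print(vocab.sort_values(ascending=False))
```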

    License

    CC0

    Original Data Source: Rap Lyrics Dataset

  17. Data from: Exploring the Drugs-Crime Connection Within the Electronic Dance Music and Hip Hop Nightclub Scenes in Philadelphia, Pennsylvania, 2005-2006

    • catalog.data.gov
    • icpsr.umich.edu
    Updated Mar 12, 2025
    + more versions
    Cite
    National Institute of Justice (2025). Exploring the Drugs-Crime Connection Within the Electronic Dance Music and Hip Hop Nightclub Scenes in Philadelphia, Pennsylvania, 2005-2006 [Dataset]. https://catalog.data.gov/dataset/exploring-the-drugs-crime-connection-within-the-electronic-dance-music-and-hip-hop-ni-2005-c0575
    Explore at:
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    National Institute of Justice
    Area covered
    Pennsylvania, Philadelphia
    Description

    To explore the relationship between alcohol, drugs, and crime in the electronic dance music and hip hop nightclub scenes of Philadelphia, Pennsylvania, researchers utilized a multi-faceted ethnographic approach featuring in-depth interviews with 51 respondents (Dataset 1, Initial Interview Qualitative Data) and two Web-based follow-up surveys with respondents (Dataset 2, Follow-Up Surveys Quantitative Data). Recruitment of respondents began in April of 2005 and was conducted in two ways. Slightly more than half of the respondents (n = 30) were recruited with the help of staff from two small, independent record stores. The remaining 21 respondents were recruited at electronic dance music or hip hop nightclub events. Dataset 1 includes structured and open-ended questions about the respondent's background, living situation and lifestyle, involvement and commitment to the electronic dance music and hip hop scenes, nightclub culture and interaction therein, and experiences with drugs, criminal activity, and victimization. Dataset 2 includes descriptive information on how many club events were attended, which ones, and the activities (including drug use and crime/victimization experiences) taking place therein. Dataset 3 (Demographic Quantitative Data) includes coded demographic information from the Dataset 1 interviews.

  18. Spotify_last_decade_songs

    • kaggle.com
    Updated Feb 1, 2022
    Cite
    Tanisha Yadav (2022). Spotify_last_decade_songs [Dataset]. https://www.kaggle.com/datasets/tanishayadav10000/spotify-last-decade-songs
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Feb 1, 2022
    Dataset provided by
    Kaggle
    Authors
    Tanisha Yadav
    Description

    Context

    This dataset was extracted from the Spotify API and contains audio features of the songs.

    Features

    • artist_name (object): Name of the artist.
    • track_id (object): Spotify unique track ID.
    • track_name (object): Name of the track/song.
    • acousticness (float): A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.
    • danceability (float): Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
    • duration_ms (int): The duration of the track in milliseconds.
    • energy (float): Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
    • instrumentalness (float): Predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly “vocal”. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
    • key (int): The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
    • liveness (float): Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
    • loudness (float): The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
    • mode (int): Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
    • speechiness (float): Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
    • tempo (float): The overall estimated tempo of a track in beats per minute (BPM).
    • time_signature (int): An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure).
    • valence (float): A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
    • popularity (int): The popularity of the track. The value is between 0 and 100, with 100 being the most popular.

    The popularity is calculated by algorithm and is based, in the most part, on the total number of plays the track has had and how recent those plays are.

    Generally speaking, songs that are being played a lot now will have a higher popularity than songs that were played a lot in the past.

    USE

    This dataset can be used in many ways; try out clustering, classification, visualization, exploratory data analysis, or adding new features of your own. A clustering sketch follows.
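
    A hedged clustering sketch, assuming the Kaggle file is saved locally as "spotify_last_decade.csv" (a hypothetical name):

```python
# Cluster tracks by audio profile with k-means; the file name is assumed.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

FEATURES = ["acousticness", "danceability", "energy", "instrumentalness",
            "liveness", "loudness", "speechiness", "tempo", "valence"]

df = pd.read_csv("spotify_last_decade.csv").dropna(subset=FEATURES)
X = StandardScaler().fit_transform(df[FEATURES])  # z-score each feature

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
df["cluster"] = km.labels_

# Average audio profile of each cluster.
print(df.groupby("cluster")[FEATURES].mean().round(2))
```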

  19. Brazil regional spotify charts

    • kaggle.com
    zip
    Updated Apr 14, 2024
    Cite
    Filipe Moura (2024). Brazil regional spotify charts [Dataset]. https://www.kaggle.com/datasets/filipeasm/brazil-regional-spotify-charts
    Explore at:
    Available download formats: zip (10117250 bytes)
    Dataset updated
    Apr 14, 2024
    Authors
    Filipe Moura
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Area covered
    Brazil
    Description

    This dataset provides a detailed regional overview of digital music consumption on Spotify in Brazil between 2021 and 2023. It includes acoustic features and all genres/artists that were listened to at least once in those years. The data comes from the Spotify API for Developers and SpotifyCharts, which are used to collect the acoustic features and the summarized most-listened songs per city, respectively.

    Data description

    It covers 17 cities in 16 different Brazilian states, comprising 5,190 unique tracks, 487 different genres, and 2,056 artists. The covered cities are: Belém, Belo Horizonte, Brasília, Campinas, Campo Grande, Cuiabá, Curitiba, Florianópolis, Fortaleza, Goiânia, Manaus, Porto Alegre, Recife, Rio de Janeiro, Salvador, São Paulo, and Uberlândia. Each city has 119 different weekly charts, with the week period described by the file name.

    Acoustic features

    The covered acoustic features are provided by Spotify and are described as:
    • Acousticness: A measure from 0.0 to 1.0 of whether the track is acoustic; 1.0 indicates a totally acoustic song and 0.0 means a song without any acoustic element.
    • Danceability: Describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
    • Energy: A measure from 0.0 to 1.0 representing a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
    • Instrumentalness: Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
    • Key: The key the track is in. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
    • Liveness: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live.
    • Loudness: The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 db.
    • Mode: Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
    • Speechiness: Detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.
    • Tempo: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
    • Time Signature: An estimated time signature. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). The time signature ranges from 3 to 7, indicating time signatures of "3/4" to "7/4".
    • Valence: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).

    Data Science Applications:

    • Time Series Analysis: Identify seasonal behaviors and the deviation of each city during those 2 years
    • Trend Analysis: Identify patterns and trends in digital music consumption based in genres and/or acoustic features in each city to understand seasonal changes
    • Clustering Tasks: Group cities based on genre and/or acoustic features to identify regional patterns across Brazil's regions and describe the differences between groups (a small sketch follows this list)
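
    A hedged sketch of the city-grouping idea. It assumes the weekly chart files have already been concatenated into one table with a "city" column plus the acoustic features above; both the combined file name and the column names are assumptions.

```python
# Compare cities by their mean acoustic profile.
import pandas as pd
from scipy.spatial.distance import pdist, squareform

charts = pd.read_csv("brazil_charts_combined.csv")  # hypothetical combined file
features = ["acousticness", "danceability", "energy", "valence", "tempo"]

# One mean feature vector per city, z-scored so tempo doesn't dominate.
profile = charts.groupby("city")[features].mean()
profile = (profile - profile.mean()) / profile.std()

# Pairwise Euclidean distances between city profiles.
dist = pd.DataFrame(squareform(pdist(profile)),
                    index=profile.index, columns=profile.index)
print(dist["São Paulo"].sort_values().head(5))  # cities most similar to São Paulo
```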
  20. Preferred music genres among U.S. teenage internet users as of August 2012

    • statista.com
    • ai-chatbox.pro
    Updated Sep 28, 2012
    Cite
    Statista (2012). Preferred music genres among U.S. teenage internet users as of August 2012 [Dataset]. https://www.statista.com/statistics/245743/preferred-music-genres-among-teenagers-in-the-us/
    Explore at:
    Dataset updated
    Sep 28, 2012
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Aug 2012
    Area covered
    United States
    Description

    The statistic presents the preferred music genres of the average U.S. teenage internet user as of August 2012. During the survey, 15.3 percent of respondents stated they would choose country music if they could only listen to one genre of music for the rest of their lives. The most popular answer was pop, which 18.4 percent of respondents selected as the one genre of music they would listen to for the rest of their lives. According to more recent data, however, hip-hop/rap is the genre that is most consumed in the United States. In 2016, this genre accounted for 18.2 percent of all music songs consumed. Pop came in second place with 15.3 percent.

    Preferred music genres of U.S. teens - additional information

    While popular music ultimately prevails among the 13- to 19-year-old age group, their allegiance to country music aligns with the preferences of the rest of the population, among which country music ousted classic rock as the most popular genre in the United States.

    Teenagers tend to rely on their friends to discover new music, but the majority of them never actually purchase music, or they purchase very little. This does not mean that teens do not discover new music often; besides traditional radio, the advent of online music streaming through platforms such as YouTube, Pandora, and Spotify has made it easier than ever for consumers to listen to music for minimal or no cost. In fact, listening to music online is one of the most common activities for teens and young adults in the U.S. when using the internet.

    Music piracy is also of growing concern in the United States, especially among the younger, tech-savvy generations. Despite the availability of free music sources, almost a third of teens have pirated music or movies, and more than half of the music collections of Americans under 30 years of age have been copied, ripped, or downloaded for free. The United States leads the world in the volume of music pirated via BitTorrent.
