33 datasets found
  1. Indian Songs Viewership Forecasting Data

    Month on Month Data of YouTube Viewership for Indian Songs - Updated Monthly

    • kaggle.com
    Updated Jan 5, 2024
    Cite
    TheViewlytics (2024). Indian Songs Viewership Forecasting Data [Dataset]. https://www.kaggle.com/datasets/theviewlytics/indian-songs-viewership-forecasting-data/data
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jan 5, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    TheViewlytics
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset

    This dataset was created by TheViewlytics

    Released under Apache 2.0

    Contents

  2. youtube-music-hits

    • huggingface.co
    Updated Nov 14, 2024
    Cite
    Akbar Gherbal (2024). youtube-music-hits [Dataset]. https://huggingface.co/datasets/akbargherbal/youtube-music-hits
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Nov 14, 2024
    Authors
    Akbar Gherbal
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Area covered
    YouTube
    Description

    YouTube Music Hits Dataset

    A collection of YouTube music video data sourced from Wikidata, focusing on videos with significant viewership metrics.

    Dataset Description

    Overview

    • 24,329 music videos
    • View range: 1M to 5.5B views
    • Temporal range: 1977-2024

    Features

    • youtubeId: YouTube video identifier
    • itemLabel: Video/song title
    • performerLabel: Artist/band name
    • youtubeViews: View count
    • year: Release year
    • genreLabel: Musical genre(s)

      View… See the full description on the dataset page: https://huggingface.co/datasets/akbargherbal/youtube-music-hits.
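    Since the dataset is hosted on the Hugging Face Hub, it can be pulled with the datasets library; a minimal sketch using the feature names listed above (the "train" split name and the numeric typing of youtubeViews are assumptions):

        # Minimal sketch: load the dataset from the Hugging Face Hub and
        # show the highest-viewed entries. Assumes the default "train" split
        # and that youtubeViews is stored as a number.
        from datasets import load_dataset

        ds = load_dataset("akbargherbal/youtube-music-hits", split="train")
        top = ds.sort("youtubeViews", reverse=True).select(range(5))
        for row in top:
            print(row["itemLabel"], row["performerLabel"], row["youtubeViews"])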
    
  3. YouTube users worldwide 2020-2029

    • statista.com
    Updated Mar 3, 2025
    Cite
    Statista (2025). YouTube users worldwide 2020-2029 [Dataset]. https://www.statista.com/forecasts/1144088/youtube-users-in-the-world
    Explore at:
    Dataset updated
    Mar 3, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    World
    Description

    The global number of YouTube users was forecast to increase continuously between 2024 and 2029 by a total of 232.5 million users (+24.91 percent). After a ninth consecutive year of growth, the YouTube user base is estimated to reach 1.2 billion users, a new peak, in 2029. Notably, the number of YouTube users has increased continuously over the past years.

    The user figures shown here for YouTube have been estimated by taking into account company filings or press material, secondary research, app downloads, and traffic data. They refer to the average monthly active users over the period. The data shown are an excerpt of Statista's Key Market Indicators (KMI). The KMI are a collection of primary and secondary indicators on the macro-economic, demographic, and technological environment in up to 150 countries and regions worldwide. All indicators are sourced from international and national statistical offices, trade associations, and the trade press, and they are processed to generate comparable datasets (see supplementary notes under details for more information). Find more key insights for the number of YouTube users in regions like Africa and South America.

  4. Microsoft Excel dataset file of YouTube videos.

    • plos.figshare.com
    xlsx
    Updated Nov 29, 2023
    + more versions
    Cite
    Dan Sun; Guochang Zhao (2023). Microsoft Excel dataset file of YouTube videos. [Dataset]. http://doi.org/10.1371/journal.pone.0294665.s002
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Nov 29, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Dan Sun; Guochang Zhao
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    YouTube
    Description

    News dissemination plays a vital role in helping people adopt beneficial behaviors during public health emergencies, thereby significantly reducing the adverse impacts of such events. Based on big data from YouTube, this study takes the declaration of the COVID-19 National Public Health Emergency (PHE) as the event shock and employs a difference-in-differences (DiD) model to investigate the effect of the PHE on the news dissemination strength of relevant videos. The findings indicate that views, comments, and likes on relevant videos increased significantly during the COVID-19 public health emergency. Moreover, the public's response to the PHE was rapid: the highest growth in comments and views on videos was observed within the first week of the emergency, followed by a gradual decline and a return to normal levels within four weeks. In addition, across different types of media, lifestyle bloggers, local media, and institutional media demonstrated higher growth in the news dissemination strength of relevant videos than news & political bloggers, foreign media, and personal media, respectively. Further, the audience attracted by related news tends to display a certain level of stickiness and may subscribe to these channels during public health emergencies, which confirms the incentive mechanisms of social media platforms for fostering relevant news dissemination during such events. These findings provide essential insights into effective news dissemination in potential future public health events.
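    The paper's exact specification isn't reproduced here, but a two-group difference-in-differences regression of this shape can be sketched with statsmodels; the variable names (views, treated, post) and file name are illustrative assumptions, not the authors' actual ones:

        # Hedged DiD sketch, not the authors' exact model. df is assumed to
        # have: views (outcome), treated (1 = COVID-related video), and
        # post (1 = observed after the PHE declaration).
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_excel("youtube_videos.xlsx")  # hypothetical file name
        # The coefficient on treated:post is the DiD estimate of the PHE effect.
        model = smf.ols("views ~ treated * post", data=df).fit()
        print(model.summary())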

  5. Mr Beast: Most Viewed YT Video 100K Comments

    • opendatabay.com
    Updated Jun 20, 2025
    Cite
    Datasimple (2025). Mr Beast: Most Viewed YT Video 100K Comments [Dataset]. https://www.opendatabay.com/data/ai-ml/b7b113c0-df09-4136-bc69-ba32c06f34d4
    Explore at:
    Available download formats: (unspecified)
    Dataset updated
    Jun 20, 2025
    Dataset authored and provided by
    Datasimple
    Area covered
    Entertainment & Media Consumption, YouTube
    Description

    Summary: This dataset encapsulates the vibrant community interaction surrounding MrBeast's most viewed YouTube video, "$456,000 Squid Game In Real Life!". By meticulously compiling 100,000 comments, this collection offers a unique window into the public discourse, engagement, and sentiments around one of YouTube's most significant viral phenomena. Researchers, linguists, marketers, and sociocultural analysts can delve into the depth and diversity of viewer conversations, drawing insights and patterns from a broad swath of public opinion.

    Privacy Preservation: Recognizing the importance of privacy and ethical data handling, we've employed SHA-256 hashing to anonymize user names. Each username is first salted with a unique sequence and then passed through the SHA-256 algorithm, turning it into a distinct and irreversible hash. This approach ensures the confidentiality of individuals while retaining the integrity and utility of the data for analysis.

    Dataset Usage: This dataset is a rich resource for various analytical pursuits, including but not limited to sentiment analysis, linguistic trends, engagement patterns, and sociocultural research. Academics can explore the dynamics of viral content and public interaction, while marketers and content creators can gauge audience response and engagement strategies. The dataset is also invaluable for training machine learning models in natural language processing and understanding, providing real-world text data in a diverse, dynamic context.

    Acknowledgements: We extend our gratitude to YouTube and MrBeast for the platform and content that made this dataset possible. Their commitment to fostering a robust and engaging digital community is at the heart of this collection. We also acknowledge the YouTube API for enabling access to this rich vein of public discourse, and we commend the countless viewers whose comments have collectively shaped this dataset into a mirror of public sentiment and interaction.

    Column Descriptors:

    • Comment: The full text of the user's comment, reflecting their thoughts, reactions, or interactions with the content.
    • Anonymized Author: The SHA-256 hashed representation of the user's name, ensuring anonymity while maintaining a unique identifier for each commenter.
    • Published At: The ISO 8601 timestamp marking when the comment was originally posted, offering insights into the timing and relevance of user interactions.
    • Likes: The number of likes attributed to the comment, serving as an indicator of its resonance or approval among the community.
    • Reply Count: The count of replies to the comment, reflecting its capacity to engage and provoke discussion within the community.
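    The salted SHA-256 scheme described under Privacy Preservation is simple to reproduce; a hedged sketch (the salt value and exact salting scheme here are illustrative, not the dataset authors' actual parameters):

        # Sketch of salted SHA-256 username anonymization. The salt is a
        # placeholder; the dataset's real salt and scheme are not published.
        import hashlib

        def anonymize(username: str, salt: str = "example-salt") -> str:
            # Salt, then hash: the digest is stable per user but irreversible.
            return hashlib.sha256((salt + username).encode("utf-8")).hexdigest()

        print(anonymize("some_viewer"))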

    License

    CC-BY-NC

    Original Data Source: Mr Beast: Most Viewed YT Video 100K Comments

  6. YouTube RPM by Niche (2025)

    • learningrevolution.net
    html
    Updated Jun 23, 2025
    Cite
    Jawad Khan (2025). YouTube RPM by Niche (2025) [Dataset]. https://www.learningrevolution.net/how-much-money-does-youtube-pay-for-1-million-views/
    Explore at:
    Available download formats: html
    Dataset updated
    Jun 23, 2025
    Dataset provided by
    Learning Revolution
    Authors
    Jawad Khan
    Area covered
    YouTube
    Variables measured
    Gaming, Travel, Finance, Education, Technology, Memes/Vlogs
    Description

    This dataset provides estimated YouTube RPM (Revenue Per Mille) ranges for different niches in 2025, based on ad revenue earned per 1,000 monetized views.
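    RPM converts directly into earnings: revenue ≈ (monetized views / 1,000) × RPM. A tiny sketch of the arithmetic (the RPM figure below is a placeholder, not one of the dataset's 2025 values):

        # Earnings from RPM: revenue per 1,000 monetized views.
        def earnings(monetized_views: int, rpm: float) -> float:
            return monetized_views / 1000 * rpm

        # e.g. 1M monetized views at an assumed $4.00 RPM:
        print(f"${earnings(1_000_000, 4.00):,.2f}")  # -> $4,000.00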

  7. Blue-Ringed Octopus

    • kaggle.com
    Updated Dec 21, 2022
    Cite
    Yusuf Syam (2022). Blue-Ringed Octopus [Dataset]. https://www.kaggle.com/datasets/yusufsyam/blue-ringed-octopus-dataset
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Dec 21, 2022
    Dataset provided by
    Kaggle
    Authors
    Yusuf Syam
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
    License information was derived automatically

    Description

    This dataset is for an object detection task on the blue-ringed octopus (one of the most venomous animals in the world). With this dataset, I hope people can become more familiar with the blue-ringed octopus and be aware of its dangers.

    I collected the images and labeled them myself (for a competition). I have little experience in collecting datasets, so I cannot guarantee the quality of this dataset. I trained a YOLOv7 object detection model with this data and got a mean average precision of 0.987 (with an IoU threshold of 0.5).

    About the Dataset:

    • Consists of 316 images, each labeled in the Pascal VOC format (a parsing sketch follows this list)
    • No pre-processing or image augmentation
    • Not separated into train and test sets
    • To use it for image classification, just delete the xml/label files
    • Made in August-September 2022
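    Pascal VOC stores one XML annotation per image, with named objects and pixel bounding boxes; a minimal parsing sketch using only the standard library (the file name is illustrative):

        # Parse a Pascal VOC annotation file. The file name is hypothetical.
        import xml.etree.ElementTree as ET

        root = ET.parse("blue_ringed_octopus_001.xml").getroot()
        for obj in root.iter("object"):
            name = obj.findtext("name")
            box = obj.find("bndbox")
            xmin, ymin, xmax, ymax = (int(float(box.findtext(k)))
                                      for k in ("xmin", "ymin", "xmax", "ymax"))
            print(name, (xmin, ymin, xmax, ymax))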

    How I Collected the Data:

    I didn't go into the field to take these images; instead I took them from Google, and some from screenshots of YouTube videos:
    • https://www.youtube.com/watch?v=MBHjo6UaHzk&t=62s
    • https://www.youtube.com/watch?v=c4BoYORmgSM
    • https://www.youtube.com/watch?v=DSdq8XFQdKo
    • https://www.youtube.com/watch?v=64mY1klkf4I&t=215s
    • https://www.youtube.com/watch?v=C0DOusbGWbU
    • https://www.youtube.com/watch?v=mTnmw5o4vRI
    • https://www.youtube.com/watch?v=bejKAB2Eazw&t=317s
    • https://www.youtube.com/watch?v=emisZUHJAEA
    • https://www.youtube.com/watch?v=6b_UYwyWI6E
    • https://www.youtube.com/watch?v=vVamzP52qwA
    • https://www.youtube.com/watch?v=3Bt1LvpZ1Oo

    I also played around with an AI text-to-image generator to create additional images and manually chose which ones were acceptable (r_blue_ringed_octopus_100 - r_blue_ringed_octopus_110; you can remove them if you want). After collecting the images, I did the labeling myself.

  8. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Oct 19, 2024
    + more versions
    Cite
    Steven R. Livingstone; Frank A. Russo (2024). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [Dataset]. http://doi.org/10.5281/zenodo.1188976
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Steven R. Livingstone; Frank A. Russo
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) (https://creativecommons.org/licenses/by-nc-sa/4.0/)
    License information was derived automatically

    Description


    The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 7356 files (total size: 24.8 GB). The dataset contains 24 professional actors (12 female, 12 male), vocalizing two lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression. All conditions are available in three modality formats: Audio-only (16bit, 48kHz .wav), Audio-Video (720p H.264, AAC 48kHz, .mp4), and Video-only (no sound). Note, there are no song files for Actor_18.

    The RAVDESS was developed by Dr Steven R. Livingstone, who now leads the Affective Data Science Lab, and Dr Frank A. Russo who leads the SMART Lab.

    Citing the RAVDESS

    The RAVDESS is released under a Creative Commons Attribution license, so please cite the RAVDESS if it is used in your work in any form. Published academic papers should use the academic paper citation for our PLoS1 paper. Personal works, such as machine learning projects/blog posts, should provide a URL to this Zenodo page, though a reference to our PLoS1 paper would also be appreciated.

    Academic paper citation

    Livingstone SR, Russo FA (2018) The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5): e0196391. https://doi.org/10.1371/journal.pone.0196391.

    Personal use citation

    Include a link to this Zenodo page - https://zenodo.org/record/1188976

    Commercial Licenses

    Commercial licenses for the RAVDESS can be purchased. For more information, please visit our license fee page, or contact us at ravdess@gmail.com.

    Contact Information

    If you would like further information about the RAVDESS, to purchase a commercial license, or if you experience any issues downloading files, please contact us at ravdess@gmail.com.

    Example Videos

    Watch a sample of the RAVDESS speech and song videos.

    Emotion Classification Users

    If you're interested in using machine learning to classify emotional expressions with the RAVDESS, please see our new RAVDESS Facial Landmark Tracking data set [Zenodo project page].

    Construction and Validation

    Full details on the construction and perceptual validation of the RAVDESS are described in our PLoS ONE paper - https://doi.org/10.1371/journal.pone.0196391.

    The RAVDESS contains 7356 files. Each file was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained adult research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity, interrater reliability, and test-retest intrarater reliability were reported. Validation data is open-access, and can be downloaded along with our paper from PLoS ONE.

    Contents

    Audio-only files

    Audio-only files of all actors (01-24) are available as two separate zip files (~200 MB each):

    • Speech file (Audio_Speech_Actors_01-24.zip, 215 MB) contains 1440 files: 60 trials per actor x 24 actors = 1440.
    • Song file (Audio_Song_Actors_01-24.zip, 198 MB) contains 1012 files: 44 trials per actor x 23 actors = 1012.

    Audio-Visual and Video-only files

    Video files are provided as separate zip downloads for each actor (01-24, ~500 MB each), and are split into separate speech and song downloads:

    • Speech files (Video_Speech_Actor_01.zip to Video_Speech_Actor_24.zip) collectively contain 2880 files: 60 trials per actor x 2 modalities (AV, VO) x 24 actors = 2880.
    • Song files (Video_Song_Actor_01.zip to Video_Song_Actor_24.zip) collectively contain 2024 files: 44 trials per actor x 2 modalities (AV, VO) x 23 actors = 2024.

    File Summary

    In total, the RAVDESS collection includes 7356 files (2880+2024+1440+1012 files).

    File naming convention

    Each of the 7356 RAVDESS files has a unique filename. The filename consists of a 7-part numerical identifier (e.g., 02-01-06-01-02-01-12.mp4). These identifiers define the stimulus characteristics:

    Filename identifiers

    • Modality (01 = full-AV, 02 = video-only, 03 = audio-only).
    • Vocal channel (01 = speech, 02 = song).
    • Emotion (01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised).
    • Emotional intensity (01 = normal, 02 = strong). NOTE: There is no strong intensity for the 'neutral' emotion.
    • Statement (01 = "Kids are talking by the door", 02 = "Dogs are sitting by the door").
    • Repetition (01 = 1st repetition, 02 = 2nd repetition).
    • Actor (01 to 24. Odd numbered actors are male, even numbered actors are female).


    Filename example: 02-01-06-01-02-01-12.mp4

    1. Video-only (02)
    2. Speech (01)
    3. Fearful (06)
    4. Normal intensity (01)
    5. Statement "dogs" (02)
    6. 1st Repetition (01)
    7. 12th Actor (12)
    8. Female, as the actor ID number is even.
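    The 7-part convention can be decoded programmatically; a short sketch implementing exactly the mapping documented above:

        # Decode a RAVDESS filename into its stimulus characteristics.
        MODALITY = {"01": "full-AV", "02": "video-only", "03": "audio-only"}
        CHANNEL = {"01": "speech", "02": "song"}
        EMOTION = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
                   "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}
        INTENSITY = {"01": "normal", "02": "strong"}
        STATEMENT = {"01": "Kids are talking by the door",
                     "02": "Dogs are sitting by the door"}

        def parse_ravdess(filename: str) -> dict:
            p = filename.split(".")[0].split("-")
            actor = int(p[6])
            return {
                "modality": MODALITY[p[0]],
                "vocal_channel": CHANNEL[p[1]],
                "emotion": EMOTION[p[2]],
                "intensity": INTENSITY[p[3]],
                "statement": STATEMENT[p[4]],
                "repetition": int(p[5]),
                "actor": actor,
                "sex": "female" if actor % 2 == 0 else "male",
            }

        print(parse_ravdess("02-01-06-01-02-01-12.mp4"))
        # {'modality': 'video-only', 'vocal_channel': 'speech',
        #  'emotion': 'fearful', 'intensity': 'normal', ..., 'sex': 'female'}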

    License information

    The RAVDESS is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, CC BY-NC-SA 4.0

    Commercial licenses for the RAVDESS can also be purchased. For more information, please visit our license fee page, or contact us at ravdess@gmail.com.

    Related Data sets

  9. TikTok Dataset

    • paperswithcode.com
    Updated Jul 22, 2024
    Cite
    Yasamin Jafarian; Hyun Soo Park (2024). TikTok Dataset Dataset [Dataset]. https://paperswithcode.com/dataset/tiktok-dataset
    Explore at:
    Dataset updated
    Jul 22, 2024
    Authors
    Yasamin Jafarian; Hyun Soo Park
    Description

    We learn high-fidelity human depths by leveraging a collection of social media dance videos scraped from the TikTok mobile social networking application. It is one of the most popular video sharing applications across generations, featuring short videos (10-15 seconds) of diverse dance challenges. We manually selected more than 300 dance videos from TikTok dance challenge compilations, spanning each month and a variety of dance types, that capture a single person performing dance moves with moderate movements that do not generate excessive motion blur. For each video, we extract RGB images at 30 frames per second, resulting in more than 100K images. We segmented these images using the Removebg application, and computed the UV coordinates from DensePose.

    Download TikTok Dataset:

    Please use the dataset for research purposes only.

    The dataset can be viewed and downloaded from the Kaggle page (a free Kaggle account is required to download the data).

    The dataset can also be downloaded from here (42 GB). The dataset resolution is 1080 x 604.

    The original YouTube videos corresponding to each sequence and the dance name can be downloaded from here (2.6 GB).

  10. Data from: The viewer doesn’t always seem to care - response to fake animal...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Oct 14, 2022
    Cite
    Lauren Harrington; Angie Elwin; Suzi Patterson; Neil D'Cruze (2022). The viewer doesn’t always seem to care - response to fake animal rescues on YouTube and implications for social media self-policing policies [Dataset]. http://doi.org/10.5061/dryad.q573n5tn6
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 14, 2022
    Dataset provided by
    World Animal Protection (http://worldanimalprotection.org/)
    University of Oxford
    Authors
    Lauren Harrington; Angie Elwin; Suzi Patterson; Neil D'Cruze
    License

    CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)

    Area covered
    YouTube
    Description

    Animal-related content on social media is hugely popular but is not always appropriate in terms of how animals are portrayed or how they are treated. This has potential implications beyond the individual animals involved: for viewers, for wild animal populations, and for societies and their interactions with animals. Whilst social media platforms usually publish guidelines for permitted content, enforcement relies at least in part on viewers reporting inappropriate posts. Currently, there is no external regulation of social media platforms. Based on a set of 241 "fake animal rescue" videos that exhibited clear signs of animal cruelty and strong evidence of being deliberately staged (i.e. fake), we found little evidence that viewers disliked the videos, and an overall mixed response in terms of awareness of the fake nature of the videos and attitudes towards the welfare of the animals involved. Our findings suggest, firstly, that despite the narrowly defined nature of the videos used in this case study, exposure rates can be extremely high (one of the videos had been viewed over 100 million times), and, secondly, that many YouTube viewers cannot identify (or are not concerned by) animal welfare or conservation issues within a social media context. In terms of the current policy approach of social media platforms, our findings raise questions regarding the value of their current reliance on consumers as watchdogs.

    Methods

    Data collection: The dataset pertains to 241 YouTube videos identified using the search function in YouTube and the search terms "primitive man saves" and "primitive boy saves" between May and July 2021, supplemented with additional similar videos held in a database collated by Animals for Asia (www.asiaforanimals.com). Video metrics were extracted automatically between 24.06.21 and 02.08.21 using the "tuber" package in R (Sood 2020, https://cran.r-project.org/web/packages/tuber/tuber.pdf). Additional information (e.g. on animal taxa) was obtained manually by screening the videos. For five of the videos that received > 1,000 comments, comment text was also extracted using the tuber package. Only publicly available videos were accessed.

    Data processing: Users (video posters and commenters) have been de-identified. For each video for which comment text was analysed, the text was converted into a list of the most frequently used words and emojis. Please refer to the manuscript for further details on the methods and approach used to identify and define the most frequently used words/emojis, and to assign sentiment scores.
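    The study pulled video metrics with R's tuber package; an equivalent pull in Python, named plainly as a substitute, would go through the YouTube Data API v3 (the API key and video ID below are placeholders):

        # Fetch public video statistics via the YouTube Data API v3.
        # This mirrors what R's "tuber" package does; key/ID are placeholders.
        from googleapiclient.discovery import build

        youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")
        resp = youtube.videos().list(part="statistics", id="VIDEO_ID").execute()
        print(resp["items"][0]["statistics"])  # viewCount, likeCount, commentCount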

  11. Data (i.e., evidence) about evidence based medicine

    • figshare.com
    • search.datacite.org
    png
    Updated May 30, 2023
    Cite
    Jorge H Ramirez (2023). Data (i.e., evidence) about evidence based medicine [Dataset]. http://doi.org/10.6084/m9.figshare.1093997.v24
    Explore at:
    Available download formats: png
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jorge H Ramirez
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Update — December 7, 2014. Evidence-based medicine (EBM) is not working for many reasons, for example:

    1. Incorrect in their foundations (paradox): hierarchical levels of evidence are supported by opinions (i.e., lowest strength of evidence according to EBM) instead of real data collected from different types of study designs (i.e., evidence). http://dx.doi.org/10.6084/m9.figshare.1122534

    2. The effect of criminal practices by pharmaceutical companies is only possible because of the complicity of others: healthcare systems, professional associations, governmental and academic institutions. Pharmaceutical companies also corrupt at the personal level: politicians and political parties are on their payroll, and medical professionals are seduced by different types of gifts in exchange for prescriptions (i.e., bribery), which very likely results in patients not receiving the proper treatment for their disease; many times there is no such thing: healthy persons not needing pharmacological treatments of any kind are constantly misdiagnosed and treated with unnecessary drugs. Some medical professionals are converted into K.O.L.s, which are only puppets appearing on stage to spread lies to their peers; a person supposedly trained to improve the well-being of others now deceives on behalf of pharmaceutical companies. Probably the saddest thing is that many honest doctors are being misled by these lies created by the rules of pharmaceutical marketing instead of scientific, medical, and ethical principles. Interpretation of EBM in this context was not anticipated by its creators.

    "The main reason we take so many drugs is that drug companies don't sell drugs, they sell lies about drugs." —Peter C. Gøtzsche

    "doctors and their organisations should recognise that it is unethical to receive money that has been earned in part through crimes that have harmed those people whose interests doctors are expected to take care of. Many crimes would be impossible to carry out if doctors weren't willing to participate in them." —Peter C Gøtzsche, The BMJ, 2012, Big pharma often commits corporate crime, and this must be stopped.

    Pending (Colombia): Health Promoter Entities (in Spanish: EPS, Empresas Promotoras de Salud).

    1. Misinterpretations

    New technologies or concepts are difficult to understand in the beginning, no matter how simple they are; we need to get used to new tools aimed at improving our professional practice. Probably the best explanation is in these videos (credits to Antonio Villafaina for sharing them with me):
    English: https://www.youtube.com/watch?v=pQHX-SjgQvQ
    Spanish: https://www.youtube.com/watch?v=DApozQBrlhU

    Hypothesis: hierarchical levels of evidence based medicine are wrong

    Dear Editor, I have data to support the hypothesis described in the title of this letter. Before rejecting the null hypothesis I would like to ask the following open question: Could you support with data that hierarchical levels of evidence based medicine are correct? (1,2)

    Additional explanation to this question:
    • Only respond to this question attaching publicly available raw data.
    • Be aware that more than a question this is a challenge: I have data (i.e., evidence) which is contrary to classic (i.e., McMaster) or current (i.e., Oxford) hierarchical levels of evidence based medicine. An important part of this data (but not all) is publicly available.

    References
    2. Ramirez, Jorge H (2014): The EBM challenge. figshare. http://dx.doi.org/10.6084/m9.figshare.1135873
    3. The EBM Challenge Day 1: No Answers. Competing interests: I endorse the principles of open data in human biomedical research. Read this letter on The BMJ – August 13, 2014. http://www.bmj.com/content/348/bmj.g3725/rr/762595 Re: Greenhalgh T, et al. Evidence based medicine: a movement in crisis? BMJ 2014; 348: g3725.

    Fileset contents

    Raw data: Excel archive with raw data, interactive figures, and PubMed search terms. A Google Spreadsheet is also available (URL below the article description).
    Figure 1. Unadjusted (Fig 1A) and adjusted (Fig 1B) PubMed publication trends (01/01/1992 to 30/06/2014).
    Figure 2. Adjusted PubMed publication trends (07/01/2008 to 29/06/2014).
    Figure 3. Google search trends: Jan 2004 to Jun 2014 / 1-week periods.
    Figure 4. PubMed publication trends (1962-2013): systematic reviews and meta-analyses, clinical trials, and observational studies.
    Figure 5. Ramirez, Jorge H (2014): Infographics: Unpublished US phase 3 clinical trials (2002-2014) completed before Jan 2011 = 50.8%. figshare. http://dx.doi.org/10.6084/m9.figshare.1121675
    Raw data: "13377 studies found for: Completed | Interventional Studies | Phase 3 | received from 01/01/2002 to 01/01/2014 | Worldwide". This database complies with the terms and conditions of ClinicalTrials.gov: http://clinicaltrials.gov/ct2/about-site/terms-conditions
    Supplementary Figures (S1-S6). PubMed publication delay in the indexation processes does not explain the descending trends in the scientific output of evidence-based medicine.

    Acknowledgments

    I would like to acknowledge the following persons for providing valuable concepts in data visualization and infographics:
    4. Maria Fernanda Ramírez. Professor of graphic design. Universidad del Valle. Cali, Colombia.
    5. Lorena Franco. Graphic design student. Universidad del Valle. Cali, Colombia.

    Related articles by this author (Jorge H. Ramírez)
    6. Ramirez JH. Lack of transparency in clinical trials: a call for action. Colomb Med (Cali) 2013;44(4):243-6. URL: http://www.ncbi.nlm.nih.gov/pubmed/24892242
    7. Ramirez JH. Re: Evidence based medicine is broken (17 June 2014). http://www.bmj.com/node/759181
    8. Ramirez JH. Re: Global rules for global health: why we need an independent, impartial WHO (19 June 2014). http://www.bmj.com/node/759151
    9. Ramirez JH. PubMed publication trends (1992 to 2014): evidence based medicine and clinical practice guidelines (04 July 2014). http://www.bmj.com/content/348/bmj.g3725/rr/759895

    Recommended articles
    10. Greenhalgh Trisha, Howick Jeremy, Maskrey Neal. Evidence based medicine: a movement in crisis? BMJ 2014;348:g3725
    11. Spence Des. Evidence based medicine is broken. BMJ 2014;348:g22
    12. Schünemann Holger J, Oxman Andrew D, Brozek Jan, Glasziou Paul, Jaeschke Roman, Vist Gunn E et al. Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ 2008;336:1106
    13. Lau Joseph, Ioannidis John P A, Terrin Norma, Schmid Christopher H, Olkin Ingram. The case of the misleading funnel plot. BMJ 2006;333:597
    14. Moynihan R, Henry D, Moons KGM (2014) Using Evidence to Combat Overdiagnosis and Overtreatment: Evaluating Treatments, Tests, and Disease Definitions in the Time of Too Much. PLoS Med 11(7): e1001655. doi:10.1371/journal.pmed.1001655
    15. Katz D. A holistic view of evidence based medicine. http://thehealthcareblog.com/blog/2014/05/02/a-holistic-view-of-evidence-based-medicine/
  12. Towards A Mobile Assistive Robot For Specially Abled Personals: Case Study...

    • universe.roboflow.com
    zip
    Updated Feb 25, 2025
    Cite
    robotics (2025). Towards A Mobile Assistive Robot For Specially Abled Personals: Case Study Of Turtlebot3 With Intelrealsense Payload Dataset [Dataset]. https://universe.roboflow.com/robotics-sulpp/towards-a-mobile-assistive-robot-for-specially-abled-personals-case-study-of-turtlebot3-with-intelrealsense-payload/dataset/15
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 25, 2025
    Dataset authored and provided by
    robotics
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Variables measured
    Mobility Aids Bounding Boxes
    Description

    This dataset is designed for hospital assistive robots to detect people and their mobility aids.

    Sources: Many of the images in the dataset were taken from the dataset of this paper:

    [1] A. Vasquez, M. Kollmitz, A. Eitel, and W. Burgard, "Deep detection of people and their mobility aids for a hospital robot," in 2017 European Conference on Mobile Robots (ECMR), pp. 1–7, IEEE, 2017.

    Others were collected from YouTube videos:

    [2] https://www.youtube.com/watch?v=f-1e-m7WnVo
    [3] https://www.youtube.com/watch?v=m4VzCiTvQlA
    [4] https://www.youtube.com/watch?v=QG8zq90zi0A
    [5] https://www.youtube.com/watch?v=mYNEpGDZJjc
    [6] https://www.youtube.com/watch?v=X_KMEJ7rYLc
    [7] https://www.youtube.com/watch?v=2nW78NxWpag
    [8] https://www.youtube.com/watch?v=ghudpH_d5Io

  13. osu!_standard_top50_from_2017_to_2025

    • kaggle.com
    Updated Jan 17, 2025
    Cite
    ELLIMAC (2025). osu!_standard_top50_from_2017_to_2025 [Dataset]. https://www.kaggle.com/datasets/ellimaaac/osu-standard-top50-from-2017-to-2025
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jan 17, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ELLIMAC
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
    License information was derived automatically

    Description

    osu! Standard - Top Players Archive (2019-02-28)

    Video: https://www.youtube.com/watch?v=21qqFM-XPVU

    📌 Overview

    This dataset contains an archive of the top-ranked osu! standard players. It provides insights into player rankings, performance points, and nationality, making it valuable for analyzing historical trends in osu! competitive play.

    📂 Dataset Content

    The dataset includes the following columns:

    • rank → The global ranking of the player (e.g., #1, #2, #3).
    • user link → Archived URL to the player's osu! profile.
    • country → The nationality of the player.
    • ranking-page-table_user-link href 2 → Another archived profile link (potential duplicate).
    • players name → The username of the player.
    • performance points → The pp (performance points), which measure player skill in osu!.

    🔍 Potential Use Cases

    • Competitive Analysis: Study the progression of top players.
    • Performance Trends: Analyze the distribution of performance points among the best players.
    • Historical Data: Compare with newer datasets to track skill evolution.

    📊 Data Sample

    Here’s a preview of the dataset:

    Rank  Player          Country            Performance Points
    #1    FlyingTuna      🇰🇷 South Korea     15,321 pp
    #2    idke            🇺🇸 USA             15,297 pp
    #3    Mathi           🇩🇪 Germany         14,994 pp
    #4    Freddie Benson  🇺🇸 USA             14,977 pp
    #5    nathan on osu   🇦🇺 Australia       14,690 pp
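    Once the CSV is downloaded from Kaggle, a quick look with pandas is straightforward; a sketch in which the file name and the exact formatting of the "performance points" column are assumptions:

        # Load the archive and summarize performance points by country.
        import pandas as pd

        df = pd.read_csv("osu_standard_top50.csv")  # hypothetical file name
        pp = (df["performance points"].astype(str)
                .str.replace(",", "", regex=False)
                .str.replace(" pp", "", regex=False)
                .astype(float))
        print(pp.describe())
        print(df["country"].value_counts().head())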
  14. Data Set htwddKogRob-TSDReal for Localization and Lifelong Mapping

    • data.niaid.nih.gov
    Updated Jul 19, 2024
    Cite
    Bahrmann, Frank (2024). Data Set htwddKogRob-TSDReal for Localization and Lifelong Mapping [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4270151
    Explore at:
    Dataset updated
    Jul 19, 2024
    Dataset authored and provided by
    Bahrmann, Frank
    License

    Attribution 1.0 (CC BY 1.0) (https://creativecommons.org/licenses/by/1.0/)
    License information was derived automatically

    Description

    This dataset represents a 4.7 km long tour (odometry path shown in htwddKogRob-TSDReal_path.png) in an environment whose representation (see map htwddKogRob-TSDReal.png; 1 px ≙ 0.1 m) is now obsolete. Several static objects have been moved or removed, and there are varying numbers of dynamic obstacles (people walking around).

    The work was first presented in:

    A Fuzzy-based Adaptive Environment Model for Indoor Robot Localization

    Authors: Frank Bahrmann, Sven Hellbach, Hans-Joachim Böhme

    Date of Publication: 2016/10/6

    Conference: Telehealth and Assistive Technology / 847: Intelligent Systems and Robotics

    Publisher: ACTA Press

    Additionally, we present a video of the proposed algorithm and an insight into this dataset at:

    youtube.com/AugustDerSmarte

    https://www.youtube.com/watch?v=26NBFN_XeQg

    Instructions for use

    The zip archive contains ASCII files which hold the log files of the robot observations and robot poses. Since this data set was recorded in a real environment, the logfiles hold only the odometry-based robot poses. For further information, please refer to the headers of the logfiles. To simplify parsing of the files, you can use these two Java snippets:

    Laser Range Measurements:

      // Snippet context: `line` holds one log line, `numOfLaserRays` its ray
      // count, and Error is the data set's own enum (OKAY, INVALID_MEASUREMENT).
      // Imports (at the top of the enclosing file):
      import java.util.ArrayList;
      import java.util.List;
      import java.util.StringTokenizer;

      List<Double> ranges = new ArrayList<>(numOfLaserRays);
      List<Error> errors = new ArrayList<>(numOfLaserRays);

      String s = line.substring(4);      // skip the line-type prefix
      String delimiter = "()";           // each reading is wrapped as "(usable;range)"
      StringTokenizer tokenizer = new StringTokenizer(s, delimiter);

      while (tokenizer.hasMoreElements()) {
        String[] arr = tokenizer.nextToken().split(";");
        boolean usable = !arr[0].equals("0");  // "0" flags an invalid measurement
        double range = Double.parseDouble(arr[1]);

        ranges.add(range);
        errors.add(usable ? Error.OKAY : Error.INVALID_MEASUREMENT);
      }
    

    Poses:

      // Parse a pose line; the third ':'-separated field holds "(x;y;phi)".
      String poseString = line.split(":")[2];
      String[] elements = poseString.substring(1, poseString.length() - 1).split(";");
      double x = Double.parseDouble(elements[0]);
      double y = Double.parseDouble(elements[1]);
      double phi = Double.parseDouble(elements[2]);
  15. End ALS Kaggle Challenge

    • kaggle.com
    Updated Aug 2, 2022
    Cite
    ALS Group (2022). End ALS Kaggle Challenge [Dataset]. https://www.kaggle.com/alsgroup/end-als/discussion
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Aug 2, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ALS Group
    Description

    Challenge Overview

    Amyotrophic Lateral Sclerosis (ALS) is a devastating neurological disease that affects 1 in 400 people. Patients suffer a progressive loss of voluntary muscle control leading to paralysis, difficulty speaking, swallowing and ultimately, breathing. Over 60% of patients with ALS die within 3 years of clinical presentation and 90% will die within 10 years. But there is reason for hope.

    We stand at a very important time and place in our mission to end ALS. Biotechnologies are rapidly evolving to produce new sources of data and change the way we learn about the brain in health and disease. Answer ALS has generated an unprecedented amount of clinical and biological data from ALS patients and healthy controls. We need your help to analyze that data, increase our understanding of ALS and bring clarity to potential therapeutic targets.

    Answer ALS, EverythingALS, and Roche Canada's Artificial Intelligence Centre of Excellence are requesting the collaborative effort of the AI community to fight ALS. This challenge presents a curated collection of datasets from a number of Answer ALS sources and asks you to model solutions to key questions that were developed and evaluated by ALS neurologists, researchers, and patient communities. See the Task Detail Pages for more detail.

    In 2014, Jay Fishman, chairman and CEO of The Travelers Companies, was diagnosed with ALS. He lost his battle with the disease just two years later. Although Jay realized that medical research would not move quickly enough to help him, he was determined to find and support a research effort that might someday change the prognosis for people diagnosed with ALS. The Answer ALS Research Program began in 2015. Learn more about Jay, ALS and Answer ALS in the video below.

    Challenge Details

    The tasks associated with this dataset were developed and evaluated by ALS neurologists, researchers, and patient communities. They represent key research questions, for which insights developed by the Kaggle community can be most impactful in identifying at-risk ALS populations and possible early treatments.

    To participate in this challenge, review the research questions posed in the dataset tasks and submit solutions in the form of Kaggle Notebooks.

    We encourage participants to use the presented data and if needed, additional public datasets, to create their submissions.

    Challenge Timeline

    The goal of this challenge is to connect the AI community to explore the ALS datasets, surface insights, and share findings.

    Submission deadline: May 15, 2021, 11:59 PM UTC

    On this date, each task will have one submission identified as the best response to the research question posed in the task. That submission will be marked as the "accepted solution" to that task. Submissions will be reviewed on a rolling basis, so participants are encouraged to work publicly and collaboratively to accelerate the research available for each task.

    Prizes:

    • The award will be given to the participant who is identified as best meeting the evaluation criteria. To be eligible for a prize you will need to make a submission using the submission form.
    • A multi-disciplinary group of subject matter experts drawn from named collaborators will evaluate solutions and identify a winner two weeks after the submission deadline. A team or user is eligible to receive awards for multiple tasks.
    • All judgments are considered final.
    • The winner will be awarded an opportunity to pitch to a panel of industry and VC investors (e.g., General Catalyst's Hemant Taneja, Google's Graham Spencer, and others) who have a proven track record in digital solution development, implementation, and scale.

    Dataset Content

    Overview of biology - The simplified central dogma of molecular biology for an organism is: DNA -> RNA -> Protein. Nature Scitable explanation.

    Data Description: This dataset is composed of a curated collection of 134 people with ALS or motor neuron disease (M...

  16. TL;DR Dataset: Best YouTube Alternatives for Creators in 2025

    • learningrevolution.net
    html
    Updated Sep 25, 2024
    Cite
    Jawad Khan (2024). TL;DR Dataset: Best YouTube Alternatives for Creators in 2025 [Dataset]. https://www.learningrevolution.net/youtube-alternatives/
    Explore at:
    Available download formats: html
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    Learning Revolution
    Authors
    Jawad Khan
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    YouTube
    Variables measured
    Platform, Best Use Case
    Description

    Concise comparison of the top 10 YouTube alternatives for content creators in 2025. Covers monetization, audience size, and ideal use cases.

  17. The Chemotion Files - Case 002

    • zenodo.org
    zip
    Updated Mar 28, 2024
    Cite
    Benjamin Golub (2024). The Chemotion Files - Case 002 [Dataset]. http://doi.org/10.5281/zenodo.10890293
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 28, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Benjamin Golub
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The Chemotion Files is a series of datasets that can be used to get to know the electronic lab notebook Chemotion ELN. This is the second lesson in the series. With these datasets, people can get familiar with some of the features of the electronic lab notebook in a mystery, escape-room-like "game".

    How to use The Chemotion Files:

    Download the .zip file and go to the Chemotion ELN Demo hosted by KIT (https://demo.chemotion.ibcs.kit.edu/home), the official demo version of the electronic lab notebook. If it is your first time there, you need to sign up. After signing up, you can log directly into the electronic lab notebook and upload the .zip via "import collection". After the import is finished, you will see a new collection in the ELN called "The_Chemotion_Files" with a subfolder called Case_002-The-Missing_Spectra. Now you can start and try to solve the mystery of the hidden sample.

    You can check the following video for details on how to "play" The Chemotion Files - Case 002:

    https://www.youtube.com/watch?v=Soqtw4KTT0I

    The dataset was tested with the Chemotion ELN version v1.8.1

    Comment: The data used in this riddle was published by Simone Gräßle (see cited works). I recommend not checking out the original dataset before solving the riddle; otherwise you might spoil the fun.

  18. Mountain Dew Commercial Dataset

    • universe.roboflow.com
    zip
    Updated Feb 8, 2021
    + more versions
    Cite
    Joseph Nelson (2021). Mountain Dew Commercial Dataset [Dataset]. https://universe.roboflow.com/joseph-nelson/mountain-dew-commercial/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 8, 2021
    Dataset authored and provided by
    Joseph Nelson
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Variables measured
    Bottles Bounding Boxes
    Description

    Overview

    Mountain Dew is running a $1,000,000 counting contest. Computer Vision can help you win.


    Watch our video explaining how to use this dataset.

    Image: https://i.imgur.com/ED4jpM3.png

    During Super Bowl LV, Mountain Dew sponsored an ad that encourages viewers to count all unique occurrences of Mountain Dew bottles. You can watch the full ad here. The first person to tweet the exactly correct count at Mountain Dew is eligible to win $1 million (see rules here).

    Counting things is a perfect task for computer vision.

    We uploaded the Mountain Dew video to Roboflow, extracted three images per second of the commercial (91 images from ~30 seconds), and annotated all the bottles we could see. This dataset is the result.
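    Sampling a fixed number of frames per second is a short loop with OpenCV; a sketch approximating (not reproducing) that preprocessing, with an illustrative file name:

        # Sample ~3 frames per second from a video with OpenCV.
        import cv2

        cap = cv2.VideoCapture("mountain_dew_ad.mp4")  # hypothetical file
        fps = cap.get(cv2.CAP_PROP_FPS)
        step = max(1, round(fps / 3))  # keep every Nth frame for ~3 fps
        i = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % step == 0:
                cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
                saved += 1
            i += 1
        cap.release()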

    We trained a model to recognize the Mountain Dew bottles, and then ran the original commercial back through this model. This helps identify Mountain Dew bottles that the human eye may have missed when completing counts.

    Image: https://i.imgur.com/rjZCS2a.png

    Getting Started

    Click "Fork" in the upper right hand corner or download the raw annotations in your desired format.

    Note that while the images are property of PepsiCo, we are using them here as fair-use for educational purposes and have released the annotations under a Creative Commons license.

    About Roboflow

    Roboflow enables teams to use computer vision. Our end-to-end platform enables developers to collect, organize, annotate, train, deploy, and improve their computer vision models, all without needing to hire a new ML engineering team.


  19. Truths and Tales: Understanding Online Fake News Networks in South Korea

    • researchdata.canberra.edu.au
    Updated Nov 24, 2023
    Cite
    Benedict Sheehy (2023). Truths and Tales: Understanding Online Fake News Networks in South Korea [Dataset]. http://doi.org/10.17632/3xb4n9n6t4.1
    Explore at:
    Dataset updated
    Nov 24, 2023
    Authors
    Benedict Sheehy
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    South Korea
    Description

    This study investigates the features of fake news networks and how they spread during the 2020 South Korean election. Using Actor-Network Theory (ANT), we assessed the network's central players and how they are connected. Results reveal the characteristics of the videoclips and channel networks responsible for the propagation of fake news. Analysis of the videoclip network reveals a high number of detected fake news videos and a high density of connections among users. Assessment of news videoclips on both actual and fake news networks reveals that the real news network is more concentrated. However, the scale of the network may play a role in these variations. Statistics for network centralization reveal that users are spread out over the network, pointing to its decentralized character. A closer look at the real and fake news networks inside videos and channels reveals similar trends. We find that the density of the real news videoclip network is higher than that of the fake news network, whereas the fake news channel networks are denser than their real news counterparts, which may indicate greater activity and interconnectedness in their transmission. We also found that fake news videoclips had more likes than real news videoclips, whereas real news videoclips had more dislikes than fake news videoclips. These findings strongly suggest that fake news videoclips are more accepted when people watch them on YouTube. In addition, we used semantic networks and automated content analysis to uncover common language patterns in fake news which helps us better understand the structure and dynamics of the networks involved in the dissemination of fake news. The findings reported here provide important insights on how fake news spread via social networks during the South Korean election of 2020. The results of this study have important implications for the campaign against fake news and ensuring factual coverage.
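    Density and degree centralization of the kind reported here are standard graph statistics; a small sketch with networkx (the toy edges stand in for the study's real videoclip network):

        # Network density and Freeman degree centralization with networkx.
        import networkx as nx

        G = nx.Graph([("u1", "u2"), ("u1", "u3"), ("u2", "u3"), ("u1", "u4")])

        density = nx.density(G)  # fraction of possible edges present
        deg = dict(G.degree())
        n, dmax = len(G), max(deg.values())
        # Freeman centralization: 1.0 for a star graph, 0.0 for a regular graph.
        centralization = sum(dmax - d for d in deg.values()) / ((n - 1) * (n - 2))
        print(f"density={density:.3f}, centralization={centralization:.3f}")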

  20. Synthetic Multimodal Dataset for Daily Life Activities

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip, zip
    Updated Jan 29, 2024
    + more versions
    Cite
    Takanori Ugai; Shusaku Egami; Swe Nwe Nwe Htun; Kouji Kozaki; Takahiro Kawamura; Ken Fukuda (2024). Synthetic Multimodal Dataset for Daily Life Activities [Dataset]. http://doi.org/10.5281/zenodo.8046267
    Explore at:
    Available download formats: zip, application/gzip
    Dataset updated
    Jan 29, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Takanori Ugai; Shusaku Egami; Swe Nwe Nwe Htun; Kouji Kozaki; Takahiro Kawamura; Ken Fukuda
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Outline

    • This dataset was originally created for the Knowledge Graph Reasoning Challenge for Social Issues (KGRC4SI)
    • Video data that simulates daily-life actions in a virtual space, generated from the Scenario Data
    • Knowledge graphs, and transcriptions of the video data content ("who" did what "action" with what "object," when and where, and the resulting "state" or "position" of the object)
    • Knowledge graph embedding data, created for reasoning based on machine learning
    • This data is open to the public as open data

    Details

    • Videos

      • mp4 format
      • 203 action scenarios
      • For each scenario, there is a character rear view (file name ending in 0), an indoor camera switching view (file name ending in 1), and fixed camera views placed in each corner of the room (file names ending in 2-5). For each action scenario, data was generated for between 1 and 7 patterns with different room layouts (scenes), for a total of 1,218 videos
      • Videos with slowly moving characters simulate the movements of elderly people.
    • Knowledge Graphs

      • RDF format
      • 203 knowledge graphs corresponding to the videos
      • Includes schema and location supplement information
      • The schema is described below
      • SPARQL endpoints and query examples are available (a minimal local query sketch follows the list below)
    • Script Data

      • txt format
      • Data provided to VirtualHome2KG to generate videos and knowledge graphs
      • Includes the action title and a brief description in text format.
    • Embedding
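    Because the knowledge graphs ship as RDF, they can be queried locally as well as through the published SPARQL endpoints; a minimal sketch with rdflib, in which the file name and predicate URI are placeholders rather than the dataset's actual schema:

        # Load one RDF knowledge graph and run a SPARQL query with rdflib.
        # File name and vocabulary URI are placeholders; see the dataset's
        # published schema for the real terms.
        from rdflib import Graph

        g = Graph()
        g.parse("scenario_001.ttl")  # hypothetical file from the archive
        q = "SELECT ?s ?o WHERE { ?s <http://example.org/vocab/action> ?o } LIMIT 10"
        for s, o in g.query(q):
            print(s, o)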

    Specification of Ontology

    Related Resources
