16 datasets found
  1. YouTube Revenue and Usage Statistics (2025)

    • businessofapps.com
    Updated May 22, 2018
    Cite
    Business of Apps (2018). YouTube Revenue and Usage Statistics (2025) [Dataset]. https://www.businessofapps.com/data/youtube-statistics/
    Explore at:
    Dataset updated
    May 22, 2018
    Dataset authored and provided by
    Business of Apps
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Area covered
    YouTube
    Description

    YouTube was launched in 2005. It was founded by three PayPal employees: Chad Hurley, Steve Chen, and Jawed Karim, who ran the company from an office above a small restaurant in San Mateo. The first...

  2. Google: global annual revenue 2002-2024

    • statista.com
    Updated Feb 5, 2025
    Cite
    Statista (2025). Google: global annual revenue 2002-2024 [Dataset]. https://www.statista.com/statistics/266206/googles-annual-global-revenue/
    Explore at:
    Dataset updated
    Feb 5, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    In the most recently reported fiscal year, Google's revenue amounted to 348.16 billion U.S. dollars. Google's revenue is largely made up of advertising revenue, which amounted to 264.59 billion U.S. dollars in 2024. As of October 2024, parent company Alphabet ranked first among worldwide internet companies, with a market capitalization of 2.02 trillion U.S. dollars.

    Google's revenue

    Founded in 1998, Google is a multinational internet service corporation headquartered in California, United States. Initially conceived as a web search engine built on the PageRank algorithm, Google now offers a multitude of desktop, mobile, and online products. Google Search remains the company's core web-based product, alongside advertising services, communication and publishing tools, development and statistical tools, and map-related products. Google also produces the Android mobile operating system, Chrome OS, and Google TV, as well as desktop and mobile applications such as the Google Chrome browser and mobile web applications based on pre-existing Google products. More recently, Google has been developing selected pieces of hardware, ranging from the Nexus series of mobile devices to smart home devices and driverless cars. Due to its immense scale, Google also offers a crisis response service covering disasters, turmoil, and emergencies, as well as an open-source missing-person finder in times of disaster.

    Despite the vast scope of Google products, the company still collects the majority of its revenue through online advertising on Google Sites and Google Network websites. Other revenues are generated via product licensing and, most recently, digital content and mobile apps via the Google Play Store, a distribution platform for digital content. As of September 2020, some of the highest-grossing Android apps worldwide included mobile games such as Candy Crush Saga, Pokémon Go, and Coin Master.

  3. AI Financial Market Data

    • kaggle.com
    Updated Aug 6, 2025
    Cite
    Data Science Lovers (2025). AI Financial Market Data [Dataset]. https://www.kaggle.com/datasets/rohitgrewal/ai-financial-and-market-data
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 6, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Data Science Lovers
    License

    Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/

    Description

    📹Project Video available on YouTube - https://youtu.be/WmJYHz_qn5s

    Realistic synthetic AI financial & market data for Gemini (Google), ChatGPT (OpenAI), and Llama (Meta)

    This dataset provides a synthetic, daily record of financial market activity for companies involved in Artificial Intelligence (AI). It includes key financial metrics and events that could influence a company's stock performance, such as the launch of Llama by Meta, GPT by OpenAI, and Gemini by Google. It records how much the companies spend on R&D for their AI products and services, and how much revenue they generate. The data spans January 1, 2015 to December 31, 2024 and covers three companies: OpenAI, Google, and Meta.

    The data is available as a CSV file, which we analyze using a Pandas DataFrame.

    This analysis will be helpful for those working in the finance or stock market domain.

    From this dataset, we extract various insights using Python in our project:

    1) How much did the companies spend on R&D?

    2) Revenue earned by the companies

    3) Date-wise impact on the stock

    4) Events when maximum stock impact was observed

    5) AI revenue growth of the companies

    6) Correlation between the columns

    7) Expenditure vs revenue, year by year

    8) Event impact analysis

    9) Change in the index with respect to year and company

    These are the main Features/Columns available in the dataset :

    1) Date: This column indicates the specific calendar day for which the financial and AI-related data is recorded. It allows for time-series analysis of the trends and impacts.

    2) Company: This column specifies the name of the company to which the data in that particular row belongs. Examples include "OpenAI" and "Meta".

    3) R&D_Spending_USD_Mn: This column represents the Research and Development (R&D) spending of the company, measured in Millions of USD. It serves as an indicator of a company's investment in innovation and future growth, particularly in the AI sector.

    4) AI_Revenue_USD_Mn: This column denotes the revenue generated specifically from AI-related products or services, also measured in Millions of USD. This metric highlights the direct financial success derived from AI initiatives.

    5) AI_Revenue_Growth_%: This column shows the percentage growth of AI-related revenue for the company on a daily basis. It indicates the pace at which a company's AI business is expanding or contracting.

    6) Event: This column captures any significant events or announcements made by the company that could potentially influence its financial performance or market perception. Examples include "Cloud AI launch," "AI partnership deal," "AI ethics policy update," and "AI speech recognition release." These events are crucial for understanding sudden shifts in stock impact.

    7) Stock_Impact_%: This column quantifies the percentage change in the company's stock price on a given day, likely in response to the recorded financial metrics or events. It serves as a direct measure of market reaction.
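    As a sketch of the Pandas workflow the description mentions, the snippet below builds a small in-memory sample with the documented columns (the sample values are made up; in practice you would point `pd.read_csv` at the dataset's CSV file) and computes two of the listed insights:

```python
import io
import pandas as pd

# In-memory sample mirroring the documented columns; values are illustrative only.
csv_data = io.StringIO("""Date,Company,R&D_Spending_USD_Mn,AI_Revenue_USD_Mn,AI_Revenue_Growth_%,Event,Stock_Impact_%
2023-03-01,OpenAI,120.0,80.0,1.2,GPT launch,3.5
2023-03-01,Meta,200.0,50.0,0.4,,-0.2
2023-12-06,Google,310.0,150.0,2.1,Gemini launch,2.8
""")

df = pd.read_csv(csv_data, parse_dates=["Date"])

# Insight 1: total R&D spending per company
rd_by_company = df.groupby("Company")["R&D_Spending_USD_Mn"].sum()

# Insight 4: events with the largest stock impact (rows without an event are dropped)
top_events = df.dropna(subset=["Event"]).nlargest(2, "Stock_Impact_%")
```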

  4. How to make google plus posts private - Dataset - openAFRICA

    • open.africa
    Updated Jan 4, 2018
    + more versions
    Cite
    (2018). How to make google plus posts private - Dataset - openAFRICA [Dataset]. https://open.africa/dataset/how-to-make-google-plus-posts-private
    Explore at:
    Dataset updated
    Jan 4, 2018
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    So if you have to have a G+ account (for YouTube, location services, or other reasons), here's how you can make it totally private! No one will be able to add you, send you spammy links, or otherwise annoy you. You need to visit the "Audience Settings" page: https://plus.google.com/u/0/settings/audience. You can then set a "custom audience"; usually you would use this to restrict your account to people from a specific geographic location or within a specific age range. In this case, we're going to choose a custom audience of "No-one". Check the box and hit save. Now, when people try to visit your Google+ profile, they'll see a "restricted" message. You can visit my G+ profile if you want to see this working (https://plus.google.com/114725651137252000986). If you are unable to follow this, see: http://www.livehuntz.com/google-plus/support-phone-number

  5. CCTV Action Recognition Dataset

    • kaggle.com
    Updated Sep 25, 2023
    Cite
    Jonathan Ledur-Nield (2023). CCTV Action Recognition Dataset [Dataset]. https://www.kaggle.com/datasets/jonathannield/cctv-action-recognition-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 25, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jonathan Ledur-Nield
    Description

    This action recognition dataset contains short video clips sourced from CCTV footage in existing CCTV datasets as well as from YouTube and Google. 13 action categories are present: Fall, Grab, Gun, Hit, Kick, LyingDown, Run, Sit, Stand, Sneak, Struggle, Throw, Walk. Each is represented by 200 video clips. Note that Throw, Kick, and Sneak contain 100 unique video clips each, which have been duplicated to reach 200.

    For a given video clip name "NTU_fight0003_fall_2":

    NTU: the source of the data; fight0003: the name of the video clip; fall: the action category; 2: the clip number sourced from the same original video file (in this case, the second clip).

    Test and train splits have been generated for your convenience, at the full dataset size as well as at 50% and 75% of it. Within the text files, for each video clip, 1 marks training and 2 marks evaluation.
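    A minimal sketch of parsing the naming convention above in Python, assuming underscore-separated fields with the source first, the clip number last, and the action category second-to-last (the helper name is ours):

```python
def parse_clip_name(name):
    """Split a clip name like 'NTU_fight0003_fall_2' into its parts.

    Assumes the convention described above: source first, clip number last,
    action category second-to-last, and the video-clip name in between
    (which may itself contain underscores).
    """
    parts = name.split("_")
    source = parts[0]
    clip_number = int(parts[-1])
    action = parts[-2]
    video_name = "_".join(parts[1:-2])
    return source, video_name, action, clip_number
```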

    This dataset was used in my PhD research, which identified that multiple actions occurring in a scene make the single annotation each video clip carries wildly impracticable. YOLOv5 + StrongSORT was used to isolate each person instance in each video clip; the area they occupy was cropped, extracted, and overlaid onto a blacked-out background in a new video clip, ensuring only one person is ever present in a given clip. This approach improved action recognition performance by approximately 8% when using OpenPose/AlphaPose skeletal data combined with ST-GCN and 2s-AGCN.

  6. Video-EEG Encoding-Decoding Dataset KU Leuven

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 24, 2025
    Cite
    Stebner, Axel (2025). Video-EEG Encoding-Decoding Dataset KU Leuven [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10512413
    Explore at:
    Dataset updated
    Feb 24, 2025
    Dataset provided by
    Bertrand, Alexander
    Stebner, Axel
    Tuytelaars, Tinne
    Geirnaert, Simon
    Yao, Yuanyuan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Leuven
    Description

    If using this dataset, please cite the following paper and the current Zenodo repository.

    This dataset is described in detail in the following paper:

    [1] Yao, Y., Stebner, A., Tuytelaars, T., Geirnaert, S., & Bertrand, A. (2024). Identifying temporal correlations between natural single-shot videos and EEG signals. Journal of Neural Engineering, 21(1), 016018. doi:10.1088/1741-2552/ad2333

    The associated code is available at: https://github.com/YYao-42/Identifying-Temporal-Correlations-Between-Natural-Single-shot-Videos-and-EEG-Signals?tab=readme-ov-file

    Introduction

    The research work leading to this dataset was conducted at the Department of Electrical Engineering (ESAT), KU Leuven.

    This dataset contains electroencephalogram (EEG) data collected from 19 young participants with normal or corrected-to-normal eyesight when they were watching a series of carefully selected YouTube videos. The videos were muted to avoid the confounds introduced by audio. For synchronization, a square box was encoded outside of the original frames and flashed every 30 seconds in the top right corner of the screen. A photosensor, detecting the light changes from this flashing box, was affixed to that region using black tape to ensure that the box did not distract participants. The EEG data was recorded using a BioSemi ActiveTwo system at a sample rate of 2048 Hz. Participants wore a 64-channel EEG cap, and 4 electrooculogram (EOG) sensors were positioned around the eyes to track eye movements.

    The dataset includes a total of (19 subjects x 63 min + 9 subjects x 24 min) of data. Further details can be found in the following section.

    Content

    YouTube Videos: Due to copyright constraints, the dataset includes links to the original YouTube videos along with precise timestamps for the segments used in the experiments. The features proposed in [1] have been extracted and can be downloaded here: https://drive.google.com/file/d/1J1tYrxVizrl1xP-W1imvlA_v-DPzZ2Qh/view?usp=sharing.

    Raw EEG Data: Organized by subject ID, the dataset contains EEG segments corresponding to the presented videos. Both EEGLAB .set files (containing metadata) and .fdt files (containing raw data) are provided, which can also be read by popular EEG analysis Python packages such as MNE.

    The naming convention links each EEG segment to its corresponding video. E.g., the EEG segment 01_eeg corresponds to video 01_Dance_1, 03_eeg corresponds to video 03_Acrob_1, Mr_eeg corresponds to video Mr_Bean, etc.

    The raw data have 68 channels. The first 64 channels are EEG data, and the last 4 channels are EOG data. The position coordinates of the standard BioSemi headcaps can be downloaded here: https://www.biosemi.com/download/Cap_coords_all.xls.

    Due to minor synchronization ambiguities, different clocks in the PC and EEG recorder, and missing or extra video frames during video playback (rarely occurred), the length of the EEG data may not perfectly match the corresponding video data. The difference, typically within a few milliseconds, can be resolved by truncating the modality with the excess samples.
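    A minimal sketch of that truncation step, assuming both modalities are plain sequences of samples/frames; the 2048 Hz EEG rate comes from the description above, while the 30 fps video frame rate is an assumption:

```python
def align_lengths(eeg_samples, video_frames, fs_eeg=2048, fps_video=30):
    """Truncate the longer modality so EEG and video cover the same duration.

    Computes each modality's duration from its sample/frame count, takes the
    shorter duration, and cuts both sequences back to it.
    """
    dur_eeg = len(eeg_samples) / fs_eeg
    dur_video = len(video_frames) / fps_video
    common = min(dur_eeg, dur_video)
    return (eeg_samples[: int(common * fs_eeg)],
            video_frames[: int(common * fps_video)])
```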

    Signal Quality Information: A supplementary .txt file detailing potential bad channels. Users can opt to create their own criteria for identifying and handling bad channels.

    The dataset is divided into two subsets: Single-shot and MrBean, based on the characteristics of the video stimuli.

    Single-shot Dataset

    The stimuli of this dataset consist of 13 single-shot videos (63 min in total), each depicting a single individual engaging in various activities such as dancing, mime, acrobatics, and magic shows. All the participants watched this video collection.

    Video ID Link Start time (s) End time (s)

    01_Dance_1 https://youtu.be/uOUVE5rGmhM 8.54 231.20

    03_Acrob_1 https://youtu.be/DjihbYg6F2Y 4.24 231.91

    04_Magic_1 https://youtu.be/CvzMqIQLiXE 3.68 348.17

    05_Dance_2 https://youtu.be/f4DZp0OEkK4 5.05 227.99

    06_Mime_2 https://youtu.be/u9wJUTnBdrs 5.79 347.05

    07_Acrob_2 https://youtu.be/kRqdxGPLajs 183.61 519.27

    08_Magic_2 https://youtu.be/FUv-Q6EgEFI 3.36 270.62

    09_Dance_3 https://youtu.be/LXO-jKksQkM 5.61 294.17

    12_Magic_3 https://youtu.be/S84AoWdTq3E 1.76 426.36

    13_Dance_4 https://youtu.be/0wc60tA1klw 14.28 217.18

    14_Mime_3 https://youtu.be/0Ala3ypPM3M 21.87 386.84

    15_Dance_5 https://youtu.be/mg6-SnUl0A0 15.14 233.85

    16_Mime_6 https://youtu.be/8V7rhAJF6Gc 31.64 388.61

    MrBean Dataset

    Additionally, 9 participants watched an extra 24-minute clip from the first episode of Mr. Bean, where multiple (moving) objects may exist and interact, and the camera viewpoint may change. The subject IDs and the signal quality files are inherited from the single-shot dataset.

    Video ID Link Start time (s) End time (s)

    Mr_Bean https://www.youtube.com/watch?v=7Im2I6STbms 39.77 1495.00

    Acknowledgement

    This research is funded by the Research Foundation - Flanders (FWO) project No G081722N, junior postdoctoral fellowship fundamental research of the FWO (for S. Geirnaert, No. 1242524N), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 802895), the Flemish Government (AI Research Program), and the PDM mandate from KU Leuven (for S. Geirnaert, No PDMT1/22/009).

    We also thank the participants for their time and effort in the experiments.

    Contact Information

    Executive researcher: Yuanyuan Yao, yuanyuan.yao@kuleuven.be

    Led by: Prof. Alexander Bertrand, alexander.bertrand@kuleuven.be

  7. RECOD.ai Events Dataset

    • zenodo.org
    application/gzip, pdf
    Updated Jul 17, 2024
    Cite
    José Nascimento; Anderson Rocha (2024). RECOD.ai Events Dataset [Dataset]. http://doi.org/10.5281/zenodo.5547606
    Explore at:
    Available download formats: pdf, application/gzip
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    José Nascimento; Anderson Rocha
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    This dataset consists of links to social network items for 34 different forensic events that took place between August 14th, 2018 and January 6th, 2021. The majority of the text and images are from Twitter (a minor part is from Flickr, Facebook, and Google+), and every video is from YouTube.

    Data Collection

    We used Social Tracker (https://github.com/MKLab-ITI/mmdemo-dockerized), along with the social networks' APIs, to gather most of the collections. For a minor part, we used Twint (https://github.com/twintproject/twint). In both cases, we provided keywords related to the event to retrieve the data.

    It is important to mention that, in procedures like this one, usually only a small fraction of the collected data is in fact related to the event and useful for further forensic analysis.

    Content

    We have data from 34 events, and for each of them we provide the files:

    items_full.csv: contains links to every social media post that was collected.

    images.csv: lists the images collected. Some files contain a field called "ItemUrl" that refers to the social network post (e.g., a tweet) that mentions that media.

    video.csv: URLs of the YouTube videos gathered about the event.

    video_tweet.csv: contains IDs of tweets and IDs of YouTube videos. A tweet whose ID is in this file has a video in its content; in turn, the link of a YouTube video whose ID is in this file was mentioned by at least one collected tweet. Only two collections have this file.

    description.txt: contains some standard information about the event, and possibly comments on any specific issue related to it.

    In fact, most of the collections do not include all the files above, owing to changes in our collection procedure over the course of this work.
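    As an illustration of how video_tweet.csv could be consumed, the sketch below groups tweet IDs by YouTube video ID; the column names tweet_id and video_id (and the sample IDs) are assumptions, since the source does not give the file's headers:

```python
import csv
import io

# Hypothetical sample: the source only says the file holds tweet IDs and
# YouTube video IDs, so these header names and values are made up.
sample = io.StringIO("""tweet_id,video_id
1001,dQw4w9WgXcQ
1002,uOUVE5rGmhM
1003,dQw4w9WgXcQ
""")

# Map each YouTube video to the tweets that mention it
videos_to_tweets = {}
for row in csv.DictReader(sample):
    videos_to_tweets.setdefault(row["video_id"], []).append(row["tweet_id"])
```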

    Events

    We divided the events into six groups:

    1. Fire

    • Devastating fire is the main issue of the event, therefore most of the informative pictures show flames or burned constructions

    • 14 Events

    2. Collapse

    • Most of the relevant images depict collapsed buildings, bridges, etc. (not caused by fire).

    • 5 Events

    3. Shooting

    • Likely images of guns and police officers. Little or no destruction of the environment.

    • 5 Events

    4. Demonstration

    • A plethora of people on the streets. Possibly some problem took place, but in most cases the demonstration is the actual event.

    • 7 Events

    5. Collision

    • Traffic collision. Pictures of damaged vehicles on an urban landscape. Possibly there are images with victims on the street.

    • 1 Event

    6. Flood

    • Events that range from fierce rain to a tsunami. Many pictures depict water.

    • 2 Events

    The events are listed in the file recod-ai-events-dataset-list.pdf.

    Media Content

    Due to the social networks' terms of use, we do not make the collected texts, images, and videos publicly available. However, we can provide some extra media content related to one or more events; contact the authors.

    Funding

    DéjàVu thematic project, São Paulo Research Foundation (grants 2017/12646-3, 2018/18264-8 and 2020/02241-9)

  8. COVID-19 PPE Dataset for Object Detection

    • kaggle.com
    Updated Nov 21, 2020
    Cite
    Ali Mustufa Shaikh (2020). COVID-19 PPE Dataset for Object Detection [Dataset]. http://doi.org/10.34740/kaggle/dsv/1667216
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Nov 21, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ali Mustufa Shaikh
    License

    CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Our Paper is out here: https://arxiv.org/abs/2112.09569

    We have launched a new version of the dataset, so please refer to it here: https://github.com/Rishit-dagli/CPPE-Dataset/

    COVID-19 Personal Protective Equipment (PPE) Detection Dataset for Our COVID-19 Warriors

    We wanted to do something for our COVID warriors, so we decided to use machine learning to detect the PPE kit (Mask, Face Shield, Full Cover, Gloves, Goggles) that they wear before entering a ward, so that anyone without PPE can be detected and doesn't accidentally get into a COVID ward. The dataset didn't exist, so we decided to create one.

    About Data

    We collected data from Google Image Search and frames extracted from YouTube videos (tutorials showing how to wear PPE). The images were then labeled for 5 classes (Mask, Face Shield, Full Cover, Gloves, Goggles) using LabelImg.

    We have included a sample trained model and TensorFlow record files.
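    As a sketch of consuming such annotations, the snippet below parses a minimal Pascal VOC file of the kind LabelImg writes; the file name, classes shown, and box coordinates here are made up for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal Pascal VOC annotation of the kind LabelImg produces;
# the file name and box values are illustrative only.
annotation_xml = """
<annotation>
  <filename>ppe_0001.jpg</filename>
  <object>
    <name>Mask</name>
    <bndbox><xmin>34</xmin><ymin>20</ymin><xmax>120</xmax><ymax>98</ymax></bndbox>
  </object>
  <object>
    <name>Gloves</name>
    <bndbox><xmin>150</xmin><ymin>60</ymin><xmax>210</xmax><ymax>140</ymax></bndbox>
  </object>
</annotation>
"""

# Collect (class, xmin, ymin, xmax, ymax) tuples for every labeled object
root = ET.fromstring(annotation_xml)
boxes = []
for obj in root.iter("object"):
    bb = obj.find("bndbox")
    boxes.append((obj.findtext("name"),
                  int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                  int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
```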

    Acknowledgements

    We thank the creators of these YouTube videos, the newspaper cuttings with PPE kits, and the public images available on Google Search. 1. https://youtu.be/eCcX1oIIXPE 2. https://youtu.be/R8lmIdLHEgI 3. https://youtu.be/FrauHnD9pPU 4. https://youtu.be/H4jQUBAlBrI

    Inspiration

    We would love to see how the Kaggle community utilizes this dataset! We have trained a model which gives good results (find it in the model folder).

  9. Data (i.e., evidence) about evidence based medicine

    • figshare.com
    • search.datacite.org
    png
    Updated May 30, 2023
    Cite
    Jorge H Ramirez (2023). Data (i.e., evidence) about evidence based medicine [Dataset]. http://doi.org/10.6084/m9.figshare.1093997.v24
    Explore at:
    Available download formats: png
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jorge H Ramirez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Update — December 7, 2014. – Evidence-based medicine (EBM) is not working for many reasons, for example: 1. Incorrect in their foundations (paradox): hierarchical levels of evidence are supported by opinions (i.e., lowest strength of evidence according to EBM) instead of real data collected from different types of study designs (i.e., evidence). http://dx.doi.org/10.6084/m9.figshare.1122534 2. The effect of criminal practices by pharmaceutical companies is only possible because of the complicity of others: healthcare systems, professional associations, governmental and academic institutions. Pharmaceutical companies also corrupt at the personal level, politicians and political parties are on their payroll, medical professionals seduced by different types of gifts in exchange of prescriptions (i.e., bribery) which very likely results in patients not receiving the proper treatment for their disease, many times there is no such thing: healthy persons not needing pharmacological treatments of any kind are constantly misdiagnosed and treated with unnecessary drugs. Some medical professionals are converted in K.O.L. which is only a puppet appearing on stage to spread lies to their peers, a person supposedly trained to improve the well-being of others, now deceits on behalf of pharmaceutical companies. Probably the saddest thing is that many honest doctors are being misled by these lies created by the rules of pharmaceutical marketing instead of scientific, medical, and ethical principles. Interpretation of EBM in this context was not anticipated by their creators. “The main reason we take so many drugs is that drug companies don’t sell drugs, they sell lies about drugs.” ―Peter C. Gøtzsche “doctors and their organisations should recognise that it is unethical to receive money that has been earned in part through crimes that have harmed those people whose interests doctors are expected to take care of. 
Many crimes would be impossible to carry out if doctors weren’t willing to participate in them.” —Peter C Gøtzsche, The BMJ, 2012, Big pharma often commits corporate crime, and this must be stopped. Pending (Colombia): Health Promoter Entities (In Spanish: EPS ―Empresas Promotoras de Salud).

    1. Misinterpretations. New technologies or concepts are difficult to understand in the beginning, it doesn’t matter their simplicity; we need to get used to new tools aimed to improve our professional practice. Probably the best explanation is here in these videos (credits to Antonio Villafaina for sharing these videos with me). English https://www.youtube.com/watch?v=pQHX-SjgQvQ&w=420&h=315 Spanish https://www.youtube.com/watch?v=DApozQBrlhU&w=420&h=315 ----------------------- Hypothesis: hierarchical levels of evidence based medicine are wrong. Dear Editor, I have data to support the hypothesis described in the title of this letter. Before rejecting the null hypothesis I would like to ask the following open question: Could you support with data that hierarchical levels of evidence based medicine are correct? (1,2) Additional explanation to this question: – Only respond to this question attaching publicly available raw data. – Be aware that more than a question this is a challenge: I have data (i.e., evidence) which is contrary to classic (i.e., McMaster) or current (i.e., Oxford) hierarchical levels of evidence based medicine. An important part of this data (but not all) is publicly available. References
    2. Ramirez, Jorge H (2014): The EBM challenge. figshare. http://dx.doi.org/10.6084/m9.figshare.1135873
    3. The EBM Challenge Day 1: No Answers. Competing interests: I endorse the principles of open data in human biomedical research. Read this letter on The BMJ – August 13, 2014. http://www.bmj.com/content/348/bmj.g3725/rr/762595 Re: Greenhalgh T, et al. Evidence based medicine: a movement in crisis? BMJ 2014; 348: g3725. Fileset contents: Raw data: Excel archive: raw data, interactive figures, and PubMed search terms. Google Spreadsheet is also available (URL below the article description). Figure 1. Unadjusted (Fig 1A) and adjusted (Fig 1B) PubMed publication trends (01/01/1992 to 30/06/2014). Figure 2. Adjusted PubMed publication trends (07/01/2008 to 29/06/2014). Figure 3. Google search trends: Jan 2004 to Jun 2014 / 1-week periods. Figure 4. PubMed publication trends (1962-2013): systematic reviews and meta-analysis, clinical trials, and observational studies.
      Figure 5. Ramirez, Jorge H (2014): Infographics: Unpublished US phase 3 clinical trials (2002-2014) completed before Jan 2011 = 50.8%. figshare.http://dx.doi.org/10.6084/m9.figshare.1121675 Raw data: "13377 studies found for: Completed | Interventional Studies | Phase 3 | received from 01/01/2002 to 01/01/2014 | Worldwide". This database complies with the terms and conditions of ClinicalTrials.gov: http://clinicaltrials.gov/ct2/about-site/terms-conditions Supplementary Figures (S1-S6). PubMed publication delay in the indexation processes does not explain the descending trends in the scientific output of evidence-based medicine. Acknowledgments I would like to acknowledge the following persons for providing valuable concepts in data visualization and infographics:
    4. Maria Fernanda Ramírez. Professor of graphic design. Universidad del Valle. Cali, Colombia.
    5. Lorena Franco. Graphic design student. Universidad del Valle. Cali, Colombia. Related articles by this author (Jorge H. Ramírez)
    6. Ramirez JH. Lack of transparency in clinical trials: a call for action. Colomb Med (Cali) 2013;44(4):243-6. URL: http://www.ncbi.nlm.nih.gov/pubmed/24892242
    7. Ramirez JH. Re: Evidence based medicine is broken (17 June 2014). http://www.bmj.com/node/759181
    8. Ramirez JH. Re: Global rules for global health: why we need an independent, impartial WHO (19 June 2014). http://www.bmj.com/node/759151
    9. Ramirez JH. PubMed publication trends (1992 to 2014): evidence based medicine and clinical practice guidelines (04 July 2014). http://www.bmj.com/content/348/bmj.g3725/rr/759895 Recommended articles
    10. Greenhalgh Trisha, Howick Jeremy,Maskrey Neal. Evidence based medicine: a movement in crisis? BMJ 2014;348:g3725
    11. Spence Des. Evidence based medicine is broken BMJ 2014; 348:g22
    12. Schünemann Holger J, Oxman Andrew D,Brozek Jan, Glasziou Paul, JaeschkeRoman, Vist Gunn E et al. Grading quality of evidence and strength of recommendations for diagnostic tests and strategies BMJ 2008; 336:1106
    13. Lau Joseph, Ioannidis John P A, TerrinNorma, Schmid Christopher H, OlkinIngram. The case of the misleading funnel plot BMJ 2006; 333:597
    14. Moynihan R, Henry D, Moons KGM (2014) Using Evidence to Combat Overdiagnosis and Overtreatment: Evaluating Treatments, Tests, and Disease Definitions in the Time of Too Much. PLoS Med 11(7): e1001655. doi:10.1371/journal.pmed.1001655
    15. Katz D. A-holistic view of evidence based medicine. http://thehealthcareblog.com/blog/2014/05/02/a-holistic-view-of-evidence-based-medicine/
  10. Accident Detection Model Dataset

    • universe.roboflow.com
    zip
    Updated Apr 8, 2024
    Cite
    Accident detection model (2024). Accident Detection Model Dataset [Dataset]. https://universe.roboflow.com/accident-detection-model/accident-detection-model/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 8, 2024
    Dataset authored and provided by
    Accident detection model
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Accident Bounding Boxes
    Description

    Accident-Detection-Model

    Accident Detection Model is built with YOLOv8, Google Colab, Python, Roboflow, deep learning, OpenCV, machine learning, and artificial intelligence. It can detect an accident from a live camera feed, image, or video. The model is trained on a dataset of 3,200+ images, which were annotated on Roboflow.

    Problem Statement

    • Road accidents are a major problem in India, with thousands of people losing their lives and many more suffering serious injuries every year.
    • According to the Ministry of Road Transport and Highways, India witnessed around 4.5 lakh road accidents in 2019, which resulted in the deaths of more than 1.5 lakh people.
    • The age group hit hardest by road accidents is 18 to 45 years old, accounting for almost 67 percent of all accidental deaths.

    Accidents survey

    ![Survey](https://user-images.githubusercontent.com/78155393/233774342-287492bb-26c1-4acf-bc2c-9462e97a03ca.png)

    Literature Survey

    • Sreyan Ghosh (Mar 2019): developed a system using a deep learning convolutional neural network trained to classify video frames as accident or non-accident.
    • Deeksha Gour (Sep 2019): used computer vision technology, neural networks, deep learning, and various approaches and algorithms to detect objects.

    Research Gap

    • Lack of real-world data: we trained the model on more than 3,200 images.
    • Large interpretability time and space needed: we use Google Colab to reduce the time and space required.
    • Outdated versions in previous works: we are using the latest version, YOLOv8.

    Proposed methodology

    • We are using YOLOv8 to train on our custom dataset of 3,200+ images, collected from different platforms.
    • After 25 training iterations, the model is ready to detect an accident with a significant probability.

    Model Set-up

    Preparing Custom dataset

    • We collected 1,200+ images from different sources such as YouTube, Google Images, and Kaggle.com.
    • We then annotated all of them individually with a tool called Roboflow.
    • During annotation, we marked images with no accident as NULL and drew a bounding box around the accident site in images containing one.
    • We then divided the dataset into train, val, and test splits in an 8:1:1 ratio.
    • As the final step, we downloaded the dataset in YOLOv8 format.
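The 8:1:1 split described above can be sketched in a few lines of Python. This is a minimal illustration, not the original workflow; the function name, seed, and placeholder filenames are assumptions.

```python
import random

def split_dataset(image_paths, seed=42):
    """Shuffle and split image paths into train/val/test at an 8:1:1 ratio.

    Illustrative sketch: the seed and function name are hypothetical,
    not taken from the original project.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n = len(paths)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return {
        "train": paths[:n_train],
        "val": paths[n_train:n_train + n_val],
        "test": paths[n_train + n_val:],
    }

# Example with placeholder filenames:
splits = split_dataset([f"img_{i:04d}.jpg" for i in range(3200)])
print(len(splits["train"]), len(splits["val"]), len(splits["test"]))  # 2560 320 320
```

In practice Roboflow performs this split when exporting the dataset; the sketch only shows the arithmetic behind the 8:1:1 ratio.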
      #### Using Google Colab
    • We are using Google Colaboratory to code this model because Colab provides GPU acceleration, which is much faster than most local environments.
    • Google Colab lets you write and run Python code in Jupyter notebooks, which blend code, text, and visualisations in a single document.
    • Users can run individual code cells and quickly view the results, which is helpful for experimenting and debugging. Notebooks also support visualisations built with well-known frameworks such as Matplotlib, Seaborn, and Plotly.
    • In Google Colab, we first changed the runtime type to GPU.
    • We verified the GPU was active by running ‘!nvidia-smi’.
      #### Coding
    • First, we installed YOLOv8 with the command ‘!pip install ultralytics==8.0.20’.
    • We then imported it with ‘from ultralytics import YOLO’ and ‘from IPython.display import display, Image’.
    • Next, we mounted our Google Drive account with ‘from google.colab import drive’ followed by ‘drive.mount('/content/drive')’.
    • We then ran the main training command: ‘%cd /content/drive/MyDrive/Accident Detection model’ and ‘!yolo task=detect mode=train model=yolov8s.pt data=data.yaml epochs=1 imgsz=640 plots=True’.
    • After training, we validated and tested the model with ‘!yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml’ and ‘!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt conf=0.25 source=data/test/images’.
    • To get results from any video or image, we ran ‘!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt source="/content/drive/MyDrive/Accident-Detection-model/data/testing1.jpg/mp4"’.
    • The results are stored in the runs/detect/predict folder.
      Hence our model is trained, validated, and tested, and can detect accidents in any video or image.

    Challenges I ran into

    I ran into three major problems while making this model:

    • I had difficulty saving the results to a folder; since YOLOv8 is the latest version, it is still under development. After reading some blogs and referring to Stack Overflow, I learned that in the new v8 you need to pass an extra argument, ‘save=True’, which made the results save to a folder.
    • I was facing a problem on the CVAT website because I was not sure what
  11. Waterhackweek2019 Data Access and Time-series Statistics Cyberseminar

    • hydroshare.org
    • search.dataone.org
    • +1more
    zip
    Updated Apr 1, 2019
    + more versions
    Cite
    Emilio Mayorga; Yifan Cheng (2019). Waterhackweek2019 Data Access and Time-series Statistics Cyberseminar [Dataset]. http://doi.org/10.4211/hs.9985b3cb38c94cee872b28f6dcdef739
    Explore at:
    Available download formats: zip (3.7 MB)
    Dataset updated
    Apr 1, 2019
    Dataset provided by
    HydroShare
    Authors
    Emilio Mayorga; Yifan Cheng
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data about water are found in many types of formats distributed by many different sources and depicting different spatial representations such as points, polygons and grids. How do we find and explore the data we need for our specific research or application? This seminar will present common challenges and strategies for finding and accessing relevant datasets, focusing on time series data from sites commonly represented as fixed geographical points. This type of data may come from automated monitoring stations such as river gauges and weather stations, from repeated in-person field observations and samples, or from model output and processed data products. We will present and explore useful data catalogs, including the CUAHSI HIS catalog accessible via HydroClient, CUAHSI HydroShare, the EarthCube Data Discovery Studio, Google Dataset search, and agency-specific catalogs. We will also discuss programmatic data access approaches and tools in Python, particularly the ulmo data access package, touching on the role of community standards for data formats and data access protocols. Once we have accessed datasets we are interested in, the next steps are typically exploratory, focusing on visualization and statistical summaries. This seminar will illustrate useful approaches and Python libraries used for processing and exploring time series data, with an emphasis on the distinctive needs posed by temporal data. Core Python packages used include Pandas, GeoPandas, Matplotlib and the geospatial visualization tools introduced at the last seminar. Approaches presented can be applied to other data types that can be summarized as single time series, such as averages over a watershed or data extracts from a single cell in a gridded dataset – the topic for the next seminar.
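The temporal aggregation the seminar describes (daily or monthly summaries of fixed-point time series) can be illustrated with a small standard-library sketch. The readings below are synthetic placeholders, not real gauge data; in practice the values would come from a catalog or service such as CUAHSI HydroClient or the ulmo package, and pandas' `resample("D").mean()` would do this grouping in one call.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical hourly river-stage readings over three days (synthetic data).
start = datetime(2019, 4, 1)
readings = [(start + timedelta(hours=h), 2.0 + 0.1 * (h % 24)) for h in range(72)]

# Group readings by calendar day and take the daily mean -- the kind of
# temporal aggregation pandas performs with resample("D").mean().
by_day = defaultdict(list)
for ts, value in readings:
    by_day[ts.date()].append(value)

daily_means = {day: round(mean(vals), 3) for day, vals in sorted(by_day.items())}
for day, avg in daily_means.items():
    print(day, avg)
```

The same pattern extends to any series that can be reduced to one value per time bin, such as watershed averages or a single grid cell extracted from a gridded product.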

    The cyberseminar recording is available on YouTube at https://youtu.be/uQXuS1AB2M0

  12. Countries with the most YouTube users 2025

    • statista.com
    Updated Feb 17, 2025
    Cite
    Statista (2025). Countries with the most YouTube users 2025 [Dataset]. https://www.statista.com/statistics/280685/number-of-monthly-unique-youtube-users/
    Explore at:
    Dataset updated
    Feb 17, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Feb 2025
    Area covered
    YouTube, Worldwide
    Description

    As of February 2025, India was the country with the largest YouTube audience by far, with approximately 491 million users engaging with the popular social video platform. The United States followed, with around 253 million YouTube viewers. Brazil came in third, with 144 million users watching content on YouTube. The United Kingdom saw around 54.8 million internet users engaging with the platform in the examined period. What country has the highest percentage of YouTube users? In July 2024, the United Arab Emirates was the country with the highest YouTube penetration worldwide, as around 94 percent of the country's digital population engaged with the service. In 2024, YouTube counted around 100 million paid subscribers for its YouTube Music and YouTube Premium services. YouTube mobile markets In 2024, YouTube was among the most popular social media platforms worldwide. In terms of revenues, the YouTube app generated approximately 28 million U.S. dollars in revenues in the United States in January 2024, as well as 19 million U.S. dollars in Japan.

  13. United States: number of internet users 2015-2025

    • statista.com
    Updated Apr 29, 2025
    Cite
    Statista (2025). United States: number of internet users 2015-2025 [Dataset]. https://www.statista.com/statistics/276445/number-of-internet-users-in-the-united-states/
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    United States
    Description

    As of February 2025, around 322 million people in the United States accessed the internet, making it one of the largest online markets worldwide. The country currently ranks third after China and India by the online audience size. Overview of internet usage in the United States The digital population in the United States has constantly increased in recent years. Among the most common reasons is the growing accessibility of broadband internet. A big part of the country's digital audience accesses the web via mobile phones. In 2024, the country saw an estimated 97.1 percent mobile internet user penetration. According to a 2024 survey, over 51 percent of U.S. women and 43 percent of men said it is important to them to have mobile internet access anywhere, at any time. Another 41 percent of respondents could not imagine their everyday life without the internet. Google and YouTube are the most visited websites in the country, while music, food, and drinks were the most discussed online topics. Internet usage demographics in the United States While some users can no longer imagine their life without the internet, others do not use it at all. According to 2021 data, 25 percent of U.S. adults 65 and older reported not using the internet. Despite this, online usage was strong across other age groups, especially young adults aged 18 to 49. This age group also reported the highest percentage of smartphone usage in the country as of 2023. Due to a persistent lack of connectivity in rural areas, more online users were based in urban areas of the U.S. than in the countryside.

  14. Daily time spent online by users worldwide Q3 2024, by region

    • statista.com
    Updated Jun 23, 2025
    Cite
    Statista (2025). Daily time spent online by users worldwide Q3 2024, by region [Dataset]. https://www.statista.com/statistics/1258232/daily-time-spent-online-worldwide/
    Explore at:
    Dataset updated
    Jun 23, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    World
    Description

    As of the third quarter of 2024, internet users in South Africa spent more than **** hours and ** minutes online per day, ranking first among the regions worldwide. Brazil followed, with roughly **** hours of daily online usage. As of the examined period, Japan registered the lowest number of daily hours spent online, with users in the country spending an average of over **** hours per day using the internet. The data includes the daily time spent online on any device. Social media usage In recent years, social media has become integral to internet users' daily lives, with users spending an average of *** minutes daily on social media activities. In April 2024, global social network penetration reached **** percent, highlighting its widespread adoption. Among the various platforms, YouTube stands out, with over *** billion monthly active users, making it one of the most popular social media platforms. YouTube’s global popularity In 2023, the keyword "YouTube" ranked among the most popular search queries on Google, highlighting the platform's immense popularity. YouTube generated most of its traffic through mobile devices, with about 98 billion visits. This popularity was particularly evident in the United Arab Emirates, where YouTube penetration reached approximately **** percent, the highest in the world.

  15. UK children daily time on selected social media apps 2024

    • statista.com
    Updated Jun 24, 2025
    Cite
    Statista (2025). UK children daily time on selected social media apps 2024 [Dataset]. https://www.statista.com/statistics/1124962/time-spent-by-children-on-social-media-uk/
    Explore at:
    Dataset updated
    Jun 24, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2024
    Area covered
    United Kingdom
    Description

    In 2024, children in the United Kingdom spent an average of *** minutes per day on TikTok. This was followed by Instagram, as children in the UK reported using the app for an average of ** minutes daily. Children in the UK aged between four and 18 years also used Facebook for ** minutes a day on average in the measured period. Mobile ownership and usage among UK children In 2021, around ** percent of kids aged between eight and 11 years in the UK owned a smartphone, while children aged between five and seven having access to their own device were approximately ** percent. Mobile phones were also the second most popular devices used to access the web by children aged between eight and 11 years, as tablet computers were still the most popular option for users aged between three and 11 years. Children were not immune to the popularity acquired by short video format content in 2020 and 2021, spending an average of ** minutes per day engaging with TikTok, as well as over ** minutes on the YouTube app in 2021. Children data protection In 2021, ** percent of U.S. parents and ** percent of UK parents reported being slightly concerned with their children’s device usage habits. While the share of parents reporting to be very or extremely concerned was considerably smaller, children are considered among the most vulnerable digital audiences and need additional attention when it comes to data and privacy protection. According to a study conducted during the first quarter of 2022, ** percent of children’s apps hosted in the Google Play Store and ** percent of apps hosted in the Apple App Store transmitted users’ locations to advertisers. Additionally, ** percent of kids’ apps were found to collect persistent identifiers, such as users’ IP addresses, which could potentially lead to Children’s Online Privacy Protection Act (COPPA) violations in the United States. 
In the United Kingdom, companies have to take into account several obligations when considering online environments for children, including an age-appropriate design and avoiding sharing children’s data.

  16. Facebook: distribution of global audiences 2024, by age and gender

    • statista.com
    • de.statista.com
    • +1more
    + more versions
    Cite
    Stacy Jo Dixon, Facebook: distribution of global audiences 2024, by age and gender [Dataset]. https://www.statista.com/topics/1164/social-networks/
    Explore at:
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Stacy Jo Dixon
    Description

    As of April 2024, men between the ages of 25 and 34 years made up Facebook's largest audience, accounting for 18.4 percent of global users. Facebook's second-largest audience was men aged 18 to 24 years.

              Facebook connects the world

              Founded in 2004 and going public in 2012, Facebook is one of the biggest internet companies in the world, with influence that goes beyond social media. It is widely considered one of the Big Four tech companies, along with Google, Apple, and Amazon (together known under the acronym GAFA). Facebook is the most popular social network worldwide, and the company also owns three other billion-user properties: the mobile messaging apps WhatsApp and Facebook Messenger, as well as the photo-sharing app Instagram.

              Facebook users

              The vast majority of Facebook users connect to the social network via mobile devices. This is unsurprising, as Facebook has many users in mobile-first online markets. Currently, India ranks first in terms of Facebook audience size with 378 million users. The United States, Brazil, and Indonesia also each have more than 100 million Facebook users.
    
