9 datasets found
  1. Formula One (F1) racing global TV audience 2021

    • statista.com
    Updated Jun 26, 2025
    Cite
    Statista (2025). Formula One (F1) racing global TV audience 2021 [Dataset]. https://www.statista.com/statistics/480129/cable-or-broadcast-tv-networks-formula-one-f1-racing-watched-within-the-last-12-months-usa/
    Dataset updated
    Jun 26, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    This statistic illustrates the number of Formula One (F1) racing TV viewers worldwide from 2008 to 2021. According to the source, the global audience for Formula One in 2021 stood at *** million viewers, an increase of roughly three percent over the previous year.

  2. Raw questionnaire data

    • figshare.com
    txt
    Updated Apr 24, 2022
    Cite
    Heidi Silvennoinen; Saskia Kuliga; Pieter Herthogs; Daniela Rodrigues Recchia; Bige Tuncer (2022). Raw questionnaire data [Dataset]. http://doi.org/10.6084/m9.figshare.12221711.v1
    Available download formats: txt
    Dataset updated
    Apr 24, 2022
    Dataset provided by
    figshare
    Authors
    Heidi Silvennoinen; Saskia Kuliga; Pieter Herthogs; Daniela Rodrigues Recchia; Bige Tuncer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the raw questionnaire data collected from the virtual reality experiment.

    Each row contains one participant's answers to all questionnaire items for one environment. Given that each participant experienced nine environments, the file contains 9 rows per participant. Environments are represented by a number (1-14), see explanation below. For questions included in the questionnaire, please see the file "Environment questionnaire."
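
    As a rough illustration (not part of the dataset documentation), the file can be loaded and its nine-rows-per-participant structure checked with pandas; the tab delimiter and the column names "Participant" and "Environment" are assumptions, since the actual header is not reproduced in this listing.

    # Hedged sketch: load the raw questionnaire file and check rows per participant.
    # Assumptions: tab-separated values and columns named "Participant" and "Environment".
    import pandas as pd

    df = pd.read_csv("raw_questionnaire_data.txt", sep="\t")

    # Each participant should contribute 9 rows (one per environment experienced).
    rows_per_participant = df.groupby("Participant")["Environment"].count()
    print(rows_per_participant[rows_per_participant != 9])  # participants with missing rows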

    Environments

    Virtual environments in the experiment varied along 4 dimensions (location, building height, facade quality and number of people present). The following list explains the features of each environment.

    1. S-H0-F0-P0
    2. S-H2-F0-P0
    3. S-H0-F1-P0
    4. S-H0-F0-P1
    5. S-H0-F1-P1
    6. S-H1-F1-P1
    7. S-H2-F1-P1
    8. B-H0-F0-P0
    9. B-H2-F0-P0
    10. B-H0-F1-P0
    11. B-H0-F0-P1
    12. B-H0-F1-P1
    13. B-H1-F1-P1
    14. B-H2-F1-P1

    B and S refer to the locations Bedok and Simei respectively.

    H refers to building height and has three levels: 0 - uniformly tall; 1 - tall at the back, low at the front; 2 - uniformly low.

    F refers to facade quality and has two levels: 0 - low quality; 1 - high quality.

    P refers to the number of people present and has two levels: 0 - few; 1 - many.

    Environments experienced by each group:
    Group A (participants 1-25, 51-52): 1, 2, 6, 7, 8, 10, 11, 12, 14
    Group B (participants 26-50): 1, 3, 4, 5, 7, 8, 9, 13, 14

    Environment quality

    The column "Environment quality" refers to hypothesised environment quality. According to our hypothesis, environments with low buildings, high-quality facades and many people present would be the best environments, with each of these features being independently positive. Thus environment quality ranges from 0-3, with a score of 0 being achieved when all these features are at their "worst" (tall buildings, low-quality facades, and few people present), and a score of 3 being achieved when all features have their best possible value.

    Note that building height has 3 possible values. In this case, tall buildings gain a score of 0, while low buildings gain a score of 1. Buildings that are tall at the back and low at the front gain a score of 0.5. Facade quality and presence of people both have possible values 0 and 1. Thus possible environment scores are 0, 0.5, 1, 1.5, 2, 2.5 and 3.
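
    To make the scoring concrete, the sketch below parses an environment code such as "B-H2-F1-P1" and computes the hypothesised environment quality exactly as described above (height: tall = 0, mixed = 0.5, low = 1; facade and people each contribute 0 or 1). The function name and code handling are illustrative only.

    # Hypothesised environment quality from a code such as "B-H2-F1-P1"
    # (location-Height-Facade-People), following the scheme described above.
    def environment_quality(code: str) -> float:
        location, h, f, p = code.split("-")
        height_score = {"H0": 0.0, "H1": 0.5, "H2": 1.0}[h]   # tall = 0, mixed = 0.5, low = 1
        facade_score = {"F0": 0.0, "F1": 1.0}[f]              # low quality = 0, high quality = 1
        people_score = {"P0": 0.0, "P1": 1.0}[p]              # few = 0, many = 1
        return height_score + facade_score + people_score     # range 0.0 to 3.0

    print(environment_quality("B-H2-F1-P1"))  # 3.0 -> hypothesised best environment
    print(environment_quality("S-H0-F0-P0"))  # 0.0 -> hypothesised worst environment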

    Missing data

    Some participants did not complete the questionnaire fully, and some technical errors occurred. For these reasons, there is some missing data.

  3. A few samples from the dataset.

    • figshare.com
    xls
    Updated Sep 28, 2023
    + more versions
    Cite
    Patrick Bernard Washington; Pradeep Gali; Furqan Rustam; Imran Ashraf (2023). A few samples from the dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0286541.t003
    Available download formats: xls
    Dataset updated
    Sep 28, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Patrick Bernard Washington; Pradeep Gali; Furqan Rustam; Imran Ashraf
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    COVID-19 affected the world’s economy severely and increased the inflation rate in both developed and developing countries. COVID-19 also affected the financial markets and crypto markets significantly; however, some crypto markets flourished and touched their peak during the pandemic era. This study performs an analysis of the impact of COVID-19 on public opinion and sentiments regarding the financial markets and crypto markets. It conducts sentiment analysis on tweets related to financial markets and crypto markets posted during COVID-19 peak days. Using sentiment analysis, it investigates people’s sentiments regarding investment in these markets during COVID-19. In addition, damage analysis in terms of market value is also carried out, along with identification of the worst period for the financial and crypto markets. For analysis, the data is extracted from Twitter using the SNSscraper library. This study proposes a hybrid model called CNN-LSTM (convolutional neural network-long short-term memory model) for sentiment classification. CNN-LSTM performs best, with F1 scores of 0.89 and 0.92 for the crypto and financial markets, respectively. Moreover, topic extraction from the tweets is also performed, along with the sentiments related to each topic.
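
    For readers who want to reproduce the general idea, a minimal PyTorch sketch of a hybrid CNN-LSTM text classifier follows; the vocabulary size, layer widths, and three-class output are illustrative assumptions, not the configuration reported by the authors.

    # Minimal sketch of a CNN-LSTM sentiment classifier (illustrative hyperparameters).
    import torch
    import torch.nn as nn

    class CNNLSTMSentiment(nn.Module):
        def __init__(self, vocab_size=20000, embed_dim=128, num_classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2)  # local n-gram features
            self.lstm = nn.LSTM(64, 64, batch_first=True)                   # sequence context
            self.fc = nn.Linear(64, num_classes)

        def forward(self, token_ids):                  # token_ids: (batch, seq_len)
            x = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
            x = self.conv(x.transpose(1, 2)).relu()    # (batch, 64, seq_len)
            x, _ = self.lstm(x.transpose(1, 2))        # (batch, seq_len, 64)
            return self.fc(x[:, -1, :])                # class logits from the final step

    print(CNNLSTMSentiment()(torch.randint(0, 20000, (2, 40))).shape)  # torch.Size([2, 3])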

  4. Detection of Areas with Human Vulnerability Using Public Satellite Images...

    • zenodo.org
    zip
    Updated Sep 16, 2024
    Cite
    Flavio de Barros Vidal; Flavio de Barros Vidal (2024). Detection of Areas with Human Vulnerability Using Public Satellite Images and Deep Learning (Dataset) [Dataset]. http://doi.org/10.5281/zenodo.13768463
    Available download formats: zip
    Dataset updated
    Sep 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Flavio de Barros Vidal; Flavio de Barros Vidal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2023
    Description

    Overview

    This repository contains the code and resources for the project titled "Detection of Areas with Human Vulnerability Using Public Satellite Images and Deep Learning". The goal of this project is to identify regions where individuals are living under precarious conditions and facing neglected basic needs, a situation often seen in Brazil. This concept is referred to as "human vulnerability" and is exemplified by families living in inadequate shelters or on the streets in both urban and rural areas.

    Focusing on the Federal District of Brazil as the research area, this project aims to develop two novel public datasets consisting of satellite images. The datasets contain imagery captured at 50m and 100m scales, covering regions of human vulnerability, traditional areas, and improperly disposed waste sites.

    The project also leverages these datasets for training deep learning models, including YOLOv7 and other state-of-the-art models, to perform image segmentation. A comparative analysis is conducted between the models using two training strategies: training from scratch with random weight initialization and fine-tuning using pre-trained weights through transfer learning.
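
    The listing does not give the exact training configuration; the sketch below only contrasts the two strategies (random initialization versus an ImageNet pre-trained encoder that is then fine-tuned) using the segmentation_models_pytorch library as an assumed stand-in, with the encoder and single-class mask chosen for illustration.

    # Hedged sketch of the two training strategies for a segmentation model.
    import segmentation_models_pytorch as smp

    # Strategy 1: training from scratch (random weight initialization).
    model_scratch = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                             in_channels=3, classes=1)

    # Strategy 2: transfer learning (ImageNet pre-trained encoder, then fine-tuned
    # on the 50m/100m satellite image datasets).
    model_finetune = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                              in_channels=3, classes=1)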

    Key Achievements

    • Two new satellite image datasets focusing on human vulnerability and improperly disposed waste sites, available in public domains.
    • Comparison of image segmentation models, including YOLOv7 and Segmentation Models, with performance metrics.
    • Best F1-scores: 0.55 for YOLOv7 and 0.64 for Segmentation Models.

    This repository provides the code, models, and data pipelines used for training, evaluation, and performance comparison of these deep learning models.

    Citation (Bibtex)

    @TECHREPORT {TechReport-Julia-Laura-HumanVulnerability-2024,
      author      = "Julia Passos Pontes and Laura Maciel Neves Franco and Flavio De Barros Vidal",
      title       = "Detecção de Áreas com Atividades de Vulnerabilidade Humana utilizando Imagens Públicas de Satélites e Aprendizagem Profunda",
      institution = "University of Brasilia",
      year        = "2024",
      type        = "Undergraduate Thesis",
      address     = "Computer Science Department - University of Brasilia - Asa Norte - Brasilia - DF, Brazil",
      month       = "aug",
      note        = "People living in precarious conditions and with their basic needs neglected is an unfortunate reality in Brazil. This scenario will be approached in this work according to the concept of ``human vulnerability'' and can be exemplified through families who live in inadequate shelters, without basic structures and on the streets of urban or rural centers. Therefore, assuming the Federal District as the research scope, this project proposes to develop two new databases to be made available publicly, considering the map scales of 50m and 100m, and composed of satellite images of human vulnerability areas, regions treated as traditional, and waste disposed of inadequately. Furthermore, using these image bases, trainings were done with the YOLOv7 model and other deep learning models for image segmentation. By adopting an exploratory approach, this work compares the results of different image segmentation models and training strategies, using random weight initialization (from scratch) and pre-trained weights (transfer learning). Thus, the present work was able to reach maximum F1-score values of 0.55 for YOLOv7 and 0.64 for other segmentation models."
    }
    

    License

    This project is licensed under the MIT License - see the LICENSE file for details.

  5. A Dataset for Troll Classification of Tamil Memes

    • zenodo.org
    zip
    Updated May 16, 2021
    Cite
    Bharathi Raja Chakravarthi; Shardul Suryawanshi; Bharathi Raja Chakravarthi; Shardul Suryawanshi (2021). A Dataset for Troll Classification of Tamil Memes [Dataset]. http://doi.org/10.5281/zenodo.4765573
    Available download formats: zip
    Dataset updated
    May 16, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bharathi Raja Chakravarthi; Shardul Suryawanshi; Bharathi Raja Chakravarthi; Shardul Suryawanshi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Social media are interactive platforms that facilitate the creation or sharing of information, ideas or other forms of expression among people. This exchange is not free from offensive, trolling or malicious content targeting users or communities. One way of trolling is by making memes, which in most cases combine an image with a concept or catchphrase. The challenge of dealing with memes is that they are region-specific and their meaning is often obscured in humour or sarcasm. To facilitate the computational modelling of trolling in memes for Indian languages, we created a meme dataset for Tamil (TamilMemes). We annotated and released the dataset containing suspected troll and not-troll memes. In this paper, we use an image classifier to address the difficulties involved in the classification of troll memes with the existing methods. We found that identifying a troll meme with such an image classifier is not feasible, which has been corroborated with precision, recall and F1-score.

    The internet has facilitated its user-base with a platform to communicate and express their views without any censorship. On the other hand, this freedom of expression or free speech can be abused by its user or a troll to demean an individual or a group. Demeaning people based on their gender, sexual orientation, religious beliefs or any other characteristics (trolling) could cause great distress in the online community. Hence, the content posted by a troll needs to be identified and dealt with before causing any more damage. Amongst all the forms of troll content, memes are most prevalent due to their popularity and ability to propagate across cultures. A troll uses a meme to demean, attack or offend its targeted audience. In this shared task, we provide a resource (TamilMemes) that could be used to train a system capable of identifying a troll meme in the Tamil language. In our TamilMemes dataset, each meme has been categorized into either a “troll” or a “not_troll” class. Along with the meme images, we also provided the Latin-transcribed text from memes. We received 10 system submissions from the participants, which were evaluated using the weighted average F1-score. The system with the weighted average F1-score of 0.55 secured the first rank.
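
    Since submissions were ranked by the weighted average F1-score, a short scikit-learn sketch of that metric follows; the labels shown are placeholders, not data from TamilMemes.

    # Weighted average F1-score, the ranking metric of the shared task.
    from sklearn.metrics import f1_score

    y_true = ["troll", "not_troll", "troll", "not_troll", "not_troll"]
    y_pred = ["troll", "not_troll", "not_troll", "not_troll", "troll"]

    print(f1_score(y_true, y_pred, average="weighted"))  # per-class F1 weighted by support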

    @inproceedings{suryawanshi-etal-2020-tamil-meme,
    title = "A Dataset for Troll Classification of {Tamil} Memes",
    author = "Suryawanshi, Shardul and
     Chakravarthi, Bharathi Raja and
     Verma, Pranav and
     Arcan, Mihael and
     McCrae, John P and
     Buitelaar, Paul",
    booktitle = "Proceedings of the 5th Workshop on Indian Language Data Resource and Evaluation (WILDRE-5)",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association (ELRA)"
    }
    
    @inproceedings{suryawanshi-chakravarthi-2021-findings,
      title = "Findings of the Shared Task on Troll Meme Classification in {T}amil",
      author = "Suryawanshi, Shardul and
       Chakravarthi, Bharathi Raja",
      booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
      month = apr,
      year = "2021",
      address = "Kyiv",
      publisher = "Association for Computational Linguistics",
      url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.16",
      pages = "126--132",
      abstract = "The internet has facilitated its user-base with a platform to communicate and express their views without any censorship. On the other hand, this freedom of expression or free speech can be abused by its user or a troll to demean an individual or a group. Demeaning people based on their gender, sexual orientation, religious believes or any other characteristics {--}trolling{--} could cause great distress in the online community. Hence, the content posted by a troll needs to be identified and dealt with before causing any more damage. Amongst all the forms of troll content, memes are most prevalent due to their popularity and ability to propagate across cultures. A troll uses a meme to demean, attack or offend its targetted audience. In this shared task, we provide a resource (TamilMemes) that could be used to train a system capable of identifying a troll meme in the Tamil language. In our TamilMemes dataset, each meme has been categorized into either a {``}troll{''} or a {``}not{\_}troll{''} class. Along with the meme images, we also provided the Latin transcripted text from memes. We received 10 system submissions from the participants which were evaluated using the weighted average F1-score. The system with the weighted average F1-score of 0.55 secured the first rank.",
    }
  6. Liberty Formula One (FWONA) Race to New Highs? (Forecast)

    • kappasignal.com
    Updated Nov 5, 2024
    Cite
    KappaSignal (2024). Liberty Formula One (FWONA) Race to New Highs? (Forecast) [Dataset]. https://www.kappasignal.com/2024/11/liberty-formula-one-fwona-race-to-new.html
    Dataset updated
    Nov 5, 2024
    Dataset authored and provided by
    KappaSignal
    License

    https://www.kappasignal.com/p/legal-disclaimer.html

    Description

    This analysis presents a rigorous exploration of financial data, incorporating a diverse range of statistical features. By providing a robust foundation, it facilitates advanced research and innovative modeling techniques within the field of finance.

    Liberty Formula One (FWONA) Race to New Highs?

    Financial data:

    • Historical daily stock prices (open, high, low, close, volume)

    • Fundamental data (e.g., market capitalization, price to earnings P/E ratio, dividend yield, earnings per share EPS, price to earnings growth, debt-to-equity ratio, price-to-book ratio, current ratio, free cash flow, projected earnings growth, return on equity, dividend payout ratio, price to sales ratio, credit rating)

    • Technical indicators (e.g., moving averages, RSI, MACD, average directional index, Aroon oscillator, stochastic oscillator, on-balance volume, accumulation/distribution (A/D) line, parabolic SAR indicator, Bollinger Bands indicators, Fibonacci, Williams Percent Range, commodity channel index)
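
    As a rough illustration of this kind of feature engineering (not code from the dataset provider), the sketch below computes a 20-day simple moving average and a 14-period RSI from daily close prices with pandas; the column name "close" and the window lengths are assumptions.

    # Illustrative indicator features from daily price data.
    import pandas as pd

    def add_basic_indicators(df: pd.DataFrame, window: int = 14) -> pd.DataFrame:
        out = df.copy()
        out["sma_20"] = out["close"].rolling(20).mean()       # 20-day simple moving average

        delta = out["close"].diff()
        gain = delta.clip(lower=0).rolling(window).mean()     # average gain over the window
        loss = (-delta.clip(upper=0)).rolling(window).mean()  # average loss over the window
        out["rsi_14"] = 100 - 100 / (1 + gain / loss)         # classic RSI formula
        return out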

    Machine learning features:

    • Feature engineering based on financial data and technical indicators

    • Sentiment analysis data from social media and news articles

    • Macroeconomic data (e.g., GDP, unemployment rate, interest rates, consumer spending, building permits, consumer confidence, inflation, producer price index, money supply, home sales, retail sales, bond yields)

    Potential Applications:

    • Stock price prediction

    • Portfolio optimization

    • Algorithmic trading

    • Market sentiment analysis

    • Risk management

    Use Cases:

    • Researchers investigating the effectiveness of machine learning in stock market prediction

    • Analysts developing quantitative trading Buy/Sell strategies

    • Individuals interested in building their own stock market prediction models

    • Students learning about machine learning and financial applications

    Additional Notes:

    • The dataset may include different levels of granularity (e.g., daily, hourly)

    • Data cleaning and preprocessing are essential before model training

    • Regular updates are recommended to maintain the accuracy and relevance of the data

  7. Static analysis of Yelp and Amazon data sets.

    • plos.figshare.com
    xls
    Updated May 30, 2025
    Cite
    An Tong; Bochao Chen; Zhe Wang; Jiawei Gao; Chi Kin Lam (2025). Static analysis of Yelp and Amazon data sets. [Dataset]. http://doi.org/10.1371/journal.pone.0322004.t005
    Available download formats: xls
    Dataset updated
    May 30, 2025
    Dataset provided by
    PLOS ONE
    Authors
    An Tong; Bochao Chen; Zhe Wang; Jiawei Gao; Chi Kin Lam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In recent years, the number of telecom fraud cases has increased significantly, causing substantial losses in people’s daily lives. With technological advancements, telecom fraud methods have also become more sophisticated, making fraudsters harder to detect as they often imitate normal users and exhibit highly similar features. Traditional graph neural network (GNN) methods aggregate the features of neighboring nodes, which makes it difficult to distinguish between fraudsters and normal users when their features are highly similar. To address this issue, we proposed a spatio-temporal graph attention network (GDFGAT) with feature difference-based weight updates. We conducted comprehensive experiments with our method on a real telecom fraud dataset. Our method obtained an accuracy of 93.28%, an F1 score of 92.08%, a precision of 93.51%, a recall of 90.97%, and an AUC of 94.53%. The results showed that our method (GDFGAT) outperforms the classical method, the latest methods, and the baseline model on many metrics; each metric improved by nearly 2%. In addition, we also conducted experiments on the imbalanced datasets Amazon and YelpChi. The results showed that our model GDFGAT performed better than the baseline model on some metrics.
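
    The paper's GDFGAT layer is not reproduced in this listing; the sketch below only illustrates the core idea of deriving attention weights from feature differences between neighbouring nodes, with the scoring function and dimensions chosen for illustration.

    # Rough sketch: attention weights driven by pairwise feature differences.
    import torch
    import torch.nn as nn

    class FeatureDiffAttention(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.score = nn.Linear(dim, 1)   # scores the difference |h_i - h_j|

        def forward(self, h, adj):
            # h: (N, dim) node features; adj: (N, N) 0/1 adjacency matrix
            diff = (h.unsqueeze(1) - h.unsqueeze(0)).abs()        # (N, N, dim) pairwise differences
            scores = self.score(diff).squeeze(-1)                 # (N, N) raw attention scores
            scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only graph edges
            attn = torch.softmax(scores, dim=-1)                  # normalise over neighbours
            return attn @ h                                       # aggregated node representations

    print(FeatureDiffAttention(16)(torch.randn(5, 16), torch.ones(5, 5)).shape)  # torch.Size([5, 16])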

  8. S1 Data

    • plos.figshare.com
    zip
    Updated Mar 22, 2024
    + more versions
    Cite
    Yunxia Wang (2024). S1 Data - [Dataset]. http://doi.org/10.1371/journal.pone.0299425.s001
    Available download formats: zip
    Dataset updated
    Mar 22, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Yunxia Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    To help non-native English speakers quickly master English vocabulary, and improve reading, writing, listening and speaking skills, and communication skills, this study designs, constructs, and improves an English vocabulary learning model that integrates Spiking Neural Network (SNN) and Convolutional Long Short-Term Memory (Conv LSTM) algorithms. The fusion of SNN and Conv LSTM algorithm can fully utilize the advantages of SNN in processing temporal information and Conv LSTM in sequence data modeling, and implement a fusion model that performs well in English vocabulary learning. By adding information transfer and interaction modules, the feature learning and the timing information processing are optimized to improve the vocabulary learning ability of the model in different text contents. The training set used in this study is an open data set from the WordNet and Oxford English Corpus data corpora. The model is presented as a computer program and applied to an English learning application program, an online vocabulary learning platform, or a language education software. The experiment will use the open data set to generate a test set with text volume ranging from 100 to 4000. The performance indicators of the proposed fusion model are compared with those of five traditional models and applied to the latest vocabulary exercises. From the perspective of learners, 10 kinds of model accuracy, loss, polysemy processing accuracy, training time, syntactic structure capturing accuracy, vocabulary coverage, F1-score, context understanding accuracy, word sense disambiguation accuracy, and word order relation processing accuracy are considered. The experimental results reveal that the performance of the fusion model is better under different text sizes. In the range of 100–400 text volume, the accuracy is 0.75–0.77, the loss is less than 0.45, the F1-score is greater than 0.75, the training time is within 300s, and the other performance indicators are more than 65%; In the range of 500–1000 text volume, the accuracy is 0.81–0.83, the loss is not more than 0.40, the F1-score is not less than 0.78, the training time is within 400s, and the other performance indicators are above 70%; In the range of 1500–3000 text volume, the accuracy is 0.82–0.84, the loss is less than 0.28, the F1-score is not less than 0.78, the training time is within 600s, and the remaining performance indicators are higher than 70%. The fusion model can adapt to various types of questions in practical application. After the evaluation of professional teachers, the average scores of the choice, filling-in-the-blank, spelling, matching, exercises, and synonyms are 85.72, 89.45, 80.31, 92.15, 87.62, and 78.94, which are much higher than other traditional models. This shows that as text volume increases, the performance of the fusion model is gradually improved, indicating higher accuracy and lower loss. At the same time, in practical application, the fusion model proposed in this study has a good effect on English learning tasks and offers greater benefits for people unfamiliar with English vocabulary structure, grammar, and question types. This study aims to provide efficient and accurate natural language processing tools to help non-native English speakers understand and apply language more easily, and improve English vocabulary learning and comprehension.
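
    The abstract describes the fusion only at a high level; the following is a very rough PyTorch sketch of the general pattern (a convolutional recurrent branch combined with a toy leaky integrate-and-fire spiking branch, outputs concatenated), not the authors' SNN/Conv LSTM model or its information transfer and interaction modules.

    # Very rough fusion sketch (illustrative only).
    import torch
    import torch.nn as nn

    class LeakyIntegrateFire(nn.Module):
        """Toy spiking layer: leak, accumulate input, spike above threshold, reset."""
        def __init__(self, decay=0.9, threshold=1.0):
            super().__init__()
            self.decay, self.threshold = decay, threshold

        def forward(self, x):                       # x: (batch, time, features)
            mem, spikes = torch.zeros_like(x[:, 0]), []
            for t in range(x.size(1)):
                mem = self.decay * mem + x[:, t]
                spike = (mem >= self.threshold).float()
                mem = mem * (1 - spike)             # reset neurons that fired
                spikes.append(spike)
            return torch.stack(spikes, dim=1)       # binary spike train

    class FusionModel(nn.Module):
        def __init__(self, embed_dim=64, num_classes=2):
            super().__init__()
            self.spiking = LeakyIntegrateFire()
            self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(64, 64, batch_first=True)
            self.fc = nn.Linear(embed_dim + 64, num_classes)

        def forward(self, x):                       # x: (batch, time, embed_dim)
            spike_rates = self.spiking(x).mean(dim=1)                         # spiking branch
            seq, _ = self.lstm(self.conv(x.transpose(1, 2)).transpose(1, 2))  # conv-recurrent branch
            return self.fc(torch.cat([spike_rates, seq[:, -1]], dim=-1))      # fused prediction

    print(FusionModel()(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 2])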

  9. Number of images of the MURA dataset.

    • plos.figshare.com
    xls
    Updated Mar 11, 2024
    + more versions
    Cite
    Laith Alzubaidi; Asma Salhi; Mohammed A.Fadhel; Jinshuai Bai; Freek Hollman; Kristine Italia; Roberto Pareyon; A. S. Albahri; Chun Ouyang; Jose Santamaría; Kenneth Cutbush; Ashish Gupta; Amin Abbosh; Yuantong Gu (2024). Number of images of the MURA dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0299545.t002
    Available download formats: xls
    Dataset updated
    Mar 11, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Laith Alzubaidi; Asma Salhi; Mohammed A.Fadhel; Jinshuai Bai; Freek Hollman; Kristine Italia; Roberto Pareyon; A. S. Albahri; Chun Ouyang; Jose Santamaría; Kenneth Cutbush; Ashish Gupta; Amin Abbosh; Yuantong Gu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods had poor performance and a lack of transparency in detecting shoulder abnormalities on X-ray images, due to a lack of training data and of better feature representations. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate ImageNet mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several ML classifiers. The proposed framework achieved an excellent accuracy rate of 99.2%, an F1 score of 99.2%, and a Cohen’s kappa of 98.5%. Furthermore, the accuracy of the results was validated using three visualisation tools, including gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods and three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
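
    As a hedged illustration of the feature-fusion idea (pre-trained backbones used as frozen feature extractors, concatenated features fed to a classical ML classifier), the sketch below uses two torchvision backbones and an SVM as stand-ins; the paper's seven DL models and its classifiers are not specified in this listing.

    # Hedged sketch of feature fusion with frozen pre-trained backbones.
    import torch
    import torchvision.models as models
    from sklearn.svm import SVC

    resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval()
    resnet.fc = torch.nn.Identity()             # expose 512-d features
    densenet.classifier = torch.nn.Identity()   # expose 1024-d features

    @torch.no_grad()
    def fused_features(batch):                  # batch: (N, 3, 224, 224) preprocessed images
        return torch.cat([resnet(batch), densenet(batch)], dim=1).numpy()

    # Toy usage with random tensors standing in for preprocessed shoulder X-rays.
    X_train = fused_features(torch.randn(8, 3, 224, 224))
    clf = SVC().fit(X_train, [0, 1] * 4)        # 0 = normal, 1 = abnormal (placeholder labels)
    print(clf.predict(fused_features(torch.randn(2, 3, 224, 224))))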
