3 datasets found
  1. AVA Dataset (Atomic Visual Actions)

    • paperswithcode.com
    Updated Apr 21, 2021
    Cite
    Chunhui Gu; Chen Sun; David A. Ross; Carl Vondrick; Caroline Pantofaru; Yeqing Li; Sudheendra Vijayanarasimhan; George Toderici; Susanna Ricco; Rahul Sukthankar; Cordelia Schmid; Jitendra Malik (2021). AVA Dataset [Dataset]. https://paperswithcode.com/dataset/ava
    Explore at:
    2 scholarly articles cite this dataset (View in Google Scholar)
    Dataset updated
    Apr 21, 2021
    Authors
    Chunhui Gu; Chen Sun; David A. Ross; Carl Vondrick; Caroline Pantofaru; Yeqing Li; Sudheendra Vijayanarasimhan; George Toderici; Susanna Ricco; Rahul Sukthankar; Cordelia Schmid; Jitendra Malik
    Description

    AVA is a project that provides audiovisual annotations of video for improving our understanding of human activity. Each of the video clips has been exhaustively annotated by human annotators, and together they represent a rich variety of scenes, recording conditions, and expressions of human activity. There are annotations for:

    • Kinetics (AVA-Kinetics) - a crossover between AVA and Kinetics. To provide localized action labels on a wider variety of visual scenes, the authors add AVA action labels to videos from Kinetics-700, nearly doubling the number of total annotations and increasing the number of unique videos by over 500x.
    • Actions (AVA Actions) - the AVA dataset densely annotates 80 atomic visual actions in 430 15-minute movie clips, where actions are localized in space and time, resulting in 1.62M action labels, with multiple labels per human occurring frequently.
    • Spoken Activity (AVA ActiveSpeaker, AVA Speech) - AVA ActiveSpeaker associates speaking activity with a visible face on the AVA v1.0 videos, resulting in 3.65 million frames labeled across ~39K face tracks. AVA Speech densely annotates audio-based speech activity in AVA v1.0 videos and explicitly labels 3 background noise conditions, resulting in ~46K labeled segments spanning 45 hours of data.

    (A minimal parsing sketch for the AVA Actions annotation format appears after the result list.)

  2. Interview Ava.docx

    • figshare.com
    docx
    Updated May 20, 2025
    Cite
    Melada Sudajit-apa (2025). Interview Ava.docx [Dataset]. http://doi.org/10.6084/m9.figshare.29109656.v1
    Available download formats: docx
    Dataset updated
    May 20, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Melada Sudajit-apa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In-depth interview transcription

  3. S2 Table

    • figshare.com
    • plos.figshare.com
    xlsx
    Updated Apr 4, 2024
    Cite
    Ava K Chow; Rachel Low; Jerald Yuan; Karen K Yee; Jaskaranjit Kaur Dhaliwal; Shanice Govia; Nazlee Sharmin (2024). S2 Table [Dataset]. http://doi.org/10.6084/m9.figshare.25546426.v1
    Available download formats: xlsx
    Dataset updated
    Apr 4, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ava K Chow; Rachel Low; Jerald Yuan; Karen K Yee; Jaskaranjit Kaur Dhaliwal; Shanice Govia; Nazlee Sharmin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    List of genes and related information involved in tooth development

  Not seeing a result you expected?
    Learn how you can add new datasets to our index.
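
A minimal sketch, referenced from the AVA Dataset entry above, of how one might read an AVA Actions annotation file with Python's standard library. The column layout assumed here (video_id, middle_frame_timestamp, x1, y1, x2, y2, action_id, person_id, with box coordinates normalized to [0, 1]) and the file name ava_train_v2.2.csv are assumptions based on the commonly documented AVA Actions release format, not details stated on this page; check the official release documentation before relying on them.

import csv
from collections import defaultdict

def load_ava_actions(csv_path):
    """Group AVA Actions rows by (video_id, keyframe timestamp).

    Assumed row layout (8 columns, see note above):
    video_id, middle_frame_timestamp, x1, y1, x2, y2, action_id, person_id
    """
    keyframes = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 8:  # skip header or malformed lines
                continue
            video_id, ts, x1, y1, x2, y2, action_id, person_id = row
            keyframes[(video_id, float(ts))].append({
                "box": (float(x1), float(y1), float(x2), float(y2)),  # normalized box corners
                "action_id": int(action_id),
                "person_id": int(person_id),
            })
    return keyframes

if __name__ == "__main__":
    # Hypothetical local path to an AVA Actions annotation file.
    annotations = load_ava_actions("ava_train_v2.2.csv")
    print(f"{len(annotations)} annotated keyframes loaded")

Grouping rows by (video_id, timestamp) reflects the keyframe-centric annotation style described in the entry: each keyframe can carry several person boxes, and each person can carry multiple action labels.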
