31 datasets found
  1. avatar-recognition-nuexe

    • huggingface.co
    Updated Mar 30, 2023
    Cite
    Zuppichini (2023). avatar-recognition-nuexe [Dataset]. https://huggingface.co/datasets/Francesco/avatar-recognition-nuexe
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 30, 2023
    Authors
    Zuppichini
    License

    https://choosealicense.com/licenses/cc/

    Description

    Dataset Card for avatar-recognition-nuexe

    **The original COCO dataset is stored at `dataset.tar.gz`**

      Dataset Summary

    avatar-recognition-nuexe

      Supported Tasks and Leaderboards

    object-detection: The dataset can be used to train a model for Object Detection.

      Languages

    English

      Dataset Structure

      Data Instances

    A data point comprises an image and its object annotations: { 'image_id': 15, 'image': … }. See the full description on the dataset page: https://huggingface.co/datasets/Francesco/avatar-recognition-nuexe.
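As a sketch of how such a data point might be consumed, the snippet below operates on a hand-written sample. Only `image_id` is confirmed by the truncated example above; the `objects`/`bbox` layout is an assumption based on the COCO-style format the card mentions, and in practice the records would come from `datasets.load_dataset("Francesco/avatar-recognition-nuexe")`, which requires network access:

```python
# Hypothetical COCO-style data point; only 'image_id' appears in the
# dataset card snippet, the other fields are assumptions.
sample = {
    "image_id": 15,
    "objects": {
        "bbox": [[10.0, 20.0, 50.0, 40.0]],  # [x, y, width, height]
        "category": [0],
    },
}

def bbox_areas(point):
    """Return the pixel area of each annotated bounding box."""
    return [w * h for _x, _y, w, h in point["objects"]["bbox"]]

print(bbox_areas(sample))  # [2000.0]
```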

  2. Avatar Recognition Dataset

    • universe.roboflow.com
    zip
    Updated May 7, 2023
    Cite
    Roboflow 100 (2023). Avatar Recognition Dataset [Dataset]. https://universe.roboflow.com/roboflow-100/avatar-recognition-nuexe/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    May 7, 2023
    Dataset provided by
    Roboflow (https://roboflow.com/)
    Authors
    Roboflow 100
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Avatar Bounding Boxes
    Description

    This dataset was originally created by Seokjin Ko. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/new-workspace-0pohs/avatar-recognition-rfw8d.

    This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.

    Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark

  3. Avatar Circle Detection Dataset

    • universe.roboflow.com
    zip
    Updated Feb 8, 2024
    Cite
    Yolov8boxdetection (2024). Avatar Circle Detection Dataset [Dataset]. https://universe.roboflow.com/yolov8boxdetection/avatar-circle-detection/model/4
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 8, 2024
    Dataset authored and provided by
    Yolov8boxdetection
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Circle Bounding Boxes
    Description

    Avatar Circle Detection

    ## Overview
    
    Avatar Circle Detection is a dataset for object detection tasks - it contains Circle annotations for 276 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Avatar Dataset Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    Cite
    jazzsarin28 (2025). Avatar Dataset Dataset [Dataset]. https://universe.roboflow.com/jazzsarin28/avatar-dataset/model/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    jazzsarin28
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Riots Bounding Boxes
    Description

    Avatar Dataset

    ## Overview
    
    Avatar Dataset is a dataset for object detection tasks - it contains Riots annotations for 5,946 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Data from: Predicting proteus effect via the user avatar bond: a...

    • tandf.figshare.com
    docx
    Updated May 15, 2025
    Cite
    Mohammed Qasim Latifi; Dylan Poulus; Michaella Richards; Yang Yap; Vasileios Stavropoulos (2025). Predicting proteus effect via the user avatar bond: a longitudinal study using machine learning [Dataset]. http://doi.org/10.6084/m9.figshare.26201165.v1
    Explore at:
    Available download formats: docx
    Dataset updated
    May 15, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Mohammed Qasim Latifi; Dylan Poulus; Michaella Richards; Yang Yap; Vasileios Stavropoulos
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The impact of an avatar on real-world behaviors of users is known as the Proteus Effect. Different user avatar bond (UAB) aspects, including identifying, immersing, and compensating via the avatar, influence an individual’s Proteus Effect propensity. This study aimed to use machine learning (ML) classifiers to automate the prediction of those likely to experience the Proteus Effect, based on their reports of identifying, immersing, and compensating with their avatar. Participants were 565 gamers (mean age = 29.3 years; SD = 10.6), assessed twice, six months apart, using the User-Avatar-Bond Scale and the Proteus Effect Scale. Tuned and untuned ML classifiers showed ML models could accurately identify individuals with higher Proteus Effect propensity, informed by a gamer’s reported UAB, age, and length of gaming involvement, both concurrently and longitudinally (i.e., six months later). Random forests performed better than other ML classifiers, with avatar identification as the strongest predictor. This suggests higher Proteus Effect propensity for those with a stronger user-avatar bond, informing gamified health applications to introduce adaptive behavioral changes via the avatar. Prevention and practice implications are discussed.

  6. Avatar Dataset

    • universe.roboflow.com
    zip
    Updated Mar 23, 2025
    + more versions
    Cite
    tutorial (2025). Avatar Dataset [Dataset]. https://universe.roboflow.com/tutorial-lnd54/avatar-tntda/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 23, 2025
    Dataset authored and provided by
    tutorial
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    Avatar

    ## Overview
    
    Avatar is a dataset for object detection tasks - it contains Objects annotations for 287 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  7. Digital Avatar Telepresence Robot Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jun 29, 2025
    Cite
    Growth Market Reports (2025). Digital Avatar Telepresence Robot Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/digital-avatar-telepresence-robot-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Jun 29, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Digital Avatar Telepresence Robot Market Outlook



    According to our latest research, the global Digital Avatar Telepresence Robot market size in 2024 stands at USD 2.48 billion, reflecting the sector’s robust expansion fueled by advancements in artificial intelligence, robotics, and immersive communication technologies. The market is expected to grow at a CAGR of 19.7% from 2025 to 2033, reaching a forecasted market size of USD 10.23 billion by 2033. This impressive growth trajectory is primarily driven by the increasing demand for remote collaboration solutions, the integration of AI-powered avatars, and the rising need for enhanced telepresence across industries such as healthcare, education, and corporate enterprises.
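The endpoint figures above can be sanity-checked with a small compound-growth helper (a sketch; note the quoted 19.7% CAGR is defined over 2025 to 2033, so compounding the 2024 base over nine years does not have to reproduce it exactly):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# USD 2.48 billion (2024) to USD 10.23 billion (2033): nine compounding periods
implied = cagr(2.48, 10.23, 9)
print(f"{implied:.1%}")  # 17.1%
```

The nine-year rate from the 2024 base works out to about 17% per year; the report's 19.7% figure presumably compounds from its 2025 forecast base.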




    A significant growth factor for the Digital Avatar Telepresence Robot market is the rapid digital transformation occurring globally, especially in sectors that rely heavily on remote interaction and collaboration. The COVID-19 pandemic accelerated the adoption of telepresence technologies, with enterprises and educational institutions seeking robust solutions for virtual engagement. Digital avatar telepresence robots provide a more immersive and interactive experience compared to traditional video conferencing, enabling users to navigate remote environments, interact with objects, and communicate through lifelike avatars. This capability is especially valuable in healthcare, where remote consultations, patient monitoring, and even surgical assistance are increasingly facilitated by telepresence robots, thereby reducing the risk of infection and improving access to specialized care.




    Another key driver propelling the market forward is the ongoing innovation in artificial intelligence and robotics. The integration of natural language processing, facial recognition, and emotion detection into digital avatars has significantly enhanced the realism and effectiveness of telepresence robots. These advancements allow for more intuitive and human-like interactions, making it easier for users to engage in meaningful conversations and collaborate on complex tasks. The proliferation of 5G networks and edge computing further augments the capabilities of telepresence robots by enabling low-latency, high-bandwidth communication, which is critical for seamless real-time interactions. As a result, organizations are increasingly investing in digital avatar telepresence solutions to bridge the gap between physical and virtual presence.




    The market is also benefiting from the growing emphasis on inclusivity and accessibility in digital communication. Digital avatar telepresence robots are being adopted to support individuals with disabilities, providing them with new opportunities to participate in work, education, and social activities from remote locations. Enterprises are leveraging these robots to facilitate hybrid work models, ensuring that remote employees have a tangible presence in physical offices. In retail and hospitality, telepresence robots are being used to enhance customer engagement and deliver personalized services, further expanding the market’s reach. The convergence of these factors is expected to sustain the high growth rate of the Digital Avatar Telepresence Robot market in the coming years.




    Regionally, North America dominates the Digital Avatar Telepresence Robot market, accounting for the largest share due to its advanced technological infrastructure, high adoption of AI-driven solutions, and strong presence of leading market players. However, the Asia Pacific region is emerging as the fastest-growing market, driven by increasing investments in digital transformation, rising demand for telehealth solutions, and the expansion of smart education initiatives. Europe is also witnessing significant growth, supported by government initiatives to promote digital innovation and the adoption of telepresence technologies across various sectors. The Middle East & Africa and Latin America are gradually integrating these solutions, with growth expected to accelerate as digital infrastructure improves and awareness increases.






  8. Digital Human Avatar Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Jun 28, 2025
    Cite
    Dataintelo (2025). Digital Human Avatar Market Research Report 2033 [Dataset]. https://dataintelo.com/report/digital-human-avatar-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Jun 28, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Digital Human Avatar Market Outlook



    According to our latest research, the global digital human avatar market size reached USD 7.9 billion in 2024, reflecting robust adoption across diverse industries. The market is projected to grow at a CAGR of 33.2% during the forecast period, with the market size expected to surpass USD 96.8 billion by 2033. This exceptional growth trajectory is fueled by advancements in artificial intelligence, increasing demand for personalized digital experiences, and the expanding use of digital avatars in customer engagement and training applications worldwide.



    One of the primary growth drivers for the digital human avatar market is the accelerating integration of AI-powered avatars into customer service and virtual assistant roles. Organizations across sectors are leveraging digital avatars to provide 24/7 support, enhance user engagement, and streamline operational efficiency. The ability of digital human avatars to simulate human-like interactions, understand natural language, and deliver personalized recommendations is revolutionizing customer experience. This trend is especially pronounced in sectors such as BFSI, retail, and telecommunications, where timely and efficient customer interaction is a key differentiator. The growing sophistication of AI algorithms and natural language processing is further enhancing the realism and effectiveness of digital avatars, making them indispensable tools for forward-thinking enterprises.



    Another significant factor propelling market growth is the increasing application of digital human avatars in training and education. With the rise of remote learning and digital transformation initiatives, educational institutions and corporate training programs are adopting avatars to create immersive, interactive, and personalized learning environments. These avatars facilitate real-time feedback, adaptive learning pathways, and emotionally intelligent interactions, which significantly boost learner engagement and retention. The healthcare sector is also witnessing a surge in avatar-based solutions for patient engagement, mental health support, and medical training, leveraging avatars to bridge communication gaps and deliver empathetic care at scale.



    The entertainment and media industry is rapidly embracing digital human avatars for content creation, virtual influencers, and immersive storytelling. The ability to create hyper-realistic 3D avatars that can perform, interact, and even serve as brand ambassadors has opened new revenue streams and creative possibilities. The convergence of virtual reality, augmented reality, and AI technologies is enabling the development of avatars that are not only visually compelling but also contextually aware and responsive. As consumer expectations for personalized digital experiences continue to rise, the demand for advanced digital human avatars is expected to accelerate, further expanding the market landscape.



    From a regional perspective, North America currently dominates the digital human avatar market, driven by early technology adoption, strong presence of leading AI companies, and significant investments in digital transformation initiatives. Asia Pacific is emerging as the fastest-growing region, fueled by rapid digitization, expanding internet penetration, and a burgeoning population of tech-savvy consumers. Europe is also witnessing substantial growth, particularly in sectors such as healthcare, education, and retail. The Middle East & Africa and Latin America are gradually catching up, supported by increasing awareness and adoption of digital solutions. Overall, the global digital human avatar market is poised for exponential growth, with regional dynamics shaping the adoption patterns and innovation trajectories.



    Component Analysis



    The digital human avatar market is segmented by component into software and services, each playing a pivotal role in market expansion. The software segment encompasses platforms and applications that enable the creation, customization, and deployment of digital avatars. This segment commands the largest market share due to the growing demand for advanced AI-driven avatar creation tools, animation software, and real-time rendering engines. The continuous evolution of software capabilities, including facial recognition, emotion detection, and natural language understanding, is driving enterprise adoption across various verticals. The ability to integrate these software solutions seamlessly with existing business systems is a key factor in th

  9. Matrix describing ordinal criteria for paradigms social scene complexity....

    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Carlos P. Amaral; Marco A. Simões; Miguel S. Castelo-Branco (2023). Matrix describing ordinal criteria for paradigms social scene complexity. These criteria define the hierarchy of social complexity of the paradigms. [Dataset]. http://doi.org/10.1371/journal.pone.0121970.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Carlos P. Amaral; Marco A. Simões; Miguel S. Castelo-Branco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Matrix describing ordinal criteria for paradigms social scene complexity. These criteria define the hierarchy of social complexity of the paradigms.

  10. Roblox Avatars Dataset

    • universe.roboflow.com
    zip
    Updated Nov 29, 2024
    + more versions
    Cite
    myworkspc (2024). Roblox Avatars Dataset [Dataset]. https://universe.roboflow.com/myworkspc/roblox-avatars/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 29, 2024
    Dataset authored and provided by
    myworkspc
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Characters Roblox Avatars Bounding Boxes
    Description

    Roblox Avatars

    ## Overview
    
    Roblox Avatars is a dataset for object detection tasks - it contains Characters Roblox Avatars annotations for 238 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  11. Reproduction Materials for: Despite Appearances: Comparing Emotion...

    • archive.ciser.cornell.edu
    Updated Nov 23, 2022
    Cite
    Yilu Sun; Andrea Won (2022). Reproduction Materials for: Despite Appearances: Comparing Emotion Recognition in Abstract and Humanoid Avatars Using Nonverbal Behavior in Social Virtual Reality [Dataset]. http://doi.org/10.6077/xvcp-p578
    Explore at:
    Dataset updated
    Nov 23, 2022
    Authors
    Yilu Sun; Andrea Won
    Description

    The ability to perceive emotional states is a critical part of social interactions, shaping how people understand and respond to each other. In face-to-face communication, people perceive others’ emotions through observing their appearance and behavior. In virtual reality, how appearance and behavior are rendered must be designed. In this study, we asked whether people conversing in immersive virtual reality (VR) would perceive emotion more accurately depending on whether they and their partner were presented by realistic or abstract avatars. In both cases, participants got similar information about the tracked movement of their partner’s heads and hands, though how this information was expressed varied. We collected participants’ self-reported emotional state ratings of themselves and their ratings of their conversational partners’ emotional states after a conversation in VR. Participants’ ratings of their partners’ emotional states correlated to their partners’ self-reported ratings regardless of which of the avatar conditions they experienced. We then explored how these states were reflected in their nonverbal behavior, using a dyadic measure of nonverbal behavior (proximity between conversational partners) and an individual measure (expansiveness of gesture). We discuss how this relates to measures of social presence and social closeness.

  12. datasheet1_Avatar Embodiment. A Standardized Questionnaire.docx

    • frontiersin.figshare.com
    docx
    Updated Jun 8, 2023
    + more versions
    Cite
    Tabitha C. Peck; Mar Gonzalez-Franco (2023). datasheet1_Avatar Embodiment. A Standardized Questionnaire.docx [Dataset]. http://doi.org/10.3389/frvir.2020.575943.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    Frontiers
    Authors
    Tabitha C. Peck; Mar Gonzalez-Franco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The aim of this paper is to further the understanding of embodiment by 1) analytically determining the components defining embodiment, 2) increasing comparability and standardization of the measurement of embodiment across experiments by providing a universal embodiment questionnaire that is validated and reliable, and 3) motivating researchers to use a standardized questionnaire. In this paper we validate numerically and refine our previously proposed Embodiment Questionnaire. We collected data from nine experiments, with over 400 questionnaires, that used all or part of the original embodiment 25-item questionnaire. Analysis was performed to eliminate non-universal questions, redundant questions, and questions that were not strongly correlated with other questions. We further numerically categorized and weighted sub-scales and determined that embodiment comprises interrelated categories of Appearance, Response, Ownership, and Multi-Sensory. The final questionnaire consists of 16 questions and four interrelated sub-scales with high reliability within each sub-scale; Cronbach’s α ranged from 0.72 to 0.82. Results of the original and refined questionnaire are compared over all nine experiments and in detail for three of the experiments. The updated questionnaire produced a wider range of embodiment scores compared to the original questionnaire, was able to detect the presence of a self-avatar, and was able to discern that participants over 30 years of age have significantly lower embodiment scores compared to participants under 30 years of age. Removed questions and further research of interest to the community are discussed.
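Cronbach's α, the reliability statistic reported above, can be reproduced from raw item scores using the standard formula α = k/(k−1) · (1 − Σ item variances / variance of row totals). A minimal sketch (not the authors' code; the sample score matrix is made up):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix.

    items: list of rows, one score per questionnaire item per respondent.
    """
    k = len(items[0])  # number of items

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in zip(*items))
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - item_vars / total_var)

# Two perfectly correlated items give maximal reliability
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```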

  13. Video Fight Detection Dataset

    • kaggle.com
    Updated Jan 14, 2021
    Cite
    Avatar (2021). Video Fight Detection Dataset [Dataset]. https://www.kaggle.com/datasets/naveenk903/movies-fight-detection-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 14, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Avatar
    Description

    Dataset

    This dataset was created by Avatar

    Contents

  14. Do you read me? (E)motion Legibility of Virtual Reality Character...

    • b2find.eudat.eu
    Updated Aug 6, 2024
    Cite
    (2024). Do you read me? (E)motion Legibility of Virtual Reality Character Representations - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/35a17cd1-16fd-5190-b1fd-442bd41196d4
    Explore at:
    Dataset updated
    Aug 6, 2024
    Description

    Abstract: We compared the body movements of five VR avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants' emotion detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation, and lowest for the head-and-hands representation. Our findings suggest that, from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data specifically made for social VR applications.

    This repository contains the Unity projects for the Emotion Legibility study. Read the individual READMEs in the .zip folders for more details about each project. AGRoL-Unity contains the AGRoL implementation with a Unity server and Python client for sending motion data from Unity to the AGRoL network and back. IKRepresentations-Unity contains the other four avatar representations (FBMC, UBIK, FBIK, HH) and the code to animate them and create videos for the study. StudyApp-WebGL-Unity contains the Unity project and the WebGL build for the user study; it also contains all the videos from the pre- and main study. participant_accuracies.csv contains the participant data from the main user study.

    Licenses: AGRoL code is licensed under CC-BY-NC (parts of it are licensed under different terms; see the AGRoL GitHub page). The dataset we used was "Kinematic dataset of actors expressing emotions", licensed under the PhysioNet Restricted Health Data License 1.5.0. We used the SMPL avatar model for our user study; a SMPL Unity project can be downloaded from the MPG website (login required). SMPL-Body is licensed under the Creative Commons Attribution 4.0 International License. Textures used in the Unity projects are from Kenney, CC0.

  15. Selfie Video Dataset | 7340 minutes | 1121 Individuals | 4K & FullHD |...

    • data.filemarket.ai
    Updated Jul 30, 2025
    Cite
    FileMarket (2025). Selfie Video Dataset | 7340 minutes | 1121 Individuals | 4K & FullHD | Multilingual | Face‑&‑Pose Computer‑Vision Data [Dataset]. https://data.filemarket.ai/products/selfie-video-dataset-7340-minutes-1121-individuals-4k-filemarket
    Explore at:
    Dataset updated
    Jul 30, 2025
    Dataset authored and provided by
    FileMarket
    Area covered
    India, Russian Federation, Spain, Kazakhstan
    Description

    Dataset: 1121 selfie videos (≈122 hours total), each a single standing speaker with horizontal framing and a fixed camera. All footage is 1080p or better (39% 4K, 55% Full HD) with sharp faces, clear lighting, native-language monologues, and natural expressions; strict QC passed, with no occlusions or blur. Ideal for face detection, AI-avatar training, and computer-vision tasks.

  16. AI Human Generator Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Apr 3, 2025
    Cite
    Market Report Analytics (2025). AI Human Generator Report [Dataset]. https://www.marketreportanalytics.com/reports/ai-human-generator-55851
    Explore at:
    Available download formats: pdf, ppt, doc
    Dataset updated
    Apr 3, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The AI human generator market, valued at approximately $2 billion in 2025, is experiencing rapid growth, projected to expand at a compound annual growth rate (CAGR) of 11.4% from 2025 to 2033. This robust expansion is driven by several key factors. Increasing demand across diverse sectors, including marketing and advertising (for creating realistic avatars and personalized campaigns), gaming (for generating non-player characters and enhancing realism), media and entertainment (for producing lifelike digital actors and characters), and design (for prototyping and creating diverse human models), fuels market growth. Furthermore, advancements in AI technologies, particularly in deep learning and generative adversarial networks (GANs), are leading to the creation of increasingly realistic and high-quality AI-generated humans, broadening the market's applications. The availability of cloud-based solutions offers scalability and accessibility, further boosting adoption among businesses of all sizes. However, challenges remain. Concerns regarding ethical implications, including potential misuse for creating deepfakes and spreading misinformation, pose a significant restraint on market growth. Data privacy and security issues, along with the computational costs associated with generating high-resolution AI humans, also present hurdles. Despite these obstacles, the market segmentation, encompassing both on-premise and cloud-based solutions across various applications, reflects a diverse and evolving landscape. North America currently holds a substantial market share, owing to early adoption and significant technological advancements, but the Asia-Pacific region is expected to witness substantial growth in the coming years driven by rapid digitalization and increasing technological investments. The ongoing innovation and refinement of AI human generation technologies are poised to mitigate some of these challenges and further accelerate market expansion in the long term.

  17. Development of human pancreatic cancer avatars as a model for dynamic immune...

    • data.niaid.nih.gov
    • search.dataone.org
    • +2more
    zip
    Updated Jun 25, 2024
    Cite
    Frances Willenbrock; Eric O'Neill (2024). Development of human pancreatic cancer avatars as a model for dynamic immune landscape profiling and personalised therapy [Dataset]. http://doi.org/10.5061/dryad.qnk98sfr2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 25, 2024
    Dataset provided by
    University of Oxford
    Authors
    Frances Willenbrock; Eric O'Neill
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Pancreatic ductal adenocarcinoma (PDAC) is the most common form of pancreatic cancer, a disease with dismal overall survival. Advances in treatment are hindered by a lack of preclinical models. Here we show how a personalised organotypic ‘avatar’ created from resected tissue allows spatial and temporal reporting on a complete in situ tumour microenvironment, and mirrors clinical responses. Our perfusion culture method extends tumour slice viability, maintaining stable tumour content, metabolism, stromal composition, and immune cell populations for 12 days. Using multiplexed immunofluorescence and spatial transcriptomics, we identify immune neighbourhoods and potential for immunotherapy. We employed avatars to assess the impact of a pre-clinically validated metabolic therapy and show recovery of stromal and immune phenotypes and tumour re-differentiation. To determine clinical relevance, we monitored avatar response to gemcitabine treatment and identified an avatar-predictable patient response from clinical follow-up. Thus, avatars provide valuable information for the syngeneic testing of novel therapeutics and a truly personalised therapeutic assessment platform for patients.

    Methods

    Acquisition of tissue and blood samples. The study (REC number 19/A056) was approved for the collection of tumour and healthy pancreatic tissue by the Oxford Radcliffe BioBank. The collection of the specimens was supported by the Oxford Centre for Histopathology Research. All patients recruited to the study provided written consent confirming voluntary participation and permission for tissue donation for research. Biopsy punch samples (5 mm diameter) were obtained by the pathologist at the John Radcliffe Hospital following surgery, provided that clear surgical margins could be determined.

    Sectioning and culture of live tumour slices. Samples were transported on ice in unsupplemented low-glucose (LG) DMEM media before suspension in agarose scaffolds. Following a manual wash in LG DMEM media, biopsy punches were suspended in 4% low-gelling-temperature agarose and cooled. The agarose scaffold structure was generated by melting the solution and suspending the slice in a small embedding mould. 250 µm sections were generated using the Leica VT1200 vibratome (blade angle +21°, speed 1.5 mm/s, amplitude 2 mm) in a bath of ice-cold PBS and transported in LG DMEM media on ice. Alvetex perfusion plates (REPROCELL) were used to conduct dynamic perfusion experiments, with syringe pumps maintaining a constant flow of 10 µL/min. The apparatus was assembled within a sterile tissue culture hood. To construct the circuit, 60 mL syringes were connected to silicone tubing and flushed with 70% ethanol before washing in unsupplemented LG DMEM media. Alvetex 12-well tissue culture inserts were activated in 70% ethanol for 2 minutes before washing in LG DMEM media. Long-term culture of tissue slices was conducted using LG DMEM complete pancreatic medium (see reagents table) at 37 °C and 5% CO2.

    Treatment of slices in perfusion culture. All ex vivo treatment of avatars commenced on day 0 (the day of tumour acquisition). Systemic chemotherapy was prepared in DMSO. Drug delivery commenced after the avatars had been created and placed in the perfusion plate, via an inflow circuit comprising a syringe and tubing attached to the perfusion plate and connected to the perfusion pump. After the intended drug-delivery period, the infusion was stopped, and the perfusion plate was removed from the incubator and placed within a tissue culture hood. The inflow circuit was removed and the inflow channel on the perfusion plate temporarily occluded using a 2 mL syringe. A new inflow circuit was then created and the syringe filled with culturing media. The inflow circuit was primed with media from the attached syringe to remove any air bubbles in the tubing prior to attachment to the perfusion plate, ensuring a constant column of media from the syringe and tubing to the perfusion plate. For treatment with metformin and ascorbic acid, 20 mM metformin and 100 mM ascorbic acid were perfused for 5 days, with freshly made-up solutions applied each day. Gemcitabine was perfused for one hour at a concentration of 250 mM.

    GeoMX spatial transcriptomic analysis. Spatial transcriptomic analysis was provided by NanoString Technologies, Inc. through their GeoMX® DSP Technology Access Program Grant. Formalin-fixed paraffin-embedded (FFPE) samples were cut at 5 µm thickness and mounted on negatively charged slides. Samples were sent to the NanoString Technology Access Program labs for analysis using the GeoMX Digital Spatial Profiler. Slides were incubated with oligonucleotide-antibody conjugates with photocleavable linkers. UV light was then used to selectively release oligonucleotide barcodes, which were read and quantified through sequencing. Four immunofluorescent markers were used for morphology definition, facilitating region of interest (ROI) selection. A total of 68 ROIs were selected across treated and control samples. The Whole Transcriptome Atlas was used for sample profiling, with genes mapped to barcodes using an in-house algorithm produced by NanoString Technologies, Inc., generating spatially resolved transcriptional data. Quality control involved assessing the raw read threshold, percent-aligned reads, and sequencing saturation. A quantification limit was set based on the negative probe signal (mean + two standard deviations). Reads were filtered based on their expression in >5% of AOIs, and counts were normalised to account for differences in AOI size and cellularity.

    Multiplexed immunofluorescence and HALO analysis. The multiplex IF staining of avatars was performed in collaboration with the Oxford Translational Histopathology Lab. Slides were stained using the Leica BOND RXm autostainer (Leica Microsystems) following the OPAL™ protocol (AKOYA Biosciences). A total of 6 staining cycles were performed. The following primary antibody-Opal fluorophore pairings were used: CD4 – Opal 520, CD8 – Opal 570, CD20 – Opal 480, Foxp3 – Opal 620, CD68 – Opal 690, pan-Cytokeratin – Opal 780. In adherence to the manufacturer's instructions, the primary antibodies were incubated for one hour and subsequently detected with the BOND™ Polymer Refine Detection System (DS9800, Leica Biosystems). The DAB step was replaced by the Opal fluorophores, which consisted of a 10-minute incubation with no haematoxylin step. The antigen retrieval step was performed with Epitope Retrieval (ER) Solution 2 (AR9640, Leica Biosystems) for 20 minutes at 100 °C, prior to the application of each primary antibody. VECTASHIELD® Vibrance™ Antifade Mounting Medium with DAPI (H-1800-10, Vector Laboratories) was used to mount each slide. The AKOYA Biosciences Vectra® Polaris™ was used to obtain whole-slide scans and multispectral images (MSIs). A batch analysis of all MSIs was performed using the inForm 2.4.8 software. The final step fused the batch-analysed MSIs in the HALO (Indica Labs) software, creating a spectrally unmixed, reconstructed whole image of the avatar. Analysis of the multiplex IF was performed with Indica Labs HALO® (version 3.0.311.407), which allows deconvolution of the image by selecting individual fluorophores for analysis. The initial step was training a Random Forest Classifier module in which the images were segmented into tumour and stromal regions. Manual annotation of the slides was performed to exclude areas with staining artefacts. Cell detection and subsequent phenotyping were performed using the Indica Labs HighPlex FL v3.1.0 module (fluorescent images). Individual cells were defined by their expression of specific markers: Tumour (DAPI+ panCytokeratin+), CD4 helper (DAPI+CD4+), CD8 cytotoxic (DAPI+CD8+), regulatory T-cell (DAPI+CD4+Foxp3+), B cells (DAPI+CD20+) and Macrophages (DAPI+CD68+).
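The gene-filtering rule described in the spatial transcriptomics section (quantification limit = negative-probe mean plus two standard deviations; keep genes detected above that limit in >5% of AOIs) can be sketched numerically. All values below are hypothetical, chosen only to illustrate the rule, not taken from the dataset:

```python
import numpy as np

# Hypothetical negative-probe counts for one AOI (illustrative values only).
negative_probes = np.array([3.0, 5.0, 4.0, 6.0, 2.0])

# Quantification limit: mean + 2 standard deviations of the negative probes.
loq = negative_probes.mean() + 2 * negative_probes.std()

# Hypothetical counts for two genes across 5 AOIs.
gene_counts = np.array([[2, 7, 9, 1, 12],   # gene A
                        [1, 2, 3, 2, 1]])   # gene B

# Keep a gene if it exceeds the quantification limit in >5% of AOIs.
detected = gene_counts > loq
keep = detected.mean(axis=1) > 0.05
# Gene A clears the limit in 3 of 5 AOIs and is kept; gene B never does.
```

In practice the limit is computed per AOI from that AOI's own negative probes; the single-vector version above just shows the arithmetic of the threshold.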

  18. AI-powered Face Generator Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 22, 2025
    Cite
    Data Insights Market (2025). AI-powered Face Generator Report [Dataset]. https://www.datainsightsmarket.com/reports/ai-powered-face-generator-1947427
    Explore at:
    Available download formats: ppt, pdf, doc
    Dataset updated
    Jun 22, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The AI-powered face generator market is experiencing rapid growth, driven by increasing demand across various sectors. The market's expansion is fueled by advancements in deep learning and generative adversarial networks (GANs), enabling the creation of highly realistic and diverse synthetic faces. Applications range from entertainment and gaming (character creation, virtual influencers) to marketing and advertising (personalized campaigns, realistic avatars), research (simulating human behavior in studies), and security (anonymizing identities). While precise market sizing data isn't provided, a reasonable estimate based on the rapid growth of AI and similar generative technologies puts the 2025 market value at approximately $500 million. Considering a conservative CAGR of 25% (a figure reflective of the growth in related AI segments), the market could reach $1.95 billion by 2033.

    Several factors are shaping this growth trajectory. The decreasing cost of computation and the increasing availability of large datasets are key drivers. However, ethical considerations surrounding deepfakes and the potential for misuse remain significant restraints. To mitigate these concerns, the industry is actively developing technologies to detect synthetic media and implementing responsible AI guidelines. Segmentation within the market is evident, with distinct categories emerging for different user needs and applications: consumer-facing tools (e.g., Fotor, VanceAI), professional-grade software (e.g., Datagen, Daz 3D), and specialized solutions for specific sectors (e.g., anonymization for security). Competitive landscape analysis reveals a diverse group of players ranging from established software companies to specialized AI startups. Future growth will depend on addressing ethical concerns, fostering innovation in generative models, and expanding applications to address new market demands.

  19. 3D Face Modeling System Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Jun 23, 2025
    Cite
    Archive Market Research (2025). 3D Face Modeling System Report [Dataset]. https://www.archivemarketresearch.com/reports/3d-face-modeling-system-559315
    Explore at:
    Available download formats: doc, ppt, pdf
    Dataset updated
    Jun 23, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The 3D face modeling system market is experiencing robust growth, driven by increasing demand across diverse sectors like healthcare, entertainment, and security. While precise market size figures for 2025 aren't provided, considering the presence of established players like 3D Systems and Artec 3D, and the rapid advancements in 3D scanning and modeling technologies, a reasonable estimate for the 2025 market size is $250 million. Assuming a conservative compound annual growth rate (CAGR) of 15% for the forecast period (2025-2033), the market is projected to reach approximately $1.2 billion by 2033. This growth is fueled by several key trends, including the rising adoption of personalized medicine, the burgeoning demand for realistic digital avatars in gaming and virtual reality, and the increasing need for accurate facial recognition technologies in security applications. Further expansion is expected through improvements in scanning speed and accuracy, the development of more user-friendly software, and the integration of AI-powered features for automated processing and analysis.

    Despite this promising outlook, the market faces certain challenges. High initial investment costs for equipment and software can restrict entry for smaller players. Moreover, ensuring data privacy and security, particularly concerning sensitive biometric information, remains a crucial concern that needs to be addressed through stringent regulatory compliance and robust security protocols. Competition among established players and the emergence of new entrants will also shape the market landscape in the coming years. Market segmentation will likely continue to evolve, driven by factors such as application, technology, and geographical distribution. This evolving landscape presents both opportunities and challenges for companies operating in this sector, emphasizing the need for continuous innovation and strategic adaptation.

  20. Dunn’s tests with Bonferroni adjustment for elapsed time of nodule detection...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Min Li; Sina Sareh; Guanghua Xu; Maisarah Binti Ridzuan; Shan Luo; Jun Xie; Helge Wurdemann; Kaspar Althoefer (2023). Dunn’s tests with Bonferroni adjustment for elapsed time of nodule detection using vibration feedback and pseudo-haptic feedback. [Dataset]. http://doi.org/10.1371/journal.pone.0157681.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Min Li; Sina Sareh; Guanghua Xu; Maisarah Binti Ridzuan; Shan Luo; Jun Xie; Helge Wurdemann; Kaspar Althoefer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dunn’s tests with Bonferroni adjustment for elapsed time of nodule detection using vibration feedback and pseudo-haptic feedback.
