7 datasets found
  1. Users' Trajectory Training Dataset

    • ieee-dataport.org
    Updated Jun 10, 2024
    Cite
    Jianxin Sun (2024). Users' Trajectory Training Dataset [Dataset]. https://ieee-dataport.org/documents/users-trajectory-training-dataset
    Authors
    Jianxin Sun
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The training trajectories were collected from real users while exploring volume datasets in our interactive 3D visualization framework. The collected data take the form of POV (point-of-view) trajectories in Cartesian space. Multiple volume datasets with distinct spatial features and transfer functions were used to make the collected trajectories comprehensive. The initial point is randomly selected for each user. Collected trajectories are cleaned by removing POV outliers caused by users' misoperations, which improves uniformity.
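    A minimal sketch of the kind of outlier cleaning described above, assuming each trajectory is an N x 3 CSV of Cartesian POV samples; the file name and the threshold are illustrative assumptions, not part of the dataset's documentation.

    ```python
    import numpy as np

    # Hypothetical trajectory file: one row per POV sample, columns x, y, z.
    traj = np.loadtxt("trajectory_user01.csv", delimiter=",")

    # Length of the step between consecutive POV samples.
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)

    # Flag implausibly large jumps (likely misoperations) using the median
    # absolute deviation, which stays robust in the presence of the outliers.
    med = np.median(steps)
    mad = np.median(np.abs(steps - med))
    keep = np.concatenate([[True], steps < med + 5.0 * (mad + 1e-9)])

    cleaned = traj[keep]
    print(f"kept {keep.sum()} of {len(traj)} POV samples")
    ```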

  2. Data from: Are multiple views superior to a single view when teaching hip surgery? A single-blinded randomized controlled trial of technical skill acquisition

    • search.dataone.org
    • datasetcatalog.nlm.nih.gov
    • +3 more
    Updated Apr 19, 2025
    + more versions
    Cite
    Huixiang Wang; Kapil Sugand; Simon Newman; Gareth Jones; Justin Cobb; Edouard Auvinet (2025). Are multiple views superior to a single view when teaching hip surgery? a single-blinded randomized controlled trial of technical skill acquisition [Dataset]. http://doi.org/10.5061/dryad.qr60ps0
    Dataset provided by
    Dryad Digital Repository
    Authors
    Huixiang Wang; Kapil Sugand; Simon Newman; Gareth Jones; Justin Cobb; Edouard Auvinet
    Time period covered
    Feb 12, 2019
    Description

    Purpose: Surgical education videos currently all use a single point of view (POV) with the trainee locked onto a fixed viewpoint, which may not deliver sufficient information for complex procedures. We developed a novel multiple POV video system and evaluated its training outcome compared with traditional single POV.

    Methods: We filmed a hip resurfacing procedure performed by an expert attending using 8 cameras in theatre. 30 medical students were randomly and equally allocated to learn the procedure using either the multiple-POV system (experimental group [EG]) or the single-POV system (control group [CG]).

    Participants advanced a pin into the femoral head as demonstrated in the video. We measured the drilling trajectories and compared them with the pre-operative plan to evaluate the distance and angular deviations of the pin insertion. Two orthopedic attendings evaluated the participants' performance using a modified global rating scale (GRS). There was a pre-video knowledge test that was repea...
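    As a sketch of how the two reported deviation metrics can be computed (the vector conventions here are assumptions; the study's exact definitions may differ):

    ```python
    import numpy as np

    def angular_deviation_deg(planned_axis, drilled_axis):
        """Angle in degrees between the planned and drilled pin axes."""
        a = planned_axis / np.linalg.norm(planned_axis)
        b = drilled_axis / np.linalg.norm(drilled_axis)
        return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    def entry_point_deviation(planned_entry, drilled_entry):
        """Euclidean distance between the planned and actual entry points."""
        return np.linalg.norm(np.asarray(drilled_entry) - np.asarray(planned_entry))

    # Made-up coordinates (mm) purely for illustration:
    print(angular_deviation_deg(np.array([0.0, 0.0, 1.0]), np.array([0.05, 0.0, 1.0])))
    print(entry_point_deviation([10.0, 4.0, 0.0], [11.2, 3.5, 0.0]))
    ```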

  3. Data from: Wiki-Reliability: A Large Scale Dataset for Content Reliability on Wikipedia

    • figshare.com
    txt
    Updated Mar 14, 2021
    Cite
    KayYen Wong; Diego Saez-Trumper; Miriam Redi (2021). Wiki-Reliability: A Large Scale Dataset for Content Reliability on Wikipedia [Dataset]. http://doi.org/10.6084/m9.figshare.14113799.v4
    Dataset provided by
    figshare
    Authors
    KayYen Wong; Diego Saez-Trumper; Miriam Redi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Wiki-Reliability: machine learning datasets for measuring content reliability on Wikipedia. The release consists of metadata-feature and content-text datasets, in the following formats (a loading sketch follows the list):
    • {template_name}_features.csv
    • {template_name}_difftxt.csv.gz
    • {template_name}_fulltxt.csv.gz
    For more details on the project, dataset schema, and links to data usage and benchmarking, see https://meta.wikimedia.org/wiki/Research:Wiki-Reliability:_A_Large_Scale_Dataset_for_Content_Reliability_on_Wikipedia
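    A minimal loading sketch for these files, assuming pandas; 'pov' is a hypothetical template name, and column names are inspected rather than assumed:

    ```python
    import pandas as pd

    template = "pov"  # hypothetical template name; substitute a real one

    # Per-revision metadata features (plain CSV).
    features = pd.read_csv(f"{template}_features.csv")

    # Diff-text and full-text variants ship gzip-compressed; pandas infers
    # the compression from the .gz suffix.
    diff_txt = pd.read_csv(f"{template}_difftxt.csv.gz")
    full_txt = pd.read_csv(f"{template}_fulltxt.csv.gz")

    # Inspect the schema rather than assuming column names.
    print(features.shape, features.columns.tolist())
    ```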

  4. The angular deviation and distance deviation of entry points between the actually drilled and ideal virtual trajectory of the pin insertion in the femoral head in both groups

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated May 31, 2023
    Cite
    Huixiang Wang; Kapil Sugand; Simon Newman; Gareth Jones; Justin Cobb; Edouard Auvinet (2023). The angular deviation and distance deviation of entry points between the actually drilled and ideal virtual trajectory of the pin insertion in the femoral head in both groups. [Dataset]. http://doi.org/10.1371/journal.pone.0209904.t001
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Huixiang Wang; Kapil Sugand; Simon Newman; Gareth Jones; Justin Cobb; Edouard Auvinet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The angular deviation and distance deviation of entry points between the actually drilled and ideal virtual trajectory of the pin insertion in the femoral head in both groups.

  5. DREAMING - Diminished Reality for Emerging Applications in Medicine through Inpainting Dataset

    • zenodo.org
    zip
    Updated Jan 8, 2024
    Cite
    Christina Gsaxner; Timo van Meegdenburg; Gijs Luijten; Viet Duc Vu; Behrus Puladi; Jan Egger (2024). DREAMING - Diminished Reality for Emerging Applications in Medicine through Inpainting Dataset [Dataset]. http://doi.org/10.5281/zenodo.10471365
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Christina Gsaxner; Timo van Meegdenburg; Gijs Luijten; Viet Duc Vu; Behrus Puladi; Jan Egger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 8, 2024
    Description

    Dataset for the DREAMING - Diminished Reality for Emerging Applications in Medicine through Inpainting Challenge

    More information about the challenge can be found on the challenge website!

    Timeline:

    • 8th January 2024: First subset of training & validation data available.
    • 22nd January 2024: Second subset of training & validation data available.
    • 29th January 2024: Full training & validation data available.

    Description:

    The dataset was created using Unreal Engine 5.1, Unreal MetaHumans, 3D-COSI surgical instruments, POV-Surgery grasp generation and EasySynth.

    Each scene contains (a minimal loading sketch follows this list):

    • "color": RGB input images
    • "gt": RGB ground truth images
    • "mask": Mask defining the area to be inpainted. White -> Background, Black -> Inpainting area
    • "CameraPoses.csv": Camera poses
    • "CameraRig.json": Camera intrinsics

  6. Overall scores and sub-scores in each category in the knowledge test before and after video learning in both groups

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jun 5, 2023
    Cite
    Huixiang Wang; Kapil Sugand; Simon Newman; Gareth Jones; Justin Cobb; Edouard Auvinet (2023). Overall scores and sub-scores in each category in the knowledge test before and after video learning in both groups. [Dataset]. http://doi.org/10.1371/journal.pone.0209904.t002
    Dataset provided by
    PLOS ONE
    Authors
    Huixiang Wang; Kapil Sugand; Simon Newman; Gareth Jones; Justin Cobb; Edouard Auvinet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Score values are mean (SD) [95% CI].

  7. The Finals Player Detection V0.1 Dataset

    • universe.roboflow.com
    zip
    Updated Nov 28, 2024
    Cite
    The Finals (2024). The Finals Player Detection V0.1 Dataset [Dataset]. https://universe.roboflow.com/the-finals-fbucj/the-finals-player-detection-v0.1/dataset/2
    Dataset authored and provided by
    The Finals
    Variables measured
    Player Head Bounding Boxes
    Description

    This project aims to accurately identify players in The Finals, with a 'player' box and a 'head' box. All of the other models on here seem to be created without 'care', so I'm creating an AiO model that will be scalable. It will not distinguish between friend and foe; it will just identify a player and their head. This model is primarily designed for the development of a 'soft aim', to help in those pesky stressful situations where a little help could change your odds. It's not being designed for a full-fledged aimbot, though one could theoretically use it for such a task if desired.

    Inclusions:
    • Outfits (L/M & H) + perspectives
    • Various head/facewear
    • Emotes (because of odd body mechanics)
    • Training dummies from the practice range and tutorials, as well as training dummies getting shot, burned, and 'sploded for accuracy
    • Blue and red outlines from game modes like Power Shift and Terminal Attack
    • Orange, pink, and purple outlines from game modes like Quick Cash, Bank It, WT, and Ranked
    • Players crouching, jumping, emoting, shooting, losing, etc. in-game
    • Players using gadgets like 'dash', players visible through 'dematerializer', possibly 'invisible' players. I may create a small model purely to test 'invisibility' identification, as it's almost impossible to identify it yourself (opinion).
    • POV of 'thermal vision'
    • Interested in trying to identify scope glares, but since it's a purely white light, like some of the spotlights in the game, it may be a challenge. I may need to find examples from other games to try and implement this.

    Notes:
    • I'm playing (purposely) with all medium settings, besides view distance (max), and Nvidia illumination set to 'static'.
