Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The training trajectory datasets are collected from real users exploring volume datasets in our interactive 3D visualization framework. The collected training data take the form of POV trajectories in Cartesian space. Multiple volume datasets with distinct spatial features and transfer functions are used to collect a comprehensive set of training trajectories. The initial point is selected randomly for each user. Collected training trajectories are cleaned by removing POV outliers caused by user misoperation, improving uniformity.
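As a rough, hypothetical sketch of the outlier-removal step (the source does not specify the actual cleaning procedure beyond removing POV outliers; the array layout, threshold, and names below are assumptions for illustration):

    import numpy as np

    def clean_trajectory(povs, max_step=0.5):
        """Drop POV outliers from one trajectory.

        povs: (N, 3) array of Cartesian POV positions, one sample per frame.
        max_step: assumed distance threshold; a sample that jumps farther
        than this from the last kept sample is treated as a misoperation.
        """
        kept = [povs[0]]
        for p in povs[1:]:
            if np.linalg.norm(p - kept[-1]) <= max_step:
                kept.append(p)
        return np.asarray(kept)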
Purpose: Surgical education videos currently use a single point of view (POV), locking the trainee onto a fixed viewpoint, which may not deliver sufficient information for complex procedures. We developed a novel multiple-POV video system and evaluated its training outcome compared with the traditional single POV.
Methods: We filmed a hip resurfacing procedure performed by an expert attending using 8 cameras in theatre. Thirty medical students were randomly and equally allocated to learn the procedure using either the multiple-POV system (experiment group [EG]) or the single-POV system (control group [CG]).
Participants advanced a pin into the femoral head as demonstrated in the video. We measured the drilling trajectories and compared them with the pre-operative plan to evaluate the distance deviation of the pin insertion and the angular deviations. Two expert orthopedic attendings evaluated the participants' performance using a modified global rating scale (GRS). There was a pre-video knowledge test that was repea...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Wiki-Reliability: Machine Learning datasets for measuring content reliability on Wikipedia. Consists of metadata feature and content text datasets, in the following formats:
- {template_name}_features.csv
- {template_name}_difftxt.csv.gz
- {template_name}_fulltxt.csv.gz
For more details on the project, dataset schema, and links to data usage and benchmarking: https://meta.wikimedia.org/wiki/Research:Wiki-Reliability:_A_Large_Scale_Dataset_for_Content_Reliability_on_Wikipedia
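A minimal loading sketch, assuming a pandas environment; "some_template" is a placeholder, and the real template names and column schema are documented on the page linked above:

    import pandas as pd

    template = "some_template"  # placeholder template name

    # Metadata features table
    features = pd.read_csv(f"{template}_features.csv")

    # pandas infers gzip compression from the .gz extension
    difftxt = pd.read_csv(f"{template}_difftxt.csv.gz")
    fulltxt = pd.read_csv(f"{template}_fulltxt.csv.gz")

    print(features.shape, difftxt.shape, fulltxt.shape)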
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The angular deviation and the entry-point distance deviation between the actually drilled trajectory and the ideal virtual trajectory of the pin insertion into the femoral head, in both groups.
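As a concrete, assumed formulation (not necessarily the study's exact measurement pipeline), each trajectory can be taken as an entry point plus a direction vector, giving:

    import numpy as np

    def trajectory_deviations(entry_actual, dir_actual, entry_ideal, dir_ideal):
        """Angular deviation (degrees) between drill directions, and
        Euclidean distance between entry points (same units as input)."""
        ua = np.asarray(dir_actual, dtype=float)
        ui = np.asarray(dir_ideal, dtype=float)
        ua /= np.linalg.norm(ua)
        ui /= np.linalg.norm(ui)
        cos_theta = np.clip(np.dot(ua, ui), -1.0, 1.0)
        angle_deg = np.degrees(np.arccos(cos_theta))
        entry_dist = np.linalg.norm(
            np.asarray(entry_actual, dtype=float)
            - np.asarray(entry_ideal, dtype=float))
        return angle_deg, entry_dist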
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for the DREAMING - Diminished Reality for Emerging Applications in Medicine through Inpainting Challenge
More information about the challenge can be found on the challenge website!
Timeline:
8th January 2024: First subset of training & validation data available.
22nd January 2024: Second subset of training & validation data available.
29th January 2024: Full training & validation data available.
Description:
The dataset was created using Unreal Engine 5.1, Unreal MetaHumans, 3D-COSI surgical instruments, POV-Surgery grasp generation and EasySynth.
Each scene contains:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Score values are mean (SD) [95% CI].
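For reference, a small sketch of what that notation implies, using made-up example scores and a normal-approximation 95% CI (the exact CI method used is not stated here):

    import numpy as np

    def summarize(scores):
        """Return mean, sample SD, and a normal-approximation 95% CI."""
        n = scores.size
        mean = scores.mean()
        sd = scores.std(ddof=1)        # sample standard deviation
        half = 1.96 * sd / np.sqrt(n)  # normal-approximation half-width
        return mean, sd, (mean - half, mean + half)

    scores = np.array([62.0, 71.5, 58.0, 66.0])  # hypothetical scores
    print(summarize(scores))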
This project aims to accurately identify players in The Finals with a 'player' box and a 'head' box. All of the other models on here seem to have been created without care, so I'm creating an AiO model that will be scalable. It will not distinguish between friend and foe; it just identifies a player and their head. This model is primarily designed for the development of a 'soft aim' to help in those pesky, stressful situations where a little help could change your odds. It's not being designed as a full-fledged aimbot, though one could theoretically use it for such a task if desired.
Inclusions:
- Outfits (L/M & H) + perspectives
- Various head/facewear
- Emotes (because of odd body mechanics)
- Training dummies from the practice range and tutorials, as well as training dummies getting shot, burned, and 'sploded, for accuracy
- Blue and red outlines from game modes like Power Shift and Terminal Attack
- Orange, pink, and purple outlines from game modes like Quick Cash, Bank It, WT, and Ranked
- Players crouching, jumping, emoting, shooting, losing, etc. in-game
- Players using gadgets like 'dash', players visible through 'dematerializer', possibly 'invisible' players. I may create a small model purely to test 'invisibility' identification, as it's almost impossible to identify it yourself (opinion).
- POV of 'thermal vision'
- Interested in trying to identify scope glares, but since it's a purely white light, like some of the spotlights in the game, it may be a challenge. I may need to find examples from other games to try and implement this.
Notes:
- I'm playing (purposely) with all medium settings, besides view distance (max), and Nvidia illumination is set to 'static.'