CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
These models come from versions 3, 4, and 5 of my EfficientNet starter notebook here.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Currently, transcatheter aortic valve implantation (TAVI) represents the most efficient treatment option for patients with aortic stenosis, yet its clinical outcomes largely depend on the accuracy of valve positioning, which is frequently complicated when routine imaging modalities are applied. Existing limitations of perioperative imaging therefore underscore the need for novel visual assistance systems that enable accurate procedures. In this paper, we propose an original multi-task learning-based algorithm for tracking the location of anatomical landmarks and labeling critical keypoints on both the aortic valve and the delivery system during TAVI. To optimize the speed and precision of labeling, we designed nine neural networks and tested their ability to predict 11 keypoints of interest. These models were based on a variety of neural network architectures, namely MobileNet V2, ResNet V2, Inception V3, Inception ResNet V2, and EfficientNet B5. During training and validation, the ResNet V2 and MobileNet V2 architectures showed the best prediction accuracy/time ratio, predicting keypoint labels and coordinates with 97%/96% accuracy and 4.7%/5.6% mean absolute error, respectively. Our study provides evidence that neural networks with these architectures are capable of performing real-time predictions of aortic valve and delivery system locations, thereby contributing to proper valve positioning during TAVI.
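A minimal sketch of the kind of multi-task keypoint model described above, assuming a Keras/TensorFlow setup with a MobileNet V2 backbone: one head predicts per-keypoint presence labels and a second head regresses normalized keypoint coordinates. The input resolution, head design, and loss weights are illustrative assumptions, not the authors' exact configuration.

# Multi-task keypoint model sketch (assumptions: input size, head design, loss weights).
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_KEYPOINTS = 11  # keypoints on the aortic valve and delivery system

def build_multitask_model(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    x = layers.GlobalAveragePooling2D()(backbone.output)

    # Head 1: per-keypoint presence labels (multi-label classification).
    labels = layers.Dense(NUM_KEYPOINTS, activation="sigmoid", name="labels")(x)

    # Head 2: normalized (x, y) coordinates for every keypoint (regression).
    coords = layers.Dense(NUM_KEYPOINTS * 2, activation="sigmoid", name="coords")(x)

    model = Model(backbone.input, [labels, coords])
    model.compile(
        optimizer="adam",
        loss={"labels": "binary_crossentropy", "coords": "mae"},
        loss_weights={"labels": 1.0, "coords": 1.0})
    return model

model = build_multitask_model()
model.summary()

The same two-head pattern applies unchanged if the backbone is swapped for ResNet V2, Inception V3, Inception ResNet V2, or EfficientNet B5.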
This dataset was created by Steve Jang
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Alzheimer’s disease (AD) poses significant challenges to healthcare systems across the globe. Early and accurate AD diagnosis is crucial for effective management and treatment. Recent advances in neuroimaging and genomics provide an opportunity to develop multi-modality AD diagnosis models using artificial intelligence (AI) techniques. However, the complexity of these data makes it challenging to build interpretable AI-based AD identification models. In this study, the author built a comprehensive AD diagnostic model using magnetic resonance imaging (MRI) and gene expression data. MobileNet V3 and EfficientNet B7 models were employed to extract AD features from the gene expression data. The author introduced a hybrid TWIN-Performer-based feature extraction model to derive features from MRI. Attention-based feature fusion was used to fuse the crucial features. An ensemble learning-based classification model integrating CatBoost, XGBoost, and extremely randomized trees (ERT) was developed to classify cognitively normal (CN) and AD cases. The proposed model was validated on diverse datasets and achieved superior performance on the MRI and gene expression datasets. The area under the receiver operating characteristic curve (AUROC) scores were consistently above 0.85, indicating excellent model performance. The use of Shapley Additive exPlanations (SHAP) values improved the model’s interpretability, supporting earlier interventions and personalized treatment strategies.
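A minimal sketch of the fusion-plus-ensemble stage described above. The simple softmax weighting used here as "attention" and the soft-voting ensemble of CatBoost, XGBoost, and extremely randomized trees are illustrative assumptions; the paper's exact fusion module, feature extractors, and hyperparameters are not reproduced. The feature shapes are placeholders.

# Attention-weighted fusion + ensemble classification sketch (all shapes/weights are assumptions).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

def attention_fuse(mri_feats, gene_feats):
    """Weight each modality by a softmax over its mean activation, then concatenate."""
    scores = np.array([mri_feats.mean(), gene_feats.mean()])
    weights = np.exp(scores) / np.exp(scores).sum()
    return np.concatenate([weights[0] * mri_feats, weights[1] * gene_feats], axis=1)

# Placeholder per-subject feature matrices from the two modality-specific extractors.
mri_feats = np.random.rand(100, 256)
gene_feats = np.random.rand(100, 128)
y = np.random.randint(0, 2, 100)  # 0 = cognitively normal (CN), 1 = AD

X = attention_fuse(mri_feats, gene_feats)
ensemble = VotingClassifier(
    estimators=[
        ("catboost", CatBoostClassifier(verbose=0)),
        ("xgboost", XGBClassifier(eval_metric="logloss")),
        ("ert", ExtraTreesClassifier(n_estimators=200)),
    ],
    voting="soft")
ensemble.fit(X, y)

Soft voting averages the class probabilities of the three tree-based learners, which is one common way to combine CatBoost, XGBoost, and ERT into a single decision.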
Apache License 2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created by Tran Khanh Nguyen
Released under Apache 2.0
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparative Analysis (Pre-trained Models) – Gene expression data (Test set).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Class Distribution Before and After Data Augmentation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets Composition and Multi-modal Data Availability.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Performance Evaluation – Gene expression data (Test set).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The FlameVision dataset is a comprehensive aerial image dataset designed specifically for detecting and classifying wildfires. It consists of a total of 8600 high-resolution images, with 5000 images depicting fire and the remaining 3600 depicting non-fire scenes. The images are provided in PNG format for classification tasks and JPG format for detection tasks. The dataset is organized into two primary folders, one for detection and the other for classification, each further subdivided into train, validation, and test sets. To facilitate accurate object detection, the dataset also includes 4500 image annotation files. These annotation files contain manual annotations in XML format, which specify the exact positions of objects and their corresponding labels within the images. The annotations were performed using Roboflow, ensuring high quality and consistency across the dataset. One of the notable features of the FlameVision dataset is its compatibility with various convolutional neural network (CNN) architectures, including EfficientNet, DenseNet, VGG-16, ResNet50, YOLO, and R-CNN. This makes it a versatile and valuable resource for researchers and practitioners in the field of wildfire detection and classification, enabling the development and evaluation of sophisticated machine learning (ML) models.
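A minimal sketch for reading one FlameVision detection annotation, assuming the XML files follow the Pascal VOC layout that Roboflow commonly exports (object/name and object/bndbox fields). The folder and file paths below are placeholders, not the dataset's documented layout.

# Pascal VOC-style annotation parser sketch (paths and XML layout are assumptions).
import xml.etree.ElementTree as ET
from pathlib import Path

def parse_annotation(xml_path):
    """Return a list of (label, xmin, ymin, xmax, ymax) tuples for one image."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(bb.find("xmin").text), int(bb.find("ymin").text),
            int(bb.find("xmax").text), int(bb.find("ymax").text)))
    return boxes

# Example usage over the detection training split (placeholder path).
for xml_file in Path("FlameVision/detection/train").glob("*.xml"):
    print(xml_file.name, parse_annotation(xml_file))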
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The object detection category included the default YOLOv5m architecture and five variations of it (v0, v1, v2, v3, and v4; see Methods), while the second category included six state-of-the-art image classification architectures. We studied the effect of adding random background images as a negative control. The best models were obtained by retraining up to the epoch Ep at which over-fitting was observed. Performance metrics included precision (P), recall (R), F1 score (F1), mAP@0.5 (M1), and mAP@0.5:0.95, reported for both classes combined (a) as well as individually for the low (l) and high (h) classes.
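A minimal sketch of the reported metrics under standard definitions: F1 is the harmonic mean of precision and recall, and mAP@0.5:0.95 averages the average precision over IoU thresholds from 0.50 to 0.95 in steps of 0.05. The numeric values below are placeholders, not results from the study.

# Metric definition sketch (placeholder values; standard F1 and COCO-style mAP averaging assumed).
import numpy as np

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

def map_50_95(ap_at_iou):
    """ap_at_iou: average precision at each IoU threshold 0.50, 0.55, ..., 0.95."""
    return float(np.mean(ap_at_iou))

p, r = 0.90, 0.85                      # placeholder precision / recall
ap = np.linspace(0.92, 0.60, 10)       # placeholder AP values over the 10 IoU thresholds
print(f"F1 = {f1_score(p, r):.3f}, mAP@0.5:0.95 = {map_50_95(ap):.3f}")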